In this paper we establish the existence of monads on multiprojective spaces
$X=\mathbb{P}^{2n+1}\times\mathbb{P}^{2n+1}\times\cdots\times\mathbb{P}^{2n+1}$.
We prove stability of the kernel bundle, which is the dual of a generalized
Schwarzenberger bundle associated to the monads, and show that the cohomology
vector bundle, a generalization of instanton bundles, is simple. Next we
construct monads on $\mathbb{P}^{a_1}\times\cdots\times\mathbb{P}^{a_n}$ and
prove stability of the kernel bundle and that the cohomology vector bundle is
simple. Lastly, we construct the morphisms that establish the existence of
monads on $\mathbb{P}^1\times\cdots\times\mathbb{P}^1$.
|
We study a reinsurance Stackelberg game in which both the insurer and the
reinsurer adopt the mean-variance (abbr. MV) criterion in their decision-making
and the reinsurance is irreversible. We apply a unified singular control
framework where irreversible reinsurance contracts can be signed in both
discrete and continuous time. The results show that, rather than continuous-time
contracts or a sequence of discrete-time contracts, a single once-and-for-all
reinsurance contract is preferred. Moreover, the Stackelberg game turns out to
center on the signing time of this single contract. The insurer signs the
contract if the premium rate falls below a time-dependent threshold, and the
reinsurer designs a premium that triggers the signing of the contract at his
preferred time. Further, we find that reinsurance preference, discount and
reversion play decreasingly dominant roles in the reinsurer's decision-making,
a hierarchy not seen for the insurer.
|
Charge excitations were studied for stripe-ordered 214 compounds,
La$_{5/3}$Sr$_{1/3}$NiO$_{4}$ and 1/8-doped La$_{2-x}$(Ba,Sr)$_{x}$CuO$_{4}$,
using resonant inelastic x-ray scattering in the hard x-ray regime. We have
observed charge excitations at an energy transfer of 1 eV with the momentum
transfer corresponding to the charge-stripe spatial period, both for the
diagonal (nickelate) and parallel (cuprate) stripes. These new excitations can
be interpreted as a collective stripe excitation or a charge excitonic mode
associated with a stripe-related in-gap state.
|
We consider functions of multi-dimensional versions of truncated Wiener--Hopf
operators with smooth symbols, and study the scaling asymptotics of their
traces. The results extend the asymptotic formulas obtained by H. Widom in the
1980s to non-smooth functions and non-smooth truncation domains.
The obtained asymptotic formulas are used to analyse the scaling limit of the
spatially bipartite entanglement entropy of thermal equilibrium states of
non-interacting fermions at positive temperature.
|
It is now well-established that mechanical equilibrium in athermal disordered
solids gives rise to anisotropic spatial correlations of the coarse-grained
stress field that decay in space as $1/r^d$, where $r$ is the distance from the
origin, and $d$ denotes the spatial dimension. In this note we present a
simple, geometry-based argument for the scaling form of the emergent spatial
correlations of the stress field in disordered solids.
|
Our understanding of learning input-output relationships with neural nets has
improved rapidly in recent years, but little is known about the convergence of
the underlying representations, even in the simple case of linear autoencoders
(LAEs). We show that when trained with proper regularization, LAEs can directly
learn the optimal representation -- ordered, axis-aligned principal components.
We analyze two such regularization schemes: non-uniform $\ell_2$ regularization
and a deterministic variant of nested dropout [Rippel et al., ICML 2014].
Though both regularization schemes converge to the optimal representation, we
show that this convergence is slow due to ill-conditioning that worsens with
increasing latent dimension. We show that the inefficiency of learning the
optimal representation is not inevitable -- we present a simple modification to
the gradient descent update that greatly speeds up convergence empirically.
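As a concrete illustration of the kind of setup involved (a minimal numpy sketch under assumptions of our own: a toy dataset, an assumed penalty schedule, and plain gradient descent; it is not the paper's experimental code), non-uniform $\ell_2$ regularization of a linear autoencoder looks as follows:

```python
import numpy as np

# Minimal sketch (toy data, assumed penalty schedule, plain gradient descent;
# not the paper's code): a linear autoencoder x -> W2 @ (W1 @ x) trained on
# reconstruction error plus a NON-uniform l2 penalty, one weight per latent unit.
rng = np.random.default_rng(0)
n, d, k = 2000, 10, 4                       # samples, input dim, latent dim
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))
X -= X.mean(0)
X /= X.std()

W1 = 0.1 * rng.normal(size=(k, d))          # encoder
W2 = 0.1 * rng.normal(size=(d, k))          # decoder
lam = 1e-3 * (1.0 + np.arange(k))           # larger penalty on later latent units
lr = 5e-3

for step in range(20000):
    Z = X @ W1.T                            # latent codes, shape (n, k)
    R = Z @ W2.T - X                        # reconstruction residual
    gW2 = (R.T @ Z) / n + lam[None, :] * W2
    gW1 = (W2.T @ R.T @ X) / n + lam[:, None] * W1
    W2 -= lr * gW2
    W1 -= lr * gW1

# The distinct per-unit penalties break the rotational symmetry of the plain
# LAE loss, so the decoder columns tend to align (up to sign) with the leading
# principal directions of X, ordered by decreasing variance.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
cols = W2 / np.linalg.norm(W2, axis=0, keepdims=True)
print(np.abs(cols.T @ Vt[:k].T).round(2))   # near-identity pattern = alignment
```

The key point mirrored here is that each latent dimension carries its own penalty weight, which is what singles out an ordered, axis-aligned solution among the many rotations that minimize the unregularized loss.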
|
The most promising concept for low-frequency gravitational wave observatories
is a laser interferometric detector in space. It is usually assumed that the
noise floor for such a detector is dominated by optical shot noise in the
signal readout. For this to be true, a careful balance of mission parameters is
crucial to keep all other parasitic disturbances below shot noise. We developed
a web application that uses over 30 input parameters and considers many
important technical noise sources and noise suppression techniques. It
optimizes free parameters automatically and generates a detailed report on all
individual noise contributions. Thus you can easily explore the entire
parameter space and design a realistic gravitational wave observatory.
In this document we describe the different parameters, present all underlying
calculations, and compare the final observatory's sensitivity with
astrophysical sources of gravitational waves. As an example, we use parameters
currently considered likely for a space mission to be launched in 2034 by the
European Space Agency. The web application itself is publicly
available on the Internet at http://spacegravity.org/designer.
|
The local minimum degree of a graph is the minimum degree that can be reached
by means of local complementation. For any n, there exist graphs of order n
which have a local minimum degree at least 0.189n, or at least 0.110n when
restricted to bipartite graphs. Regarding the upper bound, we show that for any
graph of order n, its local minimum degree is at most 3n/8+o(n) and n/4+o(n)
for bipartite graphs, improving the known n/2 upper bound. We also prove that
the local minimum degree is smaller than half of the vertex cover number (up to
a logarithmic term). The local minimum degree problem is NP-complete and hard
to approximate. We show that this problem, even when restricted to bipartite
graphs, is in W[2] and FPT-equivalent to the EvenSet problem, whose
W[1]-hardness is a long-standing open question. Finally, we show that the local
minimum degree can be computed by an O*(1.938^n) algorithm, and by an
O*(1.466^n) algorithm for bipartite graphs.
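To fix ideas, here is a small illustration of the local complementation operation itself (a naive exhaustive exploration, exponential in the worst case and usable only for tiny graphs; it is not one of the paper's algorithms, and it assumes the local minimum degree is the smallest minimum degree over the local-equivalence class):

```python
from itertools import combinations

# Local complementation at a vertex v toggles every edge between two distinct
# neighbours of v. We exhaustively explore the (finite) local-equivalence class
# of a small graph and record the smallest minimum degree encountered.
def local_complement(edges, v):
    nbrs = {u for e in edges if v in e for u in e if u != v}
    toggled = {frozenset(p) for p in combinations(sorted(nbrs), 2)}
    return frozenset(edges ^ toggled)        # symmetric difference toggles them

def min_degree(edges, vertices):
    return min(sum(1 for e in edges if v in e) for v in vertices)

def local_min_degree(edge_list, vertices):
    start = frozenset(frozenset(e) for e in edge_list)
    seen, stack = {start}, [start]
    best = min_degree(start, vertices)
    while stack:
        g = stack.pop()
        for v in vertices:
            h = local_complement(g, v)
            if h not in seen:
                seen.add(h)
                stack.append(h)
                best = min(best, min_degree(h, vertices))
    return best

vertices = list(range(5))
cycle = [(i, (i + 1) % 5) for i in vertices]   # the 5-cycle as a tiny example
print(local_min_degree(cycle, vertices))
```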
|
A frequentist asymptotic expansion method for error estimation is employed
for a network of gravitational wave detectors to assess the amount of
information that can be extracted from gravitational wave observations.
Mathematically, we derive lower bounds on the errors that any parameter
estimator will have, in the absence of prior knowledge, when distinguishing
between the parameterized post-Einsteinian (ppE) description of coalescing
binary systems and that of general relativity. When such errors are smaller
than the parameter value, there is a possibility of detecting these deviations
from GR. A parameter space with
inclusion of dominant dephasing ppE parameters $(\beta, b)$ is used for a study
of first- and second-order (co)variance expansions, focusing on the inspiral
stage of a nonspinning binary system of zero eccentricity detectable with
Adv. LIGO and Adv. Virgo. Our procedure is an improvement of the Cram\'{e}r-Rao
Lower Bound. When Bayesian errors are lower than our bound it means that they
depend critically on the priors. The analysis indicates the possibility of
constraining deviations from GR in inspiral SNR ($\rho \sim 15-17$) regimes
that are achievable in upcoming scientific runs (GW150914 had an inspiral SNR
$\sim 12$). The errors on $\beta$ also increase errors of other parameters such
as the chirp mass $\mathcal{M}$ and symmetric mass ratio $\eta$. We apply the
method to existing alternative theories of gravity, including modified
dispersion relations of the waveform, non-spinning models of quadratic modified
gravity, and dipole gravitational radiation (i.e., Brans-Dicke-type)
modifications.
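For orientation only, the snippet below computes the standard Fisher-matrix Cramér-Rao bound that the asymptotic expansion improves upon, for a hypothetical two-parameter sinusoid in white noise; the signal model, parameter values and noise level are assumptions unrelated to the ppE waveforms and detector noise curves used in the paper.

```python
import numpy as np

# Toy Cramer-Rao bound: F_ij = (1/sigma^2) * sum_t dh/dtheta_i * dh/dtheta_j
# for a hypothetical signal h(t; A, f) = A*sin(2*pi*f*t) in white Gaussian
# noise. This only illustrates the standard first-order bound, not the paper's
# higher-order frequentist expansion.
t = np.linspace(0.0, 1.0, 4096)
sigma = 0.1                               # assumed noise standard deviation
theta0 = np.array([1.0, 30.0])            # assumed true (A, f)

def h(theta):
    A, f = theta
    return A * np.sin(2 * np.pi * f * t)

def fisher(theta, eps=1e-6):
    grads = []
    for i in range(len(theta)):
        dp = theta.copy(); dp[i] += eps
        dm = theta.copy(); dm[i] -= eps
        grads.append((h(dp) - h(dm)) / (2 * eps))   # central finite difference
    G = np.stack(grads)                             # (n_params, n_samples)
    return (G @ G.T) / sigma**2

crlb = np.linalg.inv(fisher(theta0))                # covariance lower bound
print("1-sigma lower bounds:", np.sqrt(np.diag(crlb)))
```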
|
Low-temperature electronic states in SrRu_{1-x}Mn_xO_3 for x <= 0.6 have been
investigated by means of specific-heat C_p measurements. We have found that a
jump anomaly observed in C_p at the ferromagnetic (FM) transition temperature
for SrRuO_3 changes into a broad peak by only 5% substitution of Mn for Ru.
With further Mn doping, the low-temperature electronic specific-heat
coefficient gamma is markedly reduced from its value at x=0 (33 mJ/K^2 mol), in
connection with the suppression of the FM phase as well as the enhancement of
the resistivity. For x >= 0.4, gamma approaches ~5 mJ/K^2 mol or less, where
antiferromagnetic order with an insulating feature in the resistivity is
generated. We suggest from these results that both disorder and reconstruction
of the electronic states induced by doping Mn are coupled with the magnetic
ground states and transport properties.
|
We report Gemini Planet Imager H band high-contrast imaging/integral field
spectroscopy and polarimetry of HD 100546, a 10 Myr-old early-type star
recently confirmed to host a thermal infrared bright (super)jovian protoplanet
at wide separation, HD 100546 b. We resolve the inner disk cavity in polarized
light, recover the thermal-infrared (IR) bright arm, and identify one
additional spiral arm. We easily recover HD 100546 b and show that much of its
emission originates from an unresolved point source. HD 100546 b likely has
extremely red infrared colors compared to field brown dwarfs, qualitatively
similar to young cloudy superjovian planets; however, these colors may instead
indicate that HD 100546 b is still accreting material from a circumplanetary
disk. Additionally, we identify a second point-source-like peak at $r_{proj}$
$\sim$ 14 AU, located just interior to or at the inner disk wall, consistent
with being a 10--20 $M_{J}$ candidate second protoplanet -- "HD 100546 c" -- and
lying within a weakly polarized region of the disk but along an extension of
the thermal IR bright spiral arm. Alternatively, it is equally plausible that
this feature is a weakly polarized but locally bright region of the inner disk
wall. Astrometric monitoring of this feature over the next 2 years and emission
line measurements could confirm its status as a protoplanet, a rotating disk hot
spot that is possibly a signpost of a protoplanet, or a stationary emission
source from within the disk.
|
There is increasing interest in employing large language models (LLMs) as
cognitive models. For such purposes, it is central to understand which
properties of human cognition are well-modeled by LLMs, and which are not. In
this work, we study the biases of LLMs in relation to those known in children
when solving arithmetic word problems. Surveying the learning science
literature, we posit that the problem-solving process can be split into three
distinct steps: text comprehension, solution planning and solution execution.
We construct tests for each one in order to understand whether current LLMs
display the same cognitive biases as children in these steps. We generate a
novel set of word problems for each of these tests, using a neuro-symbolic
approach that enables fine-grained control over the problem features. We find
evidence that LLMs, with and without instruction-tuning, exhibit human-like
biases in both the text-comprehension and the solution-planning steps of the
solving process, but not in the final step, in which the arithmetic expressions
are executed to obtain the answer.
|
In the gravitational field of a Schwarzschild-like black hole, particles
infalling from rest at infinity, and black hole "wind" particles with
relativistic velocity leaking radially out from the nominal horizon, both have
the same magnitude of velocity at any radius from the hole. Hence when equally
massive infalling and wind particles collide at any radius, they yield
collision products with zero center of mass radial velocity, which can then
nucleate star formation at the collision radius. We suggest that this gives a
mechanism by which a central black hole can catalyze galaxy formation. For disk
galaxies, this mechanism explains the observed approximately exponential
falloff of the surface brightness with radius, and gives an estimate of the
associated scale length.
|
Recommending the best course of action for an individual is a major
application of individual-level causal effect estimation. This application is
often needed in safety-critical domains such as healthcare, where estimating
and communicating uncertainty to decision-makers is crucial. We introduce a
practical approach for integrating uncertainty estimation into a class of
state-of-the-art neural network methods used for individual-level causal
estimates. We show that our methods enable us to deal gracefully with
situations of "no-overlap", common in high-dimensional data, where standard
applications of causal effect approaches fail. Further, our methods allow us to
handle covariate shift, where the test distribution differs from the training distribution, a situation
common when systems are deployed in practice. We show that when such a
covariate shift occurs, correctly modeling uncertainty can keep us from giving
overconfident and potentially harmful recommendations. We demonstrate our
methodology with a range of state-of-the-art models. Under both covariate shift
and lack of overlap, our uncertainty-equipped methods can alert decision-makers
when predictions are not to be trusted, while outperforming their
uncertainty-oblivious counterparts.
|
In this paper, we consider curves of degree 10 of torus type (2,5), C :
f_5(x, y)^2 + f_2(x, y)^5 = 0. Assume that f_2(0, 0) = f_5(0, 0) = 0. Then O =
(0, 0) is a singular point of C, which is called an inner singularity. We give
a topological classification of the singularities of (C, O).
|
In this paper we are concerned with $L^p$-maximal parabolic regularity for
abstract nonautonomous parabolic systems and their quasilinear counterpart in
negative Sobolev spaces incorporating mixed boundary conditions. Our results
are derived in the setting of nonsmooth domains with mixed boundary conditions
by an extrapolation technique which also yields uniform estimates for the
parabolic solution operators. We require only very mild boundary regularity,
not in the Lipschitz-class, and generally only bounded and measurable complex
coefficients. The nonlinear functions in the quasilinear formulation can be
nonlocal in time; this also allows us to consider certain systems whose stationary
counterpart fails to satisfy the usual ellipticity conditions.
|
The task of Chinese text spam detection is very challenging due to both glyph
and phonetic variations of Chinese characters. This paper proposes a novel
framework to jointly model Chinese variational, semantic, and contextualized
representations for the Chinese text spam detection task. In particular, a
Variation Family-enhanced Graph Embedding (VFGE) algorithm is designed based on
a Chinese character variation graph. The VFGE can learn both the graph
embeddings of the Chinese characters (local) and the latent variation families
(global). Furthermore, an enhanced bidirectional language model, with a
combination gate function and an aggregation learning function, is proposed to
integrate the graph and text information while capturing the sequential
information. Extensive experiments have been conducted on both SMS and review
datasets, showing that the proposed method outperforms a series of
state-of-the-art models for Chinese spam detection.
|
Detectors with low thresholds for electron recoil open a new window to direct
searches for sub-GeV dark matter (DM) candidates. In the past decade, many
strong limits on DM-electron interactions have been set, but mostly on the
interaction that is spin-independent (SI) of both the dark matter and electron
spins. In this
work, we study DM-atom scattering through a spin-dependent (SD) interaction at
leading order (LO), using well-benchmarked, state-of-the-art atomic many-body
calculations. Exclusion limits on the SD DM-electron cross section are derived
with data taken from experiments with xenon and germanium detectors at leading
sensitivities. In the DM mass range of 0.1 - 10 GeV, the best limits, set by the
XENON1T experiment at $\sigma_e^{\textrm{(SD)}}<10^{-41}-10^{-40}\,\textrm{cm}^2$,
are comparable to the ones drawn on DM-neutron and DM-proton interactions at
slightly larger DM masses. The detector responses to the LO SD and SI
interactions are analyzed. In the nonrelativistic limit, a constant ratio
between them makes the SD and SI recoil energy spectra indistinguishable.
Relativistic calculations, however, show that the scaling starts to break down
at a few hundred
eV, where spin-orbit effects become sizable. We discuss the prospects of
disentangling the SI and SD components in DM-electron interactions via spectral
shape measurements, as well as having spin-sensitive experimental signatures
without SI background.
|
Given a tame knot K presented in the form of a knot diagram, we show that the
problem of determining whether K is knotted is in the complexity class NP,
assuming the generalized Riemann hypothesis (GRH). In other words, there exists
a polynomial-length certificate that can be verified in polynomial time to
prove that K is non-trivial. GRH is not needed to believe the certificate, but
only to find a short certificate. This result complements the result of Hass,
Lagarias, and Pippenger that unknottedness is in NP. Our proof is a corollary
of major results of others in algebraic geometry and geometric topology.
|
We introduce the idea of {\it effective} dark matter halo catalog in $f(R)$
gravity, which is built using the {\it effective} density field. Using a suite
of high resolution N-body simulations, we find that the dynamical properties of
halos, such as the distribution of density, velocity dispersion, specific
angular momentum and spin, in the effective catalog of $f(R)$ gravity closely
mimic those in the $\Lambda$CDM model. Thus, when using effective halos, an
$f(R)$ model can be viewed as a $\Lambda$CDM model. This effective catalog
therefore provides a convenient way to study baryonic physics, the
galaxy halo occupation distribution and even semi-analytical galaxy formation
in $f(R)$ cosmologies.
|
We study two ways (levels) of finding free-probability analogues of classical
infinitely divisible measures. More precisely, we identify their Voiculescu
transforms. For free-selfdecomposable measures we find a formula (a
differential equation) for their background driving transforms. We illustrate
our methods on the hyperbolic characteristic functions. As a by-product, our
approach may produce new formulas for some definite integrals.
|
We propose and discuss a novel strategy for protein design. The method is
based on recent theoretical advancements which showed the importance of
treating the conformational free energy of designed sequences carefully. In this
work we show how the computational cost can be kept to a minimum by
incorporating negative design features, i.e. isolating a small number of
structures that compete significantly with the target one for being occupied at
low temperature. The method is successfully tested on minimalist protein models
using a variety
of amino acid interaction potentials.
|
The helical magnetorotational instability of the magnetized Taylor-Couette
flow is studied numerically in a finite cylinder. A distant upstream insulating
boundary is shown to stabilize the convective instability entirely while
reducing the growth rate of the absolute instability. The reduction is less
severe for larger heights. When the boundary conditions are modeled properly, the
wave patterns observed in the experiment turn out to be a noise-sustained
convective instability. After the noise source resulting from unstable
Ekman and Stewartson layers is switched off, a slowly decaying inertial
oscillation is observed in the simulation. We conclude that the
experiments completed to date have not yet reached the regime of absolute
instability.
|
We investigate the importance of including quantized initial conditions in
Langevin dynamics for adsorbates interacting with a thermal reservoir of
electrons. For quadratic potentials the time evolution is exactly described by
a classical Langevin equation and it is shown how to rigorously obtain quantum
mechanical probabilities from the classical phase space distributions resulting
from the dynamics. At short time scales, classical and quasiclassical initial
conditions lead to wrong results and only correctly quantized initial
conditions give a close agreement with an inherently quantum mechanical master
equation approach. With CO on Cu(100) as an example, we demonstrate the effect
for a system with ab initio frictional tensor and potential energy surfaces and
show that quantizing the initial conditions can have a large impact on both the
desorption probability and the distribution of molecular vibrational states.
|
Let $P$ be a set of $n$ points in the plane. We show how to find, for a given
integer $k>0$, the smallest-area axis-parallel rectangle that covers $k$ points
of $P$ in $O(nk^2 \log n+ n\log^2 n)$ time. We also consider the problem of,
given a value $\alpha>0$, covering as many points of $P$ as possible with an
axis-parallel rectangle of area at most $\alpha$. For this problem we give a
probabilistic $(1-\varepsilon)$-approximation that works in near-linear time:
In $O((n/\varepsilon^4)\log^3 n \log (1/\varepsilon))$ time we find an
axis-parallel rectangle of area at most $\alpha$ that, with high probability,
covers at least $(1-\varepsilon)\kappa^*$ points, where
$\kappa^*$ is the maximum possible number of points that could be
covered.
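For reference, a naive enumeration for the first problem is easy to state (an illustration only: roughly O(n^5), not the O(nk^2 log n + n log^2 n) algorithm of the abstract); it relies on the observation that some optimal rectangle has each side supported by an input point:

```python
from itertools import combinations

# Naive reference for the smallest-area axis-parallel rectangle covering at
# least k points: enumerate pairs of x-coordinates and pairs of y-coordinates
# taken from the input points. Degenerate (zero-area) rectangles are ignored
# for simplicity; small inputs only.
def smallest_rectangle_covering_k(points, k):
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    best = None
    for x1, x2 in combinations(xs, 2):
        for y1, y2 in combinations(ys, 2):
            inside = sum(1 for x, y in points if x1 <= x <= x2 and y1 <= y <= y2)
            if inside >= k:
                area = (x2 - x1) * (y2 - y1)
                if best is None or area < best[0]:
                    best = (area, (x1, y1, x2, y2))
    return best

P = [(0, 0), (1, 2), (2, 1), (5, 5), (2, 2), (3, 1)]
print(smallest_rectangle_covering_k(P, 3))   # (area, (x1, y1, x2, y2))
```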
|
We develop some techniques which allow an analytic evaluation of space-like
observables in high temperature lattice gauge theories. We show that such
variables are described extremely well by dimensional reduction. In particular,
by using results obtained in the context of ``Induced QCD'', we evaluate the
contributions to space-like observables coming from the Higgs sector of the
dimensionally reduced action, and we find that they are of higher order in the
coupling constant compared to those coming from the space-like action and hence
negligible near the continuum limit. In the case of SU(2) gauge theory our
results agree with those obtained through Monte Carlo simulations both in (2+1)
and (3+1) dimensions, and they also indicate a possible way of removing the gap
between the two values of $g^2(T)$ that recently appeared in the literature.
|
Bayesian inference remains one of the most important toolkits for any
scientist, but increasingly expensive likelihood functions are required for
ever-more complex experiments, raising the cost of generating a Monte Carlo
sample of the posterior. Recent attention has been directed towards the use of
emulators of the posterior based on Gaussian Process (GP) regression combined
with active sampling to achieve comparable precision with far fewer costly
likelihood evaluations. Key to this approach is the batched acquisition of
proposals, so that the true posterior can be evaluated in parallel. This is
usually achieved via sequential maximization of the highly multimodal
acquisition function. Unfortunately, this approach parallelizes poorly and is
prone to getting stuck in local maxima. Our approach addresses this issue by
generating nearly-optimal batches of candidates using an almost-embarrassingly
parallel Nested Sampler on the mean prediction of the GP. The resulting
nearly-sorted Monte Carlo sample is used to generate a batch of candidates
ranked according to their sequentially conditioned acquisition function values
at little cost. The final sample can also be used for inferring marginal
quantities. Our proposed implementation (NORA) demonstrates comparable accuracy
to sequential conditioned acquisition optimization and efficient
parallelization in various synthetic and cosmological inference problems.
|
We present the analysis of archival XMM-Newton and Chandra observations of CH
Cyg, one of the most studied symbiotic stars (SySts). The combination of the
high-resolution XMM-Newton RGS and Chandra HETG X-ray spectra allowed us to
obtain reliable estimates of the chemical abundances and to corroborate the
presence of multi-temperature X-ray-emitting gas. Spectral fitting of the
medium-resolution XMM-Newton MOS (MOS1+MOS2) spectrum required the use of an
additional component not seen in previous studies in order to fit the 2.0-4.0
keV energy range. Detailed spectral modelling of the XMM-Newton MOS data
suggests the presence of a reflection component, very similar to that found in
active galactic nuclei. The reflection component is very likely produced by an
ionised disk (the accretion disk around the white dwarf) and naturally explains
the presence of the fluorescent Fe emission line at 6.4 keV while also
contributing to the soft and medium energy ranges. The variability of the
global X-ray properties of CH Cyg is discussed, as well as the variation of the
three Fe lines in the 6-7 keV energy range. We conclude that reflection
components are needed to model the hard X-ray emission and may be present in
most $\beta/\delta$-type SySts.
|
Let $ X_{n} $ be $ n\times N $ random complex matrices, $R_{n}$ and $T_{n}$
be non-random complex matrices with dimensions $n\times N$ and $n\times n$,
respectively. We assume that the entries of $ X_{n} $ are independent and
identically distributed, $ T_{n} $ are nonnegative definite Hermitian matrices
and $T_{n}R_{n}R_{n}^{*}= R_{n}R_{n}^{*}T_{n} $.
The general information-plus-noise type matrices are defined by
$C_{n}=\frac{1}{N}T_{n}^{\frac{1}{2}} \left( R_{n} +X_{n}\right)
\left(R_{n}+X_{n}\right)^{*}T_{n}^{\frac{1}{2}} $.
In this paper, we establish the limiting spectral distribution of the large
dimensional general information-plus-noise type matrices $C_{n}$. Specifically,
we show that as $n$ and $N$ tend to infinity proportionally, the empirical
distribution of the eigenvalues of $C_{n}$ converges weakly to a non-random
probability distribution, which is characterized by a system of equations for
its Stieltjes transform.
|
Using angle-resolved photoemission spectroscopy we have studied the
low-energy electronic structure and the Fermi surface topology of
Fe$_{1+y}$Te$_{1-x}$Se$_x$ superconductors. Similar to the known iron pnictides
we observe hole pockets at the center and electron pockets at the corner of the
Brillouin zone (BZ). However, on a finer level, the electronic structure around
the $\Gamma$- and $Z$-points in $k$-space is substantially different from other
iron pnictides, in that we observe two hole pockets at the $\Gamma$-point, and
more interestingly only one hole pocket is seen at the $Z$-point, whereas in
$1111$-, $111$-, and $122$-type compounds, three hole pockets could be readily
found at the zone center. Another major difference noted in the
Fe$_{1+y}$Te$_{1-x}$Se$_x$ superconductors is that the top of the innermost
hole-like band moves away from the Fermi level to higher binding energy on
going from $\Gamma$ to $Z$, quite opposite to the iron pnictides. The
polarization dependence of the observed features was used to aid the
attribution of the orbital character of the observed bands. Photon energy
dependent measurements suggest a weak $k_z$ dispersion for the outer hole
pocket and a moderate $k_z$ dispersion for the inner hole pocket. By evaluating
the momentum and energy dependent spectral widths, the single-particle
self-energy was extracted and interestingly this shows a pronounced non-Fermi
liquid behaviour for these compounds. The experimental observations are
discussed in the context of electronic band structure calculations and models for
the self-energy such as the spin-fermion model and the marginal-Fermi liquid.
|
While network coding can be an efficient means of information dissemination
in networks, it is highly susceptible to "pollution attacks," as the injection
of even a single erroneous packet has the potential to corrupt each and every
packet received by a given destination. Even when suitable error-control coding
is applied, an adversary can, in many interesting practical situations,
overwhelm the error-correcting capability of the code. To limit the power of
potential adversaries, a broadcast transformation is introduced, in which nodes
are limited to just a single (broadcast) transmission per generation. Under
this broadcast transformation, the multicast capacity of a network is changed
(in general reduced) from the number of edge-disjoint paths between source and
sink to the number of internally-disjoint paths. Exploiting this fact, we
propose a family of networks whose capacity is largely unaffected by a
broadcast transformation. This results in a significant achievable transmission
rate for such networks, even in the presence of adversaries.
|
The article considers tidal forces in the vicinity of the Kottler black hole.
We find a solution of the geodesic deviation equation for radially falling
bodies, which is expressed in terms of elliptic integrals, and we also determine
the asymptotic behavior of all spatial components of the geodesic deviation
vector. We demonstrate that the radial component of the tidal force changes sign
outside the single event horizon for any negative value of the cosmological
constant, in contrast to the Schwarzschild black hole, where all components of
the tidal force have constant sign. We also point out a similarity between the
Kottler black hole and the Reissner-Nordstrom black hole: we indicate the value
of the cosmological constant that ensures the existence of two horizons of the
black hole, between which the angular components of the tidal force change sign.
Finally, we detect non-analytic behavior of the geodesic deviation vector
components in anti-de Sitter spacetime and describe it locally.
|
This study explores the use of non-line-of-sight (NLOS) components in
millimeter-wave (mmWave) communication systems for joint localization and
environment sensing. The radar cross section (RCS) of a reconfigurable
intelligent surface (RIS) is calculated to develop a general path gain model
for RISs and traditional scatterers. The results show that RISs have a greater
potential to assist in localization due to their ability to maintain high RCSs
and create strong NLOS links. A one-stage linear weighted least squares
estimator is proposed to simultaneously determine user equipment (UE)
locations, velocities, and scatterer (or RIS) locations using line-of-sight
(LOS) and NLOS paths. The estimator supports environment sensing and UE
localization even using only NLOS paths. A second-stage estimator is also
introduced to improve environment sensing accuracy by considering the nonlinear
relationship between UE and scatterer locations. Simulation results demonstrate
the effectiveness of the proposed estimators in rich scattering environments
and the benefits of using NLOS paths for improving UE location accuracy and
assisting in environment sensing. The effects of RIS number, size, and
deployment on localization performance are also analyzed.
|
We report the discovery of a large-scale coherent filamentary structure of
Lyman alpha emitters in redshift space at z=3.1. We carried out spectroscopic
observations to map the three dimensional structure of the belt-like feature of
the Lyman alpha emitters discovered by our previous narrow-band imaging
observations centered on the protocluster at z=3.1. The feature was found to
consist of at least three physical filaments connecting with each other. The
result is in qualitative agreement with the prediction of the 'biased'
galaxy-formation theories that galaxies preferentially formed in large-scale
filamentary or sheet-like mass overdensities in the early Universe. We also
found that the two known giant Lyman alpha emission-line nebulae showing high
star-formation activities are located near the intersection of these filaments,
which presumably evolves into a massive cluster of galaxies in the local
Universe. This may suggest that massive galaxy formation occurs at the
characteristic place in the surrounding large-scale structure at high redshift.
|
We study a variant of the univariate approximate GCD problem, where the
coefficients of one polynomial f(x) are known exactly, whereas the coefficients
of the second polynomial g(x) may be perturbed. Our approach relies on the
properties of the matrix which describes the operator of multiplication by g in
the quotient ring C[x]/(f). In particular, the structure of the null space of
the multiplication matrix contains all the essential information about
GCD(f, g). Moreover, the multiplication matrix exhibits a displacement structure
that allows us to design a fast algorithm for approximate GCD computation with
quadratic complexity w.r.t. the polynomial degrees.
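As a small numerical illustration of the underlying principle (a generic dense-matrix sketch, not the fast displacement-structured algorithm of the paper), one can build the matrix of multiplication by g in C[x]/(f) in the monomial basis and read off the degree of GCD(f, g) from its rank deficiency:

```python
import numpy as np

# Sketch: the kernel of "multiplication by g" on C[x]/(f) has dimension equal
# to deg gcd(f, g). Polynomials are coefficient arrays, lowest degree first.
def poly_mod(a, f):
    a = np.array(a, dtype=float)
    f = np.array(f, dtype=float)
    n = len(f) - 1
    while len(a) > n:
        # cancel the leading coefficient of a with a multiple of x^(deg a - n) * f
        a = a[:-1] - a[-1] / f[-1] * np.pad(f[:-1], (len(a) - 1 - n, 0))
        while len(a) > 1 and abs(a[-1]) < 1e-12:
            a = a[:-1]
    out = np.zeros(n)
    out[:len(a)] = a
    return out

def mult_matrix(g, f):
    n = len(f) - 1
    cols = [poly_mod(np.concatenate([np.zeros(i), g]), f) for i in range(n)]
    return np.column_stack(cols)             # column i = x^i * g(x) mod f(x)

# Example: f = (x-1)(x-2)(x-3), g = (x-1)(x-4)  =>  deg gcd(f, g) = 1
f = np.poly1d([1, -1]) * np.poly1d([1, -2]) * np.poly1d([1, -3])
g = np.poly1d([1, -1]) * np.poly1d([1, -4])
f_c, g_c = f.coeffs[::-1], g.coeffs[::-1]     # reverse: lowest degree first

M = mult_matrix(g_c, f_c)
rank = np.linalg.matrix_rank(M, tol=1e-8)
print("deg gcd(f, g) =", M.shape[0] - rank)   # expect 1
```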
|
E. B. Davies and B. Simon have shown (among other things) the following
result: if T is an n\times n matrix such that its spectrum \sigma(T) is
included in the open unit disc \mathbb{D}=\{z\in\mathbb{C}:\,|z|<1\} and if
C=sup_{k\geq0}||T^{k}||_{E\rightarrow E}, where E stands for \mathbb{C}^{n}
endowed with a certain norm |.|, then ||R(1,\, T)||_{E\rightarrow E}\leq
C(3n/dist(1,\,\sigma(T)))^{3/2} where R(\lambda,\, T) stands for the resolvent
of T at the point \lambda. Here, we improve this inequality by showing that under the
same hypotheses (on the matrix T), ||R(\lambda,\, T)|| \leq
C(5\pi/3+2\sqrt{2})n^{3/2}/dist(\lambda,\,\sigma), for all
\lambda\notin\sigma(T) such that |\lambda|\geq1.
|
Agreeing suitability for purpose and making procurement decisions both depend
on the assessment of real or simulated performances of sonar systems against
user requirements for particular scenarios. There may be multiple pertinent aspects
of performance (e.g. detection, track estimation, identification/classification
and cost) and multiple users (e.g. within picture compilation, threat
assessment, resource allocation and intercept control tasks), each with
different requirements. Further, the estimates of performances and the user
requirements are likely to be uncertain. In such circumstances, how can we
reliably assess and compare the effectiveness of candidate systems? This paper
presents a general yet simple mathematical framework that achieves all of this.
First, the general requirements of a satisfactory framework are outlined. Then,
starting from a definition of a measure of effectiveness (MOE) based on set
theory, the formulae for assessing performance in various applications are
obtained. These include combined MOEs, multiple and possibly conflicting user
requirements, multiple sources and types of performance data and different
descriptions of uncertainty. Issues raised by implementation of the scheme
within a simulator are discussed. Finally, it is shown how this approach to
performance assessment is used to treat some challenging examples from sonar
system assessment.
|
By and large, existing computational models of visual attention tacitly
assume perfect vision and full access to the stimulus and thereby deviate from
foveated biological vision. Moreover, modeling top-down attention is generally
reduced to the integration of semantic features, without incorporating the
signal of high-level visual tasks that has been shown to partially guide
human attention. We propose the Neural Visual Attention (NeVA) algorithm to
generate visual scanpaths in a top-down manner. With our method, we explore the
ability of neural networks on which we impose a biologically-inspired foveated
vision constraint to generate human-like scanpaths without directly training
for this objective. The loss of a neural network performing a downstream visual
task (i.e., classification or reconstruction) flexibly provides top-down
guidance to the scanpath. Extensive experiments show that our method
outperforms state-of-the-art unsupervised human attention models in terms of
similarity to human scanpaths. Additionally, the flexibility of the framework
allows us to quantitatively investigate the role of different tasks in the
generated visual behaviors. Finally, we demonstrate the superiority of the
approach in a novel experiment that investigates the utility of scanpaths in
real-world applications, where imperfect viewing conditions are given.
|
We provide analytic proofs for the shape invariance of the recently
discovered (Odake and Sasaki, Phys. Lett. B679 (2009) 414-417) two families of
infinitely many exactly solvable one-dimensional quantum mechanical potentials.
These potentials are obtained by deforming the well-known radial oscillator
potential or the Darboux-P\"oschl-Teller potential by a degree \ell
(\ell=1,2,...) eigenpolynomial. The shape invariance conditions are attributed
to new polynomial identities of degree 3\ell involving cubic products of the
Laguerre or Jacobi polynomials. These identities are proved elementarily by
combining simple identities.
|
To extract useful information about quantum effects in cold atom experiments,
one central task is to distinguish the intrinsic quantum fluctuations from
extrinsic system noise of various kinds. As a data processing method,
principal component analysis can decompose fluctuations in experimental data
into eigenmodes, offering a chance to separate noise originating from
different physical sources. In this paper, we demonstrate for Bose-Einstein
condensates in one-dimensional optical lattices that principal component
analysis can be applied to time-of-flight images to successfully separate and
identify the leading noise contributions of different origins, and can help to
reduce or even eliminate the noise via corresponding data processing procedures.
The attribution of noise modes to their physical origins is also confirmed by
numerical analysis within a mean-field theory.
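To make the data-processing idea concrete, here is a minimal sketch (with entirely synthetic stand-in "images", not the experimental pipeline of the paper) of decomposing shot-to-shot image fluctuations into principal components with numpy:

```python
import numpy as np

# Sketch: stack repeated images as rows, subtract the mean image, and diagonalize
# the fluctuations via SVD. Each principal component is an "eigenmode" of the
# shot-to-shot fluctuations; its per-shot score tells how strongly that mode
# contributes, which can be used to identify and subtract a noise mode.
rng = np.random.default_rng(1)
n_shots, ny, nx = 200, 32, 32
yy, xx = np.mgrid[0:ny, 0:nx]

base = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 50.0)            # mean "cloud"
fringe = np.sin(2 * np.pi * xx / 8.0)                                # structured noise mode
images = (base[None]
          + 0.3 * rng.normal(size=(n_shots, 1, 1)) * fringe[None]    # fluctuating fringe
          + 0.05 * rng.normal(size=(n_shots, ny, nx)))               # pixel noise

X = images.reshape(n_shots, -1)
X -= X.mean(axis=0)                                                  # fluctuations only
U, S, Vt = np.linalg.svd(X, full_matrices=False)                     # PCA via SVD
modes = Vt.reshape(-1, ny, nx)                                       # eigenmodes as images

print("variance captured by first 3 modes:", (S[:3] ** 2 / (S ** 2).sum()).round(3))
# Removing an identified noise mode, e.g. mode 0, from every shot:
cleaned = X - np.outer(U[:, 0] * S[0], Vt[0])
```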
|
Photonic systems based on microring resonators have a fundamental constraint
given by the strict relationship among free spectral range (FSR), total quality
factor (QT) and resonator size, intrinsically making filter spacing, photonic
lifetime and footprint interdependent. Here we break this paradigm by employing
CMOS-compatible Silicon-on-Insulator (SOI) photonic molecules based on coupled
multiple ring resonators. The resonance wavelengths and their respective
linewidths are controlled by the hybridization of the quasi-orthogonal photonic
states. We demonstrate photonic molecules with doublet and triplet resonances
with spectral splitting only achievable with single rings orders of magnitude
larger in footprint. Moreover, these splittings are potentially controllable
via the coupling (bonds) between resonators. Finally, the spatial
distribution of the hybrid states allows up to sevenfold QT enhancement.
|
The venerable NOD2 data reduction software package for single-dish radio
continuum observations, developed for use at the 100-m Effelsberg radio
telescope, has been successfully applied over many decades. Modern computing
facilities call for a new design.
We aim to develop an interactive software tool with a graphical user
interface (GUI) for the reduction of single-dish radio continuum maps. Special
effort is given on the reduction of distortions along the scanning direction
(scanning effects) by combining maps scanned in orthogonal directions or dual-
or multiple-horn observations that need to be processed in a restoration
procedure. The package should also process polarisation data and offer the
possibility to include special tasks written by the individual user.
Based on the ideas of the NOD2 package we developed NOD3, which includes all
necessary tasks from the raw maps to the final maps in total intensity and
linear polarisation. Furthermore, plot routines and several methods for map
analysis are available. The NOD3 package is written in Python, which allows the
user to extend it with additional tasks. The required data format for the input
maps is FITS.
NOD3 is a sophisticated tool to process and analyse maps from single-dish
observations that are affected by 'scanning effects' due to clouds, receiver
instabilities, or radio-frequency interference (RFI). The 'basket-weaving' tool
combines orthogonally scanned maps into a final map that is almost free of
scanning effects. The new restoration tool for dual-beam observations reduces
the noise by a factor of about two compared to the NOD2 version. Combining
single-dish with interferometer data in the map plane ensures the full recovery
of the total flux density.
|
The TCPD-IPD dataset is a collection of questions and answers discussed in
the Lower House of the Parliament of India during the Question Hour between
1999 and 2019. Although it is difficult to analyze such a huge collection
manually, modern text analysis tools can provide a powerful means to navigate
it. In this paper, we perform an exploratory analysis of the dataset. In
particular, we present insightful corpus-level statistics and a detailed
analysis of three subsets of the dataset. In the latter analysis, the focus is
on understanding the temporal evolution of topics using a dynamic topic model.
We observe that the parliamentary conversation indeed mirrors the political and
socio-economic tensions of each period.
|
Neutrinos in a core-collapse supernova undergo coherent flavor
transformations in their own background. We explore this phenomenon during the
cooling stage of the explosion. Our three-flavor calculations reveal
qualitatively new effects compared to a two-flavor analysis. These effects are
especially clearly seen for the inverted mass hierarchy: we find a different
pattern of spectral "swaps" in the neutrino spectrum and a novel "mixed"
spectrum for the antineutrinos. A brief discussion of the relevant physics is
presented, including the instability of the two-flavor evolution trajectory,
the 3-flavor pattern of spectral "swaps," and partial nonadiabaticity of the
evolution.
|
Evidence of discrete scale invariance (DSI) in daytime healthy heart rate
variability (HRV) is presented based on the log-periodic power law scaling of
the heart beat interval increment. Our analysis suggests multiple DSI groups
and a dynamic cascading process. A cascade model is presented to simulate such
a property.
|
For equations of order two with the Dirichlet boundary condition, such as the
Laplace problem, the Stokes and the Navier-Stokes systems, perforated domains
were only studied when the distance between the holes $d_{\varepsilon}$ is
equal or much larger than the size of the holes $\varepsilon$. Such a diluted
porous medium is interesting because it contains some cases where we have a
non-negligible effect on the solution when $(\varepsilon,d_{\varepsilon})\to
(0,0)$. Smaller distances were avoided for mathematical reasons, and for these
large distances the geometry of the holes does not affect -- or only slightly
affects -- the asymptotic result. Very recently, it was shown for the 2D Euler
equations that
a porous medium is non-negligible only for inter-holes distance much smaller
than the size of the holes. For this result, the regularity of holes boundary
plays a crucial role, and the permeability criterion depends on the geometry of
the lateral boundary. In this paper, we relax slightly the regularity
condition, allowing a corner, and we note that a line of irregular obstacles
cannot slow down a perfect fluid in any regime such that $\varepsilon \ln
d_{\varepsilon} \to 0$.
|
Recently it has been shown that the heuristic Rosenfeld functional derives
from the virial expansion for particles which overlap in one center. Here, we
generalize this approach to any number of intersections. Starting from the
virial expansion in Ree-Hoover diagrams, it is shown in the first part that
each intersection pattern defines exactly one infinite class of diagrams.
Determining their automorphism groups, we sum over all elements of each class and derive
a generic functional. The second part proves that this functional factorizes
into a convolute of integral kernels for each intersection center. We derive
this kernel for N dimensional particles in the N dimensional, flat Euclidean
space. The third part focuses on three dimensions and determines the
functionals for up to four intersection centers, comparing the leading order to
Rosenfeld's result. We close by proving a generalized form of the
Blaschke-Santalo-Chern equation of integral geometry.
|
We study the exclusive semileptonic decays B_s->D_{s0}^*\ell\bar\nu and
B_s->D_{s1}^*\ell\bar\nu, where p-wave excited D_{s0}^* and D_{s1}^* states are
identified with the newly observed D_{sJ}(2317) and D_{sJ}(2460) states. Within
the framework of HQET the Isgur-Wise functions up to the subleading order of
the heavy quark expansion are calculated by QCD sum rules. The decay rates and
branching ratios are computed with the inclusion of order-1/m_Q
corrections. We point out that the investigation of the B_s semileptonic decays
to excited D_s mesons may provide some information about the nature of the new
D_{sJ}^* mesons.
|
We reassess the hypothesis that Lyman-break galaxies (LBGs) at redshifts z~3
mark the centres of the most massive dark matter haloes at that epoch. First we
reanalyse the kinematic measurements of Pettini et al., and of Erb et al., of
the rest-frame optical emission lines of LBGs. We compare the distribution of
the ratio of the rotation velocity to the central line width, against the
expected distribution for galaxies with random inclination angles, modelled as
singular isothermal spheres. The model fits the data well. On this basis we
argue that the central line width provides a predictor of the circular velocity
at a radius of several kpc. Assembling a larger sample of LBGs with measured
line widths, we compare these results against the theoretical Lambda-CDM
rotation curves of Mo, Mao & White, under the hypothesis that LBGs mark the
centres of the most massive dark matter halos. We find that the circular
velocities are over-predicted by a substantial factor, which we estimate
conservatively as 1.8+/-0.4. This indicates that the model is probably
incorrect. The model of LBGs as relatively low-mass starburst systems, of
Somerville, Primack, and Faber (2001), provides a good fit to the data.
|
We introduce a new model of combinatorial contracts in which a principal
delegates the execution of a costly task to an agent. To complete the task, the
agent can take any subset of a given set of unobservable actions, each of which
has an associated cost. The cost of a set of actions is the sum of the costs of
the individual actions, and the principal's reward as a function of the chosen
actions satisfies some form of diminishing returns. The principal incentivizes
the agent through a contract, based on the observed outcome.
Our main results are for the case where the task delegated to the agent is a
project, which can be successful or not. We show that if the success
probability as a function of the set of actions is gross substitutes, then an
optimal contract can be computed with polynomially many value queries, whereas
if it is submodular, the optimal contract is NP-hard. All our results extend to
linear contracts for higher-dimensional outcome spaces, which we show to be
robustly optimal given first moment constraints.
Our analysis uncovers a new property of gross substitutes functions, and
reveals many interesting connections between combinatorial contracts and
combinatorial auctions, where gross substitutes is known to be the frontier for
efficient computation.
|
This paper aims to introduce a robust singing voice synthesis (SVS) system to
produce very natural and realistic singing voices efficiently by leveraging the
adversarial training strategy. On one hand, we designed simple but generic
random area conditional discriminators to help supervise the acoustic model,
which can effectively avoid the over-smoothed spectrogram prediction and
improve the expressiveness of SVS. On the other hand, we subtly combined the
spectrogram with the frame-level linearly-interpolated F0 sequence as the input
for the neural vocoder, which is then optimized with the help of multiple
adversarial conditional discriminators in the waveform domain and multi-scale
distance functions in the frequency domain. The experimental results and
ablation studies show that, compared with our previous auto-regressive
work, our new system can produce high-quality singing voices efficiently by
fine-tuning on different singing datasets ranging from several minutes to a few
hours. A large number of synthesized songs with different timbres are available
online at https://zzw922cn.github.io/wesinger2 and we highly recommend readers
to listen to them.
|
Keyword spotting (KWS) based on deep neural networks (DNNs) has achieved
massive success in voice control scenarios. However, training of such DNN-based
KWS systems often requires significant data and hardware resources.
Manufacturers often entrust this process to a third-party platform. This makes
the training process uncontrollable, where attackers can implant backdoors in
the model by manipulating third-party training data. An effective backdoor
attack can force the model to make specified judgments under certain
conditions, i.e., triggers. In this paper, we design a backdoor attack scheme
based on Voiceprint Selection and Voice Conversion, abbreviated as VSVC.
Experimental results demonstrated that VSVC is feasible to achieve an average
attack success rate close to 97% in four victim models when poisoning less than
1% of the training data.
|
Deep learning methods have proven to outperform traditional computer vision
methods in various areas of image processing. However, the application of deep
learning in industrial surface defect detection systems is challenging due to
the insufficient amount of training data, the expensive data generation
process, the small size, and the rare occurrence of surface defects. From
literature and a polymer products manufacturing use case, we identify design
requirements which reflect the aforementioned challenges. Addressing these, we
conceptualize design principles and features informed by deep learning
research. Finally, we instantiate and evaluate the gained design knowledge in
the form of actionable guidelines and strategies based on an industrial surface
defect detection use case. This article, therefore, contributes to academia as
well as practice by (1) systematically identifying challenges for the
industrial application of deep learning-based surface defect detection, (2)
proposing strategies to overcome these, and (3) presenting an experimental case
study assessing the strategies' applicability and usefulness.
|
We review recent advancements in modeling the stellar to substellar
transition. The revised molecular opacities, solar oxygen abundances and cloud
models allow us to reproduce the photometric and spectroscopic properties of this
transition to a degree never achieved before, but problems remain in the
important M-L transition, characteristic of the effective temperature range of
characterizable exoplanets. We discuss the validity of these classical
models. We also present new preliminary global radiation-hydrodynamical M-dwarf
simulations.
|
To enhance precision and comprehensiveness in identifying targets in electric
power construction monitoring video, a novel target recognition algorithm
utilizing infrared imaging is explored. This algorithm employs a color
processing technique based on a local linear mapping method to effectively
recolor monitoring images. The process involves three key steps: color space
conversion, color transfer, and pseudo-color encoding. It is designed to
accentuate targets in the infrared imaging. For the refined identification of
these targets, the algorithm leverages a support vector machine approach,
utilizing an optimal hyperplane to accurately predict target types. We
demonstrate the efficacy of the algorithm, which achieves high target
recognition accuracy in both outdoor and indoor electric power construction
monitoring scenarios. It maintains a false recognition rate below 3% across
various environments.
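As a generic illustration of the classification step only (a hedged sketch with synthetic stand-in feature vectors; it is not the paper's recoloring pipeline, feature set, or data), a support vector machine can be trained with scikit-learn to separate target from background descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in features: pretend each row summarizes an image region
# (e.g. mean pseudo-color channels and simple texture statistics); label 1 for
# "target", 0 for "background". Real features would come from the recolored
# infrared frames.
rng = np.random.default_rng(0)
n = 400
targets = rng.normal(loc=[0.8, 0.6, 0.2, 1.5], scale=0.2, size=(n // 2, 4))
background = rng.normal(loc=[0.3, 0.3, 0.5, 0.4], scale=0.3, size=(n // 2, 4))
X = np.vstack([targets, background])
y = np.array([1] * (n // 2) + [0] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # RBF-kernel SVM
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```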
|
Recent work has highlighted several advantages of enforcing orthogonality in
the weight layers of deep networks, such as maintaining the stability of
activations, preserving gradient norms, and enhancing adversarial robustness by
enforcing low Lipschitz constants. Although numerous methods exist for
enforcing the orthogonality of fully-connected layers, those for convolutional
layers are more heuristic in nature, often focusing on penalty methods or
limited classes of convolutions. In this work, we propose and evaluate an
alternative approach to directly parameterize convolutional layers that are
constrained to be orthogonal. Specifically, we propose to apply the Cayley
transform to a skew-symmetric convolution in the Fourier domain, so that the
inverse convolution needed by the Cayley transform can be computed efficiently.
We compare our method to previous Lipschitz-constrained and orthogonal
convolutional layers and show that it indeed preserves orthogonality to a high
degree even for large convolutions. Applied to the problem of certified
adversarial robustness, we show that networks incorporating the layer
outperform existing deterministic methods for certified defense against
$\ell_2$-norm-bounded adversaries, while scaling to larger architectures than
previously investigated. Code is available at
https://github.com/locuslab/orthogonal-convolutions.
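As a minimal numerical illustration of the Cayley transform itself (a dense-matrix sketch of the principle, not the paper's FFT-based orthogonal convolution layer), mapping a skew-symmetric matrix to an orthogonal one:

```python
import numpy as np

# Cayley transform: for a real skew-symmetric A (A^T = -A), the matrix
# Q = (I - A)(I + A)^{-1} is orthogonal. The paper applies this idea per
# frequency in the Fourier domain to obtain orthogonal convolutions; here we
# only check the dense case.
rng = np.random.default_rng(0)
n = 8
B = rng.normal(size=(n, n))
A = B - B.T                                  # skew-symmetric parameterization
I = np.eye(n)
Q = (I - A) @ np.linalg.inv(I + A)           # Cayley transform

print(np.allclose(Q.T @ Q, I))               # True: Q is orthogonal
print(np.allclose(np.abs(np.linalg.det(Q)), 1.0))
```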
|
For the stationary advection-diffusion problem the standard continuous
Galerkin method is unstable without some additional control on the mesh or
method. The interior penalty discontinuous Galerkin method is stable but at the
expense of an increased number of degrees of freedom. The hybrid method
proposed in [5] combines the computational complexity of the continuous method
with the stability of the discontinuous method without a significant increase
in degrees of freedom. We discuss the implementation of this method using the
finite element library deal.II and present some numerical experiments.
|
The Galaxy Evolution Explorer (GALEX) satellite has obtained high time
resolution ultraviolet photometry during a large flare on the M4 dwarf star GJ
3685A. Simultaneous NUV (1750 - 2800A) and FUV (1350 - 1750A) time-tagged
photometry with time resolution better than 0.1 s shows that the overall
brightness in the FUV band increased by a factor of 1000 in 200 s. Under the
assumption that the NUV emission is mostly due to a stellar continuum, and that
the FUV flux is shared equally between emission lines and continuum, there
is evidence for two distinct flare components for this event. The first flare
type is characterized by an exponential increase in flux with little or no
increase in temperature. The other involves rapid increases in both temperature
and flux. While the decay time for the first flare component may be several
hours, the second flare event decayed over less than 1 minute, suggesting that
there was little or no confinement of the heated plasma.
|
We will establish several arithmetic and geometric properties regarding the
bi-sequences of approximation coefficients (BAC) associated with the two
one-parameter families of piecewise-continuous Mobius transformations
introduced by Haas and Molnar. The Gauss and Renyi maps, which lead to the
expansions of irrational numbers on the interval as regular and backwards
continued fractions, are realized as special cases. The results are natural
generalizations of theorems from Diophantine approximation.
|
The multiple scattering theory (MST) is one of the most widely used methods
in electronic structure calculations. It features a perfect separation between
the atomic configurations and site potentials, and hence provides an efficient
way to simulate defected and disordered systems. This work studies the MST
methods from a numerical point of view and shows the convergence with respect
to the truncation of the angular momentum summations, which is a fundamental
approximation parameter for all MST methods. We provide both rigorous analysis
and numerical experiments to illustrate the efficiency of the MST methods
within the angular momentum representations.
|
Inflationary reheating via resonant production of non-minimally coupled
scalar particles with only gravitational coupling is shown to be extremely
strong, exhibiting a negative coupling instability for $\xi < 0$ and a wide
resonance decay for $\xi \gg 1$. Since non-minimal fields are generic after
renormalisation in curved spacetime, this offers a new paradigm in reheating -
one which naturally allows for efficient production of the massive bosons
needed for GUT baryogenesis. We also show that both vector and tensor fields
are produced resonantly during reheating, extending the previously known
correspondences between bosonic fields of different spins during preheating.
|
The hydrostatic equilibrium state is the consequence of the exact hydrostatic
balance between hydrostatic pressure and external force. Standard finite volume
or finite difference schemes cannot keep this balance exactly due to their
unbalanced truncation errors. In this study, we introduce an auxiliary variable
which becomes constant at isothermal hydrostatic equilibrium state and propose
a well-balanced gas kinetic scheme for the Navier-Stokes equations with a
global reconstruction. Through reformulating the convection term and the force
term via the auxiliary variable, zero numerical flux and zero numerical source
term are enforced at the hydrostatic equilibrium state instead of the balance
between hydrostatic pressure and external force. Several problems are tested
numerically to demonstrate the accuracy and the stability of the new scheme,
and the results confirm that the new scheme can preserve the exact hydrostatic
solution. The small perturbation riding on hydrostatic equilibria can be
calculated accurately. The viscous effect is also illustrated through the
propagation of small perturbation and the Rayleigh-Taylor instability. More
importantly, the new scheme is capable of simulating the process of converging
towards hydrostatic equilibrium state from a highly non-balanced initial
condition. The ultimate state of zero velocity and constant temperature is
achieved up to machine accuracy. As demonstrated by the numerical experiments,
the current scheme is well suited for small-amplitude perturbations and long-time running under a gravitational potential.
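For reference, the isothermal hydrostatic equilibrium that a well-balanced scheme must preserve to machine accuracy reads (a sketch with constant gravity $g$ and gas constant $R$; the paper's specific auxiliary variable is not reproduced here):

$$ \frac{dp}{dx} = -\rho g, \qquad p = \rho R T_0 \ \Rightarrow\ \rho(x) = \rho_0\, e^{-g x/(R T_0)}, \quad p(x) = p_0\, e^{-g x/(R T_0)}. $$

One natural choice of a quantity that is constant in this state, and hence convenient for enforcing zero numerical flux and source, is $\rho(x)\, e^{\,g x/(R T_0)}$ (or its pressure-based analogue); the construction in the paper may differ in detail.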
|
This paper concerns a fundamental class of convex matrix optimization
problems. It presents the first algorithm that uses optimal storage and
provably computes a low-rank approximation of a solution. In particular, when
all solutions have low rank, the algorithm converges to a solution. This
algorithm, SketchyCGM, modifies a standard convex optimization scheme, the
conditional gradient method, to store only a small randomized sketch of the
matrix variable. After the optimization terminates, the algorithm extracts a
low-rank approximation of the solution from the sketch. In contrast to
nonconvex heuristics, the guarantees for SketchyCGM do not rely on statistical
models for the problem data. Numerical work demonstrates the benefits of
SketchyCGM over heuristics.
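As an illustration of the storage idea (not the authors' implementation; the problem instance, sketch sizes, and reconstruction formula are chosen here for simplicity), the following Python sketch runs a conditional-gradient loop for nuclear-norm-constrained matrix completion while storing only the measurement vector and two small random sketches of the matrix variable:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import svds

def sketchy_cgm(rows, cols, b, shape, tau, k=10, iters=200, seed=0):
    """Conditional gradient for min 0.5*||X[rows,cols] - b||^2 s.t. ||X||_* <= tau,
    storing only the measurements z = X[rows,cols] and two randomized sketches
    of X instead of X itself. Returns a rank-<=k approximation of the solution."""
    n1, n2 = shape
    rng = np.random.default_rng(seed)
    Om = rng.standard_normal((n2, k))          # range-sketch test matrix
    Psi = rng.standard_normal((2 * k, n1))     # co-range-sketch test matrix
    z = np.zeros_like(b)                       # current measurements A(X)
    Y = np.zeros((n1, k))                      # Y = X @ Om
    W = np.zeros((2 * k, n2))                  # W = Psi @ X

    for t in range(iters):
        eta = 2.0 / (t + 2.0)
        # Gradient of the loss is a sparse matrix carrying the residuals.
        G = coo_matrix((z - b, (rows, cols)), shape=shape)
        u, s, vt = svds(G.tocsr(), k=1)        # top singular pair of the gradient
        u, v = u[:, 0], vt[0, :]
        # Linear minimization oracle over the nuclear-norm ball: H = -tau * u v^T.
        z = (1 - eta) * z + eta * (-tau) * u[rows] * v[cols]
        Y = (1 - eta) * Y + eta * (-tau) * np.outer(u, v @ Om)
        W = (1 - eta) * W + eta * (-tau) * np.outer(Psi @ u, v)

    # Recover a low-rank approximation from the two sketches.
    Q, _ = np.linalg.qr(Y)
    X_hat = Q @ np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]
    return X_hat

# Toy usage: recover a rank-2 matrix from 30% of its entries.
n1, n2, r = 60, 50, 2
rng = np.random.default_rng(1)
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.3
rows, cols = np.nonzero(mask)
b = X_true[rows, cols]
X_hat = sketchy_cgm(rows, cols, b, (n1, n2), tau=np.linalg.norm(X_true, 'nuc'))
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

The key point is that the full matrix iterate is never stored: each conditional-gradient step is a rank-one update, which can be pushed directly into the measurement vector and the two sketches.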
|
Bad-Metal (BM) behavior featuring linear temperature dependence of the
resistivity extending to well above the Mott-Ioffe-Regel (MIR) limit is often
viewed as one of the key unresolved signatures of strong correlation. Here we
associate the BM behavior with the Mott quantum criticality by examining a
fully frustrated Hubbard model where all long-range magnetic orders are
suppressed, and the Mott problem can be rigorously solved through Dynamical
Mean-Field Theory. We show that for the doped Mott insulator regime, the
coexistence dome and the associated first-order Mott metal-insulator transition
are confined to extremely low temperatures, while clear signatures of Mott
quantum criticality emerge across much of the phase diagram. Remarkable scaling
behavior is identified for the entire family of resistivity curves, with a
quantum critical region covering the entire BM regime, providing not only
insight, but also quantitative understanding around the MIR limit, in agreement
with the available experiments.
|
In this paper the global mode structures of linear ion-temperature-gradient
(ITG) modes in tokamak plasmas are obtained by combining results from the local
gyrokinetic code GS2 with analytical theory. Local gyrokinetic calculations,
using GS2, are performed for a range of radial flux surfaces, ${x}$, and
ballooning phase angles, ${p}$, to map out the local complex mode frequency,
${\Omega_{0}(x,p)=\omega_{0}(x,p)+i\gamma_{0}(x,p)}$ for a single toroidal mode
number, ${n}$. Taylor expanding ${\Omega_{0}}$ about ${x=0}$, and employing the
Fourier-ballooning representation leads to a second order ODE for the amplitude
envelope, ${A\left(p\right)}$ , which describes how the local results are
combined to form the global mode. We employ the so-called CYCLONE base case with a circular Miller equilibrium model. Assuming radially varying profiles of
${a/L_{T}}$ and ${a/L_{n}}$, peaked at ${x=0}$, and with all other equilibrium
profiles held constant, ${\Omega_{0}(x,p)}$ is found to have a stationary
point. The reconstructed global mode sits at the outboard mid-plane of the
tokamak, with global growth rate, ${\gamma\sim}$Max${\left[\gamma_{0}\right]}$.
Including the radial variation of other equilibrium profiles like safety factor
and magnetic shear, leads to a mode that peaks away from the outboard
mid-plane, with a reduced global growth rate. Finally, the influence of
toroidal flow shear has also been investigated through the introduction of a
Doppler shift, ${\omega_{0} \rightarrow \omega_{0} - n\Omega_{\phi}^{\prime}
x}$, where ${\Omega_{\phi}}$ is the equilibrium toroidal flow, and a prime
denotes the radial derivative. The equilibrium profile variations introduce an
asymmetry into the global growth rate spectrum with respect to the sign of
${\Omega_{\phi}^{\prime}}$, such that the maximum growth rate is achieved with
non-zero shearing, consistent with recent global gyrokinetic calculations.
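For clarity, the expansion referred to above has the generic form

$$ \Omega_{0}(x,p) \simeq \Omega_{0}(0,p) + x\,\partial_x\Omega_{0}\big|_{x=0} + \tfrac{1}{2}\,x^{2}\,\partial_x^{2}\Omega_{0}\big|_{x=0}, $$

and it is this truncation at second order that, via the Fourier-ballooning representation, produces a second-order ODE for the envelope $A(p)$.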
|
Experiments were performed at a proton accelerator and an infrared laser
facility to investigate the sound generation caused by the energy deposition of
pulsed particle and laser beams in water. The beams with an energy range of 1
PeV to 400 PeV per proton beam spill and up to 10 EeV for the laser pulse were
dumped into a water volume and the resulting acoustic signals were recorded
with pressure sensitive sensors. Measurements were performed at varying pulse
energies, sensor positions, beam diameters and temperatures. The data is well
described by simulations based on the thermo-acoustic model. This implies that
the primary mechanism for sound generation by the energy deposition of
particles propagating in water is the local heating of the medium, giving rise to
an expansion or contraction of the medium resulting in a pressure pulse with
bipolar shape. A possible application of this effect would be the acoustical
detection of neutrinos with energies greater than 1 EeV.
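For context, a reference form of the thermo-acoustic model invoked above (stated here in its standard textbook form, not taken from the paper): the pressure field generated by a rapidly deposited energy density $\epsilon(\vec r, t)$ obeys a wave equation with a source proportional to the second time derivative of the deposition,

$$ \nabla^{2} p - \frac{1}{c_{s}^{2}}\,\frac{\partial^{2} p}{\partial t^{2}} = -\frac{\beta}{C_{p}}\,\frac{\partial^{2}\epsilon}{\partial t^{2}}, $$

where $c_s$ is the sound speed, $\beta$ the volume thermal expansion coefficient and $C_p$ the specific heat of the medium; the bipolar shape of the pressure pulse follows from this source term.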
|
This paper explores methods for modeling polyphonic music sequences. Given the great potential of Transformer models in music generation, controllable music generation is receiving increasing attention. For polyphonic music, current research on controllable generation focuses on controlling chord generation but lacks precise control over the generation of choral music textures. This paper proposes the Condition Choir Transformer (CoCoFormer), which controls the output of the model through fine-grained chord and rhythm inputs. A self-supervised method improves the loss function, and joint training is performed with both conditional and unconditional inputs. To alleviate the lack of diversity in generated samples caused by teacher-forcing training, an adversarial training method is added. CoCoFormer enhances model performance with explicit and implicit chord and rhythm inputs. Experiments show that CoCoFormer outperforms current models. With a specified polyphonic music texture, the same melody can also be generated in a variety of ways.
|
We obtain necessary and sufficient conditions for a matrix $A$ to be
Birkhoff-James orthogonal to another matrix $B$ in the Ky Fan $k$-norms. A
characterization for $A$ to be Birkhoff-James orthogonal to any subspace
$\mathscr W$ of $\mathbb M(n)$ is also obtained.
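For reference, the underlying notion (stated here in general form, with $\|\cdot\|_{(k)}$ the Ky Fan $k$-norm): $A$ is Birkhoff-James orthogonal to $B$ when

$$ \|A + \lambda B\|_{(k)} \;\ge\; \|A\|_{(k)} \qquad \text{for all scalars } \lambda, $$

and orthogonality to a subspace $\mathscr{W}$ means that this inequality holds with $B$ replaced by every element of $\mathscr{W}$.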
|
In addition to being the core quantity in density functional theory, the
charge density can be used in many tertiary analyses in materials sciences from
bonding to assigning charge to specific atoms. The charge density is data-rich
since it contains information about all the electrons in the system. With
increasing utilization of machine-learning tools in materials sciences, a
data-rich object like the charge density can be utilized in a wide range of
applications. The database presented here provides a modern and user-friendly
interface for a large and continuously updated collection of charge densities
as part of the Materials Project. In addition to the charge density data, we
provide the theory and code for changing the representation of the charge
density which should enable more advanced machine-learning studies for the
broader community.
|
The discovery of gravitational waves from compact object coalescences opens a
brand-new window to observe the universe. With more events being detected in
the future, statistical examinations would be essential to better understand
the underlying astrophysical processes. In this work we investigate the
prospect of measuring the mass function of black holes that are merging with
neutron stars. Applying Bayesian parameter estimation for hundreds of
simulated neutron star$-$black hole (NSBH) mergers, we find that the parameters
for most of the injected events can be well recovered. We also take a Bayesian
hierarchical model to reconstruct the population properties of the masses of
black holes, in the presence of a low mass gap, both the mass gap and power-law
index ($\alpha$) of black hole mass function can be well measured, thus we can
reveal where the $\alpha$ is different for binary black hole (BBH) and NSBH
systems. In the absence of a low mass gap, the gravitational wave data as well
as the electromagnetic data can be used to pin down the nature of the merger
event and then measure the mass of these very light black holes. However, as a
result of the misclassification of BBH into NSBH, the measurement of $\alpha$
is more challenging and further dedicated efforts are needed.
|
We develop a probabilistic framework for global modeling of the traffic over
a computer network. This model integrates existing single-link (-flow) traffic
models with the routing over the network to capture the global traffic
behavior. It arises from a limit approximation of the traffic fluctuations as
the time scale and the number of users sharing the network grow. The resulting probability model comprises Gaussian and/or stable, infinite-variance components. They can be succinctly described and handled by certain
'space-time' random fields. The model is validated against simulated and real
data. It is then applied to predict traffic fluctuations over unobserved links
from a limited set of observed links. Further, applications to anomaly
detection and network management are briefly discussed.
|
In this paper, we construct a new family of $(q^4+1)$-tight sets in $Q(24,q)$
or $Q^-(25,q)$ according as $q=3^f$ or $q\equiv 2\pmod 3$. The novelty of the
construction is the use of the action of the exceptional simple group $F_4(q)$
on its minimal module over $\mathbb{F}_q$.
|
In daily life, subjects often face a social dilemma in two stages. In Stage
1, they recognize the social dilemma structure of the decision problem at hand
(a tension between personal interest and collective interest); in Stage 2, they
have to choose between gathering additional information to learn the exact
payoffs corresponding to each of the two options or making a choice without
looking at the payoffs. While previous theoretical research suggests that the
mere act of considering one's strategic options in a social dilemma will be met
with distrust, no experimental study has tested this hypothesis. What does
"looking at payoffs" signal in observers? Do observers' beliefs actually match
decision makers' intentions? Experiment 1 shows that the act of looking at payoffs is perceived by observers as a signal of selfish behavior, but does not in fact indicate it.
Experiments 2 and 3 show that, when the action of looking at payoffs is
replaced by a self-report question asking the extent to which participants look
at payoffs in their everyday lives, subjects in high looking mode are indeed
more selfish than those in low looking mode, and this is correctly predicted by
observers. These results support Rand and colleagues' Social Heuristics
Hypothesis and the novel "cooperate without looking" model by Yoeli, Hoffman,
and Nowak. However, Experiment 1 shows that actual looking may lead to
different results, possibly caused by the emergence of a moral cleansing
effect.
|
Fiducial markers are commonly used in navigation assisted minimally invasive
spine surgery (MISS) and they help transfer image coordinates into real world
coordinates. In practice, these markers might be located outside the
field-of-view (FOV), due to the limited detector sizes of C-arm cone-beam
computed tomography (CBCT) systems used in intraoperative surgeries. As a
consequence, reconstructed markers in CBCT volumes suffer from artifacts and
have distorted shapes, which poses an obstacle to navigation. In this work, we
propose two fiducial marker detection methods: direct detection from distorted
markers (direct method) and detection after marker recovery (recovery method).
For direct detection from distorted markers in reconstructed volumes, an
efficient automatic marker detection method using two neural networks and a
conventional circle detection algorithm is proposed. For marker recovery, a
task-specific learning strategy is proposed to recover markers from severely
truncated data. Afterwards, a conventional marker detection algorithm is
applied for position detection. The two methods are evaluated on simulated data
and real data, both achieving a marker registration error smaller than 0.2 mm.
Our experiments demonstrate that the direct method is capable of detecting
distorted markers accurately and the recovery method with task-specific
learning has high robustness and generalizability on various data sets.
|
We study what we call topological cylindric algebras and tense cylindric
algebras defined for every ordinal $\alpha$. The former are cylindric algebras
of dimension $\alpha$ expanded with $\sf S4$ modalities indexed by $\alpha$.
The semantics of representable topological algebras is induced by the interior
operation relative to a topology defined on their bases. Tense cylindric
algebras are cylindric algebras expanded by the modalities $F$(future) and $P$
(past) algebraising predicate temporal logic.
We show for both tense and topological cylindric algebras of finite dimension
$n>2$ that infinitely many varieties containing and including the variety of
representable algebras of dimension $n$ are not atom canonical. We show that
any class containing the class of completely representable algebras having a
weak neat embedding property is not elementary. From these two results we draw
the same conclusion on omitting types for finite variable fragments of
predicate topological and temporal logic. We show that the usual version of the omitting types theorem restricted to such fragments when the number of variables is $>2$ fails dramatically even if we considerably broaden the class of models permitted to omit a single non-principal type in countable atomic theories, namely, the non-principal type consisting of co-atoms.
|
Conventional adversarial defenses reduce classification accuracy whether or
not a model is under attack. Moreover, most image-processing-based defenses
are defeated due to the problem of obfuscated gradients. In this paper, we
propose a new adversarial defense which is a defensive transform for both
training and test images inspired by perceptual image encryption methods. The
proposed defense utilizes block-wise pixel shuffling with a secret key.
The experiments are carried out on both adaptive and non-adaptive maximum-norm
bounded white-box attacks while considering obfuscated gradients. The results
show that the proposed defense achieves high accuracy (91.55 %) on clean images
and (89.66 %) on adversarial examples with noise distance of 8/255 on CIFAR-10
dataset. Thus, the proposed defense outperforms state-of-the-art adversarial
defenses including latent adversarial training, adversarial training and
thermometer encoding.
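A minimal sketch of the kind of key-based block-wise pixel shuffling described above (the block size, key handling, and per-block permutation scheme here are illustrative assumptions, not the paper's exact specification):

```python
import numpy as np

def blockwise_shuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Shuffle pixels within each block using a permutation derived from a secret key.

    image: H x W x C array; H and W are assumed divisible by block_size.
    The same key reproduces the same permutation, so training and test images
    can be transformed consistently.
    """
    h, w, c = image.shape
    rng = np.random.default_rng(key)                 # key -> deterministic permutation
    perm = rng.permutation(block_size * block_size)  # one permutation reused for all blocks
    out = image.copy()
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = out[i:i + block_size, j:j + block_size].reshape(-1, c)
            out[i:i + block_size, j:j + block_size] = block[perm].reshape(block_size, block_size, c)
    return out

def blockwise_unshuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Invert the transform with the same key (inverse permutation)."""
    h, w, c = image.shape
    rng = np.random.default_rng(key)
    perm = rng.permutation(block_size * block_size)
    inv = np.argsort(perm)                           # inverse permutation
    out = image.copy()
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = out[i:i + block_size, j:j + block_size].reshape(-1, c)
            out[i:i + block_size, j:j + block_size] = block[inv].reshape(block_size, block_size, c)
    return out

# Example: transform a random 32x32 RGB image (CIFAR-10 sized) with key 42.
x = np.random.rand(32, 32, 3).astype(np.float32)
x_shuffled = blockwise_shuffle(x, key=42)
assert np.allclose(blockwise_unshuffle(x_shuffled, key=42), x)
```

In a defense of this kind, the same keyed transform is applied to both training and test images, so only a holder of the secret key can present the classifier with correctly aligned inputs.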
|
In laboratory experiments, we heated chondritic material up to 1400K in a
hydrogen atmosphere. Moessbauer spectroscopy and magnetometry reveal that, at
high temperatures, metallic iron forms from silicates. The transition
temperature is about 1200K after 1 h of tempering, likely decreasing to about
1000K for longer tempering. This implies that in a region of high temperatures
within protoplanetary disks, inward drifting solids will generally be a
reservoir of metallic iron. Magnetic aggregation of iron-rich matter then
occurs within the magnetic field of the disk. However, the Curie temperature of
iron, 1041 K, is a rather sharp discriminator that separates the disk into a
region of strong magnetic interactions of ferromagnetic particles and a region
of weak paramagnetic properties. We call this position in the disk the Curie
line. Magnetic aggregation will be turned on and off here. On the outer,
ferromagnetic side of the Curie line, large clusters of iron-rich particles
grow and might be prone to streaming instabilities. To the inside of the Curie
line, these clusters dissolve, but that generates a large number density that
might also be beneficial for planetesimal formation by gravitational
instability. One way or the other, the Curie line may define a preferred region
for the formation of iron-rich bodies.
|
The paper disproves a basic theorem on quasi-birth-and-death processes given
in [M. F. Neuts (1995). Matrix Geometric Solutions in Stochastic Models: An
Algorithmic Approach. Dover, New York].
|
We study the vortex zero-energy bound states in the presence of pairing among the
low-energy Dirac fermions on the surface of a topological insulator. The
pairing symmetries considered include the $s$-wave, $p$-wave, and, in
particular, the mixed-parity symmetry, which arises in absence of the inversion
symmetry on the surface. The zero-mode is analyzed within the generalized
Jackiw-Rossi-Dirac Hamiltonian that contains a momentum-dependent mass-term,
and includes the effects of the electromagnetic gauge field and the Zeeman
coupling as well. At a finite chemical potential, as long as the spectrum
without the vortex is fully gapped, the presence of a single Fermi surface with
a definite helicity always leads to one Majorana zero-mode, in which both
electron's spin projections participate. In particular, the critical effects of
the Zeeman coupling on the zero-mode are discussed.
|
Based on 5344 quasar spectra taken from the SDSS Data Release 2, the
dependences of various emission-line flux ratios on redshift and quasar
luminosity are investigated in the ranges 2.0 < z < 4.5 and -24.5 > M_B >
-29.5. We show that the emission lines in the composite spectra are fitted
better with power-law profiles than with double Gaussian or modified Lorentzian
profiles, and in particular we show that the power-law profiles are more
appropriate to measure broad emission-line fluxes than other methods. The
composite spectra show that there are statistically significant correlations
between quasar luminosity and various emission-line flux ratios, such as NV/CIV
and NV/HeII, while there are only marginal correlations between quasar redshift
and emission-line flux ratios. We obtain detailed photoionization models to
interpret the observed line ratios. The correlation of line ratios with
luminosity is interpreted in terms of higher gas metallicity in more luminous
quasars. For a given quasar luminosity, there is no metallicity evolution for
the redshift range 2.0 < z < 4.5. The typical metallicity of BLR gas clouds is
estimated to be Z ~ 5 Z_sun, although the inferred metallicity depends on the
assumed BLR cloud properties, such as their density distribution function and
their radial distribution. The absence of a metallicity evolution up to z ~ 4.5
implies that the active star-formation epoch of quasar host galaxies occurred
at z > 7.
|
We report an implementation of a code for SU(3) matrix multiplication on
Cell/B.E., which is a part of our project, Lattice Tool Kit on Cell/B.E.. On
QS20, the speed of the matrix multiplication on the SPEs in single precision is 227 GFLOPS, and it becomes 20 GFLOPS (this value was remeasured and corrected) together with data transfer from main memory by DMA transfer, which is 4.6% of the hardware peak speed (460 GFLOPS) and 7.4% of the theoretical peak speed of this calculation (268.77 GFLOPS). We briefly describe our tuning procedure.
|
We analyzed a small flux rope eruption converted into a helical blowout jet
in a fan-spine configuration using multi-wavelength observations taken by SDO,
which occurred near the limb on 2016 January 9. In our study, first, we
estimated the fan-spine magnetic configuration with the potential field
calculation and found a sinistral small filament inside it. The filament along
with the flux rope erupted upward and interacted with the surrounding fan-spine magnetic configuration, where the flux rope breaks in the middle section.
We observed compact brightening, flare ribbons and post-flare loops underneath
the erupting filament. The northern section of the flux rope reconnected with
the surrounding positive polarity, while the southern section straightened.
Next, we observed the untwisting motion of the southern leg, which was
transformed into a rotating helical blowout jet. The sign of the helicity of
the mini-filament matches that of the rotating jet. This is consistent with
the jet models presented by Adams et al. (2014) and Sterling et al. (2015). We
focused on the fine thread structure of the rotating jet and traced three blobs
with speeds of 60-120 km/s, while the radial speed of the jet is approx 400
km/s. The untwisting motion of the jet accelerated plasma upward along the
collimated outer spine field lines, and it finally evolved into a narrow
coronal mass ejection at the height of approx 9 Rsun . On the basis of the
detailed analysis, we discussed clear evidence of the scenario of the breaking
of the flux rope and the formation of the helical blowout jet in the fan-spine
magnetic configuration.
|
Petri games are a multi-player game model for the synthesis problem in
distributed systems, i.e., the automatic generation of local controllers. The
model represents causal memory of the players, which are tokens on a Petri net
and divided into two teams: the controllable system and the uncontrollable
environment. For one environment player and a bounded number of system players,
the problem of solving Petri games can be reduced to that of solving B\"uchi
games.
High-level Petri games are a concise representation of ordinary Petri games.
Symmetries, derived from a high-level representation, can be exploited to
significantly reduce the state space in the corresponding B\"uchi game. We
present a new construction for solving high-level Petri games. It involves the
definition of a unique, canonical representation of the reduced B\"uchi game.
This allows us to translate a strategy in the B\"uchi game directly into a
strategy in the Petri game. An implementation applied on six structurally
different benchmark families shows in most cases a performance increase for
larger state spaces.
|
The authors consider the Dirichlet problem for the nonstationary Stokes
system in a three-dimensional cone. They obtain existence and uniqueness results
for solutions in weighted Sobolev spaces and prove a regularity assertion for
the solutions.
|
In the classroom, we traditionally visualize inferential concepts using
static graphics or interactive apps. For example, there is a long history of
using apps to visualize sampling distributions. Recent developments in
statistical graphics have created an opportunity to bring additional
visualizations into the classroom to hone student understanding. Specifically,
the lineup protocol for visual inference provides a framework for students to see the difference between signal and noise by embedding a plot of observed data in
a field of null (noise) plots. Lineups have proven valuable in visualizing
randomization/permutation tests, diagnosing models, and even conducting valid
inference when distributional assumptions break down. This paper provides an
overview of how the lineup protocol for visual inference can be used to hone
understanding of key statistical topics throughout the statistics curricula.
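In the classroom this is often done with the R package nullabor; as a language-neutral illustration, the following Python sketch builds a lineup by hiding a scatterplot of "observed" data (with a weak trend) among 19 null panels in which the response has been permuted (all names and parameters here are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# "Observed" data with a weak signal: y depends mildly on x.
n = 60
x = rng.uniform(0, 1, n)
y = 0.8 * x + rng.normal(0, 0.6, n)

# Build a 20-panel lineup: 19 null panels (y permuted, breaking any
# association with x) plus the real data hidden at a random position.
n_panels = 20
true_pos = rng.integers(n_panels)

fig, axes = plt.subplots(4, 5, figsize=(10, 8), sharex=True, sharey=True)
for k, ax in enumerate(axes.flat):
    y_k = y if k == true_pos else rng.permutation(y)
    ax.scatter(x, y_k, s=8)
    ax.set_title(str(k + 1), fontsize=8)
    ax.set_xticks([]); ax.set_yticks([])

fig.suptitle("Which panel looks different? (observed data hidden among null plots)")
plt.tight_layout()
plt.show()
print(f"The observed data is in panel {true_pos + 1}.")
```

If students can reliably pick out the observed panel, the visible structure is unlikely to be noise, which is exactly the logic of a visual significance test.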
|
Our herein described combined analysis of the latest neutrino oscillation
data presented at the Neutrino2020 conference shows that previous hints for the
neutrino mass ordering have significantly decreased, and normal ordering (NO)
is favored only at the $1.6\sigma$ level. Combined with the $\chi^2$ map
provided by Super-Kamiokande for their atmospheric neutrino data analysis the
hint for NO is at $2.7\sigma$. The CP conserving value $\delta_\text{CP} =
180^\circ$ is within $0.6\sigma$ of the global best fit point. Only if we
restrict to inverted mass ordering, CP violation is favored at the $\sim
3\sigma$ level. We discuss the origin of these results - which are driven by
the new data from the T2K and NOvA long-baseline experiments -, and the
relevance of the LBL-reactor oscillation frequency complementarity. The
previous $2.2\sigma$ tension in $\Delta m^2_{21}$ preferred by KamLAND and
solar experiments is also reduced to the $1.1\sigma$ level after the inclusion
of the latest Super-Kamiokande solar neutrino results. Finally we present
updated allowed ranges for the oscillation parameters and for the leptonic
Jarlskog determinant from the global analysis.
|
We study the existence and uniqueness of $L^p$-bounded mild solutions for a class of semilinear stochastic evolution equations driven by a real L\'evy process without Gaussian component that is not square integrable, for instance the stable process, through a truncation method separating the big and small jumps, together with the classical and simple Banach fixed point theorem, under local Lipschitz, H\"older, and linear growth conditions on the coefficients.
|
Pre-trained large-scale language models have increasingly demonstrated high
accuracy on many natural language processing (NLP) tasks. However, the limited
weight storage and computational speed on hardware platforms have impeded the
popularity of pre-trained models, especially in the era of edge computing. In
this paper, we seek to find the best model structure of BERT for a given
computation size to match specific devices. We propose the first compiler-aware
neural architecture optimization framework. Our framework can guarantee the
identified model to meet both resource and real-time specifications of mobile
devices, thus achieving real-time execution of large transformer-based models
like BERT variants. We evaluate our model on several NLP tasks, achieving
competitive results on well-known benchmarks with lower latency on mobile
devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU
with 0.5-2% accuracy loss compared with BERT-base. Our overall framework
achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor
accuracy loss.
|
We present Keck Cosmic Web Imager IFU observations around extended Ly$\alpha$
halos of 27 typical star-forming galaxies with redshifts $2.0 < z < 3.2$ drawn
from the MOSFIRE Deep Evolution Field survey. We examine the average Ly$\alpha$
surface-brightness profiles in bins of star-formation rate (SFR), stellar mass
($M_*$), age, stellar continuum reddening, SFR surface density ($\rm
\Sigma_{SFR}$), and $\rm \Sigma_{SFR}$ normalized by stellar mass ($\rm
\Sigma_{sSFR}$). The scale lengths of the halos correlate with stellar mass,
age, and stellar continuum reddening; and anti-correlate with star-formation
rate, $\rm \Sigma_{SFR}$, and $\rm \Sigma_{sSFR}$. These results are consistent
with a scenario in which the down-the-barrel fraction of Ly$\alpha$ emission is
modulated by the low-column-density channels in the ISM, and that the neutral
gas covering fraction is related to the physical properties of the galaxies.
Specifically, we find that this covering fraction increases with stellar mass,
age, and $E(B-V)$; and decreases with SFR, $\rm \Sigma_{SFR}$ and $\rm
\Sigma_{sSFR}$. We also find that the resonantly scattered Ly$\alpha$ emission
suffers greater attenuation than the (non-resonant) stellar continuum emission,
and that the difference in attenuation increases with stellar mass, age, and
stellar continuum reddening, and decreases with $\rm \Sigma_{sSFR}$. These
results imply that more reddened galaxies have more dust in their CGM.
|
We present a progress report on a project to derive the evolution of the
volumetric supernova Type Ia rate from the Supernova Legacy Survey. Our
preliminary estimate of the rate evolution divides the sample from Neill et al.
(2006) into two redshift bins: 0.2 < z < 0.4, and 0.4 < z < 0.6. We extend this
by adding a bin from the sample analyzed in Sullivan et al. (2006) in the range
0.6 < z < 0.75 from the same time period. We compare the derived trend with
previously published rates and a supernova Type Ia production model having two
components: one component associated closely with star formation and an
additional component associated with host galaxy mass. Our observed trend is
consistent with this model, which predicts a rising SN Ia rate out to at least
z=2.
|
This is the first Monte Carlo study of the hard-sphere lattice gas with
nearest neighbour exclusion on the body-centred cubic lattice. We estimate the
critical activity to be $0.7223 \pm 0.0003$. This result confirms that there is
a re-entrant phase transition of an antiferromagnetic Ising model in an
external field and a Blume-Emery-Griffiths model on the body-centred cubic
lattice.
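For readers who want to experiment, a minimal grand-canonical Monte Carlo sketch of this model follows (single-site insertion/deletion moves at activity $z$ on a periodic BCC lattice stored as two cubic sublattices; the system size, sweep counts, and lack of finite-size analysis make this a toy illustration rather than the estimator behind the quoted critical activity):

```python
import numpy as np

def run_nn_exclusion_bcc(L=8, z=0.7223, sweeps=2000, seed=0):
    """Grand-canonical MC for the hard-sphere lattice gas with nearest-neighbour
    exclusion on an L x L x L BCC lattice (two cubic sublattices), activity z.
    Periodic boundaries; returns the mean occupied fraction per site."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((2, L, L, L), dtype=bool)   # occ[s, i, j, k], s = sublattice

    # Offsets to the 8 BCC nearest neighbours. A corner site (s=0) at (i,j,k)
    # neighbours the body centres indexed by (i+d, j+d', k+d'') with each
    # offset in {-1, 0}; for a body centre (s=1) the offsets are {0, +1}.
    offs = {0: (-1, 0), 1: (0, 1)}

    def neighbours(s, i, j, k):
        t = 1 - s
        for di in offs[s]:
            for dj in offs[s]:
                for dk in offs[s]:
                    yield t, (i + di) % L, (j + dj) % L, (k + dk) % L

    n_sites = 2 * L**3
    densities = []
    for sweep in range(sweeps):
        for _ in range(n_sites):
            s = rng.integers(2)
            i, j, k = rng.integers(L, size=3)
            if occ[s, i, j, k]:
                # Attempt deletion: accept with probability min(1, 1/z).
                if rng.random() < min(1.0, 1.0 / z):
                    occ[s, i, j, k] = False
            else:
                # Attempt insertion: forbidden if any neighbour is occupied.
                if not any(occ[t, a, b, c] for t, a, b, c in neighbours(s, i, j, k)):
                    if rng.random() < min(1.0, z):
                        occ[s, i, j, k] = True
        if sweep >= sweeps // 2:               # crude equilibration cut-off
            densities.append(occ.mean())
    return float(np.mean(densities))

print("mean density per site:", run_nn_exclusion_bcc(L=6, sweeps=500))
```

The single-site flip moves satisfy detailed balance for the configuration weight $z^N$ with the hard-core constraint, which is all the model requires; locating the transition itself would additionally call for order-parameter measurements and finite-size scaling.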
|
Anomaly detection is critical for the secure and reliable operation of
industrial control systems. As our reliance on such complex cyber-physical
systems grows, it becomes paramount to have automated methods for detecting
anomalies, preventing attacks, and responding intelligently. {This paper
presents a novel deep generative model to meet this need. The proposed model
follows a variational autoencoder architecture with a convolutional encoder and
decoder to extract features from both spatial and temporal dimensions.
Additionally, we incorporate an attention mechanism that directs focus towards
specific regions, enhancing the representation of relevant features and
improving anomaly detection accuracy. We also employ a dynamic threshold
approach leveraging the reconstruction probability and make our source code
publicly available to promote reproducibility and facilitate further research.
Comprehensive experimental analysis is conducted on data from all six stages of
the Secure Water Treatment (SWaT) testbed, and the experimental results
demonstrate the superior performance of our approach compared to several
state-of-the-art baseline techniques.
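A compact PyTorch sketch of the general architecture family described above (a convolutional encoder/decoder VAE over sliding sensor windows with a reconstruction-based anomaly score); the attention block and the paper's dynamic thresholding rule are omitted, and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    """Minimal convolutional VAE over sliding windows of multivariate sensor data.

    Input shape: (batch, channels, window); channels = number of sensors.
    This sketch uses a fixed window of 64 time steps.
    """
    def __init__(self, channels: int, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv1d(32, 64, 4, stride=2, padding=1), nn.ReLU(),         # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose1d(32, channels, 4, stride=2, padding=1),       # 32 -> 64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        x_hat = self.dec(self.fc_dec(z).view(-1, 64, 16))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum(dim=(1, 2))                    # Gaussian NLL up to a constant
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return (recon + kl).mean()

# Anomaly scoring: windows whose reconstruction error exceeds a threshold
# (e.g. a high quantile of the errors on normal training data) are flagged.
model = ConvVAE(channels=8)
x = torch.randn(4, 8, 64)                     # toy batch: 4 windows, 8 sensors, 64 steps
x_hat, mu, logvar = model(x)
score = ((x - x_hat) ** 2).mean(dim=(1, 2))   # per-window anomaly score
print(score.shape)                            # torch.Size([4])
```

In practice the threshold would be adapted online, for instance from the reconstruction probability on recent normal data, rather than fixed once on the training set.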
|
Recently, Glasner, Lin and Meyerovitch gave a first example of a partial
invariant order on a certain group that cannot be invariantly extended to an
invariant random total order. Using their result as a starting point we prove
that any invariant random partial order on a countable group can be invariantly extended to an invariant random total order if and only if the group is amenable.
|
Recent theoretical results of the standard model expectations on
$\sin2\beta_{\rm eff}$ from penguin-dominated $b \to s$ decays are briefly
reviewed.
|
Inelastic $\alpha$-scattering excitation cross sections are calculated for
electric dipole excitations in $^{124}$Sn based on the transition densities
obtained from the relativistic quasiparticle time-blocking approximation
(RQTBA) in the framework of a semiclassical model. The calculation provides the
missing link to directly compare the results from the microscopic RQTBA
calculations to recent experimental data measured via the $(\alpha ,\alpha
'\gamma)$ reaction, which show a structural splitting of the low-lying E1
strength often denoted as pygmy dipole resonance (PDR). The experimentally
observed splitting is reproduced by the cross section calculations, which allows conclusions to be drawn about the structure of the PDR.
|
We study the sensitivity of the Large Hadron Collider (LHC) to top quark
chromomagnetic (CMDM) and chromoelectric (CEDM) dipole moments and $Wtb$
effective couplings in single-top production in association with a $W^-$ boson,
followed by semileptonic decay of the top. We calculate the top polarization
and the effects of these anomalous couplings on it at two centre-of-mass (cm)
energies, 8 TeV and 14 TeV. As a measure of top polarization, we look at
decay-lepton angular distributions in the laboratory frame, without requiring
reconstruction of the rest frame of the top, and study the effect of the
anomalous couplings on these distributions. We construct certain asymmetries to
study the sensitivity of these distributions to top-quark couplings. The Wt
single-top production mode helps to isolate the anomalous $ttg$ and $Wtb$
couplings, in contrast to top-pair production and other single-top production
modes, where other new-physics effects can also contribute. We determine
individual limits on the dominant couplings, viz., the real part of the CMDM
$Re\rho_2$, the imaginary part of the CEDM $Im\rho_3$, and the real part of the
tensor Wtb coupling $Ref_{2r}$, which may be obtained by utilizing these
asymmetries at the LHC. We also obtain simultaneous limits on pairs of these
couplings taking two couplings to be non-zero at a time.
|
Both bottom-up and top-down strategies have been used for neural
transition-based constituent parsing. The parsing strategies differ in terms of
the order in which they recognize productions in the derivation tree, where
bottom-up strategies and top-down strategies take post-order and pre-order
traversal over trees, respectively. Bottom-up parsers benefit from rich
features from readily built partial parses, but lack lookahead guidance in the
parsing process; top-down parsers benefit from non-local guidance for local
decisions, but rely on a strong encoder over the input to predict a constituent
hierarchy before its construction. To mitigate both issues, we propose a novel
parsing system based on in-order traversal over syntactic trees, designing a
set of transition actions to find a compromise between bottom-up constituent
information and top-down lookahead information. Based on stack-LSTM, our
psycholinguistically motivated constituent parsing system achieves 91.8 F1 on
the WSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised
reranking and 94.2 F1 with semi-supervised reranking, which are the best
results on the WSJ benchmark.
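To make the difference between the traversal orders concrete, here is a small Python illustration on a toy constituent tree (it shows only the node orderings, not the paper's transition system or stack-LSTM model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def pre_order(t):                      # top-down: parent before its children
    yield t.label
    for c in t.children:
        yield from pre_order(c)

def post_order(t):                     # bottom-up: children before their parent
    for c in t.children:
        yield from post_order(c)
    yield t.label

def in_order(t):                       # in-order: first child, then parent, then the rest
    if t.children:
        yield from in_order(t.children[0])
    yield t.label
    for c in t.children[1:]:
        yield from in_order(c)

# (S (NP the cat) (VP sat))
tree = Node("S", [Node("NP", [Node("the"), Node("cat")]), Node("VP", [Node("sat")])])
print("pre-order :", list(pre_order(tree)))    # ['S', 'NP', 'the', 'cat', 'VP', 'sat']
print("post-order:", list(post_order(tree)))   # ['the', 'cat', 'NP', 'sat', 'VP', 'S']
print("in-order  :", list(in_order(tree)))     # ['the', 'NP', 'cat', 'S', 'sat', 'VP']
```

In the in-order sequence each non-terminal is predicted after its first child but before the rest, which is the compromise between bottom-up evidence and top-down lookahead exploited by the parser.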
|
We develop a theory of limits for sequences of dense abstract simplicial
complexes, where a sequence is considered convergent if its homomorphism
densities converge. The limiting objects are represented by stacks of
measurable [0,1]-valued functions on unit cubes of increasing dimension, each
corresponding to a dimension of the abstract simplicial complex. We show that
convergence in homomorphism density implies convergence in a cut-metric, and
vice versa, as well as showing that simplicial complexes sampled from the limit objects closely resemble their structure. Applying this framework, we also
partially characterize the convergence of nonuniform hypergraphs.
|
Do-calculus is concerned with estimating the interventional distribution of
an action from the observed joint probability distribution of the variables in
a given causal structure. All identifiable causal effects can be derived using
the rules of do-calculus, but the rules themselves do not give any direct
indication whether the effect in question is identifiable or not. Shpitser and
Pearl constructed an algorithm for identifying joint interventional
distributions in causal models, which contain unobserved variables and induce
directed acyclic graphs. This algorithm can be seen as a repeated application
of the rules of do-calculus and known properties of probabilities, and it
ultimately either derives an expression for the causal distribution, or fails
to identify the effect, in which case the effect is non-identifiable. In this
paper, the R package causaleffect is presented, which provides an
implementation of this algorithm. Functionality of causaleffect is also
demonstrated through examples.
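As a concrete orientation point (a textbook identity, not output of the package): when a set of covariates $Z$ satisfies the back-door criterion relative to $(X, Y)$, the rules of do-calculus yield the familiar adjustment formula

$$ P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z). $$

The identification algorithm implemented in causaleffect derives expressions of this kind, or certifies non-identifiability, for causal models with unobserved variables.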
|
We study the influence of the spin lattice distortion on the properties of
frustrated magnetic systems and consider the applicability of the spin-1/2
frustrated square lattice model to materials lacking tetragonal symmetry. We
focus on the case of layered vanadium phosphates AA'VO(PO4)2 (AA' = Pb2, SrZn,
BaZn, and BaCd). To provide a proper microscopic description of these
compounds, we use extensive band structure calculations for real materials and
model structures and supplement this analysis with simulations of thermodynamic
properties, thus facilitating a direct comparison with the experimental data.
Due to the reduced symmetry, the realistic spin model of layered vanadium
phosphates AA'VO(PO4)2 includes four inequivalent exchange couplings: J1 and
J1' between nearest-neighbors and J2 and J2' between next-nearest-neighbors.
The estimates of individual exchange couplings suggest different regimes, from
J1'/J1 and J2'/J2 close to 1 in BaCdVO(PO4)2, a nearly regular frustrated
square lattice, to J1'/J1 ~ 0.7 and J2'/J2 ~ 0.4 in SrZnVO(PO4)2, a frustrated
square lattice with sizable distortion. The underlying structural differences
are analyzed, and the key factors causing the distortion of the spin lattice in
layered vanadium compounds are discussed. We propose possible routes for
finding new frustrated square lattice materials among complex vanadium oxides.
Full diagonalization simulations of thermodynamic properties indicate the
similarity of the extended model to the regular one with averaged couplings. In
the case of moderate frustration and moderate distortion, valid for all the
AA'VO(PO4)2 compounds reported so far, the distorted spin lattice can be
considered as a regular square lattice with the couplings (J1+J1')/2 between
nearest-neighbors and (J2+J2')/2 between next-nearest-neighbors.
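For reference, the distorted frustrated-square-lattice model discussed above can be written, up to conventions, as the spin-1/2 Heisenberg Hamiltonian

$$ H \;=\; J_1 \sum_{\langle ij\rangle} \mathbf{S}_i\cdot\mathbf{S}_j \;+\; J_1' \sum_{\langle ij\rangle'} \mathbf{S}_i\cdot\mathbf{S}_j \;+\; J_2 \sum_{\langle\langle ij\rangle\rangle} \mathbf{S}_i\cdot\mathbf{S}_j \;+\; J_2' \sum_{\langle\langle ij\rangle\rangle'} \mathbf{S}_i\cdot\mathbf{S}_j, $$

where the unprimed and primed sums run over the two inequivalent sets of nearest-neighbour bonds and of next-nearest-neighbour (diagonal) bonds, respectively; the regular frustrated square lattice is recovered for $J_1' = J_1$ and $J_2' = J_2$.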
|