We present a simple but explicit example of a recent development which
connects quantum integrable models with Schubert calculus: there is a purely
geometric construction of solutions to the Yang-Baxter equation and their
associated Yang-Baxter algebras which play a central role in quantum integrable
systems and exactly solvable lattice models in statistical physics. We consider
the degenerate five-vertex limit of the asymmetric six-vertex model and
identify its associated Yang-Baxter algebra as a convolution algebra arising from
the equivariant Schubert calculus of Grassmannians. We show how our method can
be used to construct (Schur algebra type) quotients of the current algebra
$\mathfrak{gl}_2[t]$ acting on the tensor product of copies of its evaluation
representation $\mathbb{C}^2[t]$. Finally we connect it with the COHA for the
$A_1$-quiver.
|
This manuscript explores the connections between a class of stochastic
processes called "Stochastic Loewner Evolution" (SLE) and conformal field
theory (CFT).
First, some important results are recalled which we utilise in the sequel, in
particular the notion of conformal restriction and of the "restriction
martingale", originally introduced by G. F. Lawler et al. in their work on
conformal restriction.
Then an explicit construction of a link between SLE and the representation
theory of the Virasoro algebra is given. In particular, we interpret the Ward
identities in terms of the restriction property and the central charge in terms
of the density of Brownian bubbles. We then show that this interpretation
makes it possible to relate the $\kappa$ of the stochastic process to the central
charge $c$ of the conformal field theory. This is achieved via a highest-weight
representation of the Virasoro algebra which is degenerate at level two.
We then give a derivation of the same relations, but from the
theoretical physics point of view. In particular, we explore the relation
between SLE and the geometry of the underlying moduli spaces.
Finally, we outline a general construction which allows one to construct random
curves on arbitrary Riemann surfaces. The key to this is to consider the
canonical operator $\frac{\kappa}{2}L^2_{-1}-2L_{-2}$ in conjunction with a
boundary field that is a degenerate highest-weight field $\psi$ as the
generator of a diffusion on an appropriate moduli space.
|
Proteins are a class of macromolecules composed of amino acid chains whose
sequence influences how they fold and thus dictates their function and
features; they play a central role in major biological processes and are
required for the structure, function, and regulation of the body's tissues.
Understanding protein functions
is vital to the development of therapeutics and precision medicine, and hence
the ability to classify proteins and their functions based on measurable
features is crucial; indeed, the automatic inference of a protein's properties
from its sequence of amino acids, known as its primary structure, remains an
important open problem within the field of bioinformatics, especially given the
recent advancements in sequencing technologies and the extensive number of
known but uncategorized proteins with unknown properties. In this work, we
demonstrate and compare the performance of several deep learning frameworks,
including novel bi-directional LSTM and convolutional models, on widely
available sequencing data from the Protein Data Bank (PDB) of the Research
Collaboratory for Structural Bioinformatics (RCSB), as well as benchmark this
performance against classical machine learning approaches, including k-nearest
neighbors and multinomial regression classifiers, trained on experimental data.
Our results show that our deep learning models deliver superior performance to
classical machine learning methods, with the convolutional architecture
providing the most impressive inference performance.
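As a minimal illustration of the kind of recurrent architecture mentioned above, here is a sketch (assuming PyTorch; the vocabulary size, layer widths, pooling, and class count are illustrative placeholders, not the paper's actual models):

```python
# A minimal bi-directional LSTM classifier for amino-acid sequences.
import torch
import torch.nn as nn

class BiLSTMProteinClassifier(nn.Module):
    def __init__(self, vocab_size=26, embed_dim=32, hidden_dim=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, x):                 # x: (batch, seq_len) of residue ids
        h = self.embed(x)                 # (batch, seq_len, embed_dim)
        out, _ = self.lstm(h)             # (batch, seq_len, 2*hidden_dim)
        pooled = out.mean(dim=1)          # average over sequence positions
        return self.head(pooled)          # (batch, n_classes) logits

logits = BiLSTMProteinClassifier()(torch.randint(1, 26, (8, 100)))
```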
|
For the wave and the Schr\"odinger equations we show how observability can be
deduced from the observability of solutions localized in frequency according to
a dyadic scale.
|
To provide a novel tool for the investigation of the energy landscape of the
Edwards-Anderson spin-glass model we introduce an algorithm that allows an
efficient execution of a greedy optimization based on data from a previously
performed optimization for a similar configuration. As an application we show
how the technique can be used to perform higher-order greedy optimizations and
simulated annealing searches with improved performance.
|
The dynamics of radiation pressure acceleration in the relativistic light
sail regime are analysed by means of large scale, three-dimensional (3D)
particle-in-cell simulations. In contrast to other mechanisms, the 3D dynamics
leads to a faster and higher energy gain than in 1D or 2D geometry. This effect
is caused by the local decrease of the target density due to transverse
expansion leading to a "lighter sail". However, the rarefaction of the target
leads to an earlier transition to transparency limiting the energy gain. A
transverse instability leads to a structured and inhomogeneous ion
distribution.
|
This paper gives a natural extension of the Frobenius-Stickelberger formula and
the Kiepert formula to Abelian functions for "purely trigonal curves",
especially of degree four. A description of the theory of Abelian functions for
general trigonal curves of degree four is also included.
|
In this paper, we introduce the notion of large scale resemblance structure
as a new large scale structure by axiomatizing the concept of `being alike in
large scale' for a family of subsets of a set. We see that in a particular
case, large scale resemblances on a set can induce a nearness on it, and as a
consequence, we offer a relatively big class of examples to show that `not
every near family is contained in a bunch'. Moreover, we show how some large
scale properties like asymptotic dimension can be generalized to large scale
resemblance spaces.
|
In this work we design a general method for proving moment inequalities for
polynomials of independent random variables. Our method works for a wide range
of random variables including Gaussian, Boolean, exponential, Poisson and many
others. We apply our method to derive general concentration inequalities for
polynomials of independent random variables. We show that our method implies
concentration inequalities for some previously open problems, e.g., the
permanent of a random symmetric matrix. We show that our concentration inequality is
stronger than the well-known concentration inequality due to Kim and Vu. The
main advantage of our method in comparison with the existing ones is a wide
range of random variables we can handle and bounds for previously intractable
regimes of high degree polynomials and small expectations. On the negative side
we show that even for Boolean random variables each term in our concentration
inequality is tight.
|
For second countable discrete quantum groups, and more generally second
countable locally compact quantum groups with trivial scaling group, we show
that property (T) is equivalent to every weakly mixing unitary representation
not having almost invariant vectors. This is a generalization of a theorem of
Bekka and Valette from the group setting and was previously established in the
case of low dual by Daws, Skalski, and Viselter. Our approach uses spectral
techniques and is completely different from those of Bekka--Valette and
Daws--Skalski--Viselter. By a separate argument we furthermore extend the
result to second countable nonunimodular locally compact quantum groups, which
are shown in particular not to have property (T), generalizing a theorem of
Fima from the discrete setting. We also obtain quantum group versions of
characterizations of property (T) of Kerr and Pichot in terms of the Baire
category theory of weak mixing representations and of Connes and Weiss in terms
of the prevalence of strongly ergodic actions.
|
We present a framework of an auxiliary field quantum Monte Carlo (QMC) method
for multi-orbital Hubbard models. Our formulation can be applied to a
Hamiltonian which includes both intra- and inter-orbital on-site Coulomb
interactions, intra-site exchange interactions, and energy differences between
orbitals. Based on our framework, we point out possible
ways to investigate various phase transitions such as metal-insulator, magnetic
and orbital order-disorder transitions without the minus sign problem. As an
application, a two-band model is investigated by the projection QMC method and
the ground state properties of this model are presented.
|
It is proposed that high-speed universal quantum gates can be realized by
using non-Abelian holonomic transformation. A cyclic evolution path which
brings the system periodically back to a degenerate qubit subspace is crucial
to holonomic quantum computing. The cyclic nature and the resulting gate
operations are fully dependent on the precise control of driving parameters,
such as the modulated envelope function of the Rabi frequency and the control
phases. We investigate the effects of fluctuations in these driving parameters
on the transformation fidelity of a universal set of single-qubit quantum
gates. We compare the damage effects from different noise sources and determine
the "sweet spots" in the driving parameter space. The nonadiabatic non-Abelian
quantum gate is found to be more susceptible to classical noise on the envelope
function than to noise on the control phases. We also extend our study to a
two-qubit quantum gate.
|
The recent Gaia Data Release 3 has unveiled a catalog of over eight hundred
thousand binary systems, providing orbital solutions for half of them. Since
most of them are unresolved astrometric binaries, several astrophysical
parameters that can only be derived from their relative orbits together with
spectroscopic data, such as the individual stellar masses, remain unknown.
Indeed, only the mass of the primary, $\texttt{m1}$, and a wide interval,
$\texttt{[m2_lower, m2_upper]}$, for the secondary companion of main-sequence
astrometric binaries have been derived to date (Gaia Collaboration et al.,
2023). In order to obtain the correct values for each component, we propose an
analytic algorithm to estimate the two most probable relative orbits and
magnitude differences of a certain main-sequence or subgiant astrometric binary
using all available Gaia data. Subsequently, both possible solutions are
constrained to the one that is consistent with $\texttt{m1, m2_lower}$ and
$\texttt{m2_upper}$. Moreover, we deduce not only the correct values of the
individual masses for each binary but also the size of the telescope necessary
to resolve their components. The workflow of our algorithm as well as the
ESMORGA (Ephemeris, Stellar Masses, and relative ORbits from GAia) catalog with
more than one hundred thousand individual masses, spectral types, and effective
temperatures derived from its application are also presented.
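For orientation, the textbook relation underlying any such mass determination is Kepler's third law applied to the relative orbit; a minimal sketch (the paper's analytic algorithm uses the full Gaia astrometry, so the function and numbers below are purely illustrative):

```python
# Total system mass from the angular relative orbit via Kepler's third law.
def total_mass_msun(a_arcsec, parallax_arcsec, period_yr):
    """Total mass (m1 + m2) in solar masses.

    Kepler's third law in solar units: m1 + m2 = a_AU**3 / P_yr**2,
    with the semi-major axis converted from angular to physical units
    via a_AU = a_arcsec / parallax_arcsec.
    """
    a_au = a_arcsec / parallax_arcsec
    return a_au**3 / period_yr**2

# e.g. a 1" relative orbit at 100 mas parallax with a 25-yr period:
print(total_mass_msun(1.0, 0.100, 25.0))   # -> 1.6 solar masses total
```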
|
Ion beams have been used in cancer treatment, and they have the unique and
desirable feature of depositing most of their energy inside the human body,
where the beam can kill cancer cells. However, conventional ion accelerators
tend to be large and costly. In this paper a future intense-laser ion
accelerator is proposed to make the ion accelerator compact. An intense
femtosecond pulsed laser was employed to accelerate ions.
laser ion accelerator include the energy efficiency from the laser to the ions,
the ion beam collimation, the ion energy spectrum control, the ion beam
bunching and the ion particle energy control. In the study particle computer
simulations were performed to solve the issues, and each component was designed
to control the ion beam quality. When an intense laser illuminates a target,
electrons in the target are accelerated and leave the target; a strong
transient electric field is formed between the high-energy electrons and the
target ions, and the target ions are accelerated. The energy efficiency from
the laser to ions was improved by using a solid target with a fine
sub-wavelength structure or by a near-critical density gas plasma. The ion beam
collimation was realized by holes behind the solid target. The control of the
ion energy spectrum and the ion particle energy, and the ion beam bunching were
successfully realized by a multi-stage laser-target interaction. The present
study proposed a novel concept for a future compact laser ion accelerator,
based on each component study required to control the ion beam quality and
parameters.
|
The DRAGON recoil mass separator at TRIUMF exists to study radiative proton
and alpha capture reactions, which are important in a variety of astrophysical
scenarios. DRAGON experiments require a data acquisition system that can be
triggered on either reaction product ($\gamma$ ray or heavy ion), with the
additional requirement of being able to promptly recognize coincidence events
in an online environment. To this end, we have designed and implemented a new
data acquisition system for DRAGON which consists of two independently
triggered readouts. Events from both systems are recorded with timestamps from
a $20$ MHz clock that are used to tag coincidences in the earliest possible
stage of the data analysis. Here we report on the design, implementation, and
commissioning of the new DRAGON data acquisition system, including the
hardware, trigger logic, coincidence reconstruction algorithm, and live time
considerations. We also discuss the results of an experiment commissioning the
new system, which measured the strength of the $E_{\text{c.m.}} = 1113$
keV resonance in the $^{20}$Ne$\left(p, \gamma \right)^{21}$Na radiative proton
capture reaction.
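A minimal sketch of timestamp-based coincidence tagging between two independently triggered readouts; the 20 MHz clock comes from the text, while the data layout, function name, and window value are illustrative assumptions:

```python
CLOCK_HZ = 20e6  # both readouts share a 20 MHz timestamp clock

def tag_coincidences(gamma_ts, ion_ts, window_s=10e-6):
    """Return (i, j) index pairs whose timestamps differ by <= window_s.

    Both timestamp lists are assumed sorted, in clock ticks; a two-pointer
    sweep finds all pairs within the window in O(n + m + pairs).
    """
    window_ticks = window_s * CLOCK_HZ
    pairs, j0 = [], 0
    for i, tg in enumerate(gamma_ts):
        # advance the lower pointer past ion events too old to match
        while j0 < len(ion_ts) and ion_ts[j0] < tg - window_ticks:
            j0 += 1
        j = j0
        while j < len(ion_ts) and ion_ts[j] <= tg + window_ticks:
            pairs.append((i, j))
            j += 1
    return pairs

print(tag_coincidences([100, 900], [150, 5000]))  # -> [(0, 0)]
```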
|
While recent progress in quantum hardware opens the door to significant
speedup in certain key areas, quantum algorithms are still hard to implement
right, and the validation of such quantum programs is a challenge. Early
attempts either suffer from the lack of automation or parametrized reasoning,
or target high-level abstract algorithm description languages far from the
current de facto consensus of circuit-building quantum programming languages.
As a consequence, no significant quantum algorithm implementation has been
currently verified in a scale-invariant manner. We propose Qbricks, the first
formal verification environment for circuit-building quantum programs,
featuring clear separation between code and proof, parametric specifications
and proofs, a high degree of proof automation, and the ability to encode quantum
programs in a natural way, i.e., close to textbook style. Qbricks builds on best
practices of formal verification for the classical case and tailors them to the
quantum case: we bring a new domain-specific circuit-building language for
quantum programs, namely Qbricks-DSL, together with a new logical specification
language Qbricks-Spec and a dedicated Hoare-style deductive verification rule
named Hybrid Quantum Hoare Logic. In particular, we introduce and intensively
build upon HOPS, a higher-order extension of the recent path-sum symbolic
representation, used for both specification and automation. To illustrate the
opportunity of Qbricks, we implement the first verified parametric
implementations of several famous and non-trivial quantum algorithms, including
the quantum part of Shor integer factoring (Order Finding - Shor-OF), quantum
phase estimation (QPE) - a basic building block of many quantum algorithms, and
Grover search. These breakthroughs were amply facilitated by the specification
and automated deduction principles introduced within Qbricks.
|
In this paper we show that the number of all 1/2-BPS branes in string theory
compactified on a torus can be derived by universal wrapping rules whose
formulation we present. These rules even apply to branes in less than ten
dimensions whose ten-dimensional origin is an exotic brane. In that case the
wrapping rules contain an additional combinatorial factor that is related to
the highest dimension in which the ten-dimensional exotic brane, after
compactification, can be realized as a standard brane. We show that the
wrapping rules also apply to cases with less supersymmetry. As a specific
example, we discuss the compactification of IIA/IIB string theory on
$(T^4/{\mathbb{Z}_2}) \times T^n$.
|
Let $(G,+)$ be a finite abelian group. Then $\so(G)$ and $\eta(G)$ denote
the smallest integer $\ell$ such that each sequence over $G$ of length at least
$\ell$ has a subsequence whose terms sum to $0$ and whose length is,
respectively, equal to and at most the exponent of the group. For groups of
rank two, we study
the inverse problems associated to these constants, i.e., we investigate the
structure of sequences of length $\so(G)-1$ and $\eta(G)-1$ that do not have
such a subsequence. On the one hand, we show that the structure of these
sequences is in general richer than expected. On the other hand, assuming a
well-supported conjecture on this problem for groups of the form $C_m \oplus
C_m$, we give a complete characterization of all these sequences for general
finite abelian groups of rank two. In combination with partial results towards
this conjecture, we get unconditional characterizations in special cases.
|
The invariance of natural objects under perceptual changes is possibly
encoded in the brain by symmetries in the graph of synaptic connections. The
graph can be established via unsupervised learning in a biologically plausible
process across different perceptual modalities. This hypothetical encoding
scheme is supported by the correlation structure of naturalistic audio and
image data and it predicts a neural connectivity architecture which is
consistent with many empirical observations about primary sensory cortex.
|
We made high-resolution spectroscopic observations of limb-spicules in
H-alpha using the Vertical Spectrograph of the Domeless Solar Telescope at Hida
Observatory. While more than half of the observed spicules have Gaussian
line-profiles, some spicules have distinctly asymmetric profiles which can be
fitted with two Gaussian components. The faster of these components has radial
velocities of 10 - 40 km/s and Doppler-widths of about 0.4 A which suggest that
it is from a single spicule oriented nearly along the line-of-sight. Profiles
of the slower components and the single-Gaussian type show very similar
characteristics. Their radial velocities are less than 10 km/s and the
Doppler-widths are 0.6 - 0.9 A. Non-thermal "macroturbulent" velocities of
order 30 km/s are required to explain these width-values.
|
Let $F$ be a finite model of cardinality $M$ and denote by $\operatorname
{conv}(F)$ its convex hull. The problem of convex aggregation is to construct a
procedure having a risk as close as possible to the minimal risk over
$\operatorname {conv}(F)$. Consider the bounded regression model with respect
to the squared risk denoted by $R(\cdot)$. If
${\widehat{f}}_n^{\mathit{ERM-C}}$ denotes the empirical risk minimization
procedure over $\operatorname {conv}(F)$, then we prove that for any $x>0$,
with probability greater than $1-4\exp(-x)$,
\[R({\widehat{f}}_n^{\mathit{ERM-C}})\leq\min_{f\in \operatorname
{conv}(F)}R(f)+c_0\max \biggl(\psi_n^{(C)}(M),\frac{x}{n}\biggr),\] where
$c_0>0$ is an absolute constant and $\psi_n^{(C)}(M)$ is the optimal rate of
convex aggregation defined in Computational Learning Theory and Kernel
Machines (COLT 2003), Springer, pp. 303-313, by $\psi_n^{(C)}(M)=M/n$ when
$M\leq \sqrt{n}$ and $\psi_n^{(C)}(M)=\sqrt{\log (\mathrm{e}M/\sqrt{n})/n}$
when $M>\sqrt{n}$.
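A minimal sketch of the quoted rate, directly transcribing the definition above (notation follows the abstract):

```python
import math

def psi_C(M, n):
    """psi_n^{(C)}(M): M/n for M <= sqrt(n), else sqrt(log(e*M/sqrt(n))/n)."""
    if M <= math.sqrt(n):
        return M / n
    return math.sqrt(math.log(math.e * M / math.sqrt(n)) / n)

print(psi_C(10, 400), psi_C(100, 400))  # small vs. large dictionary regimes
```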
|
Large-scale simulations of the Centaur population are carried out. The
evolution of 23328 particles based on the orbits of 32 well-known Centaurs is
followed for up to 3 Myr in the forward and backward direction under the
influence of the 4 massive planets. The objects exhibit a rich variety of
dynamical behaviour with half-lives ranging from 540 kyr (1996 AR20) to 32 Myr
(2000 FZ53). The mean half-life of the entire sample of Centaurs is 2.7 Myr.
The data are analyzed using a classification scheme based on the controlling
planets at perihelion and aphelion, previously given in Horner et al (2003).
Transfer probabilities are computed and show the main dynamical pathways of the
Centaur population. The total number of Centaurs with diameters larger than 1
km is estimated as roughly 44300, assuming an inward flux of one new
short-period comet every 200 yrs. The flux into the Centaur region from the
Edgeworth-Kuiper belt is estimated to be 1 new object every 125 yrs. Finally,
the flux from the Centaur region to Earth-crossing orbits is 1 new
Earth-crosser every 880 yrs.
|
Explainability is one of the key ethical concepts in the design of AI
systems. However, attempts to operationalize this concept thus far have tended
to focus on approaches such as new software for model interpretability or
guidelines with checklists. Rarely do existing tools and guidance incentivize
the designers of AI systems to think critically and strategically about the
role of explanations in their systems. We present a set of case studies of a
hypothetical AI-enabled product, which serves as a pedagogical tool to empower
product designers, developers, students, and educators to develop a holistic
explainability strategy for their own products.
|
Sharing musical files via the Internet was the essential motivation of early
P2P systems. Despite the great success of P2P file-sharing systems,
these systems support only "simple" queries. The focus in such systems is how
to carry out an efficient query routing in order to find the nodes storing a
desired file. Recently, several research works have been made to extend P2P
systems to be able to share data having a fine granularity (i.e. atomic
attribute) and to process queries written with a highly expressive language
(i.e. SQL). These works have led to the emergence of P2P data sharing systems
that represent a new generation of P2P systems and, at the same time, a next
stage in the evolution of database research. The characteristics of
P2P systems (e.g. large scale, node autonomy and instability) make it
impractical to maintain a global catalog, which is often an essential component in
traditional database systems. Usually, such a catalog stores information about
data, schemas and data sources. Query routing and processing are two problems
affected by the absence of a global catalog. Locating relevant data sources and
generating a close to optimal execution plan become more difficult. In this
paper, we concentrate our study on proposed solutions for both problems.
Furthermore, selected case studies of main P2P data sharing systems are
analyzed and compared.
|
An innovative strategy for the optimal design of planar frames able to resist
seismic excitations is proposed here. The procedure is based on genetic
algorithms (GA) which are performed according to a nested structure suitable to
be implemented in parallel computing on several devices. In particular, this
solution foresees two nested genetic algorithms. The first one, named "External
GA", seeks, among a predefined list of profiles, the size of the structural
elements of the frame which correspond to the most performing solution
associated to the highest value of an appropriate fitness function. The latter
function takes into account, among other considerations, the seismic safety
factor and the failure mode which are calculated by means of the second
algorithm, named "Internal GA". The details of the proposed procedure are
provided and applications to the seismic design of two frames of different size
are described.
|
We report observations and modeling of the stellar remnant and presumed
double-degenerate merger of Type~Iax supernova Pa30, which is the probable
remnant of SN~1181~AD. It is the only known bound stellar SN remnant and the
only star with Wolf-Rayet features that is neither a planetary nebula central
star nor a massive Pop I progenitor. We model the unique emission-line spectrum
with broad, strong O~{\sc vi} and O~{\sc viii} lines as a fast stellar wind and
shocked, hot gas. Non-LTE wind modeling indicates a mass-loss rate of $\sim
10^{-6}\,\rm M_\odot\,yr^{-1}$ and a terminal velocity of $
\sim$15,000~km\,s$^{-1}$, consistent with earlier results. O~{\sc viii} lines
indicate shocked gas temperatures of $T \simeq 4\,$MK. We derive a magnetic
field upper limit of $B<2.5\,$MG, below earlier suggestions. The luminosity
indicates a remnant mass of 1.0--1.65\,\rm M$_\odot$ with ejecta mass
$0.15\pm0.05\,\rm M_\odot$. Archival photometry suggests the stellar remnant
has dimmed by $\sim$0.5 magnitudes over 100 years. A low Ne/O$\,<0.15$ argues
against an O-Ne white dwarf in the merger. A cold dust shell is only the second
detection of dust in an SN Iax and the first of cold dust. Our ejecta mass and
kinetic energy estimates of the remnant are consistent with Type Iax
extragalactic sources.
|
We consider settings where the observations are drawn from a zero-mean
multivariate (real or complex) normal distribution with the population
covariance matrix having eigenvalues of arbitrary multiplicity. We assume that
the eigenvectors of the population covariance matrix are unknown and focus on
inferential procedures that are based on the sample eigenvalues alone (i.e.,
"eigen-inference"). Results found in the literature establish the asymptotic
normality of the fluctuation in the trace of powers of the sample covariance
matrix. We develop concrete algorithms for analytically computing the limiting
quantities and the covariance of the fluctuations. We exploit the asymptotic
normality of the trace of powers of the sample covariance matrix to develop
eigenvalue-based procedures for testing and estimation. Specifically, we
formulate a simple test of hypotheses for the population eigenvalues and a
technique for estimating the population eigenvalues in settings where the
cumulative distribution function of the (nonrandom) population eigenvalues has
a staircase structure. Monte Carlo simulations are used to demonstrate the
superiority of the proposed methodologies over classical techniques and the
robustness of the proposed techniques in high-dimensional, (relatively) small
sample size settings. The improved performance results from the fact that the
proposed inference procedures are "global" (in a sense that we describe) and
exploit "global" information thereby overcoming the inherent biases that
cripple classical inference procedures which are "local" and rely on "local"
information.
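As a toy illustration of the statistic these procedures build on, one can simulate the fluctuation of the trace of powers of the sample covariance matrix (this Monte Carlo sketch is illustrative only and is not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 50, 200, 500            # dimension, samples, Monte Carlo draws
stats = []
for _ in range(reps):
    X = rng.standard_normal((p, n))  # population covariance = identity
    S = X @ X.T / n                  # sample covariance matrix
    stats.append(np.trace(S @ S))    # trace of the second power
stats = np.array(stats)
print(stats.mean(), stats.std())     # fluctuations are asymptotically normal
```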
|
We study violations of the Null Energy Condition (NEC) in Quantum Field
Theory (QFT) and their implications. For the first part of the project, we
examine these violations for classes of already known and novel (first
discussed here) QFT states. Next, we discuss the implications of these
violations focusing on the example of Wormhole Traversability. After reviewing
the current literature on the existing restrictions on these violations, we
conjecture that NEC violating states are incompatible with the Semi-Classical
Gravity approximation. We argue that this conjecture provides the only way out
of the problems introduced by the violations of NEC in this regime. Building on
this, we propose a bound that should hold for all QFT states. Finally, we show
that both our conjecture and bound hold for some relevant classes of QFT
states.
|
A Waring decomposition of a (homogeneous) polynomial f is a minimal sum of
powers of linear forms expressing f. Under certain conditions, such a
decomposition is unique. We discuss some algorithms to compute the Waring
decomposition, which are linked to the equations of certain secant varieties and
to eigenvectors of tensors. In particular we explicitly decompose a general
cubic polynomial in three variables as the sum of five cubes (Sylvester
Pentahedral Theorem).
|
This paper concerns the verification of continuous-time polynomial spline
trajectories against linear temporal logic specifications (LTL without 'next').
Each atomic proposition is assumed to represent a state space region described
by a multivariate polynomial inequality. The proposed approach samples a
trajectory strategically, to capture every one of its region transitions. This
yields a discrete word called a trace, which is amenable to established formal
methods for path checking. The original continuous-time trajectory is shown to
satisfy the specification if and only if its trace does. General topological
conditions on the sample points are derived that ensure a trace is recorded for
arbitrary continuous paths, given arbitrary region descriptions. Using
techniques from computer algebra, a trace generation algorithm is developed to
satisfy these conditions when the path and region boundaries are defined by
polynomials. The proposed PolyTrace algorithm has polynomial complexity in the
number of atomic propositions, and is guaranteed to produce a trace of any
polynomial path. Its performance is demonstrated via numerical examples and a
case study from robotics.
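A minimal one-dimensional sketch of the core idea (not the full PolyTrace algorithm): for a polynomial path x(t) and a polynomial region predicate g(x) >= 0, region transitions occur at real roots of the composition g(x(t)), so sampling once between consecutive roots records the trace:

```python
import numpy as np

x = np.polynomial.Polynomial([0.0, 3.0, -2.0])   # path x(t) = 3t - 2t^2
g = np.polynomial.Polynomial([-1.0, 1.0])        # region: g(x) = x - 1 >= 0

comp = g(x)                                      # g(x(t)) as a polynomial in t
roots = sorted(r.real for r in comp.roots()
               if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0)
samples = np.sort(np.concatenate([[0.0], roots, [1.0]]))
midpoints = (samples[:-1] + samples[1:]) / 2     # one sample per segment
trace = [bool(comp(t) >= 0) for t in midpoints]
print(roots, trace)  # transition times and the trace over the proposition g>=0
```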
|
In this correspondence, we point out two typographical errors in Chai and
Tjhung's paper and we offer the correct formula of the unified Laguerre
polynomial-series-based cumulative distribution function (cdf) for small-scale
fading distributions. A Laguerre polynomial-series-based cdf formula for
non-central chi-square distribution is also provided as a special case of our
unified cdf result.
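For context, a minimal sketch of the standard series representation that such cdf formulas refine: the noncentral chi-square cdf as a Poisson-weighted mixture of central chi-square cdfs (this is the textbook expansion, not the corrected Laguerre-series formula of the correspondence):

```python
from scipy.stats import chi2, ncx2
from scipy.special import gammaln
import numpy as np

def ncx2_cdf_series(x, k, lam, terms=200):
    """cdf of noncentral chi-square (df k, noncentrality lam) as a
    Poisson(lam/2)-weighted mixture of central chi-square cdfs."""
    j = np.arange(terms)
    log_w = -lam / 2 + j * np.log(lam / 2) - gammaln(j + 1)  # Poisson weights
    return np.sum(np.exp(log_w) * chi2.cdf(x, k + 2 * j))

print(ncx2_cdf_series(8.0, 4, 3.0), ncx2.cdf(8.0, 4, 3.0))  # should agree
```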
|
We classify four-dimensional manifolds endowed with symplectic pairs
admitting embedded symplectic spheres with non-negative self-intersection,
following the strategy of McDuff's classification of rational and ruled
symplectic four-manifolds.
|
Most real-world networks evolve over time. Existing literature proposes
models for dynamic networks that are either unlabeled or assumed to have a
single membership structure. On the other hand, a new family of Mixed
Membership Stochastic Block Models (MMSBM) allows to model static labeled
networks under the assumption of mixed-membership clustering. In this work, we
propose to extend this later class of models to infer dynamic labeled networks
under a mixed membership assumption. Our approach takes the form of a temporal
prior on the model's parameters. It relies on the single assumption that
dynamics are not abrupt. We show that our method significantly differs from
existing approaches, and makes it possible to model more complex systems:
dynamic labeled networks. We demonstrate the robustness of our method with
several experiments
on both synthetic and real-world datasets. A key interest of our approach is
that it needs very few training data to yield good results. The performance
gain under challenging conditions broadens the variety of possible applications
of automated learning tools, as in the social sciences, which comprise many fields
where small datasets are a major obstacle to the introduction of machine
learning methods.
|
Given a linear code $C$, one can define the $d$-th power of $C$ as the span
of all componentwise products of $d$ elements of $C$. A power of $C$ may
quickly fill the whole space. Our purpose is to answer the following question:
does the square of a code "typically" fill the whole space? We give a positive
answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or
smaller. Moreover, the convergence speed is exponential if the difference
$k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and
combinatorial arguments, together with algebraic tools involving the precise
computation of the number of quadratic forms of a given rank, and the number of
their zeros.
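A minimal sketch of the experiment this result suggests: draw a random [n, k] binary code, generate its square from the pairwise componentwise products of generator rows, and check whether it fills F_2^n (parameters are illustrative):

```python
import numpy as np

def rank_gf2(rows):
    """Rank over GF(2) by Gaussian elimination with XOR row operations."""
    A = np.array(rows, dtype=np.uint8) % 2
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return r

rng = np.random.default_rng(1)
k, n = 10, 40                       # n below k(k+1)/2 = 55
G = rng.integers(0, 2, (k, n), dtype=np.uint8)
# componentwise products of generator rows span the square code C^2
square_gens = [G[i] & G[j] for i in range(k) for j in range(i, k)]
print(rank_gf2(square_gens) == n)   # True with high probability
```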
|
We developed an open-source scalar wave transport model to estimate the
generalized scattering matrix (S matrix) of a disordered medium in the
diffusion regime. Here, the term generalization refers to the incorporation of
evanescent wave field modes in addition to propagating modes while estimating
the S matrix. For that we used the scalar Kirchhoff-Helmholtz boundary integral
formulation together with the Green's function perturbation method to
generalize the conventional Fisher-Lee relations to include evanescent modes as
well. The estimated S matrix, which satisfies generalized unitarity and
reciprocity conditions, is modeled for a 2D disordered waveguide. The
generalized transmission matrix contained in the S matrix is used to estimate
the optimal phase-conjugate wavefront for focusing onto an evanescent mode. The
phenomenon of a universal transmission value of 2/3 for such an optimal
phase-conjugate wavefront is also shown in the context of evanescent wave mode
focusing through a diffusive disorder. The presented code framework may be of
interest to wavefront shaping researchers for visualizing and estimating wave
transport properties in general.
|
We construct a non-commutative, non-cocommutative, graded bialgebra
$\mathbf{\Pi}$ with a basis indexed by the permutations in all finite symmetric
groups. Unlike the formally similar Malvenuto-Poirier-Reutenauer Hopf algebra,
this bialgebra does not have finite graded dimension. After giving formulas for
the product and coproduct, we show that there is a natural morphism from
$\mathbf{\Pi}$ to the algebra of quasi-symmetric functions, under which the
image of a permutation is its associated Stanley symmetric function. As an
application, we use this morphism to derive some new enumerative identities. We
also describe analogues of $\mathbf{\Pi}$ for the other classical types. In
these cases, the relevant objects are module coalgebras rather than bialgebras,
but there are again natural morphisms to the quasi-symmetric functions, under
which the image of a signed permutation is the corresponding Stanley symmetric
function of type B, C, or D.
|
We study the large time dynamics of a macroscopically large quantum system
under a sudden quench. We show that, first of all, for a generic system in the
thermodynamic limit the Gibbs distribution correctly captures the large time
dynamics of its global observables. In contrast, for an integrable system, the
generalized Gibbs ensemble captures its global large time dynamics only if the
system can be thought of as a number of noninteracting uncorrelated fermionic
degrees of freedom. The conditions for the generalized Gibbs ensemble to
capture the large time dynamics of local quantities are likely to be far less
restrictive, but this question is not systematically addressed here.
|
In recent years, buffer layer ablation failures of high-voltage cables have
been frequently reported in power systems. Previous studies have predominantly
regarded the buffer layer as a continuous homogeneous medium, neglecting its
microstructure. In this paper, the current distribution within the random fiber
networks of the buffer layer is investigated. Experimental results from our
self-designed platform reveal an uneven current distribution in the buffer
layer at the moment it carries current. We name this phenomenon intrinsic
current concentration: the current density concentrates at certain sites inside
the buffer layer. The degree of current concentration is suppressed by
compressing the sample. We then construct a 2D simulation model of the random
fiber networks based on the Mikado model. The simulation results also show an
uneven current distribution in the networks, in which every fiber can be viewed
as a micro-resistor. Two types of dimensionless current concentration factors
are defined to describe the degree of current concentration, and their values
decrease as the fiber density rises. Compressing the buffer layer is equivalent
to increasing the fiber density of the model. We believe that the intrinsic
current concentration phenomenon is mainly related to the inhomogeneity of the
geometric structure of the buffer layer. The ablation traces and fractured
fibers observed in the X-ray micro-computed tomography test support this point.
In addition, the non-ideal surface of the sample can also induce this
phenomenon. Intrinsic current concentration can aggravate the originally
existing macroscopic current concentration in cables, thus causing ablation
failure. Our work may provide a deeper understanding of cable ablation failure
and of the electrical response of similar fibrous materials.
|
Despite the increase in popularity of language models for code generation, it
is still unknown how training on bimodal coding forums affects a model's code
generation performance and reliability. We, therefore, collect a dataset of
over 2.2M StackOverflow questions with answers for finetuning. These fine-tuned
models have average $pass@k$ improvements of 54.64% and 85.35% on the HumanEval
(Chen et al., 2021) and Mostly Basic Program Problems (Austin et al., 2021)
tasks, respectively. This regime further decreases the number of generated
programs with both syntax and runtime errors. However, we find that at higher
temperatures, there are significant decreases in the model's ability to
generate runnable programs despite higher $pass@k$ scores, underscoring the
need for better methods of incorporating such data that mitigate these side
effects. The code can be found at
https://github.com/gabeorlanski/bimodalcode-generation
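For reference, the $pass@k$ numbers quoted above are conventionally computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): pass@k = 1 - C(n-c, k)/C(n, k) for n generations per task of which c pass. A minimal sketch:

```python
import numpy as np

def pass_at_k(n, c, k):
    """Probability that at least one of k generations, drawn without
    replacement from n samples of which c are correct, passes."""
    if n - c < k:
        return 1.0
    # numerically stable form of 1 - C(n-c, k) / C(n, k)
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(200, 13, 1), pass_at_k(200, 13, 100))
```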
|
We propose the application of iterative regularization for the development of
ensemble methods for solving Bayesian inverse problems. Concretely, we
construct (i) a variational iterative regularizing ensemble Levenberg-Marquardt
method (IR-enLM) and (ii) a derivative-free iterative ensemble Kalman smoother
(IR-ES). The aim of these methods is to provide a robust ensemble approximation
of the Bayesian posterior. The proposed methods are based on fundamental ideas
from iterative regularization methods that have been widely used for the
solution of deterministic inverse problems [21]. In this work we are interested
in the application of the proposed ensemble methods for the solution of
Bayesian inverse problems that arise in reservoir modeling applications. The
proposed ensemble methods use key aspects of the regularizing
Levenberg-Marquardt scheme developed by Hanke [16] and that we recently applied
for history matching in [18].
In the case where the forward operator is linear and the prior is Gaussian,
we show that the proposed IR-enLM and IR-ES coincide with standard randomized
maximum likelihood (RML) and the ensemble smoother (ES) respectively. For the
general nonlinear case, we develop a numerical framework to assess the
performance of the proposed ensemble methods at capturing the posterior. This
framework consists of using a state-of-the-art MCMC method for resolving the
Bayesian posterior from synthetic experiments. The resolved posterior via MCMC
then provides a gold standard against which to compare the proposed IR-enLM and
IR-ES. We show that with a careful selection of regularization parameters,
robust approximations of the posterior can be accomplished in terms of mean and
variance. Our numerical experiments showcase the advantage of using iterative
regularization for obtaining more robust and stable approximation of the
posterior than standard unregularized methods.
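For orientation, a minimal sketch of the standard (unregularized) ensemble smoother update that IR-ES regularizes; the forward model, dimensions, and noise level are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, R = np.array([1.0, 2.0]), 0.01 * np.eye(2)   # data and noise covariance
G = lambda u: np.array([u[0] + u[1], u[0] * u[1]])  # toy forward model

U = rng.standard_normal((5, 50))                    # prior ensemble (dim x Ne)
D = np.array([G(u) for u in U.T]).T                 # predicted data ensemble

Uc = U - U.mean(axis=1, keepdims=True)              # mean-centered anomalies
Dc = D - D.mean(axis=1, keepdims=True)
Cud = Uc @ Dc.T / (U.shape[1] - 1)                  # cross-covariance
Cdd = Dc @ Dc.T / (U.shape[1] - 1)                  # data covariance

perturbed = d_obs[:, None] + rng.multivariate_normal(
    np.zeros(2), R, size=U.shape[1]).T              # perturbed observations
U_post = U + Cud @ np.linalg.solve(Cdd + R, perturbed - D)  # Kalman update
```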
|
We derive explicit formulas for integrals of certain symmetric polynomials
used in Keiju Sono's multidimensional sieve of $E_2$-numbers, i.e., integers
which are products of two distinct primes. We use these computations to produce
the currently best-known bounds for gaps between multiple $E_2$-numbers. For
example, we show there are infinitely many occurrences of four $E_2$-numbers
within a gap size of 94 unconditionally and within a gap size of 32 assuming
the Elliott-Halberstam conjecture for primes and sifted $E_2$-numbers.
|
Let $X$ be a complex toric variety equipped with the action of an algebraic
torus $T$, and let $G$ be a complex linear algebraic group. We classify all
$T$-equivariant principal $G$-bundles $\mathcal{E}$ over $X$ and the morphisms
between them. When $G$ is connected and reductive, we characterize the
equivariant automorphism group $\text{Aut}_T(\mathcal{E} )$ of $\mathcal{E}$ as
the intersection of certain parabolic subgroups of $G$ that arise naturally
from the $T$-action on $\mathcal{E}$. We then give a criterion for the
equivariant reduction of the structure group of $\mathcal{E}$ to a Levi
subgroup of $G$ in terms of $\text{Aut}_T(\mathcal{E} )$. We use it to prove a
principal bundle analogue of Kaneyama's theorem on equivariant splitting of
torus equivariant vector bundles of small rank over a projective space. When
$X$ is projective and $G$ is connected and reductive, we show that the notions
of stability and equivariant stability are equivalent for any $T$-equivariant
principal $G$-bundle over $X$.
|
Adversarial training (AT) has become the de-facto standard to obtain models
robust against adversarial examples. However, AT exhibits severe robust
overfitting: cross-entropy loss on adversarial examples, so-called robust loss,
decreases continuously on training examples, while eventually increasing on
test examples. In practice, this leads to poor robust generalization, i.e.,
adversarial robustness does not generalize well to new examples. In this paper,
we study the relationship between robust generalization and flatness of the
robust loss landscape in weight space, i.e., whether robust loss changes
significantly when perturbing weights. To this end, we propose average- and
worst-case metrics to measure flatness in the robust loss landscape and show a
correlation between good robust generalization and flatness. For example,
throughout training, flatness reduces significantly during overfitting such
that early stopping effectively finds flatter minima in the robust loss
landscape. Similarly, AT variants achieving higher adversarial robustness also
correspond to flatter minima. This holds for many popular choices, e.g.,
AT-AWP, TRADES, MART, AT with self-supervision or additional unlabeled
examples, as well as simple regularization techniques, e.g., AutoAugment,
weight decay or label noise. For fair comparison across these approaches, our
flatness measures are specifically designed to be scale-invariant and we
conduct extensive experiments to validate our findings.
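A minimal sketch of an average-case flatness measure in this spirit: the mean loss increase under random weight perturbations of a fixed relative radius (a simplified, generic analogue; the paper's metrics are scale-invariant and evaluated on the robust loss, and `loss` and `w` below are placeholders):

```python
import numpy as np

def average_flatness(loss, w, xi=0.05, n_samples=32, seed=0):
    """Mean increase of loss under random perturbations of relative size xi."""
    rng = np.random.default_rng(seed)
    base = loss(w)
    deltas = []
    for _ in range(n_samples):
        nu = rng.standard_normal(w.shape)
        nu *= xi * np.linalg.norm(w) / np.linalg.norm(nu)  # relative radius
        deltas.append(loss(w + nu) - base)
    return np.mean(deltas)   # large value = sharp minimum, small = flat

quadratic = lambda w: float(w @ w)          # toy loss landscape
print(average_flatness(quadratic, np.ones(10)))
```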
|
For high-dimensional inference problems, statisticians have a number of
competing interests. On the one hand, procedures should provide accurate
estimation, reliable structure learning, and valid uncertainty quantification.
On the other hand, procedures should be computationally efficient and able to
scale to very high dimensions. In this note, I show that a very simple
data-dependent measure can achieve all of these desirable properties
simultaneously, along with some robustness to the error distribution, in sparse
sequence models.
|
Many planning formalisms allow for mixing numeric with Boolean effects.
However, planning in most of these formalisms is undecidable. In this paper, we will
analyze possible causes for this undecidability by studying the number of
different occurrences of actions, an approach that proved useful for metric
fluents before. We will start by reformulating a numeric planning problem known
as restricted tasks as a search problem. We will then show how an NP-complete
fragment of numeric planning can be found by using heuristics. To achieve this,
we will develop the idea of multi-valued partial order plans, a least
committing compact representation for (sequential and parallel) plans. Finally,
we will study optimization techniques for this representation to incorporate
soft preconditions.
|
Decision trees are one of the most famous methods for solving classification
problems, mainly because of their good interpretability properties. Moreover,
due to advances in recent years in mixed-integer optimization, several models
have been proposed to formulate the problem of computing optimal classification
trees. The goal is, given a set of labeled points, to split the feature space
with hyperplanes and assign a class to each partition. In certain scenarios,
however, labels are exclusively accessible for a subset of the given points.
Additionally, this subset may be non-representative, such as in the case of
self-selection in a survey. Semi-supervised decision trees tackle the setting
of labeled and unlabeled data and often contribute to enhancing the reliability
of the results. Furthermore, undisclosed sources may provide extra information
about the size of the classes. We propose a mixed-integer linear optimization
model for computing semi-supervised optimal classification trees that cover the
setting of labeled and unlabeled data points as well as the overall number of
points in each class for a binary classification. Our numerical results show
that our approach leads to a better accuracy and a better Matthews correlation
coefficient for biased samples compared to other optimal classification trees,
even if only a few labeled points are available.
|
We analyzed Chandra X-ray observations of five galaxy clusters whose
atmospheric cooling times, entropy parameters, and cooling time to free-fall
time ratios within the central galaxies lie below 1 Gyr, below 30 keV cm^2, and
in the range 20 < tcool/tff < 50, respectively. These thermodynamic properties are
commonly associated with molecular clouds, bright H-alpha emission, and star
formation in central galaxies. However, none of these clusters have detectable
H-alpha emission indicated in the ACCEPT database, nor do they have significant star
formation rates or detectable molecular gas. Among these, only RBS0533 has a
detectable radio/X-ray bubble of the kind commonly observed in cooling
atmospheres. Signatures of uplifted, high metallicity atmospheric gas are
absent. Despite its prominent X-ray bubble, RBS0533 lacks significant levels of
molecular gas. Cold gas is absent at appreciable levels in these systems
perhaps because their radio sources have failed to lift low entropy atmospheric
gas to an altitude where the ratio of the cooling time to the free-fall time
falls below unity.
|
Open-vocabulary querying in 3D space is challenging but essential for scene
understanding tasks such as object localization and segmentation.
Language-embedded scene representations have made progress by incorporating
language features into 3D spaces. However, their efficacy heavily depends on
neural networks that are resource-intensive in training and rendering. Although
recent 3D Gaussians offer efficient and high-quality novel view synthesis,
directly embedding language features in them leads to prohibitive memory usage
and decreased performance. In this work, we introduce Language Embedded 3D
Gaussians, a novel scene representation for open-vocabulary query tasks.
Instead of embedding high-dimensional raw semantic features on 3D Gaussians, we
propose a dedicated quantization scheme that drastically alleviates the memory
requirement, and a novel embedding procedure that achieves smoother yet highly
accurate queries, countering the multi-view feature inconsistencies and the
high-frequency inductive bias in point-based representations. Our comprehensive
experiments show that our representation achieves the best visual quality and
language querying accuracy among current language-embedded representations,
while maintaining real-time rendering frame rates on a single desktop GPU.
|
In this paper, we consider a theory of gravity with a metric-dependent
torsion namely the $F(R,T)$ gravity, where $R$ is the curvature scalar and $T$
is the torsion scalar. We study the geometric roots of such a theory; in
particular, we derive the model from a geometrical point of view. Then we
present the more general form of $F(R,T)$ gravity with two arbitrary functions
and give some of its particular cases. In particular, the usual $F(R)$ and
$F(T)$ gravity theories are the particular cases of the $F(R,T)$ gravity. In
the cosmological context, we find that our new gravitational theory can
describe the accelerated expansion of the universe.
|
We investigate the problem of autonomous racing among teams of cooperative
agents that are subject to realistic racing rules. Our work extends previous
research on hierarchical control in head-to-head autonomous racing by
considering a generalized version of the problem while maintaining the
two-level hierarchical control structure. A high-level tactical planner
constructs a discrete game that encodes the complex rules using simplified
dynamics to produce a sequence of target waypoints. The low-level path planner
uses these waypoints as a reference trajectory and computes high-resolution
control inputs by solving a simplified formulation of a racing game with a
simplified representation of the realistic racing rules. We explore two
approaches for the low-level path planner: training a multi-agent reinforcement
learning (MARL) policy and solving a linear-quadratic Nash game (LQNG)
approximation. We evaluate our controllers on simple and complex tracks against
three baselines: an end-to-end MARL controller, a MARL controller tracking a
fixed racing line, and an LQNG controller tracking a fixed racing line.
Quantitative results show our hierarchical methods outperform the baselines in
terms of race wins, overall team performance, and compliance with the rules.
Qualitatively, we observe the hierarchical controllers mimic actions performed
by expert human drivers such as coordinated overtaking, defending against
multiple opponents, and long-term planning for delayed advantages.
|
The Web is often used for finding information, frequently with a learning intention. In
this thesis, we propose a study to investigate the process of learning online
across varying cognitive learning levels using crowd-sourced participants. Our
aim was to study the impact of cognitive learning levels on search as well as
increase in knowledge. We present 150 participants with 6 search tasks for
varying cognitive levels and collect user interactions and submitted answers as
user data. We present a quantitative analysis of the user data which shows that
learning occurs at all cognitive levels, quantified as calculated
knowledge gain. Further, we also investigate the impact of cognitive learning
level on user interaction and knowledge gain with the help of user data. We
demonstrate that the cognitive learning level of a search session has a
significant impact on the user's search behavior as well as on the knowledge
that is gained. Further, we establish a pattern in which the search behavior
changes across cognitive learning levels, where the least complex search task
has the minimum number of user interactions and the most complex search task
the maximum. With this observation, we were able to demonstrate a
relation between a learner's search behavior and Krathwohl's revised Bloom's
taxonomic structure of cognitive processes. The findings of this thesis are
intended as a significant step toward bridging the relation between search,
learning, and the user.
|
We connect an appropriate feedback loop to a model of 2D vertical eddy of
airflow which unfolds a wide range of vorticity behavior. Computational fluid
dynamics of the twisted roll displays a class of long-lifespan 3D vortices. On
the one hand, the infinitely stable columnar vortex simulated describes
waterspouts and tornadoes with extended lifetimes. On the other hand, a slight
modification of the retroaction exhibits strong similarities to tropical
cyclones. Moreover, we investigate the outcome of vertically shifting the
twisting process. This modelling leads to the simulation of simultaneous
vortices associated with this other class of 3D vortices with short lifespans.
Our heuristic dynamical systems lay the foundations of a comprehensive
modelling of vortices, since they join theory and numerical simulation.
|
Understanding and attributing mental states, known as Theory of Mind (ToM),
emerges as a fundamental capability for human social reasoning. While Large
Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms
underlying these capabilities remain elusive. In this study, we discover that
it is possible to linearly decode the belief status from the perspectives of
various agents through neural activations of language models, indicating the
existence of internal representations of self and others' beliefs. By
manipulating these representations, we observe dramatic changes in the models'
ToM performance, underscoring their pivotal role in the social reasoning
process. Additionally, our findings extend to diverse social reasoning tasks
that involve different causal inference patterns, suggesting the potential
generalizability of these representations.
|
Graph products are characterized by the existence of non-trivial equivalence
relations on the edge set of a graph that satisfy a so-called square property.
We investigate here a generalization, termed RSP-relations. The class of graphs
with non-trivial RSP-relations in particular includes graph bundles.
Furthermore, RSP-relations are intimately related with covering graph
constructions. For K_23-free graphs finest RSP-relations can be computed in
polynomial-time. In general, however, they are not unique and their number may
even grow exponentially. They behave well for graph products, however, in sense
that a finest RSP-relations can be obtained easily from finest RSP-relations on
the prime factors.
|
We construct a Banach rearrangement-invariant norm on a measurable space for
which the finiteness of this norm for a measurable function (random variable)
is equivalent to suitable tail (heavy-tail and light-tail) behavior.
We also investigate the conjugates of the proposed spaces and obtain some
embedding theorems.
Possible applications: Functional Analysis (for instance, interpolation of
operators), Integral Equations, Probability Theory and Statistics (tail
estimations for random variables).
|
In simulations of high energy heavy ion collisions that employ viscous
hydrodynamics, single particle distributions are distorted from their thermal
equilibrium form due to gradients in the flow velocity. These are closely
related to the formulas for the shear and bulk viscosities in the
quasi-particle approximation. Distorted single particle distributions are now
commonly used to calculate the emission of photons and dilepton pairs, and in
the late stage to calculate the conversion of a continuous fluid to individual
particles. We show how distortions of the single particle distribution
functions due to both shear and bulk viscous effects can be done rigorously in
the quasi-particle approximation and illustrate it with the linear $\sigma$
model at finite temperature.
|
It is known that $|\zeta(1+it)|\ll (\log t)^{2/3}$. This paper provides a
new explicit estimate, viz.\ $|\zeta(1+it)|\leq \tfrac{3}{4}\log t$, for $t\geq 3$.
This gives the best upper bound on $|\zeta(1+it)|$ for $t\leq 10^{2\cdot
10^{5}}$.
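A quick numerical sanity check of the stated inequality (assuming mpmath; purely illustrative, not part of the proof):

```python
import mpmath as mp

# compare |zeta(1 + it)| against (3/4) log t at a few sample heights
for t in [3, 100, 1e4]:
    lhs = abs(mp.zeta(mp.mpc(1, t)))
    rhs = 0.75 * mp.log(t)
    print(t, lhs, rhs, lhs <= rhs)
```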
|
Recent developments in multi-dimensional simulations of core-collapse
supernovae have considerably improved our understanding of this complex
phenomenon. In addition to that, one-dimensional (1D) studies have been
employed to study the explosion mechanism and its causal connection to the
pre-collapse structure of the star, as well as to explore the vast parameter
space of supernovae. Nonetheless, many uncertainties still affect the late
stages of the evolution of massive stars, their collapse, and the subsequent
shock propagation. In this review, we will briefly summarize the
state-of-the-art of both 1D and 3D simulations and how they can be employed to
study the evolution of massive stars, supernova explosions, and shock
propagation, focusing on the uncertainties that affect each of these phases.
Finally, we will illustrate the typical nucleosynthesis products that emerge
from the explosion.
|
We show that in characteristic 2, the Steinberg representation of the
symplectic group Sp(2n,q), q a power of an odd prime p, has two irreducible
constituents lying just above the socle that are isomorphic to the two Weil
modules of degree (q^n-1)/2.
|
We have embedded an artificial atom, a superconducting "transmon" qubit, in
an open transmission line and investigated the strong scattering of incident
microwave photons ($\sim6$ GHz). When an input coherent state, with an average
photon number $N\ll1$ is on resonance with the artificial atom, we observe
extinction of up to 90% in the forward propagating field. We use two-tone
spectroscopy to study scattering from excited states and we observe
electromagnetically induced transparency (EIT). We then use EIT to make a
single-photon router, where we can control to what output port an incoming
signal is delivered. The maximum on-off ratio is around 90% with a rise and
fall time on the order of nanoseconds, consistent with theoretical
expectations. The router can easily be extended to have multiple output ports
and it can be viewed as a rudimentary quantum node, an important step towards
building quantum information networks.
|
The Wide Field X-Ray Telescope (WFXT) is a medium-class mission designed to
be 2-orders-of-magnitude more sensitive than any previous or planned X-ray
mission for large area surveys and to match in sensitivity the next generation
of wide-area optical, IR and radio surveys. Using an innovative wide-field
X-ray optics design, WFXT provides a field of view of 1 square degree (10 times
Chandra) with an angular resolution of 5" (Half Energy Width, HEW) nearly
constant over the entire field of view, and a large collecting area (up to 1
m^2 at 1 keV, > 10x Chandra) over the 0.1-7 keV band. WFXT's low-Earth orbit
also minimizes the particle background. In five years of operation, WFXT will
carry out three extragalactic surveys at unprecedented depth and address
outstanding questions in astrophysics, cosmology and fundamental physics. In
this article, we illustrate the mission concept and the connection between
science requirements and mission parameters.
|
We extend the concepts of Autler-Townes doublet and triplet spectroscopy
to the quartuplet and quintuplet, and suggest linkages in the sodium atom in
which to display these spectra. We explore the fundamental quantum-interference
processes involved in the corresponding spectroscopy by examining the Laplace
transform of the corresponding state vector subjected to steady coherent
illumination, in the rotating-wave approximation and with the Weisskopf-Wigner
treatment of spontaneous emission as the simplest form of probability loss. In
the quartuplet, four
fields interact appropriately and resonantly with the five-level atom. The
spectral profile of the single decaying level, upon interaction with three
other levels, splits into four destructively interfering dressed states
generating three dark lines in the spectrum. These dark lines divide the
spectrum into four spectral components (bright lines) whose widths are
effectively controlled by the relative strength of the laser fields and the
relative width of the single decaying level. We also extend the idea to
higher-order multiplet spectroscopy by increasing the number of energy levels
of the atomic system and the number of laser fields coupling the required
states. The apparent disadvantage of these schemes is the successive increase
in the number of laser fields required to couple the strongly interacting
atomic states in complex atomic systems. However, these complexities are
naturally inherent to these atomic systems, and they provide the foundations
for the basic mechanisms of the quantum interference involved in higher-order
multiplet spectroscopy.
|
The Testbed for LISA Analysis (TLA) Project aims to facilitate the
development, validation and comparison of different methods for LISA science
data analysis, by the broad LISA Science Community, to meet the special
challenges that LISA poses. It includes a well-defined Simulated LISA Data
Product (SLDP), which provides a clean interface between the communities that
have developed to model and to analyze the LISA science data stream; a
web-based clearinghouse (at <http://tla.gravity.psu.edu>) providing SLDP
software libraries, relevant software, papers and other documentation, and a
repository for SLDP data sets; a set of mailing lists for communication between
and among LISA simulators and LISA science analysts; a problem tracking system
for SLDP support; and a program of workshops to allow the burgeoning LISA
science community to further refine the SLDP definition, define specific LISA
science analysis challenges, and report their results. This note describes the
TLA Project, the resources it provides immediately, and its future plans, and
it invites the participation of the broader community in the furtherance of
its goals.
|
Deep learning models are trained with certain assumptions about the data
during the development stage and then used for prediction in the deployment
stage. It is important to reason about the trustworthiness of the model's
predictions with unseen data during deployment. Existing methods for specifying
and verifying traditional software are insufficient for this task, as they
cannot handle the complexity of DNN model architecture and expected outcomes.
In this work, we propose a novel technique that uses rules derived from neural
network computations to infer data preconditions for a DNN model to determine
the trustworthiness of its predictions. Our approach, DeepInfer, involves
introducing a novel abstraction for a trained DNN model that enables weakest
precondition reasoning using Dijkstra's Predicate Transformer Semantics. By
deriving rules over the inductive type of neural network abstract
representation, we can overcome the matrix dimensionality issues that arise
from the backward non-linear computation from the output layer to the input
layer. We utilize the weakest precondition computation using rules of each kind
of activation function to compute layer-wise precondition from the given
postcondition on the final output of a deep neural network. We extensively
evaluated DeepInfer on 29 real-world DNN models using four different datasets
collected from five different sources and demonstrated the utility,
effectiveness, and performance improvement over closely related work. DeepInfer
efficiently detects correct and incorrect predictions of high-accuracy models
with high recall (0.98) and a high F-1 score (0.84), improving significantly
over the prior technique, SelfChecker. The average runtime overhead of
DeepInfer is low: 0.22 s across all unseen datasets. We also compared runtime
overhead using the same hardware settings and found that DeepInfer is 3.27
times faster than SelfChecker.
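The central mechanism here, pushing a postcondition on the network's output backward to a precondition on its input, can be illustrated in miniature. The sketch below is not DeepInfer's rule system; it only applies Dijkstra's assignment rule wp(v := e, Q) = Q[e/v] to a tiny fixed ReLU network whose weights and trusted output band are invented for the example.

# Minimal weakest-precondition sketch for a 2-layer ReLU network.
# Weights, biases, and the postcondition band are purely illustrative.
import numpy as np

W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.5, -0.5]]), np.array([0.0])

def relu(z):
    return np.maximum(z, 0.0)

def wp(postcondition):
    """wp of the program h := relu(W1 x + b1); y := W2 h + b2 w.r.t. Q(y).
    For assignments, Dijkstra's rule wp(v := e, Q) = Q[e/v] amounts to
    substituting the computation into the postcondition."""
    def precondition(x):
        h = relu(W1 @ x + b1)   # substitute first assignment
        y = W2 @ h + b2         # substitute second assignment
        return postcondition(y)
    return precondition

# Postcondition, e.g. outputs stay in a band seen as trustworthy in training.
post = lambda y: np.all((y >= -1.0) & (y <= 1.0))
pre = wp(post)

x_unseen = np.array([0.3, -0.7])
print("prediction trustworthy:", bool(pre(x_unseen)))  # deployment-time check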
|
We present a new stellar evolution code and a set of results, demonstrating
its capability at calculating full evolutionary tracks for a wide range of
masses and metallicities. The code is fast and efficient, and is capable of
following through all evolutionary phases, without interruption or human
intervention. It is also meant to be used in the context of modeling the
evolution of dense stellar systems, performing live calculations for both
normal star models and merger-products.
The code is based on a fully implicit, adaptive-grid numerical scheme that
solves simultaneously for structure, mesh and chemical composition. Full
details are given for the treatment of convection, equation of state, opacity,
nuclear reactions and mass loss.
Results of evolutionary calculations are shown for a solar model that matches
the characteristics of the present sun to an accuracy of better than 1%; a 1
Msun model for a wide range of metallicities; a series of models of stellar
populations I and II, for the mass range 0.25 to 64 Msun, followed from
pre-main-sequence to a cool white dwarf or core collapse. An initial-final
mass relationship is derived and compared with previous studies. Finally, we briefly
address the evolution of non-canonical configurations, merger-products of
low-mass main-sequence parents.
|
In this paper, we present a multipath-based simultaneous localization and
mapping (SLAM) algorithm that continuously adapts multiple map feature (MF)
models describing specularly reflected multipath components (MPCs) from flat
surfaces and point-scattered MPCs, respectively. We develop a Bayesian model
for sequential detection and estimation of interacting MF model parameters, MF
states, and the mobile agent's state, including position and orientation. The
Bayesian model is represented by a factor graph enabling the use of belief
propagation (BP) for efficient computation of the marginal posterior
distributions. The algorithm also exploits amplitude information enabling
reliable detection of weak MFs associated with MPCs of very low signal-to-noise
ratios (SNRs). The performance of the proposed algorithm is evaluated using
real millimeter-wave (mmWave) multiple-input-multiple-output (MIMO)
measurements with a single base station setup. Results demonstrate the excellent
localization and mapping performance of the proposed algorithm in challenging
dynamic outdoor scenarios.
|
Animating still face images with deep generative models using a speech input
signal is an active research topic and has seen important recent progress.
However, much of the effort has been put into lip syncing and rendering quality
while the generation of natural head motion, let alone the audio-visual
correlation between head motion and speech, has often been neglected. In this
work, we propose a multi-scale audio-visual synchrony loss and a multi-scale
autoregressive GAN to better handle short and long-term correlation between
speech and the dynamics of the head and lips. In particular, we train a stack
of syncer models on multimodal input pyramids and use these models as guidance
in a multi-scale generator network to produce audio-aligned motion unfolding
over diverse time scales. Our generator operates in the facial landmark domain,
which is a standard low-dimensional head representation. The experiments show
significant improvements over the state of the art in head motion dynamics
quality and in multi-scale audio-visual synchrony both in the landmark domain
and in the image domain.
|
We provide an axiomatic foundation for the representation of
num\'{e}raire-invariant preferences of economic agents acting in a financial
market. In a static environment, the simple axioms turn out to be equivalent to
the following choice rule: the agent prefers one outcome over another if and
only if the expected (under the agent's subjective probability) relative rate
of return of the latter outcome with respect to the former is nonpositive. With
the addition of a transitivity requirement, this last preference relation has
an extension that can be numerically represented by expected logarithmic
utility. We also treat the case of a dynamic environment where consumption
streams are the objects of choice. There, a novel result concerning a canonical
representation of unit-mass optional measures enables us to explicitly solve
the investment--consumption problem by separating the two aspects of investment
and consumption. Finally, we give an application to the problem of optimal
num\'{e}raire investment with a random time-horizon.
|
This paper addresses the robust consensus problem under switching topologies.
Contrary to existing methods, the proposed approach provides decentralized
protocols that achieve consensus for networked multi-agent systems in a
predefined time. Namely, the protocol design provides a tuning parameter that
allows setting the convergence time of the agents to a consensus state. An
appropriate Lyapunov analysis exposes the capability of the current proposal to
achieve predefined-time consensus over switching topologies despite the
presence of bounded perturbations. Finally, the paper presents a comparison
showing that the suggested approach subsumes existing fixed-time consensus
algorithms and provides extra degrees of freedom to obtain predefined-time
consensus protocols that are less over-engineered, i.e., the difference between
the estimated convergence time and its actual value is lower in our approach.
Numerical results are given to illustrate the effectiveness and advantages of
the proposed approach.
|
Thorough knowledge of an X-ray beam's spectrum is essential for assessing the
quality of its source device. Since techniques for directly measuring such
spectra are expensive and laborious, X-ray spectrum reconstruction from
attenuation data has become a promising alternative. However, such
reconstruction corresponds mathematically to an inverse, nonlinear, and
ill-posed problem, so solving it requires powerful optimization algorithms and
good regularization functions. Here, we present a generalized simulated
annealing algorithm combined with a suitable smoothing regularization function
to solve the X-ray spectrum reconstruction inverse problem. We also propose an
approach to set the initial acceptance and visitation temperatures, and a
standardization of the objective function terms, to automate the algorithm for
handling different spectral ranges. Numerical tests considering three
different reference spectra with their attenuation curves are presented.
Results show that the algorithm retrieves the reference spectral shapes with
good accuracy, corroborating the central importance of our regularization
function and the performance improvement of generalized simulated annealing
over its classical version.
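As a hedged illustration of the approach rather than the authors' implementation, SciPy's dual_annealing routine, which implements generalized simulated annealing, can be applied to a toy discretized version of the problem: recover a spectrum s from attenuation data y = A s with a curvature-smoothing penalty. The forward matrix, reference spectrum, and regularization weight below are all invented.

# Toy X-ray spectrum reconstruction via generalized simulated annealing.
# dual_annealing also exposes `initial_temp`, `visit`, and `accept`,
# the knobs corresponding to visitation/acceptance temperature settings.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
n_bins, n_meas = 12, 20

A = rng.uniform(0.1, 1.0, size=(n_meas, n_bins))              # toy attenuation model
s_true = np.exp(-0.5 * ((np.arange(n_bins) - 5) / 2.0) ** 2)  # smooth "spectrum"
y = A @ s_true + 0.01 * rng.normal(size=n_meas)               # noisy measurements

lam = 0.5  # illustrative smoothing weight

def objective(s):
    residual = np.sum((A @ s - y) ** 2)
    smoothness = np.sum(np.diff(s, 2) ** 2)  # penalize curvature
    return residual + lam * smoothness

bounds = [(0.0, 1.5)] * n_bins  # spectra are nonnegative and bounded
result = dual_annealing(objective, bounds, seed=1, maxiter=200)
print("relative error:", np.linalg.norm(result.x - s_true) / np.linalg.norm(s_true))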
|
A combination of ground-based (NTT and VLT) and HST (HDF-N and HDF-S) public
imaging surveys have been used to collect a sample of 1712 I-selected and 319
$K\leq 21$ galaxies. Photometric redshifts have been obtained for all these
galaxies. The results have been compared with the prediction of an analytic
rendition of the current CDM hierarchical models for galaxy formation. We focus
in particular on two observed quantities: the galaxy redshift distribution at
K<21 and the evolution of the UV luminosity density. The derived photometric
redshift distribution is in agreement with the hierarchical CDM prediction,
with a fraction of only 5% of galaxies detected at z>2. This result strongly
supports hierarchical scenarios where present-day massive galaxies are the
result of merging processes. The observed UV luminosity density in the
I-selected sample is confined within a factor of 4 over the whole range
0<z<4.5. CDM models in a critical Universe are not able to produce the density
of UV photons that is observed at z>3. CDM models in a $\Lambda$-dominated
universe are in better agreement at 3<z<4.5, but predict a pronounced peak at
z~1.5 and a drop by a factor of 8 from z=1.5 to z=4 that is not observed in the
data. We conclude that improvements are required in the treatment of the
physical processes directly related to the SFR, e.g. the starburst activity in
merger processes and/or a different feedback on the star formation activity.
|
We present Keck/DEIMOS spectroscopy of globular clusters (GCs) around the
ultra-diffuse galaxies (UDGs) VLSB-B, VLSB-D, and VCC615 located in the central
regions of the Virgo cluster. We spectroscopically identify 4, 12, and 7 GC
satellites of these UDGs, respectively. We find that the three UDGs have
systemic velocities ($V_{sys}$) consistent with being in the Virgo cluster, and
that they span a wide range of velocity dispersions, from $\sim 16$ to $\sim
47$ km/s, and high dynamical mass-to-light ratios within the radius that
contains half the number of GCs ($ 407^{+916}_{-407}$, $21^{+15}_{-11}$,
$60^{+65}_{-38}$, respectively). VLSB-D shows possible evidence for rotation
along the stellar major axis and its $V_{sys}$ is consistent with that of the
massive galaxy M84 and the center of the Virgo cluster itself. These findings,
in addition to having a dynamically and spatially ($\sim 1$ kpc) off-centered
nucleus and being extremely elongated, suggest that VLSB-D could be tidally
perturbed. On the contrary, VLSB-B and VCC615 show no signals of tidal
deformation. Whereas the dynamics of VLSB-D suggest that it has a less massive
dark matter halo than expected for its stellar mass, VLSB-B and VCC615 are
consistent with a $\sim 10^{12}$ M$_{\odot}$ dark matter halo. Although our
samples of galaxies and GCs are small, these results suggest that UDGs may be a
diverse population, with their low surface brightnesses being the result of
very early formation, tidal disruption, or a combination of the two.
|
We discuss the results of our recent analysis [1] of deep inelastic
scattering data on the $F_2$ structure function in the non-singlet
approximation with next-to-next-to-leading-order accuracy. The study of
high-statistics deep inelastic scattering data provided by the BCDMS, SLAC,
NMC and BFP collaborations was performed with special emphasis placed on the
higher-twist contributions. For the coupling constant the value
$\alpha_s(M_Z^2) = 0.1167 \pm 0.0022$ (total experimental error) was found.
|
We determine a formula for the dimension of a family of affine Springer
fibers associated to a symmetric space arising from the block diagonal
embedding $\mathrm{GL}_n\times\mathrm{GL}_n\hookrightarrow\mathrm{GL}_{2n}$.
As an application, we determine the dimension of affine Springer fibers
attached to certain unitary symmetric spaces.
|
In this paper, we obtain the rotating Lifshitz dilaton black brane solutions
in the presence of the quartic quasitopological gravity and then probe the
related thermodynamics. First, we obtain the field equations, from which a
total constant along the radial coordinate $r$ is deduced. Since we cannot
solve the field equations exactly, we investigate the asymptotic behavior of
the solutions at the horizon and at infinity. We obtain the conserved and thermodynamic
quantities such as temperature, angular velocity, entropy, the energy and the
angular momentum densities of the rotating quartic quasitopological Lifshitz
dilaton black brane. By evaluating the total constant at the horizon and at
infinity, we can relate the thermodynamic quantities and arrive at a
Smarr-type formula. We demonstrate that the thermodynamic quantities of this
rotating black brane obey the first law of thermodynamics. We also study the
thermal stability of the rotating quartic quasitopological Lifshitz dilaton
black brane and find that it is not thermally stable.
|
We investigate the dynamic asymptotic dimension for \'etale groupoids
introduced by Guentner, Willett and Yu. In particular, we establish several
permanence properties, including estimates for products and unions of
groupoids. We also establish invariance of the dynamic asymptotic dimension
under Morita equivalence. In the second part of the article, we consider a
canonical coarse structure on an \'etale groupoid and compare the asymptotic
dimension of the resulting coarse space with the dynamic asymptotic dimension
of the underlying groupoid.
|
Characterizing the vacuum of a thermalized SU(3) Yang-Mills theory in the
dual Ginzburg-Landau description, the possibility of topologically nontrivial,
classical monopole fields in the deconfining phase is explored. These fields
are assumed to be Bogomol'nyi-Prasad-Sommerfield (BPS) saturated solutions
along the compact, euclidean time dimension. A corresponding, gauge invariant
monopole interaction is constructed. The model passes first tests. In
particular, a reasonable value for the critical temperature is obtained, and
the partial persistence of nonperturbative features in the deconfining phase of
SU(3) Yang-Mills theory, as it is measured on the lattice, follows naturally.
|
The transverse momentum distribution of produced charged particles is
investigated for gold-gold collisions at $\sqrt{s_{NN}}=200$ GeV. A simple
parameterization is suggested for the particle distribution based on the
nuclear stopping effect. The model can fit very well both the transverse
momentum distributions at different pseudo-rapidities and the pseudo-rapidity
distributions at different centralities. The ratio of rapidity distributions
for peripheral and central collisions is calculated and compared with the data.
|
We propose a multivariate elastic net regression forecast model for German
quarter-hourly electricity spot markets. While the literature is diverse on
day-ahead prediction approaches, both the intraday continuous and intraday
call-auction prices have not been studied intensively with a clear focus on
predictive power. Besides electricity price forecasting, we check for the
impact of early day-ahead (DA) EXAA prices on intraday forecasts. Another
novelty of this paper is the complementary discussion of economic benefits. A
precise estimate is worthless if it cannot be utilized. We elaborate on possible
trading decisions based upon our forecasting scheme and analyze their monetary
effects. We find that even simple electricity trading strategies can lead to
substantial economic impact if combined with a decent forecasting technique.
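To make the modeling idea concrete, the following sketch fits an elastic net forecast on simulated stand-ins for quarter-hourly prices, with lagged prices and an early day-ahead series as regressors; the lag set and penalty weights are illustrative, not the paper's specification.

# Minimal elastic net price forecast on simulated data.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(42)
T = 2000
price = np.cumsum(rng.normal(size=T)) + 50.0     # stand-in intraday price series
da_exaa = price + rng.normal(scale=2.0, size=T)  # stand-in early day-ahead series

lags = [1, 2, 96]  # previous quarter-hours and same time yesterday (96 = 24h * 4)
idx = np.arange(max(lags), T)
X = np.column_stack([price[idx - l] for l in lags] + [da_exaa[idx]])
y = price[idx]

split = int(0.8 * len(y))
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print("out-of-sample MAE:", round(mae, 3))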
|
We analyze the universal radiative correction $\Delta_R^V$ to neutron and
superallowed nuclear $\beta$ decay by expressing the hadronic $\gamma W$-box
contribution in terms of a dispersion relation, which we identify as an
integral over the first Nachtmann moment of the $\gamma W$ interference
structure function $F_3^{(0)}$. By connecting the needed input to existing data
on neutrino and antineutrino scattering, we obtain an updated value of
$\Delta_R^V = 0.02467(22)$, wherein the hadronic uncertainty is reduced.
Assuming other Standard Model theoretical calculations and experimental
measurements remain unchanged, we obtain an updated value of $|V_{ud}| =
0.97366(15)$, raising tension with the first row CKM unitarity constraint. We
comment on ways current and future experiments can provide input to our
dispersive analysis.
|
W3 is one of the most outstanding regions of high-mass star formation in the
outer solar circle, including two active star-forming clouds, W3 Main and
W3(OH). Based on a new analysis of the $^{12}$CO data obtained at
38$^{\prime\prime}$ resolution, we have found three clouds having molecular
mass from 2000 to 8000~$M_\odot$ at velocities of $-50$~km s$^{-1}$, $-43$~km
s$^{-1}$, and $-39$~km s$^{-1}$. The $-43$~km s$^{-1}$ cloud is the most
massive one, overlapping with the $-39$~km s$^{-1}$ cloud and the $-50$~km
s$^{-1}$ cloud toward W3 Main and W3(OH), respectively. In W3 Main and W3(OH),
we have found typical signatures of a cloud-cloud collision, i.e., the
complementary distribution with/without a displacement between the two clouds
and/or a V-shape in the position-velocity diagram. We frame the hypothesis that a
cloud-cloud collision triggered the high-mass star formation in each region.
The collision in W3 Main involves the $-39$~km s$^{-1}$ cloud and the $-43$~km
s$^{-1}$ cloud. The collision likely produced a cavity in the $-43$~km s$^{-1}$
cloud having a size similar to the $-39$~km s$^{-1}$ cloud and triggered the
formation of young high-mass stars in IC~1795 2 Myr ago. We suggest that the
$-39$~km s$^{-1}$ cloud is currently still triggering the formation of the
high-mass objects younger than 1 Myr embedded in W3 Main. On the other hand, another collision
between the $-50$~km s$^{-1}$ cloud and the $-43$~km s$^{-1}$ cloud likely
formed the heavily embedded objects in W3(OH) within $\sim$0.5 Myr ago. The
present results favour the idea that cloud-cloud collisions are common phenomena
not only in the inner solar circle but also in the outer solar circle, where
the number of reported cloud-cloud collisions is yet limited (Fukui et al.
2021, PASJ, 73, S1).
|
Medical imaging refers to the technologies and methods utilized to view the
human body and its inside, in order to diagnose, monitor, or even treat medical
disorders. This paper explores the application of deep learning techniques to
the semantic segmentation of cardiac short-axis MRI (Magnetic Resonance
Imaging) images, with the aim of enhancing the diagnosis, monitoring, and
treatment of medical disorders related to the heart. The focus centers on
implementing various architectures that are derivatives of U-Net, to
effectively isolate specific parts of the heart for comprehensive anatomical
and functional analysis. Through a combination of images, graphs, and
quantitative metrics, the efficacy of the models and their predictions are
showcased. Additionally, this paper addresses encountered challenges and
outlines strategies for future improvements. This abstract provides a concise
overview of the efforts in utilizing deep learning for cardiac image
segmentation, emphasizing both the accomplishments and areas for further
refinement.
|
To predict heat diffusion in a given region over time, it is often
necessary to find a numerical solution of the heat equation. Using techniques
of discrete differential calculus, we propose two unconditionally stable
numerical schemes for simulating the heat equation on a space manifold over
time. The analysis of their stability and error is accomplished by use of the
maximum principle.
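For intuition about unconditional stability, here is a minimal sketch of an implicit (backward Euler) scheme for the 1D heat equation on a flat interval; it does not capture the paper's manifold setting or its discrete differential calculus, and the grid sizes are arbitrary.

# Backward Euler for u_t = u_xx on [0,1] with zero Dirichlet boundaries.
# Implicit schemes of this kind remain stable for any time step dt.
import numpy as np

n, dt, steps = 50, 0.01, 100          # grid points, time step, number of steps
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1 - dx, n)
u = np.sin(np.pi * x)                 # initial temperature profile

# Solve (I - dt * L) u_new = u_old, with L the standard 1D Laplacian stencil.
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
A = np.eye(n) - dt * L

for _ in range(steps):
    u = np.linalg.solve(A, u)

# This mode decays like exp(-pi^2 t) in the exact solution; compare amplitudes.
print("numerical peak:", u.max(), " exact peak:", np.exp(-np.pi**2 * dt * steps))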
|
Textbook Question Answering (TQA) is the task of answering a diagram or
non-diagram question given a large multi-modal context consisting of
abundant essays and diagrams. We argue that the explainability of this task
should place students at the center of consideration. To address this issue,
we devise a novel architecture towards span-level eXplanations of the TQA
(XTQA) based on our proposed coarse-to-fine grained algorithm, which can
provide students not only the answers but also the span-level evidence for
choosing them. This algorithm first coarsely chooses the top $M$ paragraphs
relevant to a question using the TF-IDF method, and then finely chooses the
top $K$ evidence spans from all candidate spans within these paragraphs by
computing the information gain of each span with respect to the question.
Experimental results show that
XTQA significantly improves the state-of-the-art performance compared with
baselines. The source code is available at
https://github.com/keep-smile-001/opentqa
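The coarse step is easy to sketch: rank context paragraphs against the question by TF-IDF similarity and keep the top M. The toy below assumes scikit-learn and uses invented paragraphs; the fine-grained information-gain span scoring is only indicated by a closing comment.

# Coarse paragraph selection for TQA via TF-IDF; toy textbook context.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The mitochondrion is the powerhouse of the cell.",
    "Chlorophyll absorbs light most strongly in the blue and red bands.",
]
question = "Which pigment absorbs light during photosynthesis?"

M = 2
vec = TfidfVectorizer().fit(paragraphs + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(paragraphs))[0]
for i in np.argsort(sims)[::-1][:M]:
    print(f"score {sims[i]:.3f}: {paragraphs[i]}")
# Fine step (not shown): score candidate spans inside these paragraphs by
# their information gain with respect to the question and keep the top K.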
|
Industrial Internet of Things (I-IoT) is a collaboration of devices, sensors,
and networking equipment to monitor and collect data from industrial
operations. Machine learning (ML) methods use this data to make high-level
decisions with minimal human intervention. Data-driven predictive maintenance
(PDM) is a crucial ML-based I-IoT application to find an optimal maintenance
schedule for industrial assets. The performance of these ML methods can be
seriously threatened by adversarial attacks, in which an adversary crafts
perturbed data and sends it to the ML model to deteriorate its prediction
performance. The models should be able to stay robust against these attacks
where robustness is measured by how much perturbation in input data affects
model performance. Hence, there is a need for effective defense mechanisms that
can protect these models against adversarial attacks. In this work, we propose
a double defense mechanism to detect and mitigate adversarial attacks in I-IoT
environments. We first detect if there is an adversarial attack on a given
sample using novelty detection algorithms. Then, based on the outcome of our
algorithm (marking an instance as attack or normal), we select adversarial
retraining or standard training to provide a secondary defense layer. If there
is an attack, adversarial retraining provides a more robust model, while we
apply standard training for regular samples. Since we may not know if an attack
will take place, our adaptive mechanism allows us to consider irregular changes
in data. The results show that our double defense strategy is highly efficient
where we can improve model robustness by up to 64.6% and 52% compared to
standard and adversarial retraining, respectively.
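A minimal sketch of the detection gate, assuming scikit-learn's IsolationForest as the novelty detector; the sensor data are simulated, and the two training routines are left as placeholders for whatever PDM model and adversarial-training method are in use.

# Novelty-detection gate for the double defense: flag incoming samples,
# then route them to adversarial retraining or standard training.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
X_clean = rng.normal(0.0, 1.0, size=(500, 8))        # clean training sensor data
X_incoming = np.vstack([rng.normal(0.0, 1.0, size=(20, 8)),
                        rng.normal(4.0, 1.0, size=(5, 8))])  # last 5 "perturbed"

detector = IsolationForest(contamination=0.05, random_state=0).fit(X_clean)
flags = detector.predict(X_incoming)                  # +1 normal, -1 anomalous

for x, flag in zip(X_incoming, flags):
    if flag == -1:
        pass  # route to adversarial retraining set (placeholder)
    else:
        pass  # route to standard training set (placeholder)
print("flagged as adversarial:", int(np.sum(flags == -1)), "of", len(flags))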
|
Constraint Satisfaction Problems (CSP) constitute a convenient way to capture
many combinatorial problems. The general CSP is known to be NP-complete, but
its complexity depends on the template, usually a set of relations, upon which
an instance is constructed. Depending on this template, there exist tractable and
intractable instances of CSPs. It has been proved that for each CSP problem
over a given set of relations there exists a corresponding CSP problem over
graphs of unary functions belonging to the same complexity class. In this short
note we show a dichotomy theorem for every finite domain D of CSP built upon
graphs of homogeneous co-Boolean functions, i.e., unary functions sharing the
Boolean range {0, 1}.
|
We have developed a web tool to perform Principal Component Analysis (PCA,
Murtagh & Heck 1987; Kendall 1980) on spectral data. The method is especially
designed to perform spectral classification of galaxies from a sample of input
spectra, giving the set of orthonormal vectors called Principal Components
(PCs) and the corresponding projections. The first two projections of the
galaxy spectra onto the PCs are known to correlate with the morphological type
(Connolly et al. 1995) and, following Galaz & de Lapparent (1998), we use the
parameters \delta and \theta which define a spectral classification sequence of
typical galaxies from ellipticals to late spirals and star-forming galaxies.
The program runs on the website http://azul.astro.puc.cl/PCA/ and can be used
without downloading any binary files or building archives of any kind.
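The core computation behind such a tool can be sketched in a few lines: mean-center the spectra, take an SVD, and project each spectrum onto the first two principal components. The synthetic spectra below are invented, and the mapping of the projections to the (\delta, \theta) classification parameters is not shown.

# PCA of galaxy spectra via SVD; the input spectra are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
n_gal, n_wave = 100, 500
base_red = np.linspace(1.0, 0.2, n_wave)       # toy "early-type" continuum
base_blue = np.linspace(0.2, 1.0, n_wave)      # toy "late-type" continuum
mix = rng.uniform(0, 1, size=(n_gal, 1))
spectra = (mix * base_blue + (1 - mix) * base_red
           + 0.02 * rng.normal(size=(n_gal, n_wave)))

# Normalize each spectrum, then diagonalize the mean-centered sample by SVD.
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:2]                                   # first two principal components
proj = (spectra - mean) @ pcs.T                # projections used for classification
print("projection ranges:", proj.min(axis=0), proj.max(axis=0))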
|
In this paper the dynamics of the interaction of attosecond laser pulses with
matter is investigated. It will be shown that the master equation, a modified
Klein-Gordon equation, describes the propagation of heatons, i.e., thermal
wave packets. When the duration of the laser pulses $\delta t$ is of the order
of attoseconds, the heaton thermal wave packets are nondispersive objects. For
$\delta t \to \infty$, the heatons are damped, with a damping factor of the
order of the relaxation time for thermal processes.
Key words: Temperature fields; Attosecond laser pulses; Heatons; Modified
Klein-Gordon equation.
|
Objectives: The objectives of this narrative review are to summarize the
current state of AI applications in neuroimaging for early Alzheimer's disease
(AD) prediction and to highlight the potential of AI techniques in improving
early AD diagnosis, prognosis, and management.
Methods: We conducted a narrative review of studies using AI techniques
applied to neuroimaging data for early AD prediction. We examined
single-modality studies using structural MRI and PET imaging, as well as
multi-modality studies integrating multiple neuroimaging techniques and
biomarkers. Furthermore, we reviewed longitudinal studies that model AD
progression and identify individuals at risk of rapid decline.
Results: Single-modality studies using structural MRI and PET imaging have
demonstrated high accuracy in classifying AD and predicting progression from
mild cognitive impairment (MCI) to AD. Multi-modality studies, integrating
multiple neuroimaging techniques and biomarkers, have shown improved
performance and robustness compared to single-modality approaches. Longitudinal
studies have highlighted the value of AI in modeling AD progression and
identifying individuals at risk of rapid decline. However, challenges remain in
data standardization, model interpretability, generalizability, clinical
integration, and ethical considerations.
Conclusion: AI techniques applied to neuroimaging data have the potential to
improve early AD diagnosis, prognosis, and management. Addressing challenges
related to data standardization, model interpretability, generalizability,
clinical integration, and ethical considerations is crucial for realizing the
full potential of AI in AD research and clinical practice. Collaborative
efforts among researchers, clinicians, and regulatory agencies are needed to
develop reliable, robust, and ethical AI tools that can benefit AD patients and
society.
|
In this work, we show how the complete set of splitting functions relevant
for the evolution of various distribution functions describing nucleonic
helicity structure can be obtained in the light front Hamiltonian perturbation
theory using the completely fixed light-front gauge $A^+=0$.
|
We discuss the inhomogeneous multidimensional mixmaster model in view of the
appearance, near the cosmological singularity, of a scenario for dimensional
compactification in correspondence with an 11-dimensional space-time. Our
analysis puts forward such a collapsing picture toward the singularity as
describing the actual expanding 3-dimensional Universe and an associated
collapsed 7-dimensional space. To this end, a conformal factor is determined
in front of the 4-dimensional metric to remove the 4-curvature divergences,
and the resulting Universe expands with power-law inflation. We thus provide
an additional peculiarity of the eleven space-time dimensions in view of
implementing a geometrical theory of unification.
|
Smart metering of domestic water consumption to continuously monitor the
usage of different appliances has been shown to have an impact on people's
behavior towards water conservation. However, the installation of multiple
sensors to monitor each appliance currently has a high initial cost and as a
result, monitoring consumption from different appliances using sensors is not
cost-effective. To address this challenge, studies have focused on analyzing
measurements of the total domestic consumption using Machine Learning (ML)
methods, to disaggregate water usage into each appliance. Identifying which
appliances are in use through ML is challenging since their operation may be
overlapping, while specific appliances may operate with intermittent flow,
making individual consumption events hard to distinguish. Moreover, ML
approaches require large amounts of labeled input data to train their models,
which are typically not available for a single household, while usage
characteristics may vary in different regions. In this work, we initially
propose a data model that generates synthetic time series based on regional
water usage characteristics and resolution to overcome the need for a large
training dataset with real labeled data. The method requires only a small
amount of real labeled data from the studied region. Following this, we propose a new
algorithm for classifying single and overlapping household water usage events,
using the total domestic consumption measurements.
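A minimal sketch of such a generative data model: draw appliance events from per-appliance statistics standing in for regional usage characteristics, then sum them into a labeled total-consumption series; all rates, durations, and flows below are invented.

# Synthetic labeled household water-use series from per-appliance statistics.
import numpy as np

rng = np.random.default_rng(11)
T = 24 * 60  # one day at 1-minute resolution
appliances = {           # events/day, mean duration (min), mean flow (L/min)
    "toilet": (8, 2, 6.0),
    "shower": (2, 8, 9.0),
    "washing_machine": (1, 45, 4.0),
}

total = np.zeros(T)
labels = {name: np.zeros(T, dtype=bool) for name in appliances}
for name, (rate, dur_mean, flow_mean) in appliances.items():
    for _ in range(rng.poisson(rate)):
        start = rng.integers(0, T)
        dur = max(1, int(rng.normal(dur_mean, dur_mean / 4)))
        end = min(T, start + dur)
        total[start:end] += rng.normal(flow_mean, flow_mean / 10)
        labels[name][start:end] = True

print("overlapping-event minutes:",
      int(np.sum(sum(l.astype(int) for l in labels.values()) > 1)))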
|
Given a pair of graphs $G$ and $H$, the Ramsey number $R(G,H)$ is the
smallest $N$ such that every red-blue coloring of the edges of the complete
graph $K_N$ contains a red copy of $G$ or a blue copy of $H$. If a graph $G$ is
connected, it is well known and easy to show that $R(G,H) \geq
(|G|-1)(\chi(H)-1)+\sigma(H)$, where $\chi(H)$ is the chromatic number of $H$
and $\sigma(H)$ is the size of the smallest color class in a $\chi(H)$-coloring
of $H$. A graph $G$ is called $H$-good if $R(G,H)=
(|G|-1)(\chi(H)-1)+\sigma(H)$. The notion of Ramsey goodness was introduced by
Burr and Erd\H{o}s in 1983 and has been extensively studied since then.
In this paper we show that if $n\geq 10^{60}|H|$ and $\sigma(H)\geq
\chi(H)^{22}$ then the $n$-vertex cycle $C_n$ is $H$-good. For graphs $H$ with
high $\chi(H)$ and $\sigma(H)$, this proves in a strong form a conjecture of
Allen, Brightwell, and Skokan.
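The lower-bound formula is easy to evaluate for small H. The sketch below computes \chi(H) and \sigma(H) by brute force over colorings (exponential in |H|, so for illustration only) and prints the resulting Ramsey-goodness bound.

# Brute-force evaluation of R(G,H) >= (|G|-1)(chi(H)-1) + sigma(H) for small H.
from itertools import product

def chromatic_data(n, edges):
    """Return (chi, sigma): chromatic number and the smallest color-class
    size minimized over all proper chi-colorings."""
    for k in range(1, n + 1):
        best_sigma = None
        for coloring in product(range(k), repeat=n):
            if any(coloring[u] == coloring[v] for u, v in edges):
                continue  # not a proper coloring
            smallest = min(sum(1 for c in coloring if c == j) for j in range(k))
            best_sigma = smallest if best_sigma is None else min(best_sigma, smallest)
        if best_sigma is not None:
            return k, best_sigma
    return n, 1

# Example H = K4 minus an edge: chi = 3, sigma = 1.
n_H, edges_H = 4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
chi, sigma = chromatic_data(n_H, edges_H)
n_G = 20  # |G|, e.g. the cycle C_20
print("lower bound:", (n_G - 1) * (chi - 1) + sigma)  # (20-1)*2 + 1 = 39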
|
Recently reported anomalies in various $B$ meson decays and also in the
anomalous magnetic moment of muon $(g-2)_\mu$ motivate us to consider a
particular extension of the standard model incorporating new interactions in
the lepton and quark sectors simultaneously. Our minimal choice would be a
leptoquark. In particular, we take the vector leptoquark ($U_1$) and
comprehensively study all related observables including ${(g-2)_{\mu}},\
R_{K^{(*)}},\ R_{D^{(*)}}$, $B \to (K) \ell \ell' $ where $\ell\ell'$ are
various combinations of $\mu$ and $\tau$, and also lepton flavor violation in
the $\tau$ decays. We find that a hybrid scenario with additional
$U(1)_{B_3-L_2}$ gauge boson provides a common explanation of all these
anomalies.
|
The Sihl river, located near the city of Zurich in Switzerland, is under
continuous and tight surveillance as it flows directly under the city's main
railway station. To issue early warnings and conduct accurate risk
quantification, a dense network of monitoring stations is necessary inside the
river basin. However, as of 2021 only three automatic stations are operated in
this region, naturally raising the question: how to extend this network for
optimal monitoring of extreme rainfall events?
So far, existing methodologies for station network design have mostly focused
on maximizing interpolation accuracy or minimizing the uncertainty of some
model's parameter estimates. In this work, we propose new principles inspired
from extreme value theory for optimal monitoring of extreme events. For
stationary processes, we study the theoretical properties of the induced
sampling design that yields non-trivial point patterns resulting from a
compromise between a boundary effect and the maximization of inter-location
distances. For general applications, we propose a theoretically justified
functional peak-over-threshold model and provide an algorithm for sequential
station selection. We then issue recommendations for possible extensions of the
Sihl river monitoring network, by efficiently leveraging both station and radar
measurements available in this region.
|
We discuss how to formulate a condition for choosing the vacuum state of a
quantum scalar field on a timelike hyperplane in the general boundary
formulation (GBF) using the coupling to an Unruh-DeWitt detector. We explicitly
study the response of an Unruh-DeWitt detector for evanescent modes which occur
naturally in quantum field theory in the presence of the equivalent of a
dielectric boundary. We find that the physically correct vacuum state has to
depend on the physical situation outside of the boundaries of the spacetime
region considered. Thus it cannot be determined by general principles
pertaining only to a subset of spacetime.
|
Given a set of deep learning models, it can be hard to find models
appropriate to a task, understand the models, and characterize how models
differ from one another. Currently, practitioners rely on manually-written
documentation to understand and choose models. However, not all models have
complete and reliable documentation. As the number of machine learning models
increases, this issue of finding, differentiating, and understanding models is
becoming more crucial. Inspired from research on data lakes, we introduce and
define the concept of model lakes. We discuss fundamental research challenges
in the management of large models. And we discuss what principled data
management techniques can be brought to bear on the study of large model
management.
|
In this paper we present a cubic regularized Newton's method to minimize a
smooth function over a Riemannian manifold. The proposed algorithm is shown to
reach a second-order $\epsilon$-stationary point within
$\mathcal{O}(1/\epsilon^{\frac{3}{2}})$ iterations, under the condition that
the pullbacks are locally Lipschitz continuous, a condition that is shown to be
satisfied if the manifold is compact. Furthermore, we present a local
superlinear convergence result under some additional conditions.
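As a minimal Euclidean illustration (the paper's setting is a Riemannian manifold with retractions and locally Lipschitz pullbacks, which this flat sketch ignores), each iteration minimizes a cubic-regularized second-order model of the objective; the regularization weight is a fixed invented constant rather than an adaptively updated one.

# Cubic-regularized Newton in flat Euclidean space on a toy objective.
import numpy as np
from scipy.optimize import minimize

def f(x):        # toy smooth objective (Rosenbrock)
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

def cubic_step(g, H, sigma):
    # Minimize the model m(s) = g.s + 0.5 s'Hs + (sigma/3)||s||^3.
    m = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s) ** 3
    return minimize(m, np.zeros_like(g)).x

x, sigma = np.array([-1.2, 1.0]), 10.0
for _ in range(50):
    x = x + cubic_step(grad(x), hess(x), sigma)
print("final point:", x, " f(x) =", f(x))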
|
We report results from a systematic wide-area search for faint dwarf galaxies
at heliocentric distances from 0.3 to 2 Mpc using the full six years of data
from the Dark Energy Survey (DES). Unlike previous searches over the DES data,
this search specifically targeted a field population of faint galaxies located
beyond the Milky Way virial radius. We derive our detection efficiency for
faint, resolved dwarf galaxies in the Local Volume with a set of synthetic
galaxies and expect our search to be complete to $M_V \sim (-7, -10)$ mag for
galaxies at $D = (0.3, 2.0)$ Mpc, respectively. We find no new field dwarfs in
the DES footprint, but we report the discovery of one high-significance
candidate dwarf galaxy at a distance of $2.2\substack{+0.05\\-0.12}$ Mpc, a
potential satellite of the Local Volume galaxy NGC 55, separated by $47$ arcmin
(physical separation as small as 30 kpc). We estimate this dwarf galaxy to have
an absolute V-band magnitude of $-8.0\substack{+0.5\\-0.3}$ mag and an
azimuthally averaged physical half-light radius of $2.2\substack{+0.5\\-0.4}$
kpc, making this one of the lowest surface brightness galaxies ever found with
$\mu = 32.3$ mag ${\rm arcsec}^{-2}$. This is the largest, most diffuse galaxy
known at this luminosity, suggesting possible tidal interactions with its host.
|
The proton is one of the main building blocks of all visible matter in the
universe. Among its intrinsic properties are its electric charge, mass, and
spin. These emerge from the complex dynamics of its fundamental constituents,
quarks and gluons, described by the theory of quantum chromodynamics (QCD).
Using electron scattering, its electric charge and spin, shared among the quark
constituents, have been the topic of active investigation. An example is the
novel precision measurement of the proton's electric charge radius. In
contrast, little is known about the proton's inner mass density, dominated by
the energy carried by the gluons, which are hard to access through electron
scattering since gluons carry no electromagnetic charge. Here, we chose to
probe this gluonic gravitational density using a small color dipole, the
$J/\psi$ particle, through its threshold photoproduction. From our data, we
determined, for the first time, the proton's gluonic gravitational form
factors. We used a variety of models and determined, in all cases, a mass
radius that is notably smaller than the electric charge radius. In some cases,
the determined radius, although model dependent, is in excellent agreement with
first-principle predictions from lattice QCD. This work paves the way for a
deeper understanding of the salient role of gluons in providing gravitational
mass to visible matter.
|