We construct minimal surfaces in hyperbolic and anti-de Sitter 3-space with
the topology of an $n$-punctured sphere by loop group factorization methods. The
end behavior of the surfaces is based on the asymptotics of Delaunay-type
surfaces, i.e., rotationally symmetric minimal cylinders. The minimal surfaces in
$\mathrm{H}^3$ extend to Willmore surfaces in the conformal 3-sphere
$\mathrm{S}^3=\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$.
|
An $n \times n$ matrix with $\pm 1$ entries which acts on $\mathbb{R}^n$ as a
scaled isometry is called Hadamard. Such matrices exist in some, but not all
dimensions. Combining number-theoretic and probabilistic tools we construct
matrices with $\pm 1$ entries which act as approximate scaled isometries in
$\mathbb{R}^n$ for all $n$. More precisely, the matrices we construct have
condition numbers bounded by a constant independent of $n$.
Using this construction, we establish a phase transition for the probability
that a random frame contains a Riesz basis. Namely, we show that a random frame
in $\mathbb{R}^n$ formed by $N$ vectors with independent identically
distributed coordinates having a non-degenerate symmetric distribution contains
many Riesz bases with high probability provided that $N \ge \exp(Cn)$. On the
other hand, we prove that if the entries are subgaussian, then a random frame
fails to contain a Riesz basis with probability close to $1$ whenever $N \le
\exp(cn)$, where $c<C$ are constants depending on the distribution of the
entries.
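
As a concrete illustration of the scaled-isometry property (our sketch, not part
of the paper): a Hadamard matrix $H$ satisfies $HH^T = nI$, so every singular
value equals $\sqrt{n}$ and the condition number is exactly 1. The Sylvester
construction realizes this whenever $n$ is a power of two:

```python
# Minimal sketch: Sylvester-Hadamard matrices act as scaled isometries on R^n,
# hence have condition number 1 (assumes n = 2^k; the paper's point is that
# approximate versions with O(1) condition number exist for every n).
import numpy as np

def sylvester_hadamard(k: int) -> np.ndarray:
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])  # doubling step keeps entries +-1
    return H

H = sylvester_hadamard(4)                    # 16 x 16
n = H.shape[0]
assert np.allclose(H @ H.T, n * np.eye(n))   # H/sqrt(n) is an isometry
print(np.linalg.cond(H))                     # 1.0
```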
|
Supersonic magnetohydrodynamic (MHD) turbulence is a ubiquitous state for
many astrophysical plasmas. However, even the basic statistics of this type of
turbulence remain uncertain. We present results from supersonic MHD turbulence
simulations at unparalleled resolutions, with plasma Reynolds numbers of over a
million. In the kinetic energy spectrum we find a break between the scales that
are dominated by kinetic energy, with spectral index $-2$, and those that
become strongly magnetized, with spectral index $-3/2$. By analyzing the
Helmholtz decomposed kinetic energy spectrum, we find that the compressible
modes are not passively mixed through the cascade of the incompressible modes.
At high magnetic Reynolds number, above $10^5$, we find a power law in the
magnetic energy spectrum with spectral index $-9/5$. On the strongly
magnetized, subsonic scales the plasma tends to self-organize into locally
relaxed regions, where there is strong alignment between the current density,
magnetic field, velocity field and vorticity field, depleting both the
nonlinearities and magnetic terms in the MHD equations, which we attribute to
plasma relaxation on scales where the magnetic fluctuations evolve on shorter
timescales than the velocity fluctuations. This process constrains the cascade
to inhomogeneous, volume-poor, fractal surfaces between relaxed regions, which
has significant repercussions for understanding the nature of magnetized
turbulence in astrophysical plasmas and the saturation of the fluctuation
dynamo.
|
In medical imaging, the characteristics derived purely from a disease should
reflect the extent to which abnormal findings deviate from normal features.
Indeed, physicians often need corresponding images without abnormal findings of
interest or, conversely, images that contain similar abnormal findings
regardless of normal anatomical context. This is called comparative diagnostic
reading of medical images, which is essential for a correct diagnosis. To
support comparative diagnostic reading, content-based image retrieval (CBIR),
which can selectively utilize normal and abnormal features in medical images as
two separable semantic components, will be useful. Therefore, we propose a
neural network architecture to decompose the semantic components of medical
images into two latent codes: normal anatomy code and abnormal anatomy code.
The normal anatomy code represents the normal anatomy that would have existed if
the sample were healthy, whereas the abnormal anatomy code captures the abnormal
changes that reflect deviation from the normal baseline. These latent codes are
discretized through vector quantization to enable binary hashing, which can
reduce the computational burden at the time of similarity search. By
calculating the similarity based on either normal or abnormal anatomy codes or
the combination of the two codes, our algorithm can retrieve images according
to the selected semantic component from a dataset consisting of brain magnetic
resonance images of gliomas. Our CBIR system achieves remarkable results both
qualitatively and quantitatively.
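
To make the retrieval step concrete, here is a hypothetical sketch (array names,
shapes, and the plain linear scan are our assumptions, not the paper's code) of
Hamming-distance search over the binary normal/abnormal codes, with the semantic
component selectable per query:

```python
# Hypothetical sketch of the retrieval step: similarity search over binary
# hash codes via Hamming distance, using the normal code, the abnormal code,
# or their concatenation.
import numpy as np

def retrieve(query_normal, query_abnormal, db_normal, db_abnormal,
             component="abnormal", top_k=5):
    """Return indices of the top_k database entries closest in Hamming distance.

    query_*: binary vectors of shape (d,); db_*: binary matrices of shape (N, d).
    """
    if component == "normal":
        q, db = query_normal, db_normal
    elif component == "abnormal":
        q, db = query_abnormal, db_abnormal
    else:  # combine both semantic components
        q = np.concatenate([query_normal, query_abnormal])
        db = np.concatenate([db_normal, db_abnormal], axis=1)
    dists = np.count_nonzero(db != q, axis=1)  # Hamming distances
    return np.argsort(dists)[:top_k]
```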
|
The precise subtype classification of myeloproliferative neoplasms (MPNs)
based on multimodal information, which assists clinicians in diagnosis and
long-term treatment plans, is of great clinical significance. However, it
remains a challenging task due to the lack of diagnostic
representativeness of local patches and the absence of diagnostically relevant
features from a single modality. In this paper, we propose a Dynamic Screening
and Clinical-Enhanced Network (DSCENet) for the subtype classification of MPNs
on the multimodal fusion of whole slide images (WSIs) and clinical information.
(1) A dynamic screening module is proposed to flexibly adapt the feature
learning of local patches, reducing the interference of irrelevant features and
enhancing their diagnostic representativeness. (2) A clinical-enhanced fusion
module is proposed to integrate clinical indicators to explore complementary
features across modalities, providing comprehensive diagnostic information. Our
approach has been validated on real clinical data, achieving an increase of
7.91% in AUC and 16.89% in accuracy compared with the previous state-of-the-art
(SOTA) methods. The code is available at https://github.com/yuanzhang7/DSCENet.
|
Non-free data types are data types whose data have no canonical forms. For
example, multisets are non-free data types because the multiset $\{a,b,b\}$ has
two other equivalent but literally different forms $\{b,a,b\}$ and $\{b,b,a\}$.
Pattern matching is known to provide a handy tool set to treat such data types.
Although many studies on pattern matching, along with implementations for
practical programming languages, have been proposed so far, we observe that none
of these
studies satisfy all the criteria of practical pattern matching, which are as
follows: i) efficiency of the backtracking algorithm for non-linear patterns,
ii) extensibility of matching process, and iii) polymorphism in patterns.
This paper aims to design a new pattern-matching-oriented programming
language that satisfies all the above three criteria. The proposed language
features clean Scheme-like syntax and efficient and extensible pattern matching
semantics. This programming language is especially useful for the processing of
complex non-free data types that not only include multisets and sets but also
graphs and symbolic mathematical expressions. We discuss the importance of our
criteria of practical pattern matching and how our language design naturally
arises from the criteria. The proposed language has already been implemented
and open-sourced as the Egison programming language.
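
For readers unfamiliar with non-linear patterns, the following toy matcher
(written in Python for illustration; Egison's own syntax is Scheme-like) shows
the backtracking that criterion (i) refers to: the pattern binds the variable x
twice, so the matcher must search over element choices until both occurrences
agree.

```python
# Illustration (in Python, not Egison's own syntax) of non-linear pattern
# matching on a multiset: match the pattern (x, x, y) -- the same variable x
# twice -- against {a, b, b} by backtracking over element choices.
from itertools import permutations

def match_pair_pattern(multiset):
    """Find bindings {x, y} such that multiset = {x, x, y} as a multiset."""
    for x0, x1, y in permutations(multiset, 3):
        if x0 == x1:               # non-linear constraint: both x's must agree
            return {"x": x0, "y": y}
    return None

print(match_pair_pattern(["a", "b", "b"]))  # {'x': 'b', 'y': 'a'}
print(match_pair_pattern(["a", "b", "c"]))  # None
```

A real implementation prunes this search rather than enumerating naively, which
is exactly the efficiency concern of criterion (i).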
|
We present the confirmation of two new planets transiting the nearby mid-M
dwarf LTT 3780 (TIC 36724087, TOI-732, $V=13.07$, $K_s=8.204$, $R_s$=0.374
R$_{\odot}$, $M_s$=0.401 M$_{\odot}$, d=22 pc). The two planet candidates are
identified in a single TESS sector and are validated with reconnaissance
spectroscopy, ground-based photometric follow-up, and high-resolution imaging.
With measured orbital periods of $P_b=0.77$ days, $P_c=12.25$ days and sizes
$r_{p,b}=1.33\pm 0.07$ R$_{\oplus}$, $r_{p,c}=2.30\pm 0.16$ R$_{\oplus}$, the
two planets span the radius valley in period-radius space around low-mass stars,
thus making the system a laboratory to test competing theories of the emergence
of the radius valley in that stellar mass regime. By combining 63 precise
radial-velocity measurements from HARPS and HARPS-N, we measure planet masses
of $m_{p,b}=2.62^{+0.48}_{-0.46}$ M$_{\oplus}$ and $m_{p,c}=8.6^{+1.6}_{-1.3}$
M$_{\oplus}$, which indicates that LTT 3780b has a bulk composition consistent
with being Earth-like, while LTT 3780c likely hosts an extended H/He envelope.
We show that the recovered planetary masses are consistent with predictions
from both photoevaporation and from core-powered mass loss models. The
brightness and small size of LTT 3780, along with the measured planetary
parameters, render LTT 3780b and c as accessible targets for atmospheric
characterization of planets within the same planetary system and spanning the
radius valley.
|
We provide a simple parametrization for the group G2, which is analogous to
the Euler parametrization for SU(2). We show how to obtain the general element
of the group in a form emphasizing the structure of the fibration of G2 with
fiber SO(4) and base H, the variety of quaternionic subalgebras of octonions.
In particular this allows us to obtain a simple expression for the Haar measure
on G2. Moreover, as a by-product it yields a concrete realization and an
Einstein metric for H.
|
This paper (concerning the infinite-mass boundary condition) has been
withdrawn by the author. Another, independent study regarding the zigzag
boundary condition has appeared in Phys. Rev. B 82, 125419 (2010).
|
We report the detection of five Jovian mass planets orbiting high metallicity
stars. Four of these stars were first observed as part of the N2K program and
exhibited low RMS velocity scatter after three consecutive observations.
However, follow-up observations over the last three years now reveal the
presence of longer period planets with orbital periods ranging from 21 days to
a few years. HD 11506 is a G0V star with a planet of $M\sin i = 4.74\,M_{\rm Jup}$
in a 3.85 year orbit. HD 17156 is a G0V star with a $3.12\,M_{\rm Jup}$ planet in
a 21.2 day orbit. The eccentricity of this orbit is 0.67, one of the highest known
for a planet with a relatively short period. The orbital period of this planet
places it in a region of parameter space where relatively few planets have been
detected. HD 125612 is a G3V star with a planet of $M\sin i = 3.5\,M_{\rm Jup}$ in
a 1.4 year orbit. HD 170469 is a G5IV star with a planet of $M\sin i =
0.67\,M_{\rm Jup}$ in a 3.13 year orbit. HD 231701 is an F8V star with a planet
of $1.08\,M_{\rm Jup}$ in a 142
day orbit. All of these stars have supersolar metallicity. Three of the five
stars were observed photometrically but showed no evidence of brightness
variability. A transit search conducted for HD 17156 was negative but covered
only 25% of the search space and so is not conclusive.
|
We show that there are two different ways of calculating the average electric
field of a superconducting cable in conduit conductor depending on the relation
between the current transfer length and the characteristic self-field length.
|
In statistical mechanics Gibbs' paradox is avoided if the particles of a gas
are assumed to be indistinguishable. The resulting entropy then agrees with the
empirically tested thermodynamic entropy up to a term proportional to the
logarithm of the particle number. We discuss here how analogous situations
arise in the statistical foundation of black-hole entropy. Depending on the
underlying approach to quantum gravity, the fundamental objects to be counted
have to be assumed indistinguishable or not in order to arrive at the
Bekenstein--Hawking entropy. We also show that the logarithmic corrections to
this entropy, including their signs, can be understood along the lines of
standard statistical mechanics. We illustrate the general concepts within the
area quantization model of Bekenstein and Mukhanov.
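
A standard piece of bookkeeping, added here for orientation (textbook material
rather than a result of the paper): by Stirling's formula,
$$\ln N! = N\ln N - N + \tfrac{1}{2}\ln(2\pi N) + O(1/N),$$
so dividing the distinguishable-particle partition function by $N!$ subtracts
$k(N\ln N - N)$ from the entropy, cancelling the extensive overcounting, while a
residual term $\tfrac{k}{2}\ln(2\pi N)$, proportional to the logarithm of the
particle number, survives. The logarithmic corrections to the Bekenstein--Hawking
entropy discussed above arise in an analogous way.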
|
Supernova (SN) explosions are a major feedback mechanism regulating star
formation in galaxies through their momentum input. We review the observations
of SNRs in radiative stages in the Milky Way to validate the theoretical
results on the momentum/energy injection from a single SN explosion. For seven
SNRs where we can observe fast-expanding, atomic radiative shells, we show that
the shell momentum inferred from HI 21 cm line observations is in the range of
(0.5--4.5)$\times 10^5$ $M_\odot$ km s$^{-1}$. In two SNRs (W44 and IC 443),
shocked molecular gas with momentum comparable to that of the atomic SNR shells
has been also observed. We compare the momentum and kinetic/thermal energy of
these seven SNRs with the results from 1D and 3D numerical simulations. The
observation-based momentum and kinetic energy agree well with the expected
momentum/energy input from an SN explosion of $\sim 10^{51}$ erg. It is much
more difficult to use data/model comparisons of thermal energy to constrain the
initial explosion energy, however, due to rapid cooling and complex physics at
the hot/cool interface in radiative SNRs. We discuss the observational and
theoretical uncertainties of these global parameters and explosion energy
estimates for SNRs in complex environments.
|
We study the equivalence relation on the set of triangles generated by similarity
and the operation of producing a new triangle by joining the division points of
the three edges with the same ratio. Using the moduli space of similarity classes
of triangles introduced by Nakamura and Oguiso, we give a characterization of
equivalent triangles in terms of circles of Apollonius (or hyperbolic pencils of
circles) and properties of special equivalent triangles. We also study the
rationality and constructibility problems.
|
The imaging performance of tomographic deconvolution phase microscopy can be
described in terms of the phase optical transfer function (POTF) which, in
turn, depends on the illumination profile. To facilitate the optimization of
the illumination profile, an analytical calculation method based on polynomial
fitting is developed to describe the POTF for general non-uniform
axially-symmetric illumination. This is then applied to Gaussian and related
profiles. Compared to numerical integration methods that integrate over a
series of annuli, the present analytical method is much faster and is equally
accurate. Further, a balanced distribution criterion for the POTF and a
least-squares minimization are presented to optimize the uniformity of the
POTF. An optimum general profile is found analytically by relaxed optimal
search and an optimum Gaussian profile is found through a tree search.
Numerical simulations confirm the performance of these optimum profiles and
support the balanced distribution criterion introduced.
|
We consider capillary wave turbulence at scales larger than the forcing one.
At such scales, our measurements show that the surface wave dynamics is that
of a thermal equilibrium state in which the effective temperature is
related to the injected power. We characterize this evolution with a scaling
law and report the statistical properties of the large-scale surface elevation
depending on this effective temperature.
|
We analyse the most general bosonic supersymmetric solutions of type IIB
supergravity whose metrics are warped products of five-dimensional anti-de
Sitter space AdS_5 with a five-dimensional Riemannian manifold M_5. All fluxes
are allowed to be non-vanishing consistent with SO(4,2) symmetry. We show that
the necessary and sufficient conditions can be phrased in terms of a local
identity structure on M_5. For a special class, with constant dilaton and
vanishing axion, we reduce the problem to solving a second order non-linear
ODE. We find an exact solution of the ODE which reproduces a solution first
found by Pilch and Warner. A numerical analysis of the ODE reveals an
additional class of local solutions.
|
The simulation of quantum correlations with alternative nonlocal resources,
such as classical communication, gives a natural way to quantify their
nonlocality. While multipartite nonlocal correlations appear to be useful
resources, very little is known on how to simulate multipartite quantum
correlations. We present the first known protocol that reproduces 3-partite GHZ
correlations with bounded communication: 3 bits in total turn out to be
sufficient to simulate all equatorial von Neumann measurements on the 3-partite
GHZ state.
|
We studied the statics and dynamics of elastic manifolds in disordered media
with long-range correlated disorder using functional renormalization group
(FRG). We identified different universality classes and computed the critical
exponents and universal amplitudes describing geometric and velocity-force
characteristics. In contrast to uncorrelated disorder, the statistical tilt
symmetry is broken resulting in a nontrivial response to a transverse tilting
force. For instance, the vortex lattice in disordered superconductors shows a
new glass phase whose properties interpolate between those of the Bragg and
Bose glasses formed by pointlike and columnar disorder, respectively. Whereas
there is no response in the Bose glass phase (transverse Meissner effect), the
standard linear response expected in the Bragg-glass gets modified to a power
law response in the presence of disorder correlations. We also studied the long
distance properties of the $O(N)$ spin system with random fields and random
anisotropies correlated as $1/x^{d-\sigma}$. Using FRG we obtained the phase
diagram in $(d,\sigma,N)$-parameter space and computed the corresponding critical
exponents. We found that below the lower critical dimension 4+sigma, there can
exist two different types of quasi-long-range-order with zero order-parameter
but infinite correlation length.
|
We summarize our results concerning the spectrum and mass anomalous dimension
of SU(2) gauge theories with various numbers of fermions in the adjoint
representation, where each Majorana fermion corresponds effectively to half a
Dirac flavour $N_f$. The most relevant examples for extensions of the standard
model are supersymmetric Yang-Mills theory ($N_f=1/2$) and Minimal Walking
Technicolour ($N_f=2$). In addition to these theories we will also consider the
cases of $N_f=1$ and $N_f=3/2$. The results comprise the particle spectrum of
glueballs, triplet and singlet mesons, and possible fractionally charged
spin-half particles. In addition we will discuss our recent results for the mass
anomalous dimension.
|
In recommendation systems, practitioners have observed that increasing the
number and size of embedding tables often leads to significant improvements in
model performance. Given this and the business importance of these models to
major internet companies, embedding tables for personalization tasks have grown
to terabyte scale and continue to grow at a significant rate. Meanwhile, these
large-scale models are often trained with GPUs where high-performance memory is
a scarce resource, thus motivating numerous work on embedding table compression
during training. We propose a novel change to embedding tables using a cache
memory architecture, where the majority of rows in an embedding table are trained
in low precision, while the most frequently or recently accessed rows are cached
and trained in full precision. The proposed architectural change works in
conjunction with standard precision reduction and computer arithmetic
techniques such as quantization and stochastic rounding. For an open source
deep learning recommendation model (DLRM) running with Criteo-Kaggle dataset,
we achieve 3x memory reduction with INT8 precision embedding tables and
full-precision cache whose size is 5% of that of the embedding tables, while
maintaining accuracy. For an industrial-scale model and dataset, we achieve an
even higher (>7x) memory reduction with INT4 precision and a cache sized at 1% of
the embedding tables, while maintaining accuracy, and a 16% end-to-end training
speedup by reducing GPU-to-host data transfers.
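
A conceptual sketch of the cache idea (ours, not the paper's code; the class
name, the promotion policy, and plain rounding in place of the stochastic
rounding mentioned above are simplifying assumptions):

```python
# Conceptual sketch: all rows live in INT8 with per-row scales; hot rows are
# promoted to an FP32 cache and trained there in full precision.
import numpy as np

class CachedEmbedding:
    def __init__(self, num_rows, dim, cache_frac=0.05):
        self.scales = np.full(num_rows, 0.01, dtype=np.float32)
        self.q_rows = np.zeros((num_rows, dim), dtype=np.int8)  # low-precision store
        self.cache = {}                                         # row_id -> fp32 row
        self.cache_cap = max(1, int(cache_frac * num_rows))
        self.hits = np.zeros(num_rows, dtype=np.int64)          # frequency counter

    def lookup(self, row_id):
        self.hits[row_id] += 1
        if row_id in self.cache:                 # hot row: full precision
            return self.cache[row_id]
        row = self.q_rows[row_id].astype(np.float32) * self.scales[row_id]
        if len(self.cache) < self.cache_cap:     # promote frequently seen rows
            self.cache[row_id] = row
        return row

    def update(self, row_id, grad, lr=0.01):
        row = self.lookup(row_id) - lr * grad
        if row_id in self.cache:
            self.cache[row_id] = row             # cached rows stay in fp32
        else:                                    # others are re-quantized (the
            s = self.scales[row_id]              # paper uses stochastic rounding;
            self.q_rows[row_id] = np.clip(       # plain rounding here for brevity)
                np.round(row / s), -128, 127).astype(np.int8)
```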
|
Reducing the complexity of higher order problems can enable solving them in
analytical ways. In this paper, we propose an analytic whole body motion
generator for humanoid robots. Our approach targets inexpensive platforms that
possess position controlled joints and have limited feedback capabilities. By
analysing the mass distribution in a humanoid-like body, we find relations
between limb movements and their respective CoM positions. A full pose of a
humanoid robot is then described with five point-masses, with one attached to
the trunk and the remaining four assigned to each limb. The weighted sum of
these masses in combination with a contact point form an inverted pendulum. We
then generate statically stable poses by specifying a desired upright pendulum
orientation, and any desired trunk orientation. Limb and trunk placement
strategies are utilised to meet the reference CoM position. A set of these
poses is interpolated to achieve stable whole body motions. The approach is
evaluated by performing several motions with an igus Humanoid Open Platform
robot. We demonstrate the extendability of the approach by applying basic
feedback mechanisms for disturbance rejection and tracking error minimisation.
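
The five point-mass abstraction reduces the CoM computation to a weighted
average; a minimal sketch (our illustration, with made-up masses and positions)
of the combined CoM and the resulting pendulum axis through the contact point:

```python
# Combined CoM of the five point masses (trunk plus four limbs); together with
# the support contact point it defines the inverted-pendulum orientation.
import numpy as np

def combined_com(masses, positions):
    """masses: (5,) array; positions: (5, 3) array of point-mass locations."""
    m = np.asarray(masses, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (m[:, None] * p).sum(axis=0) / m.sum()

# Illustrative numbers, not the robot's true mass distribution.
masses = [3.0, 0.8, 0.8, 1.2, 1.2]
positions = [[0, 0, 0.45], [0.15, 0.2, 0.35], [0.15, -0.2, 0.35],
             [0.05, 0.08, 0.15], [0.05, -0.08, 0.15]]
com = combined_com(masses, positions)
contact = np.array([0.0, 0.0, 0.0])                      # support point
axis = (com - contact) / np.linalg.norm(com - contact)   # pendulum orientation
print(com, axis)
```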
|
The excursion set approach provides a framework for predicting how the
abundance of dark matter halos depends on the initial conditions. A key
ingredient of this formalism comes from the physics of halo formation: the
specification of a critical overdensity threshold (barrier) which protohalos
must exceed if they are to form bound virialized halos at a later time. Another
ingredient is statistical, as it requires the specification of the appropriate
statistical ensemble over which to average when making predictions. The
excursion set approach explicitly averages over all initial positions, thus
implicitly assuming that the appropriate ensemble is that associated with
randomly chosen positions in space, rather than special positions such as peaks
of the initial density field. Since halos are known to collapse around special
positions, it is not clear that the physical and statistical assumptions which
underlie the excursion set approach are self-consistent. We argue that they are
at least for low mass halos, and illustrate by comparing our excursion set
predictions with numerical data from the DEUS simulations.
|
After discussing the key idea underlying the Maxwell's Demon ensemble, we
employ this idea for calculating fluctuations of ideal Bose gas condensates in
traps with power-law single-particle energy spectra. Two essentially different
cases have to be distinguished. If the heat capacity remains continuous at the
condensation point in the large-$N$ limit, the fluctuations of the number of
condensate particles vanish linearly with temperature, independent of the trap
characteristics. If the heat capacity becomes discontinuous, the fluctuations
vanish algebraically with temperature, with an exponent determined by the trap.
Our results are based on an integral representation that yields the solution to
both the canonical and the microcanonical fluctuation problem in a singularly
transparent manner.
|
We consider non-perturbative six- and four-dimensional N=1 space-time
supersymmetric orientifolds. Some states in such compactifications arise in
``twisted'' open string sectors which lack world-sheet description in terms of
D-branes. Using Type I-heterotic duality we are able to obtain the massless
spectra for some such orientifolds. The four-dimensional compactification we
discuss in this context is an example of a chiral N=1 supersymmetric string
vacuum which is non-perturbative in both orientifold and heterotic pictures. In
particular, it contains both D9- and D5-branes plus non-perturbative
``twisted'' open string sector states as well.
|
We consider exact and quasi-exact solvability of the one-dimensional
Fokker-Planck equation based on the connection between the Fokker-Planck
equation and the Schr\"odinger equation. A unified consideration of these two
types of solvability is given from the viewpoint of prepotential together with
Bethe ansatz equations. Quasi-exactly solvable Fokker-Planck equations related
to the $sl(2)$-based systems in Turbiner's classification are listed. We also
present one $sl(2)$-based example which is not listed in Turbiner's scheme.
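
For orientation, we recall the standard change of variables behind this
connection (textbook material, phrased with the prepotential $W$). The
Fokker-Planck equation
$$\partial_t P = \partial_x\bigl(\partial_x P + W'(x)\,P\bigr)$$
is mapped by the substitution $P(x,t) = e^{-W(x)/2}\,\phi(x,t)$ to the
imaginary-time Schr\"odinger equation $\partial_t \phi = -H\phi$ with
$$H = -\partial_x^2 + \frac{W'(x)^2}{4} - \frac{W''(x)}{2},$$
so exact or quasi-exact solvability of $H$ carries over to the Fokker-Planck
equation.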
|
We consider an inhomogeneous Erd\H{o}s-R\'enyi random graph $G_N$ with vertex
set $[N] = \{1,\dots,N\}$ for which the pair of vertices $i,j \in [N]$, $i\neq
j$, is connected by an edge with probability $r_N(\tfrac{i}{N},\tfrac{j}{N})$,
independently of other pairs of vertices. Here, $r_N\colon\,[0,1]^2 \to (0,1)$
is a symmetric function that plays the role of a reference graphon. Let
$\lambda_N$ be the maximal eigenvalue of the Laplacian matrix of $G_N$. We show
that if $\lim_{N\to\infty} \|r_N-r\|_\infty = 0$ for some limiting graphon
$r\colon\,[0,1]^2 \to (0,1)$, then $\lambda_N/N$ satisfies a downward LDP with
rate $\binom{N}{2}$ and an upward LDP with rate $N$. We identify the associated
rate functions $\psi_r$ and $\widehat{\psi}_r$, and derive their basic
properties.
|
We study Higgs boson production in exclusive jet bins at possible future 33
and 100 TeV proton-proton colliders. We compare the cross sections obtained
using fixed-order perturbation theory with those obtained by also resumming the
large logarithms induced by the jet binning in the gluon-fusion and associated
production channels. The central values obtained by the best-available
fixed-order predictions differ by $10-20\%$ from those obtained after including
resummation over the majority of phase-space regions considered. Additionally,
including the resummation dramatically reduces the residual scale variation in
these regions, often by a factor of two or more. We further show that in
several new kinematic regimes that can be explored at these high-energy
machines, the inclusion of resummation improvement is mandatory.
|
We investigate the origin of ``quantum superarrivals'' in the reflection and
transmission probabilities of a Gaussian wave packet for a rectangular
potential barrier while it is perturbed by either reducing or increasing its
height. There exists a finite time interval during which the probability of
reflection is {\it larger} (superarrivals) while the barrier is {\it lowered}
compared to the unperturbed case. Similarly, during a certain interval of time,
the probability of transmission while the barrier is {\it raised} {\it exceeds}
that for free propagation. We compute {\it particle trajectories} using the
Bohmian model of quantum mechanics in order to understand {\it how} this
phenomenon of superarrivals occurs.
|
We show the effects of supersymmetric higher derivative terms on inflation
models in supergravity. The results show that such terms generically modify the
effective kinetic coefficient of the inflaton during inflation if the cut off
scale of the higher derivative operators is sufficiently small. In such a case,
the $\eta$-problem in supergravity does not occur, and we find that the
effective potential of the inflaton generically becomes a power type potential
with a power smaller than two.
|
This paper explores the caching of fractions of a video content, rather than
caching an entire content, to increase the expected video quality. We first show
that it is better to cache the highest-quality content, and we propose a caching
policy for video chunks of different qualities. Our caching policy exploits
characteristics of video content: video files can be encoded into
multiple versions with different qualities, each file consists of many chunks,
and chunks can have different qualities. Extensive performance evaluations
show that caching fractions of content, rather than an entire
content, can improve the expected video quality, especially when the channel
condition is good enough to cooperate with a nearby BS or helpers.
|
In a capital adequacy framework, risk measures are used to determine the
minimal amount of capital that a financial institution has to raise and invest
in a portfolio of pre-specified eligible assets in order to pass a given
capital adequacy test. From a capital efficiency perspective, it is important
to identify the set of portfolios of eligible assets that allow to pass the
test by raising the least amount of capital. We study the existence and
uniqueness of such optimal portfolios as well as their sensitivity to changes
in the underlying capital position. This naturally leads to investigating the
continuity properties of the set-valued map associating to each capital
position the corresponding set of optimal portfolios. We pay special attention
to lower semicontinuity, which is the key continuity property from a financial
perspective. This "stability" property is always satisfied if the test is based
on a polyhedral risk measure but it generally fails once we depart from
polyhedrality even when the reference risk measure is convex. However, lower
semicontinuity can often be achieved if one is willing to focus on
portfolios that are close to being optimal. Besides capital adequacy, our
results have a variety of natural applications to pricing, hedging, and capital
allocation problems.
|
The existence of even the simplest magnetized wormholes may lead to
observable consequences. In the case where both the wormhole and the magnetic
field around its mouths are static and spherically symmetric, and gas in the
region near the wormhole falls radially into it, the resulting spectrum contains
bright cyclotron or synchrotron lines due to the interaction of charged plasma
particles with the magnetic field. At the same time, due to spherical symmetry,
the radiation is non-polarized. Emission of this exotic type
(non-thermal but non-polarized) may be a wormhole signature. Also, in this
scenario, the formation of an accretion disk is still quite possible at some
distance from the wormhole, but a monopole magnetic field could complicate this
process and lead to the emergence of asymmetrical and one-sided relativistic
jets.
|
Due to the inherent robustness of segmentation models, traditional
norm-bounded attack methods have limited effect on this type of model. In this
paper, we focus on generating unrestricted adversarial examples for semantic
segmentation models. We demonstrate a simple and effective method to generate
unrestricted adversarial examples using conditional generative adversarial
networks (CGAN) without any hand-crafted metric. The na\"ive implementation of
CGAN, however, yields inferior image quality and low attack success rate.
Instead, we leverage the SPADE (Spatially-adaptive denormalization) structure
with an additional loss item to generate effective adversarial attacks in a
single step. We validate our approach on the popular Cityscapes and ADE20K
datasets, and demonstrate that our synthetic adversarial examples are not only
realistic, but also improve the attack success rate by up to 41.0\% compared
with state-of-the-art adversarial attack methods, including PGD.
|
Localization systems intended for home use by people with mild cognitive
impairment should comply with specific requirements. They should provide the
users with sub-meter accuracy, allowing for analysis of the patient's movement
trajectory, and be energy efficient, so the devices do not need frequent
charging. Such requirements can be satisfied by employing a hybrid
positioning system combining accurate UWB with energy-efficient Bluetooth Low
Energy (BLE) technology. In this paper, such a solution is presented and
experimentally verified. In the proposed system, the user's location is derived
using BLE-based fingerprinting. The radio map utilized by the algorithm is
created automatically during system operation with the support of the UWB
subsystem. This approach allows the users to repeat system calibration as
often as needed, which raises the system's resistance to environmental changes.
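
A minimal sketch of the fingerprinting step (ours, not the system's
implementation): each radio-map entry pairs a UWB-derived position with the BLE
RSSI vector observed there, and localization is weighted k-nearest-neighbours in
signal space.

```python
# BLE fingerprinting with a UWB-built radio map: k nearest fingerprints in RSSI
# space, combined by inverse-distance weighting.
import numpy as np

def locate(rssi, map_rssi, map_pos, k=3):
    """rssi: (B,) current BLE RSSI vector over B beacons;
    map_rssi: (M, B) stored fingerprints; map_pos: (M, 2) UWB positions."""
    d = np.linalg.norm(map_rssi - rssi, axis=1)   # distance in signal space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                     # inverse-distance weighting
    return (w[:, None] * map_pos[idx]).sum(axis=0) / w.sum()
```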
|
Nowadays, more and more clinical trials choose combinational agents as the
intervention to achieve better therapeutic responses. However, dose-finding for
combinational agents is much more complicated than for a single agent, as the
full ordering of combination dose toxicities is unknown. Therefore, regular
phase I designs are not able to identify the maximum tolerated dose (MTD) of
combinational agents. Motivated by such needs, many novel phase I clinical
trial designs for combinational agents have been proposed. With so many available
designs, research that compares their performance, explores the impact of design
parameters, and provides recommendations is very limited. Therefore, we conducted
a simulation study to evaluate multiple phase I designs proposed to identify a
single MTD for combinational agents under various scenarios. We also explored the
influence of different design parameters. In the end, we summarize the pros
and cons of each design and provide a general guideline for design selection.
|
We consider two operator space versions of type and cotype, namely
$S_p$-type, $S_q$-cotype and type $(p,H)$, cotype $(q,H)$ for a homogeneous
Hilbertian operator space $H$ and $1\leq p \leq 2 \leq q\leq \infty$,
generalizing "$OH$-cotype 2" of G. Pisier. We compute type and cotype of some
Hilbertian operator spaces and $L_p$ spaces, and we investigate the
relationship between a homogeneous Hilbertian space $H$ and operator spaces
with cotype $(2,H)$. As applications we consider operator space versions of
generalized little Grothendieck's theorem and Maurey's extension theorem in
terms of these new notions.
|
A new closed-form inflationary solution is given for a hyperbolic interaction
potential. The method used to arrive at this solution is outlined as it appears
possible to generate additional sets of equations which satisfy the model. In
addition a new form of decaying cosmological constant is presented.
|
It has been well established in the past decades that the central black hole
masses of galaxies correlate with dynamical properties of their harbouring
bulges. This notion begs the question of whether there are causal connections
between the AGN and its immediate vicinity in the host galaxy. In this paper we
analyse the presence of circumnuclear star formation in a sample of 15 AGN
using mid-infrared observations. The data consist of a set of 11.3 $\mu$m PAH
emission and reference continuum images, taken with ground-based telescopes,
with sub-arcsecond resolution. By comparing our star formation estimates with
AGN accretion rates, derived from X-ray luminosities, we investigate the
validity of theoretical predictions for the AGN-starburst connection. Our main
results are: i) circumnuclear star formation is found, at distances as low as
tens of parsecs from the nucleus, in nearly half of our sample (7/15); ii) star
formation luminosities are correlated with the bolometric luminosity of the AGN
($L_{AGN}$) only for objects with $L_{AGN} \ge 10^{42}\,\,{\rm erg\,\,s^{-1}}$;
iii) low luminosity AGNs ($L_{AGN} < 10^{42}\,\,{\rm erg\,\,s^{-1}}$) seem to
have starburst luminosities far greater than their bolometric luminosities.
|
In the I=0 sector there are more scalar mesons than can fit in one $q{\bar
q}$ nonet. Consequently, many have claimed that there is in fact more than one
multiplet, perhaps both $q{\bar q}$ and $qq{\bar {qq}}$. Such proposals require
the existence of at least two strange isodoublets (and their antiparticles).
The current PDG Tables list just one state, the $K^*_0(1430)$, while fits to
data with Breit-Wigner forms and variable backgrounds can accommodate a
$\kappa(900)$ too. Whether a state exists in the spectrum of hadrons is not a
matter of ability to fit data along the real energy axis, but is completely
specified by the number of poles in the complex energy plane. Here we perform a
model-independent analytic continuation of $\pi K$ scattering results between
825 MeV and 2 GeV to determine the number and position of resonance poles. We
find that there {\bf is} a $K^*_0(1430)$, but {\bf no} $\kappa(900)$. The LASS
data cannot rule out the possibility of a very low mass $\kappa$ well below 825
MeV.
|
The hand-eye calibration problem is an important application problem in robot
research. Based on the 2-norm of dual quaternion vectors, we propose a new dual
quaternion optimization method for the hand-eye calibration problem. The dual
quaternion optimization problem is decomposed into two quaternion optimization
subproblems. The first quaternion optimization subproblem governs the rotation
of the robot hand. It can be solved efficiently by the eigenvalue decomposition
or singular value decomposition. If the optimal value of the first quaternion
optimization subproblem is zero, then the system is rotationwise noiseless,
i.e., there exists a ``perfect'' robot hand motion which meets all the testing
poses rotationwise exactly. In this case, we apply the regularization technique
for solving the second subproblem to minimize the distance of the translation.
Otherwise we apply the patching technique to solve the second quaternion
optimization subproblem. Then solving the second quaternion optimization
subproblem turns out to be solving a quadratically constrained quadratic
program. In this way, we give a complete description for the solution set of
hand-eye calibration problems. This is new in the hand-eye calibration
literature. The numerical results are also presented to show the efficiency of
the proposed method.
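
The rotation subproblem admits a classical linear-algebra formulation, sketched
below in our own notation (the paper's exact formulation may differ): each pair
of hand/eye rotations $(q_a, q_b)$ with $q_a \otimes q_x = q_x \otimes q_b$
yields a linear constraint on the unit quaternion $q_x$, and the stacked system
is solved by SVD, matching the eigenvalue/singular value decomposition step
described above.

```python
# Rotation subproblem for hand-eye AX = XB: each measurement pair (q_a, q_b)
# gives (L(q_a) - R(q_b)) q_x = 0, where L and R are the left/right quaternion
# multiplication matrices; the stacked system is solved by SVD.
import numpy as np

def L(q):  # q = (w, x, y, z); L(q) p computes q * p
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def R(q):  # R(q) p computes p * q
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_rotation(qa_list, qb_list):
    """Least-squares unit quaternion q_x from paired hand/eye rotations."""
    M = np.vstack([L(qa) - R(qb) for qa, qb in zip(qa_list, qb_list)])
    _, s, Vt = np.linalg.svd(M)
    return Vt[-1], s[-1]   # s[-1] == 0 means the system is rotationwise noiseless
```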
|
The Einstein-Rosen "bridge" wormhole solution proposed in the classic paper
[1] does not satisfy the vacuum Einstein equations at the wormhole throat. We
show that the fully consistent formulation of the original Einstein-Rosen
"bridge" requires solving Einstein equations of bulk D=4 gravity coupled to a
lightlike brane with a well-defined world-volume action. The non-vanishing
contribution of Einstein-Rosen "bridge" solution to the right hand side of
Einstein equations at the throat matches precisely the surface stress-energy
tensor of the lightlike brane which automatically occupies the throat ("horizon
straddling") - a feature triggered by the world-volume lightlike brane
dynamics.
|
This article investigates residual a posteriori error estimates and adaptive
mesh refinements for time-dependent boundary element methods for the wave
equation. We obtain reliable estimates for Dirichlet and acoustic boundary
conditions which hold for a large class of discretizations. Efficiency of the
error estimate is shown for a natural discretization of low order. Numerical
examples confirm the theoretical results. The resulting adaptive mesh
refinement procedures in 3d recover the adaptive convergence rates known for
elliptic problems.
|
Citizen science projects in which volunteers collect data are increasingly
popular due to their ability to engage the public with scientific questions.
The scientific value of these data is, however, hampered by several biases. In
this paper, we deal with geospatial sampling bias by enriching the
volunteer-collected data with geographical covariates, and then using
regression-based models to correct for bias. We show that night sky brightness
estimates change substantially after correction, and that the corrected
inferences better represent an external satellite-derived measure of skyglow.
We conclude that geospatial bias correction can greatly increase the scientific
value of citizen science projects.
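
A schematic of the correction (our toy example, not the paper's model or data):
fit the volunteer measurements on geographical covariates, then average
predictions over covariates that cover the region uniformly, so oversampled
locations no longer dominate the estimate.

```python
# Toy regression-based bias correction: volunteers oversample high-urbanization
# sites (beta-distributed covariate), which biases the naive mean; predicting
# over a uniform grid of covariates removes the geospatial sampling bias.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
urban = rng.beta(5, 1, 500)                        # volunteers cluster near cities
X_obs = np.column_stack([urban, rng.uniform(0, 1, 500)])
y_obs = 20 - 3 * urban + rng.normal(0, 0.3, 500)   # toy sky-brightness values

model = LinearRegression().fit(X_obs, y_obs)
X_grid = np.column_stack([rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000)])
print(y_obs.mean())                  # naive mean, biased by where volunteers went
print(model.predict(X_grid).mean())  # corrected estimate, close to the true 18.5
```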
|
A grand challenge in representation learning is to learn the different
explanatory factors of variation behind high-dimensional data. Encoder
models are often trained to optimize performance on training data when the
real objective is to generalize well to unseen data. Although there is ample
numerical evidence suggesting that noise injection (during training) at the
representation level might improve the generalization ability of encoders, an
information-theoretic understanding of this principle remains elusive. This
paper presents a sample-dependent bound on the generalization gap of the
cross-entropy loss that scales with the information complexity (IC) of the
representations, meaning the mutual information between inputs and their
representations. The IC is empirically investigated for standard multi-layer
neural networks with SGD on MNIST and CIFAR-10 datasets; the behaviour of the
gap and the IC appear to be in direct correlation, suggesting that SGD selects
encoders to implicitly minimize the IC. We specialize the IC to study the role
of Dropout on the generalization capacity of deep encoders which is shown to be
directly related to the encoder capacity, being a measure of the
distinguishability among samples from their representations. Our results
support some recent regularization methods.
|
Control of quantum coherence in many-body systems is one of the key issues in
modern condensed matter physics. Conventional wisdom holds that lattice vibration
is an innate source of decoherence, and much research has been conducted to
eliminate lattice effects. Challenging this wisdom, here we show that lattice
vibration may not be a decoherence source but an impetus for a novel coherent
quantum many-body state. We demonstrate this possibility by studying the
transverse-field Ising model on a chain with renormalization group and
density-matrix renormalization group methods, and theoretically discover a
stable $\mathcal{N}=1$ supersymmetric quantum criticality with central charge
$c=3/2$. Thus, we propose an Ising spin chain with strong spin-lattice coupling
as a candidate to observe supersymmetry. Generic precursor conditions of novel
quantum criticality are obtained by generalizing the Larkin-Pikin criterion of
thermal transitions. Our work provides a new perspective that lattice vibration
may be a knob for exotic quantum many-body states.
|
This paper introduces A2C, a multi-stage collaborative decision framework
designed to enable robust decision-making within human-AI teams. Drawing
inspiration from concepts such as rejection learning and learning to defer, A2C
incorporates AI systems trained to recognise uncertainty in their decisions and
defer to human experts when needed. Moreover, A2C caters to scenarios where
even human experts encounter limitations, such as in incident detection and
response in cyber Security Operations Centres (SOC). In such scenarios, A2C
facilitates collaborative explorations, enabling collective resolution of
complex challenges. With support for three distinct decision-making modes in
human-AI teams: Automated, Augmented, and Collaborative, A2C offers a flexible
platform for developing effective strategies for human-AI collaboration. By
harnessing the strengths of both humans and AI, it significantly improves the
efficiency and effectiveness of complex decision-making in dynamic and evolving
environments. To validate A2C's capabilities, we conducted extensive simulation
experiments using benchmark datasets. The results clearly demonstrate that all
three modes of decision-making can be effectively supported by A2C. Most
notably, collaborative exploration by (simulated) human experts and AI achieves
superior performance compared to AI in isolation, underscoring the framework's
potential to enhance decision-making within human-AI teams.
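
A minimal sketch of the deferral logic behind the three modes (ours, not the
framework's API; the thresholds and helper functions are illustrative
placeholders):

```python
# Illustrative deferral logic for the three A2C modes. ask_human and explore
# are hypothetical stand-ins for human elicitation and joint exploration.
def ask_human(ai_probs):
    label = max(ai_probs, key=ai_probs.get)   # placeholder human judgment
    return label, 0.5                          # placeholder confidence

def explore(ai_probs):
    # placeholder for collaborative exploration: return a shortlist to triage
    return sorted(ai_probs, key=ai_probs.get, reverse=True)[:2]

def decide(ai_probs, ai_threshold=0.9, human_threshold=0.6):
    ai_label = max(ai_probs, key=ai_probs.get)
    if ai_probs[ai_label] >= ai_threshold:
        return ("automated", ai_label)           # AI is confident: decide alone
    human_label, human_conf = ask_human(ai_probs)
    if human_conf >= human_threshold:
        return ("augmented", human_label)        # defer to the human expert
    return ("collaborative", explore(ai_probs))  # both uncertain: explore jointly

print(decide({"benign": 0.55, "malicious": 0.45}))
```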
|
In the context of explosion models for Type Ia Supernovae, we present one-
and two-dimensional simulations of fully resolved detonation fronts in
degenerate C+O White Dwarf matter including clumps of previously burned
material. The ability of detonations to survive the passage through sheets of
nuclear ashes is tested as a function of the width and composition of the ash
region. We show that detonation fronts are quenched by microscopically thin
obstacles with little sensitivity to the exact ash composition. Front-tracking
models for detonations in macroscopic explosion simulations need to include
this effect in order to predict the amount of unburned material in delayed
detonation scenarios.
|
We present angular dependent magneto-transport and magnetization measurements
on alpha-(ET)2MHg(SCN)4 compounds at high magnetic fields and low temperatures.
We find that the low temperature ground state undergoes two subsequent
field-induced density-wave type phase transitions above a critical angle of the
magnetic field with respect to the crystallographic axes. This new phase
diagram may be qualitatively described assuming a charge density wave ground
state which undergoes field-induced transitions due to the interplay of Pauli
and orbital effects.
|
For the past two decades, the Arakawa-Kaneko zeta function has been actively
studied. Recently, Kaneko and Tsumura constructed variants of it from the
viewpoint of poly-Bernoulli numbers. In this paper, we generalize their zeta
functions of Arakawa-Kaneko type to those with indices in which positive and
negative integers are mixed. We show that values of these functions at positive
integers can be expressed in terms of the multiple Hurwitz zeta star values.
|
Reasoning over visual data is a desirable capability for robotics and
vision-based applications. Such reasoning enables forecasting of the next
events or actions in videos. In recent years, various models have been
developed based on convolution operations for prediction or forecasting, but
they lack the ability to reason over spatiotemporal data and infer the
relationships of different objects in the scene. In this paper, we present a
framework based on graph convolution to uncover the spatiotemporal
relationships in the scene for reasoning about pedestrian intent. A scene graph
is built on top of segmented object instances within and across video frames.
Pedestrian intent, defined as the future action of crossing or not-crossing the
street, is a very crucial piece of information for autonomous vehicles to
navigate safely and more smoothly. We approach the problem of intent prediction
from two different perspectives and anticipate the intention-to-cross within
both pedestrian-centric and location-centric scenarios. In addition, we
introduce a new dataset designed specifically for autonomous-driving scenarios
in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction
(STIP) dataset. Our experiments on STIP and another benchmark dataset show that
our graph modeling framework is able to predict the intention-to-cross of the
pedestrians with an accuracy of 79.10% on STIP and 79.28% on the Joint
Attention for Autonomous Driving (JAAD) dataset, up to one second earlier than
when the actual crossing happens. These results outperform the baseline and
previous work. Please refer to http://stip.stanford.edu/ for the dataset and
code.
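
As background on the graph-convolution building block (a generic sketch, not the
paper's exact architecture), a single layer propagates each node's features
through the degree-normalized adjacency of the scene graph:

```python
# Generic graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
# mixing each node's features with those of its neighbours (e.g. object
# instances in a scene graph).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3-node scene graph
H = np.random.randn(3, 8)                                     # node features
W = np.random.randn(8, 4)                                     # learned weights
print(gcn_layer(A, H, W).shape)                               # (3, 4)
```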
|
We present two approaches for computing rational approximations to
multivariate functions, motivated by their effectiveness as surrogate models
for high-energy physics (HEP) applications. Our first approach builds on the
Stieltjes process to efficiently and robustly compute the coefficients of the
rational approximation. Our second approach is based on an optimization
formulation that allows us to include structural constraints on the rational
approximation, resulting in a semi-infinite optimization problem that we solve
using an outer approximation approach. We present results for synthetic and
real-life HEP data, and we compare the approximation quality of our approaches
with that of traditional polynomial approximations.
|
Transition paths are rare events occurring when a system, thanks to the
effect of fluctuations, crosses successfully from one stable state to another
by surmounting an energy barrier. Even though they are of great significance in
many mesoscale processes, their direct determination is often challenging due
to their short duration as compared to other relevant time-scales. Here, we
measure the local average velocity along transition paths of a colloidal bead
embedded in a glycerol/water mixture that hops over a barrier separating two
optical potential wells. Owing to the slow dynamics of the bead in this viscous
medium, we can spatially resolve the mean velocity profiles of the transition
paths for distinct potentials, which agree with theoretical predictions of a
model for the motion of a Brownian particle traversing a parabolic barrier.
This allows us to experimentally verify various expressions linking the
behavior of such mean velocities with equilibrium and transition path position
distributions, mean transition-path times and mean escape times from the wells.
We also show that artifacts in the mean velocity profiles arise when reducing
the experimental time resolution, thus highlighting the importance of the
sampling rate in the characterization of the transition path dynamics. Our
results confirm that the mean transition path velocity establishes a fundamental
relationship between mean transition path times and equilibrium rates in
thermally activated processes of small-scaled systems.
|
QCD theory predicts the existence of glueballs, but so far all experimental
endeavors have failed to identify any such states. To remedy this discrepancy
between QCD, which has proven to be a successful theory for strong
interactions, and the failure of experimental searches for glueballs, one is
tempted to accept the promising interpretation that the glueballs mix with
regular $q\bar q$ states of the same quantum numbers. The lattice estimate of
the masses of pure $0^{++}$ glueballs ranges from 1 to 2 GeV, which is the
region of the $f_0$ family. Thus many authors suggest that the $f_0$ mesonic
series is an ideal place to study possible mixtures of glueballs and $q\bar q$.
In this paper, following the strategy proposed by Close, Farrar and Li, we try
to determine the fraction of glueball components in $f_0$ mesons using the
measured mass spectra and the branching ratios of $J/\psi$ radiative decays
into $f_0$ mesons. Since the pioneering papers by Close et al., more than 20
years have elapsed and more accurate measurements have been made by several
experimental collaborations, so it is time to revisit this interesting topic
using new data. We suppose $f_0(500)$ and $f_0(980)$ to be pure quark states,
while for $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$, to fit both the
experimental data of $J/\psi$ radiative decay and their mass spectra, glueball
components are needed. Moreover, the mass of the pure $0^{++}$ glueball is
phenomenologically determined.
|
Stability and optimal convergence analysis of a non-uniform implicit-explicit
L1 finite element method (IMEX-L1-FEM) is studied for a class of
time-fractional linear partial differential/integro-differential equations with
non-self-adjoint elliptic part having (space-time) variable coefficients. The
proposed scheme is based on a combination of an IMEX-L1 method on graded mesh
in the temporal direction and a finite element method in the spatial direction.
With the help of a discrete fractional Gr\"{o}nwall inequality, global almost
optimal error estimates in $L^2$- and $H^1$-norms are derived for the problem
with initial data $u_0 \in H_0^1(\Omega)\cap H^2(\Omega)$. The novelty of our
approach lies in managing the interaction of the L1 approximation of the
fractional derivative and the time discrete elliptic operator to derive the
optimal estimate in the $H^1$-norm directly. Furthermore, a superconvergence
result is established when the elliptic operator is self-adjoint with time- and
space-varying coefficients; as a consequence, an $L^\infty$ error estimate
is obtained for 2D problems, again with initial data in
$H_0^1(\Omega)\cap H^2(\Omega)$. All results proved in this paper are valid
uniformly as $\alpha\longrightarrow 1^{-}$, where $\alpha$ is the order of the
Caputo fractional derivative. Numerical experiments are presented to validate
our theoretical findings.
|
Autoencoder-based reduced-order modeling (ROM) has recently attracted
significant attention, owing to its ability to capture underlying nonlinear
features. However, two critical drawbacks severely undermine its scalability to
various physical applications: entangled and therefore uninterpretable latent
variables (LVs), and the blind determination of the latent space dimension. In
this regard, this study proposes a physics-aware ROM using only the interpretable
and information-intensive LVs extracted by a $\beta$-variational autoencoder,
which are referred to as physics-aware LVs throughout this paper. To extract
these LVs, their independence and information intensity are quantitatively
scrutinized in a two-dimensional transonic flow benchmark problem. Then, the
physical meanings of the physics-aware LVs are thoroughly investigated, and we
confirm that, with an appropriate hyperparameter $\beta$, they actually
correspond to the generating factors of the training dataset: Mach number and
angle of attack. To the best of the authors' knowledge, our work is the first
to practically confirm that $\beta$-variational autoencoder can automatically
extract the physical generating factors in the field of applied physics.
Finally, physics-aware ROM, which utilizes only physics-aware LVs, is compared
with conventional ROMs, and its validity and efficiency are successfully
verified.
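
For reference, the standard $\beta$-VAE training objective underlying the LV
extraction (textbook form, not the paper's code): reconstruction error plus
$\beta$ times the KL divergence of the approximate posterior from the standard
normal prior, with $\beta > 1$ promoting disentangled, interpretable LVs.

```python
# Standard beta-VAE loss for a diagonal-Gaussian posterior N(mu, exp(logvar)).
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")               # reconstruction
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return recon + beta * kl                       # beta > 1 favors disentangling
```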
|
We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework
which modifies traditional on-policy actor-critic methods by separating policy
and value function training into distinct phases. In prior methods, one must
choose between using a shared network or separate networks to represent the
policy and value function. Using separate networks avoids interference between
objectives, while using a shared network allows useful features to be shared.
PPG is able to achieve the best of both worlds by splitting optimization into
two phases, one that advances training and one that distills features. PPG also
enables the value function to be more aggressively optimized with a higher
level of sample reuse. Compared to PPO, we find that PPG significantly improves
sample efficiency on the challenging Procgen Benchmark.
|
We examine the interplay between projectivity (in the sense that was
introduced by S.~Ghilardi) and uniform post-interpolant for the classical and
intuitionistic propositional logic. More precisely, we explore whether a
projective substitution of a formula is equivalent to its uniform
post-interpolant, assuming the substitution leaves the variables of the
interpolant unchanged. We show that in classical logic, this holds for all
formulas. Although such a nice property is missing in intuitionistic logic, we
provide Kripke semantical characterisation for propositions with this property.
As a main application of this, we show that the unification type of some
extensions of intuitionistic logic is finitary. Finally, we study
admissibility for intuitionistic logic relative to some sets of formulae.
The first author of this paper recently considered a particular case of this
relativised admissibility and found it useful in characterising the provability
logic of Heyting Arithmetic.
|
Differential cross sections for electron collisions with the O$_2$ molecule
in its ground ${X}^{3}\Sigma_g^-$ state, as well as excited ${a}^{1}\Delta_g$
and ${b}^{1}\Sigma_g^+$ states, are calculated. As in previous work, the
fixed-bond R-matrix method based on state-averaged complete active space SCF
orbitals is employed. In addition to elastic scattering of electrons by the O$_2$
${X}^{3}\Sigma_g^-$, ${a}^{1}\Delta_g$ and ${b}^{1}\Sigma_g^+$ states, electron
impact excitation from the ${X}^{3}\Sigma_g^-$ state to the ${a}^{1}\Delta_g$
and ${b}^{1}\Sigma_g^+$ states as well as '6 eV states' of
${c}^{1}\Sigma_u^{-}$, ${A'}^{3}\Delta_u$ and ${A}^{3}\Sigma_u^{+}$ states is
studied. Differential cross sections for excitation to the '6 eV states' have
not been calculated previously. Electron impact excitation to the
${b}^{1}\Sigma_g^+$ state from the metastable ${a}^{1}\Delta_g$ state is also
studied. For electron impact excitation from the O$_2$ ${X}^{3}\Sigma_g^-$
state to the ${b}^{1}\Sigma_g^+$ state, our results agree better with the
experimental measurements than previous theoretical calculations. Our cross
sections show angular behaviour similar to the experimental ones for
transitions from the ${X}^{3}\Sigma_g^-$ state to the '6 eV states', although
the calculated cross sections are up to a factor of two larger at large scattering
angles. For the excitation from the ${a}^{1}\Delta_g$ state to the
${b}^{1}\Sigma_g^+$ state, our results marginally agree with the experimental
data except for the forward scattering direction.
|
We explore the ground states of strongly interacting bosons in
vanishingly shallow and weak lattices using the multiconfiguration time-dependent
Hartree method for bosons (MCTDHB), which calculates the numerically exact
many-body wave function. Two new many-body phases, fragmented or quasi superfluid
(QSF) and incomplete fragmented Mott or quasi Mott insulator (QMI), emerge due to
the strong interplay between interaction and lattice depth. Fragmentation is
utilized as a figure of merit to distinguish these two new phases. We utilize
the eigenvalues of the reduced one-body density matrix and define an order
parameter that characterizes the pathway from a very weak lattice to a deep
lattice. We provide a detailed investigation through the measures of one- and
two-body correlations and information entropy. We find that the structures in
one- and two-body coherence are good markers to understand the gradual built-up
of intra-well correlation and decay of inter-well correlation with increase in
lattice depth.
|
We prove the existence of definable retractions onto arbitrary closed subsets
of $K^{n}$ definable over Henselian valued fields $K$. Non-Archimedean
analogues of the Tietze--Urysohn and Dugundji theorems on extending continuous
definable functions follow directly. The main ingredients of the proof are
a description of definable sets due to van den Dries, resolution of
singularities and our closedness theorem.
|
In this paper, a novel channel modeling approach, named light detection and
ranging (LiDAR)-aided geometry-based stochastic modeling (LA-GBSM), is
developed. Based on the developed LA-GBSM approach, a new millimeter wave
(mmWave) channel model for sixth-generation (6G) vehicular intelligent
sensing-communication integration is proposed, which can support the design of
intelligent transportation systems (ITSs). The proposed LA-GBSM is accurately
parameterized under high, medium, and low vehicular traffic density (VTD)
conditions via a sensing-communication simulation dataset with LiDAR point
clouds and scatterer information for the first time. Specifically, by detecting
dynamic vehicles and static buildings/trees through LiDAR point clouds via
machine learning, scatterers are divided into static and dynamic scatterers.
Furthermore, statistical distributions of parameters, e.g., distance, angle,
number, and power, related to static and dynamic scatterers are quantified
under high, medium, and low VTD conditions. To mimic channel non-stationarity
and consistency, based on the quantified statistical distributions, a new
visibility region (VR)-based algorithm in consideration of newly generated
static/dynamic scatterers is developed. Key channel statistics are derived and
simulated. By comparing simulation results and ray-tracing (RT)-based results,
the utility of the proposed LA-GBSM is verified.
|
This is a survey on contact open books and contact Dehn surgery. The relation
between these two concepts is discussed, and various applications are sketched,
e.g. the monodromy of Stein fillable contact 3-manifolds, the Giroux-Goodman
proof of Harer's conjecture on fibred links, construction of symplectic caps to
fillings (Eliashberg, Etnyre), and detection of non-loose Legendrian knots with
the help of contact surgery.
|
Basic considerations of lens detection and identification indicate that a
wide-field survey of the types planned for weak lensing and Type Ia SNe with
SNAP is close to optimal for the optical detection of strong lenses. Such a
``piggy-back'' survey might be expected even pessimistically to provide a
catalogue of a few thousand new strong lenses, with the numbers dominated by
systems of faint blue galaxies lensed by foreground ellipticals. After
sketching out our strategy for detecting and measuring these galaxy lenses
using the SNAP images, we discuss some of the scientific applications of such a
large sample of gravitational lenses: in particular we comment on the partition
of information between lens structure, the source population properties and
cosmology. Understanding this partitioning is key to assessing strong lens
cosmography's value as a cosmological probe.
|
We model accelerated trips at high-velocity aboard light sails (beam-powered
propulsion in general) and radiation rockets (thrust by anisotropic emission of
radiation) in terms of Kinnersley's solution of general relativity and its
associated geodesics. The analysis of the relativistic kinematics of radiation rockets
shows that the true problem of interstellar travel is not really the amount of
propellant, nor the duration of the trip but rather its tremendous energy cost.
Indeed, a flyby of Proxima Centauri with an ultralight gram-scale laser sail
would require the energy produced by a 1 GW power plant during about one day,
while more than 15 times the current world energy production would be required
for sending a 100-ton radiation rocket to the nearest star system. The
deformation of the local celestial sphere aboard radiation rockets is obtained
through the null geodesics of Kinnersley's spacetime in the Hamiltonian
formulation. It is shown how relativistic aberration and Doppler effect for the
accelerated traveller differ from their description in special relativity for
motion at constant velocity. We also show how our results could be extended
to extremely luminous events such as the large amounts of gravitational waves
emitted by binary black hole mergers.
|
In contrast to its chargeless version, the charged Banados, Teitelboim and
Zanelli (BTZ) metric in linear Maxwell electromagnetism is known to be singular
at $r=0$. We show, by employing nonlinear electrodynamics, that one obtains a
charged extension of the BTZ metric with a regular electric field. This we do
by choosing a logarithmic Lagrangian for the nonlinear electrodynamics. A
theorem is proved on the existence of electric black holes, and combining this
result with a duality principle disproves the existence of magnetic black
holes in 2+1 dimensions.
|
In our previous work we have introduced an analogue of
Robinson-Schensted-Knuth correspondence for Schubert calculus of the complete
flag varieties. The objects inserted are certain biwords, the outcomes of
insertion are bumpless pipe dreams, and the recording objects are decorated
chains in Bruhat order. In this paper we study a class of biwords that have a
certain associativity property; we call them plactic biwords. We introduce
analogues of Knuth moves on plactic biwords, and prove that any two plactic
biwords with the same insertion bumpless pipe dream are connected by those
moves.
|
This paper studies the distributional asymptotics of the slowly changing
sequence of logarithms $(\log_bn)$ with $b\in\mathbb{N}\setminus\{1\}.$ It is
known that $(\log_bn)$ is not uniformly distributed modulo one, and its omega
limit set is composed of a family of translated exponential distributions with
constant $\log b.$ An improved upper estimate $\left(\sqrt{\log N}/N\right)$ is
obtained for the rate of convergence with respect to (w.r.t.) the Kantorovich
metric on the circle, compared to the general results on rates of convergence
for a class of slowly changing sequences in the author's companion in-progress
work. Moreover, a sharp rate of convergence $\left(\log N/N\right)$ w.r.t. the
Kantorovich metric on the interval $[0,1]$ is derived. As a byproduct, the
rate of convergence w.r.t. the discrepancy metric (or the Kolmogorov metric)
turns out to be $\left(\log N/N\right)$ as well, which verifies that an upper
bound for this rate derived in [Y. Ohkubo and O. Strauch, Distribution of
leading digits of numbers, Unif. Distrib. Theory, $\textbf{11}$ (2016), no.1,
23--45.] is sharp.
|
Most Reinforcement Learning (RL) environments are created by adapting
existing physics simulators or video games. However, they usually lack the
flexibility required for analyzing specific characteristics of RL methods that
are often relevant to research. This paper presents Craftium, a novel framework for
exploring and creating rich 3D visual RL environments that builds upon the
Minetest game engine and the popular Gymnasium API. Minetest is built to be
extended and can be used to easily create voxel-based 3D environments (often
similar to Minecraft), while Gymnasium offers a simple and common interface for
RL research. Craftium provides a platform that allows practitioners to create
fully customized environments to suit their specific research requirements,
ranging from simple visual tasks to infinite and procedurally generated worlds.
We also provide five ready-to-use environments for benchmarking and as examples
of how to develop new ones. The code and documentation are available at
https://github.com/mikelma/craftium/.
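A minimal usage sketch under the Gymnasium API that Craftium adopts (the
environment id and the registration side effect of importing craftium are
assumptions; consult the documentation above for actual names):

import gymnasium as gym
import craftium  # assumed to register Craftium environments on import

env = gym.make("Craftium/ChopTree-v0")  # hypothetical environment id
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # random agent for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()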
|
We introduce the concept of a fiber bundle color space, which acts according
to the psychophysiological rules of human trichromatic color perception. The
image resides in the base space of the fiber bundle, and the fiber color space
contains color vectors. Further, we propose the decomposition of color vectors
into spectral and achromatic parts. A homomorphism between a color image and
the constructed two-dimensional vector field is demonstrated, which allows us
to apply well-known advanced methods of vector analysis to a color image and
ultimately obtain new numerical characteristics of the image. An appropriate
image-to-vector-field forward mapping is constructed. The proposed backward
mapping algorithm converts a two-dimensional vector field into a color image.
A type of image filter is described using sequential forward and backward
mapping algorithms. An example is given of color image formation on the basis
of the two-dimensional magnetic vector field scattered by a typical pipeline
defect.
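One plausible reading of the spectral/achromatic decomposition, sketched in
Python with NumPy (the projection onto the gray axis is our assumption, not
necessarily the authors' exact construction):

import numpy as np

def decompose(image):
    # Achromatic part: projection of each RGB color vector onto the gray
    # axis (1,1,1)/sqrt(3); spectral part: the orthogonal residual.
    image = np.asarray(image, dtype=float)  # H x W x 3
    gray_axis = np.ones(3) / np.sqrt(3.0)
    achromatic = (image @ gray_axis)[..., None] * gray_axis
    return achromatic, image - achromatic

img = np.random.rand(4, 4, 3)
achro, spec = decompose(img)
print(np.allclose(achro + spec, img))  # True: the parts recompose the image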
|
We apply convolutional neural networks (CNN) to the problem of image
orientation detection in the context of determining the correct orientation
(from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is
especially important for digitizing analog photographs. We substantially
improve on the published state of the art in terms of the performance on one of
the standard datasets, and test our system on a more difficult large dataset of
consumer photos. We use Guided Backpropagation to obtain insights into how our
CNN detects photo orientation, and to explain its mistakes.
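A minimal sketch of the classification setup, assuming PyTorch (a generic
small CNN for illustration only; the paper's actual architecture and training
pipeline differ):

import torch
import torch.nn as nn

# Four-way orientation classifier: 0, 90, 180, or 270 degrees.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 4),   # assumes 224x224 input images
)
x = torch.randn(1, 3, 224, 224)   # dummy photo
logits = model(x)                 # scores for the four orientations
pred = logits.argmax(dim=1)       # 0 -> 0 deg, 1 -> 90 deg, ...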
|
Baryon acoustic oscillations, measured through the patterned distribution of
galaxies or other baryon tracing objects on very large (100 Mpc) scales, offer
a possible geometric probe of cosmological distances. Pluses and minuses in
this approach's leverage for understanding dark energy are discussed, as are
systematic uncertainties requiring further investigation. Conclusions are that
1) BAO offer promise of a new avenue to distance measurements and further study
is warranted, 2) the measurements will need to attain ~1% accuracy (requiring a
10000 square degree spectroscopic survey) for their dark energy leverage to
match that from supernovae, but do give complementary information at 2%
accuracy. Because of the ties to the matter dominated era, BAO is not a
replacement probe of dark energy, but a valuable complement.
|
We study scattering properties of a PT-symmetric square well potential with
real depth larger than the threshold of particle-antiparticle pair production
as the time component of a vector potential in the (1+1)-dimensional Dirac
equation.
|
I clarify the differences between various approaches in the literature which
attempt to link gravity and thermodynamics. I then describe a new perspective
based on the following features: (1) As in the case of any other matter field,
the gravitational field equations should also remain unchanged if a constant is
added to the Lagrangian; in other words, the field equations of gravity should
remain invariant under the transformation $T^a_b \to T^a_b +
(\mathrm{constant})\,\delta^a_b$. (2) Each event of spacetime has a certain number ($f$) of
microscopic degrees of freedom (`atoms of spacetime'). This quantity $f$ is
proportional to the area measure of an equi-geodesic surface, centered at that
event, when the geodesic distance tends to zero. The spacetime should have a
zero-point length in order for $f$ to remain finite. (3) The dynamics is
determined by extremizing the heat density at all events of the spacetime. The
heat density is the sum of a part contributed by matter and a part contributed
by the atoms of spacetime, with the latter being $L_P^{-4} f$. The implications
of this approach are discussed.
|
This paper generalizes the results of [13] and then provides an interesting
example. We construct a family of $W$-like maps $\{W_a\}$ with a turning fixed
point having slope $s_1$ on one side and $-s_2$ on the other. Each $W_a$ has an
absolutely continuous invariant measure $\mu_a$. Depending on whether
$\frac{1}{s_1}+\frac{1}{s_2}$ is larger than, equal to, or smaller than 1, we
show that the limit of $\mu_a$ is a singular measure, a combination of singular
and absolutely continuous measures, or an absolutely continuous measure,
respectively. It is known that the invariant density of a single piecewise
expanding map has a positive lower bound on its support. In Section 4 we give
an example showing that in general, for a family of piecewise expanding maps
with slopes larger than 2 in modulus and converging to a piecewise expanding
map, their invariant densities do not necessarily have a positive lower bound
on the support.
|
Finding the mean of the total number N_{tot} of critical points for
N-dimensional random energy landscapes is reduced to averaging the absolute
value of the characteristic polynomial of the corresponding Hessian. For any finite
N we provide the exact solution to the problem for a class of landscapes
corresponding to the "toy model" of manifolds in a random environment. For N >> 1,
our asymptotic analysis reveals a phase transition at some critical value \mu_c
of a control parameter \mu from a phase with finite landscape complexity to the
phase with vanishing complexity. The same value of the control parameter is
known to correspond to an onset of glassy behaviour at zero temperature.
Finally, we discuss a method of dealing with the modulus of the spectral
determinant applicable to a broad class of problems.
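The reduction described in the first sentence is the Kac-Rice formula; in
standard form (notation ours),

$$ \mathbb{E}[N_{\rm tot}] = \int_{\mathbb{R}^N} \mathbb{E}\Big[\big|\det \nabla^2 E(\mathbf{x})\big| \,\Big|\, \nabla E(\mathbf{x})=\mathbf{0}\Big]\, p_{\nabla E(\mathbf{x})}(\mathbf{0})\, d\mathbf{x}, $$

where $|\det \nabla^2 E|$ equals the absolute value of the characteristic
polynomial of the Hessian evaluated at zero.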
|
A new research hypothesis has been developed by the author, based upon finding
astronomically based `cosmic constituents' of the Universe that may be created
or influenced by, or have a special relationship with, possible dark matter
candidates. He then developed a list of 14 relevant and plausible `cosmic
constituents' of the Universe, which was then used to establish a list of
constraints regarding the nature and characteristics of the long-sought dark
matter particles. A dark matter candidate was then found that best conformed to
the 14 constraints established by the `cosmic constituents.' The author then
used this same dark matter candidate to provide evidence that the Big Bang was
relativistic, had a low entropy, and therefore probably satisfied the Second
Law of Thermodynamics.
|
Traditionally, learning from human demonstrations via direct behavior cloning
can lead to high-performance policies given that the algorithm has access to
large amounts of high-quality data covering the most likely scenarios to be
encountered when the agent is operating. However, in real-world scenarios,
expert data is limited and it is desired to train an agent that learns a
behavior policy general enough to handle situations that were not demonstrated
by the human expert. An alternative is to learn these policies with no
supervision via deep reinforcement learning; however, these algorithms require
a large amount of computing time to perform well on complex tasks with
high-dimensional state and action spaces, such as those found in StarCraft II.
Automatic curriculum learning is a recent mechanism comprised of techniques
designed to speed up deep reinforcement learning by adjusting the difficulty of
the current task to be solved according to the agent's current capabilities.
Designing a proper curriculum, however, can be challenging for sufficiently
complex tasks, and thus we leverage human demonstrations as a way to guide
agent exploration during training. In this work, we aim to train deep
reinforcement learning agents that can command multiple heterogeneous actors
where starting positions and overall difficulty of the task are controlled by
an automatically-generated curriculum from a single human demonstration. Our
results show that an agent trained via automated curriculum learning can
outperform state-of-the-art deep reinforcement learning baselines and match the
performance of the human expert in a simulated command and control task in
StarCraft II modeled over a real military scenario.
|
Objective: We aimed to develop and validate a novel multimodal framework,
HiMAL (Hierarchical, Multi-task Auxiliary Learning), for predicting cognitive
composite functions as auxiliary tasks that estimate the longitudinal risk of
transition from Mild Cognitive Impairment (MCI) to Alzheimer Disease (AD).
Methods: HiMAL utilized multimodal longitudinal visit data including imaging
features, cognitive assessment scores, and clinical variables from MCI patients
in the Alzheimer Disease Neuroimaging Initiative (ADNI) dataset, to predict at
each visit if an MCI patient will progress to AD within the next 6 months.
Performance of HiMAL was compared with state-of-the-art single-task and
multi-task baselines using area under the receiver operator curve (AUROC) and
precision recall curve (AUPRC) metrics. An ablation study was performed to
assess the impact of each input modality on model performance. Additionally,
longitudinal explanations regarding risk of disease progression were provided
to interpret the predicted cognitive decline.
Results: Out of 634 MCI patients (mean [IQR] age: 72.8 [67-78], 60% men),
209 (32%) progressed to AD. HiMAL showed better prediction performance compared
to all single-modality single-task baselines (AUROC = 0.923 [0.915-0.937];
AUPRC = 0.623 [0.605-0.644]; all p<0.05). Ablation analysis highlighted that
imaging and cognition scores made the largest contributions towards prediction
of disease progression.
Discussion: Clinically informative model explanations anticipate cognitive
decline 6 months in advance, aiding clinicians in future disease progression
assessment. HiMAL relies on routinely collected EHR variables for proximal (6
months) prediction of AD onset, indicating its translational potential for
point-of-care monitoring and management of high-risk patients.
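For reference, the two reported metrics can be computed with scikit-learn as
follows (illustrative dummy data, not the study's evaluation code):

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = np.array([0, 0, 1, 1, 0, 1])   # progressed to AD within 6 months?
y_score = np.array([0.1, 0.4, 0.8, 0.65, 0.2, 0.9])  # model risk scores
auroc = roc_auc_score(y_true, y_score)
auprc = average_precision_score(y_true, y_score)  # common AUPRC estimator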
|
A popular way to create detailed yet easily controllable 3D shapes is via
procedural modeling, i.e. generating geometry using programs. Such programs
consist of a series of instructions along with their associated parameter
values. To fully realize the benefits of this representation, a shape program
should be compact and only expose degrees of freedom that allow for meaningful
manipulation of output geometry. One way to achieve this goal is to design
higher-level macro operators that, when executed, expand into a series of
commands from the base shape modeling language. However, manually authoring
such macros, much like shape programs themselves, is difficult and largely
restricted to domain experts. In this paper, we present ShapeMOD, an algorithm
for automatically discovering macros that are useful across large datasets of
3D shape programs. ShapeMOD operates on shape programs expressed in an
imperative, statement-based language. It is designed to discover macros that
make programs more compact by minimizing the number of function calls and free
parameters required to represent an input shape collection. We run ShapeMOD on
multiple collections of programs expressed in a domain-specific language for 3D
shape structures. We show that it automatically discovers a concise set of
macros that abstract out common structural and parametric patterns that
generalize over large shape collections. We also demonstrate that the macros
found by ShapeMOD improve performance on downstream tasks including shape
generative modeling and inferring programs from point clouds. Finally, we
conduct a user study that indicates that ShapeMOD's discovered macros make
interactive shape editing more efficient.
|
The classifying spaces of cobordisms of singular maps have two fairly
different constructions. We expose a homotopy theoretical connection between
them. As a corollary we show that the classifying spaces in some cases have a
simple product structure.
|
The relativistic correction to the AdS/CFT-implied heavy quark potential is
examined within the framework of the potential model. For the typical range of
the coupling strength appropriate to heavy-ion collisions, we find the
correction is significant in size and lowers the dissociation temperature of
quarkonia.
|
This paper deals with a version of the two-timing method which describes
various `slow' effects caused by externally imposed `fast' oscillations. Such
small oscillations are often called \emph{vibrations}, and the research area can
be referred to as \emph{vibrodynamics}. The governing equations represent a
generic system of first-order ODEs containing a prescribed oscillating velocity
u, given in a general form. Two basic small parameters stand in for the inverse
frequency and the ratio of two time-scales; they appear in equations as regular
perturbations. The proper connections between these parameters yield the
\emph{distinguished limits}, leading to the existence of closed systems of
asymptotic equations. The aim of this paper is twofold: (i) to clarify (or to
demystify) the choices of a slow variable, and (ii) to give a coherent
exposition which is accessible for practical users in applied mathematics,
sciences and engineering. We focus our study on the usually hidden aspects of
the two-timing method such as the \emph{uniqueness or multiplicity of
distinguished limits} and \emph{universal structures of averaged equations}.
The main result is the demonstration that there are two (and only two)
different distinguished limits. Explicit instructions for practically solving
the ODEs for different classes of u are presented. The key roles of drift
velocity and the qualitatively new appearance of the linearized equations are
discussed. To illustrate the broadness of our approach, two examples from
mathematical biology are shown.
|
We describe the Carnegie-Spitzer-IMACS (CSI) Survey, a wide-field, near-IR
selected spectrophotometric redshift survey with the Inamori Magellan Areal
Camera and Spectrograph (IMACS) on Magellan-Baade. By defining a flux-limited
sample of galaxies in Spitzer 3.6micron imaging of SWIRE fields, the CSI Survey
efficiently traces the stellar mass of average galaxies to z~1.5. This first
paper provides an overview of the survey selection, observations, processing of
the photometry and spectrophotometry. We also describe the processing of the
data: new methods of fitting synthetic templates of spectral energy
distributions are used to derive redshifts, stellar masses, emission line
luminosities, and coarse information on recent star-formation. Our unique
methodology for analyzing low-dispersion spectra taken with multilayer prisms
in IMACS, combined with panchromatic photometry from the ultraviolet to the IR,
has yielded 37,000 high-quality redshifts in our first 5.3 sq. deg. of the SWIRE
XMM-LSS field. We use three different approaches to estimate our redshift
errors and find robust agreement. Over the full range of 3.6micron fluxes of
our selection, we find typical uncertainties of sigma_z/(1+z) < 0.015. In
comparisons with previously published VVDS redshifts, for example, we find a
scatter of sigma_z/(1+z) = 0.012 for galaxies at 0.8< z< 1.2. For galaxies
brighter and fainter than i=23 mag, we find sigma_z/(1+z) = 0.009 and
sigma_z/(1+z) = 0.025, respectively. Notably, our low-dispersion spectroscopy
and analysis yields comparable redshift uncertainties and success rates for
both red and blue galaxies, largely eliminating color-based systematics that
can seriously bias observed dependencies of galaxy evolution on environment.
|
We propose a method for computing the Kolmogorov-Sinai (KS) entropy of
chaotic systems. In this method, the KS entropy is expressed as a statistical
average over the canonical ensemble for a Hamiltonian with many ground states.
This Hamiltonian is constructed directly from an evolution equation that
exhibits chaotic dynamics. As an example, we compute the KS entropy for a
chaotic repeller by evaluating the thermodynamic entropy of a system with many
ground states.
|
We present a model-based approach to wind velocity profiling using motion
perturbations of a multirotor unmanned aircraft system (UAS) in both hovering
and steady ascending flight. A state estimation framework was adapted to a set
of closed-loop rigid body models identified for an off-the-shelf quadrotor. The
quadrotor models used for wind estimation were characterized for hovering and
steady ascending flight conditions ranging between 0 and 2 m/s. The closed-loop
models were obtained using system identification algorithms to determine model
structures and estimate model parameters. The wind measurement method was
validated experimentally above the Virginia Tech Kentland Experimental Aircraft
Systems Laboratory by comparing quadrotor and independent sensor measurements
from a sonic anemometer and two SoDARs. Comparison results demonstrated
quadrotor wind estimation in close agreement with the independent wind velocity
measurements. Wind velocity profiles were difficult to validate using
time-synchronized SoDAR measurements, however. Analysis of the noise intensity
and signal-to-noise ratio of the SoDARs showed that close-proximity quadrotor
operations can corrupt wind measurements from SoDARs.
|
We propose two new dependent type systems. The first is a dependent
graded/linear type system in which a graded dependent type system is connected via
modal operators to a linear type system in the style of Linear/Non-linear
logic. We then generalize this system to support many graded systems connected
by many modal operators through the introduction of modes from Adjoint Logic.
Finally, we prove several meta-theoretic properties of these two systems
including graded substitution.
|
The training process of Generative Adversarial Networks (GANs), in most
cases, applies uniform or Gaussian sampling in the latent space, which likely
spends most of the computation on examples that are already handled properly
and are easy to generate. Theoretically, importance sampling speeds up
stochastic optimization
in supervised learning by prioritizing training examples. In this paper, we
explore the possibility of adapting importance sampling into adversarial
learning. We use importance sampling to replace uniform and Gaussian sampling
methods in the latent space and employ a normalizing flow to approximate the
latent-space posterior distribution by density estimation. Empirically, results on
MNIST and Fashion-MNIST demonstrate that our method significantly accelerates
GAN's optimization while retaining visual fidelity in generated samples.
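A minimal NumPy illustration of the underlying principle, importance sampling
as reweighting by the density ratio p/q (the toy densities below are our
choices; the paper additionally learns the proposal with a normalizing flow):

import numpy as np

rng = np.random.default_rng(0)

def p(z):  # target: standard normal latent prior
    return np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)

def q(z):  # proposal: heavier-tailed normal with std 2
    return np.exp(-0.5 * (z / 2)**2) / (2 * np.sqrt(2 * np.pi))

f = lambda z: z**2                      # any statistic of the latent code
z = rng.normal(0.0, 2.0, size=100_000)  # draw from q
w = p(z) / q(z)                         # importance weights
estimate = np.mean(w * f(z))            # approximates E_p[z^2] = 1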
|
The machinery of noncommutative Schur functions is a general approach to
Schur positivity of symmetric functions initiated by Fomin-Greene. Hwang
recently adapted this theory to posets to give a new approach to the
Stanley-Stembridge conjecture. We further develop this theory to prove that the
symmetric function associated to any $P$-Knuth equivalence graph is Schur
positive. This settles a conjecture of Kim and the third author, and refines
results of Gasharov, Shareshian-Wachs, and Hwang on the Schur positivity of
chromatic symmetric functions.
|
People usually get involved in multiple social networks to enjoy new services
or to fulfill their needs. Many new social networks try to attract users of
other existing networks to increase the number of their users. Once a user
(called source user) of a social network (called source network) joins a new
social network (called target network), a new inter-network link (called anchor
link) is formed between the source and target networks. In this paper, we
concentrated on predicting the formation of such anchor links between
heterogeneous social networks. Unlike conventional link prediction problems in
which the formation of a link between two existing users within a single
network is predicted, in anchor link prediction, the target user is missing and
will be added to the target network once the anchor link is created. To solve
this problem, we use meta-paths as a powerful tool for utilizing heterogeneous
information in both the source and target networks. To this end, we propose an
effective general meta-path-based approach called Connector and Recursive
Meta-Paths (CRMP). By using these two categories of meta-paths, we
model different aspects of social factors that may affect a source user to join
the target network, resulting in the formation of a new anchor link. Extensive
experiments on real-world heterogeneous social networks demonstrate the
effectiveness of the proposed method against the recent methods.
|
The Boltzmann distribution of electrons poses a fundamental barrier to
lowering energy dissipation in conventional electronics, often termed the
Boltzmann Tyranny. Negative capacitance in ferroelectric materials, which stems
from the stored energy of phase transition, could provide a solution, but a
direct measurement of negative capacitance has so far been elusive. Here we
report the observation of negative capacitance in a thin, epitaxial
ferroelectric film. When a voltage pulse is applied, the voltage across the
ferroelectric capacitor is found to be decreasing with time, in exactly the
opposite direction to the way the voltage across a regular capacitor would change.
Analysis of this inductance-like behavior from a capacitor presents an
unprecedented insight into the intrinsic energy profile of the ferroelectric
material and could pave the way for completely new applications.
|
Local differential privacy (LDP) has received much interest recently. In
existing protocols with LDP guarantees, a user encodes and perturbs his data
locally before sharing it with the aggregator. In common practice, however, users
would prefer not to answer all the questions due to different
privacy-preserving preferences for different questions, which leads to missing
data or a loss of data quality. In this paper, we demonstrate a new
approach for addressing the challenges of data perturbation with consideration
of users' privacy preferences. Specifically, we first propose BiSample: a
bidirectional sampling technique for value perturbation in the framework of LDP.
Then we combine the BiSample mechanism with users' privacy preferences for
missing data perturbation. Theoretical analysis and experiments on a set of
datasets confirm the effectiveness of the proposed mechanisms.
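For background, a standard one-bit epsilon-LDP mechanism for a bounded value
(classical randomized-response-style perturbation, shown for context; this is
not the BiSample mechanism itself):

import numpy as np

rng = np.random.default_rng(0)

def one_bit_ldp(x, eps):
    # Perturb a value x in [-1, 1] under epsilon-LDP: report +1 with a
    # probability that tilts with x, then rescale for unbiasedness.
    e = np.exp(eps)
    p_plus = 0.5 + x * (e - 1) / (2 * (e + 1))
    b = 1.0 if rng.random() < p_plus else -1.0
    return b * (e + 1) / (e - 1)  # E[output] = x

print(one_bit_ldp(0.3, eps=1.0))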
|
Understanding and manipulating properties emerging at a surface or an
interface require a thorough knowledge of structure-property relationships. We
report a study of a prototype oxide system, La2/3Sr1/3MnO3 grown on
SrTiO3(001), by combining in-situ angle-resolved x-ray photoelectron
spectroscopy, ex-situ x-ray diffraction, and scanning transmission electron
microscopy/spectroscopy with electric transport measurements. We find that
La2/3Sr1/3MnO3 films thicker than 20 unit cells (u.c.) exhibit a universal
behavior with no more than one u.c. intermixing at the interface but at least 3
u.c. of Sr segregation near the surface, which is (La/Sr)O terminated. The
conductivity vs film thickness shows the existence of nonmetallic layers with
thickness ~ 6.5 +/- 0.9 u.c., which is independent of film thickness but mainly
relates to the deviation of Sr concentration near the surface region. Below 20
u.c., the surface of the films appears to have mixed (La/Sr)O and MnO2 termination.
Decreasing film thickness to less than 10 u.c. leads to the enhanced deviation
of chemical composition in the films and eventually drives the film insulating.
Our observation offers a natural explanation for the thickness-driven
metal-nonmetal transition in thin films based on the variation of film
stoichiometry.
|
We generalize the Brin-Higman-Thompson groups $n G_{k,1}$ to monoids $n
M_{k,1}$, for $n \ge 1$ and $k \ge 2$, by replacing bijections by partial
functions. The monoid $n M_{k,1}$ has $n G_{k,1}$ as its group of units, and is
congruence-simple. Moreover, $n M_{k,1}$ is finitely generated, and for $n \ge
2$ its word problem is {\sf coNP}-complete. We also present new results about
higher-dimensional joinless codes.
|
We have now tested the Finch Committee's Hypothesis that Green Open Access
Mandates are ineffective in generating deposits in institutional repositories.
With data from ROARMAP on institutional Green OA mandates and data from ROAR on
institutional repositories, we show that deposit number and rate are
significantly correlated with mandate strength (classified as 1-12): the
stronger the mandate, the more the deposits. The strongest mandates generate
deposit rates of 70%+ within 2 years of adoption, compared to the un-mandated
deposit rate of 20%. The effect is already detectable at the national level,
where the UK, which has the largest proportion of Green OA mandates, has a
national OA rate of 35%, compared to the global baseline of 25%. The conclusion
is that, contrary to the Finch Hypothesis, Green Open Access Mandates do have a
major effect, and the stronger the mandate, the stronger the effect (the Liege
ID/OA mandate, linked to research performance evaluation, being the strongest
mandate model). RCUK (as well as all universities, research institutions and
research funders worldwide) would be well advised to adopt the strongest Green
OA mandates and to integrate institutional and funder mandates.
|
Alternative metrics are currently one of the most popular research topics in
scientometric research. This paper provides an overview of research into three
of the most important altmetrics: microblogging (Twitter), online reference
managers (Mendeley and CiteULike) and blogging. The literature is discussed in
relation to the possible use of altmetrics in research evaluation. Since the
research was particularly interested in the correlation between altmetrics
counts and citation counts, this overview focuses particularly on this
correlation. For each altmetric, a meta-analysis is calculated for its
correlation with traditional citation counts. As the results of the
meta-analyses show, the correlation with traditional citations for
micro-blogging counts is negligible (pooled r=0.003), for blog counts it is
small (pooled r=0.12) and for bookmark counts from online reference managers,
medium to large (CiteULike pooled r=0.23; Mendeley pooled r=0.51).
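A sketch of how such pooled correlations are typically obtained, via Fisher's
z transform under a fixed-effect model (the paper's exact meta-analytic model
may differ):

import numpy as np

def pooled_r(rs, ns):
    # Fixed-effect pooling of correlation coefficients: transform to
    # Fisher's z, weight each study by n - 3, then back-transform.
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)
    w = ns - 3.0
    return np.tanh(np.sum(w * z) / np.sum(w))

print(pooled_r([0.45, 0.55, 0.50], [120, 300, 80]))  # illustrative inputs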
|
We construct an optimally local perfect lattice action for free scalars of
arbitrary mass, and truncate its couplings to a unit hypercube. Spectral and
thermodynamic properties of this ``hypercube scalar'' are drastically improved
compared to the standard action. We also discuss new variants of perfect
actions, using anisotropic or triangular lattices, or applying new types of
RGTs. Finally we add a $\lambda \phi^4$ term and address perfect lattice
perturbation theory. We report on a lattice action for the anharmonic
oscillator, which is perfect to $O(\lambda)$.
|
Gaussian Processes (GPs) are a versatile and popular method in Bayesian
Machine Learning. A common modification is the Sparse Variational Gaussian
Process (SVGP), which is well suited to dealing with large datasets. While GPs
allow one to deal elegantly with Gaussian-distributed target variables in closed
form, their applicability can be extended to non-Gaussian data as well. These
extensions are usually impossible to treat in closed form and hence require
approximate solutions. This paper proposes to approximate the inverse-link
function, which is necessary when working with non-Gaussian likelihoods, by a
piece-wise constant function. It will be shown that this yields a closed form
solution for the corresponding SVGP lower bound. In addition, it is
demonstrated how the piece-wise constant function itself can be optimized,
resulting in an inverse-link function that can be learnt from the data at hand.
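A minimal sketch of the piece-wise constant inverse-link idea (the grid, the
heights, and the logistic target below are our illustrative choices; in the
paper the heights are optimized against the SVGP lower bound):

import numpy as np

def piecewise_constant_link(edges, heights):
    # Return g: R -> R that is constant on each bin [edges[i], edges[i+1]).
    edges, heights = np.asarray(edges), np.asarray(heights)
    def g(f):
        idx = np.searchsorted(edges, f, side="right") - 1
        return heights[np.clip(idx, 0, len(heights) - 1)]
    return g

# Crude step approximation of the logistic inverse link.
edges = np.linspace(-5, 5, 21)
centers = 0.5 * (edges[:-1] + edges[1:])
g = piecewise_constant_link(edges, 1 / (1 + np.exp(-centers)))
print(g(np.array([-4.0, 0.0, 4.0])))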
|
Many real-world systems are governed by complex, nonlinear dynamics.
By modeling these dynamics, we can gain insight into how these systems work,
make predictions about how they will behave, and develop strategies for
controlling them. While there are many methods for modeling nonlinear dynamical
systems, existing techniques face a trade-off between offering interpretable
descriptions and making accurate predictions. Here, we develop a class of
models that aims to achieve both simultaneously, smoothly interpolating between
simple descriptions and more complex, yet also more accurate models. Our
probabilistic model achieves this multi-scale property through a hierarchy of
locally linear dynamics that jointly approximate global nonlinear dynamics. We
call it the tree-structured recurrent switching linear dynamical system. To fit
this model, we present a fully-Bayesian sampling procedure using Polya-Gamma
data augmentation to allow for fast and conjugate Gibbs sampling. Through a
variety of synthetic and real examples, we show how these models outperform
existing methods in both interpretability and predictive capability.
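Schematically (notation ours), each discrete state indexes a locally linear
regime,

$$ x_{t+1} = A_{z_t} x_t + b_{z_t} + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, Q_{z_t}), $$

where $z_t$ is chosen by stochastically traversing the tree as a function of
$x_t$, so that nearby leaves share ancestors and hence similar dynamics.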
|
The modification of the $\phi$ meson spectrum in nuclear matter is studied in
an updated QCD sum rule analysis, taking into account recent improvements in
properly treating the chiral invariant and breaking components of four-quark
condensates. Allowing both mass and decay width to change at finite density,
the QCD sum rule analysis determines certain combinations of changes for these
parameters that satisfy the sum rules equally well. A comprehensive error
analysis, including uncertainties related to the behavior of various
condensates at linear order in density, the employed renormalization scale and
perturbative corrections of the Wilson coefficients, is used to compute the
allowed ranges of these parameter combinations. We find that the $\phi$ meson
mass shift in nuclear matter is especially sensitive to the strange sigma term
$\sigma_{sN}$, which determines the decrease of the strange quark condensate in
nuclear matter. Specifically, we obtain a linear relation between the width
$\Gamma_{\phi}$ and mass shift $\Delta m_{\phi}$ given as $ \Gamma_{\phi} =
a\Delta m_{\phi} + b\sigma_{sN}+c$ with $a = (3.947^{+0.139}_{-0.130})$, $b =
(0.936^{+0.180}_{-0.177} )$ and $c = -(7.707^{+4.791}_{-5.679}) \mathrm{MeV}$.
|