Let $\mathcal{L}=-\Delta+V$ be a Schr\"{o}dinger operator, where the
nonnegative potential $V$ belongs to the reverse H\"{o}lder class $B_{q}$.
With the aid of the subordination formula, we establish regularity estimates
for the fractional heat semigroup $\{e^{-t\mathcal{L}^{\alpha}}\}_{t>0}$
associated with $\mathcal{L}$. As an application, we obtain the
$BMO^{\gamma}_{\mathcal{L}}$-boundedness of the maximal function and of the
Littlewood-Paley $g$-functions associated with $\mathcal{L}$, via a $T1$
theorem.
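For context, the subordination formula referred to above is the standard identity (for $0<\alpha<1$) expressing the fractional semigroup as an average of the heat semigroup; the display below is this textbook identity, not a formula quoted from the paper:

```latex
% Subordination formula, 0 < \alpha < 1; \eta_t^{\alpha} \ge 0 is the
% one-sided stable (subordinator) density.
\[
  e^{-t\mathcal{L}^{\alpha}} f
    = \int_{0}^{\infty} \eta_{t}^{\alpha}(s)\, e^{-s\mathcal{L}} f \,\mathrm{d}s,
  \qquad
  \int_{0}^{\infty} \eta_{t}^{\alpha}(s)\,\mathrm{d}s = 1.
\]
```

Regularity estimates for $e^{-t\mathcal{L}^{\alpha}}$ then follow by integrating known heat-semigroup estimates against $\eta_t^{\alpha}$.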
|
In this manuscript, we study optimal control problems for stochastic delay
differential equations using the dynamic programming approach in Hilbert spaces
via viscosity solutions of the associated Hamilton-Jacobi-Bellman equations. We
show how to use the partial $C^{1,\alpha}$-regularity of the value function
established in \cite{defeo_federico_swiech} to obtain optimal feedback
controls. The main result of the paper is a verification theorem which provides
a sufficient condition for optimality using the value function. We then discuss
its applicability to the construction of optimal feedback controls. We provide
an application to stochastic optimal advertising problems.
|
These are the lecture notes of a seminar held at the Universitat Aut\`onoma
de Barcelona where the Jones-Wolff theorem about the dimension of harmonic
measure in the plane is explained in full detail, for non-expert readers.
|
We propose and analyze an extended Fourier pseudospectral (eFP) method for
the spatial discretization of the Gross-Pitaevskii equation (GPE) with low
regularity potential by treating the potential in an extended window for its
discrete Fourier transform. The proposed eFP method maintains optimal
convergence rates with respect to the regularity of the exact solution even
when the potential has low regularity, and its computational cost is similar
to that of the standard Fourier pseudospectral method; it is thus both
efficient and accurate. Furthermore, similar to the Fourier
spectral/pseudospectral methods, the eFP method can be easily coupled with
different popular temporal integrators, including finite difference methods,
time-splitting methods, and exponential-type integrators. Numerical results
are presented to validate our optimal error estimates, to demonstrate that
they are sharp, and to show the method's efficiency in practical computations.
|
We analyze the moments of the isosinglet generalized parton distributions H,
E, H-tilde, E-tilde of the nucleon in one-loop order of heavy-baryon chiral
perturbation theory. We discuss in detail the construction of the operators in
the effective theory that are required to obtain all corrections to a given
order in the chiral power counting. The results will serve to improve the
extrapolation of lattice results to the chiral limit.
|
Most existing natural language interfaces to databases (NLIDBs) were designed
to be used with ``snapshot'' database systems, which provide very limited
facilities for manipulating time-dependent data. Consequently, most NLIDBs also
provide very limited support for the notion of time. The database community is
becoming increasingly interested in _temporal_ database systems. These are
intended to store and manipulate in a principled manner information not only
about the present, but also about the past and future.
This thesis develops a principled framework for constructing English NLIDBs
for _temporal_ databases (NLITDBs), drawing on research in tense and aspect
theories, temporal logics, and temporal databases. I first explore temporal
linguistic phenomena that are likely to appear in English questions to NLITDBs.
Drawing on existing linguistic theories of time, I formulate an account of a
large number of these phenomena that is simple enough to be embodied in
practical NLITDBs. Exploiting ideas from temporal logics, I then define a
temporal meaning representation language, TOP, and I show how the HPSG grammar
theory can be modified to incorporate the tense and aspect account of this
thesis, and to map a wide range of English questions involving time to
appropriate TOP expressions. Finally, I present and prove the correctness of a
method to translate from TOP to TSQL2, TSQL2 being a temporal extension of the
SQL-92 database language. In this way, I establish a sound route from English
questions involving time to a general-purpose temporal database language,
which can act as a principled framework for building NLITDBs. To demonstrate
that
this framework is workable, I employ it to develop a prototype NLITDB,
implemented using ALE and Prolog.
|
Let H(G) be the Hecke algebra of a reductive p-adic group G. We formulate a
conjecture for the ideals in the Bernstein decomposition of H(G). The
conjecture says that each ideal is geometrically equivalent to an algebraic
variety. Our conjecture is closely related to Lusztig's conjecture on the
asymptotic Hecke algebra. We prove our conjecture for SL(2) and GL(n). We also
prove part (1) of our conjecture for the Iwahori ideals of the groups PGL(n)
and SO(5).
|
We use scanning photocurrent microscopy (SPCM) to investigate individual
suspended semiconducting carbon nanotube devices where the potential profile is
engineered by means of local gates. In situ tunable p-n junctions can be
generated at any position along the nanotube axis. Combining SPCM with
transport measurements allows a detailed microscopic study of the evolution of
the band profiles as a function of the gate voltages. Here we study the
emergence of p-n and n-p junctions out of an n-type transistor channel using
two local gates. In both cases, the I-V curves recorded for gate
configurations corresponding to the formation of the p-n or n-p junction in
the SPCM measurements reveal a clear transition from the resistive to the
rectifying regime. The rectification curves are well fitted by the Shockley
diode model with a series resistor and reveal clear ideal-diode behavior.
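As an illustration of the fitting procedure mentioned in the last sentence, the Shockley diode equation with a series resistor, $I = I_s(\exp((V - I R_s)/(n V_T)) - 1)$, can be solved explicitly with the Lambert W function and fitted by least squares. This is a generic sketch with hypothetical parameter values, not the paper's analysis code:

```python
# Hedged sketch: fit an I-V curve to the Shockley diode model with a series
# resistor, I = I_s*(exp((V - I*R_s)/(n*V_T)) - 1), using the explicit
# Lambert-W solution. All numbers below are illustrative placeholders.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

VT = 0.02585  # thermal voltage at room temperature [V]

def diode_iv(V, I_s, n, R_s):
    """Current through a diode in series with R_s at bias V (Lambert-W form)."""
    a = R_s / (n * VT)
    arg = a * I_s * np.exp((V + I_s * R_s) / (n * VT))
    return lambertw(arg).real / a - I_s

# Synthetic "measurements" standing in for the experimental I-V data.
V = np.linspace(0.0, 0.8, 50)
rng = np.random.default_rng(0)
I_meas = diode_iv(V, 1e-12, 1.2, 50.0) + rng.normal(0.0, 1e-6, V.size)

popt, _ = curve_fit(diode_iv, V, I_meas, p0=[1e-12, 1.1, 30.0])
print("fitted I_s, n, R_s:", popt)
```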
|
We reconsider the training objective of Generative Adversarial Networks
(GANs) from the mixed Nash Equilibria (NE) perspective. Inspired by the
classical prox methods, we develop a novel algorithmic framework for GANs via
an infinite-dimensional two-player game and prove rigorous convergence rates to
the mixed NE, resolving the longstanding problem that no provably convergent
algorithm exists for general GANs. We then propose a principled procedure to
reduce our novel prox methods to simple sampling routines, leading to
practically efficient algorithms. Finally, we provide experimental evidence
that our approach outperforms methods that seek pure strategy equilibria, such
as SGD, Adam, and RMSProp, both in speed and quality.
|
We investigate the BCS-BEC crossover in an ultracold atomic gas in the
presence of disorder. The disorder is incorporated in the mean-field formalism
through Gaussian fluctuations. With increasing disorder strength, we observe
that the fidelity susceptibility, as a function of the interaction coupling,
evolves toward an asymmetric line shape, which may point to an impending
quantum phase transition. The asymmetric line shape is further analyzed using
the statistical tools of skewness and kurtosis. We extend our analysis to the
density of states (DOS) for a better understanding of the crossover in the
disordered
environment.
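As a minimal illustration of the statistical tools mentioned above, the asymmetry of a line shape can be quantified by its skewness and kurtosis; the profile below is a hypothetical skewed curve, not the computed fidelity susceptibility:

```python
# Sketch: quantify the asymmetry of a line shape via skewness and kurtosis.
import numpy as np
from scipy.stats import skewnorm, skew, kurtosis

g = np.linspace(-4, 4, 2001)          # interaction-coupling axis (placeholder)
shape = skewnorm.pdf(g, a=4.0)        # asymmetric line shape (placeholder)

# Treat the normalized line shape as a weight over the coupling axis and
# draw samples, so scipy's sample-based estimators can be applied.
w = shape / shape.sum()
samples = np.random.default_rng(1).choice(g, size=100_000, p=w)

print("skewness:", skew(samples))
print("excess kurtosis:", kurtosis(samples))
```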
|
Entangled atomic states, such as spin squeezed states, represent a promising
resource for a new generation of quantum sensors and atomic clocks. We
demonstrate that optimal control techniques can be used to substantially
enhance the degree of spin squeezing in strongly interacting many-body systems,
even in the presence of noise and imperfections. Specifically, we present a
protocol that is robust to noise which outperforms conventional methods.
Potential experimental implementations are discussed.
|
If one tries to represent the galaxy number density at some point in terms of
only the mass density at the same point, stochasticity appears in the
relation, which is referred to as ``stochastic bias''. The stochasticity
arises because the galaxy number density is not merely a local function of the
mass density field but a nonlocal functional of it. Thus, the phenomenological
stochasticity of the bias should be accounted for by nonlocal features of
galaxy formation processes. Based on mathematical arguments, we show that
there are simple relations between biasing and nonlocality on linear scales of
density fluctuations, and that the stochasticity in Fourier space does not
exist on linear scales under a certain condition, even if the galaxy formation
itself is a complex nonlinear and nonlocal process. The stochasticity in real
space, however, arises from the scale dependence of the bias parameter $b$.
As examples, we derive the stochastic bias parameters of simple nonlocal models
of galaxy formation, i.e., the local Lagrangian bias models, the cooperative
model, and the peak model. We show that the stochasticity in real space is also
weak, except on the scales of nonlocality of the galaxy formation. Therefore,
we do not have to worry too much about the stochasticity on linear scales,
especially in Fourier space, even if we do not know the details of galaxy
formation process.
|
Large language models (LLMs) are being applied as actors for sequential
decision making tasks in domains such as robotics and games, utilizing their
general world knowledge and planning abilities. However, previous work does
little to explore what environment state information is provided to LLM actors
via language. Exhaustively describing high-dimensional states can impair
performance and raise inference costs for LLM actors. Previous LLM actors avoid
the issue by relying on hand-engineered, task-specific protocols to determine
which features to communicate about a state and which to leave out. In this
work, we propose Brief Language INputs for DEcision-making Responses (BLINDER),
a method for automatically selecting concise state descriptions by learning a
value function for task-conditioned state descriptions. We evaluate BLINDER on
the challenging video game NetHack and a robotic manipulation task. Our method
improves task success rate, reduces input size and compute costs, and
generalizes between LLM actors.
|
A biopsy is the only diagnostic procedure for accurate histological
confirmation of breast cancer. When sonographic placement is not feasible, a
Magnetic Resonance Imaging (MRI)-guided biopsy is often preferred. The lack of
real-time imaging information and the deformations of the breast make it
challenging to bring the needle precisely towards the tumour detected in
pre-interventional Magnetic Resonance (MR) images. The current manual
MRI-guided biopsy workflow is inaccurate and would benefit from a technique
that allows real-time tracking and localisation of the tumour lesion during
needle insertion. This paper proposes a robotic setup and software architecture
to assist the radiologist in targeting MR-detected suspicious tumours. The
approach benefits from image fusion of preoperative images with intraoperative
optical tracking of markers attached to the patient's skin. A hand-mounted
biopsy device has been constructed with an actuated needle base to drive the
tip toward the desired direction. The steering commands may be provided both by
user input and by computer guidance. The workflow is validated through phantom
experiments. On average, the suspicious breast lesion is targeted with a radius
down to 2.3 mm. The results suggest that robotic systems taking into account
breast deformations have the potential to tackle this clinical challenge.
|
We study the $2\rightarrow1$ process of electron-positron pair annihilation
to a single photon in a plane-wave background. The probability of the process
in a pulsed plane wave is presented, and a locally constant field approximation
is derived and benchmarked against exact results. The stricter kinematics of
annihilation (compared to the $1\rightarrow2$ processes usually studied) leads
to a stronger dependence on the incoming particle states. We demonstrate this
by studying the effect that initial state wavepackets have on the annihilation
probability. The effect of annihilation in a distribution of particles is
studied by incorporating the process into Monte Carlo simulations.
|
Recent results obtained from the BES psi(2S) data are summarized, including
the measurement of the branching ratio of the J/psi to leptons, B(J/psi -> l^+
l^-) = 5.87 +/- 0.04 +/- 0.09, many psi(2S) and chi_c branching ratios,
information on the rho-pi puzzle, and measurements of the mass of the chi_c0,
m(chi_c0) = 3414.1 +/- 0.6 +/- 0.8 MeV, and the eta_c, m(eta_c) = 2975.8 +/-
3.9 +/- 1.2 MeV.
|
Let N(n, t) be the minimal number of points in a spherical t-design on the
unit sphere S^n in R^{n+1}. For each n >= 3, we prove a new asymptotic upper
bound N(n, t) <= C(n)t^{a_n}, where C(n) is a constant depending only on n, a_3
<= 4, a_4 <= 7, a_5 <= 9, a_6 <= 11, a_7 <= 12, a_8 <= 16, a_9 <= 19, a_10 <=
22, and a_n < (n/2) log_2(2n) for n > 10.
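Restating the bound in LaTeX for readability (same content as the plain-text formula above):

```latex
\[
  N(n,t) \le C(n)\, t^{a_n},
  \qquad
  a_n < \tfrac{n}{2}\log_2(2n) \quad (n > 10),
\]
% with a_3 <= 4, a_4 <= 7, a_5 <= 9, a_6 <= 11, a_7 <= 12,
% a_8 <= 16, a_9 <= 19, a_10 <= 22.
```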
|
For a large class of quantized ergodic flows the quantum ergodicity theorem
due to Shnirelman, Zelditch, Colin de Verdi\`ere and others states that almost
all eigenfunctions become equidistributed in the semiclassical limit. In this
work we first give a short introduction to the formulation of the quantum
ergodicity theorem for general observables in terms of pseudodifferential
operators and show that it is equivalent to the semiclassical eigenfunction
hypothesis for the Wigner function in the case of ergodic systems. Of great
importance is the rate at which the quantum mechanical expectation values of
an observable tend to their mean value. This is studied numerically for three
Euclidean billiards (stadium, cosine and cardioid billiard) using up to 6000
eigenfunctions. We find that in configuration space the rate of quantum
ergodicity is strongly influenced by localized eigenfunctions like bouncing
ball modes or scarred eigenfunctions. We give a detailed discussion and
explanation of these effects using a simple but powerful model. For the rate of
quantum ergodicity in momentum space we observe a slower decay. We also study
the suitably normalized fluctuations of the expectation values around their
mean, and find good agreement with a Gaussian distribution.
|
We show that three fixed point structures equipped with (sequential)
composition, a sum operation, and a fixed point operation share the same valid
equations. These are the theories of (context-free) languages, (regular) tree
languages, and simulation equivalence classes of (regular) synchronization
trees (or processes). The results reveal a close relationship between classical
language theory and process algebra.
|
The dynamics of drop impact on solid surfaces can be changed significantly by
tuning the elasticity of the solid. Most prominently, the substrate deformation
causes an increase in the splashing threshold as compared to impact onto
perfectly rigid surfaces, and can thus lead to splash suppression. Here, we
experimentally determine the splashing threshold for impact on thin membranes
as a function of the tension in the membrane and its elastic properties. The
drop dynamics is correlated to the membrane deformation, which is
simultaneously measured using a laser profilometry technique. The experimental
results enable us to adapt current models for splashing, showing quantitatively
how substrate deformation alters the splashing threshold.
|
A group of Mira variables in the solar neighborhood shows unusual spatial
motion in the Galaxy. To study this motion on a much larger scale in the
Galaxy, we surveyed 134 evolved stars off the Galactic plane in SiO maser
lines, obtaining accurate radial velocities for the 84 detected stars.
Together with past data on SiO maser sources, we analyzed the radial
velocities of a large sample of sources distributed over a distance range of
about 0.3 -- 6 kpc in the first Galactic quadrant. At Galactic longitudes
between 20 and 40
deg, we found a group of stars with large negative radial velocities, which
deviate by more than 100 km s^{-1} from the Galactic rotation. We show that
these deviant motions of maser stars are created by periodic gravitational
perturbation of the Bulge bar, and that the effect appears most strongly at
radii between corotation and outer Lindblad resonances. The resonance effect
can explain the displacement of positions from the Galactic plane as well.
|
Methods for automated discovery of causal relationships from
non-interventional data have received much attention recently. A widely used
and well understood model family is given by linear acyclic causal models
(recursive structural equation models). For Gaussian data both constraint-based
methods (Spirtes et al., 1993; Pearl, 2000) (which output a single equivalence
class) and Bayesian score-based methods (Geiger and Heckerman, 1994) (which
assign relative scores to the equivalence classes) are available. In
contrast, all current methods able to utilize non-Gaussianity in the data
(Shimizu et al., 2006; Hoyer et al., 2008) always return only a single graph or
a single equivalence class, and so are fundamentally unable to express the
degree of certainty attached to that output. In this paper we develop a
Bayesian score-based approach able to take advantage of non-Gaussianity when
estimating linear acyclic causal models, and we empirically demonstrate that,
at least on very modest size networks, its accuracy is as good as or better
than existing methods. We provide a complete code package (in R) which
implements all algorithms and performs all of the analysis provided in the
paper, and hope that this will further the application of these methods to
solving causal inference problems.
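To make concrete how non-Gaussianity can carry directional information in linear acyclic models, here is a deliberately simplified two-variable illustration (a LiNGAM-style heuristic, not the Bayesian score developed in the paper; the authors' R package is the reference implementation):

```python
# Toy illustration: in a linear non-Gaussian model, regressing in the causal
# direction leaves a maximally non-Gaussian residual, while the reverse
# regression mixes variables and looks more Gaussian. Crude proxy only.
import numpy as np
from scipy.stats import kurtosis

def residual_nongaussianity(x, y):
    """Regress y on x (least squares) and score the residual's kurtosis."""
    b = np.dot(x, y) / np.dot(x, x)
    return abs(kurtosis(y - b * x))

rng = np.random.default_rng(0)
x = rng.laplace(size=5000)                 # non-Gaussian cause
y = 0.8 * x + rng.laplace(size=5000)       # linear effect, non-Gaussian noise

# The direction with the larger residual non-Gaussianity is preferred.
print("x -> y score:", residual_nongaussianity(x, y))
print("y -> x score:", residual_nongaussianity(y, x))
```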
|
The equation of state (EOS), $w(z)$, is the most important parameter of dark
energy. We reconstruct the evolution of this EOS in a model-independent way
using the latest cosmic microwave background (CMB) data from Planck and other
observations, such as type Ia supernovae (SNe Ia), the baryonic acoustic
oscillation measurements (SDSS, 6dF, BOSS, and WiggleZ), and the Hubble
parameter value $H(z)$. The results show that the EOS is consistent with the
cosmological constant at the $2\sigma$ confidence level, with no preference
for dynamical dark energy. The uncorrelated constraints on the dark energy EOS
from Planck CMB data are much tighter than those from the WMAP 9-year CMB
data.
|
We propose a new framework by means of the dineutron condensate (DC) wave
function to describe the dineutron correlation, which is characterized by the
spatially strong correlation of a spin-zero neutron-neutron pair, in
neutron-rich nuclei with an active deformed core surrounded by valence
neutrons. Using the DC wave function for a 2$\alpha$+2n system, which
corresponds to a toy model for the $^{10}$Be system, we investigate the
neutron-neutron correlation around the $2\alpha$ core and discuss the mechanism
of the dineutron formation at the surface of finite nuclei. To investigate
dineutron correlations in realistic nuclear systems, we superpose the
antisymmetrized molecular dynamics (AMD) wave functions and the DC wave
functions. Applying the AMD+DC method to $^{10}$Be, we show effects of the DC
wave functions in the ground and excited $0^+$ states of $^{10}$Be and discuss
the dineutron correlation in them.
|
Here we explore a specific class of scalar field, dubbed quasi-quintessence,
which exhibits characteristics akin to ordinary matter. Specifically, we
investigate under which conditions this fluid can mitigate the classical
cosmological constant problem. We remark that, assuming a phase transition, it
is possible to predict inflationary dynamics within the metastable phase
triggered by the symmetry breaking mechanism. During this phase, we study
inflationary models incorporating this cancellation mechanism for vacuum energy
within the context of quasi-quintessence. There, we introduce four novel
potentials, categorized into two main groups, \emph{i.e.}, the Starobinsky-like
and symmetry breaking paradigms. Afterwards, we consider two distinct cases,
the first without coupling with the curvature, while the second exhibiting a
Yukawa-like interacting term. Hence, we compute the inflationary dynamics
within both the Jordan and Einstein frames and discuss the prospect of
unifying old and chaotic inflation into a single scheme. We therefore find the
tensor-to-scalar ratio and the spectral indices and conclude that the most
suitable approach involves the Starobinsky-like class of solutions. Indeed,
our findings
show that small field inflationary scenarios appear disfavored and propose
\emph{de facto} a novel technique to reobtain the Starobinsky potential without
passing through generalizations of Einstein's gravity. Last but not least, we
conjecture that vacuum energy may be converted into particles by virtue of the
geometric interacting term and speculate about the physics associated with the
Jordan and Einstein frames.
|
We propose an algorithm for the efficient and robust sampling of the
posterior probability distribution in Bayesian inference problems. The
algorithm combines the local search capabilities of the Manifold Metropolis
Adjusted Langevin transition kernels with the advantages of global exploration
by a population based sampling algorithm, the Transitional Markov Chain Monte
Carlo (TMCMC). The Langevin diffusion process is determined by either the
Hessian or the Fisher Information of the target distribution with appropriate
modifications for non positive definiteness. The present methods is shown to be
superior over other population based algorithms, in sampling probability
distributions for which gradients are available and is shown to handle
otherwise unidentifiable models. We demonstrate the capabilities and advantages
of the method in computing the posterior distribution of the parameters in a
Pharmacodynamics model, for glioma growth and its drug induced inhibition,
using clinical data.
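For readers unfamiliar with the local kernel, a minimal sketch of one preconditioned Metropolis-adjusted Langevin step is given below; the preconditioner M stands in for the (suitably modified) Hessian or Fisher information, and the standard-normal target is a placeholder, not the pharmacodynamics posterior:

```python
# Sketch of one preconditioned MALA step: proposal mean x + (eps^2/2) M grad,
# proposal covariance eps^2 M, with the Metropolis correction.
import numpy as np

def log_target(x):
    return -0.5 * np.dot(x, x)        # placeholder log-density

def grad_log_target(x):
    return -x

def mala_step(x, eps, M, rng):
    L = np.linalg.cholesky(M)         # M must be symmetric positive definite

    def drift(a):
        return a + 0.5 * eps**2 * M @ grad_log_target(a)

    prop = drift(x) + eps * L @ rng.standard_normal(x.size)

    def log_q(b, a):                  # log q(b | a), constants dropped
        d = b - drift(a)
        return -0.5 / eps**2 * d @ np.linalg.solve(M, d)

    log_alpha = (log_target(prop) + log_q(x, prop)
                 - log_target(x) - log_q(prop, x))
    return prop if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x, M = np.zeros(3), np.eye(3)
for _ in range(1000):
    x = mala_step(x, 0.5, M, rng)
print("last sample:", x)
```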
|
Bayesian analysis methods often use some form of iterative simulation such as
Monte Carlo computation. Models that involve discrete variables can sometimes
pose a challenge, either because the methods used do not support such variables
(e.g. Hamiltonian Monte Carlo) or because the presence of such variables can
slow down the computation. A common workaround is to marginalise the discrete
variables out of the model. While it is reasonable to expect that such
marginalisation would also lead to more time-efficient computations, to our
knowledge this has not been demonstrated beyond a few specialised models.
We explored the impact of marginalisation on the computational efficiency for
a few simple statistical models. Specifically, we considered two- and
three-component Gaussian mixture models, and also the Dawid-Skene model for
categorical ratings. We explored each with two software implementations of
Markov chain Monte Carlo techniques: JAGS and Stan. We directly compared
marginalised and non-marginalised versions of the same model using the samplers
on the same software.
Our results show that marginalisation on its own does not necessarily boost
performance. Nevertheless, the best performance was usually achieved with Stan,
which requires marginalisation. We conclude that there is no simple answer to
whether or not marginalisation is helpful. It is not necessarily the case that,
when turned 'on', this technique can be assured to provide computational
benefit independent of other factors, nor is it likely to be the model
component that has the largest impact on computational efficiency.
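To make the marginalisation step concrete, here is a minimal sketch for the two-component Gaussian mixture case: the per-observation component label is summed out of the likelihood (the form Stan requires), with placeholder parameter values:

```python
# Sketch: mixture log-likelihood with the discrete labels marginalised out,
#   log p(y_i) = log sum_k w_k * Normal(y_i | mu_k, sigma_k),
# computed stably with logsumexp.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def mixture_loglik(y, w, mu, sigma):
    comp = np.log(w) + norm.logpdf(y[:, None], mu[None, :], sigma[None, :])
    return logsumexp(comp, axis=1).sum()   # sum over observations

y = np.random.default_rng(2).normal(size=200)
print(mixture_loglik(y,
                     w=np.array([0.3, 0.7]),
                     mu=np.array([-1.0, 1.0]),
                     sigma=np.array([1.0, 0.5])))
```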
|
We report on a search for a dijet resonance in events with only two or three
jets and large imbalance in the total event transverse momentum. This search is
sensitive to the possible production of a new particle in association with a
$W$ or $Z$ boson, where the boson decays leptonically with one or more
neutrinos in the final state. We use the full data set collected by the CDF II
detector at the Tevatron collider at a proton-antiproton center-of-mass energy
of 1.96 TeV. These data correspond to an integrated luminosity of 9.1
fb$^{-1}$. We study the invariant mass distribution of the two jets with
highest transverse energy. We find good agreement between data and standard
model background expectations and measure the combined cross section for WW,
WZ, and ZZ production to be $13.8^{+3.0}_{-2.7}$ pb. No significant anomalies
are observed in the mass spectrum and 95% credibility level upper limits are
set on the production rates of a potential new particle in association with a
$W$ or $Z$ boson.
|
The increasing interest in studying the role of holographic dark energy in
the evolution of the very early universe motivates us to study it for the
scenario of warm inflation. Due to this scenario, the holographic dark energy,
which now drives inflation, has an interaction with the radiation. The case of
interacting dark energy also has received increasing interest in studying the
late-time cosmology. The infrared cutoff is taken as the Hubble length, and
all corrections are assumed to be absorbed into the parameter $c$, which
appears in the holographic dark energy. By comparing the predictions of the
model with observational data, the free constants of the model are determined.
Then, using these values of the constants, the energy density of inflation is
estimated. Next, we check the validity of the fundamental assumptions of warm
inflation, e.g. $T/H > 1$, which must hold during inflation, for the obtained
values of the constants. Gathering all outcomes, the model can be counted as a
suitable candidate for warm inflation.
|
Inspired by classical work of Bel and Robinson, a natural purely algebraic
construction of super-energy tensors for arbitrary fields is presented, having
good mathematical and physical properties. Remarkably, there appear quantities
with mathematical characteristics of energy densities satisfying the dominant
property, which provides super-energy estimates useful for global results and
helpful in other matters. For physical fields, higher order (super)^n-energy
tensors involving the field and its derivatives arise. In Special Relativity,
they provide infinitely many conserved quantities. The interchange of
super-energy between different fields is shown. The discontinuity propagation
law in Einstein-Maxwell fields is related to super-energy tensors, providing
quantities conserved along null hypersurfaces. Finally, conserved super-energy
currents are found for any minimally coupled scalar field whenever there is a
Killing vector.
|
Given multiple input signals, how can we infer node importance in a knowledge
graph (KG)? Node importance estimation is a crucial and challenging task that
can benefit a lot of applications including recommendation, search, and query
disambiguation. A key challenge towards this goal is how to effectively use
input from different sources. On the one hand, a KG is a rich source of
information, with multiple types of nodes and edges. On the other hand, there
are external input signals, such as the number of votes or pageviews, which can
directly tell us about the importance of entities in a KG. While several
methods have been developed to tackle this problem, their use of these external
signals has been limited as they are not designed to consider multiple signals
simultaneously. In this paper, we develop an end-to-end model MultiImport,
which infers latent node importance from multiple, potentially overlapping,
input signals. MultiImport is a latent variable model that captures the
relation between node importance and input signals, and effectively learns from
multiple signals with potential conflicts. Also, MultiImport provides an
effective estimator based on attentive graph neural networks. We ran
experiments on real-world KGs to show that MultiImport handles several
challenges involved with inferring node importance from multiple input signals,
and consistently outperforms existing methods, achieving up to 23.7% higher
NDCG@100 than the state-of-the-art method.
|
We introduce a highly robust GAN-based framework for digitizing a normalized
3D avatar of a person from a single unconstrained photo. While the input image
can be of a smiling person or taken in extreme lighting conditions, our method
can reliably produce a high-quality textured model of a person's face in
neutral expression and skin textures under diffuse lighting condition.
Cutting-edge 3D face reconstruction methods use non-linear morphable face
models combined with GAN-based decoders to capture the likeness and details of
a person but fail to produce neutral head models with unshaded albedo textures
which is critical for creating relightable and animation-friendly avatars for
integration in virtual environments. The key challenge for existing methods
is the lack of training and ground-truth data containing normalized 3D faces.
We propose a two-stage approach to address this problem. First, we adopt
a highly robust normalized 3D face generator by embedding a non-linear
morphable face model into a StyleGAN2 network. This allows us to generate
detailed but normalized facial assets. This inference is then followed by a
perceptual refinement step that uses the generated assets as regularization to
cope with the limited available training samples of normalized faces. We
further introduce a Normalized Face Dataset, which consists of a combination
of photogrammetry scans, carefully selected photographs, and generated fake
people with neutral expressions in diffuse lighting conditions. While our
prepared dataset contains two orders of magnitude fewer subjects than
cutting-edge
GAN-based 3D facial reconstruction methods, we show that it is possible to
produce high-quality normalized face models for very challenging unconstrained
input images, and demonstrate superior performance to the current
state-of-the-art.
|
In this paper, we generalize many results from John Conway and Alex Ryba's
paper, \textit{The extra Fibonacci series and the Empire State Building}, where
we replace the Fibonacci sequence with the Tribonacci sequence. We study the
Tribonacci array, which we also call \textit{the Trithoff array} to emphasize
the connection to the Wythoff array. We describe 13 new sequences.
|
Understanding the mechanisms behind the mobility of skilled workers is of
paramount importance for policy making. The scarcity of official measurements
motivates the use of digital trace data extracted from ORCID public records.
We use such data to investigate European regions, studied at the NUTS2 level,
over the time horizon of 2009 to 2020. We present a novel perspective in which
regions' roles are dictated by the overall activity of the research community,
contradicting the common brain-drain interpretation of the phenomenon. We find
that high mobility is usually correlated with strong university prestige, a
high magnitude of investments, and an overall good schooling level in a
region.
|
The $\tau$ lepton anomalous magnetic moment, $a_\tau = \frac{g_{\tau}-2}{2}$,
has so far been measured with a precision of only several percent, despite its
high sensitivity to physics beyond the Standard Model, such as compositeness
or Supersymmetry. A new study is presented to improve the sensitivity of the
$a_\tau $ measurement with photon-photon interactions from ultra-peripheral
lead-lead collisions at LHC. The theoretical approach used in this work is
based on an effective Lagrangian and on a photon flux implemented in the
MadGraph5 Monte Carlo simulation. Using a multivariate analysis to discriminate
the signal from the background processes, a sensitivity to the anomalous
magnetic moment of $a_{\tau} = 0^{+0.011}_{-0.019}$ is obtained at 95\%
CL with a dataset corresponding to an integrated luminosity of 2 nb$^{-1}$ of
lead-lead collisions and assuming a conservative 10\% systematic uncertainty.
The present results are compared with previous calculations and available
measurements.
|
We develop a Birman-Schwinger principle for the spherically symmetric,
asymptotically flat Einstein-Vlasov system. It characterizes stability
properties of steady states such as the positive definiteness of an
Antonov-type operator or the existence of exponentially growing modes in terms
of a one-dimensional variational problem for a Hilbert-Schmidt operator. This
requires a refined analysis of the operators arising from linearizing the
system, which uses action-angle type variables. For the latter, a single-well
structure of the effective potential for the particle flow of the steady state
is required. This natural property can be verified for a broad class of
singularity-free steady states. As a particular example for the application of
our Birman-Schwinger principle we consider steady states where a Schwarzschild
black hole is surrounded by a shell of Vlasov matter. We prove the existence of
such steady states and derive linear stability if the mass of the Vlasov shell
is small compared to the mass of the black hole.
|
Counterion adsorption on a flexible polyelectrolyte chain in a spherical
cavity is considered by taking a "permuted" charge distribution on the chain so
that the "adsorbed" counterions are allowed to move along the backbone. We
compute the degree of ionization by using self-consistent field theory (SCFT)
and compare with the previously developed variational theory. Analysis of
various contributions to the free energy in both theories reveals that the
equilibrium degree of ionization is attained mainly as an interplay of the
adsorption energy of counterions on the backbone, the translational entropy of
the small ions, and their correlated density fluctuations. The degree of
ionization computed from SCFT is significantly lower than that from the
variational formalism. The difference is entirely due to the density
fluctuations of the
small ions in the system, which are accounted for in the variational procedure.
When these fluctuations are deliberately suppressed in the truncated
variational procedure, there emerges a remarkable quantitative agreement in the
various contributing factors to the equilibrium degree of ionization, in spite
of the fundamental differences in the approximations and computational
procedures used in these two schemes. Nevertheless, since the significant
effects from density fluctuations of small ions are not captured by the SCFT,
and due to the close agreement between SCFT and the other contributing factors
in the more transparent variational procedure, the latter is a better
computational tool for obtaining the degree of ionization.
|
The $\mathrm{3D}$ Navier--Stokes system, under Lions boundary conditions, is
proven to be approximately controllable provided a suitable saturating set does
exist. An explicit saturating set for $\mathrm{3D}$ rectangles is given.
|
Let $G$ be a Garside group with Garside element $\Delta$, and let $\Delta^m$
be the minimal positive central power of $\Delta$. An element $g\in G$ is said
to be 'periodic' if some power of it is a power of $\Delta$. In this paper, we
study periodic elements in Garside groups and their conjugacy classes.
We show that the periodicity of an element does not depend on the choice of a
particular Garside structure if and only if the center of $G$ is cyclic; if
$g^k=\Delta^{ka}$ for some nonzero integer $k$, then $g$ is conjugate to
$\Delta^a$; every finite subgroup of the quotient group $G/\langle\Delta^m\rangle$ is
cyclic.
By a classical theorem of Brouwer, Ker\'ekj\'art\'o and Eilenberg, an
$n$-braid is periodic if and only if it is conjugate to a power of one of two
specific roots of $\Delta^2$. We generalize this to Garside groups by showing
that every periodic element is conjugate to a power of a root of $\Delta^m$.
We introduce the notions of slimness and precentrality for periodic elements,
and show that the super summit set of a slim, precentral periodic element is
closed under any partial cycling. For the conjugacy problem, we may assume the
slimness without loss of generality. For the Artin groups of type $A_n$, $B_n$,
$D_n$, $I_2(e)$ and the braid group of the complex reflection group of type
$(e,e,n)$, endowed with the dual Garside structure, we may further assume the
precentrality.
|
A new idea for neutrino mass was proposed recently, where its smallness is
not due to the seesaw mechanism, i.e. not inversely proportional to some large
mass scale. It comes from a one-loop mechanism with dark matter in the loop,
consisting of singlet Majorana fermions $N_i$ with masses of order 10 keV;
neutrino masses are scaled down from them by factors of about $10^{-5}$. We
discuss how this model may be implemented with the non-Abelian discrete
symmetry $A_4$ for neutrino mixing, and consider the phenomenology of $N_i$ as
well as the extra scalar doublet $(\eta^+,\eta^0)$.
|
A deterministic mathematical model for the polymerization of hyperbranched
molecules accounting for substitution, cyclization, and shielding effect has
been developed as a system of nonlinear population balances. The solution
obtained by a novel approximation method shows perfect agreement with the
analytical solution in limiting cases and provides, for the first time in this
class of polymerization problems, full multidimensional results.
|
Many searches for axion cold dark matter rely on the use of tunable
electromagnetic resonators. Current detectors operate at or near microwave
frequencies and use cylindrical cavities with cylindrical tuning rods. The
cavity performance strongly impacts the signal power of the detector, which is
expected to be very small even under optimal conditions. There is strong
motivation to characterize these microwave cavities and improve their
performance in order to maximize the achievable signal power. We present the
results of a study characterizing the HAYSTAC (Haloscope At Yale Sensitive to
Axion Cold dark matter) cavity using bead perturbation measurements and
detailed 3D electromagnetic simulations. This is the first use of bead
perturbation methods to characterize an axion haloscope cavity. In this study,
we measured the impacts of misalignments on the order of 0.001 inches and demonstrated
that the same impacts can be predicted using electromagnetic simulations. We
also performed a detailed study of mode crossings and hybridization between the
TM$_{010}$ mode used in operation and other cavity modes. This mixing limits
the tuning range of the cavity that can be used during an axion search. By
characterizing each mode crossing in detail, we show that some mode crossings
are benign and are potentially still useful for data collection. The level of
observed agreement between measurements and simulations demonstrates that
finite element modeling can capture non-ideal cavity behavior and the impacts
of very small imperfections. 3D electromagnetic simulations and bead
perturbation measurements are standard tools in the microwave engineering
community, but they have been underutilized in axion cavity design. This work
demonstrates their potential to improve understanding of existing cavities and
to optimize future designs.
|
Rb2Ti2O5 (RTO) has recently been demonstrated to be a solid electrolyte,
producing colossal capacitance when interfaced with metals. In order to
understand the mechanisms leading to such colossal equivalent permittivity (up
to four orders of magnitude above state-of-the-art values), the charge
distribution in RTO is a key feature to be investigated. In the present
article, this charge distribution is probed using the pressure-wave-propagation
method, in devices made of RTO single crystals or polycrystals sandwiched
between two metallic electrodes. Remarkably enough, in both types of samples,
negative charges are found to accumulate inside RTO, near the anode, while the
electric field near the cathode remains zero. This proves that the ionic
carriers are predominantly negatively charged and provides an explanation for
the colossal capacitance. The latter takes place only at the anode while the
cathode is virtually shifted into the solid electrolyte.
|
Detecting failures and identifying their root causes promptly and accurately
is crucial for ensuring the availability of microservice systems. A typical
failure troubleshooting pipeline for microservices consists of two phases:
anomaly detection and root cause analysis. While various existing works on root
cause analysis require accurate anomaly detection, there is no guarantee of
accurate estimation with anomaly detection techniques. Inaccurate anomaly
detection results can significantly affect the root cause localization results.
To address this challenge, we propose BARO, an end-to-end approach that
integrates anomaly detection and root cause analysis for effectively
troubleshooting failures in microservice systems. BARO leverages the
Multivariate Bayesian Online Change Point Detection technique to model the
dependency within multivariate time-series metrics data, enabling it to detect
anomalies more accurately. BARO also incorporates a novel nonparametric
statistical hypothesis testing technique for robustly identifying root causes,
which is less sensitive to the accuracy of anomaly detection compared to
existing works. Our comprehensive experiments conducted on three popular
benchmark microservice systems demonstrate that BARO consistently outperforms
state-of-the-art approaches in both anomaly detection and root cause analysis.
|
Anomaly detection in crowds enables early rescue response. A plug-and-play
smart camera for crowd surveillance has numerous constraints different from
typical anomaly detection: the training data cannot be used iteratively; there
are no training labels; and training and classification need to be performed
simultaneously. We tackle all these constraints with our approach in this
paper. We propose a Core Anomaly-Detection (CAD) neural network which learns
the motion behavior of objects in the scene with an unsupervised method. On
average over standard datasets, CAD with a single epoch of training shows a
percentage increase in Area Under the Curve (AUC) of 4.66% and 4.9% compared to
the best results with convolutional autoencoders and convolutional LSTM-based
methods, respectively. With a single epoch of training, our method improves the
AUC by 8.03% compared to the convolutional LSTM-based approach. We also propose
an Expectation Maximization filter which chooses samples for training the core
anomaly-detection network. The overall framework improves the AUC by 24.87%
compared to a future-frame-prediction-based approach when crowd anomaly
detection is performed on a video stream. We believe our work is the first step towards
using deep learning methods with autonomous plug-and-play smart cameras for
crowd anomaly detection.
|
This paper contributes to the challenge of skeleton-based human action
recognition in videos. The key step is to develop a generic network
architecture to extract discriminative features for the spatio-temporal
skeleton data. In this paper, we propose a novel module, namely Logsig-RNN,
which is the combination of the log-signature layer and recurrent type neural
networks (RNNs). The former one comes from the mathematically principled
technology of signatures and log-signatures as representations for streamed
data, which can manage high sample rate streams, non-uniform sampling and time
series of variable length. It serves as an enhancement of the recurrent layer,
which can be conveniently plugged into neural networks. Besides we propose two
path transformation layers to significantly reduce path dimension while
retaining the essential information fed into the Logsig-RNN module. Finally,
numerical results demonstrate that replacing the RNN module by the Logsig-RNN
module in SOTA networks consistently improves the performance on both Chalearn
gesture data and NTU RGB+D 120 action data in terms of accuracy and robustness.
In particular, we achieve the state-of-the-art accuracy on Chalearn2013 gesture
data by combining simple path transformation layers with the Logsig-RNN. Codes
are available at https://github.com/steveliao93/GCN_LogsigRNN.
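To give a flavor of what the log-signature layer consumes, here is a library-free sketch of the level-2 log-signature of one path window (level 1 is the total increment; level 2 the Lévy areas); a practical Logsig-RNN would use a signature package and possibly higher truncation levels, as in the linked repository:

```python
# Level-2 log-signature of a d-dimensional path window (numpy only).
import numpy as np

def logsig_level2(path):
    """path: (T, d) array of stream samples for one time window."""
    dx = np.diff(path, axis=0)              # (T-1, d) increments
    level1 = dx.sum(axis=0)                 # total displacement
    cum = np.cumsum(dx, axis=0) - dx        # increments strictly before t
    # Levy areas A_ij = 0.5 * sum_{s<t} (dx_{s,i} dx_{t,j} - dx_{s,j} dx_{t,i})
    A = 0.5 * (cum.T @ dx - dx.T @ cum)     # antisymmetric (d, d)
    iu = np.triu_indices(path.shape[1], k=1)
    return np.concatenate([level1, A[iu]])  # length d + d*(d-1)/2

window = np.random.default_rng(3).standard_normal((20, 3))
print(logsig_level2(window))                # 3 + 3 = 6 features per window
```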
|
An important problem in space-time adaptive detection is the estimation of
the large p-by-p interference covariance matrix from training signals. When the
number of training signals n is greater than 2p, existing estimators are
generally considered to be adequate, as demonstrated by fixed-dimensional
asymptotics. But in the low-sample-support regime (n < 2p or even n < p)
fixed-dimensional asymptotics are no longer applicable. The remedy undertaken
in this paper is to consider the "large dimensional limit" in which n and p go
to infinity together. In this asymptotic regime, a new type of estimator is
defined (Definition 2), shown to exist (Theorem 1), and shown to be
detection-theoretically ideal (Theorem 2). Further, asymptotic conditional
detection and false-alarm rates of filters formed from this type of estimator
are characterized (Theorems 3 and 4) and shown to depend only on data that is
given, even for non-Gaussian interference statistics. The paper concludes with
several Monte Carlo simulations that compare the performance of the estimator
in Theorem 1 to the predictions of Theorems 2-4, showing in particular higher
detection probability than Steiner and Gerlach's Fast Maximum Likelihood
estimator.
|
Our ability to study the properties of the interstellar medium (ISM) in the
earliest galaxies will rely on emission line diagnostics at rest-frame
ultraviolet (UV) wavelengths. In this work, we identify metallicity-sensitive
diagnostics using UV emission lines. We compare UV-derived metallicities with
standard, well-established optical metallicities using a sample of galaxies
with rest-frame UV and optical spectroscopy. We find that the He2-O3C3
diagnostic (He II 1640 / C III 1906,1909 vs. O III 1666 / C III 1906,1909) is a
reliable metallicity tracer, particularly at low metallicity (12+log(O/H) < 8),
where stellar contributions are minimal. We find that the Si3-O3C3 diagnostic
(Si III 1883 / C III 1906,1909 vs. O III 1666 / C III 1906,1909) is a reliable
metallicity tracer, though with large scatter (0.2-0.3 dex), which we suggest
is driven by variations in gas-phase abundances. We find that the C4-O3C3
diagnostic (C IV 1548,1550 / O III 1666 vs. O III 1666 / C III 1906,1909)
correlates poorly with optically-derived metallicities. We discuss possible
explanations for these discrepant metallicity determinations, including the
hardness of the ionizing spectrum, contribution from stellar wind emission, and
non-solar-scaled gas-phase abundances. Finally, we provide two new UV oxygen
abundance diagnostics, calculated from polynomial fits to the model grid
surface in the He2-O3C3 and Si3-O3C3 diagrams.
|
Differentiable architecture search (DARTS) is an effective method for
data-driven neural network design based on solving a bilevel optimization
problem. Despite its success in many architecture search tasks, there are still
some concerns about the accuracy of first-order DARTS and the efficiency of
second-order DARTS. In this paper, we formulate a single-level alternative and
a relaxed architecture search (RARTS) method that utilizes the whole dataset in
architecture learning via both data and network splitting, without involving
mixed second derivatives of the corresponding loss functions like DARTS. In our
formulation of network splitting, two networks with different but related
weights cooperate in search of a shared architecture. The advantage of RARTS
over DARTS is justified by a convergence theorem and an analytically solvable
model. Moreover, RARTS outperforms DARTS and its variants in accuracy and
search efficiency, as shown by our experimental results. For the task of
searching the topological architecture, i.e., the edges and the operations,
RARTS obtains higher accuracy with a 60\% reduction in computational cost
compared with second-order DARTS on CIFAR-10. RARTS continues to outperform
DARTS upon
transfer to ImageNet and is on par with recent variants of DARTS even though
our innovation is purely on the training algorithm without modifying search
space. For the task of searching width, i.e., the number of channels in
convolutional layers, RARTS also outperforms the traditional network pruning
benchmarks. Further experiments on the public architecture search benchmark
like NATS-Bench also support the preeminence of RARTS.
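The following schematic (PyTorch) shows the kind of first-order alternation the formulation above suggests: two weight copies trained on split data cooperate on one shared architecture parameter, with no mixed second derivatives. This is our reading of the abstract, with made-up shapes and a toy two-operation cell, not the authors' implementation:

```python
# Schematic RARTS-style alternation: weight copies w1, w2 descend their own
# data splits; the shared architecture alpha descends both losses on the
# opposite splits. First-order gradients only.
import torch

def loss_fn(w, a, batch):
    x, y = batch
    mix = torch.softmax(a[0], dim=0)        # weights over two toy operations
    h = x @ w[0].t()
    pred = mix[0] * h + mix[1] * torch.relu(h)
    return ((pred - y) ** 2).mean()

def rarts_like_step(w1, w2, alpha, batch1, batch2, lr_w, lr_a):
    for w, batch in ((w1, batch1), (w2, batch2)):
        g = torch.autograd.grad(loss_fn(w, alpha, batch), w)
        with torch.no_grad():
            for p, gp in zip(w, g):
                p -= lr_w * gp
    loss = loss_fn(w1, alpha, batch2) + loss_fn(w2, alpha, batch1)
    ga = torch.autograd.grad(loss, alpha)
    with torch.no_grad():
        for a, gp in zip(alpha, ga):
            a -= lr_a * gp

w1 = [torch.randn(4, 3, requires_grad=True)]
w2 = [torch.randn(4, 3, requires_grad=True)]
alpha = [torch.zeros(2, requires_grad=True)]
batch1 = (torch.randn(8, 3), torch.randn(8, 4))
batch2 = (torch.randn(8, 3), torch.randn(8, 4))
rarts_like_step(w1, w2, alpha, batch1, batch2, lr_w=0.1, lr_a=0.01)
print("architecture mix:", torch.softmax(alpha[0], dim=0))
```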
|
We consider a fully-connected wireless gossip network which consists of a
source and $n$ receiver nodes. The source updates itself with a Poisson process
and also sends updates to the nodes as Poisson arrivals. Upon receiving the
updates, the nodes update their knowledge about the source. The nodes gossip
the data among themselves in the form of Poisson arrivals to disperse their
knowledge about the source. The total gossiping rate is bounded by a
constraint. The goal of the network is to be as timely as possible with the
source. In this work, we propose ASUMAN, a distributed opportunistic gossiping
scheme, where after each time the source updates itself, each node waits for a
time proportional to its current age and broadcasts a signal to the other nodes
of the network. This allows the nodes in the network which have higher age to
remain silent and only the low-age nodes to gossip, thus utilizing a
significant portion of the constrained total gossip rate. We calculate the
average age for a typical node in such a network with symmetric settings and
show that the theoretical upper bound on the age scales as $O(1)$. ASUMAN, with
an average age of $O(1)$, offers significant gains compared to a system where
the nodes just gossip blindly with a fixed update rate in which case the age
scales as $O(\log n)$.
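A toy sketch of the opportunistic mechanism (timers proportional to age, freshest node wins) may help; the rates and constants are illustrative choices, not values from the analysis:

```python
# Toy ASUMAN-style round: after a source self-update, each node arms a timer
# proportional to its current age; the freshest node's timer expires first,
# and nodes that hear its broadcast stay silent.
import numpy as np

rng = np.random.default_rng(4)
n = 50
age = rng.exponential(1.0, n)     # node ages right after the source update
c = 0.01                          # wait-time per unit age (illustrative)

wait = c * age                    # age-proportional back-off timers
winner = int(np.argmin(wait))     # freshest node broadcasts first
print("gossiping node:", winner, "with age:", age[winner])

# Only nodes whose timers are within the broadcast propagation window keep
# gossiping, so almost all of the gossip-rate budget goes to low-age nodes.
window = 1.05 * wait[winner]
print("active gossipers:", np.flatnonzero(wait <= window))
```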
|
We study the local-in-time hydrodynamic limit of the relativistic Boltzmann
equation using a Hilbert expansion. More specifically, we prove the existence
of local solutions to the relativistic Boltzmann equation that are nearby the
local relativistic Maxwellian constructed from a class of solutions to the
relativistic Euler equations that includes a large subclass of near-constant,
non-vacuum fluid states. In particular, for small Knudsen number, these
solutions to the relativistic Boltzmann equation have dynamics that are
effectively captured by corresponding solutions to the relativistic Euler
equations.
|
One possible explanation for the present observed acceleration of the
Universe is the breakdown of homogeneity and isotropy due to the formation of
non-linear structures. How inhomogeneities affect the averaged cosmological
expansion rate and lead to late-time acceleration is generally attributed to
some backreaction mechanism. General Relativity together with pressure-free
matter has until recently been considered as the sole
ingredients for averaged calculations. In this communication we focus our
attention on more general scenarios, including imperfect fluids as well as
alternative theories of gravity, and apply an averaging procedure to them in
order to determine possible backreaction effects. For illustrative purposes, we
present our results for dark energy models, quintessence and Brans-Dicke
theories. We also provide a discussion about the limitations of frame choices
in the averaging procedure.
|
We investigate the spin relaxation of $p$-type GaAs quantum wires by
numerically solving the fully microscopic kinetic spin Bloch equations. We find
that the quantum-wire size influences the spin relaxation time effectively by
modulating the energy spectrum and the heavy-hole--light-hole mixing of wire
states. The effects of quantum-wire size, temperature, hole density, and
initial polarization are investigated in detail. We show that, depending on the
situation, the spin relaxation time can either increase or decrease with hole
density. Due to the different subband structure and effects arising from
spin-orbit coupling, many spin-relaxation properties are quite different from
those of holes in the bulk or in quantum wells, and the inter-subband
scattering makes a marked contribution to the spin relaxation.
|
Photons and electrons are the key quantum media for quantum information
processing based on solid-state devices. The essential ingredients needed to
realize a quantum repeater were investigated and their underlying physics
revealed. The relevant elementary processes of quantum state transfer
between a single photon and a single electron were analyzed, to clarify the
conditions to be satisfied to achieve the high fidelity of the quantum state
transfer. An optical method based on the Faraday rotation was proposed to carry
out the Bell measurement of two electrons which is a key operation in the
entanglement swapping for the quantum repeater and its feasibility was
confirmed. Also investigated was the quantum dynamics in the electron-nuclei
coupled spin system in quantum dots and a couple of new phenomena were
predicted related to the correlations induced by the hyperfine interaction,
namely, bunching and revival in the electron spin measurements. These findings
will pave the way to accomplish the efficient and robust quantum repeater and
nuclear spin quantum memory.
|
The neutrino option is a scenario where the electroweak scale, and thereby
the Higgs mass, is generated simultaneously with neutrino masses in the seesaw
model. This occurs via the leading one loop and tree level diagrams matching
the seesaw model onto the Standard Model Effective Field Theory. We advance the
study of this scenario by determining one loop corrections to the leading order
matching results systematically, performing a detailed numerical analysis of
the consistency of this approach with neutrino data and the Standard Model
particle masses, and by examining the embedding of this scenario into a more
ultraviolet complete model. We find that the neutrino option remains a viable
and intriguing scenario to explain the origin of observed particle masses.
|
A significant population of distant sub-millimeter-selected galaxies (SMGs)
with powerful dust continuum emission that matches the luminosity of the
brightest QSOs and exceeds that of most extreme local galaxies detected by
IRAS, has been known for almost a decade. The full range of powerful ground-
and space-based facilities has been used to investigate them, and a good deal
of information about their properties has been gathered. This meeting addresses
some of the key questions for better understanding their properties. While
continuum detection is relatively efficient, a spectrum is always required both
to determine a distance/luminosity, and to probe astrophysics: excitation
conditions, the total mass, the mass distribution and degree of dynamical
relaxation. Once a redshift is known, then the associated stellar mass can be
found, and more specialized spectrographs can be used to search for specific
line diagnostics. The first generation of submm surveys has yielded a
combined sample of several hundred SMGs. Here we discuss the size and follow-up
of future SMG samples that will be compiled in much larger numbers by
JCMT-SCUBA2, Herschel, Planck, LMT, ALMA, and a future large-aperture
(25-m-class) submm/far-IR wide-field ground-based telescope CCAT, planned to
operate at a Chilean site even better than ALMA's. The issues concerning
placing SMGs in the context of their environments and other populations of
high-redshift galaxies are discussed.
|
We consider the optimal value of information (VoI) problem, where the goal is
to sequentially select a set of tests with a minimal cost, so that one can
efficiently make the best decision based on the observed outcomes. Existing
algorithms are either heuristics with no guarantees, or scale poorly (with
exponential run time in terms of the number of available tests). Moreover,
these methods assume a known distribution over the test outcomes, which is
often not the case in practice. We propose an efficient sampling-based online
learning framework to address the above issues. First, assuming the
distribution over hypotheses is known, we propose a dynamic hypothesis
enumeration strategy, which allows efficient information gathering with strong
theoretical guarantees. We show that with sufficient amount of samples, one can
identify a near-optimal decision with high probability. Second, when the
parameters of the hypotheses distribution are unknown, we propose an algorithm
which learns the parameters progressively via posterior sampling in an online
fashion. We further establish a rigorous bound on the expected regret. We
demonstrate the effectiveness of our approach on a real-world interactive
troubleshooting application and show that one can efficiently make high-quality
decisions with low cost.
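As a concrete illustration of the online component, the following minimal sketch runs posterior (Thompson-style) sampling over unknown test-outcome parameters; the Beta-Bernoulli model, the entropy-based test selection, and all names here are illustrative stand-ins for the paper's hypothesis-distribution machinery, not its actual algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    n_tests, horizon = 5, 200
    true_p = rng.uniform(0.2, 0.8, n_tests)            # unknown outcome probabilities
    alpha, beta = np.ones(n_tests), np.ones(n_tests)   # Beta priors per test

    for t in range(horizon):
        p_hat = rng.beta(alpha, beta)                  # sample from the posterior
        # pick the test whose sampled outcome is most uncertain (max entropy)
        ent = -(p_hat * np.log(p_hat) + (1 - p_hat) * np.log(1 - p_hat))
        test = int(np.argmax(ent))
        outcome = rng.random() < true_p[test]          # observe the chosen test
        alpha[test] += outcome                         # conjugate posterior update
        beta[test] += 1 - outcome

Each round samples parameters from the current posterior, selects a test under the sampled model, and updates the posterior with the observed outcome; this sample-then-act loop is the mechanism behind regret bounds of the kind stated above.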
|
We consider the effects of a noisy magnetic field background over the fermion
propagator in QED, as an approximation to the spatial inhomogeneities that
would naturally arise in certain physical scenarios, such as heavy-ion
collisions or the quark-gluon plasma in the early stages of the evolution of
the Universe. We consider a classical, finite and uniform average magnetic
field background $\langle\mathbf{B}(\mathbf{x})\rangle = \mathbf{B}$, subject
to white-noise spatial fluctuations with auto-correlation of magnitude
$\Delta_B$. By means of the Schwinger representation of the propagator in the
average magnetic field as a reference system, we used the replica formalism to
study the effects of the magnetic noise in the form of renormalized
quasi-particle parameters, leading to an effective charge and an effective
refraction index, that depend not only on the energy scale, as usual, but also
on the magnitude of the noise $\Delta_B$ and the average field $\mathbf{B}$.
|
In positive muon spin rotation and relaxation ($\mu^+$SR) spectroscopy,
positive muons ($\mu^+$) implanted into solid oxides are conventionally treated
as immobile spin-probes at interstitial sites below room temperature. This is
because each $\mu^+$ is thought to be tightly bound to an oxygen atom in the
host lattice to form a muonic analogue of the hydroxy group. On the basis of
this concept, anomalies in $\mu^+$SR spectra observed in oxides have been
attributed in most cases to the intrinsic properties of host materials. On the
other hand, global $\mu^+$ diffusion with an activation energy of $\sim$0.1~eV
has been reported in some chemically-substituted perovskite oxides at cryogenic
temperatures, although the reason for the small activation energy despite the
formation of the strong O$\mu$ bond has not yet been quantitatively understood.
In this study, we investigated interstitial $\mu^+$ diffusion in the perovskite
oxide lattice using KTaO$_3$ cubic perovskite as a model system. We used the
$\mu^+$SR method and density functional theory calculations along with the
harmonic transition state theory to study this phenomenon both experimentally
and theoretically. Experimental activation energies for global $\mu^+$
diffusion obtained below room temperature were less than a quarter of the
calculated classical potential barrier height for a bottleneck $\mu^+$ transfer
path. The reduction in the effective barrier height could be explained by the
harmonic transition state theory with a zero-point energy correction; a
significant difference in zero-point energies for $\mu^+$ at the positions in
the O$\mu$ bonding equilibrium state and a bond-breaking transition state was
the primary cause of the reduction. This suggests that the assumption of
immobile $\mu^+$ in solid oxides is not always satisfied since such a
significant decrease in diffusion barrier height can also occur in other
oxides.
|
Millimeter wave beam alignment (BA) is a challenging problem, especially for
a large number of antennas. Compressed sensing (CS) tools have been exploited due
to the sparse nature of such channels. This paper presents a novel
deterministic CS approach for BA. Our proposed sensing matrix has a
Kronecker-based structure and is sparse, making it computationally
efficient. We show that our proposed sensing matrix satisfies the restricted
isometry property (RIP) condition, which guarantees the reconstruction of the
sparse vector. Our approach outperforms existing random beamforming techniques
in practical low signal to noise ratio (SNR) scenarios.
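A minimal numpy sketch of why a Kronecker-structured sensing matrix is computationally attractive (the factors below are random stand-ins; the paper's construction is deterministic and chosen to satisfy the RIP):

    import numpy as np

    rng = np.random.default_rng(1)
    N1, N2, M1, M2 = 8, 8, 4, 4              # N = N1*N2 antennas, M = M1*M2 measurements
    A1 = rng.standard_normal((M1, N1))       # stand-in factor matrices
    A2 = rng.standard_normal((M2, N2))
    x = rng.standard_normal(N1 * N2)         # sparse beamspace channel in practice

    y_full = np.kron(A1, A2) @ x             # explicit (M1*M2) x (N1*N2) matrix
    # same result computed from the small factors only (row-major vec identity)
    y_fast = (A1 @ x.reshape(N1, N2) @ A2.T).reshape(-1)
    assert np.allclose(y_full, y_fast)

The identity avoids ever forming the large Kronecker product, so the structure (together with the sparsity of the factors) translates directly into low complexity.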
|
In this paper, we introduce a neighbor embedding framework for manifold
alignment. We demonstrate the efficacy of the framework using a
manifold-aligned version of the uniform manifold approximation and projection
algorithm. We show that our algorithm can learn an aligned manifold that is
visually competitive with the embedding of the whole dataset.
|
Both Marcinkiewicz-Zygmund strong laws of large numbers (MZ-SLLNs) and
ordinary strong laws of large numbers (SLLNs) for plug-in estimators of general
statistical functionals are derived. It is used that if a statistical
functional is "sufficiently regular", then a (MZ-) SLLN for the estimator of
the unknown distribution function yields a (MZ-) SLLN for the corresponding
plug-in estimator. It is in particular shown that many L-, V- and risk
functionals are "sufficiently regular", and that known results on the strong
convergence of the empirical process of \alpha-mixing random variables can be
improved. The presented approach not only covers some known results but
also provides some new strong laws for plug-in estimators of particular
statistical functionals.
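For orientation, the classical i.i.d. Marcinkiewicz-Zygmund SLLN that is being transferred reads: if $\mathbb{E}|X_1|^p<\infty$ for some $1\le p<2$, then

$$ \frac{1}{n^{1/p}}\sum_{i=1}^{n}\bigl(X_i-\mathbb{E}X_1\bigr)\;\longrightarrow\;0\quad\text{almost surely}. $$

The transfer principle is then (stated loosely, not in the paper's exact generality): if the estimator of the unknown distribution function satisfies such a rate in a suitable norm and the functional $T$ is sufficiently regular at $F$, the same rate carries over to $T(\hat F_n)-T(F)$.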
|
In this work we study the charge-exchange reaction to the isobaric analog state
using two types of transition densities. We show that for projectiles that
probe mostly the surface of the nucleus rather than its interior,
distinct differences in the cross-section arise when the two types of
transition densities are employed. We demonstrate this by considering the
($^{3}$He,$t$) reaction.
|
Neuromorphic computing is a brain-like information processing paradigm that
requires adaptive learning mechanisms. A spiking neuro-evolutionary system is
used for this purpose; plastic resistive memories are implemented as synapses
in spiking neural networks. The evolutionary design process exploits parameter
self-adaptation and allows the topology and synaptic weights to be evolved for
each network in an autonomous manner. Variable resistive memories are the focus
of this research; each synapse has its own conductance profile which modifies
the plastic behaviour of the device and may be altered during evolution. These
variable resistive networks are evaluated on a noisy robotic dynamic-reward
scenario against two static resistive memories and a system containing standard
connections only. Results indicate that the extra behavioural degrees of
freedom available to the networks incorporating variable resistive memories
enable them to outperform the comparative synapse types.
|
We report electronic transmission properties of a tight binding Aharonov-Bohm
ring threaded by a magnetic flux, to one arm of which a finite cluster of atoms
has been attached from one side. We demonstrate that, by suitably choosing the
number of scatterers in each arm of the quantum ring and, by decoupling the
ring from the atomic cluster, the transmission across the ring can be
completely blocked when the flux threading the ring becomes equal to half the
fundamental flux quantum. A transmission resonance then occurs immediately as
the coupling between the ring and the impurity cluster is switched 'on'. It is
shown that the delta-like transmission resonances occur precisely at the
eigenvalues of the side-coupled chain of atoms. The 'switching' effect can be
observed either for all the eigenvalues of the isolated atomic cluster, or for
a selected set of them, depending on the number of scatterers in the arms of
the ring. The ring-dot coupling can be gradually increased to suppress the
oscillations in the magneto-transmission completely. However, the suppression
can lead either to a complete transparency or no transmission at all,
occasionally accompanied by a reversal of phase at special values of the
magnetic flux.
|
We propose a multivariate log-normal distribution to jointly model received
power, mean delay, and root mean square (rms) delay spread of wideband radio
channels, referred to as the standardized temporal moments. The model is
validated using experimental data collected from five different measurement
campaigns (four indoor and one outdoor scenario). We observe that the received
power, mean delay and rms delay spread are correlated random variables and,
therefore, should be simulated jointly. Joint models are able to capture the
structure of the underlying process, unlike the independent models considered
in the literature. The proposed model of the multivariate log-normal
distribution is found to be a good fit for a large number of wideband
data-sets.
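A minimal sketch of how such correlated temporal moments can be simulated jointly (the mean vector and covariance below are illustrative placeholders, not fitted values from the measurement campaigns):

    import numpy as np

    rng = np.random.default_rng(2)
    # log-domain mean and covariance of [received power, mean delay, rms delay spread]
    mu = np.array([-7.0, -7.5, -8.0])            # illustrative values only
    C = np.array([[0.30, 0.15, 0.10],
                  [0.15, 0.25, 0.12],
                  [0.10, 0.12, 0.20]])           # positive off-diagonals: correlation
    g = rng.multivariate_normal(mu, C, size=10_000)
    moments = np.exp(g)                          # jointly log-normal samples

    print(np.corrcoef(np.log(moments), rowvar=False))  # recovers the correlations

Sampling the moments independently would destroy exactly the off-diagonal structure that the joint model is meant to preserve.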
|
The NLP community has seen substantial recent interest in grounding to
facilitate interaction between language technologies and the world. However, as
a community, we use the term broadly to reference any linking of text to data
or non-textual modality. In contrast, Cognitive Science more formally defines
"grounding" as the process of establishing what mutual information is required
for successful communication between two interlocutors -- a definition which
might implicitly capture the NLP usage but differs in intent and scope. We
investigate the gap between these definitions and seek answers to the following
questions: (1) What aspects of grounding are missing from NLP tasks? Here we
present the dimensions of coordination, purviews and constraints. (2) How is
the term "grounding" used in the current research? We study the trends in
datasets, domains, and tasks introduced in recent NLP conferences. And finally,
(3) How to advance our current definition to bridge the gap with Cognitive
Science? We present ways to create new tasks or repurpose existing ones to
make advancements towards achieving a more complete sense of grounding.
|
In this study, the evolutionary development of labor is traced through a
theoretical analysis. Using GDP, an indicator of social welfare, as an example,
the economic value of housewives' labor is measured with an empirical model. To
this end, the concept of labor is first examined in orthodox (mainstream)
economic theories; then, abstracting from the labor-employment relationship, we
examine what effect the labor of unpaid housewives, who count as unemployed in
the capitalist system, could have on GDP. The theoretical analysis finds that
the changing human profile moves away from full rationality toward bounded
rationality and, accordingly, a heterogeneous individual profile. Women who
prefer to be housewives are identified as a new example of heterogeneous
individuals, as those who best fit the definition of boundedly rational
individuals. In the empirical analysis, housewife labor is the main variable:
for the case of Turkiye, the impact of housewife labor on GDP is calculated
using TurkStat employment data and the Atkinson inequality scale. The results
of the theoretical and empirical analyses are evaluated in the context of
labor-employment independence.
|
Differential Privacy (DP) is a mathematical framework that is increasingly
deployed to mitigate privacy risks associated with machine learning and
statistical analyses. Despite the growing adoption of DP, its technical privacy
parameters do not lend themselves to an intelligible description of the
real-world privacy risks associated with that deployment: the guarantee that
most naturally follows from the DP definition is protection against membership
inference by an adversary who knows all but one data record and has unlimited
auxiliary knowledge. In many settings, this adversary is far too strong to
inform how to set real-world privacy parameters.
One approach for contextualizing privacy parameters is via defining and
measuring the success of technical attacks, but doing so requires a systematic
categorization of the relevant attack space. In this work, we offer a detailed
taxonomy of attacks, showing the various dimensions of attacks and highlighting
that many real-world settings have been understudied. Our taxonomy provides a
roadmap for analyzing real-world deployments and developing theoretical bounds
for more informative privacy attacks. We operationalize our taxonomy by using
it to analyze a real-world case study, the Israeli Ministry of Health's recent
release of a birth dataset using DP, showing how the taxonomy enables
fine-grained threat modeling and provides insight towards making informed
privacy parameter choices. Finally, we leverage the taxonomy towards defining a
more realistic attack than previously considered in the literature, namely a
distributional reconstruction attack: we generalize Balle et al.'s notion of
reconstruction robustness to a less-informed adversary with distributional
uncertainty, and extend the worst-case guarantees of DP to this average-case
setting.
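For concreteness, the technical guarantee whose parameters need contextualizing: a mechanism $M$ is $(\varepsilon,\delta)$-differentially private if, for all neighbouring datasets $D$ and $D'$ (differing in one record) and all measurable sets $S$ of outputs,

$$ \Pr[M(D)\in S]\;\le\;e^{\varepsilon}\,\Pr[M(D')\in S]+\delta. $$

The worst-case adversary implicit in this definition knows everything except the one differing record, which is precisely the strength the taxonomy is designed to relax.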
|
Foundation models (FMs) are large neural networks trained on broad datasets,
excelling in downstream tasks with minimal fine-tuning. Human activity
recognition in video has advanced with FMs, driven by competition among
different architectures. However, high accuracies on standard benchmarks can
paint an artificially rosy picture, as they often overlook real-world factors
like changing camera perspectives. Popular benchmarks, mostly from YouTube or
movies, offer diverse views but only coarse actions, which are insufficient for
use-cases needing fine-grained, domain-specific actions. Domain-specific
datasets (e.g., for industrial assembly) typically use data from limited static
perspectives. This paper empirically evaluates how perspective changes affect
different FMs in fine-grained human activity recognition. We compare multiple
backbone architectures and design choices, including image- and video-based
models, and various strategies for temporal information fusion, including
commonly used score averaging and more novel attention-based temporal
aggregation mechanisms. This is the first systematic study of different
foundation models and specific design choices for human activity recognition
from unknown views, conducted with the goal of providing guidance for backbone
and temporal-fusion scheme selection. Code and models will be made publicly
available to the community.
|
We demonstrate resonant Faraday polarization rotation in plasmonic arrays of
bimetallic nano-ring resonators consisting of Au and Ni sections. This
metamaterial design makes it possible to optimize the trade-off between the enhancement of
magneto-optical effects and plasmonic dissipation. Although Ni sections
correspond to as little as ~6% of the total surface of the metamaterial, the
resulting magneto-optically induced polarization rotation is equal to that of a
continuous film. Such bimetallic metamaterials can be used in compact magnetic
sensors, active plasmonic components and integrated photonic circuits.
|
Fingerprint recognition on mobile devices is an important method for identity
verification. However, real fingerprints usually contain sweat and moisture
which leads to poor recognition performance. In addition, for rolling out
slimmer and thinner phones, technology companies reduce the size of recognition
sensors by embedding them with the power button. Therefore, the limited size of
fingerprint data also increases the difficulty of recognition. Denoising the
small-area wet fingerprint images to clean ones becomes crucial to improve
recognition performance. In this paper, we propose an end-to-end trainable
progressive guided multi-task neural network (PGT-Net). The PGT-Net includes a
shared stage and specific multi-task stages, enabling the network to train
binary and non-binary fingerprints sequentially. The binary information is
regarded as guidance for output enhancement which is enriched with the ridge
and valley details. Moreover, a novel residual scaling mechanism is introduced
to stabilize the training process. Experimental results on the FW9395 and
FT-lightnoised datasets provided by FocalTech show that PGT-Net has promising
performance on wet-fingerprint denoising and significantly reduces the false
rejection rate (FRR). On the FT-lightnoised dataset, the FRR of
fingerprint recognition can be reduced from 17.75% to 4.47%. On the FW9395
dataset, the FRR of fingerprint recognition can be reduced from 9.45% to
1.09%.
|
The integration of edge computing in next-generation mobile networks is
bringing low-latency and high-bandwidth ubiquitous connectivity to a myriad of
cyber-physical systems. This will further boost the increasing intelligence
that is being embedded at the edge in various types of autonomous systems,
where collaborative machine learning has the potential to play a significant
role. This paper discusses some of the challenges in multi-agent distributed
deep reinforcement learning that can occur in the presence of byzantine or
malfunctioning agents. As the simulation-to-reality gap gets bridged, the
probability of malfunctions or errors must be taken into account. We show how
wrong discrete actions can significantly affect the collaborative learning
effort. In particular, we analyze the effect of having a fraction of agents
that might perform the wrong action with a given probability. We study the
ability of the system to converge towards a common working policy through the
collaborative learning process based on the number of experiences from each of
the agents to be aggregated for each policy update, together with the fraction
of wrong actions from agents experiencing malfunctions. Our experiments are
carried out in a simulation environment using the Atari testbed for the
discrete action spaces, and advantage actor-critic (A2C) for the distributed
multi-agent training.
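A minimal sketch of the fault model studied here, assuming a fixed fraction of malfunctioning agents that each replace their sampled action with a wrong one with some probability (names and parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    n_agents, n_actions = 8, 4
    faulty_frac, p_wrong = 0.25, 0.5
    faulty = rng.random(n_agents) < faulty_frac        # fixed set of byzantine agents

    def act(agent, policy_probs):
        a = rng.choice(n_actions, p=policy_probs)      # action from the shared policy
        if faulty[agent] and rng.random() < p_wrong:   # malfunction: wrong discrete action
            a = rng.choice([i for i in range(n_actions) if i != a])
        return a

    probs = np.ones(n_actions) / n_actions
    joint_actions = [act(i, probs) for i in range(n_agents)]

Experiences produced this way are then aggregated into each policy update, which is where the interplay between batch composition and the fraction of wrong actions shows up.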
|
Experimental and computer-modeling studies of spectral properties of
crystalline AgCl doped with metal bismuth or bismuth chloride are performed.
A broad near-IR luminescence band in the 0.8--1.2 $\mu$m range, with a time
dependence described by two exponential components corresponding to lifetimes of
1.5 and 10.3 $\mu$s, is excited mainly by 0.39--0.44 $\mu$m radiation. Computer
modeling of probable Bi-related centers in the AgCl lattice is performed. On the
basis of the experimental and calculated data, a conclusion is drawn that the IR
luminescence can be caused by Bi$^+$ ion centers substituted for Ag$^+$ ions.
|
Warp-drives are solutions of general relativity widely considered unphysical
due to high negative energy requirements. While the majority of the literature
has focused on macroscopic solutions towards the goal of interstellar travel,
in this work we explore what happens in the small radius limit. In this regime
the magnitude of the total negative energy requirement becomes smaller than the
energy contained in a lightning bolt, more than 70 orders of magnitude less
than the original Alcubierre warp drive. Such an amount could conceivably be
generated with current technology by scaling up Casimir-like apparatuses. We
then describe a tubular distribution of externally-generated negative energy
which addresses the major issues plaguing macroscopic warp-drives and propose a
concrete mechanism to accelerate and decelerate a warp. A byproduct of warp
deceleration is the emission of a ray of high-energy particles. The detection
of such particles could be used as the backbone of a faster-than-light
communication device, reminiscent of the hyperwave of science fiction, even
though significant engineering challenges remain to achieve practical
communication.
|
Over the past decade, there has been tremendous progress in creating
synthetic media, mainly thanks to the development of powerful methods based on
generative adversarial networks (GAN). Very recently, methods based on
diffusion models (DM) have been gaining the spotlight. In addition to providing
an impressive level of photorealism, they enable the creation of text-based
visual content, opening up new and exciting opportunities in many different
application fields, from arts to video games. On the other hand, this property
is an additional asset in the hands of malicious users, who can generate and
distribute fake media perfectly adapted to their attacks, posing new challenges
to the media forensic community. With this work, we seek to understand how
difficult it is to distinguish synthetic images generated by diffusion models
from pristine ones and whether current state-of-the-art detectors are suitable
for the task. To this end, first we expose the forensics traces left by
diffusion models, then study how current detectors, developed for GAN-generated
images, perform on these new synthetic images, especially in challenging
social-networks scenarios involving image compression and resizing. Datasets
and code are available at github.com/grip-unina/DMimageDetection.
|
We present SignCLIP, which re-purposes CLIP (Contrastive Language-Image
Pretraining) to project spoken language text and sign language videos, two
classes of natural languages of distinct modalities, into the same space.
SignCLIP is an efficient method of learning useful visual representations for
sign language processing from large-scale, multilingual video-text pairs,
without directly optimizing for a specific task or sign language, for which data
is often of limited size.
We pretrain SignCLIP on Spreadthesign, a prominent sign language dictionary
consisting of ~500 thousand video clips in up to 44 sign languages, and
evaluate it with various downstream datasets. SignCLIP discerns in-domain
signing with notable text-to-video/video-to-text retrieval accuracy. It also
performs competitively for out-of-domain downstream tasks such as isolated sign
language recognition upon essential few-shot prompting or fine-tuning.
We analyze the latent space formed by the spoken language text and sign
language poses, which provides additional linguistic insights. Our code and
models are openly available.
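SignCLIP presumably retains CLIP's symmetric contrastive (InfoNCE) objective over matched text-video pairs; the following numpy sketch of that standard loss is included for orientation and is not taken from the paper's code:

    import numpy as np

    def log_softmax(z):
        z = z - z.max(axis=1, keepdims=True)            # numerical stability
        return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    def clip_loss(text_emb, video_emb, temperature=0.07):
        t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
        v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
        logits = t @ v.T / temperature                  # pairwise cosine similarities
        i = np.arange(len(t))                           # i-th text matches i-th video
        return -0.5 * (log_softmax(logits)[i, i].mean()
                       + log_softmax(logits.T)[i, i].mean())

Minimizing this pulls matched text and signing videos together in the shared space while pushing apart the mismatched pairs in the batch.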
|
The quantization of Hall conductance in a p-type heterojunction with lateral
surface quantum dot superlattice is investigated. The topological properties of
the four-component hole wavefunction are studied both in r- and k-spaces. New
method of calculation of the Hall conductance in a 2D hole gas described by the
Luttinger Hamiltonian and affected by lateral periodic potential is proposed,
based on the investigation of four-component wavefunction singularities in
k-space. The deviations from the quantization rules for Hofstadter "butterfly"
for electrons are found, and the explanation of this effect is proposed. For
the case of strong periodic potential the mixing of magnetic subbands is taken
into account, and the exchange of Chern numbers between magnetic subbands is
discussed.
|
In this paper we examine the cones of effective cycles on blow ups of
projective spaces along smooth rational curves. We determine explicitly the
cones of divisors and 1- and 2-dimensional cycles on blow ups of rational
normal curves, and strengthen these results in cases of low dimension.
Central to our results is the geometry of resolutions of the secant varieties
of the curves which are blown up, and our computations of their effective
cycles may be of independent interest.
|
We report the double helicity asymmetry, $A_{LL}^{J/\psi}$, in inclusive
$J/\psi$ production at forward rapidity as a function of transverse momentum
$p_T$ and rapidity $|y|$. The data analyzed were taken during $\sqrt{s}=510$
GeV longitudinally polarized $p$$+$$p$ collisions at the Relativistic Heavy Ion
Collider (RHIC) in the 2013 run using the PHENIX detector. At this collision
energy, $J/\psi$ particles are predominantly produced through gluon-gluon
scatterings, thus $A_{LL}^{J/\psi}$ is sensitive to the gluon polarization
inside the proton. We measured $A_{LL}^{J/\psi}$ by detecting the decay
daughter muon pairs $\mu^+ \mu^-$ within the PHENIX muon spectrometers in the
rapidity range $1.2<|y|<2.2$. In this kinematic range, we measured the
$A_{LL}^{J/\psi}$ to be $0.012 \pm 0.010$~(stat)~$\pm~0.003$~(syst). The
$A_{LL}^{J/\psi}$ can be expressed as proportional to the product of the
gluon polarization distributions at two distinct ranges of Bjorken $x$: one at
moderate range $x \approx 0.05$ where recent RHIC data of jet and $\pi^0$
double helicity spin asymmetries have shown evidence for significant gluon
polarization, and the other one covering the poorly known small-$x$ region $x
\approx 2\times 10^{-3}$. Thus our new results could be used to further
constrain the gluon polarization for $x< 0.05$.
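For reference, the asymmetry is extracted in the standard way from helicity-sorted yields $N^{++}$, $N^{+-}$, relative luminosity $R$, and beam polarizations $P_1$, $P_2$, and at leading order is sensitive to the product of the gluon polarizations:

$$ A_{LL}=\frac{1}{P_1P_2}\,\frac{N^{++}-R\,N^{+-}}{N^{++}+R\,N^{+-}},\qquad A_{LL}^{J/\psi}\;\propto\;\frac{\Delta g(x_1)}{g(x_1)}\cdot\frac{\Delta g(x_2)}{g(x_2)}. $$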
|
We consider a semi-infinite open-ended cylindrical waveguide with uniform
dielectric filling placed into collinear infinite vacuum waveguide with larger
radius. Electromagnetic field produced by a point charge or Gaussian bunch
moving along structure's axis from the dielectric waveguide into the vacuum one
is investigated. We utilize the modified residue-calculus technique and obtain
rigorous analytical solution of the problem by determining coefficients of mode
excitation in each subarea of the structure. Numerical simulations in CST
Particle Studio are also performed and an excellent agreement between
analytical and simulated results is shown. The main attention is paid to
analysis of Cherenkov radiation generated in the inner dielectric waveguide and
penetrated into vacuum regions of the outer waveguide. The discussed structure
can be used for generation of Terahertz radiation by modulated bunches (bunch
trains) by means of high-order Cherenkov modes. In this case, numerical
simulations become difficult, while the developed analytical technique allows
for efficient calculation of the radiation characteristics.
|
We introduce and analyze an extension to the matching problem on a weighted
bipartite graph: Assignment with Type Constraints. The two parts of the graph
are partitioned into subsets called types and blocks; we seek a matching with
the largest sum of weights under the constraint that there is a pre-specified
cap on the number of vertices matched in every type-block pair. Our primary
motivation stems from the public housing program of Singapore, accounting for
over 70% of its residential real estate. To promote ethnic diversity within its
housing projects, Singapore imposes ethnicity quotas: each new housing
development comprises blocks of flats and each ethnicity-based group in the
population must not own more than a certain percentage of flats in a block.
Other domains using similar hard capacity constraints include matching
prospective students to schools or medical residents to hospitals. Limiting
agents' choices for ensuring diversity in this manner naturally entails some
welfare loss. One of our goals is to study the trade-off between diversity and
social welfare in such settings. We first show that, while the classic
assignment program is polynomial-time computable, adding diversity constraints
makes it computationally intractable; however, we identify a
$\tfrac{1}{2}$-approximation algorithm, as well as reasonable assumptions on
the weights that permit poly-time algorithms. Next, we provide two upper bounds
on the price of diversity -- a measure of the loss in welfare incurred by
imposing diversity constraints -- as functions of natural problem parameters.
We conclude the paper with simulations based on publicly available data from
two diversity-constrained allocation problems -- Singapore Public Housing and
Chicago School Choice -- which shed light on how the constrained maximization
as well as lottery-based variants perform in practice.
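As a sketch of the constrained objective, the following greedy heuristic matches heavy edges first while respecting the type-block caps; it is a simple baseline for illustration, not the paper's $\tfrac{1}{2}$-approximation algorithm:

    import numpy as np

    def greedy_assign(weights, agent_type, item_block, cap):
        """weights[i, j]: value of matching agent i to item j;
        cap must contain a quota for every (type, block) pair."""
        used_a, used_i, count, total = set(), set(), {}, 0.0
        flat = np.argsort(weights, axis=None)[::-1]          # heaviest edges first
        for i, j in zip(*np.unravel_index(flat, weights.shape)):
            t, b = agent_type[i], item_block[j]
            if i in used_a or j in used_i or count.get((t, b), 0) >= cap[(t, b)]:
                continue                                     # respects the quota
            used_a.add(i); used_i.add(j)
            count[(t, b)] = count.get((t, b), 0) + 1
            total += weights[i, j]
        return total

Dropping the cap check recovers plain greedy weighted matching, so comparing the two totals gives a quick empirical handle on the price of diversity.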
|
We present a design and implementation of the Thomas algorithm optimized for
hardware acceleration on an FPGA, the Thomas Core. The hardware-based algorithm
combined with the custom data flow and low level parallelism available in an
FPGA reduces the overall complexity from 8N down to 5N serial arithmetic
operations, and almost halves the overall latency by parallelizing the two
costly divisions. Combining this with a data streaming interface, we reduce
memory overheads to 2 N-length vectors per N-tridiagonal system to be solved.
The Thomas Core allows for multiple independent tridiagonal systems to be
continuously solved in parallel, providing an efficient and scalable
accelerator for many numerical computations. Finally we present applications
for derivatives pricing problems using implicit finite difference schemes on an
FPGA accelerated system and we investigate the use and limitations of
fixed-point arithmetic in our algorithm.
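For reference, a textbook serial version of the Thomas algorithm being accelerated (the FPGA core reorganizes the two costly divisions to run in parallel and streams the vectors, which this sketch does not attempt):

    import numpy as np

    def thomas(a, b, c, d):
        # a: sub-diagonal (a[0] unused), b: main diagonal,
        # c: super-diagonal (c[-1] unused), d: right-hand side.
        n = len(b)
        c_, d_ = np.empty(n), np.empty(n)
        c_[0], d_[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            m = b[i] - a[i] * c_[i - 1]
            c_[i] = c[i] / m if i < n - 1 else 0.0
            d_[i] = (d[i] - a[i] * d_[i - 1]) / m
        x = np.empty(n)
        x[-1] = d_[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = d_[i] - c_[i] * x[i + 1]
        return x

    n = 6
    a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
    x = thomas(a, b, c, np.ones(n))
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(A @ x, np.ones(n))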
|
This work focuses on the Riemann problem of Euler equations with global
constant initial conditions and a single-point heating source, which comes from
the physical problem of heating one-dimensional inviscid compressible constant
flow. In order to deal with the source of Dirac delta-function, we propose an
analytical frame of double classic Riemann problems (CRPs) coupling, which
treats the fluids on both sides of the heating point as two separate Riemann
problems and then couples them. Under the double CRPs frame, the solution is
self-similar, and only three types of solution are found. The theoretical
analysis is also supported by the numerical simulation. Furthermore, the
uniqueness of the Riemann solution is established with some restrictions on the
Mach number of the initial condition.
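In conservation form the problem presumably reads as the 1D Euler equations with a point heating source of rate $q$ (this is the natural form implied by the description, stated here for orientation rather than quoted from the paper):

$$ \partial_t\!\begin{pmatrix}\rho\\ \rho u\\ E\end{pmatrix} +\partial_x\!\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ u(E+p)\end{pmatrix} =\begin{pmatrix}0\\ 0\\ q\,\delta(x)\end{pmatrix}, $$

with constant initial data on both sides of the heating point $x=0$.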
|
Prostate cancer is one of the most common cancers in men. It is characterized
by slow growth and can be diagnosed at an early stage by observing the
Prostate Specific Antigen (PSA). However, a relapse after the primary therapy
could arise and different growth characteristics of the new tumor are observed.
In order to get a better understanding of the phenomenon, a mathematical model
involving several parameters is considered. To estimate the values of the
parameters identifying the disease risk level, a novel approach based on
combining Particle Swarm Optimization (PSO) with a meshfree interpolation
method is proposed.
|
A theory is developed of the intricately fingered patterns of flux domains
observed in the intermediate state of thin type-I superconductors. The patterns
are shown to arise from the competition between the long-range Biot-Savart
interactions of the Meissner currents encircling each region and the
superconducting-normal surface energy. The energy of a set of such domains is
expressed as a nonlocal functional of the positions of their boundaries, and a
simple gradient flow in configuration space yields branched flux domains
qualitatively like those seen in experiment. Connections with pattern formation
in amphiphilic monolayers and magnetic fluids are emphasized.
|
Starting from known results due to Y. Tian [Ti; 00] on the real matrix
representations of the real quaternions, in this paper we investigate
the left and right real matrix representations of the complex
quaternions and give some examples in the special case of the complex
Fibonacci quaternions.
|
We consider the formation of cold ground-state polar molecules in a low
vibrational level by laser fields. Starting from a pair of cold colliding atoms
of dissimilar species, we propose a strategy consisting of three steps. In the
first step, a pump pulse induces the molecule formation by photoassociating the
atomic pair in a high or intermediate vibrational level of an excited
electronic molecular state. This step is followed by a dump pulse that transfers
the population to an intermediate vibrational level of the ground state. In the
last step, an
infrared chirped pulse induces downward transitions among the vibrational
levels of the ground electronic state, reaching the ground vibrational level.
Initially, the strategy is constructed with fixed-shaped pulses. Subsequently,
we perform the calculations with optimized chirped pulses introducing an
optimal control technique in which the optimization of the time-dependent
frequency is carried out in the time domain. The proposed scheme is an
alternative to the use of two pairs of pump-dump pulses or to the direct
photoassociation and vibrational stabilization in the ground state.
|
In this paper, we propose a novel wireless caching scheme to enhance the
physical layer security of video streaming in cellular networks with limited
backhaul capacity. By proactively sharing video data across a subset of base
stations (BSs) through both caching and backhaul loading, secure cooperative
joint transmission of several BSs can be dynamically enabled in accordance with
the cache status, the channel conditions, and the backhaul capacity. Assuming
imperfect channel state information (CSI) at the transmitters, we formulate a
two-stage non-convex mixed-integer robust optimization problem for minimizing
the total transmit power while providing quality of service (QoS) and
guaranteeing communication secrecy during video delivery, where the caching and
the cooperative transmission policy are optimized in an offline video caching
stage and an online video delivery stage, respectively. Although the formulated
optimization problem turns out to be NP-hard, low-complexity polynomial-time
algorithms, whose solutions are globally optimal under certain conditions, are
proposed for cache training and video delivery control. Caching is shown to be
beneficial as it reduces the data sharing overhead imposed on the
capacity-constrained backhaul links, introduces additional secure degrees of
freedom, and enables a power-efficient communication system design. Simulation
results confirm that the proposed caching scheme achieves simultaneously a low
secrecy outage probability and a high power efficiency. Furthermore, due to the
proposed robust optimization, the performance loss caused by imperfect CSI
knowledge can be significantly reduced when the cache capacity becomes large.
|
The history-dependent behaviors of classical plasticity models are often
driven by internal variables evolved according to phenomenological laws. The
difficulty of interpreting how these internal variables represent a history of
deformation, the lack of direct measurements of these internal variables for
calibration and validation, and the weak physical underpinning of those
phenomenological laws have long been criticized as barriers to creating
realistic models. In this work, geometric machine learning on graph data (e.g.
finite element solutions) is used as a means to establish a connection between
nonlinear dimensional reduction techniques and plasticity models. Geometric
learning-based encoding on graphs allows the embedding of rich time-history
data onto a low-dimensional Euclidean space such that the evolution of plastic
deformation can be predicted in the embedded feature space. A corresponding
decoder can then convert these low-dimensional internal variables back into a
weighted graph such that the dominating topological features of plastic
deformation can be observed and analyzed.
|
We report on theoretical studies of a recently discovered strong
radiation-induced magnetoresistance spike obtained in ultraclean
two-dimensional electron systems at low temperatures. The most striking feature
of this spike is that it shows up on the second harmonic of the cyclotron
resonance and with an amplitude that can reach an order of magnitude larger
than the radiation-induced resistance oscillations. We apply the
radiation-driven electron orbits model in the ultraclean scenario. Accordingly,
we calculate the elastic scattering rate (charged impurity) which will define
the unexpected resonance spike position. We also obtain the inelastic
scattering rate (phonon damping), which is responsible for the large spike
amplitude.
Landau level width on the magnetic field for ultraclean samples. We find that
this dependence explains the experimental shift of the resistance oscillations
with respect to the magnetic field found in this kind of sample. We also study
recent results on the influence of an in-plane magnetic field on the spike. We
are able to reconcile the different experimental responses of the
spike and the resistance oscillations to an increasing in-plane field. The same
model of the variation of the LL width allows us to explain these surprising
results based on the increasing disorder in the sample caused by the in-plane
magnetic field. Calculated results are in good agreement with experiments.
These results would be of special interest in nanophotonics; they could lead to
the design of novel ultrasensitive microwave detectors.
|
This paper presents a new \emph{fast-pivoting} algorithm that computes the
$n$ Gittins index values of an $n$-state bandit -- in the discounted and
undiscounted cases -- by performing $(2/3) n^3 + O(n^2)$ arithmetic operations,
thus attaining better complexity than previous algorithms and matching that of
solving a corresponding linear-equation system by Gaussian elimination. The
algorithm further applies to the problem of optimal stopping of a Markov chain,
for which a novel Gittins-index solution approach is introduced. The algorithm
draws on Gittins and Jones' (1974) index definition via calibration, on
Kallenberg's (1986) proposal of using parametric linear programming, on
Dantzig's simplex method, on Varaiya et al.'s (1985) algorithm, and on the
author's earlier work. The paper elucidates the structure of parametric simplex
tableaux. Special structure is exploited to reduce the computational effort of
pivot steps, decreasing the operation count by a factor of three relative to
using conventional pivoting, and by a factor of $3/2$ relative to recent
state-elimination algorithms. A computational study demonstrates significant
time savings against alternative algorithms.
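For reference, the index being computed is defined via the calibration (ratio) form

$$ G(i)\;=\;\sup_{\tau\ge 1}\;\frac{\mathbb{E}\!\left[\sum_{t=0}^{\tau-1}\beta^{t}\,r(X_t)\,\middle|\;X_0=i\right]}{\mathbb{E}\!\left[\sum_{t=0}^{\tau-1}\beta^{t}\,\middle|\;X_0=i\right]}, $$

where $\tau$ ranges over stopping times, $r$ is the reward function, and $\beta$ is the discount factor; the fast-pivoting algorithm obtains all $n$ values through parametric pivot steps rather than by evaluating this supremum state by state.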
|
Two different kinds of metal transition oxides have been studied for their
large thermopower values. The first one corresponds to the Tl-based misfit
cobaltite which is a hole-doped metal. We demonstrate that the partial
Bi-substitution for Tl in this phase induces an increase of the room
temperature (RT) thermopower (TEP) value. The same result is obtained with the new
Pb_{1/3}SrCoO_{3+delta} misfit corresponding to the Tl complete replacement by
lead. Simultaneously, the T dependence of their resistivity exhibits a
re-entrance below 70-90K where a large negative magnetoresistance is observed.
Magnetic measurements reveal a strong interplay between spins and charges for
this class of materials. Electron-doped (n-type) perovskite manganites are a
second class of potential candidates for applications. In particular, the
Ru^{4+/5+} substitution for Mn in the CaMnO_3 semi-conductor induces a drastic
drop of the resistivity values. Metals with large RT TEP values and not too
large thermal conductivities are generated. A comparison with best known
materials, Bi_2Te_3 and NaCo_2O_4 is made.
|
This paper studies a multi-Intelligent Reflecting Surfaces (IRSs)-assisted
wireless network consisting of multiple base stations (BSs) serving a set of
mobile users. We focus on the IRS-BS association problem in which multiple BSs
compete with each other for controlling the phase shifts of a limited number of
IRSs to maximize the long-term downlink data rate for the associated users. We
propose MDLBI, a Multi-agent Deep Reinforcement Learning-based BS-IRS
association scheme that optimizes the BS-IRS association as well as the
phase-shift of each IRS when associated with different BSs. MDLBI does
not require information exchange among BSs. Simulation results show that
MDLBI achieves significant performance improvement and is scalable for large
networking systems.
|
In this work, we introduce a Self-Aware Polymorphic Architecture (SAPA)
design approach to support emerging context-aware applications and mitigate the
programming challenges caused by the ever-increasing complexity and
heterogeneity of high performance computing systems. Through the SAPA design,
we examined the salient software-hardware features of adaptive computing
systems that allow for (1) the dynamic allocation of computing resources
depending on program needs (e.g., the amount of parallelism in the program) and
(2) automatic approximation to meet program and system goals (e.g., execution
time budget, power constraints and computation resiliency) without the
programming complexity of current multicore and many-core systems. The proposed
adaptive computer architecture framework applies machine learning algorithms
and control theory techniques to the application execution based on information
collected about the system runtime performance trade-offs. It has heterogeneous
reconfigurable cores with fast hardware-level migration capability,
self-organizing memory structures and hierarchies, an adaptive
application-aware network-on-chip, and a built-in hardware layer for dynamic,
autonomous resource management. Our prototyped architecture performs extremely
well on a large pool of applications.
|
A one dimensional parameter study of a magneto-inertial fusion (MIF) concept
indicates that significant gain may be achievable. This concept uses a
dynamically formed plasma shell with inwardly directed momentum to drive a
magnetized fuel to ignition, which in turn partially burns an intermediate
layer of unmagnetized fuel. The concept is referred to as Plasma Jet MIF or
PJMIF. The results of an adaptive mesh refinement (AMR) Eulerian code
(Crestone) are compared to those of a Lagrangian code (LASNEX). These are the
first published results using the Crestone and LASNEX codes on the PJMIF
concept.
|
We provide a self-contained introduction for entanglement-assisted quantum
error-correcting codes in this book chapter.
|
The numerical solution of partial differential equations is at the heart of
many grand challenges in supercomputing. Solvers based on high-order
discontinuous Galerkin (DG) discretisation have been shown to scale on large
supercomputers with excellent performance and efficiency, if the implementation
exploits all levels of parallelism and is tailored to the specific
architecture. However, every year new supercomputers emerge and the list of
hardware-specific considerations grows, simultaneously with the list of desired
features in a DG code. Thus we believe that a sustainable DG code needs an
abstraction layer to implement the numerical scheme in a suitable language. We
explore the possibility of abstracting the numerical scheme as small tensor
operations, describing them in a domain-specific language (DSL) resembling
Einstein notation, and mapping them to existing code generators which generate
small matrix-matrix multiplication routines. The compiler for our DSL
implements classic optimisations that are used for large tensor contractions,
and we present novel optimisation techniques such as equivalent sparsity
patterns and optimal index permutations for temporary tensors. Our application
examples, which include the earthquake simulation software SeisSol, show that
the generated kernels achieve over 50% of peak performance, while the DSL
considerably simplifies the implementation.
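A toy numpy rendering of the lowering the DSL performs, mapping an Einstein-notation kernel to a chain of small matrix-matrix multiplications (sizes and names are made up for illustration):

    import numpy as np

    # Einstein-notation kernel Q[k,p] = K[k,l] * P[l,m] * A[m,p],
    # lowered to two small GEMMs via a temporary tensor.
    k, l, m, p = 20, 56, 56, 9
    K, P, A = np.random.rand(k, l), np.random.rand(l, m), np.random.rand(m, p)

    tmp = K @ P         # temporary tensor; choosing its index permutation
    Q = tmp @ A         # and exploiting sparsity patterns are the compiler's job

    assert np.allclose(Q, np.einsum('kl,lm,mp->kp', K, P, A))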
|
We consider a particle collision with a high center-of-mass energy near a
Ba\~nados-Teitelboim-Zanelli (BTZ) black hole. We obtain the center-of-mass
energy of two general colliding geodesic particles in the BTZ black hole
spacetime. We show that the center-of-mass energy of two ingoing particles can
be arbitrarily large on an event horizon if either of the two particles has a
critical angular momentum and the other has a non-critical angular momentum. We
also show that the motion of a particle with a subcritical angular momentum is
allowed near an extremal rotating BTZ black hole and that a center-of-mass
energy for a tail-on collision at a point can be arbitrarily large in a
critical angular momentum limit.
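The quantity analyzed is the standard center-of-mass energy of two colliding particles with four-momenta $p_1^{\mu}$, $p_2^{\mu}$, evaluated at the collision point (signature $(-,+,+)$, $c=1$):

$$ E_{\rm cm}^{2}\;=\;-g_{\mu\nu}\,(p_1^{\mu}+p_2^{\mu})(p_1^{\nu}+p_2^{\nu}) \;=\;m_1^{2}+m_2^{2}-2\,g_{\mu\nu}\,p_1^{\mu}p_2^{\nu}, $$

which diverges in the critical angular momentum limit described above.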
|
A longstanding goal in character animation is to combine data-driven
specification of behavior with a system that can execute a similar behavior in
a physical simulation, thus enabling realistic responses to perturbations and
environmental variation. We show that well-known reinforcement learning (RL)
methods can be adapted to learn robust control policies capable of imitating a
broad range of example motion clips, while also learning complex recoveries,
adapting to changes in morphology, and accomplishing user-specified goals. Our
method handles keyframed motions, highly-dynamic actions such as
motion-captured flips and spins, and retargeted motions. By combining a
motion-imitation objective with a task objective, we can train characters that
react intelligently in interactive settings, e.g., by walking in a desired
direction or throwing a ball at a user-specified target. This approach thus
combines the convenience and motion quality of using motion clips to define the
desired style and appearance, with the flexibility and generality afforded by
RL methods and physics-based animation. We further explore a number of methods
for integrating multiple clips into the learning process to develop
multi-skilled agents capable of performing a rich repertoire of diverse skills.
We demonstrate results using multiple characters (human, Atlas robot, bipedal
dinosaur, dragon) and a large variety of skills, including locomotion,
acrobatics, and martial arts.
|