These are the lecture notes for a course that I am teaching at Zhiyuan
College of Shanghai Jiao Tong University (available at
https://www.youtube.com/derekkorg), though the first draft was created for a
previous course I taught at the University of Erlangen-Nuremberg in Germany. It
has been designed for students who have only had basic training in quantum
mechanics, and hence the course is suited for people at all levels. The notes
are a work in progress, meaning that some proofs and many figures are still
missing. However, I've tried my best to write everything in such a way that a
reader can naturally follow all arguments and derivations even with these
missing bits. Quantum optics treats the interaction between light and matter.
We may think of light as the optical part of the electromagnetic spectrum, and
matter as atoms. However, modern quantum optics covers a wide variety of
systems, including superconducting circuits, confined electrons, excitons in
semiconductors, defects in the solid state, or the center-of-mass motion of micro-,
meso-, and macroscopic systems. Moreover, quantum optics is at the heart of the
field of quantum information. The ideas and experiments developed in quantum
optics have also allowed us to take a fresh look at many-body problems and even
high-energy physics. In addition, quantum optics holds the promise of testing
foundational problems in quantum mechanics as well as physics beyond the
standard model in table-sized experiments. Quantum optics is therefore a topic
that no future researcher in quantum physics should miss.
|
We present recent results on single top quark production in the lepton plus
jets final state from the CDF and D0 collaborations, based on 7.5 and
5.4 fb^-1 of ppbar collision data collected at sqrt(s) = 1.96 TeV at the
Fermilab Tevatron collider. Multivariate techniques are used to separate the
single top signal from the backgrounds. Both collaborations present
measurements of the single top quark cross section and the CKM matrix element
Vtb. A search for anomalous Wtb coupling from D0 is also presented.
|
In this invited contribution I briefly review some of the principal topics in
hadron spectroscopy that were studied at the CERN low-energy antiproton
facility LEAR, from its beginnings in the early 1980s to the present. These
topics include the nature of multiquark systems, the short-ranged nuclear
force, and gluonic hadrons, including glueballs and hybrids. Lessons we have
learned from the LEAR program that are relevant to the future GSI project are
given particular emphasis.
|
In this paper we consider a discrete Klein-Gordon (dKG) equation on $\mathbb{Z}^d$
in the limit of the discrete nonlinear Schr\"odinger (dNLS) equation, for which
small-amplitude breathers have precise scaling with respect to the small
coupling strength $\varepsilon$. By using the classical Lyapunov-Schmidt method, we
show existence and linear stability of the dKG breather from existence and
linear stability of the corresponding dNLS soliton. Nonlinear stability, for an
exponentially long time scale of the order $\mathcal{O}(\exp(\varepsilon^{-1}))$, is
also obtained via the normal form technique, together with higher-order
approximations of the dKG breather through perturbations of the corresponding
dNLS soliton.
|
The emission of hard X-rays associated with white dwarfs (WD) can be
generated by the presence of a stellar companion either by the companion's
coronal emission or by an accretion disk formed by material stripped from the
companion. Recent studies have suggested that a Jupiter-like planet can also be
a donor of material whose accretion onto the WD can generate hard X-rays. We use
the {\sc guacho} code to reproduce the conditions of this WD-planet scenario.
With the example of the hard X-ray WD KPD\,0005+5106, we explore different
terminal wind velocities and mass-loss rates of a donor planet for a future
network of simulations to investigate the luminosity and the spectral and
temporal properties of the hard X-ray emission in WD-planet systems. Our
simulations show that the material stripped from the planet forms a disk and
accretes onto the WD to reach temperatures high enough to generate hard X-rays
as usually seen in X-ray binaries with low-mass companions. For high terminal
wind velocities, the planet material does not form a disk, but it rather
accretes directly onto the WD surface. The simulations reproduce the X-ray
luminosity of another X-ray accreting WD (G\,29$-$38), and only at some epochs
reach the hard X-ray luminosity of KPD\,0005+5106. The X-ray variability is
stochastic and does not reproduce the period of KPD\,0005+5106, suggesting that
additional physical processes (e.g., hot spots resulting from magnetic
channelling of the accreting material) need to be explored.
|
When the primary outcome is hard to collect, a surrogate endpoint is typically
used as a substitute. However, even when the treatment has a positive average
causal effect (ACE) on the surrogate endpoint, which also has a positive ACE on
the primary outcome, it is still possible that the treatment has a negative ACE
on the primary outcome. Such a phenomenon is called the surrogate paradox and
greatly challenges the use of surrogates. In this paper, we provide novel
criteria to exclude the surrogate paradox. Unlike other conditions previously
proposed, our conditions are testable since they only involve observed data.
Furthermore, our criteria are optimal in the sense that they are sufficient and
"almost necessary" to exclude the paradox: if the conditions are satisfied, the
surrogate paradox is guaranteed to be absent while if the conditions fail,
there exists a data generating process with surrogate paradox that can generate
the same observed data. That is, our criteria capture all the information in
the observed data to exclude the surrogate paradox rather than relying on
unverifiable distributional assumptions.
|
We show the failure of the main argument for the use of heavy-tailed
distributions in finance. More precisely, one cannot observe as many outliers
for the Cauchy or for symmetric stable distributions as we have in reality.
Keywords: outliers; financial indexes; heavy tails; Cauchy distribution; stable
distributions
|
Supernova remnants (SNRs) exhibit varying degrees of anisotropy, which have
been extensively modeled using numerical methods. We implement a technique to
measure anisotropies in SNRs by calculating power spectra from their
high-resolution images. To test this technique, we develop 3D hydrodynamical
models of supernova remnants and generate synthetic X-ray images from them.
Power spectra extracted from both the 3D models and the synthetic images
exhibit the same dominant angular scale, which separates large scale features
from small scale features due to hydrodynamic instabilities. The angular power
spectrum at small length scales during relatively early times is too steep to
be consistent with Kolmogorov turbulence, but it transitions to Kolmogorov
turbulence at late times. As an example of how this technique can be applied to
observations, we extract a power spectrum from a \textit{Chandra} observation
of Tycho's SNR and compare with our models. Our predicted power spectrum picks
out the angular scale of Tycho's fleece-like structures and also agrees with
the small-scale power seen in Tycho. We use this to extract an estimate for the
density of the circumstellar gas ($n \sim 0.28~\mathrm{cm^{-3}}$), consistent with
previous measurements of this density by other means. The power spectrum also
provides an estimate of the density profile of the outermost ejecta. Moreover,
we observe additional power at large scales which may provide important clues
about the explosion mechanism itself.
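As an illustration of the measurement step only (a generic radially averaged Fourier power spectrum of an image; the paper's exact angular-power-spectrum estimator and normalization are not reproduced here), the following Python sketch uses made-up noise in place of a remnant map.

    import numpy as np

    def radial_power_spectrum(image, nbins=64):
        """Radially averaged 2D Fourier power spectrum of an image (generic estimator)."""
        ny, nx = image.shape
        power = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
        ky, kx = np.indices((ny, nx))
        k = np.hypot(kx - nx // 2, ky - ny // 2)
        edges = np.linspace(0.5, k.max(), nbins + 1)
        which = np.digitize(k.ravel(), edges)
        spectrum = [power.ravel()[which == i].mean() if (which == i).any() else 0.0
                    for i in range(1, nbins + 1)]
        return 0.5 * (edges[1:] + edges[:-1]), np.array(spectrum)

    # Toy "image": random noise standing in for a high-resolution remnant map.
    rng = np.random.default_rng(2)
    k_bins, p_k = radial_power_spectrum(rng.normal(size=(256, 256)))
    print(p_k[:5])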
|
We present [CII] 158 $\mu$m and [OI] 63 $\mu$m observations of the bipolar
HII region RCW 36 in the Vela C molecular cloud, obtained within the SOFIA
legacy project FEEDBACK, which is complemented with APEX $^{12/13}$CO(3-2) and
Chandra X-ray (0.5-7 keV) data. These data show that the molecular ring, forming the
waist of the bipolar nebula, expands with a velocity of 1 - 1.9 km s$^{-1}$. We
also observe an increased linewidth in the ring indicating that turbulence is
driven by energy injection from the stellar feedback. The bipolar cavity hosts
blue-shifted expanding [CII] shells at 5.2$\pm$0.5$\pm$0.5 km s$^{-1}$
(statistical and systematic uncertainty) which indicates that expansion out of
the dense gas happens non-uniformly and that the observed bipolar phase might
be relatively short ($\sim$0.2 Myr). The X-ray observations show diffuse
emission that traces a hot plasma, created by stellar winds, in and around RCW
36. At least 50 \% of the stellar wind energy is missing in RCW 36. This is
likely due to leakage which is clearing even larger cavities around the bipolar
RCW 36 region. Lastly, the cavities host high-velocity wings in [CII], which
indicate relatively high mass ejection rates ($\sim$5$\times$10$^{-4}$
M$_{\odot}$ yr$^{-1}$). These could be driven by stellar winds and/or radiation
pressure, but remain difficult to constrain. This local mass ejection, which
can remove all mass within 1 pc of RCW 36 in 1-2 Myr, and the large-scale
clearing of ambient gas in the Vela C cloud indicate that stellar feedback
plays a significant role in suppressing the star formation efficiency (SFE).
|
By combining a bound on the absolute value of the difference in mutual
information between two joint probability distributions at a fixed variational
distance with a bound on the probability of a maximal deviation in variational
distance between the true joint probability distribution and the empirical
joint probability distribution, confidence intervals for the mutual information
of two random variables with finite alphabets are established. In contrast to
previous results, these intervals do not need any assumptions on the
distribution or the sample size.
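As a hedged illustration of the quantities involved (not the paper's construction), the Python sketch below computes the plug-in mutual information from an empirical joint count table; the half-width eps merely stands in for whatever bound results from combining the variational-distance concentration bound with the continuity bound for mutual information, and its value here is purely illustrative.

    import numpy as np

    def empirical_mutual_information(counts):
        """Plug-in mutual information (in nats) from a joint count table."""
        p_xy = counts / counts.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        mask = p_xy > 0
        return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

    # Toy joint counts for two random variables with finite alphabets.
    counts = np.array([[30.0, 5.0], [10.0, 55.0]])
    mi_hat = empirical_mutual_information(counts)
    eps = 0.05  # illustrative half-width; the paper derives it from the two bounds
    print(f"MI estimate: {mi_hat:.3f} nats, "
          f"interval: [{max(mi_hat - eps, 0):.3f}, {mi_hat + eps:.3f}]")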
|
Studying the atomic gas (HI) properties of the most isolated galaxies is
essential to quantify the effect that the environment exerts on this sensitive
component of the interstellar medium. We observed and compiled HI data for a
well defined sample of ~ 800 galaxies in the Catalog of Isolated Galaxies, as
part of the AMIGA project (Analysis of the ISM in Isolated GAlaxies,
http://amiga.iaa.es), which considerably enlarges previous samples used to
quantify the HI deficiency in galaxies located in denser environments. By
studying the shape of 182 HI profiles, we revisited the usually accepted result
that, independently of the environment, more than half of the galaxies present
a perturbed HI disk. For isolated galaxies, which are supposed to be the most
relaxed systems, this would certainly be a striking result, with implications
for the relaxation time scales of HI disks and the nature of the
most frequent perturbing mechanisms in galaxies. Our sample likely exhibits the
lowest HI asymmetry level in the local Universe. We found that other field
samples present an excess of ~ 20% more asymmetric HI profiles than the CIG.
Still, a small percentage of galaxies in our sample present large
asymmetries. Follow-up high resolution VLA maps give insight into the origin of
such asymmetries.
|
We use DNS to study inter-scale and inter-space energy exchanges in the
near-field of a turbulent wake of a square prism in terms of the KHMH equation
written for a triple decomposition of the velocity field accounting for the
quasi-periodic vortex shedding. Orientation-averaged terms of the KHMH are
computed on the plane of the mean flow and on the geometric centreline. We
consider locations between $2$ and $8$ times the width $d$ of the prism. The
mean flow produces kinetic energy which feeds the vortex shedding coherent
structures. In turn, these structures transfer energy to the stochastic
fluctuations over all length-scales $r$ from the Taylor length $\lambda$ to $d$
and dominate spatial turbulent transport of two-point stochastic turbulent
fluctuations. The orientation-averaged non-linear inter-scale transfer rate
$\Pi^{a}$, which Alves Portela et al. (2017) found to be approximately
independent of $r$ in the range $\lambda\le r \le 0.3d$ at a distance
$x_{1}=2d$ from the square prism, requires an inter-scale transfer contribution
of coherent structures for this approximate constancy. However, the
near-constancy of $\Pi^a$ at $x_1=8d$, also found by Alves Portela et al.
(2017), is mostly due to stochastic fluctuations. Even so, the proximity of
$-\Pi^a$ to the turbulence dissipation rate $\varepsilon$ in the range
$\lambda\le r\le d$ at $x_1=8d$ requires contributions of the coherent
structures. Spatial inhomogeneity also makes a direct and distinct contribution
to $\Pi^a$, and the constancy of $-\Pi^a/\varepsilon$ close to 1 would not have
been possible without it either in this near-field flow. Finally, the
pressure-velocity term is also an important contributor to the KHMH,
particularly at scales $r$ larger than about $0.4d$, and appears to correlate
with the purely stochastic non-linear inter-scale transfer rate when the
orientation average is lifted.
|
We address a parametric joint detection-estimation problem for discrete
signals of the form $x(t) = \sum_{n=1}^{N} \alpha_n e^{-i \lambda_n t } +
\epsilon_t$, $t \in \mathbb{N}$, with an additive noise represented by
independent centered complex random variables $\epsilon_t$. The distributions
of $\epsilon_t$ are assumed to be unknown, but satisfying various sets of
conditions. We prove that in the case of a heavy-tailed noise it is possible to
construct asymptotically strongly consistent estimators for the unknown
parameters of the signal, i.e., the frequencies $\lambda_n$, their number $N$,
and complex amplitudes $\alpha_n$. For example, one of the considered classes of
noise is the following: $\epsilon_t$ are independent identically distributed
random variables with $\mathbb{E} (\epsilon_t) = 0$ and $\mathbb{E}
(|\epsilon_t| \ln |\epsilon_t|) < \infty$. The construction of estimators is
based on detection of singularities of anti-derivatives for $Z$-transforms and
on a two-level selection procedure for special discretized versions of
superlevel sets. The consistency proof relies on the convergence theory for
random Fourier series. We also discuss decaying signals and the case of an
infinite number of frequencies.
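For intuition only, here is a small Python sketch that generates such a signal with heavy-tailed noise and recovers the frequencies from periodogram peaks; this crude estimator is not the Z-transform/singularity-detection procedure of the paper, and all numerical values are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 4096
    t = np.arange(T)
    lambdas = np.array([0.7, 1.9])                      # true frequencies
    alphas = np.array([1.0, 0.5 + 0.3j])                # complex amplitudes
    noise = 0.3 * (rng.standard_t(3, T) + 1j * rng.standard_t(3, T))  # heavy-tailed
    x = (alphas * np.exp(-1j * np.outer(t, lambdas))).sum(axis=1) + noise

    # Periodogram of conj(x) peaks near 2*pi*k/T = lambda_n.
    spec = np.abs(np.fft.fft(np.conj(x)))
    peaks = []
    s = spec.copy()
    for _ in range(len(lambdas)):
        k = int(np.argmax(s))
        peaks.append(2 * np.pi * k / T)
        s[max(0, k - 5):k + 6] = 0.0                    # suppress the peak's neighbourhood
    print(sorted(peaks))                                # close to [0.7, 1.9]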
|
The construction of an effective 3D theory at high temperatures for the MSSM
as a model of electroweak baryogenesis is discussed. The analysis for a single
light scalar field shows that, given the experimental constraints, there is no
value of the Higgs mass for which a sufficiently first-order phase transition
is obtained. A precise determination of the 3D parameters of the effective
theory for the case of a light right-handed stop allows us to obtain an upper
bound on the masses of the lightest Higgs and right-handed stop using the
two-loop effective potential. A two-stage phase transition persists for a small
range of values of $m_{\tilde{t}_{R}}$.
|
We study the thermodynamic behavior of a decaying scalar field coupled to a
relativistic simple fluid. It is shown that if the decay products are
represented by a thermalized bath, its temperature evolution law naturally
requires a new phenomenological coupling term. This ``energy loss'' term is
the product between the enthalpy density of the thermalized bath and the decay
width of the scalar field. We also argue that if the field $\phi$ decays
"adiabatically" some thermodynamic properties of the fluid are preserved. In
particular, for a field decaying into photons, the radiation entropy production
rate is independent of the specific scalar field potential $V(\phi)$, and the
energy density $\rho$ and average number density of photons $n$ scale as $\rho
\sim T^{4}$ and $n \sim T^{3}$. To illustrate these results, a new warm
inflationary scenario with no slow roll is proposed.
|
Non-classical states of the light field have been exploited to provide
marvellous results in quantum information science. The effectiveness of
nonclassical states depends on whether the physical parameter carrying the
signal is continuous or digital. Here we present an investigation of the
potential of the quasi Bell state of entangled coherent states in quantum
reading of a classical digital memory, which was pioneered by Pirandola. This
is a typical example of discrete quantum discrimination. We show that the quasi
Bell state gives error-free performance in the quantum reading that cannot be
obtained by any classical state.
|
This paper generalizes a previously published differential equation that
describes the relation between the age-specific incidence, remission, and
mortality of a disease and its prevalence. The underlying model is a simple
compartment model with three states (illness-death model). In contrast to the
former work, migration and calendar-time effects are included. As an
application of the theoretical findings, a hypothetical example of an
irreversible disease is treated.
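For orientation only, a minimal sketch of the classical age-only relation that this paper generalizes, restricted to an irreversible disease (no remission, no migration or calendar-time effects): the prevalence p(a) then satisfies dp/da = (1 - p)[i(a) - p(m1(a) - m0(a))], with incidence i and mortality rates m0 (healthy) and m1 (diseased). The rate functions below are invented purely for illustration.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative (made-up) age-specific rates per year:
    incidence = lambda a: 0.002 * np.exp(0.06 * a)        # i(a)
    mort_healthy = lambda a: 0.0005 * np.exp(0.08 * a)    # m0(a)
    mort_ill = lambda a: 2.5 * mort_healthy(a)            # m1(a)

    def dp_da(a, p):
        # Age-only illness-death relation for an irreversible disease (no remission).
        return (1.0 - p) * (incidence(a) - p * (mort_ill(a) - mort_healthy(a)))

    sol = solve_ivp(dp_da, (0.0, 90.0), [0.0], dense_output=True)
    for age in (40, 60, 80):
        print(f"prevalence at age {age}: {sol.sol(age)[0]:.3%}")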
|
The synthesis, structure, and basic magnetic properties of Na2Co2TeO6 and
Na3Co2SbO6 are reported. The crystal structures were determined by neutron
powder diffraction. Na2Co2TeO6 has a two-layer hexagonal structure (space group
P6322) while Na3Co2SbO6 has a single-layer monoclinic structure (space group
C2/m). The Co, Te, and Sb ions are in octahedral coordination, and the edge
sharing octahedra form planes interleaved by sodium ions. Both compounds have
full ordering of the Co2+ and Te6+/Sb5+ ions in the ab plane such that the Co2+
ions form a honeycomb array; the stacking of the honeycomb arrays differs in the
two compounds. Both Na2Co2TeO6 and Na3Co2SbO6 reveal antiferromagnetic ordering
at low temperatures, with a metamagnetic transition displayed by Na3Co2SbO6.
|
Packet traffic in complex networks undergoes the jamming transition from
free-flow to congested state as the number of packets in the system increases.
Here we study such a jamming transition when queues are operated by the priority
queuing protocol and packets are guided by the dynamic routing protocol. We
introduce a minimal model in which there are two types of packets distinguished
by whether priority is assigned. Based on numerical simulations, we show that
traffic is improved in the congested region under the priority queuing
protocol, and it is worsened in the free-flow region. Also, we find that at the
transition point, the waiting-time distribution follows a power law, and the
power spectrum of traffic exhibits a crossover between two 1/f^a behaviors with
exponent a ~ 1 and 1 < a < 2 in the low- and high-frequency regimes,
respectively. This crossover originates from a characteristic waiting time of packets in
the queue.
|
Gray molasses is a powerful tool for sub-Doppler laser cooling of atoms to
low temperatures. For alkali atoms, this technique is commonly implemented
with cooling lasers which are blue-detuned from either the D1 or D2 line. Here
we show that efficient gray molasses can be implemented on the D2 line of 40K
with red-detuned lasers. We obtained temperatures of 48(2) microKelvin, which
enables direct loading of 9.2(3)*10^6 atoms from a magneto-optical trap into an
optical dipole trap. We support our findings by a one-dimensional model and
three-dimensional numerical simulations of the optical Bloch equations which
qualitatively reproduce the experimentally observed cooling effects.
|
I review some of the evidence for dust in the Local Bubble and in galactic
halos and show that a general mechanism based on radiation pressure is capable
of evacuating dust grains from regions dominated by massive-star energy input,
thus giving rise to huge dusty halos. A Monte Carlo/particle model has been
developed to study the dust dynamics above HII chimneys and the results, among
other findings, show that dust can travel several kpc away from the plane of
the parent galaxy. The cosmological implications of extragalactic dust are
briefly outlined.
|
We analyse slow-roll inflationary cosmologies that are holographically dual
to a three-dimensional conformal field theory deformed by a nearly marginal
scalar operator. We show that the cosmological power spectrum is inversely
proportional to the spectral density associated with the 2-point function of
the trace of the stress tensor in the deformed CFT. Computing this quantity
using second-order conformal perturbation theory, we obtain a holographic power
spectrum in exact agreement with the expected inflationary power spectrum to
second order in slow roll.
|
Exact ground states are calculated with an integer optimization algorithm for
two and three dimensional site-diluted Ising antiferromagnets in a field (DAFF)
and random field Ising ferromagnets (RFIM). We investigate the structure and
the size-distribution of the domains of the ground state and compare it to
earlier results from Monte Carlo simulations for finite temperature. Although
DAFF and RFIM are thought to be in the same universality class, we find
essential differences between these systems as far as the domain properties are
concerned. For the DAFF the ground states consist of fractal domains with a
broad size distribution that can be described by a power law with an
exponential cut-off. For the RFIM, in the limit of strong random fields, the
size distribution and structure of the domains approach those of the clusters
of the percolation problem, with a field-dependent lower
cut-off. The domains are fractal, and in three dimensions nearly all spins
belong to two large infinite domains of up- and down spins - the system is in a
two-domain state.
|
In data-driven predictive cloud control tasks, the privacy of data stored and
used in cloud services could be leaked to malicious attackers or curious
eavesdroppers. Homomorphic encryption can be used to protect data
privacy while still allowing computation. However, extra errors are introduced
by the homomorphic encryption extension to ensure the privacy-preserving
properties, and real-number truncation also brings uncertainty. Also, process
and measurement noise present in the system input and output may introduce
disturbances. In this work, a data-driven predictive cloud controller is
developed based on homomorphic encryption to protect the cloud data privacy. In
addition, a disturbance observer is introduced to estimate and compensate for
disturbances in the encrypted control signal sequence computed in the cloud.
The privacy of the data is guaranteed by encryption, and experimental results
show the effectiveness of our cloud-edge cooperative design.
|
The time complexity of the standard attention mechanism in a transformer
scales quadratically with the length of the sequence. We introduce a method to
reduce this to linear scaling in the sequence length, based on defining
attention via latent vectors. The method is readily usable as a drop-in
replacement for the standard attention mechanism. Our "Latte Transformer" model
can be implemented for both bidirectional and unidirectional tasks, with the
causal version allowing a recurrent implementation that is memory- and
time-efficient during inference for language generation tasks. Whilst
next-token prediction scales linearly with
the sequence length for a standard transformer, a Latte Transformer requires
constant time to compute the next token. The empirical performance of our
method is comparable to standard attention, yet allows scaling to context
windows much larger than practical in standard attention.
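As a rough illustration of the general idea of routing attention through a small set of latent vectors so that cost grows linearly with sequence length (this is a generic latent-bottleneck sketch, not the authors' exact Latte parameterization; all shapes and weight names are assumptions), consider:

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def latent_attention(X, Z, Wq, Wk, Wv):
        """Bidirectional attention routed through L latent vectors.

        X: (T, d) token states, Z: (L, d) latents with L << T.
        Cost is O(T * L * d) instead of the O(T^2 * d) of standard attention.
        """
        scale = np.sqrt(Wq.shape[1])
        A = softmax((Z @ Wq) @ (X @ Wk).T / scale)    # (L, T): latents attend to tokens
        S = A @ (X @ Wv)                              # (L, d): latent summary of the sequence
        B = softmax((X @ Wq) @ (Z @ Wk).T / scale)    # (T, L): tokens attend to latents
        return B @ S                                  # (T, d)

    T, L, d = 1024, 16, 64
    rng = np.random.default_rng(0)
    X, Z = rng.normal(size=(T, d)), rng.normal(size=(L, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    print(latent_attention(X, Z, Wq, Wk, Wv).shape)   # (1024, 64)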
|
Mathematical music theory has assumed without proof that musical notes can be
associated with the equivalence classes of $\mathbb{Z}_n$. We contest the
triviality of this assertion, which we call the Pitch-class Integer Theorem
(PCIT). Since the existing literature assumes the PCIT without proof, the
mathematics to rigorously treat the PCIT does not yet exist. Thus, we construct
an axiomatic proof of the PCIT to support the existing mathematical models of
music theory.
|
Distributed representations of text can be used as features when training a
statistical classifier. These representations may be created as a composition
of word vectors or as context-based sentence vectors. We compare the two kinds
of representations (word versus context) for three classification problems:
influenza infection classification, drug usage classification and personal
health mention classification. For statistical classifiers trained for each of
these problems, context-based representations based on ELMo, Universal Sentence
Encoder, Neural-Net Language Model and FLAIR are better than Word2Vec, GloVe
and the two adapted using the MeSH ontology. There is an improvement of 2-4% in
the accuracy when these context-based representations are used instead of
word-based representations.
|
The mechanical performance of metallic metamaterials with 3-dimensional solid
frames is typically a combination of the geometrical effect ("architecture")
and the characteristic size effects of the base material ("microstructure"). In
this study, for the first time, the temperature- and rate-dependent mechanical
response of copper microlattices has been investigated. The microlattices were
fabricated via a localized electrodeposition in liquid (LEL) process which
enables high-precision additive manufacturing of metal at the micro-scale. The
metal microlattices possess a unique microstructure with micron sized grains
that are rich with randomly oriented growth twins and near-ideal nodal
connectivity. Importantly, copper microlattices exhibited unique temperature
(-150 and 25 degree C) and strain rate (0.001-100 s^-1) dependent deformation
behavior during in situ micromechanical testing. Systematic compression tests
of fully dense copper micropillars, equivalent in diameter and length to the
struts of the microlattice at comparable extreme loading conditions, allow us
to investigate the intrinsic deformation mechanism of copper. Combined with the
post-mortem microstructural analysis, substantial shifts in deformation
mechanisms depending on the temperature and strain rate were revealed. On the
one hand, at room temperature (25 degree C), dislocation slip based plastic
deformation occurs and leads to a localized deformation of the micropillars. On
the other hand, at cryogenic temperature (-150 degree C), mechanical twinning
occurs and leads to relatively homogeneous deformation of the micropillars.
Based on the intrinsic deformation mechanisms of copper, the temperature and
strain rate dependent deformation behavior of microlattices could be explained.
|
The focus of this work is sample-efficient deep reinforcement learning (RL)
with a simulator. One useful property of simulators is that it is typically
easy to reset the environment to a previously observed state. We propose an
algorithmic framework, named uncertainty-first local planning (UFLP), that
takes advantage of this property. Concretely, in each data collection
iteration, with some probability, our meta-algorithm resets the environment to
an observed state which has high uncertainty, instead of sampling according to
the initial-state distribution. The agent-environment interaction then proceeds
as in the standard online RL setting. We demonstrate that this simple procedure
can dramatically reduce the sample cost of several baseline RL algorithms on
difficult exploration tasks. Notably, with our framework, we can achieve
super-human performance on the notoriously hard Atari game, Montezuma's
Revenge, with a simple (distributional) double DQN. Our work can be seen as an
efficient approximate implementation of an existing algorithm with theoretical
guarantees, which offers an interpretation of the positive empirical results.
|
Galaxy populations at different cosmic epochs are often linked together by
comoving cumulative number density in observational studies. Many theoretical
works, however, have shown that the number densities of tracked galaxy
populations evolve in bulk and spread out over time. We present a number
density method for linking progenitor and descendant galaxy populations which
takes both of these effects into account. We define probability distribution
functions that capture the evolution and dispersion of galaxy populations in
comoving number density space, and use these functions to assign galaxies at
one redshift $z_f$ probabilities of being progenitors or descendants of a
galaxy population at another redshift $z_0$. These probabilities are then used
as weights for calculating distributions of physical properties such as stellar
mass, star formation rate, or velocity dispersion within the
progenitor/descendant population. We demonstrate that this probabilistic method
provides more accurate predictions for the evolution of physical properties
than either the assumption of a constant number density or the assumption of an
evolving number density in a bin of fixed width, by comparing the predictions
against galaxy populations directly tracked through a cosmological simulation.
We find that the constant number density method performs most poorly at
recovering galaxy properties, the evolving number density method slightly
better, and the probabilistic number density method best of all. The
improvement is present for predictions of both stellar mass as well as inferred
quantities such as star formation rate and velocity dispersion which were not
included in the number density fits. We demonstrate that this method can also
be applied robustly and easily to observational data, and provide a code
package for doing so.
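A toy sketch of the weighting step (the Gaussian form, its drift and scatter, and the fake catalogue below are all assumptions for illustration; the paper fits these distributions to simulations) might look like:

    import numpy as np

    def progenitor_weights(log_n_descendant, log_n_candidates, drift=0.15, scatter=0.3):
        """Weight candidate progenitors by a Gaussian PDF in log cumulative number density.

        log_n_*: log10 comoving cumulative number density; drift and scatter mimic the
        bulk evolution and spreading of a tracked population between two redshifts.
        """
        mu = log_n_descendant + drift
        w = np.exp(-0.5 * ((log_n_candidates - mu) / scatter) ** 2)
        return w / w.sum()

    # Fake catalogue at z_f: number densities and stellar masses of candidate progenitors.
    rng = np.random.default_rng(1)
    log_n = rng.normal(-3.0, 0.5, size=5000)
    log_mstar = 10.5 - 0.8 * (log_n + 3.0) + rng.normal(0.0, 0.2, size=5000)

    w = progenitor_weights(log_n_descendant=-3.2, log_n_candidates=log_n)
    print(f"weighted mean log10(M*): {np.average(log_mstar, weights=w):.2f}")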
|
Recently, machine learning algorithms have successfully entered large-scale
real-world industrial applications (e.g. search engines and email spam
filters). Here, the CPU cost during test time must be budgeted and accounted
for. In this paper, we address the challenge of balancing the test-time cost
and the classifier accuracy in a principled fashion. The test-time cost of a
classifier is often dominated by the computation required for feature
extraction, which can vary drastically across features. We decrease this
extraction time by constructing a tree of classifiers, through which test
inputs traverse along individual paths. Each path extracts different features
and is optimized for a specific sub-partition of the input space. By only
computing features for inputs that benefit from them the most, our cost
sensitive tree of classifiers can match the high accuracies of the current
state-of-the-art at a small fraction of the computational cost.
|
In this article we study the spherical mean Radon transform in $\mathbb{R}^3$
with detectors centered on a plane. We use the consistency method suggested by
the author of this article for the inversion of the transform in 3D. A new
iterative inversion formula is presented. This formula has the benefit of being
local and is suitable for practical reconstructions. The inversion of the
spherical mean Radon transform is required in mathematical models in thermo-
and photo-acoustic tomography, radar imaging, and others.
|
The CNN-encoding of features from entire videos for the representation of
human actions has rarely been addressed. Instead, CNN work has focused on
approaches to fuse spatial and temporal networks, but these were typically
limited to processing shorter sequences. We present a new video representation,
called temporal linear encoding (TLE) and embedded inside CNNs as a new
layer, which captures the appearance and motion throughout entire videos. It
encodes this aggregated information into a robust video feature representation,
via end-to-end learning. Advantages of TLEs are: (a) they encode the entire
video into a compact feature representation, learning the semantics and a
discriminative feature space; (b) they are applicable to all kinds of networks
like 2D and 3D CNNs for video classification; and (c) they model feature
interactions in a more expressive way and without loss of information. We
conduct experiments on two challenging human action datasets: HMDB51 and
UCF101. The experiments show that TLE outperforms current state-of-the-art
methods on both datasets.
|
The theory of quasi-elastic inclusive scattering of polarized leptons off
polarized $^3$He is critically reviewed and the origin of different expressions
for the polarized nuclear response function appearing in the literature is
explained. The sensitivity of the longitudinal asymmetry to the neutron form
factors is thoroughly investigated and the role played by the polarization
angle for minimizing the proton contribution is illustrated.
|
In this paper, we prove a function field-analogue of Poonen-Rains heuristics
on the average size of $p$-Selmer group. Let $E$ be an elliptic curve defined
over $\mathbb{Z}[t]$. Then $E$ is also defined over $\mathbb{F}_q$ for any prime
power $q$. We show that for large enough $q$, the average size of the
$p$-Selmer groups over the family of quadratic twists of $E$ over
$\mathbb{F}_q[t]$ is equal to $p+1$ for all but finitely many primes $p$.
Namely, if we twist the curve in $\mathbb{F}_q[t]$ by polynomials of fixed
degree $n$ and let both $n$ and $q$ approach infinity, then the average size
of the $p$-Selmer group converges to $p+1$.
|
Can transformers generalize efficiently on problems that require dealing with
examples with different levels of difficulty? We introduce a new task tailored
to assess generalization over different complexities and present results that
indicate that standard transformers face challenges in solving these tasks.
These tasks are variations of pointer value retrieval previously introduced by
Zhang et al. (2021). We investigate how the use of a mechanism for adaptive and
modular computation in transformers facilitates the learning of tasks that
demand generalization over the number of sequential computation steps (i.e.,
the depth of the computation graph). Based on our observations, we propose a
transformer-based architecture called Hyper-UT, which combines dynamic function
generation from hypernetworks with adaptive depth from Universal Transformers.
This model demonstrates higher accuracy and a fairer allocation of
computational resources when generalizing to higher numbers of computation
steps. We conclude that mechanisms for adaptive depth and modularity complement
each other in improving efficient generalization concerning example complexity.
Additionally, to emphasize the broad applicability of our findings, we
illustrate that in a standard image recognition task, Hyper-UT's performance
matches that of a ViT model but with considerably reduced computational demands
(achieving over 70\% average savings by effectively using fewer layers).
|
The capabilities of a high (~ 10^-9 resel^-1) contrast, narrow-field,
coronagraphic instrument (CGI) on a space-based AFTA-C or probe-class EXO-C/S
mission, conceived to study the diversity of exoplanets now known to exist into
stellar habitable zones, are particularly and importantly germane to symbiotic
studies of the systems of circumstellar (CS) material from which planets have
emerged and interact with throughout their lifetimes. The small particle
populations in "disks" of co-orbiting materials can trace the presence of
planets through dynamical interactions that perturb the spatial distribution of
the light-scattering debris, detectable at optical wavelengths and resolvable
with an AFTA-C or EXO-S/C CGI. Herein we: (1) present the science case to study
the formation, evolution, architectures, diversity, and properties of the
material in the planet-hosting regions of nearby stars, (2) discuss how a CGI
under current conception can uniquely inform and contribute to those
investigations, (3) consider the applicability of CGI anticipated performance
for CS debris system (CDS) studies, (4) investigate, through AFTA CGI image
simulations, the anticipated interpretive fidelity and metrical results from
specific, representative, zodiacal debris disk observations, (5) comment on
specific observational modes and methods germane to, and augmenting, CDS
observations, (6) present, in detail, the case for augmenting the currently
conceived CGI two-band Nyquist sampled (or better) imaging capability with a
full linear-Stokes imaging polarimeter of great benefit in characterizing the
material properties of CS dust (and exoplanet atmospheres, discussed in other
studies).
|
We investigate the sensitivity of the anomalous dimension-8 neutral triple
gauge couplings via the process $pp\to \nu\nu\gamma$ with fast detector
simulation including pile-up effects for the post-LHC experiments. The
transverse momentum of the final-state photon and the missing transverse energy
distributions are considered in the analysis. We obtain the sensitivity to the $C_{\widetilde B
W}/\Lambda^4$, $C_{B B}/\Lambda^4$, $C_{WW}/\Lambda^4$ and $C_{BW}/\Lambda^4$
couplings at HL-LHC and HE-LHC with an integrated luminosity of 3 ab$^{-1}$ and
15 ab$^{-1}$, respectively. Finally, our numerical results show that one can
reach the constraints at 95\% confidence level without systematic error on
$C_{\widetilde BW}/\Lambda^4$, $C_{B B}/\Lambda^4$, $C_{W W}/\Lambda^4$ and
$C_{BW}/\Lambda^4$ couplings for HL-LHC (HE-LHC) as [-0.38;0.38]
([-0.12;0.12]), [-0.21;0.21]([-0.085;0.085]), [-1.08;1.08]([-0.38;0.38]) and
[-0.48;0.48]([-0.25;0.25]), respectively. They are better than the experimental
limits obtained at the LHC.
|
Federated learning (FL) collaboratively trains a shared global model
depending on multiple local clients, while keeping the training data
decentralized in order to preserve data privacy. However, standard FL methods
ignore the noisy client issue, which may harm the overall performance of the
shared model. We first investigate critical issue caused by noisy clients in FL
and quantify the negative impact of the noisy clients in terms of the
representations learned by different layers. We have the following two key
observations: (1) the noisy clients can severely impact the convergence and
performance of the global model in FL, and (2) the noisy clients can induce
greater bias in the deeper layers than in the shallower layers of the global model.
Based on the above observations, we propose Fed-NCL, a framework that conducts
robust federated learning with noisy clients. Specifically, Fed-NCL first
identifies the noisy clients by estimating the data quality and model
divergence. Then a robust layer-wise aggregation scheme is proposed to adaptively
aggregate the local models of each client to deal with the data heterogeneity
caused by the noisy clients. We further perform the label correction on the
noisy clients to improve the generalization of the global model. Experimental
results on various datasets demonstrate that our algorithm boosts the
performance of different state-of-the-art systems with noisy clients. Our code
is available at https://github.com/TKH666/Fed-NCL
|
We examine the following version of a classic combinatorial search problem
introduced by R\'enyi: Given a finite set $X$ of $n$ elements we want to
identify an unknown subset $Y \subset X$ of exactly $d$ elements by testing,
with as few subsets $A$ of $X$ as possible, whether $A$ contains an element of $Y$
or not. We are primarily concerned with the model where the family of test sets
is specified in advance (non-adaptive) and each test set is of size at most a
given $k$. Our main results are asymptotically sharp bounds on the minimum
number of tests necessary for fixed $d$ and $k$ and for $n$ tending to
infinity.
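To make the setting concrete, here is a brute-force Python check, for a tiny made-up instance, of whether a non-adaptive family of test sets, each of size at most k, identifies every d-subset: the family works exactly when no two distinct d-subsets produce the same pattern of yes/no answers.

    from itertools import combinations

    def identifies_all(n, d, tests):
        """Return (True, None) if the test family distinguishes all d-subsets of range(n)."""
        seen = {}
        for Y in combinations(range(n), d):
            ys = set(Y)
            pattern = tuple(bool(ys & A) for A in tests)   # one bit per test: does A hit Y?
            if pattern in seen:
                return False, (seen[pattern], Y)           # two indistinguishable subsets
            seen[pattern] = Y
        return True, None

    # Toy instance: n = 6, d = 2, six tests of size k = 2.
    tests = [frozenset(s) for s in ({0, 1}, {2, 3}, {4, 5}, {0, 2}, {1, 4}, {3, 5})]
    print(identifies_all(6, 2, tests))   # (True, None) for this particular family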
|
In its simplest form the curvaton paradigm requires the Hubble parameter
during inflation to be bigger than $10^8 \GeV$, but this bound may be evaded in
non-standard settings. In the heavy curvaton scenario the curvaton mass
increases significantly after the end of inflation. We reanalyze the bound in
this set up, taking into account the upper bound on the curvaton mass from
direct decay. We obtain $H_* > 10^8 \GeV$ if the mass increase occurs at the
end of inflation, and $H_* > 10^{-14} \GeV$ if it occurs just before
nucleosynthesis. We then discuss two implementations of the heavy curvaton.
Parameters are constrained in these explicit models, and as a result even
TeV-scale inflation is hard to obtain.
|
Investigation of energy systems integrated with green chemical conversion,
and in particular the combination of green hydrogen and synthetic methanation, is
still a scarce subject in the literature in terms of optimal design and
operation for energy grids under weather intermittency and demand uncertainty.
In this work, a multi-period mixed-integer linear programming (MILP) model is
formulated to identify the optimal design and operation of integrated energy
grids including such chemical conversion systems. Under current carbon dioxide
limitations, this model computes the best configuration of the renewable and
non-renewable-based generators, from a large candidate pool containing
thirty-nine different pieces of equipment, their optimal rated powers, capacities, and
scheduling sequences. Three different scenarios are generated for a specific
location. We observed that photovoltaic, oil co-generator, reciprocating ICE,
micro turbine, and bio-gasifier are the equipment commonly chosen under
the three different scenarios. Results also show that concepts such as green
hydrogen and power-to-gas are currently not preferable for the investigated
location.
|
Although the recent progress in the deep neural network has led to the
development of learnable local feature descriptors, there is no explicit answer
for estimation of the necessary size of a neural network. Specifically, the
local feature is represented in a low-dimensional space, so the neural network
should have a more compact structure. The small networks required for local
feature descriptor learning may be sensitive to initial conditions and learning
parameters and more likely to become trapped in local minima. In order to
address the above problem, we introduce an adaptive pruning Siamese
architecture based on neuron activation to learn local feature descriptors,
making the network more computationally efficient with an improved recognition
rate over more complex networks. Our experiments demonstrate that our learned
local feature descriptors outperform the state-of-the-art methods in patch
matching.
|
Active interferometers use amplifying elements for beam splitting and
recombination. We experimentally implement such a device by using spin exchange
in a Bose-Einstein condensate. The two interferometry modes are initially empty
spin states that get spontaneously populated in the process of parametric
amplification. This nonlinear mechanism scatters atoms into both modes in a
pairwise fashion and generates a nonclassical state. Finally, a matched second
period of spin exchange is performed that nonlinearly amplifies the output
signal and maps the phase onto readily detectable first moments. Depending on
the accumulated phase this nonlinear readout can reverse the initial dynamics
and deamplify the entangled state back to empty spin states. This sequence is
described in the framework of SU(1,1) mode transformations and compared to the
SU(2) angular momentum description of passive interferometers.
|
Hybrid visualizations mix different metaphors in a single layout of a
network. In particular, the popular NodeTrix model, introduced by Henry,
Fekete, and McGuffin in 2007, combines node-link diagrams and matrix-based
representations to support the analysis of real-world networks that are
globally sparse but locally dense. That idea inspired a series of works,
proposing variants or alternatives to NodeTrix. We present a user study that
compares the classical node-link model and three hybrid visualization models
designed to work on the same types of networks. The results of our study
provide interesting indications about the advantages and drawbacks of the
considered models when performing classical analysis tasks. At the same time,
our experiment has some limitations and opens up further research on the
subject.
|
State-of-the-art individual-atom tweezer platforms have relied on loading
schemes based on spatially superimposing the tweezer array with a cloud of cold
atoms created beforehand. Together with inherent atom loss, this dramatically
limits the data rate, as the application sequence must be alternated with the
time-consuming phases of magneto-optical trapping and laser cooling. We
introduce a modular scheme built on an additional cold-atom reservoir and an
array of buffer traps effectively decoupling cold-atom accumulation and
single-atom supply from the quantum-register operation. For this purpose, we
connect a microlens-based tweezer array to a cloud of laser-cooled atoms held
in an auxiliary large-focus dipole trap by utilizing atom transport and buffer
traps for dedicated single-atom supply. We demonstrate deterministic loading of
a hexagonal target structure with atoms solely originating from the reservoir
trap. The results facilitate increased data rates and unlock a path to
continuous operation of individual-atom tweezer arrays in quantum science,
making use of discrete functional modules, operated in parallel and spatially
separated.
|
Topological insulators are a broad class of unconventional materials that are
insulating in the interior but conduct along the edges. This edge transport is
topologically protected and dissipationless. Until recently, all existing
topological insulators, known as quantum Hall states, violated time-reversal
symmetry. However, the discovery of the quantum spin Hall effect demonstrated
the existence of novel topological states not rooted in time-reversal
violations. Here, we lay out an experiment to realize time-reversal topological
insulators in ultra-cold atomic gases subjected to synthetic gauge fields in
the near-field of an atom-chip. In particular, we introduce a feasible scheme
to engineer sharp boundaries where the "edge states" are localized. Besides,
this multi-band system has a large parameter space exhibiting a variety of
quantum phase transitions between topological and normal insulating phases. Due
to their unprecedented controllability, cold-atom systems are ideally suited to
realize topological states of matter and drive the development of topological
quantum computing.
|
In a recent letter, it has been predicted within first-principles studies that
Mn-doped ZrO2 compounds could be good candidates for spintronics applications
because they are expected to exhibit ferromagnetism far beyond room temperature. Our
purpose is to address this issue experimentally for Mn-doped tetragonal
zirconia. We have prepared polycrystalline samples of Y0.15(Zr0.85-yMny)O2
(y=0, 0.05, 0.10, 0.15 & 0.20) by using the standard solid-state method at
equilibrium. The obtained samples were carefully characterized by using x-ray
diffraction, scanning electron microscopy, elemental color mapping, X-ray
photoemission spectroscopy and magnetization measurements. From the detailed
structural analyses, we have observed that the 5% Mn doped compound
crystallized into two symmetries (dominating tetragonal & monoclinic), whereas
higher Mn doped compounds are found to be in the tetragonal symmetry only. The
spectral splitting of the Mn 3s core-level x-ray photoelectron spectra confirms
that Mn ions are in the Mn3+ oxidation state and indicates a local magnetic
moment of about 4.5 {\mu}B/Mn. Magnetic measurements showed that compounds with
up to 10% Mn doping are paramagnetic with antiferromagnetic interactions.
However, the more highly Mn-doped compounds exhibit local ferrimagnetic
ordering. Thus, no ferromagnetism has been observed in any of the Mn-doped
tetragonal ZrO2 samples.
|
Interaction with others influences our opinions and behaviours. Our
activities within various social circles lead to different opinions expressed
in various situations, groups, and ways of communication. Earlier studies on
agent-based modelling of conformism within networks were based on a
single-layer approach. Contrary to that, in this work, we propose a model
incorporating conformism in which a person can share different continuous
opinions on different layers depending on the social circle. Afterwards, we
extend the model with more components that are known to influence opinions,
e.g. authority or openness to new views. These two models are then compared to
show that only pure conformism leads to opinion convergence.
|
We present the results of the uniform analysis of 46 XMM-Newton observations
of six BAL and seven mini-BAL QSOs belonging to the Palomar-Green Quasar
catalogue. Moderate-quality X-ray spectroscopy was performed with the EPIC-pn,
and allowed to characterise the general source spectral shape to be complex,
significantly deviating from a power law emission. A simple power law analysis
in different energy bands strongly suggests absorption to be more significant
than reflection in shaping the spectra. If allowing for the absorbing gas to be
either partially covering the continuum emission source or to be ionised, large
column densities of the order of $10^{22-24}$ cm$^{-2}$ are inferred. When the
statistics were high enough, virtually every source was found to vary in
spectral shape on various time scales, from years to hours. All in all, these
observational results are compatible with radiation driven accretion disk winds
shaping the spectra of these intriguing cosmic sources.
|
We consider the experimental data on the production of strange Lambdas,
multistrange baryons (Xi, Omega), and antibaryons on nuclear targets, in the
energy region from SPS up to LHC, in the framework of the Quark-Gluon String
Model. One remarkable result of this analysis is the significant centrality
dependence of the experimental bar(Xi)+/bar(Lambda) and bar(Omega)+/bar(Lambda)
ratios in heavy-ion collisions at SPS energies.
|
Finite-size scaling expressions for the current near the continuous phase
transition, and for the local density near the first-order transition, are
found in the steady state of the one-dimensional fully asymmetric
simple-exclusion process (FASEP) with open boundaries and discrete-time
dynamics. The corresponding finite-size scaling variables are identified as the
ratio of the chain length to the localization length of the relevant domain
wall.
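For context, a minimal Monte Carlo sketch of an open-boundary fully asymmetric exclusion process measuring the steady-state current (random-sequential update for simplicity, unlike the discrete-time dynamics analysed in the paper; rates and sizes are arbitrary):

    import numpy as np

    def asep_current(L=100, alpha=0.6, beta=0.4, sweeps=10000, burn_in=2000, seed=0):
        """Steady-state current across the middle bond of an open-boundary ASEP."""
        rng = np.random.default_rng(seed)
        tau = np.zeros(L, dtype=int)              # occupation numbers
        hops = 0
        for sweep in range(sweeps):
            for _ in range(L + 1):
                i = rng.integers(-1, L)           # -1: entry bond, L-1: exit bond
                if i == -1:
                    if tau[0] == 0 and rng.random() < alpha:
                        tau[0] = 1
                elif i == L - 1:
                    if tau[L - 1] == 1 and rng.random() < beta:
                        tau[L - 1] = 0
                elif tau[i] == 1 and tau[i + 1] == 0:
                    tau[i], tau[i + 1] = 0, 1
                    if sweep >= burn_in and i == L // 2:
                        hops += 1
        return hops / (sweeps - burn_in)

    # In the high-density phase (beta < 1/2, beta < alpha) the current should come out
    # close to beta * (1 - beta) = 0.24 for these parameters.
    print(asep_current())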
|
The united rest mass and charge of a particle correspond to the two forms of
the same regularity of the unified nature of its ultimate structure. Each of
them contains the electric, weak, strong and the gravitational contributions.
As a consequence, the force of an attraction among the two neutrinos and force
of their repulsion must be defined from the point of view of any of the
existing types of the actions. Therefore, to understand the nature of the micro
world interaction at the fundamental level, one must use the fact that each of
the four types of well known forces includes both a kind of the Newton and a
kind of the Coulomb components. The opinion has been expressed that the
existence of the gravitational parts of the united rest mass and charge would
imply the availability of a fifth force which comes forward in the system as a
unified whole.
|
We report results of an investigation of amplitude calibration for very long
baseline interferometry (VLBI) observations with Korean VLBI Network (KVN).
Amplitude correction factors are estimated based on comparison of KVN
observations at 22~GHz correlated by Daejeon hardware correlator and DiFX
software correlator in Korea Astronomy and Space Science Institute (KASI) with
Very Long Baseline Array (VLBA) observations at 22~GHz by DiFX software
correlator in National Radio Astronomy Observatory (NRAO). We used the
observations of the compact radio sources 3C 454.3, NRAO 512, OJ 287, BL Lac, 3C
279, 1633+382, and 1510-089, which are almost unresolved for baselines in the
range of 350-477 km. Visibility data of the sources obtained with similar
baselines at KVN and VLBA are selected, fringe-fitted, calibrated, and compared
for their amplitudes. We found that visibility amplitudes of KVN observations
should be corrected by factors of 1.10 and 1.35 when correlated by DiFX and
Daejeon correlators, respectively. These correction factors are attributed to
the combination of two steps of 2-bit quantization in KVN observing systems and
characteristics of Daejeon correlator.
|
We propose a mechanism to solve the Higgs naturalness problem through a
cosmological selection process. The discharging of excited field configurations
through membrane nucleation leads to discrete jumps of the cosmological
constant and the Higgs mass, which vary in a correlated way. The resulting
multitude of universes are all empty, except for those in which the
cosmological constant and the Higgs mass are both nearly vanishing. Only under
these critical conditions can inflation be activated and create a non-empty
universe.
|
An adaptive agent predicting the future state of an environment must weigh
trust in new observations against prior experiences. In this light, we propose
a view of the adaptive immune system as a dynamic Bayesian machinery that
updates its memory repertoire by balancing evidence from new pathogen
encounters against past experience of infection to predict and prepare for
future threats. This framework links the observed initial rapid increase of the
memory pool early in life followed by a mid-life plateau to the ease of
learning salient features of sparse environments. We also derive a modulated
memory pool update rule in agreement with current vaccine response experiments.
Our results suggest that pathogenic environments are sparse and that memory
repertoires significantly decrease infection costs even with moderate sampling.
The predicted optimal update scheme maps onto commonly considered competitive
dynamics for antigen receptors.
|
The Krawtchouck polynomials arise naturally in both coding theory and
probability theory and have been studied extensively from these points of view.
However, very little is known about their irreducibility and Galois properties.
Just like many classical families of orthogonal polynomials (e.g. the Legendre
and Laguerre), the Krawtchouck polynomials can be viewed as special cases of
Jacobi polynomials. In this paper we determine the Newton Polygons of certain
Krawtchouck polynomials and show that they are very similar to those of the
Legendre polynomials (and exhibit new cases of irreducibility). However, we
also show that their Galois groups are significantly more complicated to study,
due to the nature of their coefficients, versus those of other classical
orthogonal families.
|
We analyzed agent behavior in complex networks: Barab\'asi-Albert (BA),
Erdos-R\'enyi (ER), and Watts-Strogatz (WS) models under the following rules:
agents (a) randomly select a destination among adjacent nodes; (b) exclude the
most congested adjacent node as a potential destination and randomly select a
destination among the remaining nodes; or (c) select the sparsest adjacent node
as a destination. We focused on small complex networks with node degrees
ranging from zero to a maximum of approximately 20 to study agent behavior in
traffic and transportation networks. We measured the hunting rate, that is, the
rate of change of the number of agents in each node per unit time, and the
imbalance of agent distribution among nodes. Our simulation study reveals that
the topological structure of a network precisely determines agent distribution
when agents perform full random walks; however, their destination selections
alter the agent distribution. Notably, rule (c) makes hunting and imbalance
rates significantly high compared with random walk cases (a) and (b),
irrespective of network types, when the network has a high degree and high
activity rate. Compared with the full random walk in (a), (b) increases the
hunting rate while decreasing the imbalance rate when activity is low; however,
both increase when activity is high. These characteristics exhibit slight
periodic undulations over time. Furthermore, our analysis shows that in the BA,
ER, and WS network models, the hunting rate decreases and the imbalance rate
increases when the system disconnects randomly selected nodes in simulations
where agents follow rules (a)-(c) and the network has the ability to disconnect
nodes within a certain time of all time steps. Our findings can be applied to
various applications related to agent dynamics in complex networks.
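To illustrate rules (a)-(c) concretely, here is a toy sketch; the network size, the number of agents, and the imbalance metric (the coefficient of variation of node occupancies) are assumptions, since the paper's exact definitions are not reproduced in this abstract.

    import numpy as np
    import networkx as nx

    def step(G, counts, rule, rng):
        """Move every agent once according to rule (a), (b) or (c)."""
        new = counts.copy()
        for node in G.nodes:
            nbrs = list(G.neighbors(node))
            if not nbrs:
                continue
            for _ in range(counts[node]):
                if rule == "a":                               # uniformly random neighbour
                    dest = rng.choice(nbrs)
                elif rule == "b":                             # avoid the most congested neighbour
                    busiest = max(nbrs, key=lambda v: counts[v])
                    options = [v for v in nbrs if v != busiest] or nbrs
                    dest = rng.choice(options)
                else:                                         # move to the sparsest neighbour
                    dest = min(nbrs, key=lambda v: counts[v])
                new[node] -= 1
                new[dest] += 1
        return new

    rng = np.random.default_rng(0)
    G = nx.barabasi_albert_graph(100, 3, seed=0)              # small BA network
    for rule in ("a", "b", "c"):
        counts = {v: 10 for v in G.nodes}                     # 10 agents per node initially
        for _ in range(200):
            counts = step(G, counts, rule, rng)
        occ = np.array(list(counts.values()), dtype=float)
        print(f"rule ({rule}): imbalance (std/mean of occupancies) = {occ.std() / occ.mean():.2f}")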
|
Semantic image understanding is a challenging topic in computer vision. It
requires not only detecting all objects in an image, but also identifying all
the relations between them. Detected objects, their labels and the discovered
relations can be used to construct a scene graph which provides an abstract
semantic interpretation of an image. In previous works, relations were
identified by solving an assignment problem formulated as a Mixed-Integer
Linear Program. In this work, we interpret that formulation as an Ordinary
Differential Equation (ODE). The proposed architecture performs scene graph inference by
solving a neural variant of an ODE by end-to-end learning. It achieves
state-of-the-art results on all three benchmark tasks: scene graph generation
(SGGen), classification (SGCls) and visual relationship detection (PredCls) on
Visual Genome benchmark.
|
In order to examine the origin of octupole ordering in NpO2, we propose a
microscopic model composed of neptunium 5f and oxygen 2p orbitals. To study
multipole ordering, we derive effective multipole interactions from the f-p
model by using the fourth-order perturbation theory in terms of p-f hopping
integrals. Analyzing the effective model numerically, we find a tendency toward
\Gamma_{5u} antiferro-octupole ordering.
|
This article is concerned with bending plate theory for thermoelastic
diffusion materials under Green-Naghdi theory.
First, we present the basic equations which characterize the bending of thin
thermoelastic diffusion plates for type II and III models. The theory allows
for the effect of transverse shear deformation without any shear correction
factor, and permits the propagation of waves at a finite speed without energy
dissipation for type II model and with energy dissipation for type III model.
Using the semigroup theory of linear operators, we prove the well-posedness of
both models and the asymptotic behavior of the solutions of the type III model.
For an unbounded plate in the type III model, we prove that a measure
associated with the
thermodynamic process decays faster than an exponential of a polynomial of
second degree.
Finally, we investigate the impossibility of the localization in time of
solutions. The main idea to prove this result is to show the uniqueness of
solutions for the backward in-time problem.
|
We present a high-resolution Keck/ESI spectrum of GRB, which exhibits four
absorption systems at z=1.04329, 1.95260, 1.96337, and 1.98691. The two highest
redshift systems, separated by about 2400 km/s, have previously been suspected
to be kinematic features arising in the circumstellar wind around the
progenitor star. However, the high column densities of low-ionization species
(including possibly neutral hydrogen) in the blue-shifted system are
inconsistent with
the expected highly ionized state of the circumstellar wind from the massive
progenitor star, even prior to the GRB explosion. This conclusion is also
supported by the lack of detectable absorption from fine-structure transitions
of SiII and FeII. Instead we conclude that the two redshift systems are similar
to multiple DLAs found in QSO sight lines with a similar velocity separation
and chemical abundance of [Cr/Fe] and [Zn/Fe]. The absorption system at
z=1.96337 is likely an intervening low-mass galaxy, possibly related to the GRB
host as part of a forming large-scale structure.
|
Macromolecular solubility in solvent mixtures often exhibits striking and
paradoxical behavior. For example, when two well-miscible poor solvents for a
given polymer are mixed together, the same polymer may swell at intermediate
mixing ratios. We combine computer simulations and theoretical
arguments to unveil the first microscopic, generic origin of this
collapse-swelling-collapse scenario. We show that this phenomenon naturally
emerges at constant pressure in mixtures of purely repulsive components,
especially when a delicate balance of the entropically driven depletion
interactions is achieved.
|
Our goal in this paper is to test some popular dark matter models by Ly-alpha
forest in QSO spectra. Recent observations of the size and velocity of Ly-alpha
forest clouds have indicated that the Ly-alpha absorption is probably not given
by collapsed objects, but pre-collapsed regions in the baryonic density field.
Therefore, a linear approximation description would be able to provide valuable
information. We developed a technique to simulate Ly-alpha forest as the
absorption of such pre-collapsed regions under linear approximation regime. The
simulated Ly-alpha forests in models of the standard cold dark matter (SCDM),
the cold plus hot dark matter (CHDM), and the low-density flat cold dark matter
(LCDM) have been confronted with observational features, including 1) the
number density of Ly-alpha lines and its dependencies on redshift and
equivalent width; 2) the distribution of equivalent widths and its redshift
dependence; 3) clustering; and 4) the Gunn-Peterson effect. The "standard" CHDM
model, i.e. 60% cold dark matter, 30% hot dark matter, and 10% baryons, is
found to have difficulty passing the Ly-alpha forest test, probably because it
produces structures too late and favors forming structures on large scales
rather than small-scale objects like Ly-alpha clouds. Within a reasonable range
of J_nu,
the UV background radiation at high redshift, and delta_th, the threshold of
the onset of gravitational collapse of the baryonic matter, the LCDM model is
consistent with the observational data in all four of the above-mentioned
aspects. The SCDM model can also fit the observations, but it requires a
smaller J_nu and a higher delta_th. This suggests that whether a significant
part of the
Ly-alpha forest lines is located in the halos of collapsed objects would be
crucial to the success of SCDM.
|
The paper gives a brief review of the expectation-maximization algorithm
(Dempster 1977) in the comprehensible framework of discrete mathematics. In
Section 2, two prominent estimation methods, the relative-frequency estimation
and the maximum-likelihood estimation, are presented. Section 3 is dedicated to
the expectation-maximization algorithm and a simpler variant, the generalized
expectation-maximization algorithm. In Section 4, two loaded dice are rolled. A
more interesting example is presented in Section 5: The estimation of
probabilistic context-free grammars.
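
For the loaded-dice example of Section 4, a minimal Python sketch of the EM
iteration (the mixing weights, face probabilities, and sample size below are
hypothetical, chosen only to make the example self-contained) is:

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: rolls drawn from a 50/50 mixture of two biased dice.
true_p = np.array([[0.5, 0.1, 0.1, 0.1, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]])
z = rng.integers(0, 2, size=2000)
rolls = np.array([rng.choice(6, p=true_p[k]) for k in z])

# EM for a two-component mixture of categorical (die) distributions.
pi = np.array([0.5, 0.5])                 # mixture weights
p = rng.dirichlet(np.ones(6), size=2)     # face probabilities per die
for _ in range(100):
    # E-step: posterior responsibility of each die for each roll
    lik = pi[:, None] * p[:, rolls]                    # shape (2, N)
    resp = lik / lik.sum(axis=0, keepdims=True)
    # M-step: re-estimate mixture weights and face probabilities
    pi = resp.sum(axis=1) / resp.sum()
    for k in range(2):
        counts = np.array([resp[k, rolls == f].sum() for f in range(6)])
        p[k] = counts / counts.sum()
print(np.round(p, 2))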
|
Generation of the cosmological baryon asymmetry in frameworks of spontaneous
baryogenesis is studied in detail. It is shown that the relation between
baryonic chemical potential and the time derivative of the (pseudo)Goldstone
field essentially depends upon the representation chosen for the fermionic
fields with non-zero baryonic number (quarks). The kinetic equation is modified
and numerically solved in equilibrium for the case of a time-dependent external
background or a finite integration time, so as to be applicable to the case
when the energy conservation law is formally violated.
|
In this contribution to the proceedings of the 29th Solvay Conference on
Physics I will give an overview of some key challenges in our theoretical
understanding of the rheology of glasses, focussing on (i) steady shear flow
curves and their relation to the glass and jamming transitions, (ii) ductile
versus brittle yielding in shear startup and (iii) yielding under oscillatory
shear. I will also briefly discuss connections to the reversible-irreversible
and random organization transitions as well as to the broad field of memory
formation in materials.
|
We study non-equilibrium electronic transport through a quantum dot or
impurity weakly coupled to ferromagnetic leads. Based on the rate equation
formalism we derive noise spectra for the transport current. We show that due
to quantum interference between different spin components of the current the
spectrum develops a peak or a dip at the frequency corresponding to Zeeman
splitting in the quantum dot. The detailed analysis of the spectral structure
of the current is carried out for noninteracting electrons as well as in the
regime of Coulomb blockade.
|
Multimodal fusion is considered a key step in multimodal tasks such as
sentiment analysis, emotion detection, question answering, and others. Most of
the recent work on multimodal fusion does not guarantee the fidelity of the
multimodal representation with respect to the unimodal representations. In this
paper, we propose a variational autoencoder-based approach for modality fusion
that minimizes information loss between unimodal and multimodal
representations. We empirically show that this method outperforms the
state-of-the-art methods by a significant margin on several popular datasets.
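
A minimal PyTorch sketch of this idea (the dimensions, single-layer
encoder/decoder, and MSE reconstruction term are our own illustrative
assumptions, not the authors' architecture):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionVAE(nn.Module):
    """Fuse two unimodal feature vectors into a joint latent representation."""
    def __init__(self, d_a=64, d_b=64, d_z=32):
        super().__init__()
        self.enc = nn.Linear(d_a + d_b, 2 * d_z)   # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_a + d_b)       # reconstructs unimodal features

    def forward(self, x_a, x_b):
        x = torch.cat([x_a, x_b], dim=-1)
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        recon = self.dec(z)
        recon_loss = F.mse_loss(recon, x)          # fidelity to the unimodal inputs
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon_loss + kl

model = FusionVAE()
z, loss = model(torch.randn(8, 64), torch.randn(8, 64))
loss.backward()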
|
Decreased myocardial capillary density has been reported as an important
histopathological feature associated with various heart disorders. Quantitative
assessment of cardiac capillarization typically involves double immunostaining
of cardiomyocytes (CMs) and capillaries in myocardial slices. In contrast,
single immunostaining of basement membrane components is a straightforward
approach to simultaneously label CMs and capillaries, presenting fewer
challenges in background staining. However, subsequent image analysis always
requires manual work in identifying and segmenting CMs and capillaries. Here,
we developed an image analysis tool, AutoQC, to automatically identify and
segment CMs and capillaries in immunofluorescence images of collagen type IV, a
predominant basement membrane protein within the myocardium. In addition,
commonly used capillarization-related measurements can be derived from
segmentation masks. AutoQC features a weakly supervised instance segmentation
algorithm by leveraging the power of a pre-trained segmentation model via
prompt engineering. AutoQC outperformed YOLOv8-Seg, a state-of-the-art instance
segmentation model, in both instance segmentation and capillarization
assessment. Furthermore, the training of AutoQC required only a small dataset
with bounding box annotations instead of pixel-wise annotations, leading to a
reduced workload during network training. AutoQC provides an automated solution
for quantifying cardiac capillarization in basement-membrane-immunostained
myocardial slices, eliminating the need for manual image analysis once it is
trained.
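
Schematically, the weakly supervised pipeline can be pictured as prompting a
pre-trained, promptable segmentation model with detected bounding boxes. The
sketch below uses a purely hypothetical PromptableSegmenter interface and
hypothetical detections, not the actual AutoQC code:

import numpy as np

class PromptableSegmenter:
    """Hypothetical stand-in for a pre-trained, promptable segmentation model."""
    def segment(self, image: np.ndarray, box: tuple) -> np.ndarray:
        x0, y0, x1, y1 = box
        mask = np.zeros(image.shape[:2], dtype=bool)
        mask[y0:y1, x0:x1] = True        # placeholder: a real model returns a tight mask
        return mask

def instance_masks(image, boxes, labels, segmenter):
    """Turn detected boxes (trained from box-only annotations) into instance masks."""
    masks = {"cardiomyocyte": [], "capillary": []}
    for box, label in zip(boxes, labels):
        masks[label].append(segmenter.segment(image, box))
    return masks

image = np.zeros((256, 256), dtype=np.float32)
boxes = [(10, 10, 60, 60), (100, 100, 120, 118)]     # hypothetical detections
labels = ["cardiomyocyte", "capillary"]
masks = instance_masks(image, boxes, labels, PromptableSegmenter())
# Example of a derived measurement: capillaries per unit image area.
density = len(masks["capillary"]) / (image.shape[0] * image.shape[1])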
|
Adversarial training, a method for learning robust deep networks, is
typically assumed to be more expensive than traditional training due to the
necessity of constructing adversarial examples via a first-order method like
projected gradient descent (PGD). In this paper, we make the surprising
discovery that it is possible to train empirically robust models using a much
weaker and cheaper adversary, an approach that was previously believed to be
ineffective, rendering the method no more costly than standard training in
practice. Specifically, we show that adversarial training with the fast
gradient sign method (FGSM), when combined with random initialization, is as
effective as PGD-based training but has significantly lower cost. Furthermore,
we show that FGSM adversarial training can be further accelerated by using
standard techniques for efficient training of deep networks, allowing us to
learn a robust CIFAR10 classifier with 45% robust accuracy to PGD attacks with
$\epsilon=8/255$ in 6 minutes, and a robust ImageNet classifier with 43% robust
accuracy at $\epsilon=2/255$ in 12 hours, in comparison to past work based on
"free" adversarial training which took 10 and 50 hours to reach the same
respective thresholds. Finally, we identify a failure mode referred to as
"catastrophic overfitting" which may have caused previous attempts to use FGSM
adversarial training to fail. All code for reproducing the experiments in this
paper as well as pretrained model weights are at
https://github.com/locuslab/fast_adversarial.
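
A minimal PyTorch sketch of the core step, FGSM with a random start (the
epsilon and step size here are illustrative; the authors' full implementation
is in the repository linked above):

import torch

def fgsm_with_random_start(model, loss_fn, x, y, eps=8/255, alpha=10/255):
    """One cheap adversarial example per input: random init + a single FGSM step."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random initialization
    delta.requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    # Single signed-gradient step, then project back into the eps-ball.
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

# Inside the training loop one would then zero the model gradients, compute the
# loss on the adversarial batch, and update the parameters as usual:
#   x_adv = fgsm_with_random_start(model, criterion, x, y)
#   optimizer.zero_grad(); criterion(model(x_adv), y).backward(); optimizer.step()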
|
This study presents a meshless-based local reanalysis (MLR) method. The
purpose of this study is to extend reanalysis methods to the Kriging
interpolation meshless method due to its high efficiency. In this study, two
reanalysis methods, the combined approximations (CA) and indirect factorization
updating (IFU) methods, are utilized. Considering the computational cost of
meshless methods, the reanalysis method improves the efficiency of the full
meshless method significantly. Compared with finite element method (FEM)-based
reanalysis methods, the main advantage of the meshless-based reanalysis method
is that it removes the limitation of mesh connectivity. With meshless-based
reanalysis it is much easier to obtain the stiffness matrix, even for mesh
distortion problems. However, compared with the FEM-based reanalysis method,
the critical challenge is that many more nodes are needed in the influence
domain due to the high-order interpolation. Therefore, a local reanalysis
method which only needs to
calculate the local stiffness matrix in the influence domain is suggested to
improve the efficiency further. Several typical numerical examples are tested
and the performance of the suggested method is verified.
|
The numerical simulation of complex physical processes requires the use of
economical discrete models. This lecture presents a general paradigm of
deriving a posteriori error estimates for the Galerkin finite element
approximation of nonlinear problems. Employing duality techniques as used in
optimal control theory, the error in the target quantities is estimated in terms
of weighted `primal' and `dual' residuals. On the basis of the resulting local
error indicators economical meshes can be constructed which are tailored to the
particular goal of the computation. The performance of this {\it Dual Weighted
Residual Method} is illustrated for a model situation in computational fluid
mechanics: the computation of the drag of a body in a viscous flow, the drag
minimization by boundary control and the investigation of the optimal
solution's stability.
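
Schematically, the Dual Weighted Residual estimate for a target functional $J$
takes the form $J(u) - J(u_h) \approx \sum_{K} \rho_K(u_h)\,\omega_K(z)$, where
$\rho_K(u_h)$ is the residual of the computed primal solution on cell $K$ and
the weight $\omega_K(z)$ is obtained from an (approximately computed) solution
$z$ of the linearized dual problem associated with $J$; the local products
$\rho_K\,\omega_K$ serve as the error indicators that drive goal-oriented mesh
adaptation.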
|
Let G be a finitely presented group, and let {G_i} be a collection of finite
index normal subgroups that is closed under intersections. Then, we prove that
at least one of the following must hold: 1. G_i is an amalgamated free product
or HNN extension, for infinitely many i; 2. the Cayley graphs of G/G_i (with
respect to a fixed finite set of generators for G) form an expanding family; 3.
inf_i (d(G_i)-1)/[G:G_i] = 0, where d(G_i) is the rank of G_i.
The proof involves an analysis of the geometry and topology of finite Cayley
graphs. Several applications of this result are given.
|
We compute the relative-order-v^4 contribution to gluon fragmentation into
quarkonium in the 3S1 color-singlet channel, using the nonrelativistic QCD
(NRQCD) factorization approach. The QCD fragmentation process contains infrared
divergences that produce single and double poles in epsilon in 4-2epsilon
dimensions. We devise subtractions that isolate the pole contributions, which
ultimately are absorbed into long-distance NRQCD matrix elements in the NRQCD
matching procedure. The matching procedure involves two-loop renormalizations
of the NRQCD operators. The subtractions are integrated over the phase space
analytically in 4-2epsilon dimensions, and the remainder is integrated over the
phase-space numerically. We find that the order-v^4 contribution is enhanced
relative to the order-v^0 contribution. However, the order-v^4 contribution is
not important numerically at the current level of precision of
quarkonium-hadroproduction phenomenology. We also estimate the contribution to
hadroproduction from gluon fragmentation into quarkonium in the 3PJ color-octet
channel and find that it is significant in comparison to the complete
next-to-leading-order-in-alpha_s contribution in that channel.
|
Machine-learning-based parameterizations (i.e. representation of sub-grid
processes) of global climate models or turbulent simulations have recently been
proposed as a powerful alternative to physical, but empirical, representations,
offering a lower computational cost and higher accuracy. Yet, those approaches
still suffer from a lack of generalization and extrapolation beyond the
training data, which is however critical to projecting climate change or
unobserved regimes of turbulence. Here we show that a multi-fidelity approach,
which integrates datasets of different accuracy and abundance, can provide the
best of both worlds: the capacity to extrapolate leveraging the
physically-based parameterization and a higher accuracy using the
machine-learning-based parameterizations. In an application to climate
modeling, the multi-fidelity framework yields more accurate climate projections
without requiring a major increase in computational resources. Our
multi-fidelity randomized prior networks (MF-RPNs) combine physical
parameterization data as low-fidelity and data from a storm-resolving
historical run as high-fidelity. To extrapolate beyond the training data, the
MF-RPNs are tested on high-fidelity data from warming scenarios ($+4$K). We
show the MF-RPN's capacity to return much
more skillful predictions compared to either low- or high-fidelity (historical
data) simulations trained only on one regime while providing trustworthy
uncertainty quantification across a wide range of scenarios. Our approach paves
the way for the use of machine-learning based methods that can optimally
leverage historical observations or high-fidelity simulations and extrapolate
to unseen regimes such as climate change.
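
A minimal PyTorch sketch of a randomized prior network and an illustrative
two-stage low-/high-fidelity fit (the architecture, data, and training schedule
are our own assumptions, not the exact MF-RPN recipe):

import torch
import torch.nn as nn

class RandomizedPriorNet(nn.Module):
    """Prediction = trainable network + beta * frozen, randomly initialized prior."""
    def __init__(self, d_in, d_out, beta=1.0):
        super().__init__()
        self.trainable = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))
        self.prior = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))
        for p in self.prior.parameters():
            p.requires_grad_(False)          # the prior stays fixed
        self.beta = beta

    def forward(self, x):
        return self.trainable(x) + self.beta * self.prior(x)

def fit(model, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        opt.step()

# Illustrative multi-fidelity use: pre-fit an ensemble member on abundant
# low-fidelity (physical-parameterization) data, then fine-tune it on scarce
# high-fidelity (storm-resolving) data; the ensemble spread gives uncertainty.
member = RandomizedPriorNet(d_in=8, d_out=1)
x_lo, y_lo = torch.randn(1024, 8), torch.randn(1024, 1)   # hypothetical data
x_hi, y_hi = torch.randn(64, 8), torch.randn(64, 1)
fit(member, x_lo, y_lo)
fit(member, x_hi, y_hi, epochs=50)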
|
Important correspondences in representation theory can be regarded as
restrictions of the Morita--Tachikawa correspondence. Moreover, this
correspondence motivates the study of many classes of algebras like Morita
algebras and gendo-symmetric algebras. Explicitly, the Morita--Tachikawa
correspondence states that endomorphism algebras of generators-cogenerators
over finite-dimensional algebras are exactly the finite-dimensional algebras
with dominant dimension at least two.
In this paper, we introduce the concepts of quasi-generators and
quasi-cogenerators which generalise generators and cogenerators, respectively.
Using these new concepts, we present higher versions of the Morita--Tachikawa
correspondence that take into account relative dominant dimension with respect
to a self-orthogonal module with arbitrary projective and injective dimension.
These new versions also hold over Noetherian algebras which are finitely
generated and projective over a commutative Noetherian ring.
|
We prove an existence result for a class of Dirichlet boundary value problems
with discontinuous nonlinearity and involving a Leray-Lions operator. The proof
combines monotonicity methods for elliptic problems, variational inequality
techniques and basic tools related to monotone operators.
|
In a previous work we devised a framework to derive generalised gradient
systems for an evolution equation from the large deviations of an underlying
microscopic system, in the spirit of the Onsager-Machlup relations. Of
particular interest is the case where the microscopic system consists of random
particles, and the macroscopic quantity is the empirical measure or
concentration. In this work we take the particle flux as the macroscopic
quantity, which is related to the concentration via a continuity equation. By a
similar argument the large deviations can induce a generalised gradient or
Generic system in the space of fluxes. In a general setting we study how flux
gradient or generic systems are related to gradient systems of concentrations.
The arguments are explained by the example of reacting particle systems, which
is later expanded to include spatial diffusion as well.
|
One of the main current issues in Neurobiology concerns the understanding of
interrelated spiking activity among multineuronal ensembles and differences
between stimulus-driven and spontaneous activity in neurophysiological
experiments. Multi-electrode array recordings, which are now commonly used,
monitor neuronal activity in the form of spike trains from many well-identified
neurons. A basic question when analyzing such data is the identification of the
directed graph describing "synaptic coupling" between neurons. In this article
we deal with this matter working with a high quality multielectrode array
recording dataset (Pouzat et al., 2015) from the first olfactory relay of the
locust, $Schistocerca$ $americana$. From a mathematical point of view this
paper presents two novelties. First, we propose a procedure that deals with
the small sample sizes met in actual datasets. Moreover, we address the
sensitive case of partially observed networks. Our starting point is the
procedure introduced in Duarte et al. (2016). We evaluate the performance of
both original and improved procedures through simulation studies, which are
also used for parameter tuning and for exploring the effect of recording only a
small subset of the neurons of a network.
|
The Chandra Deep Field-South and North surveys (CDFs) provide unique windows
into the cosmic history of X-ray emission from normal (non-active) galaxies.
Scaling relations of normal galaxy X-ray luminosity (L_X) with star formation
rate (SFR) and stellar mass (M_star) have been used to show that the formation
rates of low-mass and high-mass X-ray binaries (LMXBs and HMXBs, respectively)
evolve with redshift across z = 0-2 following L_HMXB/SFR ~ 1 + z and
L_LMXB/M_star ~ (1 + z)^{2-3}. However, these measurements alone do not
directly reveal the physical mechanisms behind the redshift evolution of X-ray
binaries (XRBs). We derive star-formation histories for a sample of 344 normal
galaxies in the CDFs, using spectral energy distribution (SED) fitting of
FUV-to-FIR photometric data, and construct a self-consistent, age-dependent
model of the X-ray emission from the galaxies. Our model quantifies how X-ray
emission from hot gas and XRB populations vary as functions of host
stellar-population age. We find that (1) the ratio L_X/M_star declines by a
factor of ~1000 from 0-10 Gyr and (2) the X-ray SED becomes harder with
increasing age, consistent with a scenario in which the hot gas contribution to
the X-ray SED declines quickly for ages above 10 Myr. When dividing our sample
into subsets based on metallicity, we find some indication that L_X/M_star is
elevated for low-metallicity galaxies, consistent with recent studies of X-ray
scaling relations. However, additional statistical constraints are required to
quantify both the age and metallicity dependence of X-ray emission from
star-forming galaxies.
|
We use the \emph{unit-graphs} and the \emph{special unit-digraphs} on matrix
rings to show that every $n \times n$ nonzero matrix over $\Bbb F_q$ can be
written as a sum of two $\operatorname{SL}_n$-matrices when $n>1$. We compute
the eigenvalues of these graphs in terms of Kloosterman sums and study their
spectral properties; and prove that if $X$ is a subset of $\operatorname{Mat}_2
(\Bbb F_q)$ with size $|X| > \frac{2 q^3 \sqrt{q}}{q - 1}$, then $X$ contains
at least two distinct matrices whose difference has determinant $\alpha$ for
any $\alpha \in \Bbb F_q^{\ast}$. Using this result we also prove a sum-product
type result: if $A,B,C,D \subseteq \Bbb F_q$ satisfy $\sqrt[4]{|A||B||C||D|}=
\Omega (q^{0.75})$ as $q \rightarrow \infty$, then $(A - B)(C - D)$ equals all
of $\Bbb F_q$. In particular, if $A$ is a subset of $\Bbb F_q$ with cardinality
$|A| > \frac{3} {2} q^{\frac{3}{4}}$, then the subset $(A - A) (A - A)$ equals
all of $\Bbb F_q$. We also recover a classical result: every element in any
finite ring of odd order can be written as the sum of two units.
|
This paper investigates a novel task of talking face video generation solely
from speeches. The speech-to-video generation technique can spark interesting
applications in entertainment, customer service, and human-computer-interaction
industries. Indeed, the timbre, accent and speed in speeches could contain rich
information relevant to speakers' appearance. The challenge mainly lies in
disentangling the distinct visual attributes from audio signals. In this
article, we propose a light-weight, cross-modal distillation method to extract
disentangled emotional and identity information from unlabelled video inputs.
The extracted features are then integrated by a generative adversarial network
into talking face video clips. With carefully crafted discriminators, the
proposed framework achieves realistic generation results. Experiments with
observed individuals demonstrated that the proposed framework captures the
emotional expressions solely from speeches, and produces spontaneous facial
motion in the video output. Compared to the baseline method where speeches are
combined with a static image of the speaker, the results of the proposed
framework are almost indistinguishable. User studies also show that the proposed
method outperforms the existing algorithms in terms of emotion expression in
the generated videos.
|
Network pruning and quantization are proven to be effective ways for deep
model compression. To obtain a highly compact model, most methods first perform
network pruning and then conduct network quantization based on the pruned
model. However, this strategy may ignore that they would affect each other and
thus performing them separately may lead to sub-optimal performance. To address
this, performing pruning and quantization jointly is essential. Nevertheless,
how to make a trade-off between pruning and quantization is non-trivial.
Moreover, existing compression methods often rely on some pre-defined
compression configurations. Some attempts have been made to search for optimal
configurations, which, however, may incur an unbearable optimization cost. To
address
the above issues, we devise a simple yet effective method named Single-path Bit
Sharing (SBS). Specifically, we first consider network pruning as a special
case of quantization, which provides a unified view for pruning and
quantization. We then introduce a single-path model to encode all candidate
compression configurations. In this way, the configuration search problem is
transformed into a subset selection problem, which significantly reduces the
number of parameters, computational cost and optimization difficulty. Relying
on the single-path model, we further introduce learnable binary gates to encode
the choice of bitwidth. By jointly training the binary gates in conjunction
with network parameters, the compression configurations of each layer can be
automatically determined. Extensive experiments on both CIFAR-100 and ImageNet
show that SBS is able to significantly reduce computational cost while
achieving promising performance. For example, our SBS compressed MobileNetV2
achieves 22.6x Bit-Operation (BOP) reduction with only 0.1% drop in the Top-1
accuracy.
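
The gating idea can be caricatured as follows; this is a toy PyTorch sketch
under our own simplifying assumptions (soft gates over a few candidate
bitwidths, with 0 bits playing the role of pruning), not the actual SBS
formulation, which encodes configurations along a single path and in practice
relies on straight-through gradient estimates for the quantizer:

import torch
import torch.nn as nn

def quantize(w, bits):
    """Uniform symmetric quantization of w to the given number of bits (0 = pruned)."""
    if bits == 0:
        return torch.zeros_like(w)
    levels = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / levels
    return torch.round(w / scale).clamp(-levels, levels) * scale

class GatedQuantLinear(nn.Module):
    """Toy layer: learnable gates over candidate bitwidths {0, 2, 4, 8}."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.logits = nn.Parameter(torch.zeros(4))        # one logit per candidate bitwidth
        self.bitwidths = [0, 2, 4, 8]

    def forward(self, x):
        probs = torch.softmax(self.logits, dim=0)
        # Soft mixture of quantized weights during search; after the search the
        # layer keeps the single bitwidth with the largest probability.
        w = sum(p * quantize(self.weight, b) for p, b in zip(probs, self.bitwidths))
        return x @ w.t()

layer = GatedQuantLinear(16, 8)
y = layer(torch.randn(4, 16))
print("chosen bitwidth:", layer.bitwidths[int(layer.logits.argmax())])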
|
We give a succinct data-structure that stores a tree with colors on the
nodes. Given a node x and a color alpha, the structure finds the nearest node
to x with color alpha. This results improves the $O(n\log n)$-bits structure of
Gawrychowski et al.~[CPM 2016].
|
Unconventional superconductors have been long sought for their potential
applications in quantum technologies and devices. A key challenge impeding this
effort is the difficulty associated with probing and characterizing candidate
materials and establishing their order parameter. In this Letter, we present a
platform that allows us to spectroscopically probe unconventional
superconductivity in thin-layer materials via the proximity effect. We show
that inducing an s-wave gap in a sample with an intrinsic d-wave instability
leads to the formation of bound-states of quasiparticle pairs, which manifest
as a collective mode in the d-wave channel. This finding provides a way to
study the underlying pairing interactions vicariously through the collective
mode spectrum of the system. Upon further cooling of the system we observe that
this mode softens considerably and may even condense, signaling the onset of
time-reversal symmetry breaking superconductivity. Therefore, our proposal also
allows for the creation and study of these elusive unconventional states.
|
We present a model based on a dipole picture with a hard and a soft pomeron
in which large dipoles couple to the soft pomeron and small dipoles couple to
the hard pomeron. The parameters in the model are fixed by proton-proton
scattering and the proton structure function. The model is then applied
successfully to the proton charm structure function, the proton longitudinal
structure function, J/psi photoproduction, deep virtual Compton scattering, the
real photon-proton total cross section, the real photon-photon cross section,
and the photon structure function. Differences between our predictions and data
on charm production in real photon-photon interactions and the virtual
gamma-gamma cross section are discussed.
|
While large language models (LMs) have shown remarkable capabilities across
numerous tasks, they often struggle with simple reasoning and planning in
physical environments, such as understanding object permanence or planning
household activities. The limitation arises from the fact that LMs are trained
only on written text and miss essential embodied knowledge and skills. In this
paper, we propose a new paradigm of enhancing LMs by finetuning them with world
models, to gain diverse embodied knowledge while retaining their general
language capabilities. Our approach deploys an embodied agent in a world model,
particularly a simulator of the physical world (VirtualHome), and acquires a
diverse set of embodied experiences through both goal-oriented planning and
random exploration. These experiences are then used to finetune LMs to teach
diverse abilities of reasoning and acting in the physical world, e.g., planning
and completing goals, object permanence and tracking, etc. Moreover, it is
desirable to preserve the generality of LMs during finetuning, which
facilitates generalizing the embodied knowledge across tasks rather than being
tied to specific simulations. We thus further introduce classical elastic
weight consolidation (EWC) for
selective weight updates, combined with low-rank adapters (LoRA) for training
efficiency. Extensive experiments show our approach substantially improves base
LMs on 18 downstream tasks by 64.28% on average. In particular, the small LMs
(1.3B, 6B, and 13B) enhanced by our approach match or even outperform much
larger LMs (e.g., ChatGPT).
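
A minimal sketch of combining an EWC penalty with LoRA-style adapters in
PyTorch (the adapter shapes, Fisher weights, and loss weighting are
illustrative assumptions, not the paper's exact setup):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update A @ B."""
    def __init__(self, base: nn.Linear, rank=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x):
        return self.base(x) + x @ (self.A @ self.B).t()

def ewc_penalty(model, fisher, theta_star):
    """EWC: penalize drift on parameters the original task deems important."""
    loss = 0.0
    for name, p in model.named_parameters():
        if p.requires_grad and name in fisher:
            loss = loss + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return loss

base = nn.Linear(16, 16)
lora = LoRALinear(base)
out = lora(torch.randn(2, 16))
# Training objective (schematic): task loss on embodied-experience data plus
# lambda/2 times the EWC penalty to preserve general language ability:
#   total = task_loss + 0.5 * lam * ewc_penalty(model, fisher, theta_star)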
|
Most generalization bounds in learning theory are based on some measure of
the complexity of the hypothesis class used, independently of any algorithm. In
contrast, the notion of algorithmic stability can be used to derive tight
generalization bounds that are tailored to specific learning algorithms by
exploiting their particular properties. However, as in much of learning theory,
existing stability analyses and bounds apply only in the scenario where the
samples are independently and identically distributed. In many machine learning
applications, however, this assumption does not hold. The observations received
by the learning algorithm often have some inherent temporal dependence.
This paper studies the scenario where the observations are drawn from a
stationary phi-mixing or beta-mixing sequence, a widely adopted assumption in
the study of non-i.i.d. processes that implies a dependence between
observations weakening over time. We prove novel and distinct stability-based
generalization bounds for stationary phi-mixing and beta-mixing sequences.
These bounds strictly generalize the bounds given in the i.i.d. case and apply
to all stable learning algorithms, thereby extending the use of
stability-bounds to non-i.i.d. scenarios.
We also illustrate the application of our phi-mixing generalization bounds to
general classes of learning algorithms, including Support Vector Regression,
Kernel Ridge Regression, Support Vector Machines, and many other kernel
regularization-based and relative entropy-based regularization algorithms.
These novel bounds can thus be viewed as the first theoretical basis for the
use of these algorithms in non-i.i.d. scenarios.
|
The six nondegeneracy conditions of geometric nature that are satisfied by
the only six possibly existing nondegenerate general classes I, II, III-1,
III-2, IV-1, IV-2 of 5-dimensional CR manifolds are shown to be readable
instantaneously from their elementarily normalized respective defining graphed
equations, without advanced Moser theory.
|
In this paper a randomness evaluation of a block cipher for secure image
communication is presented. The GFHT cipher is a genetic algorithm that
combines gene fusion (GF) and horizontal gene transfer (HGT), both inspired by
antibiotic resistance in bacteria. The symmetric encryption key is generated by
four pairs of chromosomes with multi-layer random sequences. The encryption
starts by a GF of the principal key-agent in a single block, then HGT performs
obfuscation where the genes are pixels and the chromosomes are the rows and
columns. A salt extracted from the image hash value is used to implement a
one-time pad (OTP) scheme, hence a modification of one pixel generates a
different encryption key without changing the main passphrase or key.
Therefore, an extreme avalanche effect of 99% is achieved. Randomness
evaluation based on random matrix theory, power spectral density, avalanche
effect, 2D auto-correlation, pixels randomness tests and chi-square hypotheses
testing show that encrypted images adopt the statistical behavior of uniform
white noise; hence validating the theoretical model by experimental results.
Moreover, performance comparison with chaos-genetic ciphers shows the merit of
the GFHT algorithm.
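
For reference, the pixel-level avalanche measure (often reported as NPCR) is
the fraction of ciphertext pixels that change when a single plaintext pixel is
modified; values above roughly 99% are expected of a good image cipher. Whether
this is exactly the criterion behind the 99% figure above is our assumption. A
generic Python sketch, independent of the GFHT internals:

import numpy as np

def pixel_change_rate(c1: np.ndarray, c2: np.ndarray) -> float:
    """Fraction of ciphertext pixels that differ between two encrypted images."""
    return float(np.mean(c1 != c2))

# Usage (schematic): encrypt an image, flip a single pixel of the plaintext,
# encrypt again with the same passphrase, and compare the two ciphertexts:
#   rate = pixel_change_rate(encrypt(img, key), encrypt(img_flipped, key))
# A rate near 1 indicates that a one-pixel change propagates to essentially the
# whole ciphertext, consistent with the salt-based one-time-pad design.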
|
We study the efficiency of galactic feedback in the early Universe by
stacking the [C II] 158 um emission in a large sample of normal star-forming
galaxies at 4 < z < 6 from the ALMA Large Program to INvestigate [C II] at
Early times (ALPINE) survey. Searching for typical signatures of outflows in
the high-velocity tails of the stacked [C II] profile, we observe (i)
deviations from a single-component Gaussian model in the combined residuals and
(ii) broad emission in the stacked [C II] spectrum, with velocities of |v|<~
500 km/s. The significance of these features increases when stacking the subset
of galaxies with star formation rates (SFRs) higher than the median (SFRmed =
25 Msun/yr), thus confirming their star-formation-driven nature. The estimated
mass outflow rates are comparable to the SFRs, yielding mass-loading factors of
the order of unity (similarly to local star-forming galaxies), suggesting that
star-formation-driven feedback may play a lesser role in quenching galaxies at
z > 4. From the stacking analysis of the datacubes, we find that the combined
[C II] core emission (|v|< 200 km/s) of the higher-SFR galaxies is extended on
physical sizes of ~ 30 kpc (diameter scale), well beyond the analogous [C II]
core emission of lower-SFR galaxies and the stacked far-infrared continuum. The
detection of such extended metal-enriched gas, likely tracing circumgalactic
gas enriched by past outflows, corroborates previous similar studies,
confirming that baryon cycle and gas exchanges with the circumgalactic medium
are at work in normal star-forming galaxies already at early epochs.
|
On a foliated manifold equipped with an action of a compact Lie group $G$, we
study a class of almost-coupling Poisson and Dirac structures, in the context
of deformation theory and the method of averaging.
|
We construct effective Hamiltonians which despite their apparently
nonrelativistic form incorporate relativistic effects by involving parameters
which depend on the relevant momentum. For some potentials the corresponding
energy eigenvalues may be determined analytically. Applied to two-particle
bound states, it turns out that in this way a nonrelativistic treatment may
indeed be able to simulate relativistic effects. Within the framework of hadron
spectroscopy, this lucky circumstance may be an explanation for the sometimes
extremely good predictions of nonrelativistic potential models even in
relativistic regions.
|
To each sequence $(a_n)$ of positive real numbers we associate a growing
sequence $(T_n)$ of continuous trees built recursively by gluing at step $n$ a
segment of length $a_n$ on a uniform point of the pre-existing tree, starting
from a segment $T_1$ of length $a_1$. Previous works on that model focus on the
influence of $(a_n)$ on the compactness and Hausdorff dimension of the limiting
tree. Here we consider the cases where the sequence $(a_n)$ is regularly
varying with a non-negative index, so that the sequence $(T_n)$ explodes. We
determine the asymptotics of the height of $T_n$ and of the subtrees of $T_n$
spanned by the root and $\ell$ points picked uniformly at random and
independently in $T_n$, for all $\ell \in \mathbb N$.
|
For tabular data sets, we explore data and model distillation, as well as
data denoising. These techniques improve both gradient-boosting models and a
specialized DNN architecture. While gradient boosting is known to outperform
DNNs on tabular data, we close the gap for datasets with 100K+ rows and give
DNNs an advantage on small data sets. We extend these results with input-data
distillation and optimized ensembling to help DNN performance match or exceed
that of gradient boosting. As a theoretical justification of our practical
method, we prove its equivalence to classical cross-entropy knowledge
distillation. We also qualitatively explain the superiority of DNN ensembles
over XGBoost on small data sets. For an industry end-to-end real-time ML
platform with 4M production inferences per second, we develop a model-training
workflow based on data sampling that distills ensembles of models into a single
gradient-boosting model favored for high-performance real-time inference,
without performance loss. Empirical evaluation shows that the proposed
combination of methods consistently improves model accuracy over prior best
models across several production applications deployed worldwide.
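
One way to picture the distillation step is the following scikit-learn sketch
(our own simplified illustration, not the production workflow): label sampled
inputs with the averaged predictions of a small DNN ensemble and fit a single
gradient-boosting model to those soft labels for fast serving.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=5000)

# "Teacher": a small ensemble of DNNs trained on the original labels.
teachers = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                         random_state=s).fit(X, y) for s in range(3)]

# Distillation data: (possibly freshly sampled) inputs labelled by the ensemble mean.
X_distill = rng.normal(size=(20000, 10))
soft_labels = np.mean([t.predict(X_distill) for t in teachers], axis=0)

# "Student": one gradient-boosting model, cheap to serve at inference time.
student = GradientBoostingRegressor(n_estimators=200).fit(X_distill, soft_labels)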
|
We present imaging and force spectroscopy measurements of DNA molecules
adsorbed on functionalized mica. By means of Non-Contact mode AFM (NC-AFM) in
Ultra High Vacuum (UHV), the frequency shift (\Delta f) versus separation (z)
curves were measured providing a quantitative measurement of both force and
energy of the tip-DNA interaction. Similarly, topographic images of the
adsorbed DNA molecules in constant frequency shift mode were collected. The
high resolution force measurements confirm the imaging contrast difference
between the substrate and DNA. The force curves measured along the DNA molecule
can be divided into two classes showing marked differences in the minimum of
the interaction force and energy, indicating that NC-AFM could deliver chemical
contrast along the DNA molecule.
|
Off-axis twisted waveguides possess unique optical properties such as
circular and orbital angular momentum (OAM) birefringence, setting them apart
from their straight counterparts. Analyzing mode formation in such helical
waveguides relies on the use of specific coordinate frames that follow the
twist of the structure, making the waveguide invariant along one of the new
coordinates. In this study, the differences between modes forming in
high-contrast off-axis twisted waveguides defined in the three most important
coordinate systems - the Frenet-Serret, the helicoidal, or the Overfelt frame -
are investigated through numerical simulations. We explore modal
characteristics up to high twist rates (pitch: 50 $\mu$m) and clarify a
transformation that allows the modal fields and the effective index to be
mapped back to the laboratory frame.
When the waveguide is single-mode, the fundamental modes of the three
types of waveguides show significant differences in terms of birefringence,
propagation loss, and polarization. Conversely, the modal characteristics of
the investigated waveguides are comparable in the multimode domain.
Furthermore, our study examines the impact of twisting on spatial mode
properties with the results suggesting a potential influence of the photonic
spin Hall and orbital Hall effects. Additionally, modes of single-mode helical
waveguides were found to exhibit superchiral fields on their surfaces.
Implementation approaches such as 3D-nanoprinting or fiber-preform twisting
open the doors to potential applications of such highly twisted waveguides,
including chip-integrated devices for broadband spin- and OAM-preserving
optical signal transport, as well as applications in chiral spectroscopy or
nonlinear frequency conversion.
|
The advanced features of today's smart phones and handheld devices, like the
increased memory and processing capabilities, have allowed them to act even as
information providers. Thus a smart phone hosting web services is no longer a
fantasy. But the relevant discovery of these services provided by smart phones
has become quite complex, because of the volume of services possible with each
Mobile Host providing some services. Centralized registries have severe
drawbacks in such a scenario, and alternate means of service discovery need to
be addressed. The P2P domain, with its resource-sharing capabilities, comes in
quite handy, and in this paper we provide an alternate approach to the UDDI
registry for discovering mobile web services. The services are published into
the P2P network as JXTA modules and the discovery issues of these module
advertisements are addressed. The approach also provides alternate means of
identifying the Mobile Host.
|
We study the statistics of free-surface turbulence at large Reynolds numbers
produced by direct numerical simulations in a fluid layer at different
thicknesses with a fixed characteristic forcing scale. We observe the
production of
a transient inverse cascade, with a duration which depends on the thickness of
the layer, followed by a transition to three-dimensional turbulence initially
produced close to the bottom, no-slip boundary. By switching off the forcing,
we study the decaying turbulent regime and we find that it cannot be described
by an exponential law. Our results show that boundary conditions play a
fundamental role in the nature of turbulence produced in thin layers and give
limits on the conditions to produce a two-dimensional phenomenology.
|