Artificial intelligence (AI) is regarded as one of the most disruptive
technologies of the century, with countless applications. What does it mean
for radiation protection? This article describes the fundamentals of machine
learning (ML) based methods and presents the first applications in different
fields of radiation protection. It is foreseen that the usage of AI will
increase in radiation protection. Consequently, this article explores some of
the benefits, as well as the potential barriers and questions, including
ethical ones, that may arise. The article proposes that collaboration between
radiation protection professionals and data science experts can accelerate and
guide the development of algorithms for effective scientific and technological
outcomes.
|
Using the VESTIGE survey, a deep narrow-band H$\alpha$ imaging survey of the
Virgo cluster carried out at the CFHT with MegaCam, we discovered a long diffuse
tail of ionised gas in the edge-on late-type galaxy NGC 4330. This peculiar
feature is the signature of an ongoing ram pressure stripping (RPS) event that
is removing the gas from the outer disc region. Tuned hydrodynamic simulations suggest that
the RPS event is occurring almost face-on, making NGC 4330 the ideal candidate
to study the effects of the perturbation in the direction perpendicular to the
disc plane. We present here two new independent sets of Fabry-Perot
observations (R$\simeq$10000) in order to understand the effects of the RPS
process on the ionised gas kinematics. Despite their limited sensitivity to the
diffuse gas emission, the data allowed us to measure the velocity and the
velocity dispersion fields over the galaxy disc and in several features at the
edges or outside the stellar disc formed after the RPS event. We have
constructed the position-velocity diagrams and the rotation curves of the
galaxy using three different techniques. The data show, consistent with the
hydrodynamic simulations, that the galaxy has an inner solid-body rotation up
to $\sim$2.4 kpc, with non-circular streaming motions outside the disc and in
several external features formed during the interaction of the galaxy with
the surrounding intracluster medium. The data also indicate a decrease of the
rotational velocity of the gas with increasing distance from the galaxy disc
along the tails, suggesting a gradual but not linear loss of angular momentum
in the stripped gas. Consistent with a RPS scenario, the $i$-band image shows a
boxy shape at the southwest edge of the disc, where the stellar orbits might
have been perturbed by the modification of the gravitational potential well of
the galaxy due to the displacement of the gas in the $z$-direction.
|
We show that the asymmetric tunneling spectrum observed in the cuprate
superconductors stems from the existence of a competing order. The competition
between this order and superconductivity can create a charge depletion
region near the surface. The asymmetric response of the depletion region as a
function of the external voltage causes the asymmetric tunneling spectrum. The
effect is very general in any system near the phase boundary of two
competing states favoring different carrier densities. The asymmetry which has
recently been observed in the point-contact spectroscopy of the heavy fermion
superconductor CeCoIn5 is another example of this effect.
|
In this paper, we propose a numerical scheme to solve the kinetic model for
chemotaxis phenomena. Formally, this scheme is shown to be uniformly stable
with respect to the small parameter, consistent with the fluid-diffusion limit
(Keller-Segel model). Our approach is based on the micro-macro decomposition
which leads to an equivalent formulation of the kinetic model that couples a
kinetic equation with macroscopic ones. This method is validated with various
test cases and compared to other standard methods.
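For readers less familiar with the technique, the following sketch records the
generic shape of a micro-macro decomposition; the equilibrium profile $F$ and
the scaling are illustrative assumptions, not the paper's specific closure.

```latex
% Schematic micro-macro decomposition of a kinetic density f(t,x,v):
% a macroscopic part rho*F plus a small fluctuation g of zero mean in v.
\[
  f(t,x,v) \;=\; \rho(t,x)\,F(v) \;+\; \varepsilon\, g(t,x,v),
  \qquad
  \rho = \int f \,\mathrm{d}v, \qquad \int g \,\mathrm{d}v = 0 .
\]
% Projecting the kinetic equation onto F and onto its complement yields a
% coupled system: a macroscopic (here Keller-Segel-type) equation for rho
% and an equation for g that can be discretized uniformly in epsilon.
```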
|
We introduce the notion of wide representation of an inverse semigroup and
prove that with a suitably defined topology there is a space of germs of such a
representation which has the structure of an \'etale groupoid. This gives an
elegant description of Paterson's universal groupoid and of the translation
groupoid of Skandalis, Tu, and Yu. In addition we characterize the inverse
semigroups that arise from groupoids, leading to a precise bijection between
the class of \'etale groupoids and the class of complete and infinitely
distributive inverse monoids equipped with suitable representations, and we
explain the sense in which quantales and localic groupoids carry a
generalization of this correspondence.
|
Uranium-based materials are valuable assets in the energy, medical, and
military industries. However, understanding their sensitivity to hydrogen
embrittlement is particularly challenging due to the toxicity of uranium and
the computationally expensive nature of the quantum-based methods generally
required to study such processes. In this regard, we have developed a Chebyshev
Interaction Model for Efficient Simulation (ChIMES) model that can be employed
to compute energies and forces of U and UH3 bulk structures with vacancies and
hydrogen interstitials with similar accuracy to Density Functional Theory (DFT)
while yielding linear scaling and orders of magnitude improvement in
computational efficiency. We show that the bulk structural parameters,
uranium and hydrogen vacancy formation energies, and diffusion barriers
predicted by the ChIMES potential are in strong agreement with the reference
DFT data. We then use ChIMES to conduct molecular dynamics simulations of the
temperature-dependent diffusion of a hydrogen interstitial and determine the
corresponding diffusion activation energy. Our model has particular
significance in studies of actinides and other high-Z materials, where there is
a strong need for computationally efficient methods to bridge length and time
scales between experiments and quantum theory.
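As a rough illustration of the functional form behind such models, the sketch
below evaluates a two-body Chebyshev pair energy; the Morse-like transform,
cutoff, and coefficients are placeholders for illustration, not the fitted
U/UH3 parameterization.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def pair_energy(r, coeffs, r_in=0.8, r_out=6.0, lam=1.25):
    """Two-body energy as a Chebyshev expansion in a transformed distance.
    The Morse-like transform maps r to s in [-1, 1]; a smooth cutoff sends
    the interaction to zero at r_out. All parameters are illustrative."""
    if r >= r_out:
        return 0.0
    x = np.exp(-r / lam)                                  # Morse-like variable
    x_in, x_out = np.exp(-r_in / lam), np.exp(-r_out / lam)
    s = 2.0 * (x - x_out) / (x_in - x_out) - 1.0          # rescale to [-1, 1]
    fcut = (1.0 - r / r_out) ** 3                         # smooth cutoff
    return fcut * C.chebval(s, coeffs)

coeffs = np.array([0.0, -0.5, 0.3, -0.1])   # toy expansion coefficients
print(pair_energy(2.5, coeffs))
```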
|
Rigorous coupled-wave analysis (RCWA) is a very effective tool for studying
the optical properties of multilayered vertically invariant periodic
structures. However, it fails to deal with arrays of small particles because of
high gradients in a local field. In this thesis, we implement discrete dipole
approximation (DDA) for the construction of scattering matrices of arrays of
resonant nanoparticles. This strongly speeds up the calculations and therefore
provides an opportunity for thorough consideration of various layered
structures with small periodic inclusions in terms of the RCWA. We study in
detail three main stages of the method: calculation of polarizability tensor of
a single nanoparticle, effective polarizability of this particle in a lattice
and corresponding scattering matrix of the layer for further integration in the
conventional RCWA approach. We demonstrate the performance of the proposed
method by considering plasmonic lattices embedded in a homogeneous ambient
medium and placed inside and on top of optical waveguides, and compare our
results with published experiments. Such phenomena as localized surface plasmon resonances
(LSPRs) and lattice plasmon resonances (LPRs) are observed as well as their
hybridization with photonic guided modes. High accuracy and fast convergence of
our approach are shown by a comparison with other computational approaches.
Typical limits of applicability of our approximate method are determined by an
exploration of the dependence of its error on the parameters of the structure.
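The second stage (the effective polarizability of a particle in a lattice)
reduces, in the standard coupled-dipole treatment, to dressing the
single-particle polarizability with a lattice dipole sum. A minimal sketch,
assuming the lattice sum S has already been computed for the given period,
wavelength, and in-plane wave vector:

```python
import numpy as np

def effective_polarizability(alpha, S):
    """Dress a single-particle polarizability tensor (3x3, possibly complex)
    with lattice coupling: alpha_eff = (alpha^{-1} - S)^{-1}, where S is the
    dipole lattice sum over all other particles (assumed precomputed)."""
    return np.linalg.inv(np.linalg.inv(alpha) - S)

# toy isotropic particle and a diagonal lattice sum, purely illustrative
alpha = 1e-2 * np.eye(3, dtype=complex)
S = np.diag([0.10 + 0.05j, 0.10 + 0.05j, 0.02 + 0.01j])
print(effective_polarizability(alpha, S))
```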
|
Artificial neural networks learn how to solve new problems through a
computationally intensive and time-consuming process. One way to reduce the
amount of time required is to inject preexisting knowledge into the network. To
make use of past knowledge, we can take advantage of techniques that transfer
the knowledge learned from one task, and reuse it on another (sometimes
unrelated) task. In this paper we propose a novel selective breeding technique
that extends the transfer learning with behavioural genetics approach proposed
by Kohli, Magoulas and Thomas (2013), and evaluate its performance on financial
data. Numerical evidence demonstrates the credibility of the new approach. We
provide insights on the operation of transfer learning and highlight the
benefits of using behavioural principles and selective breeding when tackling a
set of diverse financial application problems.
|
This paper has been withdrawn
|
We give a complete survey of a construction by Boone and Collins for
embedding any finitely presented group into one with $8$ generators and $26$
relations. We show that this embedding preserves the set of orders of torsion
elements, and in particular torsion-freeness. We combine this with the
independent results of Belegradek and Chiodo to prove that there is an
$8$-generator $26$-relator universal finitely presented torsion-free group (one
into which all finitely presented torsion-free groups embed).
|
We first consider the rational Cherednik algebra corresponding to the action
of a finite group on a complex variety, as defined by Etingof. We define a
category of representations of this algebra which is analogous to "category O"
for the rational Cherednik algebra of a vector space. We generalise to this
setting Bezrukavnikov and Etingof's results about the possible support sets of
such representations. Then we focus on the case of $S_n$ acting on $\mathbb{C}^n$,
determining which irreducible modules in this category have which support sets.
We also show that the category of representations with a given support, modulo
those with smaller support, is equivalent to the category of finite dimensional
representations of a certain Hecke algebra.
|
A twisting process for quantum linear spaces is defined. It consists of a
particular kind of globally defined deformation of finitely generated
algebras. Given a quantum space (A_1,A), a multiplicative cosimplicial
quasicomplex C[A_1] in the category Grp is associated to A_1, in such a way
that for every n a subclass of linear automorphisms of A^{\otimes n} is
obtained from the groups C^n[A_1]. Among the elements of this subclass, the
counital 2-cocycles are those which define the twist transformations. In these
terms, the twisted internal coHom objects, constructed in a previous paper (cf.
math.QA/0112233), can be described as twisting of the proper coHom objects,
enabling us in turn to generalize the mentioned construction. The
quasicomplexes C[V], V a vector space, are studied in detail, showing for
instance that, when V is a coalgebra, the quasicomplexes related to Drinfeld
twisting, corresponding to bialgebras generated by V, are subobjects of C[V].
|
Current self-supervised approaches for skeleton action representation
learning often focus on constrained scenarios, where videos and skeleton data
are recorded in laboratory settings. When dealing with estimated skeleton data
in real-world videos, such methods perform poorly due to the large variations
across subjects and camera viewpoints. To address this issue, we introduce ViA,
a novel View-Invariant Autoencoder for self-supervised skeleton action
representation learning. ViA leverages motion retargeting between different
human performers as a pretext task, in order to disentangle the latent
action-specific `Motion' features on top of the visual representation of a 2D
or 3D skeleton sequence. Such `Motion' features are invariant to skeleton
geometry and camera view and allow ViA to facilitate both cross-subject and
cross-view action classification tasks. We conduct a study focusing on
transfer-learning for skeleton-based action recognition with self-supervised
pre-training on real-world data (e.g., Posetics). Our results showcase that
skeleton representations learned from ViA are generic enough to improve upon
state-of-the-art action classification accuracy, not only on 3D laboratory
datasets such as NTU-RGB+D 60 and NTU-RGB+D 120, but also on real-world
datasets where only 2D data are accurately estimated, e.g., Toyota Smarthome,
UAV-Human and Penn Action.
|
We empirically study the effect of noise scheduling strategies for denoising
diffusion generative models. There are three findings: (1) the noise scheduling
is crucial for the performance, and the optimal one depends on the task (e.g.,
image sizes), (2) when increasing the image size, the optimal noise scheduling
shifts towards a noisier one (due to increased redundancy in pixels), and (3)
simply scaling the input data by a factor of $b$ while keeping the noise
schedule function fixed (equivalent to shifting the logSNR by $\log b$) is a
good strategy across image sizes. This simple recipe, when combined with
recently proposed Recurrent Interface Network (RIN), yields state-of-the-art
pixel-based diffusion models for high-resolution images on ImageNet, enabling
single-stage, end-to-end generation of diverse and high-fidelity images at
1024$\times$1024 resolution (without upsampling/cascades).
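One way to read the claimed equivalence, assuming a forward process
$x_t = \alpha_t x_0 + \sigma_t \epsilon$ and with logSNR written for the
amplitude ratio $\alpha_t/\sigma_t$ (conventions differ; for the squared
ratio the shift reads $2\log b$):

```latex
\begin{align*}
  x_t &= \alpha_t\,(b\,x_0) + \sigma_t\,\epsilon
       \;=\; (b\,\alpha_t)\,x_0 + \sigma_t\,\epsilon , \\
  \log \mathrm{SNR}_b(t) &= \log\frac{b\,\alpha_t}{\sigma_t}
       \;=\; \log\frac{\alpha_t}{\sigma_t} + \log b ,
\end{align*}
% so rescaling the data sweeps the same schedule through shifted logSNRs.
```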
|
Hadronic radiation provides a tool to distinguish different topologies of
colour flow in hard scattering processes. We study the structure of hadronic
flow corresponding to Higgs production and decay in high-energy hadron-hadron
collisions. In particular, the signal gg -> H -> b anti-b and background gg ->
b anti-b processes are shown to have very different radiation patterns, and
this may provide a useful additional method for distinguishing Higgs signal
events from the QCD background.
|
The Dual Characteristic-Galerkin method (DCGM) is conservative, precise and
experimentally positive. We present the method and prove convergence and
$L^2$-stability in the case of Neumann boundary conditions. In a 2D numerical
finite element setting (FEM), the method is compared to Primal
Characteristic-Galerkin (PCGM), Streamline upwinding (SUPG), the Dual
Discontinuous Galerkin method (DDG) and centered FEM without upwinding. DCGM is
difficult to implement numerically but, in the numerical context of this note,
it is far superior to all others.
|
The onset of the COVID-19 pandemic changed the landscape of education and led
to increased usage of remote proctoring tools that are designed to monitor
students when they take assessments outside the classroom. While prior work has
explored students' privacy and security concerns regarding online proctoring
tools, the perspective of educators is underexplored. Notably, educators are
the decision makers in the classrooms, choosing which remote proctoring
services to use and the level of observation they deem appropriate. To explore how
educators balance the security and privacy of their students with the
requirements of remote exams, we sent survey requests to over 3,400 instructors
at a large private university that taught online classes during the 2020/21
academic year. We had n=125 responses: 21% of the educators surveyed used
online exam proctoring services during the remote learning period, and of
those, 35% plan to continue using the tools even when there is a full return to
in-person learning. Educators who use exam proctoring services are often
comfortable with their monitoring capabilities. However, educators are
concerned about students sharing certain types of information with exam
proctoring companies, particularly when proctoring services collect
identifiable information to validate students' identities. Our results suggest
that many educators developed alternative assessments that did not require
online proctoring and that those who did use online proctoring services often
considered the tradeoffs between the potential risks to student privacy and the
utility or necessity of exam proctoring services.
|
We study a square-lattice three-state Potts antiferromagnet with a staggered
polarization field at finite temperature. Numerically treating the transfer
matrices, we determine two phase boundaries separating the model-parameter
space into three parts. We confirm that one of them belongs to the
ferromagnetic three-state Potts criticality, which is in accord with a recent
prediction, and another to the Ising type; both of these correspond to the
massless renormalization-group flows stemming from the Gaussian fixed points.
We also discuss a field theory to describe the latter Ising transition.
|
The Mach-Zehnder interferometric setup quantitatively characterizing the
wave-particle duality implements in fact a joint measurement of two unsharp
observables. We present a necessary and sufficient condition for such a pair of
unsharp observables to be jointly measurable. The condition is shown to be
equivalent to a duality inequality, which for the optimal strategy of
extracting the which-path information is more stringent than the
Jaeger-Shimony-Vaidman-Englert inequality.
|
The Navier-Stokes-Voigt (NSV) model of viscoelastic incompressible fluid has
been recently proposed as a regularization of the 3D Navier-Stokes equations
for the purpose of direct numerical simulations. In this work we investigate
its statistical properties by employing phenomenological heuristic arguments,
in combination with Sabra shell model simulations of the analogue of the NSV
model. For large values of the regularizing parameter, compared to the
Kolmogorov length scale, simulations exhibit a multiscaling inertial range and
a dissipation range displaying low intermittency. These facts provide
evidence that the NSV regularization may reduce the stiffness of direct
numerical simulations of turbulent flows, with a small impact on the energy
containing scales.
|
The entrainment of underlying erodible material by geophysical flows can
significantly boost the flowing mass and increase the final deposition extent.
The particle sizes of both the flowing material and the erodible substrate
influence the entrainment mechanism and determine the overall flow dynamics.
This paper examines these mechanisms experimentally by considering the flow of
particles over an erodible bed using different particle size combinations for
the incoming flow and the base layer in a laboratory-scale inclined flume.
Dynamic X-ray radiography was used to capture the dynamics of the flow-erodible
bed interface. The experiments found that the maximum downslope velocity
depends on the ratio between the size of the flowing particles and the size of
the bed particles, with higher ratios leading to faster velocities. Two
techniques were then applied to estimate the evolving erosion depth: an
established critical velocity method, and a novel particle-size-based method.
Erosion rates were estimated from both of these methods. Interestingly, the
two rates lead to different and contradictory conclusions. In the
critical-velocity-based rate estimation, the normalized erosion rate increases
with the flow to bed grain size ratio, whereas the erosion rates estimated from
the particle-size-based approach find the opposite trend. We rationalise this
discrepancy by considering the physical interpretation of both measurement
methods, and provide insight into how future modelling can be performed to
accommodate both of these complementary measures. This paper highlights how the
erosion rate is entirely dependent on the method of estimating the erosion
depth and the choice of measurement technique.
|
Detection of biomolecules is important in proteomics and clinical diagnosis
and treatment of diseases. Here, we apply functionalized, macromolecular,
single-walled carbon nanotubes (SWNTs) as multi-color Raman labels to protein
arrays for highly sensitive, multiplexed protein detection. Raman detection
utilizes the sharp peaks of SWNTs with minimal background interference,
affording a high signal to noise ratio needed for ultra-sensitive detection.
Surface-enhanced Raman scattering (SERS), combined with the strong resonance
Raman intensity of SWNTs, affords detection sensitivity down to 1 fM, a
three-order-of-magnitude improvement over most reported fluorescence-based
protein detections. We show that human autoantibodies to Proteinase 3 (aPR3),
a biomarker for the autoimmune disease Wegener's granulomatosis, are detected
by Raman in human serum up to a 10^7 dilution. Moreover, SWNT Raman tags are stable against
photobleaching and quenching, and by conjugating different antibodies to pure
12C and 13C SWNT isotopes, we demonstrate two-color SWNT Raman-based protein
detection in a multiplexed fashion.
|
Humans have a remarkable ability to use physical commonsense and predict the
effect of collisions. But do they understand the underlying factors? Can they
predict if the underlying factors have changed? Interestingly, in most cases
humans can predict the effects of similar collisions with different conditions
such as changes in mass, friction, etc. It is postulated that this is primarily
because we learn to model physics with meaningful latent variables. This does
not imply we can estimate the precise values of these meaningful variables
(estimate exact values of mass or friction). Inspired by this observation, we
propose an interpretable intuitive physics model where specific dimensions in
the bottleneck layers correspond to different physical properties. In order to
demonstrate that our system models these underlying physical properties, we
train our model on collisions of different shapes (cube, cone, cylinder,
spheres etc.) and test on collisions of unseen combinations of shapes.
Furthermore, we demonstrate our model generalizes well even when similar scenes
are simulated with different underlying properties.
|
Current kidney exchange pools are of moderate size and thin, as they consist
of many highly sensitized patients. Creating a thicker pool can be done by
waiting for many pairs to arrive. We analyze a simple class of matching
algorithms that search periodically for allocations. We find that if only 2-way
cycles are conducted, in order to gain a significant amount of matches over the
online scenario (matching each time a new incompatible pair joins the pool) the
waiting period should be "very long". If 3-way cycles are also allowed we find
regimes in which waiting for a short period also increases the number of
matches considerably. Finally, a significant increase of matches can be
obtained by using even one non-simultaneous chain while still matching in an
online fashion. Our theoretical findings and data-driven computational
experiments lead to policy recommendations.
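To make the 2-way-cycle case concrete: a 2-cycle is an edge between two
incompatible pairs whose donors can each give to the other's patient, so each
periodic matching step is a maximum cardinality matching. A minimal sketch,
with the compatibility oracle left as an assumed placeholder:

```python
import networkx as nx

def match_two_way_cycles(pairs, mutually_compatible):
    """One periodic matching step restricted to 2-way cycles. `pairs` is a
    list of patient-donor pair identifiers accumulated so far, and
    `mutually_compatible(a, b)` is an assumed oracle that is True when the
    donor of each pair can donate to the patient of the other."""
    G = nx.Graph()
    G.add_nodes_from(pairs)
    for i, a in enumerate(pairs):
        for b in pairs[i + 1:]:
            if mutually_compatible(a, b):
                G.add_edge(a, b)
    # maximum cardinality matching = largest set of disjoint 2-way cycles
    return nx.max_weight_matching(G, maxcardinality=True)
```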
|
In recent years, Discriminative Correlation Filter (DCF) based methods have
significantly advanced the state-of-the-art in tracking. However, in the
pursuit of ever increasing tracking performance, their characteristic speed and
real-time capability have gradually faded. Further, the increasingly complex
models, with a massive number of trainable parameters, have introduced the risk
of severe over-fitting. In this work, we tackle the key causes behind the
problems of computational complexity and over-fitting, with the aim of
simultaneously improving both speed and performance.
We revisit the core DCF formulation and introduce: (i) a factorized
convolution operator, which drastically reduces the number of parameters in the
model; (ii) a compact generative model of the training sample distribution,
that significantly reduces memory and time complexity, while providing better
diversity of samples; (iii) a conservative model update strategy with improved
robustness and reduced complexity. We perform comprehensive experiments on four
benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive
deep features, our tracker provides a 20-fold speedup and achieves a 13.0%
relative gain in Expected Average Overlap compared to the top ranked method in
the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features,
operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015.
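A schematic sketch of the factorized convolution in (i): a learned projection
compresses the D feature channels to C << D before correlation, so only C
filters (plus the projection) need to be trained; shapes and all training
details are simplified relative to the actual tracker.

```python
import numpy as np

def factorized_scores(X, P, F):
    """Detection scores with a factorized correlation filter.
    X: (H, W, D) feature map, P: (D, C) projection with C << D,
    F: (H, W, C) compact filters. Correlation is done per channel in the
    Fourier domain and summed over channels."""
    Z = X @ P                                   # project D -> C channels
    Zf = np.fft.fft2(Z, axes=(0, 1))
    Ff = np.fft.fft2(F, axes=(0, 1))
    return np.real(np.fft.ifft2((np.conj(Ff) * Zf).sum(axis=-1)))
```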
|
We study the fluctuations in luminosity distances due to gravitational
lensing by large scale (> 35 Mpc) structures, specifically voids and sheets. We
use a simplified "Swiss cheese" model consisting of a \Lambda-CDM
Friedmann-Robertson-Walker background in which a number of randomly distributed
non-overlapping spherical regions are replaced by mass compensating comoving
voids, each with a uniform density interior and a thin shell of matter on the
surface. We compute the distribution of magnitude shifts using a variant of the
method of Holz & Wald (1998), which includes the effect of lensing shear. The
standard deviation of this distribution is ~ 0.027 magnitudes and the mean is ~
0.003 magnitudes for voids of radius 35 Mpc, sources at redshift z_s=1.0, with
the voids chosen so that 90% of the mass is on the shell today. The standard
deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source
redshift, and fraction of mass on the shells today. If the shell walls are
given a finite thickness of ~ 1 Mpc, the standard deviation is reduced to ~
0.013 magnitudes. This standard deviation due to voids is a factor ~ 3 smaller
than that due to galaxy scale structures. We summarize our results in terms of
a fitting formula that is accurate to ~ 20%, and also build a simplified
analytic model that reproduces our results to within ~ 30%. Our model also
allows us to explore the domain of validity of weak lensing theory for voids.
We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens
coupling are of order ~ 4%, and corrections due to shear are ~ 3%. Finally, we
estimate the bias due to source-lens clustering in our model to be negligible.
|
We address the problem of measuring the relative angle between two "quantum
axes" made out of N1 and N2 spins. Closed forms of our fidelity-like figure of
merit are obtained for an arbitrary number of parallel spins. The asymptotic
regimes of large N1 and/or N2 are discussed in detail. The extension of the
concept "quantum axis" to more general situations is addressed. We give optimal
strategies when the first quantum axis is made out of parallel spins whereas
the second is a general state made out of two spins.
|
High quality electrical contact to semiconducting transition metal
dichalcogenides (TMDCs) such as $MoS_2$ is key to unlocking their unique
electronic and optoelectronic properties for fundamental research and device
applications. Despite extensive experimental and theoretical efforts, reliable
ohmic contact to doped TMDCs remains elusive and would benefit from a better
understanding of the underlying physics of the metal-TMDC interface. Here we
present measurements of the atomic-scale energy band diagram of junctions
between various metals and heavily doped monolayer $MoS_2$ using ultra-high
vacuum scanning tunneling microscopy (UHV-STM). Our measurements reveal that
the electronic properties of these junctions are dominated by 2D metal induced
gap states (MIGS). These MIGS are characterized by a spatially growing measured
gap in the local density of states (L-DOS) of the $MoS_2$ within 2 nm of the
metal-semiconductor interface. Their decay lengths extend from a minimum of
~0.55 nm near mid gap to as long as 2 nm near the band edges and are nearly
identical for Au, Pd and graphite contacts, indicating that this is a universal
property of the monolayer semiconductor. Our findings indicate that even in
heavily doped semiconductors, the presence of MIGS sets the ultimate limit for
electrical contact.
|
We prove that the free splitting complex of a finite rank free group, also
known as Hatcher's sphere complex, is hyperbolic.
|
Merging photonic structures and optoelectronic sensors into a single chip may
yield a sensor-on-chip spectroscopic device that can measure the spectrum of
matter. In this work, we propose and realize an on-chip concurrent
multi-wavelength infrared (IR) sensor. The fabricated quad-wavelength IR
sensors exhibit four different narrowband spectral responses at normal
incidence following the pre-designed resonances in the mid-wavelength infrared
region that corresponds to the atmospheric window. The device can be applied
for practical spectroscopic applications such as non-dispersive IR sensors, IR
chemical imaging devices, pyrometers, and spectroscopic thermography imaging.
|
In autonomous driving, Vehicle-Infrastructure Cooperative 3D Object Detection
(VIC3D) makes use of multi-view cameras from both vehicles and traffic
infrastructure, providing a global vantage point with rich semantic context of
road conditions beyond a single vehicle viewpoint. Two major challenges prevail
in VIC3D: 1) inherent calibration noise when fusing multi-view images, caused
by time asynchrony across cameras; 2) information loss when projecting 2D
features into 3D space. To address these issues, we propose a novel 3D object
detection framework, Vehicles-Infrastructure Multi-view Intermediate fusion
(VIMI). First, to fully exploit the holistic perspectives from both vehicles
and infrastructure, we propose a Multi-scale Cross Attention (MCA) module that
fuses infrastructure and vehicle features on selective multi-scales to correct
the calibration noise introduced by camera asynchrony. Then, we design a
Camera-aware Channel Masking (CCM) module that uses camera parameters as priors
to augment the fused features. We further introduce a Feature Compression (FC)
module with channel and spatial compression blocks to reduce the size of
transmitted features for enhanced efficiency. Experiments show that VIMI
achieves 15.61% overall AP_3D and 21.44% AP_BEV on the new VIC3D dataset,
DAIR-V2X-C, significantly outperforming state-of-the-art early fusion and late
fusion methods with comparable transmission cost.
|
In this paper, we give a generalization of Fenchel's theorem for closed
curves as frontals in Euclidean space $\mathbb{R}^n$. We prove that, for a
non-co-orientable closed frontal in $\mathbb{R}^n$, its total absolute
curvature is greater than or equal to $\pi$. It is equal to $\pi$ if and only
if the curve is a planar locally $L$-convex closed frontal whose rotation index
is $1/2$ or $-1/2$. Furthermore, if the equality holds and if every singular
point is a cusp, then the number $N$ of cusps is an odd integer greater than or
equal to $3$, and $N=3$ holds if and only if the curve is simple.
|
The magnetic excitations in multiferroic TbMnO3 have been studied by
inelastic neutron scattering in the spiral and sinusoidally ordered phases. At
the incommensurate magnetic zone center of the spiral phase, we find three
low-lying magnons whose character has been fully determined using
neutron-polarization analysis. The excitation at the lowest energy is the
sliding mode of the spiral, and two modes at 1.1 and 2.5 meV correspond to
rotations of the spiral rotation plane. These latter modes are expected to
couple to the electric polarization. The 2.5 meV mode is in perfect agreement
with recent infrared-spectroscopy data, giving strong support to its
interpretation as a hybridized phonon-magnon excitation.
|
This study presents elemental abundances of the early A-type supergiant HD
80057 and the late A-type supergiant HD 80404. High resolution and high
signal-to-noise ratio spectra published by the UVES Paranal Observatory Project
(Bagnulo et al., 2003) were analysed to compute their elemental abundances
using ATLAS9 (Kurucz, 1993, 2005; Sbordone et al., 2004). In our analysis we
assumed local thermodynamic equilibrium. The atmospheric parameters of HD 80057
used in this study are from Firnstein & Przybilla (2012), and those of HD 80404
are derived from the spectral energy distribution, ionization equilibria of Cr I/II
and Fe I/II, and the fits to the wings of Balmer lines and Paschen lines as
Teff = 7700 +/- 150 K and log g=1.60 +/- 0.15 (in cgs). The microturbulent
velocities of HD 80057 and HD 80404 have been determined as 4.3 +/- 0.1 and 2.2
+/- 0.7 km s^-1 . The rotational velocities are 15 +/-1 and 7 +/- 2 km s^-1 and
their macroturbulence velocities are 24 +/-2 and 2+/-1 km s^-1 . We have given
the abundances of 27 ions of 20 elements for HD 80057 and 39 ions of 25
elements for HD 80404. The abundances are close to solar values, except for some
elements (Na, Sc, Ti, V, Ba, and Sr). We have found the metallicities [M/H] for
HD 80057 and HD 80404 as -0.15 +/- 0.24 and -0.02 +/- 0.20 dex, respectively.
The evolutionary status of these stars is discussed and their
nitrogen-to-carbon (N/C) and nitrogen-to-oxygen (N/O) ratios show that they are
in their blue supergiant phase before the red supergiant region.
|
Relativistic quantum metrology studies the maximal achievable precision for
estimating a physical quantity when both quantum and relativistic effects are
taken into account. We study the relativistic quantum metrology of temperature
in (3+1)-dimensional de Sitter and anti-de Sitter space. Using Unruh-DeWitt
detectors coupled to a massless scalar field as probes and treating them as
open quantum systems, we compute the Fisher information for estimating
temperature. We investigate the effect of acceleration in dS, and the effect of
boundary conditions in AdS. We find that the phenomenology of the Fisher
information in the two spacetimes can be unified, and analyze its dependence on
temperature, detector energy gap, curvature, interaction time, and detector
initial state. We then identify estimation strategies that maximize the Fisher
information and therefore the precision of estimation.
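For reference, the (classical) Fisher information maximized here, together
with the Cramér-Rao bound that links it to precision, reads, for detector
measurement outcomes distributed as $p(x\mid T)$:

```latex
\[
  F(T) \;=\; \sum_{x} p(x \mid T)\,
  \bigl[\partial_T \ln p(x \mid T)\bigr]^{2},
  \qquad
  \operatorname{Var}\bigl(\hat{T}\bigr) \;\ge\; \frac{1}{n\,F(T)} ,
\]
% with n independent repetitions; maximizing F(T) over interaction time,
% energy gap, and detector initial state maximizes the precision.
```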
|
We generalize Renault's notion of measurewise amenability to actions of
second countable, Hausdorff, \'etale groupoids on separable $C^*$-algebras and
show that measurewise amenability characterizes nuclearity of the crossed
product whenever the $C^*$-algebra acted on is nuclear. In the more general
context of Fell bundles over second countable, Hausdorff, \'etale groupoids, we
introduce a version of Exel's approximation property. We prove that the
approximation property implies nuclearity of the cross-sectional algebra
whenever the unit bundle is nuclear. For Fell bundles associated to groupoid
actions, we show that the approximation property implies measurewise
amenability of the underlying action.
|
This dissertation explores applications of discrete geometry in mathematical
neuroscience. We begin with convex neural codes, which model the activity of
hippocampal place cells and other neurons with convex receptive fields. In
Chapter 4, we introduce order-forcing, a tool for constraining convex
realizations of codes, and use it to construct new examples of non-convex codes
with no local obstructions. In Chapter 5, we relate oriented matroids to convex
neural codes, showing that a code has a realization with convex polytopes iff
it is the image of a representable oriented matroid under a neural code
morphism. We also show that determining whether a code is convex is at least as
difficult as determining whether an oriented matroid is representable, implying
that the problem of determining whether a code is convex is NP-hard. Next, we
turn to the problem of the underlying rank of a matrix. This problem is
motivated by the question of determining the dimensionality of (neural) data
which has been corrupted by an unknown monotone transformation. In Chapter 6,
we introduce two tools for computing underlying rank, the minimal nodes and the
Radon rank. We apply these to analyze calcium imaging data from a larval
zebrafish. In Chapter 7, we explore the underlying rank in more detail,
establish connections to oriented matroid theory, and show that computing
underlying rank is also NP-hard. Finally, we study the dynamics of
threshold-linear networks (TLNs), a simple model of the activity of neural
circuits. In Chapter 9, we describe the nullcline arrangement of a threshold
linear network, and show that a subset of its chambers are an attracting set.
In Chapter 10, we focus on combinatorial threshold linear networks (CTLNs),
which are TLNs defined from a directed graph. We prove that if the graph of a
CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a
fixed point.
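A minimal sketch of the TLN dynamics studied in Chapters 9-10, using one
common convention for building CTLN weights from a directed graph; parameter
values here are illustrative only.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """One common CTLN convention: W[i, j] = -1 + eps if j -> i is an edge
    of the graph, -1 - delta if not, and 0 on the diagonal.
    adj[i, j] = 1 encodes the edge j -> i."""
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_tln(W, b, x0, dt=0.01, steps=20000):
    """Forward-Euler integration of the TLN ODE dx/dt = -x + [W x + b]_+ ."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

# a directed acyclic graph (edges 0->1, 0->2, 1->2): by the result above,
# every trajectory of the corresponding CTLN approaches a fixed point
adj = np.zeros((3, 3)); adj[1, 0] = adj[2, 0] = adj[2, 1] = 1
x_final = simulate_tln(ctln_weights(adj), b=np.ones(3), x0=np.random.rand(3))
print(x_final)
```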
|
Holographic MIMO (HMIMO) has recently been recognized as a promising enabler
for future 6G systems through the use of an ultra-massive number of antennas in
a compact space to exploit the propagation characteristics of the
electromagnetic (EM) channel. Nevertheless, the promised gain of HMIMO could
not be fully unleashed without an efficient means to estimate the
high-dimensional channel. Bayes-optimal estimators typically necessitate either
a large volume of supervised training samples or a priori knowledge of the true
channel distribution, which could hardly be available in practice due to the
enormous system scale and the complicated EM environments. It is thus important
to design a Bayes-optimal estimator for the HMIMO channels in arbitrary and
unknown EM environments, free of any supervision or priors. This work proposes
a self-supervised minimum mean-square-error (MMSE) channel estimation algorithm
based on powerful machine learning tools, i.e., score matching and principal
component analysis. The training stage requires only the pilot signals, without
knowing the spatial correlation, the ground-truth channels, or the received
signal-to-noise ratio. Simulation results show that, even though it is totally
self-supervised, the proposed algorithm can still approach the performance of
the oracle MMSE method with an extremely low complexity, making it a
competitive candidate in practice.
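The identity underlying score-based MMSE denoising (Tweedie's formula)
indicates why a score learned from pilots alone can suffice; this records the
general principle, not the paper's full pilot/PCA pipeline:

```latex
% For observations y = h + n with n ~ N(0, sigma^2 I), the MMSE estimator
% can be written entirely in terms of the score of the noisy marginal p(y):
\[
  \hat{h}_{\mathrm{MMSE}}(y) \;=\; \mathbb{E}[\,h \mid y\,]
  \;=\; y + \sigma^{2}\,\nabla_{y} \log p(y) ,
\]
% so a score network fitted to pilot observations alone (self-supervised,
% no ground-truth channels) is enough to approach the oracle MMSE estimate.
```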
|
Without any explicit cross-lingual training data, multilingual language
models can achieve cross-lingual transfer. One common way to improve this
transfer is to perform realignment steps before fine-tuning, i.e., to train the
model to build similar representations for pairs of words from translated
sentences. But such realignment methods were found to not always improve
results across languages and tasks, which raises the question of whether
aligned representations are truly beneficial for cross-lingual transfer. We
provide evidence that alignment is actually significantly correlated with
cross-lingual transfer across languages, models and random seeds. We show that
fine-tuning can have a significant impact on alignment, depending mainly on the
downstream task and the model. Finally, we show that realignment can, in some
instances, improve cross-lingual transfer, and we identify conditions in which
realignment methods provide significant improvements. Namely, we find that
realignment works better on tasks for which alignment is correlated with
cross-lingual transfer when generalizing to a distant language and with smaller
models, as well as when using a bilingual dictionary rather than FastAlign to
extract realignment pairs. For example, for POS-tagging, between English and
Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even
outperforming XLM-R Large by 1.7. We thus advocate for further research on
realignment methods for smaller multilingual models as an alternative to
scaling.
|
Despite increasing representation in graduate training programs, a
disproportionate number of women leave academic research before obtaining an
independent position. To understand factors underlying this trend, we analyzed
a multidisciplinary database of Ph.D. and postdoctoral mentoring relationships
covering the years 2000-2020, focusing on data from the life sciences. Student
and mentor gender are both associated with differences in rates of students'
continuation to independent mentor positions of their own. Although trainees of
women mentors are less likely to take on independent positions than trainees of
men mentors, this effect is reduced substantially after controlling for several
measurements of mentor status. Thus the effect of mentor gender can be
explained at least partially by gender disparities in social and financial
resources available to mentors. Because trainees and mentors tend to be of the
same gender, this association between mentor gender and academic continuation
disproportionately impacts women trainees. On average, gender homophily in
graduate training is unrelated to mentor status. A notable exception to this
trend is the special case of scientists having been granted an outstanding
distinction, evidenced by membership in the National Academy of Sciences, being
a grantee of the Howard Hughes Medical Institute, or having been awarded the
Nobel Prize. This group of mentors trains men graduate students at higher rates
than their most successful colleagues. These results suggest that, in addition
to other factors that limit career choices for women trainees, gender
inequities in mentors' access to resources and prestige contribute to women's
attrition from independent research positions.
|
We study additional non-isospectral symmetries of constrained (reduced) N=2
supersymmetric KP hierarchies of integrable ``soliton''-like evolution
equations. These symmetries are shown to form an infinite-dimensional
non-Abelian superloop superalgebra. Furthermore we study the general
Darboux-Backlund (DB) transformations (including adjoint-DB and binary DB) of
N=2 super-KP hierarchies preserving (most of) the additional symmetries. Also
we derive the explicit form of the general DB (N=2 ``super-soliton''-like)
solutions in the form of generalized Wronskian-like super-determinants.
|
Time series domain adaptation stands as a pivotal and intricate challenge
with diverse applications, including but not limited to human activity
recognition, sleep stage classification, and machine fault diagnosis. Despite
the numerous domain adaptation techniques proposed to tackle this complex
problem, they primarily focus on domain adaptation from a single source domain.
Yet, it is more crucial to investigate domain adaptation from multiple domains
due to the potential for greater improvements. To address this, three important
challenges need to be overcome: 1). The lack of exploration to utilize
domain-specific information for domain adaptation, 2). The difficulty to learn
domain-specific information that changes over time, and 3). The difficulty to
evaluate learned domain-specific information. In order to tackle these
challenges simultaneously, in this paper, we introduce PrOmpt-based domaiN
Discrimination (POND), the first framework to utilize prompts for time series
domain adaptation. Specifically, to address Challenge 1, we extend the idea of
prompt tuning to time series analysis and learn prompts to capture common and
domain-specific information from all source domains. To handle Challenge 2, we
introduce a conditional module for each source domain to generate prompts from
time series input data. For Challenge 3, we propose two criteria to select good
prompts, which are used to choose the most suitable source domain for domain
adaptation. The efficacy and robustness of our proposed POND model are
extensively validated through experiments across 50 scenarios encompassing four
datasets. Experimental results demonstrate that our proposed POND model
outperforms all state-of-the-art comparison methods by up to $66\%$ on the
F1-score.
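As a schematic of the general prompt-tuning idea extended to time series
here, the sketch below prepends learnable prompt tokens to an embedded series
before a (possibly frozen) encoder; POND's per-domain conditional prompt
generation and prompt-selection criteria are not reproduced.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Schematic prompt tuning for time series: learnable prompt tokens are
    prepended to the embedded input sequence before the encoder. Purely
    illustrative; names and shapes are assumptions, not POND's API."""
    def __init__(self, encoder, d_model, n_prompts=8):
        super().__init__()
        self.encoder = encoder
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, d_model))

    def forward(self, x_emb):                     # x_emb: (B, T, d_model)
        B = x_emb.shape[0]
        p = self.prompts.unsqueeze(0).expand(B, -1, -1)
        return self.encoder(torch.cat([p, x_emb], dim=1))
```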
|
Assume that $\mathcal{I}$ is an ideal on $\mathbb{N}$, and $\sum_n x_n$ is a
divergent series in a Banach space $X$. We study the Baire category and the
measure of the set $A(\mathcal{I}):=\left\{t \in \{0,1\}^{\mathbb{N}} \colon
\sum_n t(n)x_n \textrm{ is } \mathcal{I}\textrm{-convergent}\right\}$. In the
category case, we assume that $\mathcal{I}$ has the Baire property and $\sum_n
x_n$ is not unconditionally convergent, and we deduce that $A(\mathcal{I})$ is
meager. We also study the smallness of $A(\mathcal{I})$ in the measure case
when the Haar probability measure $\lambda$ on $\{0,1\}^{\mathbb{N}}$ is
considered. If $\mathcal{I}$ is analytic or coanalytic, and $\sum_n x_n$ is
$\mathcal{I}$-divergent, then $\lambda(A(\mathcal{I}))=0$ which extends the
theorem of Dindo\v{s}, \v{S}al\'at and Toma. Generalizing one of their
examples, we show that, for every ideal $\mathcal{I}$ on $\mathbb{N}$, with the
property of long intervals, there is a divergent series of reals such that
$\lambda(A(Fin))=0$ and $\lambda(A(\mathcal{I}))=1$.
|
In fields that are mainly nonexperimental, such as economics and finance, one
inevitably computes test statistics and confidence regions that are not
probabilistically independent of previously examined data. The Bayesian and
Neyman-Pearson inference theories are known to be inadequate for such a
practice. We show that these inadequacies also hold m.a.e. (modulo
approximation error). We develop a general econometric theory, called the
neoclassical inference theory, that is immune to this inadequacy m.a.e. The
neoclassical inference theory appears to nest model calibration, and most
econometric practices, whether they are labelled Bayesian or \`a la
Neyman-Pearson. We derive a general, but simple adjustment to make standard
errors account for the approximation error.
|
We study the prospects for constraining the ionized fraction of the
intergalactic medium (IGM) at $z>6$ with the next generation of large
Ly$\alpha$ emitter surveys. We make predictions for the upcoming Subaru Hyper
Suprime-Cam (HSC) Ly$\alpha$ survey and a hypothetical spectroscopic survey
performed with the James Webb Space Telescope (JWST). Considering various
scenarios where the observed evolution of the Ly$\alpha$ luminosity function of
Ly$\alpha$ emitters at $z>6$ is explained partly by an increasingly neutral IGM
and partly by intrinsic galaxy evolution, we show how clustering measurements
will be able to distinguish between these scenarios. We find that the HSC
survey should be able to detect the additional clustering induced by a neutral
IGM if the global IGM neutral fraction is greater than $\sim$20 per cent at
$z=6.5$. If measurements of the Ly$\alpha$ equivalent widths (EWs) are also
available, neutral fractions as small as 10 per cent may be detectable by
looking for correlation between the EW and the local number density of objects.
In this case, if it should turn out that the IGM is significantly neutral at
$z=6.5$ and the intrinsic EW distribution is relatively narrow, the observed
EWs can also be used to construct a map of the locations and approximate sizes
of the largest ionized regions. For the JWST survey, the results appear a bit
less optimistic. Since such surveys probe a large range of redshifts, the
effects of the IGM will be mixed up with any intrinsic galaxy evolution that is
present, making it difficult to disentangle the effects. However, we show that
a survey with the JWST will have a possibility of observing a large group of
galaxies at $z\sim7$, which would be a strong indication of a partially neutral
IGM.
|
Integrated sensing and communication (ISAC) technology has been considered as
one of the key candidate technologies in the next-generation wireless
communication systems. However, when radar and communication equipment coexist
in the same system, i.e. radar-communication coexistence (RCC), the
interference from communication systems to radar can be large and cannot be
ignored. Recently, reconfigurable intelligent surface (RIS) has been introduced
into RCC systems to reduce the interference. However, the "multiplicative
fading" effect introduced by passive RIS limits its performance. To tackle this
issue, we consider a double active RIS-assisted RCC system, which focuses on
the design of the radar's beamforming vector and the active RISs' reflecting
coefficient matrices, to maximize the achievable data rate of the communication
system. The considered system needs to meet the radar detection constraint and
the power budgets at the radar and the RISs. Since the problem is non-convex,
we propose an algorithm based on the penalty dual decomposition (PDD)
framework. Specifically, we initially introduce auxiliary variables to
reformulate the coupled variables into equation constraints and incorporate
these constraints into the objective function through the PDD framework. Then,
we decouple the equivalent problem into several subproblems by invoking the
block coordinate descent (BCD) method. Furthermore, we employ the Lagrange dual
method to alternately optimize these subproblems. Simulation results verify the
effectiveness of the proposed algorithm. Furthermore, the results also show
that under the same power budget, deploying double active RISs in an RCC
system can achieve a higher data rate than using a single active RIS or double
passive RISs.
|
We review QCD based descriptions of diffractive deep inelastic scattering
emphasising the role of models with parton saturation. These models provide a
natural explanation of such experimentally observed facts as the constant ratio
of the diffractive and total cross sections as a function of the Bjorken
variable, and Regge factorization of diffractive parton distributions. The
Ingelman-Schlein model and the soft color interaction model are also presented.
|
When the intensity of turbulence is increased (by increasing the Reynolds
number, e.g. by reducing the viscosity of the fluid), the rate of the
dissipation of kinetic energy decreases but does not tend asymptotically to
zero: it levels off to a non-zero constant as smaller and smaller vortical flow
structures are generated. This fundamental property, called the dissipation
anomaly, is sometimes referred to as the zeroth law of turbulence. The question
of what happens in the limit of vanishing viscosity (purely hypothetical in
classical fluids) acquires a particular physical significance in the context of
liquid helium, a quantum fluid which becomes effectively inviscid at low
temperatures achievable in the laboratory. By performing numerical simulations
and identifying the superfluid Reynolds number, here we show evidence for a
superfluid analog to the classical dissipation anomaly. Our numerics indeed
show that as the superfluid Reynolds number increases, smaller and smaller
structures are generated on the quantized vortex lines on which the superfluid
vorticity is confined, balancing the effect of weaker and weaker dissipation.
|
At the LaAlO$_3$-SrTiO$_3$ interface, electronic phase transitions can be
triggered by modulation of the charge carrier density, making this system an
excellent prospect for the realization of versatile electronic devices. Here,
we report repeatable transistor operation in locally gated LaAlO$_3$-SrTiO$_3$
field-effect devices of which the LaAlO$_3$ dielectric is only four unit cells
thin, the critical thickness for conduction at this interface. This extremely
thin dielectric allows a very efficient charge modulation of
${\sim}3.2\times10^{13}$ cm$^{-2}$ within a gate-voltage window of $\pm1$ V, as
extracted from capacitance-voltage measurements. These also reveal a large
stray capacitance between gate and source, presenting a complication for
nanoscale device operation. Despite the small LaAlO$_3$ thickness, we observe a
negligible gate leakage current, which we ascribe to the extension of the
conducting states into the SrTiO$_3$ substrate.
|
The total variation (TV) penalty, as many other analysis-sparsity problems,
does not lead to separable factors or a proximal operator with a closed-form
expression, such as soft thresholding for the $\ell_1$ penalty. As a result,
in a variational formulation of an inverse problem or statistical learning
estimation, it leads to challenging non-smooth optimization problems that are
often solved with elaborate single-step first-order methods. When the data-fit
term arises from empirical measurements, as in brain imaging, it is often very
ill-conditioned and without simple structure. In this situation, in proximal
splitting methods, the computation cost of the gradient step can easily
dominate each iteration. Thus it is beneficial to minimize the number of
gradient steps. We present fAASTA, a variant of FISTA, that relies on an
internal solver for the TV proximal operator, and refines its tolerance to
balance the computational cost of the gradient and the proximal steps. We give
benchmarks and illustrations on "brain decoding": recovering brain maps from
noisy measurements to predict observed behavior. The algorithm as well as the
empirical study of convergence speed are valuable for any non-exact proximal
operator, in particular analysis-sparsity problems.
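A minimal sketch of the overall scheme, with scikit-image's Chambolle solver
standing in for the inner TV proximal solver; the adaptive refinement of the
inner tolerance, which is the point of fAASTA, is collapsed here to a fixed
`inner_tol`.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def fista_tv(A, y, alpha, L, n_iter=100, inner_tol=1e-4):
    """Minimize 0.5*||A x - y||^2 + alpha * TV(x) with FISTA, the TV prox
    being computed inexactly by an inner solver. A maps images of shape
    (h, w), flattened; L is a Lipschitz bound on ||A^T A||."""
    h = w = int(np.sqrt(A.shape[1]))
    x = np.zeros((h, w)); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = (A.T @ (A @ z.ravel() - y)).reshape(h, w)
        x_new = denoise_tv_chambolle(z - grad / L, weight=alpha / L,
                                     eps=inner_tol)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```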
|
We construct the Dirac-Born-Infeld action in the context of N=1 conformal
supergravity and its possible extensions including matter couplings. We
especially focus on the Volkov-Akulov constraint, which is important to avoid
ghost modes from the higher derivative terms. In the case with matter
couplings, we find the modified D-term potential.
|
We give a simple proof of a pointwise decay estimate in 3+1 dimensions, stated
in two versions, taking advantage of a particular simplicity of inverting the
spherically symmetric part of the wave operator and of the comparison theorem.
We briefly explain the role of this estimate in proving decay estimates for
nonlinear wave equations or wave equations with potential terms.
|
Among the various types of online language learner collectives, we analyze in
this contribution Web 2.0 communities featuring an explicit progression. We
use three analysis angles (user roles, pedagogical progression and content) in
order to provide leads towards expressing the interrelations between the
implementation choices regarding the concepts linked to Web 2.0 and the
learning experience of the users.
|
{\it Background.} We investigate possible correlations between neutron star
observables and properties of atomic nuclei. Particularly, we explore how the
tidal deformability of a 1.4 solar mass neutron star, $M_{1.4}$, and the
neutron skin thickness of ${^{48}}$Ca and ${^{208}}$Pb are related to the
stellar radius and the stiffness of the symmetry energy. {\it Methods.} We
examine a large set of nuclear equations of state based on phenomenological
models (Skyrme, NLWM, DDM) and {\it ab-initio} theoretical methods (BBG,
Dirac-Brueckner, Variational, Quantum Monte Carlo). {\it Results.} We find
strong correlations between tidal deformability and NS radius, whereas a weaker
correlation does exist with the stiffness of the symmetry energy. Regarding the
neutron skin thickness, weak correlations appear both with the stiffness of the
symmetry energy and with the radius of a $M_{1.4}$ star. {\it Conclusion.} The tidal
deformability of a $M_{1.4}$ and the neutron-skin thickness of atomic nuclei
show some degree of correlation with nuclear and astrophysical observables,
which however depends on the ensemble of adopted EoS.
|
A stochastic 3D microstructure model for polycrystals is introduced which
incorporates two types of twin grains, namely neighboring and inclusion twins.
They mimic the presence of crystal twins in $\gamma$-TiAl polycrystalline
microstructures as observed by 3D imaging techniques. The polycrystal grain
morphology is modeled by means of Voronoi and -- more generally -- Laguerre
tessellations. The crystallographic orientation of each grain is either sampled
uniformly on the space of orientations or chosen to be in a twinning relation
with another grain. The model is used to quantitatively study relationships
between morphology and mechanical properties of polycrystalline materials. For
this purpose, full-field Fourier-based computations are performed to
investigate the combined effect of grain morphology and twinning on the overall
elastic response. For $\gamma$-TiAl polycrystals, the presence of twins is
associated with a softer response compared to polycrystalline aggregates
without twins. However, when comparing the influence on the elastic response, a
statistically different polycrystalline morphology has a much smaller effect
than the presence of twin grains. Notably, the bulk modulus is almost
insensitive to the grain morphology and exhibits much less sensitivity to the
presence of twins compared to the shear modulus. The numerical results are
consistent with a two-scale homogenization estimate that utilizes laminate
materials to model the interactions of twins.
|
The first results on next-to-leading order QCD corrections to the production
of two Z bosons in hadronic collisions in the large extra dimension ADD model
are presented. Various kinematical distributions are obtained to order
$\alpha_s$ in QCD by taking into account all the parton level subprocesses. We
estimate the impact of the QCD corrections on various observables and find that
they are significant. We also show the reduction in factorisation scale
uncertainty when ${\cal O}(\alpha_s)$ effects are included.
|
The study of deep recurrent neural networks (RNNs) and, in particular, of
deep Reservoir Computing (RC) is gaining increasing research attention in
the neural networks community. The recently introduced Deep Echo State Network
(DeepESN) model opened the way to an extremely efficient approach for designing
deep neural networks for temporal data. At the same time, the study of DeepESNs
allowed to shed light on the intrinsic properties of state dynamics developed
by hierarchical compositions of recurrent layers, i.e. on the bias of depth in
RNNs architectural design. In this paper, we summarize the advancements in the
development, analysis and applications of DeepESNs.
|
We reconsider, using path integral methods, the problem of the optimal time to
sell a stock recently studied by Shiryaev, Xu and Zhou. This method allows us
to confirm the results obtained by these authors and to extend them to a
parameter region inaccessible to the method used by Shiryaev et al. We also
obtain the full distribution of the time t_m at which the maximum of the price
is reached, for arbitrary values of the drift.
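For intuition, the distribution of t_m can be estimated by direct Monte Carlo
simulation of a drifted Brownian price process. This is only a numerical sketch
with arbitrary parameters, not the path-integral calculation itself; for zero
drift the histogram approaches the arcsine law.

```python
import numpy as np

rng = np.random.default_rng(1)

# Drifted Brownian motion X_t = mu*t + W_t on [0, T]; sample the time t_m
# at which the running maximum is attained (parameters are illustrative).
mu, T, n_steps, n_paths = -0.5, 1.0, 1000, 20000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

increments = mu * dt + np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
t_m = t[np.argmax(paths, axis=1)]

# Empirical density of t_m over [0, T].
hist, edges = np.histogram(t_m, bins=20, range=(0.0, T), density=True)
print(np.round(hist, 2))
```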
|
New high variability extragalactic sources may be identified by comparing the
flux of sources seen in the XMM-Newton Slew Survey with detections and upper
limits from the ROSAT All Sky Survey. In November 2012, X-ray emission was
detected from the galaxy XMMSL1 J061927.1-655311 (a.k.a. 2MASX
06192755-6553079), a factor of 140 higher than an upper limit from 20 years
earlier. Both the X-ray and UV flux subsequently fell, over the following year,
by factors of 20 and 4 respectively. Optically, the galaxy appears to be a
Seyfert I with broad Balmer lines and weak, narrow, low-ionisation emission
lines, at a redshift of 0.0729. The X-ray luminosity peaks at Lx ~ 8x10^43
ergs/s with a typical Sy I-like power-law X-ray spectrum of index ~ 2. The
flare has either been caused by a tidal disruption event or by an increase in
the accretion rate of a persistent AGN.
|
We consider the most general loop integral that appears in non-relativistic
effective field theories with no light particles. The divergences of this
integral are in correspondence with simple poles in the space of complex
space-time dimensions. Integrals related to the original integral by
subtraction of one or more poles in dimensions other than D=4 lead to
nonminimal subtraction schemes. Subtraction of all poles in correspondence with
ultraviolet divergences of the loop integral leads naturally to a
regularization scheme which is precisely equivalent to cutoff regularization.
We therefore recover cutoff regularization from dimensional regularization with
a nonminimal subtraction scheme. We then discuss the power-counting for
non-relativistic effective field theories which arises in these alternative
schemes.
|
For $0<p<1,$ we prove that there is a $\mathfrak{c}$-dimensional subspace of
$\mathcal{L}\left( \ell_{p},\ell_{p}\right) $ such that, except for the null
vector, all of its vectors fail to be absolutely $(r,s)$-summing regardless of
the real numbers $r,s$, with $1\leq s\leq r<\infty$. This extends a result
proved by Maddox in 1987. Moreover, the result is sharp in the sense that it is
not valid for $p\geq1.$
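For reference, the standard definition in play (stated here in the Banach-space
setting): an operator $T\colon X\to Y$ is absolutely $(r,s)$-summing if there
exists a constant $C\ge 0$ such that, for every finite family
$x_1,\dots,x_n\in X$,
\[ \Big( \sum_{i=1}^{n} \| T x_i \|^{r} \Big)^{1/r} \le C \,
\sup_{\varphi\in B_{X^{*}}} \Big( \sum_{i=1}^{n} |\varphi(x_i)|^{s}
\Big)^{1/s}. \]
The result above exhibits a $\mathfrak{c}$-dimensional subspace on which this
inequality fails at every nonzero vector, for every admissible pair $(r,s)$.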
|
In this paper, we first investigate the model of the degraded broadcast channel
with side information and confidential messages. This work builds on
Steinberg's work on the degraded broadcast channel with causal and noncausal
side information, and on Csisz$\acute{a}$r-K\"{o}rner's work on the broadcast channel with
confidential messages. Inner and outer bounds on the capacity-equivocation
regions are provided for the noncausal and causal cases. Superposition coding
and double-binning technique are used in the corresponding achievability
proofs.
Then, we investigate the degraded broadcast channel with side information,
confidential messages and noiseless feedback. The noiseless feedback is from
the non-degraded receiver to the channel encoder. Inner and outer bounds on the
capacity-equivocation region are provided for the noncausal case, and the
capacity-equivocation region is determined for the causal case. Compared with
the model without feedback, we find that the noiseless feedback helps to
enlarge the inner bounds for both causal and noncausal cases. In the
achievability proof of the feedback model, the noiseless feedback is used as a
secret key shared by the non-degraded receiver and the transmitter, and
therefore, the code construction for the feedback model is a combination of
superposition coding, Gel'fand-Pinsker's binning, block Markov coding and
Ahlswede-Cai's secret key on the feedback system.
|
An inconsistency is pointed out within Quantum Mechanics as soon as
successive joint measurements are involved on entangled states. The resolution
of the inconsistency leads to a refutation of the use of entangled states as
eigenvectors. Hence, the concept of quantum teleportation, which is based on
the use of such entangled states--the Bell states--as eigenvectors, is
demonstrated to be irrelevant to Quantum Mechanics.
|
Clustering of Lyman-$\alpha$ (Ly$\alpha$) emitting galaxies (LAEs) is a
useful probe of cosmology. However, Ly$\alpha$ radiative transfer (RT) effects,
such as absorption, line shift, and line broadening, and their dependence on
the large-scale density and velocity fields can modify the measured LAE
clustering and line intensity mapping (LIM) statistics. We explore the effect
of RT on the Ly$\alpha$ LIM power spectrum in two ways: using an analytic
description based on linear approximations and using lognormal mocks. The
qualitative effect of intergalactic Ly$\alpha$ absorption on the LIM auto- and
cross-power spectrum is a scale-dependent, reduced effective bias, reduced mean
intensity, and modified redshift-space distortions. The linear absorption model
does not describe the results of the lognormal simulations well. The random
line shift suppresses the redshift-space power spectrum similar to the
Fingers-of-God effect. In cross-correlation of LAEs or Ly$\alpha$ intensity
with a non-Ly$\alpha$ tracer, the Ly$\alpha$ line shift leads to a phase shift
of the complex power spectrum, i.e. a cosine damping of the real part. Line
broadening from RT suppresses the LIM power spectra in the same way as limited
spectral resolution. We study the impact of Ly$\alpha$ RT effects on the
Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) LAE and LIM power
spectra using lognormal mocks. We find that even small amounts of IGM
absorption will significantly change the measured LAE auto-power spectrum and
the LAE-intensity cross-power spectrum. Therefore, HETDEX will be able to
constrain Ly$\alpha$ RT effects.
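Schematically, and in linear theory only (a sketch under simple assumptions,
not the paper's exact expressions), the effects described above act on the
power spectrum as follows: a coherent line-of-sight shift $\Delta x$ of the
Ly$\alpha$ line multiplies the cross-spectrum with another tracer by a phase,
so its real part picks up a cosine factor,
\[ P_{\times}(k,\mu) \to e^{i k \mu \Delta x}\, P_{\times}(k,\mu), \qquad
\mathrm{Re}\,P_{\times} \to \cos(k\mu\,\Delta x)\, \mathrm{Re}\,P_{\times}, \]
while Gaussian random shifts of variance $\sigma^2$ damp each field by
$e^{-k^2\mu^2\sigma^2/2}$ and hence the auto-spectrum by
\[ P(k,\mu) \to e^{-k^{2}\mu^{2}\sigma^{2}}\, P(k,\mu), \]
with $\mu$ the cosine of the angle to the line of sight; this is the
Fingers-of-God-like suppression mentioned above.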
|
Optical neural networks (ONNs), or optical neuromorphic hardware
accelerators, have the potential to dramatically enhance the computing power
and energy efficiency of mainstream electronic processors, due to their
ultralarge bandwidths of up to 10s of terahertz together with their analog
architecture that avoids the need for reading and writing data back and forth.
Different multiplexing techniques have been employed to demonstrate ONNs,
amongst which wavelength division multiplexing (WDM) techniques make sufficient
use of the unique advantages of optics in terms of broad bandwidths. Here, we
review recent advances in WDM based ONNs, focusing on methods that use
integrated microcombs to implement ONNs. We present results for human image
processing using an optical convolution accelerator operating at 11 Tera
operations per second. The open challenges and limitations of ONNs that need to
be addressed for future applications are also discussed.
|
Many different scenarios of the early universe can give rise to the same
observational signals, owing to a degeneracy among them caused by equivalence
under conformal transformations. In order to break the degeneracy, in this
paper we take into account the so-called "frame-invariant variables" proposed
by A. Ijjas and P. J. Steinhardt in \cite{Ijjas:2015zma}. We discuss how the
different scenarios are distributed in the parametric spaces constructed from
these variables, awaiting the judgement of real observations in the future.
Several concrete models with explicit non-minimal coupling functions are also
discussed.
|
We explore the role of redshift-space galaxy clustering data in constraining
non-gravitational interactions between dark energy (DE) and dark matter (DM),
for which state-of-the-art limits have so far been obtained from late-time
background measurements. We use the joint likelihood for pre-reconstruction
full-shape (FS) galaxy power spectrum and post-reconstruction Baryon Acoustic
Oscillation (BAO) measurements from the BOSS DR12 sample, alongside Cosmic
Microwave Background (CMB) data from \textit{Planck}: from this dataset
combination we infer $H_0=68.02^{+0.49}_{-0.60}\,{\rm km}/{\rm s}/{\rm Mpc}$
and the 2$\sigma$ lower limit $\xi>-0.12$, among the strongest limits ever
reported on the DM-DE coupling strength $\xi$ for the particular model
considered. Contrary to what has been observed for the $\Lambda$CDM model and
simple extensions thereof, we find that the CMB+FS combination returns tighter
constraints compared to the CMB+BAO one, suggesting that there is valuable
additional information contained in the broadband of the power spectrum. We
test this finding by running additional CMB-free analyses and removing sound
horizon information, and discuss the important role of the equality scale in
setting constraints on DM-DE interactions. Our results reinforce the critical
role played by redshift-space galaxy clustering measurements in the epoch of
precision cosmology, particularly in relation to tests of non-minimal dark
sector extensions of the $\Lambda$CDM model.
|
Chromatin fibers composed of DNA and proteins fold into consecutive loops to
form rod-shaped chromosomes in mitosis. Although loop growth dynamics has been
investigated in several studies, its detailed processes remain unclear. Here, we
describe the time evolution of the loop length for thermal-driven loop growth
processes as an iterative map by calculating physical quantities involved in
the processes. We quantify the energy during the chromatin loop formation by
calculating the free energies of unlooped and looped chromatins using the
Domb-Joyce model of a lattice polymer chain incorporating the bending
elasticity for thermal-driven loop growth processes. The excluded volume
interaction among loops is integrated by employing the mean-field theory. We
compare the loop formation energy with the thermal energy and evaluate the
growth of loop length via thermal fluctuation. By assuming the dependence of
the excluded volume parameter on the loop length, we construct an iterative map
for the loop growth dynamics. The map demonstrates that the growth length of
the loop during a single reaction cycle decreases with time down to the
condensin size, where the loop growth dynamics becomes less stochastic and can
be regarded as a direct power stroke of condensin acting as a motor protein.
|
For a $C^*$-algebra generated by a finite family of isometries $s_j$,
$j=1,\dots,d$, satisfying the $q_{ij}$-commutation relations
\[ s_j^* s_j = I, \quad s_i^* s_j = q_{ij}s_js_i^*, \qquad q_{ij} = \bar
q_{ji}, \ |q_{ij}|<1, \ 1\le i \ne j \le d,
\] we construct an infinite family of unitarily non-equivalent irreducible
representations. These representations are deformations of the corresponding
class of representations of the Cuntz algebra $\mathcal{O}_d$.
|
We design temporal description logics suitable for reasoning about temporal
conceptual data models and investigate their computational complexity. Our
formalisms are based on DL-Lite logics with three types of concept inclusions
(ranging from atomic concept inclusions and disjointness to the full Booleans),
as well as cardinality constraints and role inclusions. In the temporal
dimension, they capture future and past temporal operators on concepts,
flexible and rigid roles, the operators `always' and `some time' on roles, data
assertions for particular moments of time and global concept inclusions. The
logics are interpreted over the Cartesian products of object domains and the
flow of time (Z,<), satisfying the constant domain assumption. We prove that
the most expressive of our temporal description logics (which can capture
lifespan cardinalities and either qualitative or quantitative evolution
constraints) turn out to be undecidable. However, by omitting some of the
temporal operators on concepts/roles or by restricting the form of concept
inclusions we obtain logics whose complexity ranges between PSpace and
NLogSpace. These positive results were obtained by reduction to various clausal
fragments of propositional temporal logic, which opens a way to employ
propositional or first-order temporal provers for reasoning about temporal data
models.
|
Humans are able to make rich predictions about the future dynamics of
physical objects from a glance. On the other hand, most existing computer
vision approaches require strong assumptions about the underlying system,
ad-hoc modeling, or annotated datasets, to carry out even simple predictions.
To bridge this gap, we propose a new perspective on the problem of learning
intuitive physics that is inspired by the spatial memory representation of
objects and spaces in human brains, in particular the co-existence of
egocentric and allocentric spatial representations. We present a generic
framework that learns a layered representation of the physical world, using a
cascade of invertible modules. In this framework, real images are first
converted to a synthetic domain representation that reduces complexity arising
from lighting and texture. Then, an allocentric viewpoint transformer removes
viewpoint complexity by projecting images to a canonical view. Finally, a novel
Recurrent Latent Variation Network (RLVN) architecture learns the dynamics of
the objects interacting with the environment and predicts future motion,
leveraging the availability of unlimited synthetic simulations. Predicted
frames are then projected back to the original camera view and translated back
to the real world domain. Experimental results show the ability of the
framework to consistently and accurately predict several frames in the future
and the ability to adapt to real images.
|
The Support Vector Machine (SVM) has been used in a wide variety of
classification problems. The original SVM uses the hinge loss function, which
is non-differentiable and makes the problem difficult to solve in particular
for regularized SVMs, such as with $\ell_1$-regularization. This paper
considers the Huberized SVM (HSVM), which uses a differentiable approximation
of the hinge loss function. We first explore the use of the Proximal Gradient
(PG) method to solve the binary-class HSVM (B-HSVM) and then generalize it to
multi-class HSVM (M-HSVM). Under strong convexity assumptions, we show that our
algorithm converges linearly. In addition, we give a finite convergence result
about the support of the solution, based on which we further accelerate the
algorithm by a two-stage method. We present extensive numerical experiments on
both synthetic and real datasets which demonstrate the superiority of our
methods over some state-of-the-art methods for both binary- and multi-class
SVMs.
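A minimal sketch of the PG iteration for the $\ell_1$-regularized B-HSVM: a
gradient step on the smooth huberized hinge loss followed by soft-thresholding.
The loss parametrization and the step-size rule are common choices and are not
necessarily the paper's exact ones.

```python
import numpy as np

def huber_hinge(z, delta=0.5):
    """Huberized hinge loss: a smooth approximation of max(0, 1 - z)."""
    return np.where(z > 1, 0.0,
           np.where(z > 1 - delta, (1 - z) ** 2 / (2 * delta),
                    1 - z - delta / 2))

def huber_hinge_grad(z, delta=0.5):
    """Derivative of the huberized hinge loss with respect to its margin z."""
    return np.where(z > 1, 0.0,
           np.where(z > 1 - delta, -(1 - z) / delta, -1.0))

def pg_l1_hsvm(X, y, lam=0.1, n_iter=500, delta=0.5):
    """Proximal gradient for the l1-regularized binary HSVM (a sketch)."""
    n, d = X.shape
    # The smooth part has Lipschitz gradient, constant <= ||X||^2 / (n*delta).
    step = n * delta / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        z = y * (X @ w)
        grad = X.T @ (y * huber_hinge_grad(z, delta)) / n
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox of l1
    return w

# Toy usage on nearly separable data with labels in {-1, +1}.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200))
w = pg_l1_hsvm(X, y)
print("objective:", huber_hinge(y * (X @ w)).mean() + 0.1 * np.abs(w).sum())
print("nonzero coefficients:", np.flatnonzero(np.round(w, 3)))
```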
|
Widespread, high altitude, diffuse ionized gas with scale heights of around a
kiloparsec is observed in the Milky Way and other star forming galaxies.
Numerical radiation-magnetohydrodynamic simulations of a supernova-driven
turbulent interstellar medium show that gas can be driven to high altitudes
above the galactic midplane, but the degree of ionization is often less than
inferred from observations. For computational expediency, ionizing radiation
from massive stars is often included as a post-processing step assuming
ionization equilibrium. We extend our simulations of a Milky Way-like
interstellar medium to include the combined effect of supernovae and
photoionization feedback from midplane OB stars and a population of hot evolved
low mass stars. The diffuse ionized gas has densities below 0.1 ${\rm
cm^{-3}}$, so recombination timescales can exceed millions of years. Our
simulations now follow the time-dependent ionization and recombination of low
density gas. The long recombination timescales result in diffuse ionized gas
that persists at large altitudes long after the deaths of massive stars that
produce the vast majority of the ionized gas. The diffuse ionized gas does not
exhibit the large variability inherent in simulations that adopt ionization
equilibrium. The vertical distribution of neutral and ionized gas is close to
what is observed in the Milky Way. The volume filling factor of ionized gas
increases with altitude resulting in the scale height of free electrons being
larger than that inferred from H$\alpha$ emission, thus reconciling the
observations of ionized gas made in H$\alpha$ and from pulsar dispersion
measurements.
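A back-of-the-envelope check of the quoted timescales, assuming a case-B
recombination coefficient at ~10^4 K (standard order-of-magnitude values, not
numbers taken from the simulations):

```python
# Recombination timescale t_rec = 1 / (alpha_B * n_e).
alpha_B = 2.6e-13        # case-B recombination coefficient at ~1e4 K [cm^3/s]
n_e = 0.1                # electron density of the diffuse ionized gas [cm^-3]
seconds_per_myr = 3.156e13

t_rec = 1.0 / (alpha_B * n_e)                        # seconds
print(f"t_rec ~ {t_rec / seconds_per_myr:.1f} Myr")  # ~1.2 Myr at n_e = 0.1
```

At densities below 0.1 cm^-3 the timescale only grows, consistent with ionized
gas persisting for millions of years after the ionizing stars die.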
|
We present the X-ray spectroscopic study of the Compton-thick (CT) active
galactic nuclei (AGN) population within the $\textit{Chandra}$ Deep Field South
(CDF-S) by using the deepest X-ray observation to date, the $\textit{Chandra}$
7 Ms observation of the CDF-S. We combined an optimized version of our
automated selection technique with a Bayesian Markov Chain Monte Carlo (MCMC)
spectral fitting procedure to develop a method to pinpoint and then
characterize candidate CT AGN in a way that is as model-independent and
data-quality-independent as possible. To obtain reliable automated spectral
fits, we only considered the sources detected in the hard (2-8 keV) band of the
CDF-S 2 Ms catalog with either spectroscopic or photometric redshifts
available, for a total of 259 sources.
Instead of using our spectral analysis to decide if an AGN is CT, we derived
the posterior probability for the column density, and then we used it to assign
a probability of a source being CT. We also tested how the model-dependence of
the spectral analysis, and the spectral data quality, could affect our results
by using simulations. We finally derived the number density of CT AGN by taking
into account the probabilities of our sources being CT and the results from the
simulations. Our results are in agreement with X-ray background synthesis
models, which postulate a moderate fraction (25%) of CT objects among the
obscured AGN population.
|
We study the interaction of small hydrophobic particles on the surface of an
ultra-soft elastic gel, in which a small amount of elasticity of the medium
balances the weights of the particles. The excess energy of the surface of the
deformed gel causes them to attract as is the case with the generic capillary
interactions of particles on a liquid surface. The variation of the
gravitational potential energies of the particles resulting from their descents
in the gel coupled with the superposition principle of Nicolson allow a fair
estimation of the distance dependent attractive energy of the particles. This
energy follows a modified Bessel function of the second kind with a
characteristic elastocapillary decay length that decreases with the elasticity
of the medium. An interesting finding of this study is that the particles on
the gel move towards each other as if the system possesses a negative
diffusivity that is inversely proportional to friction. This study illustrates
how the capillary interaction of particles is modified by the elasticity of the
medium, which is expected to have important implications in the surface force
driven self-assembly of particles. In particular, this study points out that
the range and the strength of the capillary interaction can be tuned by
appropriate choices of the elasticity of the support and the interfacial
tension of the surrounding medium. Manipulation of the particle interactions is
exemplified in fascinating mimicry of such biological processes as tubulation
and phagocytic engulfment, and in the assembly of particles that can be used to
study nucleation and clustering phenomena in well-controlled settings.
|
We present simultaneous XMM-Newton UV and X-ray observations of the quadruply
lensed quasar SDSS J1004+4112 (RBS 825). Simultaneously with the XMM-Newton
observations we also performed integral field spectroscopy on the two closest
lens images A and B using the Calar Alto PMAS spectrograph. In X-rays the
widely spaced components C and D are clearly resolved, while the closer pair of
images A and B is marginally resolved in the XMM-EPIC images. The integrated
X-ray flux of the system has decreased by a factor of 6 since it was observed
in the ROSAT All Sky Survey in 1990, while the X-ray spectrum became much
harder with the power law index evolving from Gamma=-2.3 to -1.86. By
deblending the X-ray images of the lensed QSO we find that the X-ray flux
ratios between the lens images A and B are significantly different from the
simultaneously obtained UV ratios and previously measured optical flux ratios.
Our optical spectrum of lens image A shows an enhancement in the blue emission
line wings, which has been observed in previous epochs as a transient feature.
We propose a scenario where intrinsic UV and X-ray variability gives rise to
line variations which are selectively magnified in image A by microlensing. The
extended emission of the lensing cluster of galaxies is clearly detected in the
EPIC images; we measure a 0.5-2.0 keV luminosity of 1.4 E44 erg/s. Based on the
cluster X-ray properties, we estimate a mass of 2-6 E14 solar masses.
|
An approximate partition functional is derived for the infinite-dimensional
Hubbard model. This functional naturally includes the exact solution of the
Falicov-Kimball model as a special case, and is exact in the uncorrelated and
atomic limits. It explicitly preserves spin symmetry. For the case of the
Lorentzian density of states, we find that the Luttinger theorem is satisfied
at zero temperature. The susceptibility crosses over smoothly from that
expected for an uncorrelated state with antiferromagnetic fluctuations at high
temperature to a correlated state at low temperature via a Kondo-type anomaly
at a characteristic temperature $T^\star$. We attribute this anomaly to the
appearance of the Hubbard pseudo-gap. The specific heat also shows a peak near
$T^\star$. The resistivity goes to zero at zero temperature, in contrast to
other approximations, rises sharply around $T^\star$, and has a roughly linear
temperature dependence above $T^\star$.
|
The problem of inverse statistics (statistics of distances for which the
signal fluctuations are larger than a certain threshold) in differentiable
signals with power law spectrum, $E(k) \sim k^{-\alpha}$, $3 \le \alpha < 5$,
is discussed. We show that for these signals, with random phases, exit-distance
moments follow a bi-fractal distribution. We also investigate two dimensional
turbulent flows in the direct cascade regime, which display a more complex
behavior. We give numerical evidence that the inverse statistics of 2d
turbulent flows is described by a multi-fractal probability distribution, i.e.
the statistics of laminar events is not simply captured by the exponent
$\alpha$ characterizing the spectrum.
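A minimal sketch of the setup: synthesize a random-phase signal with spectrum
$E(k)\sim k^{-\alpha}$ via the FFT and measure exit distances above a
threshold. The subsampling of starting points and the threshold choice are
illustrative, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_phase_signal(n, alpha):
    """1D signal with power-law spectrum E(k) ~ k^-alpha and random phases."""
    k = np.arange(1, n // 2 + 1)
    amp = k ** (-alpha / 2.0)                 # |u_k| ~ sqrt(E(k))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    spec = np.zeros(n, dtype=complex)
    spec[1:n // 2 + 1] = amp * np.exp(1j * phase)
    spec[n // 2 + 1:] = np.conj(spec[1:n // 2])[::-1]  # Hermitian symmetry
    return np.fft.ifft(spec).real

def exit_distances(u, threshold):
    """First distance r with |u(x+r) - u(x)| > threshold, per starting point."""
    out = []
    for i in range(0, len(u) - 1, 64):        # subsample starting points
        hit = np.flatnonzero(np.abs(u[i + 1:] - u[i]) > threshold)
        if hit.size:
            out.append(hit[0] + 1)
    return np.array(out)

u = random_phase_signal(2 ** 14, alpha=3.5)
r = exit_distances(u, threshold=0.1 * u.std())
print("mean exit distance:", r.mean())
```

Moments of r as a function of the threshold then give the exit-distance
statistics whose bi-fractal behavior is discussed above.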
|
Scene understanding is crucial for autonomous systems which intend to operate
in the real world. Single task vision networks extract information only based
on some aspects of the scene. In multi-task learning (MTL), on the other hand,
these single tasks are jointly learned, thereby providing an opportunity for
tasks to share information and obtain a more comprehensive understanding. To
this end, we develop UniNet, a unified scene understanding network that
accurately and efficiently infers vital vision tasks including object
detection, semantic segmentation, instance segmentation, monocular depth
estimation, and monocular instance depth prediction. As these tasks look at
different semantic and geometric information, they can either complement or
conflict with each other. Therefore, understanding inter-task relationships can
provide useful cues to enable complementary information sharing. We evaluate
the task relationships in UniNet through the lens of adversarial attacks based
on the notion that they can exploit learned biases and task interactions in the
neural network. Extensive experiments on the Cityscapes dataset, using
untargeted and targeted attacks reveal that semantic tasks strongly interact
amongst themselves, and the same holds for geometric tasks. Additionally, we
show that the relationship between semantic and geometric tasks is asymmetric
and their interaction becomes weaker as we move towards higher-level
representations.
|
We present a multirotor architecture capable of aggressive autonomous flight
and collision-free teleoperation in unstructured, GPS-denied environments. The
proposed system enables aggressive and safe autonomous flight around clutter by
integrating recent advancements in visual-inertial state estimation and
teleoperation. Our teleoperation framework maps user inputs onto smooth and
dynamically feasible motion primitives. Collision-free trajectories are ensured
by querying a locally consistent map that is incrementally constructed from
forward-facing depth observations. Our system enables a non-expert operator to
safely navigate a multirotor around obstacles at speeds of 10 m/s. We achieve
autonomous flights at speeds exceeding 12 m/s and accelerations exceeding 12
m/s^2 in a series of outdoor field experiments that validate our approach.
|
Context management strategies in wireless technology are dependent upon the
collection of accurate information from the individual nodes. This information
(called context information) can be exploited by administrators or automated
systems in order to decide on specific management concerns. While traditional
approaches for fixed networks are more or less centralized, more complex
management strategies have been proposed for wireless networks, such as
hierarchical, fully distributed and hybrid ones. The reason for the
introduction of new strategies is based on the dynamic and unpredictable nature
of wireless networks and their (usually) limited resources, which do not
support centralized management solutions. In this work, an effort is made to
uncover specific strategies that can be used to manage the context information
that reaches the centre of decision making. The work concludes with a detailed
comparison of the strategies, enabling context application developers to make
the right choice of strategy for a specific situation.
|
Interest-rate risk is a key factor for property-casualty insurer capital. P&C
companies tend to be highly leveraged, with bond holdings much greater than
capital. For GAAP capital, bonds are marked to market but liabilities are not,
so shifts in the yield curve can have a significant impact on capital.
Yield-curve scenario generators are one approach to quantifying this risk. They
produce many future simulated evolutions of the yield curve, which can be used
to quantify the probabilities of bond-value changes that would result from
various maturity-mix strategies. Some of these generators are provided as
black-box models where the user gets only the projected scenarios. One focus of
this paper is to provide methods for testing generated scenarios from such
models by comparing to known distributional properties of yield curves.
P&C insurers hold bonds to maturity and manage cash-flow risk by matching
asset and liability flows. Derivative pricing and stochastic volatility are of
little concern over the relevant time frames. This requires different models
and model testing than what is common in the broader financial markets.
To complicate things further, interest rates for the last decade have not
been following the patterns established in the sixty years following WWII. We
are now coming out of the period of very low rates, yet are still not returning
to what had been thought of as normal before that. Modeling and model testing
are in an evolving state while new patterns emerge.
Our analysis starts with a review of the literature on interest-rate model
testing, with a P&C focus, and an update of the tests for current market
behavior. We then discuss models, and use them to illustrate the fitting and
testing methods. The testing discussion does not require the model-building
section.
|
As one of the possible extensions of Einstein's General Theory of Relativity,
it has been recently suggested that the presence of spacetime torsion could
solve problems of the very early and the late-time universe undergoing
accelerating phases. In this paper, we use the latest observations of
high-redshift data, coming from multiple measurements of quasars and baryon
acoustic oscillations, to phenomenologically constrain such a cosmological
model in the framework of Einstein-Cartan (EC) theory endowed with spacetime
torsion. The newly compiled quasar datasets are crucial to this aim, since they
extend the Hubble diagram to a high-redshift range in which predictions from
different cosmologies can be distinguished. Our results show
that out of all the candidate models, the torsion plus cosmological constant
model is strongly favoured by the current high-redshift data, where torsion
itself would be expected to yield the current cosmic acceleration. Specifically,
in the framework of Friedmann-like cosmology with torsion, the determined
Hubble constant is in very good agreement with that derived from the Planck
2018 CMB results. On the other hand, our results are compatible with zero
spatial curvature and there is no significant deviation from flat spatial
hypersurfaces. Finally, we check the robustness of high-redshift observations
by placing constraints on the torsion parameter $\alpha$, which are strongly
consistent with other recent works focusing on the effect of torsion on the
primordial helium-4 abundance.
|
We present our progress on the study of extinction laws along three different
lines. [a] We compare how well different families of extinction laws fit
existing photometric data for Galactic sightlines and we find that the Ma\'iz
Apell\'aniz et al. (2014) family provides better results than those of Cardelli
et al. (1989) or Fitzpatrick (1999). [b] We describe the HST/STIS
spectrophotometry in the 1700-10200 Angstrom range that we are obtaining for
several tens of sightlines in 30 Doradus with the purpose of deriving an
improved wavelength-detailed family of extinction laws. [c] We present the
study we are conducting on the behavior of the extinction law in the infrared
by combining 2MASS and WISE photometry with Spitzer and ISO spectrophotometry.
|
In a previous work we have introduced the notion of embedded
$\mathbf{Q}$-resolution, which essentially consists in allowing the final
ambient space to contain abelian quotient singularities, and A'Campo's formula
was calculated in this setting. Here we study the semistable reduction
associated with an embedded $\mathbf{Q}$-resolution so as to compute the mixed
Hodge structure on the cohomology of the Milnor fiber in the isolated case
using a generalization of Steenbrink's spectral sequence. Examples of
Yomdin-L\^{e} surface singularities are presented as an application.
|
Continuing the project described by Kato et al. (2009, PASJ 61, S395,
arXiv:0905.1757), we collected times of superhump maxima for 51 SU UMa-type
dwarf novae mainly observed during the 2010-2011 season. Although most of the
new data for systems with short superhump periods basically confirmed the
findings by Kato et al. (2009) and Kato et al. (2010, PASJ 62, 1525,
arXiv:1009.5444), the long-period system GX Cas showed an exceptionally large
positive period derivative. An analysis of public Kepler data of V344 Lyr and
V1504 Cyg yielded less striking stage transitions. In V344 Lyr, there was a
prominent secondary component growing during the late stage of superoutbursts,
and the component persisted at least for two more cycles of successive normal
outbursts. We also investigated the superoutbursts of two conspicuous eclipsing
objects: HT Cas and the WZ Sge-type object SDSS J080434.20+510349.2. Strong
beat phenomena were detected in both objects, and late-stage superhumps in the
latter object had an almost constant luminosity during the repeated
rebrightenings. The WZ Sge-type object SDSS J133941.11+484727.5 showed a phase
reversal around the rapid fading from the superoutburst. The object showed a
prominent beat phenomenon even after the end of the superoutburst. A pilot
study of superhump amplitudes indicated that the amplitudes of superhumps are
strongly correlated with orbital periods, and the dependence on the inclination
is weak in systems with inclinations smaller than 80 deg.
|
Hidden symmetries of the Goryachev-Chaplygin and Kovalevskaya gyrostats
spacetimes, as well as the Brdi\v{c}ka-Eardley-Nappi-Witten pp-waves are
studied. We find out that these spacetimes possess higher rank
St\"ackel-Killing tensors and that in the case of the pp-wave spacetimes the
symmetry group of the St\"ackel-Killing tensors is the well-known Newton-Hooke
group.
|
The imputation of missing values in multivariate time series (MTS) data is
critical in ensuring data quality and producing reliable data-driven predictive
models. Apart from many statistical approaches, a few recent studies have
proposed state-of-the-art deep learning methods to impute missing values in MTS
data. However, the evaluation of these deep methods is limited to one or two
data sets, low missing rates, and completely random missing value types. This
survey performs six data-centric experiments to benchmark state-of-the-art deep
imputation methods on five time series health data sets. Our extensive analysis
reveals that no single imputation method outperforms the others on all five
data sets. The imputation performance depends on data types, individual
variable statistics, missing value rates, and types. Deep learning methods that
jointly perform cross-sectional (across variables) and longitudinal (across
time) imputations of missing values in time series data yield statistically
better data quality than traditional imputation methods. Although
computationally expensive, deep learning methods are practical given the
current availability of high-performance computing resources, especially when
data quality and sample size are highly important in healthcare informatics.
Our findings highlight the importance of data-centric selection of imputation
methods to optimize data-driven predictive models.
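A minimal sketch of the kind of data-centric experiment described above:
artificially mask entries of a complete multivariate time series, impute, and
score only the masked entries. The MCAR masking and the mean-imputation
baseline are illustrative choices, not the survey's benchmarked deep methods.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcar_mask(shape, rate):
    """Missing-completely-at-random mask: True marks an entry to hide."""
    return rng.random(shape) < rate

def rmse_on_missing(x_true, x_hat, mask):
    """Imputation error computed only on the artificially masked entries."""
    return np.sqrt(np.mean((x_true[mask] - x_hat[mask]) ** 2))

# Toy MTS data: samples x timesteps x variables, with a shared smooth trend.
x = (np.sin(np.linspace(0.0, 8.0, 48))[None, :, None]
     + 0.1 * rng.standard_normal((32, 48, 4)))

for rate in (0.1, 0.3, 0.5):
    mask = mcar_mask(x.shape, rate)
    x_obs = np.where(mask, np.nan, x)
    # Baseline: per-variable mean imputation (cross-sectional only).
    mean_fill = np.nanmean(x_obs, axis=(0, 1))
    x_hat = np.where(mask, mean_fill[None, None, :], x_obs)
    print(f"rate={rate:.1f}  mean-imputation RMSE="
          f"{rmse_on_missing(x, x_hat, mask):.3f}")
```

Replacing the baseline with a deep model that imputes jointly across variables
and time gives exactly the kind of comparison the survey performs.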
|
In the easy-plane phase, a ferromagnetic spin-1 Bose-Einstein condensate is
magnetized in a plane transverse to the applied Zeeman field. This phase
supports polar-core spin vortices (PCVs), which consist of phase windings of
transverse magnetization. Here we show that spin-changing collisions cause a
PCV to accelerate down density gradients in an inhomogeneous condensate. The
dynamics is well-described by a simplified model adapted from scalar systems,
which predicts the dependence of the dynamics on trap tightness and quadratic
Zeeman energy. In a harmonic trap, a PCV accelerates radially to the condensate
boundary, in stark contrast to the azimuthal motion of vortices in a scalar
condensate. In a trap that has a local potential maximum at the centre, the PCV
exhibits oscillations around the trap centre, which persist for a remarkably
long time. The oscillations coincide with the emission and reabsorption of
axial spin waves, which reflect off the condensate boundary.
|
Automatically identifying harmful content in video is an important task with
a wide range of applications. However, there is a lack of professionally
labeled open datasets available. In this work VidHarm, an open dataset of 3589
video clips from film trailers annotated by professionals, is presented. An
analysis of the dataset is performed, revealing among other things the relation
between clip and trailer level annotations. Audiovisual models are trained on
the dataset, and an in-depth study of modeling choices is conducted. The results
show that performance is greatly improved by combining the visual and audio
modality, pre-training on large-scale video recognition datasets, and class
balanced sampling. Lastly, biases of the trained models are investigated using
discrimination probing.
VidHarm is openly available, and further details are available at:
https://vidharm.github.io
|
Binary systems of compact objects are strong emitters of gravitational waves,
whose amplitude depends on the binary orbital parameters such as the component
masses, the orbital semi-major axis, and the eccentricity. Here, in addition to
the famous Hulse-Taylor binary system, we have studied the possibility of
detecting the gravitational wave signal emitted by binary systems at the center
of our galaxy. In particular, recent infrared observations of the galactic
center have revealed the existence of a cluster of stars, each of which appears
to orbit the central black hole in $SgrA^*$. For the stars labelled S2 and S14,
we have studied the emitted gravitational wave spectrum and compared it with
the sensitivity threshold of space-based interferometers like LISA and ASTROD.
Furthermore, following recent observations, we have considered the possibility
that $SgrA^*$ is actually a binary system of massive black holes and calculated
the emission spectrum as a function of the system parameters. The diffraction
pattern produced when the gravitational waves emitted by a binary system pass
through a cluster of stars has also been analyzed. We remark that this is only
a preliminary theoretical work that can acquire more interest in view of the
coming gravitational wave astronomy era.
|
Four-dimensional Einstein-Maxwell-Dilaton theory admits asymptotically flat
extremal dyonic solutions for an infinite discrete sequence of coupling
constant values. The quantization condition arises as a consequence of the
regularity of the dilaton function at the event horizon. These dyons satisfy
the no-force
condition and have flat reduced three spaces like true BPS configurations, but
no supersymmetric embeddings are known except for some cases of lower values of
the coupling sequence.
|
The generation of higher harmonics of the magnetoplasmon frequency which has
recently been reported in strongly coupled two-dimensional Yukawa systems is
investigated in detail and, in addition, extended to two-dimensional Coulomb
systems. We observe higher harmonics over a much larger frequency range than
before and compare the theoretical prediction with the simulations. The
influence of the coupling, structure, and thermal energy on the excitation of
these modes is examined in detail. We also report on the effect of friction on
the mode spectra to make predictions about the experimental observability of
this new effect.
|
We first develop some basic facts about certain sorts of rigid local systems
on the affine line in characteristic $p>0$. We then apply them to exhibit a
number of rigid local systems of rank $23$ on the affine line in characteristic
$p=3$ whose arithmetic and geometric monodromy groups are the Conway group
$\mathrm{Co}_3$ in its orthogonal irreducible representation of degree $23$.
|
We have implemented a Monte Carlo algorithm to model and predict the response
of various kinds of CCDs to X-ray photons and minimally-ionizing particles and
have applied this model to the CCDs in the Chandra X-ray Observatory's Advanced
CCD Imaging Spectrometer. This algorithm draws on empirical results and
predicts the response of all basic types of X-ray CCD devices. It relies on new
solutions of the diffusion equation, including recombination, to predict the
radial charge cloud distribution in field-free regions of CCDs. By adjusting
the size of the charge clouds, we can reproduce the event grade distribution
seen in calibration data. Using a model of the channel stops developed here and
an insightful treatment of the insulating layer under the gate structure
developed at MIT, we are able to reproduce all notable features in ACIS
calibration spectra.
The simulator is used to reproduce ground and flight calibration data from
ACIS, thus confirming its fidelity. It can then be used for a variety of
calibration tasks, such as generating spectral response matrices for spectral
fitting of astrophysical sources, quantum efficiency estimation, and modeling
of photon pile-up.
|
To characterize the signals registered by the different types of water
Cherenkov detectors (WCD) used by the Latin American Giant Observatory (LAGO)
Project, it is necessary to develop detailed simulations of the detector
response to the flux of secondary particles at the detector level. These
particles originate during the interaction of cosmic rays with the atmosphere.
In this context, the LAGO project aims to study the high-energy component of
gamma-ray bursts (GRBs) and space weather phenomena by looking for the solar
modulation of galactic cosmic rays (GCRs). With this focus, a complete and
complex chain of simulations is being developed that accounts for geomagnetic
effects, atmospheric response, and detector response at each LAGO site. In this
work we show the first steps of a GEANT4-based simulation of the LAGO WCD, with
emphasis on the effects induced by the detector's internal diffusive coating.
|
In its 16 years of scientific measurements, the Spitzer Space Telescope
performed a number of ground-breaking infrared measurements of Solar System
objects. In this second of two papers, we describe results from Spitzer
observations of asteroids, dust rings, and planets that provide new insight
into the formation and evolution of our Solar System. The key Spitzer results
presented here can be grouped into three broad classes: characterizing the
physical properties of asteroids, notably including a large survey of Near
Earth Objects; detection and characterization of several dust/debris disks in
the Solar System; and comprehensive characterization of ice giant (Uranus,
Neptune) atmospheres. Many of these observations provide critical foundations
for future infrared space-based observations.
|
In multi-microphone speech enhancement, reverberation as well as additive
noise and/or interfering speech are commonly suppressed by deconvolution and
spatial filtering, e.g., using multi-channel linear prediction (MCLP) on the
one hand and beamforming, e.g., a generalized sidelobe canceler (GSC), on the
other hand. In this paper, we consider several reverberant speech components,
whereof some are to be dereverberated and others to be canceled, as well as a
diffuse (e.g., babble) noise component to be suppressed. In order to perform
both deconvolution and spatial filtering, we integrate MCLP and the GSC into a
novel architecture referred to as integrated sidelobe cancellation and linear
prediction (ISCLP), where the sidelobe-cancellation (SC) filter and the linear
prediction (LP) filter operate in parallel, but on different microphone signal
frames. Within ISCLP, we estimate both filters jointly by means of a single
Kalman filter. We further propose a spectral Wiener gain post-processor, which
is shown to relate to the Kalman filter's posterior state estimate. The
presented ISCLP Kalman filter is benchmarked against two state-of-the-art
approaches, namely first a pair of alternating Kalman filters respectively
performing dereverberation and noise reduction, and second an MCLP+GSC Kalman
filter cascade. While the ISCLP Kalman filter is roughly $M^2$ times less
expensive than both reference algorithms, where $M$ denotes the number of
microphones, it is shown to perform similarly as compared to the former, and to
outperform the latter.
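For intuition, here is a generic Kalman filter that jointly tracks a stacked
vector of filter coefficients under a random-walk state model and a linear
observation model. It is a simplified, real-valued sketch of the "single
Kalman filter over both filters" idea, not the ISCLP algorithm itself; in an
ISCLP-like setting the input frame would stack the SC and LP inputs.

```python
import numpy as np

def kalman_step(w, P, x, d, q=1e-4, r=1e-2):
    """One Kalman update for coefficients w with random-walk dynamics.

    Observation model: d = x^T w + noise. q and r are the process and
    measurement noise variances (illustrative values).
    """
    P = P + q * np.eye(len(w))            # time update (random walk)
    x = x.reshape(-1, 1)
    k = P @ x / (x.T @ P @ x + r)         # Kalman gain
    e = d - (x.T @ w).item()              # innovation
    w = w + (k * e).ravel()               # measurement update
    P = P - k @ x.T @ P                   # posterior covariance
    return w, P, e

# Toy usage: track 8 stacked coefficients from noisy scalar observations.
rng = np.random.default_rng(5)
w_true = rng.standard_normal(8)
w, P = np.zeros(8), np.eye(8)
for _ in range(2000):
    x = rng.standard_normal(8)
    d = x @ w_true + 0.1 * rng.standard_normal()
    w, P, _ = kalman_step(w, P, x, d)
print("max abs coefficient error:", np.abs(w - w_true).max())
```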
|
In light of the recent neutrino experimental results from Daya Bay and RENO
Collaborations, we construct a realistic tribimaximal-like
Pontecorvo-Maki-Nakagawa-Sakata (PMNS) leptonic mixing matrix. Motivated by the
Qin-Ma (QM) parametrization for the quark mixing matrix in which the CP-odd
phase is approximately maximal, we propose a simple ansatz for the charged
lepton mixing matrix, namely, it has the QM-like parametrization, and assume
the tribimaximal mixing (TBM) pattern for the neutrino mixing matrix. The
deviation of the leptonic mixing matrix from the TBM one is then systematically
studied. While the deviation of the solar and atmospheric neutrino mixing
angles from the corresponding TBM values, i.e. $\sin^2\theta_{12}=1/3$ and
$\sin^2\theta_{23}=1/2$, is fairly small, we find a nonvanishing reactor mixing
angle given by $\sin\theta_{13}\approx \lambda/\sqrt{2}$ ($\lambda\approx 0.22$
being the Cabibbo angle). Specifically, we obtain $\theta_{13}\simeq
9.2^{\circ}$ and $\delta_{CP} \simeq \delta_{\rm QM} \simeq {\cal
O}(90^{\circ})$. Furthermore, we show that the leptonic CP violation
characterized by the Jarlskog invariant is $|J^{\ell}_{CP}|\simeq \lambda/6$,
which could be tested in future experiments such as upcoming long-baseline
neutrino oscillation ones.
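A quick numerical check of the quoted values (this only evaluates the
approximations stated above):

```python
import numpy as np

lam = 0.22  # Cabibbo angle (Wolfenstein lambda)

theta13 = np.degrees(np.arcsin(lam / np.sqrt(2.0)))
print(f"theta_13 ~ {theta13:.1f} deg")       # ~8.9 deg, near the quoted 9.2 deg
print(f"|J_CP| ~ lambda/6 = {lam / 6:.3f}")  # ~0.037
```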
|
The underdoped region of the cuprate's phase diagram displays many novel
electronic phenomena both in the normal and the superconducting state. Many of
these anomalous properties have found a natural explanation within the
resonating valence bond spin liquid phenomenological model of Yang-Rice-Zhang
(YRZ) which includes the rise of a pseudogap. This leads to Fermi surface
reconstruction and profoundly changes the electronic structure. Here we extend
previous work to consider the shift in critical temperature upon $^{16}$O to
$^{18}$O substitution. The isotope effect has been found experimentally to be
very small at optimal doping yet to rapidly increase to very large values with
underdoping. The YRZ model provides a natural explanation of this behavior and
supports the idea of a pairing mechanism which is mainly spin fluctuations with
a subdominant $(\sim 10\%)$ phonon contribution.
|