Self-organized semiconductor quantum dots represent almost ideal two-level
systems with strong potential for applications in photonic quantum
technologies. For instance, they can act as emitters in close-to-ideal quantum
light sources. Coupled quantum dot systems with significantly increased
functionality are of potentially even stronger interest, since they can be used
to host ultra-stable singlet-triplet spin qubits for efficient spin-photon
interfaces and for deterministic photonic 2D cluster-state generation. We
realize an advanced quantum dot molecule (QDM) device and demonstrate excellent
optical properties. The device includes electrically controllable QDMs based on
stacked quantum dots in a pin-diode structure. The QDMs are deterministically
integrated into a photonic structure with a circular Bragg grating using
in-situ electron beam lithography. We measure a photon extraction efficiency of
up to (24$\pm$4)% in good agreement with numerical simulations. The coupling
character of the QDMs is clearly demonstrated by bias-voltage-dependent
spectroscopy, which also controls the orbital couplings of the QDMs and their
charge state in quantitative agreement with theory. The QDM devices show
excellent single-photon emission properties with a multi-photon suppression of
$g^{(2)}(0) = (3.9 \pm 0.5) \cdot 10^{-3}$. These metrics make the developed
QDM devices attractive building blocks for use in future photonic quantum
networks using advanced nanophotonic hardware.
|
The knowledge engineering bottleneck is still a major challenge in
configurator projects. In this paper we show how recommender systems can
support knowledge base development and maintenance processes. We discuss a
couple of scenarios for the application of recommender systems in knowledge
engineering and report the results of empirical studies which show the
importance of user-centered configuration knowledge organization.
|
Gaussian processes are probabilistic models that are commonly used as
functional priors in machine learning. Due to their probabilistic nature, they
can be used to capture prior information on the statistics of the noise, the
smoothness of the functions, and the uncertainty of the training data. However, their
computational complexity quickly becomes intractable as the size of the data
set grows. We propose a Hilbert space approximation-based quantum algorithm for
Gaussian process regression to overcome this limitation. Our method combines a
classical basis function expansion with the quantum computing techniques of
quantum principal component analysis, conditional rotations, and
Hadamard and Swap tests. The quantum principal component analysis is used to
estimate the eigenvalues while the conditional rotations and the Hadamard and
Swap tests are employed to evaluate the posterior mean and variance of the
Gaussian process. Our method provides polynomial computational complexity
reduction over the classical method.
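As context for the classical component, a minimal sketch of the Hilbert-space (basis-function) GP approximation is given below; the quantum routines described above would then take over eigenvalue estimation and the evaluation of the posterior mean and variance. The RBF kernel, the domain $[-L, L]$, and all parameter names are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a reduced-rank (Hilbert-space) GP regression; assumptions:
# RBF kernel, Dirichlet Laplacian eigenbasis on [-L, L], illustrative parameters.
import numpy as np

def hilbert_gp_posterior(x, y, x_star, m=32, L=5.0, ell=1.0, sf=1.0, sn=0.1):
    """Reduced-rank GP regression using m Laplacian eigenfunctions on [-L, L]."""
    j = np.arange(1, m + 1)
    lam = (np.pi * j / (2 * L)) ** 2                       # Laplacian eigenvalues
    S = sf**2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * ell**2 * lam)  # RBF spectral density

    def phi(t):                                            # Laplacian eigenfunctions
        return np.sin(np.pi * np.outer(t + L, j) / (2 * L)) / np.sqrt(L)

    Phi, Phi_s = phi(x), phi(x_star)
    A = Phi.T @ Phi + sn**2 * np.diag(1.0 / S)             # m x m system instead of n x n
    mean = Phi_s @ np.linalg.solve(A, Phi.T @ y)
    var = sn**2 * np.einsum('ij,ji->i', Phi_s, np.linalg.solve(A, Phi_s.T))
    return mean, var

x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)
mu, var = hilbert_gp_posterior(x, y, np.linspace(-3, 3, 50))
```

Because the linear system solved is only of size m x m, the classical cost already drops from cubic in the number of data points to cubic in the number of basis functions; the quantum subroutines act on this reduced representation.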
|
Despite being relevant to a better understanding of the properties of
honeycomb-like systems, such as graphene-based compounds, the electron-phonon
interaction is commonly disregarded in theoretical approaches. That is, the
effects of phonon fields on \textit{interacting} Dirac electrons are an open
issue, in particular when investigating long-range ordering. Thus, here we
perform unbiased quantum Monte Carlo simulations to examine the
Hubbard-Holstein model (HHM) on the half-filled honeycomb lattice. By
performing careful finite-size scaling
analysis, we identify semimetal-to-insulator quantum critical points, and
determine the behavior of the antiferromagnetic and charge-density wave phase
transitions. We have, therefore, established the ground state phase diagram of
the HHM for intermediate interaction strength, determining its behavior for
different phonon frequencies. Our findings represent a complete description of
the model, and may shed light on the emergence of many-body properties in
honeycomb-like systems.
|
Continual learning refers to a dynamical framework in which a model receives
a stream of non-stationary data over time and must adapt to new data while
preserving previously acquired knowledge. Unfortunately, neural networks fail to
meet these two desiderata, incurring the so-called catastrophic forgetting
phenomenon. Whereas a vast array of strategies has been proposed to attenuate
forgetting in the computer vision domain, there is a dearth of work on
speech-related tasks. In this paper, we consider the joint
use of rehearsal and knowledge distillation (KD) approaches for spoken language
understanding under a class-incremental learning scenario. We report on
multiple KD combinations at different levels in the network, showing that
combining feature-level and predictions-level KDs leads to the best results.
Finally, we provide an ablation study on the effect of the size of the
rehearsal memory that corroborates the efficacy of our approach for
low-resource devices.
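A minimal sketch of the kind of loss combination described above (feature-level plus predictions-level KD on a batch that mixes new-task samples with rehearsal samples); the layer choice, temperature, and loss weights are assumptions, not the paper's configuration.

```python
# Hedged sketch: combine cross-entropy with predictions-level KD (teacher logits)
# and feature-level KD (teacher intermediate features); weights are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                      labels, T=2.0, alpha=0.5, beta=0.5):
    ce = F.cross_entropy(student_logits, labels)                    # current-task loss
    kd_pred = F.kl_div(F.log_softmax(student_logits / T, dim=-1),   # predictions-level KD
                       F.softmax(teacher_logits / T, dim=-1),
                       reduction="batchmean") * T * T
    kd_feat = F.mse_loss(student_feat, teacher_feat)                # feature-level KD
    return ce + alpha * kd_pred + beta * kd_feat
```

In a rehearsal setting, the batch passed to this loss would contain both samples of the new classes and samples drawn from the rehearsal memory, so the teacher (the model frozen after the previous task) constrains the student on old classes while the cross-entropy term fits the new ones.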
|
Some new directions to lay a rigorous mathematical foundation for the
phase-portrait-based modelling of fingerprints are discussed in the present
work. Couched in the language of dynamical systems, and preparing for a
preliminary modelling, a back-to-basics analogy between Poincar\'{e}'s
categories of equilibria of planar differential systems and the basic
fingerprint singularities according to Purkyn\v{e}-Galton's standards is first
investigated. Then, the problem of the global representation of a fingerprint's
flow-like pattern as a smooth deformation of the phase portrait of a
differential system is addressed. Unlike visualisation in fluid dynamics, where
similarity between integral curves of smooth vector fields and flow streamline
patterns is eye-catching, the case of an oriented texture like a fingerprint's
stream of ridges proves to be a hard problem since, on the one hand, not all
fingerprint singularities and nearby orientational behaviour can be modelled by
canonical phase portraits on the plane, and on the other hand, even if it were
the case, this would lead to a perplexing geometrical problem of connecting
local phase portraits, a question which will be formulated within
Poincar\'{e}'s index theory and addressed via a normal form approach as a
bivariate Hermite interpolation problem. To a certain extent, the material
presented herein is self-contained and provides a baseline for future work
where, starting from a normal form as a source image, a transport via large
deformation flows is envisaged to match the fingerprint as a target image.
|
Adversarial attacks on deep neural networks have been intensively studied for
image, audio, natural language, patch, and pixel classification tasks.
Nevertheless, adversarial attacks on online video object tracking, a typical
yet important real-world application that traces an object's moving trajectory
instead of its category, are rarely explored. In this paper, we identify a new
task for adversarial attacks on visual tracking: online generation of
imperceptible perturbations that mislead trackers along an incorrect
(Untargeted Attack, UA) or a specified (Targeted Attack, TA) trajectory. To this
end, we first propose a \textit{spatial-aware} basic attack by adapting
existing attack methods, i.e., FGSM, BIM, and C&W, and comprehensively analyze
the attacking performance. We identify that online object tracking poses two
new challenges: 1) it is difficult to generate imperceptible perturbations that
can transfer across frames, and 2) real-time trackers require the attack to
satisfy a certain level of efficiency. To address these challenges, we further
propose the spatial-aware online incremental attack (a.k.a. SPARK) that
performs spatial-temporal sparse incremental perturbations online and makes the
adversarial attack less perceptible. In addition, as an optimization-based
method, SPARK quickly converges to very small losses within several iterations
by considering historical incremental perturbations, making it much more
efficient than basic attacks. The in-depth evaluation on state-of-the-art
trackers (i.e., SiamRPN++ with AlexNet, MobileNetv2, and ResNet-50, and SiamDW)
on OTB100, VOT2018, UAV123, and LaSOT demonstrates the effectiveness and
transferability of SPARK in misleading the trackers under both UA and TA with
minor perturbations.
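A rough, hedged skeleton of an online incremental attack in the spirit described above, where each frame reuses the perturbation accumulated on previous frames and only adds a small bounded increment; the gradient interface grad_fn and the budget values are hypothetical placeholders, and this is not the SPARK algorithm itself.

```python
# Generic online incremental-attack skeleton (assumed interface, not SPARK itself):
# the per-frame work is a single small update on top of the accumulated perturbation.
import numpy as np

def online_incremental_attack(frames, grad_fn, eps_inc=1.0, eps_total=8.0):
    """frames: iterable of HxWxC arrays; grad_fn(frame, pert) -> dLoss/dpert."""
    pert = None
    for frame in frames:
        if pert is None:
            pert = np.zeros_like(frame, dtype=np.float32)
        g = grad_fn(frame, pert)                              # gradient of the attack loss
        pert = np.clip(pert + eps_inc * np.sign(g),           # small per-frame increment
                       -eps_total, eps_total)                 # bounded accumulation
        yield np.clip(frame + pert, 0, 255)                   # adversarial frame
```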
|
Discontinuous phase transitions are particularly interesting from a
social point of view because of their relationship to social hysteresis and
critical mass. In this paper, we show that the replacement of a time-varying
(annealed, situation-based) disorder by a static (quenched, personality-based)
one can lead to a change from a continuous to a discontinuous phase transition.
This result goes beyond the state of the art, because so far numerous studies
on various complex systems (physical, biological, and social) have indicated
that quenched disorder can round off or destroy a discontinuous
phase transition. To show the possibility of the opposite behavior, we study a
multistate $q$-voter model, with two types of disorder related to random
competing interactions (conformity and anticonformity). We confirm, both
analytically and through Monte Carlo simulations, that indeed discontinuous
phase transitions can be induced by a static disorder.
|
Quantum mechanics dictates bounds for the minimal evolution time between
predetermined initial and final states. Several of these Quantum Speed Limit
(QSL) bounds were derived for non-unitary dynamics using different approaches.
Here, we perform a systematic analysis of the most common QSL bounds in the
damped Jaynes-Cummings model, covering the Markovian and non-Markovian regimes.
We show that only one of the analysed bounds adheres to the essence of the QSL
theory outlined in the pioneering works of Mandelstam \& Tamm and Margolus \&
Levitin in the context of unitary evolutions. We also show that all of the QSL
bounds analysed reflect the fact that, in our model, non-Markovian effects speed
up the quantum evolution. However, it is not possible to infer the Markovian or
non-Markovian character of the dynamics by analysing the QSL bounds alone.
|
Korean is a morphologically rich language. Korean verbs change their forms in
a fickle manner depending on tense, mood, speech level, meaning, etc.
Therefore, it is challenging to construct comprehensive conjugation paradigms
of Korean verbs. In this paper we introduce a Korean (verb) conjugation
paradigm generator, dubbed KoParadigm. To the best of our knowledge, it is the
first Korean conjugation module that covers all contemporary Korean verbs and
endings. KoParadigm is not only linguistically well established, but also
computationally simple and efficient. We share it via PyPI.
|
We consider conformal defects with spins under the rotation group acting on
the transverse directions. They are described in the embedding space formalism
in a similar manner to spinning local operators, and their correlation
functions with bulk and defect local operators are determined by the conformal
symmetry. The operator product expansion (OPE) structure of spinning conformal
defects is examined by decomposing it into the spinning defect OPE block that
packages all the contributions from a conformal multiplet. The integral
representation of the block derived in the shadow formalism is used to
deduce recursion relations for correlation functions of two spinning conformal
defects. In simple cases, we construct spinning defect correlators by acting
with differential operators recursively on scalar defect correlators.
|
Possible conformations of the thioderivatives of pentacene (Pn) have been
considered. The absorption spectra of polythiopentacene (PTPn) solutions and
films have been studied. PTPn is revealed to be a mixture of Pn thioderivatives
with different numbers of S atoms. After this mixture has been condensed in
vacuum onto quartz substrates, its main components are tetrathiopentacene
(TTPn) and hexathiopentacene (HTPn). The position of the maximum in the
long-wave absorption bands of Pn thioderivatives is a linear function of the
number of valence electrons in S atoms, which take part in the conjugation with
the $\pi$-system of the pentacene frame of PTPn molecules. The analysis of the
photocurrent and capacitor photovoltage (CPV) spectra in the range of the first
electron transitions in PTPn has shown that the photoconductivity is of the
hole type and is caused by the dissociation of excitons at the electron capture
centers. The frontal CPV is caused by the Dember photovoltage ($\phi_\mathrm{D}$), and
the back one by the surface-barrier photovoltage ($\phi_\mathrm{b}$).
|
This is an expository account of the proof of the theorem of Bourgain,
Glibichuk and Konyagin which provides non-trivial bounds for exponential sums
over very small multiplicative subgroups of prime finite fields.
|
In this paper, the strong formulation of the generalised Navier-Stokes
momentum equation is investigated. Specifically, the formulation of
shear-stress divergence is investigated, due to its effect on the performance
and accuracy of computational methods. It is found that the term may be
expressed in two different ways. While the first formulation is commonly used,
the alternative derivation is found to be potentially more convenient for
direct numerical manipulation. The alternative formulation relocates part of
the strain information under the variable-coefficient Laplacian operator, thus
making future computational schemes potentially simpler with larger time-step
sizes.
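As a hedged illustration of the kind of rearrangement involved (stated for incompressible flow with variable viscosity $\mu$, and not necessarily the paper's exact alternative form), the transpose part of the shear-stress divergence reduces to a viscosity-gradient term,
\[
\partial_j\!\big[\mu\,(\partial_j u_i + \partial_i u_j)\big]
  \;=\; \partial_j\big(\mu\,\partial_j u_i\big) \;+\; (\partial_j\mu)\,(\partial_i u_j),
  \qquad \partial_j u_j = 0 ,
\]
so that most of the strain information sits under a variable-coefficient Laplacian-type operator, which is what can make a direct numerical treatment more convenient.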
|
Despite the revolutionary impact of AI and the development of locally trained
algorithms, achieving widespread generalized learning from multi-modal data in
medical AI remains a significant challenge. This gap hinders the practical
deployment of scalable medical AI solutions. Addressing this challenge, our
research contributes a self-supervised robust machine learning framework,
OCT-SelfNet, for detecting eye diseases using optical coherence tomography
(OCT) images. In this work, datasets from multiple institutions are combined,
enabling a more comprehensive range of representations. Our method addresses
the issue with a two-phase training approach that combines self-supervised
pretraining and supervised fine-tuning with a masked autoencoder based on the
SwinV2 backbone, providing a solution for real-world clinical deployment.
Extensive experiments on three datasets with different encoder backbones, low
data settings, unseen data settings, and the effect of augmentation show that
our method outperforms the baseline model, ResNet-50, by consistently attaining
AUC-ROC performance above 77% across all tests, whereas the baseline exceeds
only 54%. Moreover, in terms of the AUC-PR metric, our proposed method exceeds
42%, a substantial gain of at least 10% over the baseline, which exceeds only 33%.
This contributes to our understanding of our approach's potential and
emphasizes its usefulness in clinical settings.
|
Manifolds with boundary, with corners, $b$-manifolds and foliations model
configuration spaces for particles moving under constraints and can be
described as $E$-manifolds. $E$-manifolds were introduced in [NT01] and
investigated in depth in [MS20]. In this article we explore their physical
facets by extending gauge theories to the $E$-category. Singularities in the
configuration space of a classical particle can be described in several new
scenarios unveiling their Hamiltonian aspects on an $E$-symplectic manifold.
Following the scheme inaugurated in [Wei78], we show the existence of a
universal model for a particle interacting with an $E$-gauge field. In
addition, we generalize the description of phase spaces in Yang-Mills theory as
Poisson manifolds and their minimal coupling procedure, as shown in [Mon86],
for base manifolds endowed with an $E$-structure. In particular, the reduction
at coadjoint orbits and the shifting trick are extended to this framework. We
show that Wong's equations, which describe the interaction of a particle with a
Yang-Mills field, become Hamiltonian in the $E$-setting. We formulate the
electromagnetic gauge in Minkowski space, relating it to the proper-time
foliation, and we see that our main theorem describes the minimal coupling in
physical models such as the compactified black hole.
|
Graph-based methods, pivotal for label inference over interconnected objects
in many real-world applications, often encounter generalization challenges if
the graph used for model training differs significantly from the graph used for
testing. This work delves into Graph Domain Adaptation (GDA) to address the
unique complexities of distribution shifts over graph data, where
interconnected data points experience shifts in features, labels, and in
particular, connecting patterns. We propose a novel, theoretically principled
method, Pairwise Alignment (Pair-Align) to counter graph structure shift by
mitigating conditional structure shift (CSS) and label shift (LS). Pair-Align
uses edge weights to recalibrate the influence among neighboring nodes to
handle CSS and adjusts the classification loss with label weights to handle LS.
Our method demonstrates superior performance in real-world applications,
including node classification with region shift in social networks and the
pileup mitigation task in particle collider experiments. For the first
application, we also curate the largest dataset for GDA studies to date. Our
method shows strong performance in synthetic and other existing benchmark
datasets.
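A minimal sketch of the two ingredients named above, assuming the edge weights and label weights have already been estimated; gamma_edge and w_label are hypothetical names, and this is not the authors' implementation.

```python
# Hedged sketch: edge-reweighted neighborhood aggregation (for conditional structure
# shift) and a label-weighted classification loss (for label shift).
import torch
import torch.nn.functional as F

def reweighted_aggregate(h, edge_index, gamma_edge):
    """h: [N, d] node features; edge_index: [2, E]; gamma_edge: [E] edge weights."""
    src, dst = edge_index
    out = torch.zeros_like(h)
    out.index_add_(0, dst, gamma_edge.unsqueeze(-1) * h[src])   # weighted message passing
    return out

def label_weighted_loss(logits, labels, w_label):
    """w_label: [num_classes] class weights correcting for label shift."""
    return F.cross_entropy(logits, labels, weight=w_label)

# Tiny usage example on a 4-node ring graph with unit edge weights.
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
agg = reweighted_aggregate(h, edge_index, torch.ones(4))
```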
|
We study existence of percolation in the hierarchical group of order $N$,
which is an ultrametric space, and transience and recurrence of random walks on
the percolation clusters. The connection probability on the hierarchical group
for two points separated by distance $k$ is of the form $c_k/N^{k(1+\delta)},
\delta>-1$, with $c_k=C_0+C_1\log k+C_2k^\alpha$, non-negative constants $C_0,
C_1, C_2$, and $\alpha>0$. Percolation was proved in Dawson and Gorostiza
(2013) for $\delta<1$, and for the critical case, $\delta=1,C_2>0$, with
$\alpha>2$. In this paper we improve the result for the critical case by
showing percolation for $\alpha>0$. We use a renormalization method of the type
used in the previous paper, in a new way that is more intrinsic to the model. The
proof involves ultrametric random graphs (described in the Introduction). The
results for simple (nearest neighbour) random walks on the percolation clusters
are: in the case $\delta<1$ the walk is transient, and in the critical case
$\delta=1, C_2>0,\alpha>0$, there exists a critical $\alpha_c\in(0,\infty)$
such that the walk is recurrent for $\alpha<\alpha_c$ and transient for
$\alpha>\alpha_c$. The proofs involve graph diameters, path lengths, and
electric circuit theory. Some comparisons are made with behaviours of random
walks on long-range percolation clusters in the one-dimensional Euclidean
lattice.
|
In Becker and Jentzen (2019) and Becker et al. (2017), an explicit temporal
semi-discretization scheme and a space-time full-discretization scheme were,
respectively, introduced and analyzed for the additive noise-driven stochastic
Allen-Cahn type equations, with strong convergence rates recovered. The present
work aims to propose a different explicit full-discrete scheme to numerically
solve the stochastic Allen-Cahn equation with cubic nonlinearity, perturbed by
additive space-time white noise. The approximation is easily implementable,
performing the spatial discretization by a spectral Galerkin method and the
temporal discretization by a kind of nonlinearity-tamed accelerated exponential
integrator scheme. Error bounds in a strong sense are analyzed for both the
spatial semi-discretization and the spatio-temporal full discretization, with
convergence rates in both space and time explicitly identified. It turns out
that the obtained convergence rate of the new scheme is, in the temporal
direction, twice as high as existing ones in the literature. Numerical results
are finally reported to confirm the previous theoretical findings.
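For illustration only, one schematic form that a nonlinearity-tamed accelerated exponential integrator can take for an abstract equation $\mathrm{d}X = [AX + F(X)]\,\mathrm{d}t + \mathrm{d}W$ is sketched below; this is an assumption-laden sketch, not necessarily the scheme analyzed in the paper:
\[
X_{n+1} \;=\; E(\tau)\,X_n
\;+\; \Big(\int_{t_n}^{t_{n+1}} E(t_{n+1}-s)\,\mathrm{d}s\Big)\,
      \frac{F(X_n)}{1+\tau\,\|F(X_n)\|}
\;+\; \int_{t_n}^{t_{n+1}} E(t_{n+1}-s)\,\mathrm{d}W(s),
\]
where $E(t)=e^{tA}$ is the semigroup of the linear part, $\tau$ is the time step, the taming factor controls the cubic nonlinearity, and the stochastic convolution can be sampled exactly in the spectral Galerkin basis.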
|
Both antenna selection and spatial modulation allow for low-complexity MIMO
transmitters when the number of RF chains is much lower than the number of
transmit antennas. In this manuscript, we present a quantitative performance
comparison between these two approaches, taking into account implementation
restrictions such as antenna switching. We consider a band-limited MIMO system
for which the pulse shape is designed such that the out-of-band emission
satisfies a desired spectral mask. The bit error rate is determined for this
system, considering antenna selection and spatial modulation. The results show
that for any array size at the transmit and receive sides, antenna selection
outperforms spatial modulation as long as the power efficiency is smaller than
a certain threshold level. Beyond this threshold, spatial modulation starts to
perform better. Our investigations show that the threshold takes smaller
values, as the number of receive antennas grows large. This indicates that
spatial modulation is an effective technique for uplink transmission in massive
MIMO systems.
|
Recently, Generative Adversarial Networks (GANs) have found wide
applications in style transfer, image-to-image translation, and image
super-resolution. In this paper, a color-depth conditional GAN is proposed to
concurrently resolve the problems of depth super-resolution and color
super-resolution in 3D videos. First, given the low-resolution depth image
and the low-resolution color image, a generative network is proposed that
leverages the mutual information of the color and depth images to enhance each
other, taking into account the geometric structural dependency of the color
and depth images of the same scene. Second, three loss functions (data loss,
total-variation loss, and an 8-connected gradient-difference loss) are
introduced, in addition to the adversarial loss, to train this generative
network so that the generated images stay close to the real ones. Experimental
results demonstrate that the proposed approach produces high-quality color and
depth images from a low-quality image pair, and that it is superior to several
other leading methods. In addition, we use the same neural network framework
to address image smoothing and edge detection simultaneously.
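A hedged sketch of how the four losses named above can be combined for the generator; the exact 8-connected gradient-difference formulation and the weighting factors are illustrative assumptions, not the paper's values.

```python
# Sketch of a generator objective: data (L1) + total-variation + gradient-difference
# + adversarial loss; lambda_* weights are illustrative placeholders.
import torch
import torch.nn.functional as F

def tv_loss(x):
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def grad_diff_loss_8(pred, target):
    # 4 of the 8 neighbour directions; opposite shifts give the same pairs under roll.
    loss = 0.0
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        gp = pred - torch.roll(pred, shifts=(dy, dx), dims=(-2, -1))
        gt = target - torch.roll(target, shifts=(dy, dx), dims=(-2, -1))
        loss = loss + (gp - gt).abs().mean()
    return loss

def generator_loss(pred, target, d_fake, l_tv=1e-4, l_gdl=1.0, l_adv=1e-3):
    data = F.l1_loss(pred, target)
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return data + l_tv * tv_loss(pred) + l_gdl * grad_diff_loss_8(pred, target) + l_adv * adv
```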
|
We study the shadowing effect in highly asymmetric diffractive interactions
of left-handed and right-handed W-bosons with atomic nuclei. The target nucleus
is found to be quite transparent for the charmed-strange Fock component of the
light-cone W^+ in the helicity state \lambda=+1 and rather opaque for the c\bar
s dipole with \lambda=-1. The shadowing correction to the structure function
\Delta xF_3 = xF_3^{\nu N}-xF_3^{\bar\nu N} extracted from \nu Fe and \bar\nu
Fe data is shown to make up about 20% in the kinematical range of CCFR/NuTeV.
|
This note shows that the matrix forms of several one-parameter distribution
families satisfy a hierarchical low-rank structure. Such families of
distributions include binomial, Poisson, and $\chi^2$ distributions. The proof
is based on a uniform relative bound of a related divergence function.
Numerical results are provided to confirm the theoretical findings.
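A small numerical illustration in the spirit of the note, assuming the "matrix form" is the matrix of pmf values of a binomial family over a grid of parameter values; the block choice and sizes are arbitrary.

```python
# Check the numerical rank of an off-diagonal block of A[i, j] = P(X = i) for a
# binomial distribution with success probability p_j; fast singular-value decay
# is consistent with a hierarchical low-rank structure.
import numpy as np
from scipy.stats import binom

n = 256
p = np.linspace(0.01, 0.99, n)
k = np.arange(n + 1)
A = binom.pmf(k[:, None], n, p[None, :])      # (n+1) x n matrix of pmf values

block = A[:128, 128:]                          # an off-diagonal block
s = np.linalg.svd(block, compute_uv=False)
print(s[:8] / s[0])                            # rapid decay => low numerical rank
```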
|
We study a model of a Spin-Peierls material consisting of a set of
antiferromagnetic Heisenberg chains coupled with phonons and interacting with
one another via an inter-chain elastic coupling. The excitation spectrum is analyzed
by bosonization techniques and the self-harmonic approximation. The elementary
excitation is the creation of a localized domain structure where the dimerized
order is opposite to that of the surroundings. It is a triplet
excitation whose formation energy is smaller than the magnon gap. Magnetic
internal excitations of the domain are possible and give the further
excitations of the system. We discuss these results in the context of recent
experimental measurements on the inorganic Spin-Peierls compound CuGeO$_3$.
|
A class of (possibly) degenerate stochastic integro-differential equations of
parabolic type is considered, which includes the Zakai equation in nonlinear
filtering for jump diffusions. Existence and uniqueness of the solutions are
established in Bessel potential spaces.
|
Several algorithms have been designed to convert a regular expression into an
equivalent finite automaton. One of the most popular constructions, due to
Glushkov and to McNaughton and Yamada, is based on the computation of the Null,
First, Last and Follow sets (called Glushkov functions) associated with a
linearized version of the expression. Recently Mignot considered a family of
extended expressions called Extended to multi-tilde-bar Regular Expressions
(EmtbREs) and he showed that, under some restrictions, Glushkov functions can
be defined for an EmtbRE. In this paper we present an algorithm which
efficiently computes the Glushkov functions of an unrestricted EmtbRE. Our
approach is based on a recursive definition of the language associated with an
EmtbRE, which highlights the fact that the worst-case time complexity of the
conversion of an EmtbRE into an automaton is related to the worst-case time
complexity of the computation of the Null function. Finally, we show how to
extend the ZPC-structure to EmtbREs, which allows us to apply to this family of
extended expressions the efficient constructions based on this structure (in
particular the construction of the c-continuation automaton, the position
automaton, the follow automaton and the equation automaton).
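For reference, a compact computation of the classical Glushkov functions (Null, First, Last, Follow) for an ordinary linearized regular expression given as an AST; the multi-tilde-bar extension (EmtbRE) treated in the paper is not handled by this sketch.

```python
# Glushkov functions for a linearized regex AST with nodes
# ('sym', position), ('cat', e1, e2), ('alt', e1, e2), ('star', e).
def glushkov(ast):
    follow = {}

    def visit(node):
        kind = node[0]
        if kind == 'sym':
            p = node[1]
            follow.setdefault(p, set())
            return False, {p}, {p}                         # (Null, First, Last)
        if kind == 'star':
            null, first, last = visit(node[1])
            for q in last:
                follow[q] |= first                         # loop back to First
            return True, first, last
        left, right = visit(node[1]), visit(node[2])
        if kind == 'alt':
            return left[0] or right[0], left[1] | right[1], left[2] | right[2]
        if kind == 'cat':
            for q in left[2]:
                follow[q] |= right[1]                      # Last(e1) feeds First(e2)
            null = left[0] and right[0]
            first = left[1] | (right[1] if left[0] else set())
            last = right[2] | (left[2] if right[0] else set())
            return null, first, last
        raise ValueError(kind)

    null, first, last = visit(ast)
    return null, first, last, follow

# (a|b)* a  with positions 1, 2, 3:
print(glushkov(('cat', ('star', ('alt', ('sym', 1), ('sym', 2))), ('sym', 3))))
```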
|
Pursuit-evasion is the problem of capturing mobile targets with one or more
pursuers. We use deep reinforcement learning for pursuing an omni-directional
target with multiple, homogeneous agents that are subject to unicycle kinematic
constraints. We use shared experience to train a policy for a given number of
pursuers that is executed independently by each agent at run-time. The training
benefits from curriculum learning, from a sweeping-angle ordering used to locally
represent neighboring agents, and from a reward structure that encourages good
formations by combining individual and group rewards. Simulated experiments
with a reactive evader and up to eight pursuers show that our learning-based
approach, with non-holonomic agents, performs on par with classical algorithms
with omni-directional agents, and outperforms their non-holonomic adaptations.
The learned policy is successfully transferred to the real world in a
proof-of-concept demonstration with three motion-constrained pursuer drones.
|
Most conventional camera calibration algorithms assume that the imaging
device has a Single Viewpoint (SVP). This is not necessarily true for special
imaging devices such as fisheye lenses. As a consequence, the intrinsic camera
calibration result is not always reliable. In this paper, we propose a new
formation model that relaxes this assumption so that a Non-Single
Viewpoint (NSVP) system is corrected to always maintain an SVP, by taking into
account the variation of the Entrance Pupil (EP) using thin lens modeling. In
addition, we present a calibration procedure for the image formation model to
estimate these EP parameters using a non-linear optimization procedure with
bundle adjustment. In experiments, we obtain a slightly lower
re-projection error than traditional methods, and the camera parameters are
better estimated. The proposed calibration procedure is simple and can easily
be integrated into any other thin-lens image formation model.
|
We experimentally observe lasing in a hexamer plasmonic lattice and find that
when tuning the scale of the unit cell, the polarization winding of the
emission changes. By a theoretical analysis, we identify the lasing modes as
quasi-bound states in the continuum (quasi-BICs) with topological charges of zero,
one, or two. A T-matrix simulation of the structure reveals that the mode
quality (Q) factors depend on the scale of the unit cell, with the highest-Q modes
favored by lasing. The system thus shows a loss-driven transition between
lasing in modes of trivial and high-order topological charge.
|
The paper provides new upper and lower bounds for the multivariate Laplace
approximation under weak local assumptions. Their range of validity is also
given. An application to an integral arising in the extension of Dixon's
identity is presented. The paper both generalizes and complements recent
results by Inglot and Majerski and removes their superfluous assumption on the
vanishing of the third-order partial derivatives of the exponent function.
|
In traditional priority queues, we assume that every customer upon arrival
has a fixed, class-dependent priority, and that a customer may not commence
service if a customer with a higher priority is present in the queue. However,
in situations where a performance target in terms of the tails of the
class-dependent waiting time distributions has to be met, such models of
priority queueing may not be satisfactory. In fact, there could be situations
where high priority classes easily meet their performance target for the
maximum waiting time, while lower classes do not.
Here, we are interested in the stationary distribution of the maximum priority
process at the times of commencement of service. Until now, there has been no
explicit expression for this distribution. We construct a mapping of the
maximum priority process to a tandem fluid queue, which enables us to derive
explicit expressions for this stationary distribution at the times of
commencement of service.
|
Large amounts of deep optical images will be available in the near future,
allowing statistically significant studies of low surface brightness structures
such as intracluster light (ICL) in galaxy clusters. The detection of these
structures requires efficient algorithms dedicated to this task, where
traditional methods struggle. We present our new Detection Algorithm
with Wavelets for Intracluster light Studies (DAWIS), developed and optimised
for the detection of low surface brightness sources in images, in particular
(but not limited to) ICL. DAWIS follows a multiresolution vision based on
wavelet representation to detect sources, embedded in an iterative procedure
called the synthesis-by-analysis approach, to restore the complete unmasked light
distribution of these sources with very good quality. The algorithm is built so
that sources can be classified based on criteria depending on the analysis goal; we
display in this work the case of ICL detection and the measurement of ICL
fractions. We test the efficiency of DAWIS on 270 mock images of galaxy
clusters with various ICL profiles and compare its efficiency to more
traditional ICL detection methods such as the surface brightness threshold
method. We also run DAWIS on a real galaxy cluster image, and compare the
output to results obtained with previous multiscale analysis algorithms. We
find in simulations that, on average, DAWIS is able to disentangle galaxy light
from ICL more efficiently, and to detect a greater quantity of ICL flux, owing to
the way it handles sky background noise. We also show that the ICL fraction, a
metric used on a regular basis to characterise ICL, is subject to several
measurement biases both on galaxies and ICL fluxes. In the real galaxy cluster
image, DAWIS detects a faint and extended source with an absolute magnitude two
orders brighter than that found by previous multiscale methods.
|
Communication complexity and privacy are the two key challenges in Federated
Learning, where the goal is to perform distributed learning across a large
number of devices. In this work, we introduce FedSKETCH and FedSKETCHGATE
algorithms to address both challenges in Federated learning jointly, where
these algorithms are intended to be used for homogeneous and heterogeneous data
distribution settings respectively. The key idea is to compress the
accumulation of local gradients using count sketch, therefore, the server does
not have access to the gradients themselves which provides privacy.
Furthermore, due to the lower dimension of sketching used, our method exhibits
communication-efficiency property as well. We provide sharp convergence
guarantees for the aforementioned schemes.
Finally, we back up our theory with a varied set of experiments.
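A minimal count-sketch sketch of the compression idea described above (the server only ever sees the sketch, not the gradient); the width, depth, and median-based unsketching rule are illustrative choices, not the paper's exact construction.

```python
# Count-sketch compression of a gradient vector: hash each coordinate into a
# bucket with a random sign, send the small sketch, and unsketch by a median
# over the independent sketch rows.
import numpy as np

def make_sketch(d, width, depth, seed=0):
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, width, size=(depth, d))    # bucket index per coordinate
    signs = rng.choice([-1.0, 1.0], size=(depth, d))     # random sign per coordinate
    return buckets, signs

def compress(g, buckets, signs, width):
    S = np.zeros((buckets.shape[0], width))
    for r in range(buckets.shape[0]):
        np.add.at(S[r], buckets[r], signs[r] * g)         # scatter-add into buckets
    return S

def decompress(S, buckets, signs):
    rows = [signs[r] * S[r, buckets[r]] for r in range(S.shape[0])]
    return np.median(np.stack(rows), axis=0)              # robust per-coordinate estimate

d, width, depth = 10_000, 256, 5
buckets, signs = make_sketch(d, width, depth)
g = np.random.randn(d)
g_hat = decompress(compress(g, buckets, signs, width), buckets, signs)
```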
|
We discuss Israel layers collapsing inward from rest at infinity along
Schwarzschild-Lemaitre geodesics. The dynamics of the collapsing layer and its
equation of state are developed. There is a general equation of state which is
approximately polytropic in the limit of very low pressure. The equation of
state establishes a new limit on the stress-density ratio.
|
Electron transmission through different gated and gapped graphene
superlattices (GSLs) is studied. Linear, Gaussian, Lorentzian and
P\"oschl-Teller superlattice potential profiles have been assessed. A
relativistic description of electrons in graphene as well as the transfer
matrix method have been used to obtain the transmission properties. We find
that it is not possible to have perfect or nearly perfect pass bands in gated
GSLs. Regardless of the potential profile and the number of barriers, there are
remnant oscillations in the transmission bands. On the contrary, nearly
perfect pass bands are obtained for gapped GSLs. The Gaussian profile is the
best option when the number of barriers is small, and there is practically no
difference among the profiles for a large number of barriers. We also find that
both gated and gapped GSLs can work as omnidirectional band-pass filters. In
the case of gated Gaussian GSLs the omnidirectional range goes from
-$50^{\circ}$ to $50^{\circ}$ with an energy bandwidth of 55 meV, while for
gapped Gaussian GSLs the range goes from -$80^{\circ}$ to $80^{\circ}$ with a
bandwidth of 40 meV. Here, it is important that the energy range does not
include remnant oscillations. In the light of these results, the hole states
inside the barriers of gated GSLs are not beneficial for band-pass filtering.
So, the flatness of the pass bands is determined by the superlattice potential
profile and the chiral nature of the charge carriers in graphene. Moreover, the
width and the number of electron pass bands can be modulated through the
superlattice structural parameters. We consider that our findings can be useful
to design electron filters based on non-conventional GSLs.
|
Analogous to the Spin-Hall Effect (SHE), {\it ab initio} electronic structure
calculations reveal that acoustic phonons can induce charge (spin) currents
flowing along (normal to) their propagation direction. Using a Floquet approach,
we have calculated the elastodynamically induced charge and spin pumping in bulk
Pt and demonstrate that: (i) while the longitudinal charge pumping is an
intrinsic observable, the transverse pumped spin current has an extrinsic origin
that depends strongly on the electronic relaxation time; (ii) the longitudinal
charge current is of nonrelativistic origin, while the transverse spin current
is a relativistic effect that to lowest order scales linearly with the
spin-orbit coupling strength; and (iii) both charge and spin pumped currents
have a parabolic dependence on the amplitude of the elastic wave.
|
X-ray imaging observatories have revealed hydrodynamic structures with linear
scales ~ 10 kpc in clusters of galaxies, such as shock waves in the 1E0657-56
and A520 galaxy clusters and the hot plasma bubble in the MKW 3s cluster. The
future X-ray observatory IXO will resolve for the first time the metal
distribution in galaxy clusters at the these scales. Heating of plasmas by
shocks and AGN activities can result in non-equilibrium ionization states of
metal ions. We study the effect of the non-equilibrium ionization at linear
scales <50 kpc in galaxy clusters. A condition for non-equilibrium ionization
is derived by comparing the ionization time-scale with the age of hydrodynamic
structures. Modeling of non-equilibrium ionization when the plasma temperature
suddenly change is performed. An analysis of relaxation processes of the FeXXV
and FeXXVI ions by means of eigenvectors of the transition matrix is given. We
conclude that the non-equilibrium ionization of iron can occur in galaxy
clusters if the baryonic overdensity delta is smaller than 11.0/tau, where
tau<<1 is the ratio of the hydrodynamic structure age to the Hubble time. Our
modeling indicates that the emissivity in the helium-like emission lines of
iron increases as a result of deviation from the ionization equilibrium. A slow
process of helium-like ionic fraction relaxation was analyzed. A new way to
determine a shock velocity is proposed.
|
This paper studies the problem of selecting a submatrix of a positive
definite matrix in order to achieve a desired bound on the smallest eigenvalue
of the submatrix. Maximizing this smallest eigenvalue has applications to
selecting input nodes in order to guarantee consensus of networks with negative
edges as well as maximizing the convergence rate of distributed systems. We
develop a submodular optimization approach to maximizing the smallest
eigenvalue by first proving that positivity of the eigenvalues of a submatrix
can be characterized using the probability distribution of the quadratic form
induced by the submatrix. We then exploit that connection to prove that
positive-definiteness of a submatrix can be expressed as a constraint on a
submodular function. We prove that our approach results in polynomial-time
algorithms with provable bounds on the size of the submatrix. We also present
generalizations to non-symmetric matrices, alternative sufficient conditions
for the smallest eigenvalue to exceed a desired bound that are valid for
Laplacian matrices, and a numerical evaluation.
|
This paper is concerned with the values of Harish-Chandra characters of a
class of positive-depth, toral, very supercuspidal representations of $p$-adic
symplectic and special orthogonal groups, near the identity element. We declare
two representations equivalent if their characters coincide on a specific
neighbourhood of the identity (which is larger than the neighbourhood on which
Harish-Chandra local character expansion holds). We construct a parameter space
$B$ (that depends on the group and a real number $r>0$) for the set of
equivalence classes of the representations of minimal depth $r$ satisfying some
additional assumptions. This parameter space is essentially a geometric object
defined over $\mathbb{Q}$. Given a non-Archimedean local field $K$ with sufficiently
large residual characteristic, the part of the character table near the
identity element for $G(K)$ that comes from our class of representations is
parameterized by the residue-field points of $B$. The character values
themselves can be recovered by specialization from a constructible motivic
exponential function. The values of such functions are algorithmically
computable. It is in this sense that we show that a large part of the character
table of the group $G(K)$ is computable.
|
Feature screening approaches are effective in selecting active features from
data with ultrahigh dimensionality and increasing complexity; however, the
majority of existing feature screening approaches are either restricted to a
univariate response or rely on some distribution or model assumptions. In this
article, we propose a novel sure independence screening approach based on the
multivariate rank distance correlation (MrDc-SIS). The MrDc-SIS achieves
multiple desirable properties: it is distribution-free, completely
nonparametric, scale-free, robust to outliers and heavy tails, and sensitive
to hidden structures. Moreover, the MrDc-SIS can be used to screen either
univariate or multivariate responses and either one-dimensional or
multi-dimensional predictors. We establish the asymptotic sure screening
consistency property of the MrDc-SIS under a mild condition by lifting previous
assumptions about the finite moments. Simulation studies demonstrate that
MrDc-SIS outperforms three other closely relevant approaches under various
settings. We also apply the MrDc-SIS approach to a multi-omics ovarian
carcinoma dataset downloaded from The Cancer Genome Atlas (TCGA).
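A hedged sketch of distance-correlation-based sure independence screening: predictors are ranked by their sample distance correlation with the response and the top d are kept. The multivariate-rank step that defines MrDc-SIS is omitted here, so this is only the plain-distance-correlation analogue.

```python
# Screen ultrahigh-dimensional predictors by sample distance correlation with the
# response; picks up nonlinear (hidden) dependence without model assumptions.
import numpy as np

def dist_corr(x, y):
    """Sample distance correlation between 1-D arrays x and y (V-statistic form)."""
    def centered(a):
        D = np.abs(a[:, None] - a[None, :])
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / (np.sqrt((A * A).mean() * (B * B).mean()) + 1e-12))

def screen(X, y, d):
    scores = np.array([dist_corr(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:d]                   # indices of the top-d predictors

X = np.random.randn(200, 1000)
y = X[:, 3] ** 2 + 0.1 * np.random.randn(200)             # hidden nonlinear signal in column 3
print(screen(X, y, d=10))
```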
|
We investigate the structure of ideals generated by binomials (polynomials
with at most two terms) and the schemes and varieties associated to them. The
class of binomial ideals contains many classical examples from algebraic
geometry, and it has numerous applications within and beyond pure mathematics.
The ideals defining toric varieties are precisely the binomial prime ideals.
Our main results concern primary decomposition: If $I$ is a binomial ideal then
the radical, associated primes, and isolated primary components of $I$ are
again binomial, and $I$ admits primary decompositions in terms of binomial
primary ideals. A geometric characterization is given for the affine algebraic
sets that can be defined by binomials. Our structural results yield
sparsity-preserving algorithms for finding the radical and primary
decomposition of a binomial ideal.
|
Non-planar solar-cell devices have been promoted as a means to enhance
current collection in absorber materials with charge-transport limitations.
This work presents an analytical framework for assessing the ultimate
performance of non-planar solar cells based on materials and geometry. Herein,
the physics of the p-n junction is analyzed for low-injection conditions, when
the junction can be considered spatially separable into quasi-neutral and
space-charge regions. For the conventional planar solar cell architecture,
previously established one-dimensional expressions governing charge carrier
transport are recovered from the framework established herein. Space-charge
region recombination statistics are compared for planar and non-planar
geometries, showing variations in recombination current produced from the
space-charge region. In addition, planar and non-planar solar cell performance
are simulated, based on a semi-empirical expression for short-circuit current,
detailing variations in charge carrier transport and efficiency as a function
of geometry, thereby yielding insights into design criteria for solar cell
architectures. For the conditions considered here, the expressions for
generation rate and total current are shown to universally govern any solar
cell geometry, while recombination within the space-charge region is shown to
be directly dependent on the geometrical orientation of the p-n junction.
|
The production of $W/Z$ bosons in association with heavy flavour jets or
hadrons at the LHC is sensitive to the flavour content of the proton and
provides an important test of perturbative QCD. The production of a $W$ boson
in association with $D^{+}$ and $D^{*+}$ mesons will be discussed. This
precision measurement provides information about the strange content of the
proton. Measurements are compared to state-of-the-art
next-to-next-to-leading order theoretical calculations.
|
We investigate bulk ion heating in solid buried layer targets irradiated by
ultra-short laser pulses of relativistic intensities using particle-in-cell
simulations. Our study focuses on a CD2-Al-CD2 sandwich target geometry. We
find enhanced deuteron ion heating in a layer compressed by the expanding
aluminium layer. A pressure gradient created at the Al-CD2 interface pushes
this layer of deuteron ions towards the outer regions of the target. During its
passage through the target, deuteron ions are constantly injected into this
layer. Our simulations suggest that the directed collective outward motion of
the layer is converted into thermal motion inside the layer, leading to
deuteron temperatures higher than those found in the rest of the target. This
enhanced heating can already be observed at laser pulse durations as low as 100
femtoseconds. Thus, detailed experimental surveys at repetition rates of
several tens of laser shots per minute are within reach at current high-power
laser systems, which would allow for probing and optimizing the heating dynamics.
|
This paper studies sequence prediction based on the monotone Kolmogorov
complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is
extremely close to Solomonoff's prior M, the latter being an excellent
predictor in deterministic as well as probabilistic environments, where
performance is measured in terms of convergence of posteriors or losses.
Despite this closeness to M, it is difficult to assess the prediction quality
of m, since little is known about the closeness of their posteriors, which are
the important quantities for prediction. We show that for deterministic
computable environments, the "posterior" and losses of m converge, but rapid
convergence could only be shown on-sequence; the off-sequence behavior is
unclear. In probabilistic environments, neither the posterior nor the losses
converge, in general.
|
Time Series Representation Learning (TSRL) focuses on generating informative
representations for various Time Series (TS) modeling tasks. Traditional
Self-Supervised Learning (SSL) methods in TSRL fall into four main categories:
reconstructive, adversarial, contrastive, and predictive, each with a common
challenge of sensitivity to noise and intricate data nuances. Recently,
diffusion-based methods have shown advanced generative capabilities. However,
they primarily target specific application scenarios like imputation and
forecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our
work, Time Series Diffusion Embedding (TSDE), bridges this gap as the first
diffusion-based SSL TSRL approach. TSDE segments TS data into observed and
masked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It
applies a trainable embedding function, featuring dual-orthogonal Transformer
encoders with a crossover mechanism, to the observed part. We train a reverse
diffusion process conditioned on the embeddings, designed to predict noise
added to the masked part. Extensive experiments demonstrate TSDE's superiority
in imputation, interpolation, forecasting, anomaly detection, classification,
and clustering. We also conduct an ablation study, present embedding
visualizations, and compare inference speed, further substantiating TSDE's
efficiency and validity in learning representations of TS data.
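A minimal sketch of an Imputation-Interpolation-Forecasting (IIF) mask of the kind described above, with random missing points, one contiguous interior gap, and a masked suffix; the ratios and lengths are illustrative assumptions, not the paper's settings.

```python
# Build a boolean observation mask combining the three masking regimes:
# imputation (random points), interpolation (interior gap), forecasting (suffix).
import numpy as np

def iif_mask(length, p_impute=0.1, gap_len=10, horizon=12, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.ones(length, dtype=bool)
    observed &= rng.random(length) > p_impute             # imputation: random missing points
    start = rng.integers(0, length - gap_len - horizon)   # interpolation: interior gap
    observed[start:start + gap_len] = False
    observed[length - horizon:] = False                   # forecasting: mask the suffix
    return observed                                       # True = observed, False = masked

mask = iif_mask(96)
```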
|
Using the relation proposed by Weinberg in 1972, which combines quantum and
cosmological parameters, we prove that the self-gravitational potential energy
of any fundamental particle is a quantum, with physical properties independent
of the mass of the particle. It is a universal quantum of gravitational energy,
and its physical properties depend only on the cosmological scale factor $R$ and
the physical constants $\hbar$ and $c$. We propose a modification of Weinberg's
relation, keeping the same numerical value, but substituting the cosmological
parameter $H/c$ with $1/R$.
|
A search for $CP$ violation in the Cabibbo-suppressed $D^0 \rightarrow K^+
K^- \pi^+ \pi^-$ decay mode is performed using an amplitude analysis. The
measurement uses a sample of $pp$ collisions recorded by the LHCb experiment
during 2011 and 2012, corresponding to an integrated luminosity of 3.0
fb$^{-1}$. The $D^0$ mesons are reconstructed from semileptonic $b$-hadron
decays into $D^0\mu^- X$ final states. The selected sample contains more than
160000 signal decays, allowing the most precise amplitude modelling of this
$D^0$ decay to date. The obtained amplitude model is used to perform the search
for $CP$ violation. The result is compatible with $CP$ symmetry, with a
sensitivity ranging from 1% to 15% depending on the amplitude considered.
|
(Abridged) We present an unbiassed near-IR selected AGN sample, covering
12.56 square degrees down to K ~ 15.5, selected from the Two Micron All Sky
Survey (2MASS). Our only selection effect is a moderate color cut (J-K>1.2)
designed to reduce contamination from galactic stars. We observed both
point-like and extended sources. Using the brute-force capabilities of the 2dF
multi-fiber spectrograph on the Anglo-Australian Telescope, we obtained spectra
of 65% of the target list: an unbiassed sub-sample of 1526 sources.
80% of the 2MASS sources in our fields are galaxies, with a median redshift
of 0.15. The remainder are K- and M-dwarf stars.
Seyfert-2 galaxies are roughly three times more common in this sample than in
optically selected galaxy samples (once corrections have been made for the
equivalent width limit and for different aperture sizes).
We find 14 broad-line (Type-1) AGNs, giving a surface density down to K<15
comparable to that of optical samples down to B=18.5. Half of our Type-1 AGNs
could not have been found by normal color selection techniques. In all cases
this was due to host galaxy light contamination rather than intrinsically red
colors.
We conclude that the Type-1 AGN population found in the near-IR is not
dramatically different from that found in optical samples. There is no evidence
for a large population of AGNs that could not be found at optical wavelengths,
though we can only place very weak constraints on any population of dusty
high-redshift QSOs.
|
This paper argues that training GANs on local and non-local dependencies in
speech data offers insights into how deep neural networks discretize continuous
data and how symbolic-like rule-based morphophonological processes emerge in a
deep convolutional architecture. Acquisition of speech has recently been
modeled as a dependency between latent space and data generated by GANs in
Begu\v{s} (2020b; arXiv:2006.03965), who models learning of a simple local
allophonic distribution. We extend this approach to test learning of local and
non-local phonological processes that include approximations of morphological
processes. We further parallel outputs of the model to results of a behavioral
experiment where human subjects are trained on the data used for training the
GAN network. Four main conclusions emerge: (i) the networks provide useful
information for computational models of speech acquisition even if trained on a
comparatively small dataset of an artificial grammar learning experiment; (ii)
local processes are easier to learn than non-local processes, which matches
both behavioral data in human subjects and typology in the world's languages.
This paper also proposes (iii) how we can actively observe the network's
progress in learning and explore the effect of training steps on learning
representations by keeping latent space constant across different training
steps. Finally, this paper shows that (iv) the network learns to encode the
presence of a prefix with a single latent variable; by interpolating this
variable, we can actively observe the operation of a non-local phonological
process. The proposed technique for retrieving learning representations has
general implications for our understanding of how GANs discretize continuous
speech data and suggests that rule-like generalizations in the training data
are represented as an interaction between variables in the network's latent
space.
|
We present continuum and molecular line (CO, C$^{18}$O, HCO$^+$) observations
carried out with the Atacama Large Millimeter/submillimeter Array toward the
"water fountain" star IRAS 15103-5754, an object that could be the youngest PN
known. We detect two continuum sources, separated by $0.39\pm 0.03$ arcsec. The
emission from the brighter source seems to arise mainly from ionized gas, thus
confirming the PN nature of the object. The molecular line emission is
dominated by a circumstellar torus with a diameter of $\simeq 0.6$ arcsec (2000
au) and expanding at $\simeq 23$ km s$^{-1}$. We see at least two gas outflows.
The highest-velocity outflow (deprojected velocities up to 250 km s$^{-1}$),
traced by the CO lines, shows a biconical morphology, whose axis is misaligned
$\simeq 14^\circ$ with respect to the symmetry axis of the torus, and with a
different central velocity (by $\simeq 8$ km s$^{-1}$). An additional
high-density outflow (traced by HCO$^+$) is oriented nearly perpendicular to
the torus. We speculate that IRAS 15103-5754 was a triple stellar system that
went through a common envelope phase, and one of the components was ejected in
this process. A subsequent low-collimation wind from the remaining binary
stripped out gas from the torus, creating the conical outflow. The high
velocity of the outflow suggests that the momentum transfer from the wind was
extremely efficient, or that we are witnessing a very energetic mass-loss
event.
|
The existence of massive quiescent galaxies at high redshift seems to require
rapid quenching, but it is unclear whether all quiescent galaxies have gone
through this phase and what physical mechanisms are involved. To study rapid
quenching, we use rest-frame colors to select 12 young quiescent galaxies at $z
\sim 1.5$. From spectral energy distribution fitting, we find that they all
experienced intense starbursts prior to rapid quenching. We confirm this with
deep Magellan/FIRE spectroscopic observations for a subset of seven galaxies.
Broad emission lines are detected for two galaxies and are most likely caused
by AGN activity. The other five galaxies do not show any emission features,
suggesting that gas has already been removed or depleted. Most of the rapidly
quenched galaxies are more compact than normal quiescent galaxies, providing
evidence for a central starburst in the recent past. We estimate an average
transition time of $300\,\rm Myr$ for the rapid quenching phase. Approximately
$4\%$ of quiescent galaxies at $z=1.5$ have gone through rapid quenching; this
fraction increases to $23\%$ at $z=2.2$. We identify analogs in the TNG100
simulation and find that rapid quenching for these galaxies is driven by AGN,
and for half of the cases, gas-rich major mergers seem to trigger the
starburst. We conclude that these massive quiescent galaxies are not just
rapidly quenched but also rapidly formed through a major starburst. We
speculate that mergers drive gas inflow towards the central regions and grow
supermassive black holes, leading to rapid quenching by AGN feedback.
|
This paper addresses the problem of safe and efficient navigation in remotely
controlled robots operating in hazardous and unstructured environments; or
conducting other remote robotic tasks. A shared control method is presented
which blends the commands from a VFH+ obstacle avoidance navigation module with
the teleoperation commands provided by an operator via a joypad. The presented
approach offers several advantages, such as flexibility, allowing for a
straightforward adaptation of the controller's behaviour and easy integration
with variable autonomy systems, as well as the ability to cope with dynamic
environments. The advantages of the presented controller are demonstrated by an
experimental evaluation in a disaster response scenario. More specifically, the
presented evidence shows a clear performance increase in terms of safety and
task completion time compared to a pure teleoperation approach, as well as an
ability to cope with previously unobserved obstacles.
|
We give an explicit procedure which computes for degree $d \leq 3$ the
correlation functions of topological sigma model (A-model) on a projective Fano
hypersurface $X$ as homogeneous polynomials of degree $d$ in the correlation
functions of degree 1 (number of lines). We extend this formalism to the case
of Calabi-Yau hypersurfaces and explain how the polynomial property is
preserved. Our key tool is the construction of universal recursive formulas
which express the structural constants of the quantum cohomology ring of $X$ as
weighted homogeneous polynomial functions in the constants of the Fano
hypersurface with the same degree and dimension one more. We propose some
conjectures about the existence and the form of the recursive formulas for the
structural constants of rational curves of arbitrary degree. Our recursive
formulas should yield the coefficients of the hypergeometric series used in the
mirror calculation. Assuming the validity of the conjectures we find the
recursive laws for rational curves of degree 4 and 5.
|
It has recently been suggested that the presence of a plenitude of light
axions, an Axiverse, is evidence for the extra dimensions of string theory. We
discuss the observational consequences of these axions on astrophysical black
holes through the Penrose superradiance process. When an axion Compton
wavelength is comparable to the size of a black hole, the axion binds to the
black hole "nucleus" forming a gravitational atom in the sky. The occupation
number of superradiant atomic levels, fed by the energy and angular momentum of
the black hole, grows exponentially. The black hole spins down and an axion
Bose-Einstein condensate cloud forms around it. When the attractive axion
self-interactions become stronger than the gravitational binding energy, the
axion cloud collapses, a phenomenon known in condensed matter physics as
"Bosenova". The existence of axions is first diagnosed by gaps in the mass vs
spin plot of astrophysical black holes. For young black holes the allowed
values of spin are quantized, giving rise to "Regge trajectories" inside the
gap region. The axion cloud can also be observed directly either through
precision mapping of the near horizon geometry or through gravitational waves
coming from the Bosenova explosion, as well as axion transitions and
annihilations in the gravitational atom. Our estimates suggest that these
signals are detectable in upcoming experiments, such as Advanced LIGO, AGIS,
and LISA. Current black hole spin measurements imply an upper bound on the QCD
axion decay constant of 2 x 10^17 GeV, while Advanced LIGO can detect signals
from a QCD axion cloud with a decay constant as low as the GUT scale. We
finally discuss the possibility of observing the gamma-rays associated with the
Bosenova explosion and, perhaps, the radio waves from axion-to-photon
conversion for the QCD axion.
|
The Hall effect and the magnetoresistivity of holes in silicon and germanium are
considered with due regard for the mutual drag of light- and heavy-band carriers.
An analysis of the contribution of this drag shows that the interaction has a
substantial and non-trivial influence on both effects.
|
A proof is given that the polar decomposition procedure for unitarity
restoration works for products of invertible nonunitary operators. A brief
discussion follows, noting that the unitarity restoration procedure, applied to
propagators in spacetimes containing closed timelike curves, is analogous to
the original introduction by Feynman of ghosts to restore unitarity in
non-abelian gauge theories. (The substance of this paper will be a note added
in proof to the published version of gr-qc/9405058, to appear in Phys Rev D.)
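For reference (a standard fact, not specific to this paper), the polar decomposition writes any invertible operator $A$ as $A = U_A\,|A|$ with $|A| = (A^\dagger A)^{1/2}$ and $U_A = A\,(A^\dagger A)^{-1/2}$ unitary; in the unitarity restoration procedure the nonunitary propagator is replaced by the unitary factor of its polar decomposition, and the result stated here is that this works consistently for products of invertible nonunitary operators.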
|
About 80% of the matter in the Universe is in the form of dark matter, which
comprises the skeleton of the large-scale structure called the Cosmic Web. As
the Cosmic Web dictates the motion of all matter in galaxies and inter-galactic
media through gravity, knowing the distribution of dark matter is essential for
studying the large-scale structure. However, the Cosmic Web's detailed
structure is unknown because it is dominated by dark matter and warm-hot
inter-galactic media, both of which are hard to trace. Here we show that we can
reconstruct the Cosmic Web from the galaxy distribution using the
convolutional-neural-network-based deep-learning algorithm. We find the mapping
between the position and velocity of galaxies and the Cosmic Web using the
results of the state-of-the-art cosmological galaxy simulations, Illustris-TNG.
We confirm the mapping by applying it to the EAGLE simulation. Finally, using
the local galaxy sample from Cosmicflows-3, we find the dark-matter map in the
local Universe. We anticipate that the local dark-matter map will illuminate
the studies of the nature of dark matter and the formation and evolution of the
Local Group. High-resolution simulations and precise distance measurements to
local galaxies will improve the accuracy of the dark-matter map.
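As a minimal illustration (not the authors' architecture), a 3D convolutional network of the kind referred to here maps gridded galaxy density and velocity fields to a dark-matter density field; the channel counts, grid size, and layer layout below are assumptions made only for this sketch.

    import torch
    import torch.nn as nn

    class DarkMatterMapper(nn.Module):
        """Toy 3D CNN: 4 input channels (galaxy number density + 3 velocity
        components) -> 1 output channel (dark-matter density on the same grid)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 1, kernel_size=1),
            )

        def forward(self, x):            # x: (batch, 4, N, N, N)
            return self.net(x)

    model = DarkMatterMapper()
    fields = torch.randn(2, 4, 32, 32, 32)   # mock simulation sub-volumes
    dm_map = model(fields)                   # (2, 1, 32, 32, 32)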
|
We study the quantum electron transport in a one-dimensional interacting
electron system, called the Schmid model, reformulating the model in terms of the
bosonic string theory on a disk. The particle-kink duality of the model is
discussed in the absence of the external electric field and further extended to
the model with a weak electric field. Using the linear response theory, we
evaluate the electric conductance for both weak and strong periodic
potentials in the zero temperature limit. The electric conductance is invariant
under the particle-kink duality.
|
We present a new, publicly-available image dataset generated by the NVIDIA
Deep Learning Data Synthesizer intended for use in object detection, pose
estimation, and tracking applications. This dataset contains 144k stereo image
pairs that synthetically combine 18 camera viewpoints of three photorealistic
virtual environments with up to 10 objects (chosen randomly from the 21 object
models of the YCB dataset [1]) and flying distractors. Object and camera pose,
scene lighting, and quantity of objects and distractors were randomized. Each
provided view includes RGB, depth, segmentation, and surface normal images, all
at the pixel level. We describe our approach to domain randomization and provide
insight into the decisions that produced the dataset.
|
As cognitive interventions for older adults evolve, modern technologies are
increasingly integrated into their development. This study investigates the
efficacy of augmented reality (AR)-based physical-cognitive training using an
interactive game with Kinect motion sensor technology on older individuals at
risk of mild cognitive impairment. Utilizing a pretest-posttest experimental
design, twenty participants (mean age 66.8, SD = 4.6 years; age range 60-78
years) underwent eighteen individual training sessions, lasting 45 to 60
minutes each, conducted three times a week over a span of 1.5 months. The
training modules, comprising five activities that target episodic and working
memory, attention and inhibition, cognitive flexibility, and processing speed,
were integrated with physical movement and culturally relevant Thai-context
activities. Results revealed significant improvements in inhibition, cognitive
flexibility, accuracy, and reaction time, with working memory demonstrating
enhancements in accuracy albeit not in reaction time. These findings underscore
the potential of AR interventions to bolster basic executive enhancement among
community-dwelling older adults at risk of cognitive decline.
|
A survey of physical parameters and of a ladder of various regimes of
laser-matter interactions at extreme intensities is given. Special emphasis is
placed on three selected topics: (i) a qualitative derivation of the scalings for
probability rates of the basic processes; (ii) self-sustained cascades (which
may dominate at the intensity levels attainable with next generation laser
facilities); and (iii) the possible breakdown of the intense-field QED approach
for ultrarelativistic electrons and high-energy photons at a certain intensity
level.
|
The longitudinal magnetoresistance (MR) is usually expected to be negligible,
since the Lorentz force does not act on electrons when the magnetic field is
parallel to the current. In some cases, however, the longitudinal MR becomes large
and even exceeds the transverse MR. To resolve this puzzle, we have investigated
the longitudinal MR considering multivalley contributions based on the
classical MR theory. We have shown that the large longitudinal MR is caused by
off-diagonal components of a mobility tensor. Our theoretical results agree
with the experiments of large longitudinal MR in IV-VI semiconductors,
especially in PbTe, for a wide range of temperatures, except for linear MR at
low temperatures.
|
We study $L^p$-$L^r$ restriction estimates for algebraic varieties $V$ in the
case when restriction operators act on radial functions in the finite field
setting. We show that if the varieties $V$ lie in odd dimensional vector spaces
over finite fields, then the conjectured restriction estimates are possible for
all radial test functions. In addition, it is proved that if the varieties $V$
in even dimensions have few intersection points with the sphere of zero radius,
the same conclusion as in odd dimensional case can be also obtained.
|
Advancements in technology and culture lead to changes in our language. These
changes create a gap between the language known by users and the language
stored in digital archives. This affects users' ability both to find content and
to interpret it. In previous work we introduced our
approach for Named Entity Evolution Recognition~(NEER) in newspaper
collections. Lately, increasing efforts in Web preservation have led to the
increased availability of Web archives covering longer time spans. However, language on
the Web is more dynamic than in traditional media and many of the basic
assumptions from the newspaper domain do not hold for Web data. In this paper
we discuss the limitations of existing methodology for NEER. We approach these
by adapting an existing NEER method to work on noisy data like the Web and the
Blogosphere in particular. We develop novel filters that reduce the noise and
make use of Semantic Web resources to obtain more information about terms. Our
evaluation shows the potentials of the proposed approach.
|
We consider polygon and simplex equations, of which the simplest nontrivial
examples are pentagon (5-gon) and Yang--Baxter (2-simplex), respectively. We
examine the general structure of (2n+1)-gon and 2n-simplex equations in direct
sums of vector spaces. Then we provide a construction for their solutions,
parameterized by elements of the Grassmannian Gr(n+1,2n+1).
|
The saturation level of the magnetorotational instability (MRI) is
investigated using three-dimensional MHD simulations. The shearing box
approximation is adopted and the vertical component of gravity is ignored, so
that the evolution of the MRI is followed in a small local part of the disk. We
focus on the dependence of the saturation level of the stress on the gas
pressure, which is a key assumption in the standard alpha disk model. From our
numerical experiments it is found that there is a weak power-law relation
between the saturation level of the Maxwell stress and the gas pressure in the
nonlinear regime; the higher the gas pressure, the larger the stress. Although
the power-law index depends slightly on the initial field geometry, the
relationship between stress and gas pressure is independent of the initial
field strength, and is unaffected by Ohmic dissipation if the magnetic Reynolds
number is at least 10. The relationship is the same in adiabatic calculations,
where pressure increases over time, and nearly-isothermal calculations, where
pressure varies little with time. Our numerical results are qualitatively
consistent with the idea that the saturation level of the MRI is determined by a
balance between the growth of the MRI and the dissipation of the field through
reconnection. The quantitative interpretation of the pressure-stress relation,
however, may require advances in the theoretical understanding of non-steady
magnetic reconnection.
|
We study anharmonic effects in MgB2 by comparing inelastic X-ray and Raman
scattering together with ab-initio calculations. Using high-statistics and
high-q-resolution measurements, we show that the E2g mode linewidth is
independent of temperature along Gamma-A. We show, contrary to previous claims,
that the Raman-peak energy decreases as a function of increasing temperature, a
behaviour inconsistent with all the anharmonic ab-initio calculations of the
E2g mode at Gamma available in the literature. These findings and the excellent
agreement between the X-ray measured and ab-initio calculated phonon spectra
suggest that anharmonicity is not the main mechanism determining the
temperature behaviour of the Raman-peak energy. The Raman E2g peak position and
linewidth can be explained by large dynamical effects in the phonon
self-energy. In light of the present findings, the commonly accepted
explanation of the reduced isotope effect in terms of anharmonic effects needs
to be reconsidered.
|
Deep convolutional neural networks can enhance images taken with small mobile
camera sensors and excel at tasks like demosaicing, denoising and
super-resolution. However, for practical use on mobile devices these networks
often require too many FLOPs, and reducing the FLOPs of a convolution layer
also reduces its parameter count. This is problematic in view of the recent
finding that heavily over-parameterized neural networks are often the ones that
generalize best. In this paper we propose to use HyperNetworks to break the
fixed ratio of FLOPs to parameters of standard convolutions. This allows us to
exceed previous state-of-the-art architectures in SSIM and MS-SSIM on the
Zurich RAW-to-DSLR (ZRR) dataset at a more than 10x reduced FLOP count. On ZRR we
further observe generalization curves consistent with 'double-descent' behavior
at fixed FLOP-count, in the large image limit. Finally we demonstrate the same
technique can be applied to an existing network (VDN) to reduce its
computational cost while maintaining fidelity on the Smartphone Image Denoising
Dataset (SIDD). Code for key functions is given in the appendix.
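A minimal sketch of the core idea, assuming a PyTorch setting: a small hypernetwork generates the kernel of a convolution from a learned embedding, so the parameter count (embedding plus generator) can be scaled independently of the convolution's FLOPs. The layer sizes and names are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperConv2d(nn.Module):
        """Convolution whose kernel is produced by a hypernetwork."""
        def __init__(self, c_in, c_out, k=3, embed_dim=64, hidden=256):
            super().__init__()
            self.c_in, self.c_out, self.k = c_in, c_out, k
            self.z = nn.Parameter(torch.randn(embed_dim))     # layer embedding
            self.generator = nn.Sequential(                   # weight generator
                nn.Linear(embed_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, c_out * c_in * k * k),
            )

        def forward(self, x):
            w = self.generator(self.z).view(self.c_out, self.c_in, self.k, self.k)
            # FLOPs of this call depend only on c_in, c_out, k and the image size,
            # while most of the parameters live in the generator.
            return F.conv2d(x, w, padding=self.k // 2)

    y = HyperConv2d(16, 16)(torch.randn(1, 16, 64, 64))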
|
In this paper we report on our experiments concerning the detection of new
attacks by a neural-network-based Intrusion Detection System (IDS). What is
crucial for this task is adapting a neural network that is already in use so
that it correctly classifies both new "normal traffic" and attack
representations not presented during the training process. For a new attack, it
should also be easy to obtain vectors with which to test and retrain the neural
classifier. We propose an algorithm and a distributed IDS architecture that
could achieve the goals mentioned above.
|
Non-line-of-sight (NLOS) imaging allows for the imaging of objects around a
corner, which enables potential applications in various fields such as
autonomous driving, robotic vision, medical imaging, security monitoring, etc.
However, the quality of reconstruction is challenged by low signal-to-noise-ratio
(SNR) measurements. In this study, we present a regularization method, referred
to as structure sparsity (SS) regularization, for denoising in NLOS
reconstruction. By exploiting the prior knowledge of structure sparseness, we
incorporate nuclear norm penalization into the cost function of directional
light-cone transform (DLCT) model for NLOS imaging system. This incorporation
effectively integrates the neighborhood information associated with the
directional albedo, thereby facilitating the denoising process. Subsequently,
the reconstruction is achieved by optimizing a directional albedo model with SS
regularization using the fast iterative shrinkage-thresholding algorithm (FISTA). Notably,
the robust reconstruction of occluded objects is observed. Through
comprehensive evaluations conducted on both synthetic and experimental
datasets, we demonstrate that the proposed approach yields high-quality
reconstructions, surpassing the state-of-the-art reconstruction algorithms,
especially in scenarios involving short exposure and low SNR measurements.
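A minimal numpy sketch of the two generic ingredients named above, under the simplifying assumption that the forward model is a plain matrix rather than the directional light-cone transform: singular-value soft-thresholding (the proximal operator of the nuclear norm) inside a FISTA loop for $\min_X \tfrac{1}{2}\|A\,\mathrm{vec}(X) - y\|^2 + \lambda \|X\|_*$.

    import numpy as np

    def svt(X, tau):
        """Singular-value soft-thresholding: prox of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def fista_nuclear(A, y, shape, lam, L, n_iter=200):
        """FISTA for 0.5*||A x - y||^2 + lam*||X||_*, with X = x.reshape(shape).
        L is a Lipschitz constant of the gradient, e.g. the largest eigenvalue of A.T @ A."""
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - y)                      # gradient of the data term
            X = svt((z - grad / L).reshape(shape), lam / L)
            x_new = X.ravel()
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x.reshape(shape)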
|
We consider the problem of video caching across a set of 5G small-cell base
stations (SBS) connected to each other over a high-capacity short-delay
back-haul link, and linked to a remote server over a long-delay connection.
Even though the problem of minimizing the overall video delivery delay is
NP-hard, the Collaborative Caching Algorithm (CCA) that we present can
efficiently compute a solution close to the optimal, where the degree of
sub-optimality depends on the worst case video-to-cache size ratio. The
algorithm is naturally amenable to distributed implementation that requires
zero explicit coordination between the SBSs, and runs in $O(N + K \log K)$
time, where $N$ is the number of SBSs (caches) and $K$ the maximum number of
videos. We extend CCA to an online setting where the video popularities are not
known a priori but are estimated over time through a limited amount of periodic
information sharing between SBSs. We demonstrate that our algorithm closely
approaches the optimal integral caching solution as the cache size increases.
Moreover, via simulations carried out on real video access traces, we show that
our algorithm effectively uses the SBS caches to reduce the video delivery
delay and conserve the remote server's bandwidth, and that it outperforms two
other reference caching methods adapted to our system setting.
|
An algorithm based on backward induction is devised in order to compute the
optimal sequence of games to be played in Parrondo games. The algorithm can be
used to find the optimal sequence for any finite number of turns or in the
steady state, showing that ABABB... is the sequence with the highest steady
state average gain. The algorithm can also be generalised to find the optimal
adaptive strategy in a multi-player version of the games, where a finite number
of players may choose, at every turn, the game the whole ensemble should play.
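A minimal sketch of the backward-induction idea, assuming the standard capital-dependent Parrondo games (game A wins with probability 1/2 - eps; game B wins with probability 1/10 - eps when the capital is a multiple of 3 and 3/4 - eps otherwise), so only the capital modulo 3 matters; the routine returns the optimal state-dependent choice at every remaining turn.

    import numpy as np

    EPS = 0.005

    def win_prob(game, c_mod3):
        if game == 'A':
            return 0.5 - EPS
        return (0.1 - EPS) if c_mod3 == 0 else (0.75 - EPS)

    def optimal_policy(T):
        """Backward induction maximizing the expected total gain over T turns."""
        V = np.zeros((T + 1, 3))                 # V[t, s]: optimal gain-to-go from state s
        policy = np.empty((T, 3), dtype='<U1')
        for t in range(T - 1, -1, -1):
            for s in range(3):
                best_val = None
                for g in ('A', 'B'):
                    p = win_prob(g, s)
                    val = p * (1 + V[t + 1, (s + 1) % 3]) \
                        + (1 - p) * (-1 + V[t + 1, (s - 1) % 3])
                    if best_val is None or val > best_val:
                        best_val, policy[t, s] = val, g
                V[t, s] = best_val
        return V, policy

    V, policy = optimal_policy(20)
    print(policy[0])     # optimal first-turn choice for capital mod 3 = 0, 1, 2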
|
Turbulence has been recognized as a factor of paramount importance for the
survival or extinction of sinking phytoplankton species. However, dealing with
its multiscale nature in models of coupled fluid and biological dynamics is a
formidable challenge. Advection by coherent structures, as those related to
winter convection and Langmuir circulation, is also recognized to play a role
in the survival and localization of phytoplankton. In this work we revisit a
theoretically appealing model for phytoplankton vertical dynamics, and
numerically investigate how large-scale fluid motions affect the survival
conditions and the spatial distribution of the biological population. For this
purpose, and to work with realistic parameter values, we adopt a kinematic flow
field to account for the different spatial and temporal scales of turbulent
motions. The dynamics of the population density are described by an
advection-reaction-diffusion model with a spatially heterogeneous growth term
proportional to sunlight availability. We explore the role of fluid transport
by progressively increasing the complexity of the flow in terms of spatial and
temporal scales. We find that, due to the large-scale circulation,
phytoplankton accumulates in downwelling regions and its growth is reduced,
confirming previous indications in slightly different conditions. We then
explain the observed phenomenology in terms of a plankton filament model.
Moreover, by contrasting the results in our different flow cases, we show that
the large-scale coherent structures have an overwhelming importance. Indeed, we
find that smaller-scale motions only quite weakly affect the dynamics, without
altering the general mechanism identified. Such results are relevant for
parameterizations in numerical models of phytoplankton life cycles in realistic
oceanic flow conditions.
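In generic (hedged) form, the model class used here is an advection-reaction-diffusion equation of the type $\partial_t \theta + \mathbf{v}\cdot\nabla\theta = D\nabla^2\theta + \mu(z)\,\theta$, where $\theta$ is the population density, $\mathbf{v}$ the kinematic flow field augmented by the sinking velocity, $D$ the diffusivity, and $\mu(z)$ a growth rate decreasing with depth to mimic sunlight availability; the specific functional forms and parameter values used in the study may differ.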
|
This note is a report on the observation that some singular varieties admit
Calabi--Yau coverings. As an application, we construct 18 new Calabi--Yau
3-folds with Picard number one that have some interesting properties.
|
Two-dimensional (2D) materials are among the most studied ones nowadays,
because of their unique properties. These materials are made of single- or
few-atom-thick layers assembled by van der Waals forces, hence allowing a variety
of stacking sequences possibly resulting in a variety of crystallographic
structures as soon as the sequences are periodic. Taking the example of few
layer graphene (FLG), it is of the utmost importance to identify both the number
of layers and the stacking sequence, because of the driving role these
parameters have on the properties. For this purpose, analysing the spot
intensities of electron diffraction patterns (DPs) is commonly used, along with
attempts to vary the number of layers, and the specimen tilt angle. However,
the number of sequences that can be discriminated this way remains small, because
of the similarities between the DPs. Also, the possibility of the occurrence of
C layers in addition to A and/or B layers in FLG has been rarely considered. To
overcome this limitation, we propose here a new methodology based on
multi-wavelength electron diffraction which is able to discriminate between
stacking sequences up to 6 layers (potentially more) involving A, B, and C
layers. We also propose an innovative method to calculate the spot intensities
in an easier and faster way than the standard ones. Additionally, we show that
the method is valid for transition metal dichalcogenides, taking the example of
MoS2.
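As a minimal, purely kinematic sketch (one scattering site per layer, no atomic form factors or dynamical effects, assumptions made only for illustration), the relative spot intensity of a stacking sequence can be modelled by summing layer phases set by the in-plane registries of A, B and C layers, here taken as (0,0), (1/3,2/3) and (2/3,1/3) in the hexagonal cell.

    import numpy as np

    SHIFTS = {'A': (0.0, 0.0), 'B': (1/3, 2/3), 'C': (2/3, 1/3)}

    def spot_intensity(stacking, h, k, l_frac):
        """Kinematic intensity of in-plane reflection (h, k) for a given stacking,
        with l_frac the out-of-plane coordinate in units of the interlayer spacing."""
        F = sum(np.exp(2j * np.pi * (h * u + k * v + l_frac * j))
                for j, (u, v) in enumerate(SHIFTS[layer] for layer in stacking))
        return abs(F) ** 2

    for seq in ('AB', 'ABC', 'ABAB', 'ABCA', 'ABCABC'):
        print(seq, round(spot_intensity(seq, 1, 0, 0.0), 3))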
|
Modeling the dynamics of a quantum system connected to the environment is
critical for advancing our understanding of complex quantum processes, as most
quantum processes in nature are affected by an environment. Modeling a
macroscopic environment on a quantum simulator may be achieved by coupling
independent ancilla qubits that facilitate energy exchange in an appropriate
manner with the system and mimic an environment. This approach requires a
large, possibly exponential, number of ancillary degrees of freedom, which is
impractical. In contrast, we develop a digital quantum algorithm that simulates
interaction with an environment using a small number of ancilla qubits. By
combining periodic modulation of the ancilla energies, or spectral combing,
with periodic reset operations, we are able to mimic interaction with a large
environment and generate thermal states of interacting many-body systems. We
evaluate the algorithm by simulating preparation of thermal states of the
transverse Ising model. Our algorithm can also be viewed as a quantum Markov
chain Monte Carlo (QMCMC) process that allows sampling of the Gibbs
distribution of a multivariate model. To demonstrate this we evaluate the
accuracy of sampling Gibbs distributions of simple probabilistic graphical
models using the algorithm.
|
Using group theoretical methods, we analyze the generalization of a
one-dimensional sixth-order thin film equation which arises in considering the
motion of a thin film of viscous fluid driven by an overlying elastic plate.
The most general Lie group classification of point symmetries, its Lie algebra,
and the equivalence group are obtained. Similarity reductions are performed and
invariant solutions are constructed. It is found that some similarity solutions
are of great physical interest such as sink and source solutions,
travelling-wave solutions, waiting-time solutions, and blow-up solutions.
|
Let $\Omega$ be a group with identity $e$, $\Gamma$ an $\Omega$-graded
commutative ring and $\Im$ a graded $\Gamma$-module. In this article, we
introduce the concept of $gr$-$C$-$2^{A}$-secondary submodules and investigate
some properties of this new class of graded submodules. A non-zero graded
submodule $S$ of $\Im$ is said to be a $gr$-$C$-$2^{A}$-secondary submodule if
whenever $r,s \in h(\Gamma)$, $L$ is a graded submodule of $\Im$, and
$rs\,S\subseteq L$, then either $r\,S\subseteq L$ or $s\,S \subseteq L$ or $rs
\in Gr(Ann_\Gamma(S))$.
|
In big bang nucleosynthesis (BBN), the deuterium-tritium (DT) fusion
reaction, D(T,n)$\alpha$, enhanced by the 3/2$^+$ resonance, is responsible for
99% of primordial $^4$He. This has been known for decades and has been well
documented in the scientific literature. However, following the tradition
adopted by authors of learned articles, it was stated in a matter-of-fact
manner and not emphasized; for most people, it has remained unknown. This
helium became a source for the subsequent creation of $\geq$25% of the carbon
and other heavier elements and, thus, a substantial fraction of our human
bodies. (To be more precise than $\geq$25% will require future simulation
studies on stellar nucleosynthesis.)
Also, without this resonance, controlled fusion energy would be beyond reach.
For example, for inertial confinement fusion (ICF), laser energy delivery for
the National Ignition Facility (NIF) would have to be approximately 70 times
larger for ignition.
Because the resonance enhances the DT fusion cross section a hundredfold, we
propose that the 3/2$^+$ $^5$He excited state be referred to as the "Bretscher
state" in honor of the Manhattan Project scientist who discovered it, in
analogy with the well-known 7.6 MeV "Hoyle state" in $^{12}$C that allows for
the resonant 3$\alpha$ formation.
|
Considering the interaction through mutual interference of the different
radio devices, the channel selection (CS) problem in decentralized parallel
multiple access channels can be modeled by strategic-form games. Here, we show
that the CS problem is a potential game (PG), and thus fictitious play (FP)
converges to a Nash equilibrium (NE) either in pure or mixed strategies. Using
a 2-player 2-channel game, it is shown that convergence in mixed strategies
might lead to cycles of action profiles yielding individual spectral
efficiencies (SE) worse than the SE at the worst NE in mixed and pure
strategies. Finally, exploiting the fact that the CS problem is a PG and an
aggregation game, we present a method to implement FP with local information
and minimum feedback.
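A minimal Python sketch of fictitious play in a 2-player, 2-channel selection game; the payoff matrix below (channel sharing halves the spectral efficiency) is an illustrative assumption, not the interference model of the paper.

    import numpy as np

    # payoff[a1, a2] = (SE of player 1, SE of player 2); sharing a channel halves the rate.
    payoff = np.array([[(0.5, 0.5), (1.0, 1.0)],
                       [(1.0, 1.0), (0.5, 0.5)]])

    counts = np.ones((2, 2))                     # empirical action counts for each player
    for _ in range(5000):
        beliefs = counts / counts.sum(axis=1, keepdims=True)
        actions = []
        for i in range(2):
            j = 1 - i
            exp_pay = [sum(beliefs[j, aj] * payoff[(a, aj) if i == 0 else (aj, a)][i]
                           for aj in range(2)) for a in range(2)]
            actions.append(int(np.argmax(exp_pay)))   # best response to the empirical mixture
        for i in range(2):
            counts[i, actions[i]] += 1

    print(counts / counts.sum(axis=1, keepdims=True))  # empirical (approximately NE) strategies

With this symmetric payoff, the empirical frequencies converge to the 50/50 mixed strategy while the realized actions cycle between the two channel-sharing profiles, so the average payoff along the cycle (0.5) falls below the payoff at any NE of this toy game, illustrating the phenomenon discussed above.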
|
We relax the continuity assumption in Bloom's uniform convergence theorem for
Beurling slowly varying functions \phi. We assume that \phi has the Darboux
property, and obtain results for \phi measurable or having the Baire property.
|
There is a growing need for new optimization methods to facilitate the
reliable and cost-effective operation of power systems with intermittent
renewable energy resources. In this paper, we formulate the robust AC optimal
power flow (RAC-OPF) problem as a two-stage robust optimization problem with
recourse. This problem amounts to a nonconvex infinite-dimensional optimization
problem that is computationally intractable, in general. Under the assumption
that there is adjustable generation or load at every bus in the power
transmission network, we develop a technique to approximate RAC-OPF from within
by a finite-dimensional semidefinite program by restricting the space of
recourse policies to be affine in the uncertain problem data. We establish a
sufficient condition under which the semidefinite program returns an affine
recourse policy that is guaranteed to be feasible for the original RAC-OPF
problem. We illustrate the effectiveness of the proposed optimization method on
the WSCC 9-bus and IEEE 14-bus test systems with different levels of renewable
resource penetration and uncertainty.
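In hedged, generic form, the affine restriction replaces an arbitrary recourse map by $u(w) = \bar{u} + D\,w$, where $w$ collects the uncertain problem data (e.g., renewable injections) and the nominal action $\bar{u}$ together with the gain matrix $D$ become finite-dimensional decision variables; it is this restriction that yields the finite-dimensional semidefinite approximation of RAC-OPF.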
|
Rayleigh-B\'{e}nard convection is studied and quantitative comparisons are
made, where possible, between theory and experiment by performing numerical
simulations of the Boussinesq equations for a variety of experimentally
realistic situations. Rectangular and cylindrical geometries of varying aspect
ratios for experimental boundary conditions, including fins and spatial ramps
in plate separation, are examined with particular attention paid to the role of
the mean flow. A small cylindrical convection layer bounded laterally either by
a rigid wall, fin, or a ramp is investigated and our results suggest that the
mean flow plays an important role in the observed wavenumber. Analytical
results are developed quantifying the mean flow sources, generated by amplitude
gradients, and its effect on the pattern wavenumber for a large-aspect-ratio
cylinder with a ramped boundary. Numerical results are found to agree well with
these analytical predictions. We gain further insight into the role of mean
flow in pattern dynamics by employing a novel method of quenching the mean flow
numerically. In simulations of a spiral defect chaos state, suddenly quenching
the mean flow is found to remove the time dependence, increase the
wavenumber and make the pattern more angular in nature.
|
We generalize the completely monotone conjecture ([CG15]) from the Shannon
entropy to the Tsallis entropy, for orders up to at least four. To this end, we
employ the algorithm of ([J\"un16, JM06a]), which is based on systematic
integration by parts.
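For reference (standard definitions, not specific to this paper), the Tsallis entropy of order $q$ of a density $f$ is $S_q(f) = \frac{1}{q-1}\bigl(1 - \int f^q\,dx\bigr)$, which recovers the Shannon entropy $-\int f \log f\,dx$ in the limit $q \to 1$; the completely monotone conjecture concerns the alternating signs of the successive time derivatives of the entropy along the heat flow.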
|
We study perturbative unitarity corrections in the generalized leading
logarithmic approximation in high energy QCD. It is shown that the
corresponding amplitudes with up to six gluons in the t-channel are conformally
invariant in impact parameter space. In particular we give a new representation
for the two-to-six reggeized gluon vertex in terms of conformally invariant
functions. With the help of this representation an interesting regularity in
the structure of the two-to-four and the two-to-six transition vertices is
found.
|
We study the dynamics of the chiral phase transition at finite chemical
potential in the Gross-Neveu model in the leading order in large-N
approximation. We consider evolutions starting in local thermal equilibrium in
the massless unbroken phase for conditions pertaining to traversing a first or
second order phase transition. We assume boost invariant kinematics and
determine the evolution of the order parameter $\sigma$, the energy density
and pressure as well as the effective temperature, chemical potential and
interpolating number densities as a function of $\tau$.
|
In recent years, research interest in personalised treatments has been
growing. However, treatment effect heterogeneity and possibly time-varying
treatment effects are still often overlooked in clinical studies. Statistical
tools are needed for the identification of treatment response patterns, taking
into account that treatment response is not constant over time. We aim to
provide an innovative method to obtain dynamic treatment effect phenotypes on a
time-to-event outcome, conditioned on a set of relevant effect modifiers. The
proposed method does not require the assumption of proportional hazards for the
treatment effect, which is rarely realistic. We propose a spline-based survival
neural network, inspired by the Royston-Parmar survival model, to estimate
time-varying conditional treatment effects. We then exploit the functional
nature of the resulting estimates to apply a functional clustering of the
treatment effect curves in order to identify different patterns of treatment
effects. The application that motivated this work is the discontinuation of
treatment with Mineralocorticoid receptor Antagonists (MRAs) in patients with
heart failure, where there is no clear evidence as to the patients for whom
discontinuing treatment is the safest choice and, conversely, when it leads to a
higher risk of adverse events. The data come from an electronic health record
database. A simulation study was performed to assess the performance of the
spline-based neural network and the stability of the treatment response
phenotyping procedure. In light of the results, the suggested approach has the
potential to support personalized medical choices by assessing unique treatment
responses in various medical contexts over a period of time.
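A minimal sketch, not the authors' pipeline, of the final phenotyping step: treatment-effect curves estimated on a common time grid are grouped by a clustering algorithm; here plain k-means on the discretized curves stands in for the functional clustering, and all data are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    time_grid = np.linspace(0, 24, 50)            # months, illustrative

    # effects[i, t]: estimated time-varying treatment effect for patient i at time t
    effects = np.vstack([np.sin(time_grid / 8.0) * rng.uniform(0.5, 1.5)
                         + 0.1 * rng.standard_normal(time_grid.size)
                         for _ in range(200)])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(effects)
    print(np.bincount(labels))                    # sizes of the treatment-effect phenotypes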
|
The mean-field limit of a Markovian model describing the interaction of
several classes of permanent connections in a network is analyzed. Each of the
connections has a self-adaptive behavior in that its transmission rate along
its route depends on the level of congestion of the nodes of the route. Since
several classes of connections going through the nodes of the network are
considered, an original mean-field result in a multi-class context is
established. It is shown that, as the number of connections goes to infinity,
the behavior of the different classes of connections can be represented by the
solution of an unusual nonlinear stochastic differential equation depending not
only on the sample paths of the process, but also on its distribution.
Existence and uniqueness results for the solutions of these equations are
derived. Properties of their invariant distributions are investigated and it is
shown that, under some natural assumptions, they are determined by the
solutions of a fixed-point equation in a finite-dimensional space.
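In generic (hedged) notation, such distribution-dependent dynamics are of McKean-Vlasov type, $dX_t = b\bigl(X_t, \mathrm{Law}(X_t)\bigr)\,dt + \sigma\bigl(X_t, \mathrm{Law}(X_t)\bigr)\,dW_t$, and an invariant distribution $\pi$ is then characterized as a fixed point: $\pi$ must coincide with the stationary law of the diffusion obtained by freezing the measure argument at $\pi$.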
|
Computational problem certificates are additional data structures for each
output, which can be used by a (possibly randomized) verification algorithm that
proves the correctness of each output. In this paper, we give an algorithm that
computes a certificate for the minimal polynomial of sparse or structured $n \times n$
matrices over an abstract field, of sufficiently large cardinality, whose Monte
Carlo verification complexity requires a single matrix-vector multiplication
and a linear number of extra field operations. We also propose a novel
preconditioner that ensures irreducibility of the characteristic polynomial of
the generically preconditioned matrix. This preconditioner takes linear time to
be applied and uses only two random entries. We then combine these two
techniques to give algorithms that compute certificates for the determinant,
and thus for the characteristic polynomial, whose Monte Carlo verification
complexity is therefore also linear.
|
If one accounts for correlations between scales, then nonlocal, k-dependent
halo bias is part and parcel of the excursion set approach, and hence of halo
model predictions for galaxy bias. We present an analysis that distinguishes
between a number of different effects, each one of which contributes to
scale-dependent bias in real space. We show how to isolate these effects and
remove the scale dependence, order by order, by cross-correlating the halo
field with suitably transformed versions of the mass field. These
transformations may be thought of as simple one-point, two-scale measurements
that allow one to estimate quantities which are usually constrained using
n-point statistics. As part of our analysis, we present a simple analytic
approximation for the first crossing distribution of walks with correlated
steps which are constrained to pass through a specified point, and demonstrate
its accuracy. Although we concentrate on nonlinear, nonlocal bias with respect
to a Gaussian random field, we show how to generalize our analysis to more
general fields.
|
We construct an explicit bulk dual in anti-de Sitter space, with couplings of
order $1/N$, for the $SU(N)$-singlet sector of QED in $d$ space-time dimensions
($2 < d < 4$) coupled to $N$ scalar fields. We begin from the bulk dual for the
theory of $N$ complex free scalar fields that we constructed in our previous
work, and couple this to $U(1)$ gauge fields living on the boundary in order to
get the bulk dual of scalar QED (in which the $U(1)$ gauge fields become the
boundary value of the bulk vector fields). As in our previous work, the bulk
dual is non-local but we can write down an explicit action for it. We focus on
the CFTs arising at low energies (or, equivalently, when the $U(1)$ gauge
coupling goes to infinity). For $d=3$ we discuss also the addition of a
Chern-Simons term for $U(1)$, modifying the boundary conditions for the bulk
gauge field. We also discuss the generalization to QCD, with $U(N_c)$ gauge
fields coupled to $N$ scalar fields in the fundamental representation (in the
large $N$ limit with fixed $N_c$).
|
Glioma is a common malignant brain tumor with distinct survival among
patients. The isocitrate dehydrogenase (IDH) gene mutation provides critical
diagnostic and prognostic value for glioma. It is of crucial significance to
non-invasively predict IDH mutation based on pre-treatment MRI. Machine
learning/deep learning models show reasonable performance in predicting IDH
mutation using MRI. However, most models neglect the systematic brain
alterations caused by tumor invasion, where widespread infiltration along white
matter tracts is a hallmark of glioma. Structural brain network provides an
effective tool to characterize brain organisation, which could be captured by
the graph neural networks (GNN) to more accurately predict IDH mutation.
Here we propose a method to predict IDH mutation using a GNN, based on the
structural brain network of patients. Specifically, we first construct a
network template of healthy subjects, consisting of atlases of edges (white
matter tracts) and nodes (cortical/subcortical brain regions) to provide
regions of interest (ROIs). Next, we employ autoencoders to extract the latent
multi-modal MRI features from the ROIs of edges and nodes in patients, to train
a GNN architecture for predicting IDH mutation. The results show that the
proposed method outperforms the baseline models using the 3D-CNN and
3D-DenseNet. In addition, model interpretation suggests its ability to identify
the tracts infiltrated by tumor, corresponding to clinical prior knowledge. In
conclusion, integrating brain networks with GNN offers a new avenue to study
brain lesions using computational neuroscience and computer vision approaches.
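A minimal sketch (plain PyTorch with a dense normalized adjacency, not the authors' architecture) of a graph convolution over a structural brain network: node features are the latent MRI codes from the ROI autoencoders, edge weights come from white-matter tracts, and a mean-pooled readout feeds a binary IDH classifier. The dimensions below are assumptions.

    import torch
    import torch.nn as nn

    class SimpleGCN(nn.Module):
        """Two-layer graph convolution with mean pooling and a binary head."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hidden)
            self.w2 = nn.Linear(hidden, hidden)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x, adj):
            # x: (n_nodes, in_dim) node features; adj: (n_nodes, n_nodes) normalized adjacency
            h = torch.relu(adj @ self.w1(x))
            h = torch.relu(adj @ self.w2(h))
            return self.head(h.mean(dim=0))      # graph-level logit for IDH mutation

    n_nodes, latent_dim = 90, 32                 # e.g., one node per atlas region
    x = torch.randn(n_nodes, latent_dim)         # autoencoder features per region
    adj = torch.eye(n_nodes)                     # placeholder tract-weighted adjacency
    logit = SimpleGCN(latent_dim)(x, adj)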
|
Quenched disorder in semiconductors induces localized electronic states at
the band edge, which manifest as an exponential tail in the density of states.
For large impurity densities, this tail takes a universal Lifshitz form that is
characterized by short-ranged potential fluctuations. We provide both
analytical expressions and numerical values for the Lifshitz tail of a
parabolic conduction band, including its exact fluctuation prefactor. Our
analysis is based on a replica field integral approach, where the leading
exponential scaling of the tail is determined by an instanton profile, and
fluctuations around the instanton determine the subleading pre-exponential
factor. This factor contains the determinant of a fluctuation operator, and we
avoid a full computation of its spectrum by using a Gel'fand-Yaglom formalism,
which provides a concise general derivation of fluctuation corrections in
disorder problems. We provide a revised result for the disorder band tail in
two dimensions.
|
We derive the nonlinear fractional surface wave equation that governs
compression waves at an interface that is coupled to a viscous bulk medium. The
fractional character of the differential equation comes from the fact that the
effective thickness of the bulk layer that is coupled to the interface is
frequency dependent. The nonlinearity arises from the nonlinear dependence of
the interface compressibility on the local compression, which is obtained from
experimental measurements and reflects a phase transition at the interface.
Numerical solutions of our nonlinear fractional theory reproduce several
experimental key features of surface waves in phospholipid monolayers at the
air-water interface without freely adjustable fitting parameters. In
particular, the propagation length of the surface wave abruptly increases at a
threshold excitation amplitude. The wave velocity is found to be of the order
of 40 cm/s both in experiments and theory and slightly increases as a function
of the excitation amplitude. Nonlinear acoustic switching effects in membranes
are thus shown to arise purely based on intrinsic membrane properties, namely
the presence of compressibility nonlinearities that accompany phase transitions
at the interface.
|
In this study, we examine introductory physics students' ability to perform
analogical reasoning between two isomorphic problems which employ the same
underlying physics principles but have different surface features. Three
hundred and eighty-two students from a calculus-based and an algebra-based
introductory physics course were asked to learn from a solved problem provided
and take advantage of what they learned from it to solve another isomorphic
problem (which we call the quiz problem). The solved problem provided has two
sub-problems while the quiz problem has three sub-problems, which is known to
be challenging for introductory students from previous research. In addition to
the solved problem, students also received extra scaffolding supports that were
intended to help them discern and exploit the underlying similarities of the
isomorphic solved and quiz problems. The results suggest that students had
great difficulty in transferring what they learned from a 2-step problem to a
3-step problem. Although most students were able to learn from the solved
problem to some extent with the scaffolding provided and invoke the relevant
principles in the quiz problem, they were not necessarily able to apply the
principles correctly. We also conducted think-aloud interviews with 6
introductory students in order to understand in-depth the difficulties they had
and explore strategies to provide better scaffolding. The interviews suggest
that students often superficially mapped the principles employed in the solved
problem to the quiz problem without necessarily understanding the governing
conditions underlying each principle and examining the applicability of the
principle in the new situation in an in-depth manner. Findings suggest that
more scaffolding is needed to help students in transferring from a two-step
problem to a three-step problem and applying the physics principles
appropriately.
|
We briefly review the AdS3/CFT2 correspondence and the holographic issues
that arise in the Penrose limit. Exploiting current algebra techniques,
developed by D'Appollonio and Kiritsis for the closely related Nappi-Witten
model, we obtain preliminary results for bosonic string amplitudes in the
resulting Hpp-wave background and comment on how to extend them to the
superstring.
|
Developing fully parametric building models for performance-based generative
design tasks often requires proficiency in advanced 3D modeling and visual
programming, limiting their use by many building designers. Moreover, iterating
on such models can be time-consuming and sometimes limiting, as major
changes in the layout design may require remodeling the entire parametric
definition. To address these challenges, we introduce a novel automated
generative design system, which takes a basic floor plan sketch as an input and
provides a parametric model prepared for multi-objective building optimization
as output. Furthermore, the user-designer can assign various design variables
for its desired building elements by using simple annotations in the drawing.
The system would recognize the corresponding element and define variable
constraints to prepare for a multi-objective optimization problem.
|
In this paper, we give a new method for proving the Lesche stability of several
functionals (the incomplete entropy, the Tsallis entropy, the $\kappa$-entropy,
and the quantum-group entropy). We also prove that the incomplete
$q$-expectation value and the Renyi entropy for $0 < q < 1$ are $\alpha$-stable
for all $0 < \alpha \leq q$. Finally, we prove that the incomplete
$q$-expectation value is $\alpha$-stable for all $0 < \alpha \leq 1$.
|
In this letter, we first derive the analytical channel impulse response for a
cylindrical synaptic channel surrounded by glial cells and validate it with
particle-based simulations. Afterwards, we provide an accurate analytical
approximation for the long-time decay rate of the channel impulse response by
employing Taylor expansion to the characteristic equations that determine the
decay rates of the system. We validate our approximation by comparing it with
the numerical decay rate obtained from the characteristic equation. Overall, we
provide a fully analytical description for the long-time behavior of synaptic
diffusion, e.g., the clean-up processes inside the channel after communication
has long concluded.
|