Plasma toroidal metric singularities in helical devices and tokamaks, giving
rise to magnetic surfaces inside the plasma devices, are investigated in two
cases. In the first, we consider a rotational plasma in a helical
device with circular cross-section and dissipation. In this case singularities
are shown to place a Ricci scalar curvature bound on the radius of the surface
where the Ricci scalar is the contraction of the constant Riemannian curvature
tensor of magnetic surfaces. An upper bound on the initial magnetic field in
terms of the Ricci scalar is obtained. This last bound may be useful in the
engineering construction of plasma devices in laboratories. The normal poloidal
drift velocity is also computed. In the second case a toroidal metric is used
to show that there is a relation between singularities and the type of tearing
instabilities considered in the tokamak. In this case, Ricci collineations and
Killing symmetries are also computed. The pressure is obtained by applying
these constraints to the pressure equations in tokamaks.
|
We study cosmological perturbations arising from thermal fluctuations in the
big-bounce cosmology in the Einstein-Cartan-Sciama-Kibble theory of gravity. We
show that such perturbations cannot have a scale-invariant spectrum if
fermionic matter minimally coupled to the torsion tensor is macroscopically
averaged as a spin fluid, but have a scale-invariant spectrum if the Dirac form
of the spin tensor of the fermionic matter is used.
|
Supervised learning based methods for monocular depth estimation usually
require large amounts of extensively annotated training data. In the case of
aerial imagery, this ground truth is particularly difficult to acquire.
Therefore, in this paper, we present a method for self-supervised learning for
monocular depth estimation from aerial imagery that does not require annotated
training data. For this, we only use an image sequence from a single moving
camera and learn to simultaneously estimate depth and pose information. By
sharing the weights between pose and depth estimation, we achieve a relatively
small model, which favors real-time application. We evaluate our approach on
three diverse datasets and compare the results to conventional methods that
estimate depth maps based on multi-view geometry. We achieve an accuracy
$\delta_{1.25}$ of up to 93.5%. In addition, we have paid particular attention to
the generalization of a trained model to unknown data and the self-improving
capabilities of our approach. We conclude that, even though the results of
monocular depth estimation are inferior to those achieved by conventional
methods, they are well suited to provide a good initialization for methods that
rely on image matching or to provide estimates in regions where image matching
fails, e.g. occluded or texture-less regions.
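For a concrete picture of the weight-sharing idea, the following is a minimal
sketch, assuming a PyTorch implementation; the encoder/head layout and all
layer sizes are hypothetical and only illustrate how sharing the encoder
between the depth and pose tasks keeps the model small.

```python
# Illustrative sketch (not the authors' code): a shared convolutional encoder
# feeding separate depth and pose heads, as in weight-sharing schemes for
# self-supervised depth + ego-motion estimation. All sizes are placeholders.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class DepthPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()        # weights shared by both tasks
        self.depth_head = nn.Conv2d(128, 1, 3, padding=1)
        self.pose_head = nn.Linear(128, 6)    # 6-DoF relative camera motion

    def forward(self, frame_t, frame_t1):
        f_t, f_t1 = self.encoder(frame_t), self.encoder(frame_t1)
        depth = torch.sigmoid(self.depth_head(f_t))   # inverse depth in (0, 1)
        pooled = (f_t + f_t1).mean(dim=(2, 3))        # crude two-frame fusion
        return depth, self.pose_head(pooled)          # (tx, ty, tz, rx, ry, rz)

depth, pose = DepthPoseNet()(torch.rand(1, 3, 128, 416),
                             torch.rand(1, 3, 128, 416))
print(depth.shape, pose.shape)  # (1, 1, 16, 52) and (1, 6)
```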
|
Stationary Random Functions have been successfully applied in geostatistical
applications for decades. In some instances, the assumption of a homogeneous
spatial dependence structure across the entire domain of interest is
unrealistic. A practical approach for modelling and estimating non-stationary
spatial dependence structure is considered. This consists in transforming a
non-stationary Random Function into a stationary and isotropic one via a
bijective continuous deformation of the index space. So far, this approach has
been successfully applied in the context of data from several independent
realizations of a Random Function. In this work, we propose an approach for
non-stationary geostatistical modelling using space deformation in the context
of a single realization with possibly irregularly spaced data. The estimation
method is based on a non-stationary variogram kernel estimator which serves as
a dissimilarity measure between two locations in the geographical space. The
proposed procedure combines aspects of kernel smoothing, weighted non-metric
multi-dimensional scaling and thin-plate spline radial basis functions. On
simulated data, the method is able to retrieve the true deformation.
Performances are assessed on both synthetic and real datasets. It is shown in
particular that our approach outperforms the stationary approach. Beyond the
prediction, the proposed method can also serve as a tool for exploratory
analysis of the non-stationarity.
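To illustrate one ingredient of the procedure, the sketch below applies
non-metric multi-dimensional scaling to a precomputed dissimilarity matrix in
order to estimate coordinates in the deformed space; the dissimilarities here
are synthetic stand-ins for the non-stationary variogram kernel estimates used
in the paper.

```python
# Sketch: embed sites via non-metric MDS so that distances in the deformed
# space reflect spatial dissimilarities. The dissimilarity matrix below is a
# synthetic placeholder, not the variogram kernel estimator of the paper.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(50, 2))    # irregularly spaced sites

# Placeholder dissimilarities: distances inflated on one side of the domain,
# mimicking a locally varying spatial dependence structure.
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
stretch = 1.0 + 2.0 * (coords[:, 0][:, None] + coords[:, 0][None, :]) / 2
dissim = d * stretch

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
deformed = mds.fit_transform(dissim)        # estimated deformed coordinates
print(deformed.shape)                       # (50, 2)
```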
|
The use of e-learning systems, in which students study online with the help of
a software system, has a long tradition. In this context, the use of recommender systems is
relatively new. In our research project, we investigated various ways to create
a recommender system. They all aim at facilitating the learning and
understanding of a student. We present a common concept of the learning path
and its learning indicators, and embed five different recommenders in this context.
|
Starting from the working hypothesis that both physics and the corresponding
mathematics, and in particular geometry, have to be described by means of
discrete concepts on the Planck scale, one of the many problems one has to face
in this enterprise is to find the discrete protoforms of the building blocks of
our ordinary continuum physics and mathematics living on a smooth background,
and, perhaps more importantly, to understand how this continuum limit emerges from
the mentioned discrete structure. We model this underlying substratum as a
structurally dynamic cellular network (basically a generalisation of a cellular
automaton). We regard these continuum concepts and continuum spacetime in
particular as being emergent, coarse-grained and derived relative to this
underlying erratic and disordered microscopic substratum, which we would like
to call quantum geometry and which is expected to play by quite different
rules, namely generalized cellular automaton rules. A central role in our
analysis is played by a geometric renormalization group which creates (among
other things) a kind of sparse translocal network of correlations between the
points in classical continuous space-time and underlies, in our view, such
mysterious phenomena as holography and the black hole entropy-area law. The
same point of view holds for quantum theory which we also regard as a
low-energy, coarse-grained continuum theory, being emergent from something more
fundamental. In this paper we review our approach and compare it to the quantum
graphity framework.
|
The pattern of branched electron flow revealed by scanning gate microscopy
shows the distribution of ballistic electron trajectories. The details of the
pattern are determined by the correlated potential of remote dopants with an
amplitude far below the Fermi energy. We find that the pattern persists even if
the electron density is significantly reduced such that the change in Fermi
energy exceeds the background potential amplitude. The branch pattern is robust
against changes in charge carrier density, but not against changes in the
background potential caused by additional illumination of the sample.
|
Most stars are formed as star clusters in galaxies, which then disperse into
galactic disks. Upcoming exascale supercomputational facilities will enable
performing simulations of galaxies and their formation by resolving individual
stars (star-by-star simulations). This will substantially advance our
understanding of star formation in galaxies, star cluster formation, and
assembly histories of galaxies. In previous galaxy simulations, a simple
stellar population approximation was used. It is, however, difficult to improve
the mass resolution with this approximation. Therefore, a model for forming
individual stars that can be used in simulations of galaxies must be
established. In this first paper of a series of the SIRIUS (SImulations
Resolving IndividUal Stars) project, we demonstrate a stochastic star formation
model for star-by-star simulations. An assumed stellar initial mass function
(IMF) is randomly assigned to newly formed stars. We introduce a maximum search
radius to assemble the mass from surrounding gas particles to form star
particles. In this study, we perform a series of N-body/smoothed particle
hydrodynamics simulations of star cluster formation in turbulent molecular
clouds and ultra-faint dwarf galaxies as test cases. The IMF can be correctly
sampled if a maximum search radius that is larger than the value estimated from
the threshold density for star formation is adopted. In small clouds, the
formation of massive stars is highly stochastic because of the small number of
stars. We confirm that the star formation efficiency and threshold density do
not strongly affect the results. We find that our model can naturally reproduce
the relationship between the most massive stars and the total stellar mass of
star clusters. Herein, we demonstrate that our model can be applied to
simulations ranging from star clusters to galaxies over a wide range of
resolutions.
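The IMF sampling step can be illustrated with a short, self-contained sketch;
this is not the SIRIUS code, and the Salpeter slope and mass limits below are
assumptions chosen only for the example.

```python
# Sketch: inverse-transform sampling of stellar masses from a Salpeter IMF,
# dN/dm ~ m^(-2.35), drawing stars until the local gas reservoir is used up.
import numpy as np

def sample_imf(rng, m_min=0.1, m_max=100.0, alpha=2.35):
    """Draw one stellar mass [Msun] from a power-law IMF via the inverse CDF."""
    a = 1.0 - alpha                      # exponent after integrating m^(-alpha)
    u = rng.uniform()
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def spawn_stars(gas_mass, seed=42):
    """Convert an available gas reservoir into individual stars."""
    rng = np.random.default_rng(seed)
    stars = []
    while True:
        m = sample_imf(rng)
        if m > gas_mass:                 # reservoir exhausted: discard and stop
            break
        stars.append(m)
        gas_mass -= m
    return np.array(stars)

masses = spawn_stars(1.0e3)              # 1000 Msun of star-forming gas
print(len(masses), masses.max())         # many low-mass stars, few massive ones
```

In small reservoirs the high-mass tail is sparsely sampled, which is exactly
the stochasticity of massive-star formation described above.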
|
We report high-resolution measurements of the in-plane thermal expansion
anisotropy in the vicinity of the electronic nematic phase in Sr$_3$Ru$_2$O$_7$
down to very low temperatures and in varying magnetic field orientation. For
fields applied along the c-direction, a clear second-order phase transition is
found at the nematic phase boundary, with critical behavior compatible with the
two-dimensional Ising universality class (although this is not fully
conclusive). Measurements in a slightly tilted magnetic field reveal a broken
four-fold in-plane rotational symmetry, not only within the nematic phase, but
extending towards slightly larger fields. We also analyze the universal scaling
behavior expected for a metamagnetic quantum critical point, which is realized
outside the nematic region. The contours of the magnetostriction suggest a
relation between quantum criticality and the nematic phase.
|
Recently, it has been reported that, in going from oxygen to fluorine, the
addition of just one more proton provides extraordinary stability to fluorine,
which can bind six more neutrons beyond what oxygen can. It is shown here that
this surprising stability can be understood if the neutron-rich nuclei $^{24}O$
and $^{27}F$ are treated as bound states of eight and nine tritons,
respectively. The recently discovered $^{42}Si$ is also predicted to have a
bound-state structure of fourteen tritons.
|
We use particle-in-cell (PIC) simulations and simple analytic models to
investigate the laser-plasma interaction known as ponderomotive steepening.
When normally incident laser light reflects at the critical surface of a
plasma, the resulting standing electromagnetic wave modifies the electron
density profile via the ponderomotive force, which creates peaks in the
electron density separated by approximately half of the laser wavelength. What
is less well studied is how this charge imbalance accelerates ions towards the
electron density peaks, modifying the ion density profile of the plasma.
Idealized PIC simulations with an extended underdense plasma shelf are used to
isolate the dynamics of ion density peak growth for a 42 fs pulse from an 800
nm laser with an intensity of 10$^{18}$ W cm$^{-2}$. These simulations exhibit
sustained longitudinal electric fields of 200 GV m$^{-1}$, which produce
counter-streaming populations of ions reaching a few keV in energy. We compare
these simulations to theoretical models, and we explore how ion energy depends
on factors such as the plasma density and the laser wavelength, pulse duration,
and intensity. We also provide relations for the strength of longitudinal
electric fields and an approximate timescale for the density peaks to develop.
These conclusions may be useful for investigating the phenomenon of ponderomotive
steepening as advances in laser technology allow shorter and more intense
pulses to be produced at various wavelengths. We also discuss the parallels
with other work studying the interference from two counter-propagating laser
pulses.
|
Arguments are presented in favor of the idea that the solar dynamo may
operate not just at the bottom of the convection zone, i.e., in the tachocline,
but in a more distributed fashion throughout the entire convection zone.
The near-surface shear layer is likely to play an important role in this
scenario.
|
We define and study stacks which parametrize Lubin--Tate
$(\varphi,\Gamma)$-modules. By working at a perfectoid level, we compare these
with the Emerton--Gee stacks of cyclotomic $(\varphi,\Gamma)$-modules. As a
consequence, we deduce perfectness of the Herr complex in the Lubin--Tate
setting.
|
Single molecule X-ray scattering experiments using free electron lasers hold
the potential to resolve both single structures and structural ensembles of
biomolecules. However, molecular electron density determination has so far not
been achieved due to low photon counts, high noise levels and low hit rates.
Most analysis approaches therefore focus on large specimens such as entire
viruses, which scatter substantially more photons per image, such that it
becomes possible to determine the molecular orientation for each image. In
contrast, for small specimens such as proteins, the molecular orientation
cannot be determined for each image and must be considered random and unknown.
Here we develop and test a rigorous Bayesian approach to overcome these
limitations, which also takes into account intensity fluctuations, beam
polarization, irregular detector shapes, incoherent scattering and background
scattering. We demonstrate using synthetic scattering images that it is
possible to determine electron densities of small proteins in this extreme high
noise Poisson regime. Tests on published experimental data from the coliphage
PR772 achieved the detector-limited resolution of $9\,\mathrm{nm}$, using only
$0.01\,\%$ of the available photons per image.
|
Let $R$ be a lattice ordered ring along with a truncation in the sense of
Ball. We give a necessary and sufficient condition on $R$ for its unitization
$R\oplus\mathbb{Q}$ to be again a lattice ordered ring. Also, we shall see that
$R\oplus\mathbb{Q}$ is a lattice ordered ring for at most one truncation.
Particular attention will be paid to the Archimedean case. More precisely, we
shall identify the unique truncation on an Archimedean $\ell$-ring $R$ which
makes $R\oplus\mathbb{Q}$ into a lattice ordered ring.
|
The stepwise coupled-mode model is a classic approach for solving
range-dependent sound propagation problems. Existing coupled-mode programs have
disadvantages such as high computational cost, weak adaptability to complex
ocean environments and numerical instability. In this paper, a new algorithm is
designed that uses an improved range normalization and global matrix approach
to address range dependence in ocean environments. Due to its high accuracy in
solving differential equations, the spectral method has recently been applied
to range-independent normal modes and has achieved remarkable results. This
algorithm uses the Chebyshev--Tau spectral method to solve for the eigenmodes
in the range-independent segments. The main steps of the algorithm are
parallelized, so OpenMP multithreading technology is also applied for further
acceleration. Based on this algorithm, an efficient program is developed, and
numerical simulations verify that this algorithm is reliable, accurate and
broadly applicable. Compared with the existing coupled-mode programs, the newly developed
program is more stable and efficient at comparable accuracies and can solve
waveguides in more complex and realistic ocean environments.
|
Time-resolved optical lineshapes are calculated using a second-order
inhomogeneous cumulant expansion. The calculation shows that in the
inhomogeneous limit the optical spectra are determined solely by two-time
correlation functions. Therefore, measurements of the Stokes-shift correlation
function and the inhomogeneous linewidth cannot provide information about the
heterogeneity lifetime for systems exhibiting dynamic heterogeneities. The
theoretical results are illustrated using a stochastic model for the optical
transition frequencies. The model rests on the assumption that the transition
frequencies are coupled to the environmental relaxation of the system. The
latter is chosen according to a free-energy landscape model of dynamically
heterogeneous systems. The model calculations show that the available
experimental data are fully compatible with a heterogeneity lifetime on the
order of the primary relaxation time.
|
We develop a variational approximation to the entanglement entropy for scalar
$\phi^4$ theory in 1+1, 2+1, and 3+1 dimensions, and then examine the
entanglement entropy as a function of the coupling. We find that in 1+1 and 2+1
dimensions, the entanglement entropy of $\phi^4$ theory as a function of
coupling is monotonically decreasing and convex. While $\phi^4$ theory with
positive bare coupling in 3+1 dimensions is thought to lead to a trivial free
theory, we analyze a version of $\phi^4$ with infinitesimal negative bare
coupling, an asymptotically free theory known as precarious $\phi^4$ theory,
and explore the monotonicity and convexity of its entanglement entropy as a
function of coupling. Within the variational approximation, the stability of
precarious $\phi^4$ theory is related to the sign of the first and second
derivatives of the entanglement entropy with respect to the coupling.
|
We study various perturbations and their holographic interpretation for
non-Abelian T-dual of $ AdS_5 \times S^5 $ where the T-duality is applied along
the $ SU(2) $ of $ AdS_5 $. This paper focuses on two types of perturbations,
namely the scalar and the vector fields on NATD of $ AdS_5 \times S^5 $. For
scalar perturbations, the corresponding solutions could be categorised into two
classes. For one of these classes of solutions, we build up the associated
holographic dictionary where the asymptotic radial mode sources scalar
operators for the $ (0+1) $d matrix model. These scalar operators correspond to
either a marginal or an irrelevant deformation of the dual matrix model at
strong coupling. We calculate the two-point correlation function between these scalar
operators and explore their high- as well as low-frequency behaviour. We also
discuss the completion of these geometries by setting an upper cut-off along
the holographic axis and discuss the corresponding corrections to the scalar
correlators in the dual matrix model. Finally, we extend our results for vector
perturbations where we obtain asymptotic solutions for a particular class of
modes. These are further used to calculate the boundary charge density at
finite chemical potential.
|
We employ a three-dimensional (3D) reconstruction technique, for the first
time, to study the kinematics of six coronal mass ejections (CMEs), using
images obtained from the COR1 and COR2 coronagraphs on board the twin STEREO
spacecraft, as well as the eruptive prominences (EPs) associated with three of
them, using images from the Extreme UltraViolet Imager (EUVI).
EPs and leading edges (LEs) of all the CMEs was identified and tracked in
images from the two spacecraft, and a stereoscopic reconstruction technique was
used to determine the 3D coordinates of these features. True velocity and
acceleration were determined from the temporal evolution of the true height of
the CME features. Our study of kinematics of the CMEs in 3D reveals that the
CME leading edge undergoes maximum acceleration typically below 2R$_\odot$.
The acceleration profiles of CMEs associated with flares and prominences
exhibit different behaviour. While the CMEs not associated with prominences
show a bimodal acceleration profile, those associated with prominences do not.
Two of the three associated prominences in the study show a high and rising
value of acceleration up to a distance of almost 4R$_\odot$, but the acceleration
of the corresponding CME LE does not show the same behaviour, suggesting that
the two may not always be driven by the same mechanism. One of the CMEs,
although associated with a C-class flare, showed an unusually high acceleration
of over 1500 m s$^{-2}$. Our results therefore suggest that only the
flare-associated CMEs undergo residual acceleration, which indicates that the
flux injection theoretical model holds good for the flare-associated CMEs, but
a different mechanism should be considered for EP-associated CMEs.
|
The effect of the appearance of a soft-phase core in the center of a polytropic
star is analyzed by means of linear response theory. Approximate formulae for the
changes of radius, moment of inertia and mass-energy of non-rotating
configuration with arbitrary adiabatic indices are presented, followed by an
example evaluation of astrophysical observables.
|
With the strong experimental evidence for standard neutrino masses and mixings,
there now exists the possibility of the lepton flavor violating process
$e^+ \mu^- \to W^+ W^-$, which would occur via t-channel neutrino exchange
induced by neutrino mixings. We consider Langacker's generalized neutrino
mixings, including ordinary (canonical $SU(2)_L \times U(1)_Y$ assignments),
exotic (non-canonical $SU(2)_L \times U(1)_Y$ assignments) and singlet
neutrinos, leading to light and heavy mass eigenstates. Constraints on lepton
flavor violating (LFV) ordinary and heavy neutrino overlap parameters are
obtained by using the current experimental bounds on the LFV process
$\mu \to e \gamma$. These constraints are used to analyze the dependence of the
differential cross section and angular distribution, for the process
$e^+ \mu^- \to W^+ W^-$, on the mass of the heavy (exotic) neutrino and the
c.m. energy $\sqrt{s}$. The possibility of obtaining signatures of exotic
neutrino mixings at an $e$-$\mu$ collider is discussed.
|
A uniform Roe corona is the quotient of the uniform Roe algebra of a metric
space by the ideal of compact operators. Among other results, we show that it
is consistent with ZFC that isomorphism between uniform Roe coronas implies
coarse equivalence between the underlying spaces, for the class of uniformly
locally finite metric spaces which coarsely embed into a Hilbert space.
Moreover, for uniformly locally finite metric spaces with property A, it is
consistent with ZFC that isomorphism between the uniform Roe coronas is
equivalent to bijective coarse equivalence between some of their cofinite
subsets. We also find locally finite metric spaces such that the isomorphism of
their uniform Roe coronas is independent of ZFC. All set-theoretic
considerations in this paper are relegated to two 'black box' principles.
|
This work studies the quantum query complexity of Boolean functions in a
scenario where it is only required that the query algorithm succeeds with a
probability strictly greater than 1/2. We show that, just as in the
communication complexity model, the unbounded error quantum query complexity is
exactly half of its classical counterpart for any (partial or total) Boolean
function. Moreover, we show that the "black-box" approach to convert quantum
query algorithms into communication protocols by Buhrman-Cleve-Wigderson
[STOC'98] is optimal even in the unbounded error setting.
We also study a setting related to the unbounded error model, called the
weakly unbounded error setting, where the cost of a query algorithm is given by
q + log(1/(2(p-1/2))), where q is the number of queries made and p > 1/2 is the
success probability of the algorithm. In contrast to the case of communication
complexity, we show a tight Theta(log n) separation between quantum and
classical query complexity in the weakly unbounded error setting for a partial
Boolean function. We also show the asymptotic equivalence between them for some
well-studied total Boolean functions.
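In display form, the cost measure used above for a weakly unbounded error
algorithm making $q$ queries and succeeding with probability $p > 1/2$ reads
\[
  \mathrm{cost} \;=\; q \;+\; \log\frac{1}{2\,(p - 1/2)},
\]
so that algorithms whose success probability is barely above $1/2$ pay a large
additive penalty.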
|
Compactifications of the heterotic string on special T^d/Z_2 orbifolds
realize a landscape of string models with 16 supercharges and a gauge group on
the left-moving sector of reduced rank d+8. The momenta of untwisted and
twisted states span a lattice known as the Mikhailov lattice II_{(d)}, which is
not self-dual for d > 1. By using computer algorithms which exploit the
properties of lattice embeddings, we perform a systematic exploration of the
moduli space for d=1 and 2, and give a list of maximally enhanced points where
the U(1)^{d+8} enhances to a rank d+8 non-Abelian gauge group. For d = 1, these
groups are simply-laced and simply-connected, and in fact can be obtained from
the Dynkin diagram of E_{10}. For d = 2 there are also symplectic and
doubly-connected groups. For the latter we find the precise form of their
fundamental groups from embeddings of lattices into the dual of II_{(2)}.
Our results easily generalize to d > 2.
|
We study the characteristic probability density distribution of random flat
band models by machine learning. The models considered here are constructed on
the basis of the molecular-orbital representation, which guarantees the
existence of the macroscopically degenerate zero-energy modes even in the
presence of randomness. We find that flat band states are successfully
distinguished from conventional extended and localized states, indicating the
characteristic feature of the flat band states. We also find that the flat band
states can be detected even when the target data are defined on a different
lattice from the training data, which implies a universal feature of the flat
band states constructed by the molecular-orbital representation.
|
We identify the norm of the semigroup generated by the non-self-adjoint
harmonic oscillator acting on $L^2(\Bbb{R})$, for all complex times where it is
bounded. We relate this problem to embeddings between Gaussian-weighted spaces
of holomorphic functions, and we show that the same technique applies, in any
dimension, to the semigroup $e^{-tQ}$ generated by an elliptic quadratic
operator acting on $L^2(\Bbb{R}^n)$. The method used --- identifying the
exponents of sharp products of Mehler formulas --- is elementary and is
inspired by more general works of L. H\"ormander, A. Melin, and J. Sj\"ostrand.
|
We present new X-ray and radio data of the LMC SNR candidate DEM L205,
obtained by XMM-Newton and ATCA, along with archival optical and infrared
observations. We use data at various wavelengths to study this object and its
complex neighbourhood, in particular in the context of the star formation
activity, past and present, around the source. We analyse the X-ray spectrum to
derive some of the remnant's properties, such as its age and explosion energy. Supernova
remnant features are detected at all observed wavelengths: soft and extended
X-ray emission is observed, arising from a thermal plasma with a temperature kT
between 0.2 keV and 0.3 keV. Optical line emission is characterised by an
enhanced [SII]/Halpha ratio and a shell-like morphology, correlating with the
X-ray emission. The source is not or only tentatively detected at near-infrared
wavelengths (< 10 microns), but arc-like emission is detected at
mid- and far-infrared wavelengths (24 and 70 microns) that can be unambiguously
associated with the remnant. We suggest that thermal emission from dust heated
by stellar radiation and shock waves is the main contributor to the infrared
emission. Finally, an extended and faint non-thermal radio emission correlates
with the remnant at other wavelengths and we find a radio spectral index
between -0.7 and -0.9, within the range for SNRs. The size of the remnant is
~79x64 pc and we estimate a dynamical age of about 35000 years. We definitively
confirm DEM L205 as a new SNR. This object ranks amongst the largest remnants
known in the LMC. The numerous massive stars and the recent outburst in star
formation around the source strongly suggest that a core-collapse supernova is
the progenitor of this remnant. (abridged)
|
Studies of dark matter models lie at the interface of astrophysics,
cosmology, nuclear physics and collider physics. Constraining such models
entails the capability to compare their predictions to a wide range of
observations. In this review, we present the impact of global constraints on a
specific class of models, called dark matter simplified models. These models
have been adopted in the context of collider studies to classify the possible
signatures due to dark matter production, with a reduced number of free
parameters. We classify the models that have been analysed so far and for each
of them we review in detail the complementarity of relic density, direct and
indirect searches with respect to the LHC searches. We also discuss the
capabilities of each type of search to identify regions where individual
approaches to dark matter detection are the most relevant to constrain the
model parameter space. Finally we provide a critical overview on the validity
of the dark matter simplified models and discuss the caveats for the
interpretation of the experimental results extracted for these models.
|
We consider a general linear control system and a general quadratic cost,
where the state evolves continuously in time and the control is sampled, i.e.,
is piecewise constant over a subdivision of the time interval. This is the
framework of a linear-quadratic optimal sampled-data control problem. As a
first result, we prove that, as the sampling periods tend to zero, the optimal
sampled-data controls converge pointwise to the optimal permanent control.
Then, we extend the classical Riccati theory to the sampled-data control
framework, by developing two different approaches: the first one uses a
recently established version of the Pontryagin maximum principle for optimal
sampled-data control problems, and the second one uses an adequate version of
the dynamic programming principle. In both cases, we obtain a closed-loop expression
for the optimal sampled-data controls of linear-quadratic problems.
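As a hedged numerical sketch (not the paper's construction), the reduction of
the sampled-data problem to a discrete-time one can be seen as follows: the
system matrices are discretized by a zero-order hold over the sampling period,
and the feedback gain is obtained from a backward Riccati recursion. The
double-integrator system and weights below are arbitrary illustrations.

```python
# Sketch: zero-order-hold discretization + discrete Riccati recursion for a
# sampled-data LQ problem. As the period h shrinks, the gain approaches the
# permanent-control LQR gain, mirroring the convergence result above.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q, R, h = np.eye(2), np.array([[1.0]]), 0.1

# Zero-order-hold matrices via the augmented matrix exponential.
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * h)
Ad, Bd = M[:2, :2], M[:2, 2:]

# Backward Riccati recursion (running costs scaled by h), iterated to a
# fixed point.
P = Q.copy()
for _ in range(200):
    K = np.linalg.solve(h * R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = h * Q + Ad.T @ P @ (Ad - Bd @ K)

print("sampled-data feedback gain:", K)
```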
|
We investigate the possible total radiated energy produced by a binary black
hole system containing non-vanishing total angular momentum. For the scenarios
considered, we find that the total radiated energy does not exceed 1%.
Additionally we explore the gravitational radiation field and the variation of
angular momentum in the process.
|
RRhGe (R=Tb, Dy, Er, Tm) compounds have been studied by a number of
experimental probes and theoretical ab initio calculations. These compounds
show very interesting magnetic and electrical properties. All the compounds are
antiferromagnetic with some of them showing spin-reorientation transition at
low temperatures. The magnetocaloric effect (MCE) estimated from magnetization
data shows very good values in all of these compounds. The electrical
resistivity indicates the metallic behavior of these compounds. The
magnetoresistance (MR) is negative near the ordering temperatures and positive
at low temperatures. The electronic
structure calculations accounting for electronic correlations in the 4f
rare-earth shell reveal the closeness of the antiferromagnetic ground state and
other types of magnetic orderings in the rare-earth sublattice.
|
Could a laser field lead to the much sought-after tunable bandgaps in
graphene? By using Floquet theory combined with Green's functions techniques,
we predict that a laser field in the mid-infrared range can produce observable
bandgaps in the electronic structure of graphene. Furthermore, we show how they
can be tuned by using the laser polarization. Our results could serve as
guidance in the design of opto-electronic nano-devices.
|
Machine learning inference pipelines commonly encountered in data science and
industry often require real-time responsiveness due to their user-facing
nature. However, meeting this requirement becomes particularly challenging when
certain input features require aggregating a large volume of data online.
Recent literature on interpretable machine learning reveals that most machine
learning models exhibit a notable degree of resilience to variations in input.
This suggests that machine learning models can effectively accommodate
approximate input features with minimal discernible impact on accuracy. In this
paper, we introduce Biathlon, a novel ML serving system that leverages the
inherent resilience of models and determines the optimal degree of
approximation for each aggregation feature. This approach enables maximum
speedup while ensuring a guaranteed bound on accuracy loss. We evaluate
Biathlon on real pipelines from both industry applications and data science
competitions, demonstrating its ability to meet real-time latency requirements
by achieving 5.3x to 16.6x speedup with almost no accuracy loss.
|
Jets are produced in early stages of heavy-ion collisions and undergo
modified showering in the quark-gluon plasma (QGP) medium relative to a vacuum
case. These modifications can be measured using observables like jet momentum
profile and generalized angularities to study the details of jet-medium
interactions. Jet momentum profile ($\rho(r)$) encodes radially differential
information about jet broadening and has shown migration of charged energy
towards the jet periphery in Pb+Pb collisions at the LHC. Measurements of
generalized angularities (girth $g$ and momentum dispersion $p_T^D$) and LeSub
(difference between leading and subleading constituents) from Pb+Pb collisions
at the LHC show harder, or more quark-like jet fragmentation, in the presence
of the medium. Measuring these distributions in heavy-ion collisions at RHIC
will help us further characterize the jet-medium interactions in a phase-space
region complementary to that of the LHC. In this contribution, we present the
first measurements of fully corrected $g$, $p_T^D$ and LeSub observables using
hard-core jets in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV, collected by
the STAR experiment at RHIC.
|
The European Solar Telescope (EST) is a project for a new-generation solar
telescope. It has a large aperture of 4~m, which is necessary for achieving
high spatial and temporal resolution. The high polarimetric sensitivity of the
EST will allow measurement of the magnetic field in the solar atmosphere with
unprecedented precision. Here, we summarise the recent advancements in the
realisation of the EST project regarding the hardware development and the
refinement of the science requirements.
|
Given a graph whose arc traversal times vary over time, the Time-Dependent
Travelling Salesman Problem consists in finding a Hamiltonian tour of least
total duration covering the vertices of the graph. The main goal of this work
is to define tight upper bounds for this problem by reusing the information
gained when solving instances with similar features. This is customary in
distribution management, where vehicle routes have to be generated over and
over again with similar input data. To this aim, we devise an upper bounding
technique based on the solution of a classical (and simpler) time-independent
Asymmetric Travelling Salesman Problem, where the constant arc costs are
suitably defined by the combined use of a Linear Program and a mix of
unsupervised and supervised Machine Learning techniques. The effectiveness of
this approach has been assessed through a computational campaign on the real
travel time functions of two European cities: Paris and London. The overall
average gap between our heuristic and the best-known solutions is about
0.001\%. For 31 instances, new best solutions have been obtained.
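The overall idea of freezing time-dependent arc costs into constant surrogates
can be caricatured in a few lines; this toy sketch simply averages the travel
times and applies a nearest-neighbour heuristic, whereas the paper derives the
constant costs from a Linear Program combined with Machine Learning and solves
the resulting Asymmetric Travelling Salesman Problem properly.

```python
# Toy sketch: constant surrogate costs from time-dependent travel times,
# followed by a greedy Hamiltonian tour that yields an upper bound.
import numpy as np

rng = np.random.default_rng(1)
n, periods = 6, 24
tt = rng.uniform(5, 30, size=(n, n, periods))  # travel time per departure slot
for i in range(n):
    tt[i, i, :] = np.inf                       # forbid self-loops

const_cost = tt.mean(axis=2)                   # surrogate constant arc costs

def nearest_neighbour(cost, start=0):
    """Greedy tour under constant costs (a crude upper-bounding heuristic)."""
    tour, free = [start], set(range(len(cost))) - {start}
    while free:
        nxt = min(free, key=lambda j: cost[tour[-1], j])
        tour.append(nxt)
        free.discard(nxt)
    return tour + [start]                      # close the tour

print(nearest_neighbour(const_cost))
```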
|
Certain retroviruses, including HIV, insert their DNA in a non-random
fraction of the host genome via poorly understood selection mechanisms. Here,
we develop a biophysical model for retroviral integrations as stochastic and
quasi-equilibrium topological reconnections between polymers. We discover that
physical effects, such as DNA accessibility and elasticity, play important and
universal roles in this process. Our simulations predict that integration is
favoured within nucleosomal and flexible DNA, in line with experiments, and
that these biases arise due to competing energy barriers associated with DNA
deformations. By considering a long chromosomal region in human T-cells during
interphase, we discover that at these larger scales integration sites are
predominantly determined by chromatin accessibility. Finally, we propose and
solve a reaction-diffusion problem that recapitulates the distribution of HIV
hot-spots within T-cells. With few generic assumptions, our model can
rationalise experimental observations and identifies previously unappreciated
physical contributions to retroviral integration site selection.
|
The amount of late-decaying massive particles (e.g., gravitinos or moduli)
produced in the evaporation of primordial black holes (PBHs) of mass
$M_{\rm BH} \lesssim 10^9$ g is calculated. Limits imposed by big-bang
nucleosynthesis on the abundance of these particles are used to constrain the
initial PBH mass fraction $\beta$ (the ratio of the PBH energy density to the
critical energy density at formation) as $\beta \lesssim 5\times10^{-19}\,
(x_\phi/6\times10^{-3})^{-1} (M_{\rm BH}/10^9\,{\rm g})^{-1/2}
(\bar{Y}_\phi/10^{-14})$, where $x_\phi$ is the fraction of the PBH luminosity
going into gravitinos or moduli, and $\bar{Y}_\phi$ is the upper bound imposed
by nucleosynthesis on the ratio of the number density to the entropy density of
gravitinos or moduli. This notably implies that such PBHs should never come to
dominate the cosmic energy density.
|
The mixture extension of exponential family principal component analysis
(EPCA) was designed to encode much more structural information about data
distribution than the traditional EPCA does. For example, due to the linearity
of EPCA's essential form, nonlinear cluster structures cannot be easily
handled, but they are explicitly modeled by the mixing extensions. However, the
traditional mixture of local EPCAs has the problem of model redundancy, i.e.,
overlaps among mixing components, which may cause ambiguity for data
clustering. To alleviate this problem, in this paper, a
repulsiveness-encouraging prior is introduced among mixing components and a
diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
Specifically, a determinantal point process (DPP) is exploited as a
diversity-encouraging prior distribution over the joint local EPCAs. As
required, a matrix-valued measure for L-ensemble kernel is designed, within
which, $\ell_1$ constraints are imposed to facilitate selecting effective PCs
of local EPCAs, and angular based similarity measure are proposed. An efficient
variational EM algorithm is derived to perform parameter learning and hidden
variable inference. Experimental results on both synthetic and real-world
datasets confirm the effectiveness of the proposed method in terms of model
parsimony and generalization ability on unseen test data.
|
A significant fraction of RR Lyrae stars exhibits amplitude and/or phase
modulation known as the Blazhko effect. The oscillation spectra suggest
that, at least in most cases, the excitation of nonradial modes in addition
to the dominant radial modes is responsible for the effect. Though model
calculations predict that nonradial modes may be excited, there are problems
with explaining their observed properties in terms of finite amplitude
development of the linear instability. We propose a scenario, which like some
previous, postulates energy transfer from radial to nonradial modes, but avoids
those problems. The scenario predicts lower amplitudes in Blazhko stars. We
check this prediction with a new analysis of the Galactic bulge RR Lyrae stars
from OGLE-II database. The effect is seen, but the amplitude reduction is
smaller than predicted.
|
Although wireless sensor networks (WSNs) are powerful in monitoring physical
events, the data collected from a WSN are almost always incomplete if the
surveyed physical event spreads over a wide area. The reason for this
incompleteness is twofold: i) insufficient network coverage and ii) data
aggregation for energy saving. Whereas the existing recovery schemes only
tackle the second aspect, we develop Dual-lEvel Compressed Aggregation (DECA)
as a novel framework to address both aspects. Specifically, DECA allows a high
fidelity recovery of a widespread event, under the situations that the WSN only
sparsely covers the event area and that an in-network data aggregation is
applied for traffic reduction. Exploiting both the low-rank nature of
real-world events and the redundancy in sensory data, DECA combines matrix
completion with a fine-tuned compressed sensing technique to conduct a
dual-level reconstruction process. We demonstrate that DECA can recover a
widespread event while collecting less than 5% of the data, relative to the
dimension of the event. Performance evaluation based on both synthetic
and real data sets confirms the recovery fidelity and energy efficiency of our
DECA framework.
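To illustrate the matrix-completion level of such a dual-level reconstruction,
the following generic singular-value-thresholding sketch recovers a low-rank
matrix from a subset of its entries; it is not the DECA algorithm, and the
sampling rate and threshold are illustrative.

```python
# Sketch: iterative singular value thresholding for low-rank matrix
# completion, the kind of primitive underlying the first recovery level.
import numpy as np

rng = np.random.default_rng(0)
U, V = rng.normal(size=(40, 3)), rng.normal(size=(3, 50))
M = U @ V                                      # low-rank "event" matrix
mask = rng.uniform(size=M.shape) < 0.3         # fraction of observed entries

X = np.zeros_like(M)
for _ in range(500):
    X[mask] = M[mask]                          # enforce the observed entries
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = (u * np.maximum(s - 1.0, 0.0)) @ vt    # soft-threshold singular values

err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(f"relative error on unobserved entries: {err:.3f}")
```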
|
Learning and reasoning about physical phenomena is still a challenge in
robotics development, and the computational sciences play a central role in the
search for accurate methods able to provide explanations for past events and
rigorous forecasts of future situations. We propose a thermodynamics-informed
active learning strategy for fluid perception and reasoning from observations.
As a model problem, we take the sloshing phenomena of different fluids
contained in a glass. Starting from full-field and high-resolution synthetic
data for a particular fluid, we develop a method for the tracking (perception)
and analysis (reasoning) of any previously unseen liquid whose free surface is
observed with a commodity camera. This approach demonstrates the importance of
physics and knowledge not only in data-driven (grey box) modeling but also in
the correction for real physics adaptation in low data regimes and partial
observations of the dynamics. The method presented is extensible to other
domains such as the development of cognitive digital twins, able to learn from
observation of phenomena for which they have not been trained explicitly.
|
We derive central limit theorems for the Wasserstein distance between the
empirical distributions of Gaussian samples. We distinguish the cases in which
the underlying laws are the same or different. The results are based on the
(quadratic) Fréchet differentiability of the Wasserstein distance in the
Gaussian case. Extensions to elliptically symmetric distributions are discussed
as well as several applications such as bootstrap and statistical testing.
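The population quantity behind these limit theorems has a well-known closed
form for Gaussian laws, which a few lines of numpy can evaluate; the formula
below is the standard Gaussian expression for the squared 2-Wasserstein
distance, not code from the paper.

```python
# W2^2(N(m1,C1), N(m2,C2)) = |m1-m2|^2 + tr(C1 + C2 - 2 (C1^.5 C2 C1^.5)^.5)
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gaussian(m1, C1, m2, C2):
    root = sqrtm(C1)
    cross = sqrtm(root @ C2 @ root)
    return np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * np.real(cross))

m1, m2 = np.zeros(2), np.ones(2)
C1, C2 = np.eye(2), 2 * np.eye(2)
print(w2_squared_gaussian(m1, C1, m2, C2))  # 2 + (6 - 4*sqrt(2)) ~ 2.343
```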
|
We present the results of a study of specific heat on a single crystal of
Pr$_{0.63}$Ca$_{0.37}$MnO$_3$ performed over the temperature range 3 K-300 K in
the presence of 0 and 8 T magnetic fields. An estimate of the entropy and latent
heat in a magnetic field at the first-order charge-ordering (CO) transition is
presented. The total entropy change at the CO transition, which is $\approx$ 1.8
J/mol K at 0 T, decreases to $\sim$ 1.5 J/mol K in the presence of an 8 T
magnetic field. Our measurements enable us to estimate the latent heat $L_{CO}$
$\approx$ 235 J/mol involved in the CO transition. Since the entropy of the
ferromagnetic metallic (FMM) state is comparable to that of the charge-ordered
insulating (COI) state, a subtle change in entropy stabilises either of these
two states. Our low-temperature specific heat measurements reveal that the
linear term is absent at 0 T and, surprisingly, is not seen even in the
metallic FMM state.
|
Magnetars are the strongest magnets in the present universe and the
combination of extreme magnetic field, gravity and density makes them unique
laboratories to probe current physical theories (from quantum electrodynamics
to general relativity) in the strong field limit. Magnetars are observed as
peculiar, burst-active X-ray pulsars, the Anomalous X-ray Pulsars (AXPs) and
the Soft Gamma Repeaters (SGRs); the latter have also emitted three "giant
flares," extremely powerful events during which the luminosity can reach up to
10^47 erg/s for about one second. The last five years have witnessed an explosion in
magnetar research which has led, among other things, to the discovery of
transient, or "outbursting," and "low-field" magnetars. Substantial progress
has been made also on the theoretical side. Quite detailed models for
explaining the magnetars' persistent X-ray emission, the properties of the
bursts, and the flux evolution in transient sources have been developed and
confronted with observations. New insight on neutron star asteroseismology has
been gained through improved models of magnetar oscillations. The long-debated
issue of magnetic field decay in neutron stars has been addressed, and its
importance recognized in relation to the evolution of magnetars and to the
links among magnetars and other families of isolated neutron stars. The aim of
this paper is to present a comprehensive overview in which the observational
results are discussed in the light of the most up-to-date theoretical models
and their implications. This addresses not only the particular case of magnetar
sources, but the more fundamental issue of how physics in strong magnetic
fields can be constrained by the observations of these unique sources.
|
We evaluate machine learning methods for event classification in the
Active-Target Time Projection Chamber detector at the National Superconducting
Cyclotron Laboratory (NSCL) at Michigan State University. An automated method
to single out the desired reaction product would result in more accurate
physics results as well as a faster analysis process. Binary and multi-class
classification methods were tested on data produced by the $^{46}$Ar(p,p)
experiment run at the NSCL in September 2015. We found a Convolutional Neural
Network to be the most successful classifier of proton scattering events for
transfer learning. Results from this investigation and recommendations for
event classification in future experiments are presented.
|
We introduce solitons supported by Bessel photonic lattices in cubic
nonlinear media. We show that the cylindrical geometry of the lattice, with
several concentric rings, affords unique soliton properties and dynamics. In
particular, besides the lowest-order solitons trapped in the center of the
lattice, we find soliton families trapped at different lattice rings. Such
solitons can be set into controlled rotation inside each ring, thus featuring
novel types of in-ring and inter-ring soliton interactions.
|
Automatic abstractive summaries often distort or fabricate facts in the source
article. This inconsistency between the summary and the original text has
seriously impacted the applicability of such systems. We propose a fact-aware summarization
model FASum to extract and integrate factual relations into the summary
generation process via graph attention. We then design a factual corrector
model FC to automatically correct factual errors from summaries generated by
existing systems. Empirical results show that the fact-aware summarization can
produce abstractive summaries with higher factual consistency compared with
existing systems, and the correction model improves the factual consistency of
given summaries via modifying only a few keywords.
|
Simultaneous stabilization problem arises in various systems and control
applications. This paper introduces a new approach to addressing this problem
in the multivariable scenario, building upon our previous findings in the
scalar case. The method utilizes a Riccati-type matrix equation known as the
Covariance Extension Equation, which yields all solutions parameterized in
terms of a matrix polynomial. The procedure is demonstrated through specific
examples.
|
Phenomena such as air pollution levels are of greatest interest when
observations are large, but standard prediction methods are not specifically
designed for large observations. We propose a method, rooted in extreme value
theory, which approximates the conditional distribution of an unobserved
component of a random vector given large observed values. Specifically, for
$\mathbf{Z}=(Z_1,...,Z_d)^T$ and $\mathbf{Z}_{-d}=(Z_1,...,Z_{d-1})^T$, the
method approximates the conditional distribution of
$[Z_d|\mathbf{Z}_{-d}=\mathbf{z}_{-d}]$ when $|\mathbf{z}_{-d}|>r_*$. The
approach is based on the assumption that $\mathbf{Z}$ is a multivariate
regularly varying random vector of dimension $d$. The conditional distribution
approximation relies on knowledge of the angular measure of $\mathbf{Z}$, which
provides explicit structure for dependence in the distribution's tail. As the
method produces a predictive distribution rather than just a point predictor,
one can answer any question posed about the quantity being predicted, and, in
particular, one can assess how well the extreme behavior is represented. Using
a fitted model for the angular measure, we apply our method to nitrogen dioxide
measurements in metropolitan Washington DC. We obtain a predictive distribution
for the air pollutant at a location given the air pollutant's measurements at
four nearby locations and given that the norm of the vector of the observed
measurements is large.
|
This paper attacks the challenging problem of video retrieval by text. In
such a retrieval paradigm, an end user searches for unlabeled videos by ad-hoc
queries described exclusively in the form of a natural-language sentence, with
no visual example provided. Given videos as sequences of frames and queries as
sequences of words, an effective sequence-to-sequence cross-modal matching is
crucial. To that end, the two modalities need to be first encoded into
real-valued vectors and then projected into a common space. In this paper we
achieve this by proposing a dual deep encoding network that encodes videos and
queries into powerful dense representations of their own. Our novelty is
two-fold. First, different from prior art that resorts to a specific
single-level encoder, the proposed network performs multi-level encoding that
represents the rich content of both modalities in a coarse-to-fine fashion.
Second, different from a conventional common space learning algorithm which is
either concept based or latent space based, we introduce hybrid space learning
which combines the high performance of the latent space and the good
interpretability of the concept space. Dual encoding is conceptually simple,
practically effective and end-to-end trained with hybrid space learning.
Extensive experiments on four challenging video datasets show the viability of
the new method.
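A conceptual sketch of multi-level sequence encoding, assuming a PyTorch
implementation with illustrative dimensions (this is not the authors' released
code), combines a mean-pooled global level, a recurrent level and a
convolutional level into one dense representation.

```python
# Sketch: coarse-to-fine encoding of a frame (or word) sequence by
# concatenating global, recurrent and convolutional representations.
import torch
import torch.nn as nn

class MultiLevelEncoder(nn.Module):
    def __init__(self, feat_dim=512, hid=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hid, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hid, hid, kernel_size=3, padding=1)

    def forward(self, x):                    # x: (batch, seq_len, feat_dim)
        level1 = x.mean(dim=1)               # global average (coarse)
        h, _ = self.gru(x)                   # temporal context
        level2 = h.mean(dim=1)
        c = torch.relu(self.conv(h.transpose(1, 2)))
        level3 = c.max(dim=2).values         # local patterns (fine)
        return torch.cat([level1, level2, level3], dim=1)

videos = torch.rand(4, 30, 512)              # 30 frame features per video
print(MultiLevelEncoder()(videos).shape)     # torch.Size([4, 1280])
```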
|
The process of aligning a pair of shapes is a fundamental operation in
computer graphics. Traditional approaches rely heavily on matching
corresponding points or features to guide the alignment, a paradigm that
falters when significant shape portions are missing. These techniques generally
do not incorporate prior knowledge about expected shape characteristics, which
can help compensate for any misleading cues left by inaccuracies exhibited in
the input shapes. We present an approach based on a deep neural network,
leveraging shape datasets to learn a shape-aware prior for source-to-target
alignment that is robust to shape incompleteness. In the absence of ground
truth alignments for supervision, we train a network on the task of shape
alignment using incomplete shapes generated from full shapes for
self-supervision. Our network, called ALIGNet, is trained to warp complete
source shapes to incomplete targets, as if the target shapes were complete,
thus essentially rendering the alignment partial-shape agnostic. We aim for the
network to develop specialized expertise over the common characteristics of the
shapes in each dataset, thereby achieving a higher-level understanding of the
expected shape space to which a local approach would be oblivious. We constrain
ALIGNet through an anisotropic total variation identity regularization to
promote piecewise smooth deformation fields, facilitating both partial-shape
agnosticism and post-deformation applications. We demonstrate that ALIGNet
learns to align geometrically distinct shapes, and is able to infer plausible
mappings even when the target shape is significantly incomplete. We show that
our network learns the common expected characteristics of shape collections,
without over-fitting or memorization, enabling it to produce plausible
deformations on unseen data during test time.
|
The effective electroweak Hamiltonian in the gradient-flow formalism is
constructed for the current-current operators through next-to-next-to-leading
order QCD. The results are presented for two common choices of the operator
basis. This paves the way for a consistent matching of perturbatively evaluated
Wilson coefficients and non-perturbative matrix elements evaluated by lattice
simulations.
|
This note quantifies, via a sharp inequality, an interplay between (a) the
characteristic rank of a vector bundle over a topological space X, (b) the
Z/2Z-Betti numbers of X, and (c) sums of the numbers of certain partitions of
integers. In a particular context, (c) is transformed into a sum of the readily
calculable Betti numbers of the real Grassmann manifolds.
|
As the pace of progress that has followed Moore's law continues to diminish,
it is critical that the US support Integrated Circuit (IC or chip) education
and research to maintain technological innovation. Furthermore, US economic
independence, security, and future international standing rely on having
on-shore IC design capabilities. New devices with disparate technologies,
improved design software toolchains and methodologies, and technologies to
integrate heterogeneous systems will be needed to advance IC design
capabilities. This will require rethinking both how we teach design to address
the new complexity and how we inspire student interest in a hardware systems
career path. The main recommendation of this workshop is that accessibility is
the key issue. To this end, a National Chip Design Center (NCDC) should be
established to further research and education by partnering academics and
industry to train our future workforce. This should not be limited to R1
universities, but should also include R2, community college, minority serving
institutions (MSI), and K-12 institutions to have the broadest effect. The NCDC
should support the access, development, and maintenance of open design tools,
tool flows, design kits, design components, and educational materials.
Open-source options should be emphasized wherever possible to maximize
accessibility. The NCDC should also provide access and support for chip
fabrication, packaging and testing for both research and educational purposes.
|
The process $\gamma p \to \phi p$ close to threshold is investigated focusing
on the role played by the {\it s}- and {\it u}-channel nucleonic resonances.
For this purpose, a recent quark model approach, based on the $SU(6)\otimes
O(3)$ symmetry with an effective Lagrangian, is extended to the $\phi$ meson
photoproduction. Another non-diffractive process, the {\it t}-channel $\pi^0$
exchange, is also included. The diffractive contribution is produced by the
{\it t}-channel Pomeron exchange. Contributions from the non-diffractive {\it
s}- and {\it u}-channel processes are found to be small in the case of cross
sections and polarization observables at forward angles. However, backward-angle
polarization asymmetries show high sensitivity to this non-diffractive process.
Different prescriptions to keep gauge invariance for the Pomeron exchange
amplitudes are investigated. Possible deviations from the exact $SU(6)\otimes
O(3)$ symmetry, due to the configuration mixing, are also discussed.
|
Deepfakes represent one of the toughest challenges in the world of
Cybersecurity and Digital Forensics, especially considering the high-quality
results obtained with recent generative AI-based solutions. Almost all
generative models leave unique traces in synthetic data that, if analyzed and
identified in detail, can be exploited to improve the generalization
limitations of existing deepfake detectors. In this paper we analyzed deepfake
images in the frequency domain generated by both GAN and Diffusion Model
engines, examining in detail the underlying statistical distribution of
Discrete Cosine Transform (DCT) coefficients. Recognizing that not all
coefficients contribute equally to image detection, we hypothesize the
existence of a unique ``discriminative fingerprint", embedded in specific
combinations of coefficients. To identify them, Machine Learning classifiers
were trained on various combinations of coefficients. In addition, the
Explainable AI (XAI) LIME algorithm was used to search for intrinsic
discriminative combinations of coefficients. Finally, we performed a robustness
test to analyze the persistence of traces by applying JPEG compression. The
experimental results reveal the existence of traces left by the generative
models that are more discriminative and persistent under JPEG compression attacks. Code and
dataset are available at https://github.com/opontorno/dcts_analysis_deepfakes.
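A minimal sketch of such an analysis pipeline, with random arrays standing in
for real and generated images (the repository above provides the actual data),
computes per-frequency DCT statistics over 8x8 blocks and feeds them to a
standard classifier.

```python
# Sketch: mean absolute DCT coefficient per frequency over 8x8 blocks as a
# 64-dimensional "fingerprint" feature vector, then a linear classifier.
import numpy as np
from scipy.fft import dctn
from sklearn.linear_model import LogisticRegression

def dct_features(img, block=8):
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
    coeffs = np.abs(dctn(blocks, axes=(1, 2), norm="ortho"))
    return coeffs.mean(axis=0).ravel()       # 64 features per image

rng = np.random.default_rng(0)
X = np.array([dct_features(rng.uniform(size=(64, 64))) for _ in range(100)])
y = rng.integers(0, 2, size=100)             # placeholder real/fake labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```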
|
We present a positivity conjecture for the coefficients of the development of
Jack polynomials in terms of power sums. This extends Stanley's ex-conjecture
about normalized characters of the symmetric group. We prove this conjecture
for partitions having a rectangular shape.
|
Modern concurrent programming benefits from a large variety of
synchronization techniques. These include conventional pessimistic locking, as
well as optimistic techniques based on conditional synchronization primitives
or transactional memory. Yet, it is unclear which of these approaches better
leverages the concurrency inherent in multi-cores.
In this paper, we compare the level of concurrency one can obtain by
converting a sequential program into a concurrent one using optimistic or
pessimistic techniques. To establish fair comparison of such implementations,
we introduce a new correctness criterion for concurrent programs, defined
independently of the synchronization techniques they use.
We treat a program's concurrency as its ability to accept a concurrent
schedule, a metric inspired by the theories of both databases and transactional
memory. We show that pessimistic locking can provide strictly higher
concurrency than transactions for some applications whereas transactions can
provide strictly higher concurrency than pessimistic locks for others. Finally,
we show that combining the benefits of the two synchronization techniques can
provide strictly more concurrency than any of them individually. We propose a
list-based set algorithm that is optimal in the sense that it accepts all
correct concurrent schedules. As we show via experimentation, the optimality in
terms of concurrency is reflected by scalability gains.
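To make the pessimistic baseline concrete, a minimal hand-over-hand
(lock-coupling) list-based set is sketched below in Python; this illustrates
fine-grained pessimistic locking only, not the schedule-optimal algorithm
proposed in the paper.

    # Minimal lock-coupling (hand-over-hand) sorted list-based set.
    import threading

    class Node:
        def __init__(self, key, nxt=None):
            self.key, self.next = key, nxt
            self.lock = threading.Lock()

    class LockCouplingSet:
        def __init__(self):  # sentinels guarantee pred/curr always exist
            self.head = Node(float('-inf'), Node(float('inf')))

        def insert(self, key):
            pred = self.head
            pred.lock.acquire()
            curr = pred.next
            curr.lock.acquire()
            try:
                while curr.key < key:        # hand-over-hand traversal
                    pred.lock.release()
                    pred, curr = curr, curr.next
                    curr.lock.acquire()
                if curr.key == key:
                    return False             # already present
                pred.next = Node(key, curr)
                return True
            finally:
                pred.lock.release()
                curr.lock.release()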
|
Recent results on open charm production at HERA are presented. Charm quarks
are identified via the reconstruction of D-mesons. The charm contribution to
the proton structure function is shown. Evidence for an exotic anti-charmed
baryon state observed by H1 is presented. The data show a narrow resonance in
the D*p invariant mass combination at 3099 +- 3 (stat) +- 5 (syst) MeV. The resonance
is interpreted as an anti-charmed baryon with minimal constituent quark content
uuddcbar together with its charge conjugate. Such a signal is not observed in a
similar preliminary ZEUS analysis.
|
This work is devoted to the study of integration with respect to binomial
measures. We develop interpolatory quadrature rules and study their properties.
Local error estimates for these rules are derived in a general framework.
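As a concrete sketch, assume the self-similar binomial measure on [0,1] that
assigns mass p to the left half and 1-p to the right half (an assumption about
the paper's setting). Its moments satisfy the recursion
m_k = (1-p) sum_{j<k} C(k,j) m_j / (2^k - 1) with m_0 = 1, and interpolatory
weights follow from moment matching at the chosen nodes:

    # Sketch: interpolatory quadrature for a binomial measure on [0,1].
    import numpy as np
    from math import comb

    def binomial_moments(p, kmax):
        """m_k = int x^k dmu from the self-similarity recursion."""
        m = [1.0]
        for k in range(1, kmax + 1):
            s = sum(comb(k, j) * m[j] for j in range(k))
            m.append((1 - p) * s / (2 ** k - 1))
        return np.array(m)

    def interpolatory_weights(nodes, p):
        """Weights w_i with sum_i w_i x_i^k = m_k for k = 0..n-1."""
        n = len(nodes)
        V = np.vander(nodes, n, increasing=True)   # V[i, k] = x_i**k
        return np.linalg.solve(V.T, binomial_moments(p, n - 1))

    # p = 1/2 recovers Lebesgue measure; nodes 0, 1/2, 1 give Simpson weights
    w = interpolatory_weights(np.array([0.0, 0.5, 1.0]), p=0.5)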
|
Using data obtained by the EUV Imaging Spectrometer (EIS) onboard Hinode, we
have performed a survey of obvious and persistent (without significant
damping) Doppler shift oscillations in the corona. We have found mainly two
types of oscillations from February to April in 2007. One type is found at loop
footpoint regions, with a dominant period around 10 minutes. They are
characterized by coherent behavior of all line parameters (line intensity,
Doppler shift, line width and profile asymmetry), apparent blue shift and
blueward asymmetry throughout almost the entire duration. Such oscillations
are likely to be signatures of quasi-periodic upflows (small-scale jets, or
coronal counterpart of type-II spicules), which may play an important role in
the supply of mass and energy to the hot corona. The other type of oscillation
is usually associated with the upper part of loops. They are most clearly seen
in the Doppler shift of coronal lines with formation temperatures between one
and two million degrees. The global wavelets of these oscillations usually peak
sharply around a period in the range of 3-6 minutes. No obvious profile
asymmetry is found and the variation of the line width is typically very small.
The intensity variation is often less than 2%. These oscillations are more
likely to be signatures of kink/Alfven waves rather than flows. In a few cases
there seems to be a pi/2 phase shift between the intensity and Doppler shift
oscillations, which may suggest the presence of slow mode standing waves
according to wave theories. However, we demonstrate that such a phase shift
could also be produced by loops moving into and out of a spatial pixel as a
result of Alfvenic oscillations. In this scenario, the intensity oscillations
associated with Alfvenic waves are caused by loop displacement rather than
density change.
|
Finite-state Markov models are widely used for modeling wireless channels
affected by a variety of non-idealities, ranging from shadowing to
interference. In an industrial environment, deriving a Markov model from the
wireless communication physics can be prohibitive, as it requires complete
knowledge of both the communication dynamics parameters and the
disturbances/interferers. In this work, a novel methodology is proposed to
learn a Markov model of a fading channel via historical data of the
signal-to-interference-plus-noise-ratio (SINR). Such methodology can be used to
derive a Markov jump model of a wireless control network, and thus to design a
stochastic optimal controller that takes into account the interdependence
between the plant and the wireless channel dynamics. The proposed method is
validated by comparing its prediction accuracy and control performance with
those of a stationary finite-state Markov chain derived assuming perfect
knowledge of the physical channel model and parameters of a WirelessHART
point-to-point communication based on the IEEE-802.15.4 standard.
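A minimal sketch of the learning step, assuming channel states are defined by
fixed SINR thresholds (the thresholds and trace below are placeholders):
quantize the historical SINR trace and estimate the transition matrix from
empirical transition counts.

    # Learn a finite-state Markov channel model from an SINR trace (sketch).
    import numpy as np

    def learn_markov_channel(sinr_db, thresholds_db):
        """Quantize SINR into states, then count state transitions."""
        states = np.digitize(sinr_db, thresholds_db)  # state per sample
        n = len(thresholds_db) + 1
        P = np.zeros((n, n))
        for s, t in zip(states[:-1], states[1:]):
            P[s, t] += 1.0
        rows = P.sum(axis=1, keepdims=True)
        return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

    # e.g. two thresholds -> 3-state chain (bad / medium / good channel)
    trace = np.random.randn(10_000) * 3 + 10      # placeholder SINR data
    P = learn_markov_channel(trace, [5.0, 12.0])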
|
With the confirmed detection of short gamma-ray burst (GRB) in association
with a gravitational wave signal, we present the first fully Bayesian {\it
Fermi}-GBM short GRB spectral catalog. Both peak flux and time-resolved
spectral results are presented. Additionally, we release the full posterior
distributions and reduced data from our sample. Following our previous study,
we introduce three variability classes based on the observed light curve
structure.
|
Ensembles of nitrogen-vacancy (NV) center spins in diamond offer a robust,
precise and accurate magnetic sensor. As their applications move beyond the
laboratory, practical considerations including size, complexity, and power
consumption become important. Here, we compare two commonly-employed NV
magnetometry techniques -- continuous-wave (CW) vs pulsed magnetic resonance --
in a scenario limited by total available optical power. We develop a consistent
theoretical model for the magnetic sensitivity of each protocol that
incorporates NV photophysics -- in particular, the incomplete spin
polarization associated with limited optical power. After comparing the
models' behaviour to experiments, we use them to predict the relative DC
sensitivity of
CW versus pulsed operation for an optical-power-limited, shot-noise-limited NV
ensemble magnetometer. We find a $\sim 2-3 \times$ gain in sensitivity for
pulsed operation, which is significantly smaller than seen in power-unlimited,
single-NV experiments. Our results provide a resource for practical sensor
development, informing protocol choice and identifying optimal operation
regimes when optical power is constrained.
|
We introduce a scheme for the parallel storage of frequency separated signals
in an optical memory and demonstrate that this dual-rail storage is a suitable
memory for high fidelity frequency qubits. The two signals are stored
simultaneously in the Zeeman-split Raman absorption lines of a cold atom
ensemble using gradient echo memory techniques. Analysis of the split-Zeeman
storage shows that the memory can be configured to preserve the relative
amplitude and phase of the frequency separated signals. In an experimental
demonstration dual-frequency pulses are recalled with 35% efficiency, 82%
interference fringe visibility, and 6 degrees phase stability. The fidelity of
the frequency-qubit memory is limited by frequency-dependent polarisation
rotation and ambient magnetic field fluctuations; our analysis describes how
these can be addressed in an alternative configuration.
|
We consider the spin orientation of the final Z bosons for the processes in
the Standard Model. We demonstrate that at the threshold energies of these
processes the analytical expressions for the Z boson polarization vectors and
alignment tensors coincide (e^+ e^- -> ZH, Z\gamma) or are very similar (e^+
e^- -> ZZ). In addition, we present interesting symmetry properties for the
spin orientation parameters.
|
It is known that under some transversality and curvature assumptions on the
hypersurfaces involved, the bilinear restriction estimate holds true with
better exponents than what would trivially follow from the corresponding linear
estimates. This subject was extensively studied for conic and parabolic
surfaces with sharp results proved by Wolff and Tao, and with later
generalizations by Lee. In this paper we provide a unified theory for general
hypersurfaces and clarify the role of curvature in this problem, by making
statements in terms of the shape operators of the hypersurfaces involved.
|
We are at the dawn of the deep learning explosion for smartphones. To bridge
the gap between research and practice, we present the first empirical study of
the 16,500 most popular Android apps, demystifying how smartphone apps exploit
deep learning in the wild. To this end, we build a new static tool that
dissects apps and analyzes their deep learning functions. Our study answers
three questions: which apps are the early adopters of deep learning, what do
they use deep learning for, and what do their deep learning models look like.
Our study has strong implications for app developers, smartphone vendors, and
deep learning R\&D. On one hand, our findings paint a promising picture of deep
learning for smartphones, showing the prosperity of mobile deep learning
frameworks as well as the prosperity of apps building their cores atop deep
learning. On the other hand, our findings urge optimizations on deep learning
models deployed on smartphones, the protection of these models, and validation
of research ideas on these models.
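As a toy illustration in the spirit of such a static tool (the actual tool is
far more thorough), one can scan an APK archive for native libraries and model
files associated with popular mobile DL frameworks; the name patterns below
are illustrative assumptions, not the paper's detection list.

    # Toy static check: does an APK ship deep learning artifacts?
    import zipfile

    DL_HINTS = ('libtensorflowlite', 'libtensorflow', 'libcaffe',
                'libncnn', 'libmace', '.tflite', '.pb', '.onnx')

    def uses_deep_learning(apk_path):
        with zipfile.ZipFile(apk_path) as apk:   # an APK is a zip archive
            names = apk.namelist()
        hits = [n for n in names if any(h in n.lower() for h in DL_HINTS)]
        return bool(hits), hits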
|
In the infinite-armed bandit problem, each arm's average reward is sampled
from an unknown distribution, and each arm can be sampled further to obtain
noisy estimates of the average reward of that arm. Prior work focuses on
identifying the best arm, i.e., estimating the maximum of the average reward
distribution. We consider a general class of distribution functionals beyond
the maximum, and propose unified meta algorithms for both the offline and
online settings, achieving optimal sample complexities. We show that online
estimation, where the learner can sequentially choose whether to sample a new
or existing arm, offers no advantage over the offline setting for estimating
the mean functional, but significantly reduces the sample complexity for other
functionals such as the median, maximum, and trimmed mean. The matching lower
bounds utilize several different Wasserstein distances. For the special case of
median estimation, we identify a curious thresholding phenomenon on the
indistinguishability between Gaussian convolutions with respect to the noise
level, which may be of independent interest.
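A minimal offline sketch of the plug-in idea: sample m arms, pull each n
times, and apply the functional of interest (here the median) to the empirical
arm means. The uniform arm-mean distribution and Gaussian reward noise are toy
assumptions, not part of the paper's setting.

    # Offline plug-in estimate of a functional of the arm-mean distribution.
    import numpy as np

    rng = np.random.default_rng(0)

    def offline_estimate(m=500, n=50, functional=np.median, noise=1.0):
        mu = rng.uniform(0, 1, size=m)               # unknown arm means
        rewards = mu[:, None] + noise * rng.standard_normal((m, n))
        return functional(rewards.mean(axis=1))      # plug-in estimate

    est = offline_estimate()   # near the true median 0.5 for large m, n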
|
Deep neural network compression techniques such as pruning and weight tensor
decomposition usually require fine-tuning to recover the prediction accuracy
when the compression ratio is high. However, conventional fine-tuning suffers
from the requirement of a large training set and the time-consuming training
procedure. This paper proposes a novel solution for knowledge distillation
from a few label-free samples, realizing both data efficiency and
training/processing
efficiency. We treat the original network as "teacher-net" and the compressed
network as "student-net". A 1x1 convolution layer is added at the end of each
layer block of the student-net, and we fit the block-level outputs of the
student-net to the teacher-net by estimating the parameters of the added
layers. We prove that the added layer can be merged without adding extra
parameters and computation cost during inference. Experiments on multiple
datasets and network architectures verify the method's effectiveness on
student-nets obtained by various network pruning and weight decomposition
methods. Our method can recover the student-net's accuracy to the same level
as conventional fine-tuning methods in minutes, while using only 1% of the
full training data, without labels.
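The mergeability claim can be verified in a few lines of numpy: a trailing
1x1 convolution folds into the preceding convolution's weights, so the added
alignment layers cost nothing at inference. This is a minimal sketch ignoring
biases and normalization layers.

    # Check: conv followed by 1x1 conv == single conv with merged weights.
    import numpy as np
    from scipy.signal import correlate2d

    def conv(x, W):  # x: (Cin, H, W), W: (Cout, Cin, k, k), 'valid' mode
        return np.stack([sum(correlate2d(x[c], W[o, c], mode='valid')
                             for c in range(x.shape[0]))
                         for o in range(W.shape[0])])

    x = np.random.randn(3, 8, 8)
    W1 = np.random.randn(4, 3, 3, 3)     # block's original conv weights
    W2 = np.random.randn(5, 4)           # added 1x1 alignment layer
    merged = np.einsum('oc,cikl->oikl', W2, W1)

    two_step = np.einsum('oc,chw->ohw', W2, conv(x, W1))
    one_step = conv(x, merged)
    assert np.allclose(two_step, one_step)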
|
Teleportation is a basic primitive for quantum communication and quantum
computing. We address the problem of continuous-variable (unconditional and
conditional) teleportation of a pure single-photon state and a mixed attenuated
single-photon state generally in a nonunity gain regime. Our figure of merit is
the maximum of negativity of the Wigner function that witnesses highly
non-classical feature of the teleported state. We find that negativity of the
Wigner function of the single-photon state can be {\em unconditionally}
teleported for arbitrarily weak squeezed state used to create the entangled
state shared in the teleportation. In contrast, for the attenuated
single-photon state there is a strict threshold squeezing one has to surpass in
order to successfully teleport the negativity of its Wigner function. The {\em
conditional} teleportation allows one to approach perfect transmission of the
single photon for arbitrarily low squeezing, at the cost of the success rate.
On the other hand, for the attenuated single photon, conditional teleportation
cannot overcome the squeezing threshold of the unconditional teleportation,
and it approaches the negativity of the input state only if the squeezing
simultaneously increases. However, as soon as the threshold squeezing is
surpassed the conditional teleportation still pronouncedly outperforms the
unconditional one. The main consequences for quantum communication and quantum
computing with continuous variables are discussed.
|
We study automatic title generation for a given block of text and present a
method called DTATG to generate titles. DTATG first extracts a small number of
central sentences that convey the main meanings of the text and are in a
suitable structure for conversion into a title. DTATG then constructs a
dependency tree for each of these sentences and removes certain branches using
a Dependency Tree Compression Model we devise. We also devise a title test to
determine if a sentence can be used as a title. If a trimmed sentence passes
the title test, then it becomes a title candidate. DTATG selects the title
candidate with the highest ranking score as the final title. Our experiments
showed that DTATG can generate adequate titles. We also showed that
DTATG-generated titles have higher F1 scores than those generated by the
previous methods.
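A schematic sketch of the trimming step on a toy dependency tree: prune
branches whose relation label is in a removable set, then read the surviving
words back in sentence order. The relation labels and tree encoding are
illustrative assumptions, not the paper's learned compression model.

    # Toy dependency-tree trimming: node = (position, word, relation, children).
    REMOVABLE = {'prep', 'appos', 'advcl'}   # illustrative relation labels

    def collect(node, kept):
        pos, word, rel, children = node
        if rel in REMOVABLE:                 # drop this whole branch
            return
        kept.append((pos, word))
        for child in children:
            collect(child, kept)

    def trim_to_title(root):
        kept = []
        collect(root, kept)
        return ' '.join(w for _, w in sorted(kept))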
|
This paper deals with combinatorial aspects of finite covers of groups by
cosets or subgroups. Let $a_1G_1,...,a_kG_k$ be left cosets in a group $G$ such
that $\{a_iG_i\}_{i=1}^k$ covers each element of $G$ at least $m$ times but none
of its proper subsystems does. We show that if $G$ is cyclic, or $G$ is finite
and $G_1,...,G_k$ are normal Hall subgroups of $G$, then $k\geq
m+f([G:\bigcap_{i=1}^kG_i])$, where $f(\prod_{t=1}^r
p_t^{\alpha_t})=\sum_{t=1}^r\alpha_t(p_t-1)$ if $p_1,...,p_r$ are distinct
primes and $\alpha_1,...,\alpha_r$ are nonnegative integers. When all the $a_i$
are the identity element of $G$ and all the $G_i$ are subnormal in $G$, we
prove that there is a composition series from $\bigcap_{i=1}^kG_i$ to $G$ whose
factors are of prime orders. The paper also includes some other results and two
challenging conjectures.
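For concreteness, the function f in the bound is elementary to compute from a
prime factorization, as the small helper below shows (sympy's factorint is
assumed available).

    # f(prod_t p_t^a_t) = sum_t a_t (p_t - 1)
    from sympy import factorint

    def f(n):
        return sum(a * (p - 1) for p, a in factorint(n).items())

    # e.g. f(12) = f(2^2 * 3) = 2*(2-1) + 1*(3-1) = 4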
|
In this paper we adopt the pullback approach to global Finsler geometry. We
investigate horizontally recurrent Finsler connections. We prove that for each
scalar ($\pi$)1-form $A$, there exists a unique horizontally recurrent Finsler
connection whose $h$-recurrence form is $A$. This result generalizes the
existence and uniqueness theorem of Cartan connection. We then study some
properties of a special kind of horizontally recurrent Finsler connection,
which we call special HRF-connection.
|
With an appropriate choice of parameters, a higher derivative theory of
gravity can describe a normal massive sector and a ghost massless sector. We
show that, when defined on an asymptotically de Sitter spacetime with Dirichlet
boundary conditions, such a higher derivative gravity can provide a framework
for a unitary theory of massive gravity in four spacetime dimensions. The
resulting theory is free not only of higher derivative ghosts but also of the
Boulware-Deser mode.
|
We show that there is a general, informative and reliable procedure for
discovering causal relations when, for all the investigator knows, both latent
variables and selection bias may be at work. Given information about
conditional independence and dependence relations between measured variables,
even when latent variables and selection bias may be present, there are
sufficient conditions for reliably concluding that there is a causal path from
one variable to another, and sufficient conditions for reliably concluding when
no such causal path exists.
|
Cell mechanical properties are fundamental to the organism but remain poorly
understood. We report a comprehensive phenomenological framework for the
nonlinear rheology of single fibroblast cells: a superposition of elastic
stiffening and viscoplastic kinematic hardening. Our results show that, in
spite of the cell's complexity, its mechanical properties can be cast into
simple, well-defined rules, which provide mechanical cell strength and
robustness via
control of crosslink slippage.
|
We consider nonlinear mutation selection models, known as replicator-mutator
equations in evolutionary biology. They involve a nonlocal mutation kernel and
a confining fitness potential. We prove that the long time behaviour of the
Cauchy problem is determined by the principal eigenelement of the underlying
linear operator. The novelties compared to the literature on these models are
about the case of symmetric mutations: we propose a new milder sufficient
condition for the existence of a principal eigenfunction, and we provide what
is to our knowledge the first quantification of the spectral gap. We also
recover existing results in the non-symmetric case, through a new approach.
|
The Fermi surfaces (FS's) and band dispersions of EuRh2As2 have been
investigated using angle-resolved photoemission spectroscopy. The results in
the high-temperature paramagnetic state are in good agreement with the full
potential linearized augmented plane wave calculations, especially in the
context of the shape of the two-dimensional FS's and band dispersion around the
Gamma (0,0) and X (pi,pi) points. Interesting changes in band folding are
predicted by the theoretical calculations below the magnetic transition
temperature T_N = 47 K. However, by comparing the FS's measured at 60 K and
40 K, we did not observe any signature of this transition at the Fermi energy,
indicating a very weak coupling of the electrons to the ordered magnetic
moments or strong
fluctuations. Furthermore, the FS does not change across the temperature (~
25K) where changes are observed in the Hall coefficient. Notably, the Fermi
surface deviates drastically from the usual FS of the superconducting
iron-based AFe2As2 parent compounds, including the absence of nesting between
the Gamma and X FS pockets.
|
Using exactly solvable models, it is shown that black hole singularities in
different electrically charged configurations can be cured. Our solutions
describe black hole space-times with a wormhole giving structure to the
otherwise point-like singularity. We show that geodesic completeness is
satisfied despite the existence of curvature divergences at the wormhole
throat. In some cases, physical observers can go through the wormhole and in
other cases the throat lies at an infinite affine distance.
|
Event structures are fundamental models in concurrency theory, providing a
representation of events in computation and of their relations, notably
concurrency, conflict and causality. In this paper we present a theory of
minimisation for event structures. Working in a class of event structures that
generalises many stable event structure models in the literature (e.g., prime,
asymmetric, flow and bundle event structures), we study a notion of
behaviour-preserving quotient, referred to as a folding, taking (hereditary)
history preserving bisimilarity as a reference behavioural equivalence. We show
that for any event structure a folding producing a uniquely determined minimal
quotient always exists. We observe that each event structure can be seen as the
folding of a prime event structure, and that all foldings between general event
structures arise from foldings of (suitably defined) corresponding prime event
structures. This gives a special relevance to foldings in the class of prime
event structures, which are studied in detail. We identify folding conditions
for prime and asymmetric event structures, and show that also prime event
structures always admit a unique minimal quotient (while this is not the case
for various other event structure models).
|
Given commutative, unital rings $A$ and $B$ with a ring homomorphism $A\to B$
making $B$ free of finite rank as an $A$-module, we can ask for a "trace" or
"norm" homomorphism taking algebraic data over $B$ to algebraic data over $A$.
In this paper we construct a norm functor for the data of a quadratic
algebra: given a locally-free rank-$2$ $B$-algebra $D$, we produce a
locally-free rank-$2$ $A$-algebra $\mathrm{Nm}_{B/A}(D)$ in a way that is
compatible with other norm functors and which extends a known construction for
\'etale quadratic algebras. We also conjecture a relationship between
discriminant algebras and this new norm functor.
|
The merger of binary neutron stars (NSs) is among the most promising
gravitational wave (GW) sources. Next-generation GW detectors are expected to
detect signals from the NS merger within 200 Mpc. Detection of electromagnetic
wave (EM) counterpart is crucial to understand the nature of GW sources. Among
possible EM emission from the NS merger, emission powered by radioactive
r-process nuclei is one of the best targets for follow-up observations.
However, predictions so far do not take into account detailed r-process
element abundances in the ejecta. We perform radiative transfer simulations for
the NS merger ejecta including all the r-process elements from Ga to U for the
first time. We show that the opacity in the NS merger ejecta is about kappa =
10 cm^2 g^{-1}, which is higher than that of Fe-rich Type Ia supernova ejecta
by a factor of ~ 100. As a result, the emission is fainter and longer than
previously expected. The spectra are almost featureless due to the high
expansion velocity and bound-bound transitions of many different r-process
elements. We demonstrate that the emission is brighter for a higher mass ratio
of two NSs and a softer equation of states adopted in the merger simulations.
Because of the red color of the emission, follow-up observations in red optical
and near-infrared (NIR) wavelengths will be the most efficient. At 200 Mpc,
expected brightness of the emission is i = 22 - 25 AB mag, z = 21 - 23 AB mag,
and 21 - 24 AB mag in NIR JHK bands. Thus, observations with wide-field 4m- and
8m-class optical telescopes and wide-field NIR space telescopes are necessary.
We also argue that the emission powered by radioactive energy can be detected
in the afterglow of nearby short gamma-ray bursts.
|
We prove that neither a prime nor an l-almost prime number theorem holds in
the class of regular Toeplitz subshifts. But when a quantitative strengthening
of the regularity with respect to the periodic structure, involving Euler's
totient function, is assumed, the two theorems do hold.
|
The transition between the nearly smooth initial state of the Universe and
its clumpy state today occurred during the epoch when the first stars and
low-luminosity quasars formed. For Cold Dark Matter cosmologies, the radiation
produced by the first baryonic objects is expected to ionize the Universe at
z=10-20 and consequently suppress the amplitude of microwave anisotropies on
angular scales <10 degrees by 10%. Future microwave anisotropy
satellites will be able to detect this signature. The production and mixing of
metals by an early population of stars provides a natural explanation to the
metallicity, ~1% solar, found in the intergalactic medium at redshifts z<5. The
Next Generation Space Telescope (NGST) will be able to image directly the
``first light'' from these stars. With its nJy sensitivity, NGST is expected to
detect >10^3 star clusters per square arcminute at z>10. The brightest sources,
however, might be early quasars. The infrared flux from an Eddington
luminosity, 10^6 solar mass, black hole at z=10 is 10 nJy at 1 micron, easily
detectable with NGST. The time it takes a black hole with a radiative
efficiency of 10% to double its mass amounts to more than a tenth of the Hubble
time at z=10, and so a fair fraction of all systems which harbor a central
black hole at this redshift would appear active. The redshift of all sources
can be determined from the Lyman-limit break in their spectrum, which overlaps
with the NGST wavelength regime, 1-3.5 micron, for 10<z<35. Absorption spectra
of the first generation of star clusters or quasars would reveal the
reionization history of the Universe. The intergalactic medium might show a
significant opacity to infrared sources at z>10 due to dust produced by the
first supernovae.
|
We propose attribute-aware multimodal entity linking, where the input is a
mention described with a text and image, and the goal is to predict the
corresponding target entity from a multimodal knowledge base (KB) where each
entity is also described with a text description, a visual image and a set of
attributes and values. To support this research, we construct AMELI, a
large-scale dataset consisting of 18,472 reviews and 35,598 products. To
establish baseline performance on AMELI, we experiment with the current
state-of-the-art multimodal entity linking approaches and our enhanced
attribute-aware model and demonstrate the importance of incorporating the
attribute information into the entity linking process. To the best of our
knowledge, we are the first to build a benchmark dataset and solutions for the
attribute-aware multimodal entity linking task. Datasets and code will be made
publicly available.
|
This paper is concerned with modeling the dependence structure of two (or
more) time-series in the presence of a (possible multivariate) covariate which
may include past values of the time series. We assume that the covariate
influences only the conditional mean and the conditional variance of each of
the time series but the distribution of the standardized innovations is not
influenced by the covariate and is stable in time. The joint distribution of
the time series is then determined by the conditional means, the conditional
variances and the marginal distributions of the innovations, which we estimate
nonparametrically, and the copula of the innovations, which represents the
dependency structure. We consider a nonparametric as well as a semiparametric
estimator based on the estimated residuals. We show that under suitable
assumptions these copula estimators are asymptotically equivalent to estimators
that would be based on the unobserved innovations. The theoretical results are
illustrated by simulations and a real data example.
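A minimal sketch of the residual-based construction: standardize each series
by its fitted conditional mean and standard deviation (assumed to come from a
separate nonparametric fit), rank-transform the estimated innovations into
pseudo-observations, and evaluate the empirical copula.

    # Residual-based empirical copula (sketch; fitting m_hat, s_hat not shown).
    import numpy as np
    from scipy.stats import rankdata

    def pseudo_observations(y, cond_mean, cond_sd):
        eps = (y - cond_mean) / cond_sd         # estimated innovations
        return rankdata(eps) / (len(eps) + 1)   # approx. uniform margins

    def empirical_copula(u, v, s, t):
        return np.mean((u <= s) & (v <= t))

    # u = pseudo_observations(y1, m1_hat, s1_hat)
    # v = pseudo_observations(y2, m2_hat, s2_hat)
    # C_hat = empirical_copula(u, v, 0.5, 0.5)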
|
In this paper, a multivariate count distribution with Conway-Maxwell
(COM)-Poisson marginals is proposed. To do this, we develop a modification of
the Sarmanov method for constructing multivariate distributions. Our
multivariate COM-Poisson (MultCOMP) model has desirable features such as (i) it
admits a flexible covariance matrix allowing for both negative and positive
non-diagonal entries; (ii) it overcomes the limitation of the existing
bivariate COM-Poisson distributions in the literature that do not have
COM-Poisson marginals; (iii) it allows for the analysis of multivariate counts
and is not just limited to bivariate counts. Inferential challenges are
presented by the likelihood specification as it depends on a number of
intractable normalizing constants involving the model parameters. These
obstacles motivate us to propose a Bayesian inferential approach where the
resulting doubly-intractable posterior is dealt with via the exchange algorithm
and the Grouped Independence Metropolis-Hastings algorithm. Numerical
experiments based on simulations are presented to illustrate the proposed
Bayesian approach. We analyze the potential of the MultCOMP model through a
real data application on the numbers of goals scored by the home and away teams
in the Premier League from 2018 to 2021. Here, our interest is to assess the
effect of a lack of crowds during the COVID-19 pandemic on the well-known home
team advantage. A MultCOMP model fit shows that there is evidence of a
decreased number of goals scored by the home team, not accompanied by a reduced
score from the opponent. Hence, our analysis suggests a smaller home team
advantage in the absence of crowds, which agrees with the opinion of several
football experts.
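To see where the intractability comes from, recall that the COM-Poisson pmf is
proportional to lambda^x / (x!)^nu with normalizing constant
Z(lambda, nu) = sum_j lambda^j / (j!)^nu. The sketch below approximates Z by
simple truncation; it illustrates the problem, not the exchange-algorithm
solution used in the paper.

    # COM-Poisson pmf with a truncated normalizing constant (illustration).
    import math

    def com_poisson_pmf(x, lam, nu, jmax=200):
        # log terms of Z, summed stably via log-sum-exp; jmax must be
        # large enough for the chosen (lam, nu)
        terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(jmax)]
        m = max(terms)
        log_z = m + math.log(sum(math.exp(t - m) for t in terms))
        return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)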
|
Quantum computers can exploit a Hilbert space whose dimension increases
exponentially with the number of qubits. In experiment, quantum supremacy has
recently been achieved by the Google team by using a noisy intermediate-scale
quantum (NISQ) device with over 50 qubits. However, the question of what can be
implemented on NISQ devices is still not fully explored, and discovering useful
tasks for such devices is a topic of considerable interest. Hybrid
quantum-classical algorithms are regarded as well-suited for execution on NISQ
devices by combining quantum computers with classical computers, and are
expected to be the first useful applications for quantum computing. Meanwhile,
mitigation of errors on quantum processors is also crucial to obtain reliable
results. In this article, we review the basic results for hybrid
quantum-classical algorithms and quantum error mitigation techniques. Since
quantum computing with NISQ devices is an actively developing field, we expect
this review to be a useful basis for future studies.
|
We identify and characterise a Milky Way-like realisation from the Auriga
simulations with two consecutive massive mergers $\sim2\,$Gyr apart at high
redshift, comparable to the reported Kraken and Gaia-Sausage-Enceladus. The
Kraken-like merger ($z=1.6$, $M_{\rm Tot} = 8\times10^{10}\,$M$_{\odot}$) is
gas-rich, deposits most of its mass in the inner $10\,$kpc, and is largely
isotropic. The Sausage-like merger ($z=1.14$, $M_{\rm Tot} =
1\times10^{11}\,$M$_{\odot}$) leaves a more extended mass distribution at
higher energies, and has a radially anisotropic distribution. For the higher
redshift merger, the stellar mass ratio of the satellite to host galaxy is 1:3.
As a result, the chemistry of the remnant is indistinguishable from
contemporaneous in-situ populations, making it challenging to identify this
component through chemical abundances. This naturally explains why all
abundance patterns attributed so far to Kraken are in fact fully consistent
with the metal-poor in-situ so-called Aurora population and thick disc.
However, our model makes a falsifiable prediction: if the Milky Way underwent a
gas-rich double merger at high redshift, then this should be imprinted on its
star formation history with bursts $\sim2\,$Gyr apart. This may offer
constraining power on the highest-redshift major mergers.
|
In a multiple partners matching problem the agents can have multiple partners
up to their capacities. In this paper we consider both the two-sided
many-to-many stable matching problem and the one-sided stable fixtures problem
under lexicographic preferences. We study strong core and Pareto-optimal
solutions for this setting from a computational point of view. First we provide
an example to show that the strong core can be empty even under these severe
restrictions for many-to-many problems, and that deciding the non-emptiness of
the strong core is NP-hard. We also show that, for a given matching, checking
Pareto-optimality and the strong core property are co-NP-complete problems
for the many-to-many problem, and that deciding the existence of a complete
Pareto-optimal matching is also NP-hard for the fixtures problem. On the
positive side, we give efficient algorithms for finding a near feasible strong
core solution, where the capacities are only violated by at most one unit for
each agent, and also for finding a half-matching in the strong core of
fractional matchings. These polynomial time algorithms are based on the Top
Trading Cycle algorithm. Finally, we also show that finding a maximum size
matching that is Pareto-optimal can be done efficiently for many-to-many
problems, which is in contrast with the hardness result for the fixtures
problem.
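For reference, the classic Top Trading Cycles procedure on a simple
one-object-per-agent market is sketched below; the paper's polynomial-time
algorithms build on this idea, extended to capacities. Preference lists are
assumed complete (each agent ranks all agents, possibly including itself), so
every agent always has someone to point at.

    # Classic Top Trading Cycles (simple housing-market version).
    def top_trading_cycles(prefs):
        """prefs[a] = agents whose endowment a prefers, best first."""
        unmatched, assignment = set(prefs), {}
        while unmatched:
            # each remaining agent points at the owner of its favourite item
            point = {a: next(b for b in prefs[a] if b in unmatched)
                     for a in unmatched}
            # follow pointers from an arbitrary agent until a node repeats
            seen, a = [], next(iter(unmatched))
            while a not in seen:
                seen.append(a)
                a = point[a]
            cycle = seen[seen.index(a):]
            for b in cycle:              # trade along the cycle, then remove
                assignment[b] = point[b]
            unmatched -= set(cycle)
        return assignment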
|
We present an analytic computation of the gluon-initiated contribution to
diphoton plus jet production at hadron colliders up to two loops in QCD. We
reconstruct the analytic form of the finite remainders from numerical
evaluations over finite fields including all colour contributions. Compact
expressions are found using the pentagon function basis. We provide a fast and
stable implementation for the colour- and helicity-summed interference between
the one-loop and two-loop finite remainders in C++ as part of the NJet library.
|
We propose a solution to the longstanding permalloy problem -- why the
particular composition of permalloy, Fe$_{21.5}$Ni$_{78.5}$, achieves a
dramatic drop in hysteresis, while its material constants show no obvious
signal of this behavior. We use our recently developed coercivity tool to show
that a delicate balance between local instabilities and magnetic material
constants is necessary to explain the dramatic drop of hysteresis at 78.5% Ni.
Our findings are in agreement with the permalloy experiments and, more broadly,
provide theoretical guidance for the discovery of novel low hysteresis magnetic
alloys.
|
The problem of finding the connected components of a graph is considered. The
algorithms addressed to solve the problem are used to solve such problems on
graphs as problems of finding points of articulation, bridges, maximin bridge,
etc. A natural approach to solving this problem is a breadth-first search, the
implementations of which are presented in software libraries designed to
maximize the use of the capabi\-lities of modern computer architectures. We
present an approach using perturbations of adjacency matrix of a graph. We
check wether the graph is connected or not by comparing the solutions of the
two systems of linear algebraic equations (SLAE): the first SLAE with a
perturbed adjacency matrix of the graph and the second SLAE with~unperturbed
matrix. This approach makes it possible to use effective numerical
implementations of SLAE solution methods to solve connectivity problems on
graphs. Iterations of iterative numerical methods for solving such SLAE can be
considered as carrying out a graph traversal. Generally speaking, the traversal
is not equivalent to the traversal that is carried out with breadth-first
search. An algorithm for finding the connected components of a graph using such
a traversal is presented. For any instance of the problem, this algorithm has
no greater computational complexity than breadth-first search, and for most
individual instances it has lower complexity.
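One concrete way to phrase reachability as an SLAE, in the spirit of the
approach above though not the paper's exact perturbation scheme: for
t < 1/(max degree + 1), the solution of (I - tA)x = e_s equals the Neumann
series sum_k t^k A^k e_s, so x_j > 0 exactly when vertex j is reachable from
s. Components are then supports of such solutions.

    # Connected components via linear solves (illustrative sketch).
    import numpy as np

    def component_of(A, s, tol=1e-12):   # tolerance suits small graphs
        n = A.shape[0]
        t = 1.0 / (A.sum(axis=1).max() + 1.0)   # ensures convergence
        e = np.zeros(n)
        e[s] = 1.0
        x = np.linalg.solve(np.eye(n) - t * A, e)
        return np.flatnonzero(x > tol)

    def connected_components(A):
        remaining, comps = set(range(A.shape[0])), []
        while remaining:
            comp = component_of(A, next(iter(remaining)))
            comps.append(comp)
            remaining -= set(comp.tolist())
        return comps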
|
We provide new examples of 3-manifolds with weight one fundamental group and
the same integral homology as the lens space $L(2k,1)$ which are not surgery on
any knot in the three-sphere. Our argument uses Furuta's 10/8-theorem, and is
simple and combinatorial to apply.
|
Thanks to their excellent accuracy and feasibility, Neural Networks have been
widely applied in novel intelligent applications and systems. However, with
the appearance of the Adversarial Attack, NN-based systems become extremely
vulnerable: image classification results can be arbitrarily misled by
adversarial examples, which are crafted images with human-unperceivable
pixel-level perturbations. As this raises a significant system security issue,
we carried out a series of investigations on the adversarial attack in this
work: We first identify an image's pixel
vulnerability to the adversarial attack based on the adversarial saliency
analysis. By comparing the analyzed saliency map and the adversarial
perturbation distribution, we proposed a new evaluation scheme to
comprehensively assess the adversarial attack precision and efficiency. Then,
with a novel adversarial saliency prediction method, a fast adversarial example
generation framework, namely "ASP", is proposed with significant attack
efficiency improvement and dramatic computation cost reduction. Compared to the
previous methods, experiments show that ASP achieves up to a 12x speed-up in
adversarial example generation, a 2x lower perturbation rate, and a high
attack success rate of 87% on both MNIST and Cifar10. ASP can also be used to
support data-hungry NN adversarial training: by reducing the attack success
rate by as much as 90%, it quickly and effectively enhances the defense
capability of NN-based systems against adversarial attacks.
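A toy sketch of gradient-based adversarial saliency for a linear softmax
classifier (the generic mechanism that saliency-guided attacks rely on; ASP
itself learns to predict saliency): the saliency map is the magnitude of the
loss gradient with respect to the input, and the attack perturbs only the
top-k most salient pixels.

    # Saliency-guided perturbation on a toy linear classifier (sketch).
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def saliency_and_attack(x, W, b, label, eps=0.3, k=40):
        """x: flat image in [0,1]; classifier logits are W @ x + b."""
        p = softmax(W @ x + b)
        grad = W.T @ (p - np.eye(len(p))[label])  # d(cross-entropy)/dx
        sal = np.abs(grad)                        # pixel vulnerability map
        idx = np.argsort(sal)[-k:]                # k most salient pixels
        x_adv = x.copy()
        x_adv[idx] += eps * np.sign(grad[idx])    # FGSM-style step on them
        return sal, np.clip(x_adv, 0.0, 1.0)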
|
Systems near to quantum critical points show universal scaling in their
response functions. We consider whether this scaling is reflected in their
fluctuations; namely in current-noise. Naive scaling predicts low-temperature
Johnson noise crossing over to noise power $\propto E^{z/(z+1)}$ at strong
electric fields. We study this crossover in the metallic state at the 2d z=1
superconductor/insulator quantum critical point. Using a Boltzmann-Langevin
approach within a 1/N-expansion, we show that the current noise obeys a scaling
form $S_j=T \Phi[T/T_{eff}(E)]$ with $T_{eff} \propto \sqrt{E}$. We recover
Johnson noise in thermal equilibrium and $S_j \propto \sqrt{E}$ at strong
electric fields. The suppression from free carrier shot noise is due to strong
correlations at the critical point. We discuss its interpretation in terms of a
diverging carrier charge $\propto 1/\sqrt{E}$ or as out-of-equilibrium Johnson
noise with effective temperature $\propto \sqrt{E}$.
|
Algebro-geometric methods have proven to be very successful in the study of
graphical models in statistics. In this paper we introduce the foundations to
carry out a similar study of their quantum counterparts. These quantum
graphical models are families of quantum states satisfying certain locality or
correlation conditions encoded by a graph. We lay out several ways to associate
an algebraic variety to a quantum graphical model. The classical graphical
models can be recovered from most of these varieties by restricting to quantum
states represented by diagonal matrices. We study fundamental properties of
these varieties and provide algorithms to compute their defining equations.
Moreover, we study quantum information projections to quantum exponential
families defined by graphs and prove a quantum analogue of Birch's Theorem.
|