The maximum entropy of a quantized surface is demonstrated to be proportional
to the surface area in the classical limit. The result is valid in loop quantum
gravity, and in a somewhat more general class of approaches to surface
quantization. The maximum entropy is calculated explicitly for some specific
cases.
|
Sequence comparison is a prerequisite to virtually all comparative genomic
analyses. It is often realized by sequence alignment techniques, which are
computationally expensive. This has led to increased research into
alignment-free techniques, which are based on measures referring to the
composition of sequences in terms of their constituent patterns. These
measures, such as $q$-gram distance, are usually computed in time linear with
respect to the length of the sequences. In this article, we focus on the
complementary idea: how two sequences can be efficiently compared based on
information that does not occur in the sequences. A word is an {\em absent
word} of some sequence if it does not occur in the sequence. An absent word is
{\em minimal} if all its proper factors occur in the sequence. Here we present
the first linear-time and linear-space algorithm to compare two sequences by
considering {\em all} their minimal absent words. In the process, we present
results of combinatorial interest, and also extend the proposed techniques to
compare circular sequences.
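For concreteness, these definitions admit a direct brute-force rendering. The sketch below is illustrative only (it is exponential in the word length, nothing like the linear-time, linear-space algorithm of this article) and enumerates the minimal absent words of a short sequence:

```python
from itertools import product

def minimal_absent_words(s, max_len=None):
    """Brute-force sketch, not the paper's linear-time algorithm.

    A word w is absent from s if it does not occur in s; it is a
    minimal absent word if its two maximal proper factors, w[:-1]
    and w[1:], both occur in s (this implies all proper factors do).
    """
    alphabet = sorted(set(s))
    max_len = max_len or len(s) + 1  # minimal absent words have length <= |s|+1
    factors = {s[i:j] for i in range(len(s))
               for j in range(i + 1, len(s) + 1)}
    maws = []
    for k in range(2, max_len + 1):  # every single letter of s occurs
        for w in map("".join, product(alphabet, repeat=k)):
            if w not in factors and w[:-1] in factors and w[1:] in factors:
                maws.append(w)
    return maws

print(minimal_absent_words("abaab"))  # ['bb', 'aaa', 'bab', 'aaba', ...]
```

A comparison of two sequences can then be computed from such sets, e.g., via a (length-weighted) symmetric difference of their minimal absent word sets.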
|
If driven sufficiently strongly, superconducting microresonators exhibit
nonlinear behavior including response bifurcation. This behavior can arise from
a variety of physical mechanisms including heating effects, grain boundaries or
weak links, vortex penetration, or through the intrinsic nonlinearity of the
kinetic inductance. Although microresonators used for photon detection are
usually driven fairly hard in order to optimize their sensitivity, most
experiments to date have not explored detector performance beyond the onset of
bifurcation. Here we present measurements of a lumped-element superconducting
microresonator designed for use as a far-infrared detector and operated deep
into the nonlinear regime. The 1 GHz resonator was fabricated from a 22 nm
thick titanium nitride film with a critical temperature of 2 K and a
normal-state resistivity of $100\, \mu \Omega\,$cm. We measured the response of
the device when illuminated with 6.4 pW optical loading using microwave readout
powers that ranged from the low-power, linear regime to 18 dB beyond the onset
of bifurcation. Over this entire range, the nonlinear behavior is well
described by a nonlinear kinetic inductance. The best noise-equivalent power of
$2 \times 10^{-16}$ W/Hz$^{1/2}$ at 10 Hz was measured at the highest readout
power, and represents a $\sim$10 fold improvement compared with operating below
the onset of bifurcation.
|
Global path planning is the key technology in the design of unmanned surface
vehicles. This paper establishes a global environment model based on
electronic charts and hexagonal grids, which are shown to be better than square
grids in validity, safety and rapidity. Besides, we introduce the cube
coordinate system to simplify hexagonal grid algorithms. Furthermore, we
propose an improved A* algorithm to realize path planning between two points.
Based on that, we build a global path planning model for multiple task points
and present an improved ant colony optimization to realize it accurately. The
simulation results show that the global path planning system can plan an
optimal path to tour multiple task points safely and quickly, which is superior
to traditional methods in safety, rapidity and path length. Besides, the
planned path can be directly applied in practical USV applications.
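To illustrate why cube coordinates simplify hexagonal algorithms, the sketch below (our illustrative code, assuming unit step costs and omitting the paper's USV-specific safety terms) shows that hexagonal neighbours and distances reduce to integer arithmetic that plugs directly into A*:

```python
import heapq

# Cube coordinates (x, y, z) with x + y + z == 0 give hexagonal grids
# the same algebraic convenience that (row, col) gives square grids.
CUBE_DIRS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
             (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def cube_distance(a, b):
    # Exact hex-grid distance between two cells in cube coordinates.
    return max(abs(a[i] - b[i]) for i in range(3))

def a_star_hex(start, goal, blocked):
    """Compact A* on a hex grid with unit step costs; cube_distance is
    an admissible, consistent heuristic, so the goal is reached optimally."""
    g = {start: 0}
    parent = {start: None}
    open_set = [(cube_distance(start, goal), start)]
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for d in CUBE_DIRS:
            nxt = tuple(cell[i] + d[i] for i in range(3))
            if nxt in blocked:
                continue
            if g[cell] + 1 < g.get(nxt, float("inf")):
                g[nxt] = g[cell] + 1
                parent[nxt] = cell
                heapq.heappush(open_set,
                               (g[nxt] + cube_distance(nxt, goal), nxt))
    return None

# Detour around a blocked cell between start and goal.
print(a_star_hex((0, 0, 0), (2, -2, 0), blocked={(1, -1, 0)}))
```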
|
A generalized M1 sum rule for orbital magnetic dipole strength from excited
symmetric states to mixed-symmetry states is considered within the
proton-neutron interacting boson model of even-even nuclei. Analytic
expressions for the dominant terms in the B(M1) transition rates from the first
and second $2^+$ states are derived in the U(5) and SO(6) dynamic symmetry
limits of the model, and the applicability of a sum rule approach is examined
at and in-between these limits. Lastly, the sum rule is applied to the new data
on mixed-symmetry states of $^{94}$Mo and a quadrupole d-boson ratio
$n_d(0^+_1)/n_d(2^+_2) \approx 0.6$ is obtained in a largely
parameter-independent way.
|
High pressure behaviour of liquid GeO2 is investigated by means of molecular
dynamics simulations in the pressure range 0-20 GPa and at various
temperatures. In agreement with recent experiments (PRL 92, 155506, 2004), the
Ge-O coordination increases under compression. At 1650 K, the structure becomes
sixfold coordinated at ~11.5 GPa, with a discontinuous change in the density. The
transition from the low density liquid (LDL) to high density liquid (HDL) is
found to be reversible, and the transition pressure increases with temperature.
At lower temperatures, the low-density to high-density transition
is found to be continuous, but with the coexistence of two structures.
|
For a variety of superconducting qubits, tunable interactions are achieved
through mutual inductive coupling to a coupler circuit containing a nonlinear
Josephson element. In this paper we derive the general interaction mediated by
such a circuit under the Born-Oppenheimer Approximation. This interaction
naturally decomposes into a classical part, with origin in the classical
circuit equations, and a quantum part, associated with the coupler's zero-point
energy. Our result is non-perturbative in the qubit-coupler coupling strengths
and in the coupler nonlinearity. This can lead to significant departures from
previous, linear theories for the inter-qubit coupling, including
non-stoquastic and many-body interactions. Our analysis provides explicit and
efficiently computable series for any term in the interaction Hamiltonian and
can be applied to any superconducting qubit type. We conclude with a numerical
investigation of our theory using a case study of two coupled flux qubits, and
in particular study the regime of validity of the Born-Oppenheimer
Approximation.
|
This review proposes an overview of the Cusp-core problem, including a
discussion of its advocated solutions and an assessment of how satisfactorily
each can describe central densities. Depending on whether the Cusp-core problem
reflects an insufficient grasp of the nature of dark matter, of gravity, or of
the impact of baryonic interactions with dark matter at those scales, as
included in semi-analytical models or fully numerical codes, its solutions can
point either to the need for a paradigm change in cosmology or to our lack of
success in ironing out the finer details of the $\Lambda$CDM paradigm.
|
In this work, we present detailed $^{57}$Fe M\"ossbauer spectroscopy
investigations of (Co$_{0.97}$$^{57}$Fe$_{0.03}$)$_{4}$Nb$_{2}$O$_{9}$ compound
to study its possible magnetic structures. We have shown that the previously
reported magnetic structures cannot satisfactorily describe our low
temperature M\"ossbauer spectra. Therefore, in combination with theoretical
calculations, we have proposed a modulated helicoidal magnetic structure that
can be used to simulate the whole series of our low temperature M\"ossbauer
spectra. Our results suggest that the combination of previously reported
different magnetic structures is only an approximation of the average magnetic
structure from our modulated helicoidal model. We anticipate that the proposed
modulated non-collinear magnetic structure might shed light on the
understanding of the complex magnetoelectric effects observed in this system.
|
We study sets of nontypical points under the map $f_\beta : x \mapsto \beta x
\bmod 1$, for non-integer $\beta$, and extend our results from [F\"arm,
Persson, Schmeling, 2010] in several directions. In particular, we prove that
sets of points whose forward orbits avoid certain Cantor sets, and sets of
points for which ergodic averages diverge, have large intersection properties.
We observe
that the technical condition $\beta>1.541$ found in [F\"arm, Persson,
Schmeling, 2010] can be removed.
|
A common theme in causal inference is learning causal relationships between
observed variables, also known as causal discovery. This is usually a daunting
task, given the large number of candidate causal graphs and the combinatorial
nature of the search space. Perhaps for this reason, most research has so far
focused on relatively small causal graphs, with up to hundreds of nodes.
However, recent advances in fields like biology enable generating experimental
data sets with thousands of interventions followed by rich profiling of
thousands of variables, raising the opportunity and urgent need for large
causal graph models. Here, we introduce the notion of factor directed acyclic
graphs (f-DAGs) as a way to restrict the search space to non-linear low-rank
causal interaction models. Combining this novel structural assumption with
recent advances that bridge the gap between causal discovery and continuous
optimization, we achieve causal discovery on thousands of variables.
Additionally, as a model for the impact of statistical noise on this estimation
procedure, we study a model of edge perturbations of the f-DAG skeleton based
on random graphs and quantify the effect of such perturbations on the f-DAG
rank. This theoretical analysis suggests that the set of candidate f-DAGs is
much smaller than the whole DAG space and thus may be more suitable as a search
space in the high-dimensional regime where the underlying skeleton is hard to
assess. We propose Differentiable Causal Discovery of Factor Graphs (DCD-FG), a
scalable implementation of f-DAG-constrained causal discovery for
high-dimensional interventional data. DCD-FG uses a Gaussian non-linear
low-rank structural equation model and shows significant improvements compared
to state-of-the-art methods in both simulations as well as a recent large-scale
single-cell RNA sequencing data set with hundreds of genetic interventions.
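To make the structural assumption concrete: in an f-DAG, variable-to-variable influence factors through $m \ll d$ factor nodes, so the effective $d \times d$ interaction matrix has rank at most $m$. A minimal numpy sketch under that assumption (our illustration; the names and the tanh non-linearity are placeholders, not DCD-FG's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 200, 10  # d observed variables, m factor nodes (m << d)

# Variable -> factor and factor -> variable edge weights; their product
# is the effective d x d causal interaction matrix, with rank <= m.
U = rng.normal(scale=0.1, size=(d, m))
V = rng.normal(scale=0.1, size=(m, d))
W = U @ V

x = rng.normal(size=d)           # one sample of the observed variables
x_next = np.tanh(x @ W)          # non-linear low-rank propagation step
print(np.linalg.matrix_rank(W))  # 10: far below the ambient dimension d
```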
|
The comment mostly concerns a misrepresentation of my position on Quantum
Biology, by stating that I am 'reducing cell's behavior to quantum particles
inside the cell'. I contrast my position with that of McFadden-Al-Khalili, as
well as with the position of Asano et al. I also advertise an idea, described
in our latest paper (Bordonaro, Ogryzko. Quantum Biology at the Cellular level
- elements of the research program, BioSystems, 2013), concerning the need for
synthetic biology in testing some predictions that follow from our approach.
|
Knowledge editing techniques have been increasingly adopted to efficiently
correct the false or outdated knowledge in Large Language Models (LLMs), due to
the high cost of retraining from scratch. Meanwhile, one critical but
under-explored question is: can knowledge editing be used to inject harm into
LLMs? In this paper, we propose to reformulate knowledge editing as a new type
of safety threat for LLMs, namely Editing Attack, and conduct a systematic
investigation with a newly constructed dataset EditAttack. Specifically, we
focus on two typical safety risks of Editing Attack including Misinformation
Injection and Bias Injection. For the risk of misinformation injection, we
first categorize it into commonsense misinformation injection and long-tail
misinformation injection. Then, we find that editing attacks can inject both
types of misinformation into LLMs, and the effectiveness is particularly high
for commonsense misinformation injection. For the risk of bias injection, we
discover that not only can biased sentences be injected into LLMs with high
effectiveness, but also one single biased sentence injection can cause a bias
increase in general outputs of LLMs, which are even highly irrelevant to the
injected sentence, indicating a catastrophic impact on the overall fairness of
LLMs. Then, we further illustrate the high stealthiness of editing attacks,
measured by their impact on the general knowledge and reasoning capacities of
LLMs, and show with empirical evidence the hardness of defending against
editing attacks. Our discoveries demonstrate the emerging misuse risks of
knowledge editing techniques for compromising the safety alignment of LLMs.
|
Trapped ions arranged in Coulomb crystals provide us with the elements to
study the physics of a single spin coupled to a boson bath. In this work we
show that optical forces allow us to realize a variety of spin-boson models,
depending on the crystal geometry and the laser configuration. We study in
detail the Ohmic case, which can be implemented by illuminating a single ion
with a travelling wave. The mesoscopic character of the phonon bath in trapped
ions induces new effects like the appearance of quantum revivals in the spin
evolution.
|
This paper builds on one of the most recent pedestrian crowd evacuation
models, i.e., "a simulation model for pedestrian crowd evacuation based on
various AI techniques", developed in late 2019. This study adds a new feature
to that model by proposing a new method and integrating it with the model. The
method enables the model to identify a safer evacuation area design by
selecting the best exit door location among many suggested locations. The
method depends entirely on the model's output, i.e., the evacuation time of
each individual within the evacuation process. It averages the evacuees'
evacuation times for each exit door location and then, based on the average
evacuation time, decides which exit door location is best for evacuation. To
validate the method, various evacuation area designs with various written
scenarios were used. The results showed that the model with this new method
could predict a proper exit door location among many suggested locations. The
integration of the proposed method thus gives the model a new safety-oriented
capability: selecting the best evacuation area design among the candidate
designs.
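Since the decision rule reduces to comparing per-door mean evacuation times, it can be stated in a few lines (a minimal sketch; the door labels and times are made-up placeholders, not data from the study):

```python
from statistics import mean

# Hypothetical evacuation times (s) per candidate exit-door location,
# as would be produced by the simulation model for each evacuee.
times = {
    "door_A": [41.2, 38.7, 45.0, 39.9],
    "door_B": [35.1, 36.4, 33.8, 37.2],
    "door_C": [48.3, 50.1, 47.7, 49.0],
}

# The proposed method: average the evacuees' evacuation times for each
# door location and pick the location with the smallest average.
averages = {door: mean(ts) for door, ts in times.items()}
best = min(averages, key=averages.get)
print(averages, "->", best)  # door_B has the lowest mean time
```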
|
Natural language processing (NLP) systems have become a central technology in
communication, education, medicine, artificial intelligence, and many other
domains of research and development. While the performance of NLP methods has
grown enormously over the last decade, this progress has been restricted to a
minuscule subset of the world's 6,500 languages. We introduce a framework for
estimating the global utility of language technologies as revealed in a
comprehensive snapshot of recent publications in NLP. Our analyses involve the
field at large, but also more in-depth studies on both user-facing technologies
(machine translation, language understanding, question answering,
text-to-speech synthesis) as well as more linguistic NLP tasks (dependency
parsing, morphological inflection). In the process, we (1) quantify disparities
in the current state of NLP research, (2) explore some of its associated
societal and academic factors, and (3) produce tailored recommendations for
evidence-based policy making aimed at promoting more global and equitable
language technologies.
|
The design and optimization of cryogenic propellant storage tanks for NASA's
future space missions require fast and accurate long-term fluid behavior
simulations. CFD codes offer high fidelity but face prohibitive computational
costs, whereas nodal codes are oversimplified and inadequate in scenarios
involving prominent three-dimensional phenomena, such as thermal
stratification. Hence, an equation-free based data-driven coupling (EFD)
approach is developed to couple CFD and nodal codes for efficient and accurate
integrated analysis. The EFD approach, as a concurrent coupling scheme,
modifies equation-free modeling and adapts data-driven approaches. It
utilizes the CFD simulation results within a short co-solved period to generate
equation-free correlations through the data-driven approach. The nodal code
then solves the problem with the obtained correlations, producing "CFD-like"
solutions. This paper implements the EFD approach using the ANSYS Fluent and
the CRTech SINDA/FLUINT to investigate two-phase cryogenic tank
self-pressurization and periodic mixing problems. The EFD approach diminishes
the stratified temperature prediction errors in the top region by 89.1% and
98.9%, while reducing computational time by 70% and 52%, respectively. The EFD
minimizes the risks of numerical instability and inherent correlation loss
compared to previous coupling methods, making it a flexible and easy-to-apply
approach for CFD and nodal code integrated analysis.
|
From observations collected with the ESPaDOnS & NARVAL spectropolarimeters at
CFHT and TBL, we report the detection of Zeeman signatures on the prototypical
classical T Tauri star AATau, both in photospheric lines and accretion-powered
emission lines. Using time series of unpolarized and circularly polarized
spectra, we reconstruct at two epochs maps of the magnetic field, surface
brightness and accretion-powered emission of AATau. We find that AATau hosts a
2-3kG magnetic dipole tilted at ~20deg to the rotation axis, and of presumably
dynamo origin. We also show that the magnetic poles of AATau host large cool
spots at photospheric level and accretion regions at chromospheric level.
The logarithmic accretion rate at the surface of AATau at the time of our
observations is strongly variable, ranging from -9.6 to -8.5 and equal to -9.2
on average (in Msun/yr); this is an order of magnitude smaller than the disc
accretion rate at which the magnetic truncation radius (below which the disc is
disrupted by the stellar magnetic field) matches the corotation radius (where
the Keplerian period equals the stellar rotation period) - a necessary
condition for accretion to occur. It suggests that AATau is largely in the
propeller regime, with most of the accreting material in the inner disc regions
being expelled outwards and only a small fraction accreted towards the surface
of the star. The strong variability in the observed surface mass-accretion rate
and the systematic time-lag of optical occultations (by the warped accretion
disc) with respect to magnetic and accretion-powered emission maxima also
support this conclusion.
Our results imply that AATau is being actively spun-down by the star-disc
magnetic coupling and appears as an ideal laboratory for studying angular
momentum losses of forming Suns in the propeller regime.
|
In our Galactic Center, about 10,000 to 100,000 stars are estimated to have
survived tidal disruption events, resulting in partially disrupted remnants.
These events occur when a supermassive black hole (SMBH) tidally interacts with
a star, but not enough to completely disrupt the star. We use the 1D stellar
evolution code Kepler and the 3D smoothed particle hydrodynamics code Phantom
to model the tidal disruption of 1, 3, and 10 solar mass stars at zero-age
(ZAMS), middle-age (MAMS), and terminal-age main-sequence (TAMS). We map the
disruption remnants into Kepler in order to understand their post-disruption
evolution. We find distinct characteristics in the remnants, including
increased radius, rapid core rotation, and differential rotation in the
envelope. The remnants undergo composition mixing that affects their stellar
evolution. Whereas the remnants formed by disruption of ZAMS models evolve
similarly to unperturbed models of the same mass, for MAMS and TAMS stars, the
remnants have higher luminosity and effective temperature. Potential
observational signatures include peculiarities in nitrogen and carbon
abundances, higher luminosity, rapid rotation, faster evolution, and unique
tracks in the Hertzsprung-Russell diagram.
|
Let L denote the variety of lattices. In 1982, the second author proved that
L is strongly tolerance factorable, that is, the members of L have quotients in
L modulo tolerances, although L has proper tolerances. We did not know any
other nontrivial example of a strongly tolerance factorable variety. Now we
prove that this property is preserved by forming independent joins (also called
products) of varieties. This enables us to present infinitely many strongly
tolerance factorable varieties with proper tolerances. Extending a recent
result of G.\ Cz\'edli and G.\ Gr\"atzer, we show that if V is a strongly
tolerance factorable variety, then the tolerances of V are exactly the
homomorphic images of congruences of algebras in V. Our observation that
(strong) tolerance factorability is not necessarily preserved when passing from
a variety to an equivalent one leads to an open problem.
|
Most studies of mass transfer in binary systems assume circular orbits at the
onset of Roche lobe overflow. However, there are theoretical and observational
indications that mass transfer could occur in eccentric orbits. In particular,
eccentricity could be produced via sudden mass loss and velocity kicks during
supernova explosions, or Lidov-Kozai (LK) oscillations in hierarchical triple
systems, or, more generally, secular evolution in multiple-star systems.
However, current analytic models of eccentric mass transfer are faced with the
problem that they are only well defined in the limit of very high
eccentricities, and break down for less eccentric and circular orbits. This
provides a major obstacle to implementing such models in binary and
higher-order population synthesis codes, which are useful tools for studying
the long-term evolution of a large number of systems. Here, we present a new
analytic model to describe the secular orbital evolution of binaries undergoing
conservative mass transfer. The main improvement of our model is that the mass
transfer rate is a smoothly varying function of orbital phase, rather than a
delta function centered at periapsis. Consequently, our model is in principle
valid for any eccentricity, thereby overcoming the main limitation of previous
works. We implement our model in an easy-to-use and publicly available code
that can be used as a basis for implementations of our model into population
synthesis codes. We investigate the implications of our model in a number of
applications with circular and eccentric binaries, and triples undergoing LK
oscillations.
|
We introduce a flux recovery scheme for the computed solution of a quadratic
immersed finite element method. The recovery is done at nodes and interface
point first and by interpolation at the remaining points. We show that the end
nodes are superconvergence points for both the primary variable $p$ and its
flux $u$. Furthermore, in the case of piecewise constant diffusion coefficient
without the absorption term the errors at end nodes and interface point in the
approximation of $u$ and $p$ are zero. In the general case, flux error at end
nodes and interface point is third order. Numerical results are provided to
confirm the theory.
|
Hawking's theorem on the topology of black holes asserts that cross sections
of the event horizon in 4-dimensional asymptotically flat stationary black hole
spacetimes obeying the dominant energy condition are topologically 2-spheres.
This conclusion extends to outer apparent horizons in spacetimes that are not
necessarily stationary. In this paper we obtain a natural generalization of
Hawking's results to higher dimensions by showing that cross sections of the
event horizon (in the stationary case) and outer apparent horizons (in the
general case) are of positive Yamabe type, i.e., admit metrics of positive
scalar curvature. This implies many well-known restrictions on the topology,
and is consistent with recent examples of five dimensional stationary black
hole spacetimes with horizon topology $S^2 \times S^1$. The proof is inspired
by previous work of Schoen and Yau on the existence of solutions to the Jang
equation (but does not make direct use of that equation).
|
The $1s^2 \rightarrow 1s2p\,(^1P)$ excitation in confined and compressed helium atoms in
either the bulk material or encapsulated in a bubble is shifted to energies
higher than that in the free atom. For bulk helium, the energy shifts predicted
from non-empirical electronic structure computations are in excellent agreement
with the experimentally determined values. However, there are significant
discrepancies both between the results of experiments on different bubbles and
between these and the well established descriptions of the bulk. A critique is
presented of previous attempts to determine the densities in bubbles by
measuring the intensities of the electrons inelastically scattered in STEM
experiments. The reported densities are untrustworthy because it was assumed
that the cross section for inelastic electron scattering was the same as that
of a free atom whilst it is now known that this property is greatly enhanced
for atoms confined at significant pressures.
It is shown how experimental measurements of bubbles can be combined with
data on the bulk using a graphical method to determine whether the behavior of
an encapsulated guest differs from that in the bulk material. Experimental
electron energy loss data from an earlier study of helium encapsulated in
silicon is reanalyzed using this new method to show that the properties of the
helium in these bubbles do not differ significantly from those in the bulk
thereby enabling the densities in the bubbles to be determined. These enable
the bubble pressures to be deduced from a well established experimentally
derived equation of state. It is shown that the errors of up to 80% in the
incorrectly determined densities are greatly magnified in the predicted
pressures which can be too large by factors of over seven. This has major
practical implications for the study of radiation damage of materials exposed
to $\alpha$ particle bombardment.
|
Transformers combined with convolutional encoders have been recently used for
hand gesture recognition (HGR) using micro-Doppler signatures. We propose a
vision-transformer-based architecture for HGR with multi-antenna
continuous-wave Doppler radar receivers. The proposed architecture consists of
three modules: a convolutional encoder-decoder, an attention module with three
transformer layers, and a multi-layer perceptron. The novel convolutional
decoder helps to feed patches with larger sizes to the attention module for
improved feature extraction. Experimental results obtained with a dataset
corresponding to a two-antenna continuous-wave Doppler radar receiver operating
at 24 GHz (published by Skaria et al.) confirm that the proposed architecture
achieves an accuracy of 98.3%, which substantially surpasses the
state-of-the-art on the used dataset.
|
When we extract information from a system by performing a quantum
measurement, the state of the system is disturbed due to the backaction of the
measurement. Numerous studies have been performed to quantitatively formulate
tradeoff relations between information and disturbance. We formulate a tradeoff
relation between information and disturbance from an estimation-theoretic point
of view, and derive an inequality between them. The information is defined as
the classical Fisher information obtained by the measurement, and the
disturbance is defined as the average loss of the quantum Fisher information.
We show that pure and reversible measurements saturate this inequality. We
also identify the necessary condition for various divergences
between two quantum states to satisfy a similar relation. The obtained relation
holds not only for the quantum relative entropy but also for the maximum
quantum relative entropy.
|
We study the dramatic decrease in iron absorption strength in the iron
low-ionization broad absorption line quasar SDSS J084133.15+200525.8. We report
on the continued weakening of absorption in the prototype of this class of
variable broad absorption line quasar, FBQS J140806.2+305448. We also report a
third example of this class, SDSS J123103.70+392903.6; unlike the other two
examples, it has undergone an increase in observed continuum brightness (at
3000~\AA\ rest-frame) as well as a decrease in iron absorption strength. These
changes could be caused by absorber transverse motion or by ionization
variability. We note that the Mg II and UV Fe II lines in several FeLoBAL
quasars are blueshifted by thousands of km s$^{-1}$ relative to the H$\beta$
emission line peak. We suggest that such emission arises in the outflowing winds
normally seen only in absorption.
|
Spectral projectors of Hermitian matrices play a key role in many
applications, and especially in electronic structure computations. Linear
scaling methods for gapped systems are based on the fact that these special
matrix functions are localized, which means that the entries decay
exponentially away from the main diagonal or with respect to more general
sparsity patterns. The relation with the sign function together with an
integral representation is used to obtain new decay bounds, which turn out to
be optimal in an asymptotic sense. The influence of isolated eigenvalues in the
spectrum on the decay properties is also investigated and a superexponential
behaviour is predicted.
|
The microscopic origin of slow carrier cooling in lead-halide perovskites
remains debated, and has direct implications for applications. Slow carrier
cooling has been attributed to either polaron formation or a hot-phonon
bottleneck effect at high excited carrier densities (> 10$^{18}$ cm$^{-3}$).
These effects cannot be unambiguously disentangled from optical experiments
alone. However, they can be distinguished by direct observations of ultrafast
lattice dynamics, as these effects are expected to create qualitatively
distinct fingerprints. To this end, we employ femtosecond electron diffraction
and directly measure the sub-picosecond lattice dynamics of weakly confined
CsPbBr$_3$ nanocrystals following above-gap photo-excitation. The data reveal a
light-induced structural distortion appearing on a time scale varying from
380 fs to 1200 fs depending on the excitation fluence. We attribute these
dynamics to the effect of exciton-polarons on the lattice, and the slower
dynamics at high fluences to slower hot carrier cooling, which slows down the
establishment of the exciton-polaron population. Further analysis and
simulations show that the distortion is consistent with motions of the
[PbBr$_3$]$^{-}$ octahedral ionic cage, and closest agreement with the data is
obtained for Pb-Br bond lengthening. Our work demonstrates how direct studies
of lattice dynamics on the sub-picosecond timescale can discriminate between
competing scenarios, thereby shedding light on the origin of slow carrier
cooling in lead-halide perovskites.
|
The rise of serverless computing provides an opportunity to rethink cloud
security. We present an approach for securing serverless systems using a novel
form of dynamic information flow control (IFC).
We show that in serverless applications, the termination channel found in
most existing IFC systems can be arbitrarily amplified via multiple concurrent
requests, necessitating a stronger termination-sensitive non-interference
guarantee, which we achieve using a combination of static labeling of
serverless processes and dynamic faceted labeling of persistent data.
We describe our implementation of this approach on top of JavaScript for AWS
Lambda and OpenWhisk serverless platforms, and present three realistic case
studies showing that it can enforce important IFC security properties with low
overhead.
|
A conjecture of I. Krasikov is proved. Several discrete analogues of
classical polynomial inequalities are derived, along with results which allow
extensions to a class of transcendental entire functions in the
Laguerre-P\'olya class.
|
We consider the problem EnumIP of enumerating prime implicants of Boolean
functions represented by decision decomposable negation normal form (dec-DNNF)
circuits. We study EnumIP from dec-DNNF within the framework of enumeration
complexity and prove that it is in OutputP, the class of output polynomial
enumeration problems, and more precisely in IncP, the class of polynomial
incremental time enumeration problems. We then focus on two closely related,
but seemingly harder, enumeration problems where further restrictions are put
on the prime implicants to be generated. In the first problem, one is only
interested in prime implicants representing subset-minimal abductive
explanations, a notion much investigated in AI for more than three decades. In
the second problem, the target is prime implicants representing sufficient
reasons, a recent yet important notion in the emerging field of eXplainable AI,
since they aim to explain predictions achieved by machine learning classifiers.
We provide evidence showing that enumerating specific prime implicants
corresponding to subset-minimal abductive explanations or to sufficient reasons
is not in OutputP.
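To make the enumerated objects concrete, here is a brute-force sketch over truth tables (our illustration, exponential in the number of variables; EnumIP instead works from dec-DNNF circuits and comes with the complexity guarantees above):

```python
from itertools import product, combinations

def prime_implicants(f, n):
    """Brute-force prime implicants of f: {0,1}^n -> {0,1}.

    A term is a partial assignment (dict: variable -> bit); it is an
    implicant if every completion satisfies f, and prime if no proper
    sub-term is already an implicant.
    """
    def is_implicant(term):
        free = [i for i in range(n) if i not in term]
        for bits in product((0, 1), repeat=len(free)):
            point = dict(term)
            point.update(zip(free, bits))
            if not f(tuple(point[i] for i in range(n))):
                return False
        return True

    primes = []
    for k in range(n + 1):  # smaller terms first, so primes are found first
        for vars_ in combinations(range(n), k):
            for bits in product((0, 1), repeat=k):
                term = dict(zip(vars_, bits))
                # Skip terms subsumed by an already-found (smaller) prime.
                if is_implicant(term) and not any(
                        p.items() <= term.items() for p in primes):
                    primes.append(term)
    return primes

# f = (x0 AND x1) OR x2
print(prime_implicants(lambda v: (v[0] and v[1]) or v[2], 3))
# [{2: 1}, {0: 1, 1: 1}]
```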
|
Context: Detailed oscillation spectra comprising individual frequencies for
numerous solar-type stars and red giants are or will become available. These
data can lead to a precise characterisation of stars.
Aims: Our goal is to test and compare different methods for obtaining stellar
properties from oscillation frequencies and spectroscopic constraints, in order
to evaluate their accuracy and the reliability of the error bars.
Methods: In the context of the SpaceInn network, we carried out a
hare-and-hounds exercise in which one group produced "observed" oscillation
spectra for 10 artificial solar-type stars, and various groups characterised
these stars using either forward modelling or acoustic glitch signatures.
Results: Results based on the forward modelling approach were accurate to 1.5
% (radius), 3.9 % (mass), 23 % (age), 1.5 % (surface gravity), and 1.8 % (mean
density). For the two 1 Msun stellar targets, the accuracy on the age is better
than 10 % thereby satisfying PLATO 2.0 requirements. The average accuracies for
the acoustic radii of the base of the convection zone, the He II ionisation zone,
and the Gamma_1 peak were 17 %, 2.4 %, and 1.9 %, respectively. Glitch fitting
analysis seemed to be affected by aliasing problems for some of the targets.
Conclusions: Forward modelling is the most accurate approach, but needs to be
complemented by model-independent results from, e.g., glitch analysis.
Furthermore, global optimisation algorithms provide more robust error bars.
|
We have measured mesoscopic superconducting Au$_{0.7}$In$_{0.3}$ rings
prepared by e-beam lithography and sequential deposition of Au and In at room
temperature followed by a standard lift-off procedure. In samples showing no
Little-Parks resistance oscillations, highly unusual double resistance
anomalies, i.e., two resistance peaks near the onset of superconductivity, were
observed. Although a resistance anomaly featuring a single resistance peak has
been seen in various mesoscopic superconducting samples, double resistance
anomalies have never been observed previously. The dynamical resistance
measurements suggest that there are two critical currents in these samples. In
addition, the two resistance peaks were found to be suppressed at different
magnetic fields. We attribute the observed double resistance anomalies to an
underlying phase separation in which In-rich grains of intermetallic compound
of AuIn precipitate in a uniform In-dilute matrix of Au$_{0.9}$In$_{0.1}$. The
intrinsic superconducting transition temperature of the In-rich grains is
substantially higher than that of the In-dilute matrix. The suppression of the
conventional Little-Parks resistance oscillation is explained in the same
picture by taking into consideration a strong variation in the $T_c$ of the
In-rich grains. We also report the observation of an unusual
magnetic-field-induced metallic state with its resistance higher than the
normal-state resistance, referred to here as excessive resistance, and an h/2e
resistance oscillation whose amplitude depends extremely weakly on
temperature.
|
Diffusion Monte Carlo (DMC) simulations for fermions are becoming the
standard to provide high quality reference data in systems that are too large
to be investigated via quantum chemical approaches. DMC with the fixed-node
approximation relies on modifications of the Green function to avoid
singularities near the nodal surface of the trial wavefunction. We show that
these modifications affect the DMC energies in a way that is not
size-consistent, resulting in large time-step errors. Building on the
modifications of Umrigar {\em et al.} and of DePasquale {\em et al.} we propose
a simple Green function modification that restores size-consistency up to
large values of the time step, substantially reducing the time-step errors. The new
algorithm also yields remarkable speedups of up to two orders of magnitude in
the calculation of molecule-molecule binding energies and crystal cohesive
energies, thus extending the horizons of what is possible with DMC.
|
This work proposes a visual odometry method that combines points and plane
primitives, extracted from a noisy depth camera. Depth measurement uncertainty
is modelled and propagated through the extraction of geometric primitives to
the frame-to-frame motion estimation, where pose is optimized by weighting the
residuals of 3D point and planes matches, according to their uncertainties.
Results on an RGB-D dataset show that the combination of points and planes,
through the proposed method, is able to perform well in poorly textured
environments, where point-based odometry is bound to fail.
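A hedged sketch of the uncertainty-weighted step described above (a Gauss-Newton formulation of our own choosing for illustration; the function names and the synthetic data are not from the paper):

```python
import numpy as np

def weighted_pose_step(J, r, sigmas):
    """One Gauss-Newton step for a 6-DoF pose increment.

    J: (N,6) stacked Jacobians of point/plane residuals w.r.t. pose,
    r: (N,) residuals, sigmas: (N,) residual standard deviations
    propagated from the depth-sensor uncertainty model.
    """
    w = 1.0 / sigmas                        # whitening weights
    Jw, rw = J * w[:, None], r * w          # whitened linear system
    return np.linalg.solve(Jw.T @ Jw, -Jw.T @ rw)  # normal equations

rng = np.random.default_rng(1)
J, r = rng.normal(size=(100, 6)), rng.normal(size=100)
print(weighted_pose_step(J, r, sigmas=np.full(100, 0.05)))
```

Down-weighting noisy depth measurements in this way is what lets plane matches dominate where point features are scarce.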
|
The current onion routing implementation of Tribler works as expected but
throttles the overall throughput of the Tribler system. This article discusses
a measuring procedure to reproducibly profile the tunnel implementation so
further optimizations of the tunnel community can be made. Our work has been
integrated into the Tribler eco-system.
|
EuB$_6$ is a magnetic semiconductor in which defects introduce charge
carriers into the conduction band with the Fermi energy varying with
temperature and magnetic field. We present experimental and theoretical work on
the electronic magnetotransport in single-crystalline EuB$_6$. Magnetization,
magnetoresistance and Hall effect data were recorded at temperatures between 2
and 300 K and in magnetic fields up to 5.5 T. The negative magnetoresistance is
well reproduced by a model in which the spin disorder scattering is reduced by
the applied magnetic field. The Hall effect can be separated into an ordinary
and an anomalous part. At 20 K the latter accounts for half of the observed
Hall voltage, and its importance decreases rapidly with increasing temperature.
As for Gd and its compounds, where the rare-earth ion adopts the same Hund's
rule ground state as Eu$^{2+}$ in EuB$_{6}$, the standard antisymmetric
scattering mechanisms underestimate the $size$ of this contribution by several
orders of magnitude, while reproducing its $shape$ almost perfectly. Well below
the bulk ferromagnetic ordering at $T_C$ = 12.5 K, a two-band model
successfully describes the magnetotransport. Our description is consistent with
published de Haas van Alphen, optical reflectivity, angular-resolved
photoemission, and soft X-ray emission as well as absorption data, but requires
a new interpretation for the gap feature deduced from the latter two
experiments.
|
Given a binary dominance relation on a set of alternatives, a common thread
in the social sciences is to identify subsets of alternatives that satisfy
certain notions of stability. Examples can be found in areas as diverse as
voting theory, game theory, and argumentation theory. Brandt and Fischer [BF08]
proved that it is NP-hard to decide whether an alternative is contained in some
inclusion-minimal upward or downward covering set. For both problems, we raise
this lower bound to the Theta_{2}^{p} level of the polynomial hierarchy and
provide a Sigma_{2}^{p} upper bound. Relatedly, we show that a variety of other
natural problems regarding minimal or minimum-size covering sets are hard or
complete for either of NP, coNP, and Theta_{2}^{p}. An important consequence of
our results is that neither minimal upward nor minimal downward covering sets
(even when guaranteed to exist) can be computed in polynomial time unless P=NP.
This sharply contrasts with Brandt and Fischer's result that minimal
bidirectional covering sets (i.e., sets that are both minimal upward and
minimal downward covering sets) are polynomial-time computable.
|
Recently we have proposed a novel method to probe primordial gravitational
waves from upper bounds on the abundance of primordial black holes (PBHs). When
the amplitude of primordial tensor perturbations generated in the early
Universe is fairly large, they induce substantial scalar perturbations due to
their second-order effects. If these induced scalar perturbations are too large
when they reenter the horizon, then PBHs are overproduced, their abundance
exceeding observational upper limits. That is, primordial tensor perturbations
on superhorizon scales can be constrained from the absence of PBHs. In our
recent paper we have only shown simple estimations of these new constraints,
and hence in this paper, we present detailed derivations, solving the Einstein
equations for scalar perturbations induced at second order in tensor
perturbations. We also derive an approximate formula for the probability
density function of induced density perturbations, necessary to relate the
abundance of PBHs to the primordial tensor power spectrum, assuming primordial
tensor perturbations follow Gaussian distributions. Our new upper bounds from
PBHs are compared with other existing bounds obtained from big bang
nucleosynthesis, cosmic microwave background, LIGO/Virgo and pulsar timing
arrays.
|
A recently developed spectral-element adaptive refinement incompressible
magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp.
Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island
coalescence instability (MICI) in two dimensions. MICI is a fundamental MHD
process that can produce sharp current layers and subsequent reconnection and
heating in a high-Lundquist number plasma such as the solar corona [Ng and
Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin
current layers, it is highly desirable to use adaptively or statically refined
grids to resolve them, and to maintain accuracy at the same time. The outputs
of the spectral-element static adaptive refinement simulations are compared with
simulations using a finite difference method on the same refinement grids, and
both methods are compared to pseudo-spectral simulations with uniform grids as
baselines. It is shown that with the statically refined grids roughly scaling
linearly with effective resolution, spectral element runs can maintain accuracy
significantly higher than that of the finite difference runs, in some cases
achieving close to full spectral accuracy.
|
We discuss differences in simulation results that arise between the use of
either the thermal energy or the entropy as an independent variable in smoothed
particle hydrodynamics (SPH). In this context, we derive a new version of SPH
that manifestly conserves both energy and entropy if smoothing lengths are
allowed to adapt freely to the local mass resolution. To test various
formulations of SPH, we consider point-like energy injection and find that powerful
explosions are well represented by SPH even when the energy is deposited into a
single particle, provided that the entropy equation is integrated. If the
thermal energy is instead used as an independent variable, unphysical solutions
can be obtained for this problem. We also examine the radiative cooling of gas
spheres that collapse and virialize in isolation and of halos that form in
cosmological simulations of structure formation. When applied to these
problems, the thermal energy version of SPH leads to substantial overcooling in
halos that are resolved with up to a few thousand particles, while the entropy
formulation is biased only moderately low for these halos. For objects resolved
with much larger particle numbers, the two approaches yield consistent results.
We trace the origin of the differences to systematic resolution effects in the
outer parts of cooling flows. The cumulative effect of this overcooling can be
significant. In cosmological simulations of moderate size, we find that the
fraction of baryons which cool and condense can be reduced by up to a factor ~2
if the entropy equation is employed rather than the thermal energy equation. We
also demonstrate that the entropy method leads to a greatly reduced scatter in
the density-temperature relation of the low-density Ly-alpha forest relative to
the thermal energy approach, in accord with theoretical expectations.(abridged)
|
One of the most challenging problems in audio-driven talking head generation
is achieving high-fidelity detail while ensuring precise synchronization. Given
only a single reference image, extracting meaningful identity attributes
becomes even more challenging, often causing the network to mirror the facial
and lip structures too closely. To address these issues, we introduce RADIO, a
framework engineered to yield high-quality dubbed videos regardless of the pose
or expression in reference images. The key is to modulate the decoder layers
using latent space composed of audio and reference features. Additionally, we
incorporate ViT blocks into the decoder to emphasize high-fidelity details,
especially in the lip region. Our experimental results demonstrate that RADIO
achieves high synchronization without loss of fidelity. Especially in harsh
scenarios where the reference frame deviates significantly from the ground
truth, our method outperforms state-of-the-art methods, highlighting its
robustness.
|
As shown in the famous \emph{EPR} paper (Einstein, Podolsky and Rosen, 1935),
Quantum Mechanics is non-local. The Bell theorem and the experiments by Aspect
and many others, ruled out the possibility of explaining quantum correlations
between entangled particles using local hidden variables models (except for
implausible combinations of loopholes). Some authors (Bell, Eberhard, Bohm and
Hiley) suggested that quantum correlations could be due to superluminal
communications (tachyons) that propagate isotropically with velocity
\emph{$v_{t}>c$} in a preferred reference frame. For finite values of
\emph{$v_{t}$}, Quantum Mechanics and superluminal models lead to different
predictions. Some years ago a Geneva group and our group did experiments on
entangled photons to evidence possible discrepancies between experimental
results and quantum predictions. Since no discrepancy was found, these
experiments established only lower bounds for the possible tachyon velocities
\emph{$v_{t}$}. Here we propose an improved experiment that should lead us to
explore a much larger range of possible tachyon velocities \emph{$v_{t}$} for
any possible direction of velocity $\vec{V}$ of the tachyons preferred frame.
|
Taking advantage of recent progress in neutron instrumentation and in the
understanding of magnetic-field-dependent small-angle neutron scattering, here,
we study the three-dimensional magnetization distribution within an isotropic
Nd-Fe-B bulk magnet. The magnetic neutron scattering cross section of this
system features the so-called spike anisotropy, which points towards the
presence of a strong magnetodipolar interaction. This experimental result
combined with a damped oscillatory behavior of the corresponding correlation
function and recent micromagnetic simulation results on spherical nanoparticles
suggest an interpretation of the neutron data in terms of vortex-like
flux-closure patterns. The field-dependent correlation length Lc is well
reproduced by a phenomenological power-law model. While the experimental
neutron data for Lc are described by an exponent close to unity (p = 0.86), the
simulation results yield p = 1.70, posing a challenge to theory to include
vortex-vortex interaction effects.
|
In quantum theory we refer to the probability of finding a particle between
positions $x$ and $x+dx$ at the instant $t$, although we have no capacity of
predicting exactly when the detection occurs. In this work, first we present an
extended non-relativistic quantum formalism where space and time play
equivalent roles. It leads to the probability of finding a particle between $x$
and $x+dx$ during [$t$,$t+dt$]. Then, we find a Schr\"odinger-like equation for
a "mirror" wave function $\phi(t,x)$ associated with the probability of
measuring the system between $t$ and $t+dt$, given that detection occurs at
$x$. In this framework, it is shown that energy measurements of a stationary
state display a non-zero dispersion, and that energy-time uncertainty arises
from first principles. We show that a central result on arrival time, obtained
through approaches that resort to {\it ad hoc} assumptions, is a natural,
built-in part of the formalism presented here.
|
We demonstrate an amplitude-based micro-displacement sensor that uses a
plastic photonic bandgap Bragg fiber with one end coated with a silver layer.
The reflection intensity of the Bragg fiber is characterized in response to
different displacements (or bending curvatures). We note that the Bragg
reflector of the fiber acts as an efficient mode stripper for the wavelengths
near the edge of the fiber bandgap, which makes the sensor extremely sensitive
to bending or displacements at these wavelengths. Besides, by comparison of the
Bragg fiber sensor to a sensor based on a regular multimode fiber with similar
outer diameter and length, we find that the Bragg fiber sensor is more
sensitive to bending due to the presence of the mode stripper in the form of
the multilayer reflector. Experimental results show that the minimum detection
limit of the Bragg fiber sensor can be smaller than 5 $\mu$m for displacement
sensing.
|
Large Language Models (LLMs) have demonstrated proficiency in utilizing
various tools by coding, yet they face limitations in handling intricate logic
and precise control. In embodied tasks, high-level planning is amenable to
direct coding, while low-level actions often necessitate task-specific
refinement, such as Reinforcement Learning (RL). To seamlessly integrate both
modalities, we introduce a two-level hierarchical framework, RL-GPT, comprising
a slow agent and a fast agent. The slow agent analyzes actions suitable for
coding, while the fast agent executes coding tasks. This decomposition
effectively focuses each agent on specific tasks, proving highly efficient
within our pipeline. Our approach outperforms traditional RL methods and
existing GPT agents, demonstrating superior efficiency. In the Minecraft game,
it rapidly obtains diamonds within a single day on an RTX3090. Additionally, it
achieves SOTA performance across all designated MineDojo tasks.
|
eHWC J2019+368 is one of the sources emitting $\gamma$-rays with energies
higher than 100 TeV based on the recent measurement with the High Altitude
Water Cherenkov Observatory (HAWC), and its origin is still under debate.
pulsar PSR J2021$+$3651 is spatially coincident with the TeV source. We
investigate theoretically whether the multiband nonthermal emission of eHWC
J2019+368 can originate from the pulsar wind nebula (PWN) G75.2$+$0.1 powered
by PSR J2021$+$3651. In the model, the spin-down power of the pulsar is
transferred to high-energy particles and magnetic field in the nebula. As
particles with an energy distribution of either a broken power law or a single
power law are continually injected into the nebula, the multiband nonthermal
emission is produced via synchrotron radiation and inverse Compton scattering.
The spectral energy distribution of the nebula from the model with the
reasonable parameters is generally consistent with the detected radio, X-ray
and TeV $\gamma$-ray fluxes. Our study supports that the PWN has the ability to
produce the TeV $\gamma$-rays of eHWC J2019+368, and the most energetic
particles in the nebula have energies up to about $0.4$ PeV.
|
Information-theory based variational principles have proven effective at
providing scalable uncertainty quantification (i.e. robustness) bounds for
quantities of interest in the presence of nonparametric model-form uncertainty.
In this work, we combine such variational formulas with functional inequalities
(Poincar{\'e}, $\log$-Sobolev, Liapunov functions) to derive explicit
uncertainty quantification bounds for time-averaged observables, comparing a
Markov process to a second (not necessarily Markov) process. These bounds are
well-behaved in the infinite-time limit and apply to steady-states of both
discrete and continuous-time Markov processes.
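Schematically, bounds of this family start from the Gibbs/Donsker-Varadhan variational formula: for an observable $f$ and path distributions $P$ (the baseline Markov process) and $Q$ (the alternative process),
$$ \mathbb{E}_Q[f] - \mathbb{E}_P[f] \;\le\; \inf_{c>0}\,\frac{1}{c}\left[\log \mathbb{E}_P\!\left[e^{\,c\,(f-\mathbb{E}_P[f])}\right] + R(Q\|P)\right], $$
where $R(Q\|P)$ is the relative entropy; the functional inequalities cited above are what control the cumulant-generating-function term for time-averaged observables uniformly in time.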
|
In the positron-electron annihilation process, finite deviations from the
standard calculation based on the Fermi's Golden rule are suggested in recent
theoretical work. This paper describes an experimental test of the predictions
of this theoretical work by searching for events with two photons from positron
annihilation of energy larger than the electron rest mass ($511\,{\rm keV}$).
The positrons came from a ${\rm {}^{22}Na}$ source, tagging the third photon
from the spontaneous emission of ${\rm {}^{22}{Ne}^*}$ de-excitation to suppress
backgrounds. Using the collected sample of $1.06\times 10^{7}$
positron-electron annihilations, triple coincidence photon events in the signal
enhanced energy regions are examined. The observed number of events in two
signal regions, $N^{\rm SR1}_{\rm obs}=0$ and $N^{\rm SR2}_{\rm obs}=0$ are,
within a current precision, consistent with the expected number of events,
$N^{\rm SR1}_{\rm exp}=0.86\pm0.08({\rm stat.})^{+1.85}_{-0.81}({\rm syst.})$
and $N^{\rm SR2}_{\rm exp}=0.37\pm 0.05({\rm stat.})^{+0.80}_{-0.29}({\rm
syst.})$ from Fermi's golden rule respectively. Based on the $P^{(d)}$
modeling, the 90% CL lower limit on the photon wave packet size is obtained.
|
We investigate wide-angle pi^0 photoproduction within the handbag approach to
twist-3 accuracy. In contrast to earlier work both the 2-particle as well as
the 3-particle twist-3 contributions are taken into account. It is shown that
both are needed for consistent results that respect gauge invariance and
crossing properties. The numerical studies reveal the dominance of the twist-3
contribution. With it fair agreement with the recent CLAS measurement of the
pi^0 cross section is obtained. We briefly comment also on wide-angle
photoproduction of other pseudoscalar mesons.
|
The perceived randomness in the time evolution of "chaotic" dynamical systems
can be characterized by universal probabilistic limit laws, which do not depend
on the fine features of the individual system. One important example is the
Poisson law for the times at which a particle with random initial data hits a
small set. This was proved in various settings for dynamical systems with
strong mixing properties. The key result of the present study is that, despite
the absence of mixing, the hitting times of integrable flows also satisfy
universal limit laws which are, however, not Poisson. We describe the limit
distributions for "generic" integrable flows and a natural class of target
sets, and illustrate our findings with two examples: the dynamics in central
force fields and ellipse billiards. The convergence of the hitting time process
follows from a new equidistribution theorem in the space of lattices, which is
of independent interest. Its proof exploits Ratner's measure classification
theorem for unipotent flows, and extends earlier work of Elkies and McMullen.
|
We use a non-equilibrium chemical network to revisit and study the effect of
H_{2}, HD and LiH molecular cooling on a primordial element of gas. We solve
both the thermal and chemical equations for a gas element with an initial
temperature T\approx 1000K and a gas number density in the range
n_{tot}=1-10^{4} cm^{-3}. At low densities, n_{tot}<10^{2} cm^{-3}, the gas
reaches temperatures \sim 100K and the main coolant is H_{2}, but at higher
densities, n_{tot}>10^{2} cm^{-3}, the HD molecule dominates the gas
temperature evolution. The effect of LiH is negligible in all cases. We studied
the effect of D abundance on the gas cooling. The D abundance was set initially
to be in the range n_{D}/n_{H}=10^{-7}-10^{-4.5}, with
n_{HD}/n_{H}=n_{D^{+}}/n_{H}=10^{-10}. The simulations show that at
n_{tot}>10^{2} cm^{-3} the HD cooling dominates the temperature evolution for D
abundances greater than 10^{-5}n_{H}. This number decreases at higher densities.
Furthermore, we studied the effect of electrons and ionized particles on the
gas temperature. We followed the gas temperature evolution with
n_{H^{+}}/n_{H}=10^{-4}-10^{-1} and n_{D^{+}}/n_{H^{+}}=10^{-5}. The gas
temperature reached lower values at high ionization degrees because electrons,
H^{+} and D^{+} are catalysts in the formation paths of the H_{2} and HD
molecules, which are the main coolants at low temperatures. Finally, we studied
the effect that an OB star with T_{eff}=4\times 10^{4} K would have on gas
cooling. It is very difficult for a gas with n_{tot} in the range between 1-100
cm^{-3} to drop its temperature if the star is at a distance less than 100 pc.
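To make the thermal side of such a calculation concrete, the sketch below integrates the gas temperature under a single toy H_2 cooling law; the power-law rate, the H_2 fraction, and all coefficients are illustrative stand-ins, not the paper's chemical network.

```python
# Minimal sketch: integrate dT/dt = -2*Lambda/(3*n*k_B) for a toy H2 cooling
# law. The cooling coefficient, exponent, and H2 fraction below are
# illustrative assumptions, not taken from the paper's network.
import numpy as np

k_B = 1.38e-16                        # Boltzmann constant [erg/K]

def Lambda_toy(T, n_H2):
    """Hypothetical volumetric cooling rate [erg cm^-3 s^-1]."""
    return 2e-22 * n_H2 * (T / 1000.0)**3.5

T, n_tot, x_H2 = 1000.0, 1e2, 1e-4    # K, cm^-3, assumed H2 fraction
dt, t = 3.15e10, 0.0                  # ~1 kyr time step [s]
while T > 150.0 and t < 3.15e15:      # stop near the ~100 K floor or ~100 Myr
    T -= dt * 2.0 * Lambda_toy(T, x_H2 * n_tot) / (3.0 * n_tot * k_B)
    t += dt
print(f"T ~ {T:.0f} K after {t/3.15e13:.1f} Myr")
```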
|
Laminar as well as turbulent oscillatory pipe flows occur in many fields of
biomedical science and engineering. Pulmonary air flow and vascular blood flow
are usually laminar, because shear forces acting on the physiological system
ought to be small. However, frictional losses and shear stresses vary
considerably with transition to turbulence. This plays an important role in
cases of e.g. artificial respiration or stenosis. On the other hand, in piston
engines and reciprocating thermal/chemical process devices, turbulent or
transitional oscillatory flows affect mixing properties, and also mass and heat
transfer. In contrast to the extensively investigated statistically steady wall
bounded shear flows, rather little work has been devoted to the onset,
amplification and decay of turbulence in pipe flows driven by an unsteady
external force. Experiments [1, 3, 6] indicate that transition to turbulence
depends on only one parameter, i.e. Re_{\delta} \sim Re/Wo with a critical
value of about 550, at least for Womersley numbers Wo > 7. We perform direct
numerical simulations (DNS) of oscillatory pipe flows at several combinations
of Re and Wo to extend the validity of this critical value to higher Wo. To
better understand the physical mechanisms involved during decay and
amplification of the turbulent flow, we further analyse the turbulent kinetic
energy distribution and its budget terms.
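As a quick numerical illustration of the quoted criterion, the snippet below evaluates Re_delta for a few (Re, Wo) pairs. It assumes the common convention Re_delta = Re/(sqrt(2) Wo), i.e. Re built on bulk velocity and diameter and Re_delta on the Stokes-layer thickness sqrt(2 nu/omega); other normalizations shift the prefactor.

```python
# Illustrative check of the transition criterion Re_delta ~ Re/Wo with a
# critical value of about 550. The sqrt(2) prefactor follows one common
# convention (Stokes-layer-based Re_delta) and is an assumption here.
import math

def re_delta(Re, Wo):
    return Re / (math.sqrt(2.0) * Wo)

for Re, Wo in [(10000, 13), (25000, 13), (25000, 26)]:
    rd = re_delta(Re, Wo)
    regime = "turbulence expected" if rd > 550 else "laminar expected"
    print(f"Re={Re:6d}, Wo={Wo:3d} -> Re_delta={rd:6.0f} ({regime})")
```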
|
We review the fundamental ideas of quantizing a theory on a Light Front
including the Hamiltonian approach to the problem of bound states on the Light
Front and the limiting transition from formulating a theory in Lorentzian
coordinates (where the quantization occurs on spacelike hyperplanes) to the
theory on the Light Front, which demonstrates the equivalence of these variants
of the theory. We describe attempts to find such a form of the limiting
transition for gauge theories on the Wilson lattice.
|
This volume contains the Proceedings of the 5th International Workshop on Thermal
Field Theories and Their Applications (TFT98) which took place on 10-14 August
1998 in Regensburg, Germany. The contents are in html and postscript format.
The html files contain clickable links to the postscript files of the papers on
the LANL e-print archive. All contributions underwent strict peer review before
being accepted for this volume. Comments and suggestions for improvements
should be sent to <EMAIL_ADDRESS>.
|
In this work we investigate the possibility of transporting material to the
NEO region via the 8:3 MMR with Jupiter, potentially even material released
from the dwarf planet Ceres. By applying the FLI map method to the 8:3 MMR
region in the orbital plane of Ceres, we were able to distinguish between
stable and unstable orbits. Subsequently, based on the FLI maps (for mean
anomaly $M=60^\circ$ and also $M=30^\circ$), 500 of the most stable and 500 of
the most unstable particles were integrated for $15\,Myr$ for each map.
Long-term integration in the case of $M=60^\circ$ showed that most of the
stable particles evolved, in general, in uneventful ways, with only 0.8\% of
particles reaching the limit of $q \leq 1.3$ AU. However, in the case of
$M=30^\circ$, a stable evolution was not confirmed: over 40\% of particles
reached orbits with $q \leq 1.3$ AU, and numerous particles were ejected onto
hyperbolic orbits or orbits with $a > 100$ AU. The results for stable particles
indicate that short-term FLI maps are more suitable for finding chaotic orbits
than for detecting stable ones. A rough estimate shows that it is possible
for material released from Ceres to get to the region of 8:3 MMR with Jupiter.
A long-term integration of unstable particles in both cases showed that
transportation of material via 8:3 MMR close to the Earth is possible.
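For readers unfamiliar with the FLI, the sketch below computes it for the Chirikov standard map, a standard toy system: the indicator is the running maximum of the log-norm of a tangent vector, growing linearly along chaotic orbits and only logarithmically along regular ones. The 8:3 MMR application itself requires an N-body integrator and is not reproduced here.

```python
# Fast Lyapunov Indicator (FLI) on a toy system (the Chirikov standard map),
# illustrating how stable and unstable initial conditions are separated.
import numpy as np

def fli_standard_map(x, p, K=0.9, n_iter=1000):
    v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # initial tangent vector
    fli = 0.0
    for _ in range(n_iter):
        c = K * np.cos(x)
        jac = np.array([[1.0 + c, 1.0],        # tangent map d(x', p')/d(x, p)
                        [c,       1.0]])
        v = jac @ v
        p = (p + K * np.sin(x)) % (2 * np.pi)  # standard map step
        x = (x + p) % (2 * np.pi)
        fli = max(fli, np.log(np.linalg.norm(v)))
    return fli

print(fli_standard_map(3.1, 0.1))  # near the elliptic point: small FLI (regular)
print(fli_standard_map(0.1, 0.1))  # in the chaotic layer: large FLI
```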
|
Based on a data set of $(27.12\pm0.10)\times 10^8$ $\psi(3686)$ events
collected at the BESIII experiment, the absolute branching fractions of the
three dominant $\Omega^-$ decays are measured to be $\mathcal{B}_{\Omega^- \to
\Xi^0 \pi^-} = (25.03\pm0.44\pm0.53)\%$, $\mathcal{B}_{\Omega^- \to \Xi^-
\pi^0} = (8.43\pm0.52\pm0.28)\%$, and $\mathcal{B}_{\Omega^- \to \Lambda K^-} =
(66.3\pm0.8\pm2.0)\%$, where the first and second uncertainties are statistical
and systematic, respectively. The ratio between $\mathcal{B}_{\Omega^- \to
\Xi^0 \pi^-}$ and $\mathcal{B}_{\Omega^- \to \Xi^- \pi^0}$ is determined to be
$2.97\pm0.19\pm0.11$, which is in good agreement with the PDG value of
$2.74\pm0.15$ but exceeds, by more than four standard deviations, the
theoretical prediction of 2 obtained from the $\Delta I = 1/2$ rule.
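The quoted ratio and its significance can be reproduced from the numbers above, treating the statistical uncertainties of the two branching fractions as uncorrelated (an approximation, since the measurements share the same data set):

```python
# Reproduce the ratio R = B(Xi0 pi-)/B(Xi- pi0) and its propagated
# statistical uncertainty, assuming uncorrelated errors (an approximation).
import math

B1, s1 = 25.03, 0.44   # B(Omega- -> Xi0 pi-) [%], statistical error
B2, s2 = 8.43, 0.52    # B(Omega- -> Xi- pi0) [%], statistical error

R = B1 / B2
sR = R * math.sqrt((s1 / B1)**2 + (s2 / B2)**2)
print(f"R = {R:.2f} +/- {sR:.2f} (stat.)")          # -> 2.97 +/- 0.19

# Distance from the Delta I = 1/2 prediction R = 2, combining the quoted
# statistical (0.19) and systematic (0.11) uncertainties in quadrature:
sigma = (R - 2.0) / math.hypot(0.19, 0.11)
print(f"{sigma:.1f} standard deviations above 2")   # -> ~4.4
```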
|
Despite rapid growth in the data science workforce, people of color, women,
those with disabilities, and others remain underrepresented in, underserved by,
and sometimes excluded from the field. This pattern prevents equal opportunity
for individuals, while also creating products and policies that perpetuate
inequality. Thus, for statistics and data science educators of the next
generation, accessibility and inclusion should be of utmost importance in our
programs and courses. In this paper, we discuss how we developed an
accessibility and inclusion framework, that is, a structure for holding
ourselves accountable to these principles, for the writing of a statistics textbook. We
share our experiences in setting accessibility and inclusion goals, the tools
we used to achieve these goals, and recommendations for other educators. We
provide examples for instructors that can be implemented in their own courses.
|
Often in Phase 3 clinical trials measuring a long-term time-to-event
endpoint, such as overall survival or progression-free survival, investigators
also collect repeated measures on biomarkers which may be predictive of the
primary endpoint. Although these data may not be leveraged directly to support
early stopping decisions, can we make greater use of these data to increase
efficiency and improve interim decision making? We present a joint model for
longitudinal and time-to-event data and a method which establishes the
distribution of successive estimates of parameters in the joint model across
interim analyses. With this in place, we can use the estimates to define both
efficacy and futility stopping rules. Using simulation, we evaluate the
benefits of incorporating biomarker information and its effects on interim
decision making.
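For concreteness, one standard formulation of such a joint model (our illustration; the paper's exact specification may differ) links a mixed-effects model for the longitudinal biomarker to a proportional-hazards model through the current biomarker level:

```latex
% A standard joint-model specification (illustrative):
\begin{align}
  y_i(t) &= m_i(t) + \varepsilon_i(t), &
  m_i(t) &= \mathbf{x}_i(t)^\top \boldsymbol{\beta}
            + \mathbf{z}_i(t)^\top \mathbf{b}_i, \\
  h_i(t) &= h_0(t)\exp\!\big(\boldsymbol{\gamma}^\top \mathbf{w}_i
            + \alpha\, m_i(t)\big), &&
\end{align}
% where b_i are subject-level random effects and alpha ties the current
% biomarker level m_i(t) to the hazard of the time-to-event endpoint.
```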
|
This paper studies a wireless-energy-transfer (WET) enabled massive
multiple-input-multiple-output (MIMO) system, denoted WET-MM, consisting of a
hybrid data-and-energy access point (H-AP) and multiple single-antenna users. In the
WET-MM system, the H-AP is equipped with a large number $M$ of antennas and
functions like a conventional AP in receiving data from users, but additionally
supplies wireless power to the users. We consider frame-based transmissions.
Each frame is divided into three phases: the uplink channel estimation (CE)
phase, the downlink WET phase, and the uplink wireless information
transmission (WIT) phase. First, the users use a fraction of the previously
harvested energy to send pilots, while the H-AP estimates the uplink channels
and obtains the downlink channels by exploiting channel reciprocity. Next, the
H-AP utilizes the channel estimates just obtained to transfer wireless energy
to all users in the downlink via energy beamforming. Finally, the users use a
portion of the harvested energy to send data to the H-AP simultaneously in the
uplink (reserving some harvested energy for sending pilots in the next frame).
To optimize the throughput and ensure rate fairness, we consider the problem of
maximizing the minimum rate among all users. In the large-$M$ regime, we obtain
the asymptotically optimal solutions and some interesting insights for the
optimal design of the WET-MM system. We define a metric, namely, the massive
MIMO degree-of-rate-gain (MM-DoRG), as the asymptotic uplink rate normalized by
$\log(M)$. We show that the proposed WET-MM system is optimal in terms of
MM-DoRG, i.e., it achieves the same MM-DoRG as the case with ideal CE.
|
Real world applications of planning, like in industry and robotics, require
modelling rich and diverse scenarios. Their resolution usually requires
coordinated and concurrent action executions. In several cases, such planning
problems are naturally decomposed in a hierarchical way and expressed by a
Hierarchical Task Network (HTN) formalism. The PDDL language used to specify
planning domains has evolved to cover the different planning paradigms.
However, formulating real and complex scenarios where numerical and temporal
constraints concur in defining a solution is still a challenge. Our proposal
aims to fill the gap between existing planning languages and operational
needs. To do so, we propose to extend HDDL, taking inspiration from PDDL 2.1
and ANML to express temporal and numerical expressions. This paper opens a
discussion on the semantics and the syntax needed to extend HDDL, and
illustrates these needs with the modelling of an Earth Observing Satellite
planning problem.
|
The combination of source coding with decoder side-information (Wyner-Ziv
problem) and channel coding with encoder side-information (Gel'fand-Pinsker
problem) can be optimally solved using the separation principle. In this work
we show an alternative scheme for the quadratic-Gaussian case, which merges
source and channel coding. This scheme achieves the optimal performance by
applying modulo-lattice modulation to the analog source. It thus avoids the
complexity of quantization and channel decoding, leaving only the task of
"shaping". Furthermore, for high signal-to-noise ratio (SNR), the scheme
approaches the optimal performance using an SNR-independent encoder, thus it is
robust to unknown SNR at the encoder.
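The following stripped-down scalar sketch conveys the modulo-lattice idea in the simplest setting, an analog source sent over an AWGN channel with a shared dither; the side-information terms of the full quadratic-Gaussian scheme are deliberately omitted, so this illustrates the mechanism, not the paper's scheme.

```python
# Scalar "modulo-Delta" analog transmission sketch: the modulo operation
# bounds the transmit signal, and the receiver recovers beta*s + n whenever
# no modulo overload occurs. Side information is omitted for clarity.
import numpy as np

rng = np.random.default_rng(0)
Delta, beta, sigma_n = 4.0, 2.0, 0.1

def mod_centered(x):
    """Reduce x to the fundamental cell [-Delta/2, Delta/2)."""
    return (x + Delta / 2) % Delta - Delta / 2

s = rng.normal(0.0, 0.25, size=100000)            # analog source
d = rng.uniform(-Delta / 2, Delta / 2, s.size)    # shared dither
t = mod_centered(beta * s + d)                    # bounded transmit signal
y = t + rng.normal(0.0, sigma_n, s.size)          # AWGN channel
s_hat = mod_centered(y - d) / beta                # = s + n/beta if no overload
print("MSE:", np.mean((s - s_hat)**2))            # ~ (sigma_n/beta)^2
```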
|
We study the radiative activity of a magnetic white dwarf undergoing torsional
vibrations about the axis of its own dipole magnetic moment under the action of
the Lorentz restoring force. It is shown that a pulsating white dwarf can
convert its vibration energy into magneto-dipole emission oscillating at the
frequency of the Alfv\'en torsional vibrations, provided that the internal
magnetic field decays. The most conspicuous feature of this
vibration-energy-powered radiation is the lengthening of the periods of the
oscillating emission; the rate of period elongation is determined by the rate
of magnetic field decay.
|
Weyl semimetals are 3D condensed matter systems characterized by a degenerate
Fermi surface, consisting of a pair of `Weyl nodes'. Correspondingly, in the
infrared limit, these systems behave effectively as Weyl fermions in $3+1$
dimensions. We consider a class of interacting 3D lattice models for Weyl
semimetals and prove that the quadratic response of the quasi-particle flow
between the Weyl nodes is universal, that is, independent of the interaction
strength and form. Universality is the counterpart of the Adler-Bardeen
non-renormalization property of the chiral anomaly for the infrared emergent
description, which is proved here in the presence of a lattice and at a
non-perturbative level. Our proof relies on constructive bounds for the
Euclidean ground state correlations combined with lattice Ward Identities, and
it is valid arbitrarily close to the critical point where the Weyl points merge
and the relativistic description breaks down.
|
We present the 3- and 4-point tree-level scattering amplitudes involving
first-massive states in closed Type IIB superstring theory, computed in the
Minkowski vacuum. In particular, the 3-point amplitudes are computed for all
possible combinations of first-massive and massless string states, and the
4-point amplitudes are computed for all such combinations involving at most two
massive string states. We verify that unitarity is satisfied by checking the
massless and first-massive poles of a 4-point amplitude. This paper is intended
to serve as a reference providing all these amplitudes explicitly in consistent
conventions as well as worldsheet correlators necessary to perform further
computations beyond those listed here.
|
We prove that a beam splitter, one of the most common optical components,
fulfills several classes of majorization relations, which govern the amount of
quantum entanglement that it can generate. First, we show that the state
resulting from k photons impinging on a beam splitter majorizes the
corresponding state with any larger photon number k'>k, implying that the
entanglement monotonically grows with k. Then, we examine parametric
infinitesimal majorization relations as a function of the beam-splitter
transmittance, and find that there exists a parameter region where majorization
is again fulfilled, implying a monotonic increase of entanglement by moving
towards a balanced beam splitter. We also identify regions with a majorization
default, where the output states become incomparable. In this latter situation,
we find examples where catalysis may nevertheless be used in order to recover
majorization. The catalyst states can be as simple as a path-entangled
single-photon state or a two-mode vacuum squeezed state.
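The first relation can be checked directly: for k photons entering one port of a beam splitter of transmittance tau, the Schmidt coefficients of the two-mode output state form a binomial distribution, and majorization reduces to comparing sorted partial sums (zero-padding the shorter spectrum).

```python
# Numerical check that the k-photon output spectrum (binomial Schmidt
# coefficients) majorizes the k'-photon one for k' > k, i.e. entanglement
# grows monotonically with the photon number.
import numpy as np
from scipy.stats import binom

def majorizes(p, q):
    n = max(len(p), len(q))
    p = np.sort(np.pad(p, (0, n - len(p))))[::-1]
    q = np.sort(np.pad(q, (0, n - len(q))))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

tau = 0.5
for k, k2 in [(1, 2), (2, 3), (3, 5)]:
    p = binom.pmf(np.arange(k + 1), k, tau)    # Schmidt spectrum for k photons
    q = binom.pmf(np.arange(k2 + 1), k2, tau)  # ... and for k' > k photons
    print(f"k={k} majorizes k'={k2}:", majorizes(p, q))   # True
```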
|
A new necessary and sufficient condition for the existence of minor left
prime factorizations of multivariate polynomial matrices without full row rank
is presented. The key idea is to establish a relationship between a matrix and
its full row rank submatrix. Based on the new result, we propose an algorithm
for factorizing matrices and have implemented it on the computer algebra system
Maple. Two examples are given to illustrate the effectiveness of the algorithm,
and experimental data shows that the algorithm is efficient.
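The sketch below illustrates the basic building block, locating a full-row-rank column submatrix of a multivariate polynomial matrix, using sympy; the factorization algorithm itself is implemented in Maple and is not reproduced here.

```python
# Locate a full-row-rank column submatrix of a polynomial matrix with sympy.
# Note that not every column selection works: columns (0, 1) below are
# dependent, and the search moves on to the next combination.
from itertools import combinations
from sympy import Matrix, symbols

x, y = symbols('x y')
F = Matrix([[x, x*y, 1 + x],
            [y, y**2, y]])            # 2 x 3 polynomial matrix of rank 2

r = F.rank()
for cols in combinations(range(F.cols), r):
    sub = F[:, list(cols)]
    if sub.rank() == r:
        print("full-row-rank submatrix uses columns", cols)
        break
```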
|
Certain time-like singularities are shown to be resolved already in classical
General Relativity once one passes from particle probes to scalar waves. The
time evolution can be defined uniquely, and general conditions for this are
formulated. The Reissner-Nordstrom singularity allows for communication through
the singularity and can be termed a "beam splitter", since the transmission
probability of a suitably prepared high-energy wave packet is 25%. The
high-frequency dependence of the cross section is w^{-4/3}. However, smooth
geometries arbitrarily close to the singular one require a finite amount of
negative energy matter. The negative-mass Schwarzschild has a qualitatively
different resolution interpreted to be fully reflecting. These 4d results are
similar to the 2d black hole and are generalized to an arbitrary dimension d>4.
|
Utilizing a benchmark measurement of laser-induced ionization of an H$_2^+$
molecular ion beam target at infrared wavelength around 2 $\mu$m, we show that
the characteristic two-peak structure predicted for laser-induced enhanced
ionization of H$_2^+$ and diatomic molecules in general is a phenomenon
confined to a small laser parameter space --- a Goldilocks Zone. Further, we
control the effect experimentally and measure its imprint on the electron
momentum. We replicate the behavior with simulations, which reproduce the
measured kinetic-energy release as well as the correlated-electron spectra.
Based on this, we derive a model that both maps out the Goldilocks Zone and
illustrates why enhanced ionization has proven so elusive in H$_2^+$.
|
This is a first paper in convex analysis dedicated to the bipotential theory,
based on an extension of Fenchel's inequality. Introduced by the second author,
bipotentials lead to a successful new treatment of the constitutive laws of some
dissipative materials: frictional contact, non-associated Drucker-Prager model,
or Lemaitre plastic ductile damage law. We solve here the problems of existence
and construction of a bipotential for a nonsmooth mechanics constitutive law.
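For orientation, the standard form of the inequality involved (our summary of the usual definitions) reads:

```latex
% Fenchel's inequality and its bipotential extension (standard definitions):
\begin{align}
  \phi(x) + \phi^{*}(y) &\ge \langle x, y\rangle
     && \text{(Fenchel, separable case } b(x,y)=\phi(x)+\phi^{*}(y)\text{)},\\
  b(x, y) &\ge \langle x, y\rangle \quad \forall\, x,\, y,
     && b \text{ convex in each argument separately},
\end{align}
% and the (possibly non-associated) constitutive law is encoded in the
% equality case b(x, y) = <x, y>.
```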
|
We solve the problem of finding all eigenvalues and eigenvectors of the
Neumann matrix of the matter sector of open bosonic string field theory,
including the zero modes, and switching on a background B-field. We give the
discrete eigenvalues as roots of transcendental equations, and we give
analytical expressions for all the eigenvectors.
|
Large-scale kernel approximation is an important problem in machine learning
research. Approaches using random Fourier features have become increasingly
popular [Rahimi and Recht, 2007], where kernel approximation is treated as
empirical mean estimation via Monte Carlo (MC) or Quasi-Monte Carlo (QMC)
integration [Yang et al., 2014]. A limitation of the current approaches is that
all the features receive an equal weight summing to 1. In this paper, we
propose a novel shrinkage estimator from "Stein effect", which provides a
data-driven weighting strategy for random features and enjoys theoretical
justifications in terms of lowering the empirical risk. We further present an
efficient randomized algorithm for large-scale applications of the proposed
method. Our empirical results on six benchmark data sets demonstrate the
advantageous performance of this approach over representative baselines in both
kernel approximation and supervised learning tasks.
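To fix notation, the snippet below builds plain random Fourier features for a Gaussian kernel and exposes the per-feature weight vector that the equal-weight (1/D) Monte Carlo baseline uses; the proposed Stein-type shrinkage replaces this vector with data-driven weights (the estimator itself is not reproduced here).

```python
# Random Fourier features for k(x, y) = exp(-gamma ||x - y||^2), with an
# explicit per-feature weight vector. Equal weights 1/D give the standard
# MC estimate; the paper's shrinkage estimator would replace `w`.
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, D=500, gamma=1.0):
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, D)
    return np.sqrt(2.0) * np.cos(X @ W + b)

X = rng.normal(size=(200, 5))
Z = rff_features(X)
w = np.full(Z.shape[1], 1.0 / Z.shape[1])    # equal weights summing to 1
K_hat = (Z * w) @ Z.T                        # weighted kernel estimate
K = np.exp(-((X[:, None] - X[None, :])**2).sum(-1))   # exact kernel, gamma=1
print("mean abs. approximation error:", np.abs(K_hat - K).mean())
```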
|
This work considers the uplink (UL) of a multi-cell massive MIMO system with
$L$ cells, each having $K$ single-antenna users communicating with an
$N$-antenna base station (BS). The channel model involves Rician fading with
distinct per-user Rician factors and channel correlation matrices, and takes
into account pilot contamination and imperfect CSI. The objective is to
evaluate the performance of such systems with different single-cell and
multi-cell detection methods. In the former, we investigate MRC and single-cell
MMSE (S-MMSE); as to the latter, we are interested in multi-cell MMSE (M-MMSE)
that was recently shown to provide unbounded rates in Rayleigh fading. The
analysis is carried out in the infinite-$N$ limit and yields informative
closed-form approximations that are substantiated by a selection of numerical
results for finite system dimensions. Accordingly, these expressions are
accurate and provide relevant insights into the effects of the different system
parameters on the overall performance.
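For reference, the per-user channel model typically used in such analyses (stated here in our notation for concreteness) combines a deterministic line-of-sight component with a correlated Rayleigh component via the Rician factor:

```latex
% Correlated Rician channel of user k in cell l (illustrative notation):
\begin{equation}
  \mathbf{h}_{lk} =
  \sqrt{\frac{\kappa_{lk}}{1+\kappa_{lk}}}\,\bar{\mathbf{h}}_{lk}
  + \sqrt{\frac{1}{1+\kappa_{lk}}}\,\mathbf{R}_{lk}^{1/2}\mathbf{z}_{lk},
  \qquad \mathbf{z}_{lk} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N),
\end{equation}
% with LoS component h-bar, correlation matrix R, and Rician factor kappa.
```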
|
The original carpet cloak [Phys. Rev. Lett. 101, 203901 (2008)] was designed
by a numerical method, quasi-conformal mapping, so its refractive index
profile was only available numerically. In this letter, we propose a new carpet
cloak based on optical conformal mapping, with an analytical form of the
refractive index profile, thereby facilitating future experimental designs.
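The reason an analytical profile becomes possible is the transformation rule of conformal optics: for an analytic map w(z) from the physical plane z = x + iy to a virtual space of index n_virtual, the required physical index is

```latex
% Conformal-mapping rule (Leonhardt's method) underlying the analytical profile:
\begin{equation}
  n(z) = n_{\mathrm{virtual}}\big(w(z)\big)\,
         \left|\frac{\mathrm{d}w}{\mathrm{d}z}\right|,
\end{equation}
% so any closed-form analytic map w(z) yields a closed-form index profile n(z).
```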
|
We report a novel lensless on-chip microscopy platform based on near-field
blind ptychographic modulation. In this platform, we place a thin diffuser in
between the object and the image sensor for light wave modulation. By blindly
scanning the unknown diffuser to different x-y positions, we acquire a sequence
of modulated intensity images for quantitative object recovery. Different from
previous ptychographic implementations, we employ a unit magnification
configuration with a Fresnel number of ~50,000, which is orders of magnitude
higher than previous ptychographic setups. The unit magnification configuration
allows us to have the entire sensor area, 6.4 mm by 4.6 mm, as the imaging
field of view. The ultra-high Fresnel number enables us to directly recover the
positional shift of the diffuser in the phase retrieval process, addressing the
positioning-accuracy issue that plagues regular ptychographic experiments. In our
implementation, we use a low-cost, DIY scanning stage to perform blind diffuser
modulation. Precise mechanical scanning that is critical in conventional
ptychography experiments is no longer needed in our setup. We further employ an
up-sampling phase retrieval scheme to bypass the resolution limit set by the
imager pixel size and demonstrate a half-pitch resolution of 0.78 micron. We
validate the imaging performance via in vitro cell cultures, transparent and
stained tissue sections, and a thick biological sample. We show that the
recovered quantitative phase map can be used to perform effective cell
segmentation of the dense yeast culture. We also demonstrate 3D digital
refocusing of the thick biological sample based on the recovered wavefront. The
reported platform provides a cost-effective and turnkey solution for large
field-of-view, high-resolution, and quantitative on-chip microscopy.
|
The exotic structures in the 2s_{1/2} states of five pairs of mirror nuclei
^{17}O-^{17}F, ^{26}Na-^{26}P, ^{27}Mg-^{27}P, ^{28}Al-^{28}P and
^{29}Si-^{29}P are investigated with the relativistic mean-field (RMF) theory
and the single-particle model (SPM) to explore the role of the Coulomb effects
on the proton halo formation. The present RMF calculations show that the exotic
structure of the valence proton is more pronounced than that of the valence
neutron in its mirror nucleus; that the difference in exotic size within each
pair of mirror nuclei becomes smaller as the mass number A increases; and that
the ratios of the valence-proton and valence-neutron root-mean-square (RMS)
radii to the matter radius in each pair of mirror nuclei all decrease linearly
with increasing A. In order to interpret
these results, we analyze two opposite effects of Coulomb interaction on the
exotic structure formation with SPM and find that the contribution of the
energy level shift is more important than that of the Coulomb barrier for light
nuclei. However, the hindrance of the Coulomb barrier becomes more obvious with
the increase of A. When A is larger than 34, Coulomb effects on the exotic
structure formation almost vanish, because these two effects counteract
each other.
|
Motivated by the swampland conjectures, we study cosmological signatures of a
quintessence potential which induces time variation in the low energy effective
field theory. After deriving the evolution of the quintessence field, we
illustrate its possible ramifications by exploring putative imprints in a
number of directions of particle phenomenology. We first show that a dark
matter self-interaction rate increasing with time gives a novel way of
reconciling the large self interactions required to address small scale
structure issues with the constraint coming from clusters. Next, we study the
effects of kinetic mixing variation during the radiation dominated era on
freeze-in dark matter production. Last, we elucidate quintessence effects on
the restoration of the electroweak symmetry at finite temperature and the
lifetime of the electroweak vacuum through a modification of the effective
Higgs mass and quartic coupling.
|
There has been an intense recent activity in embedding of very high
dimensional and nonlinear data structures, much of it in the data science and
machine learning literature. We survey this activity in four parts. In the
first part we cover nonlinear methods such as principal curves,
multidimensional scaling, local linear methods, ISOMAP, graph based methods and
diffusion mapping, kernel based methods and random projections. The second part
is concerned with topological embedding methods, in particular mapping
topological properties into persistence diagrams and the Mapper algorithm.
Another type of data experiencing tremendous growth is very high-dimensional
network data. The task considered in part three is how to embed such data in a
vector space of moderate dimension to make the data amenable to traditional
techniques such as cluster and classification techniques. Arguably this is the
part where the contrast between algorithmic machine learning methods and
statistical modeling, the so-called stochastic block modeling, is at its
greatest. In the paper, we discuss the pros and cons for the two approaches.
The final part of the survey deals with embedding in $\mathbb{R}^ 2$, i.e.
visualization. Three methods are presented: $t$-SNE, UMAP and LargeVis based on
methods in parts one, two and three, respectively. The methods are illustrated
and compared on two simulated data sets; one consisting of a triplet of noisy
Ranunculoid curves, and one consisting of networks of increasing complexity
generated with stochastic block models and with two types of nodes.
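As a minimal usage sketch of the visualization methods discussed in the final part (scikit-learn's t-SNE, with umap-learn's UMAP as an optional drop-in), applied to a toy noisy closed curve standing in for the Ranunculoid example:

```python
# Embed a noisy 3D closed curve into R^2 with t-SNE; UMAP shown as an
# optional alternative. Parameter values are illustrative defaults.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(t), np.sin(t), 0.3 * np.sin(5 * t)]
X += 0.05 * rng.normal(size=X.shape)                 # noisy curve in R^3

emb = TSNE(n_components=2, perplexity=30).fit_transform(X)
print(emb.shape)                                      # (500, 2)

# Optional (requires the umap-learn package):
# import umap
# emb_umap = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)
```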
|
All-in-one (AiO) frameworks restore various adverse weather degradations with
a single set of networks jointly. To handle various weather conditions, an AiO
framework is expected to adaptively learn weather-specific knowledge for
different degradations and shared knowledge for common patterns. However,
existing methods: 1) rely on extra supervision signals, which are usually
unknown in real-world applications; 2) employ fixed network structures, which
restrict the diversity of weather-specific knowledge. In this paper, we propose
a Language-driven Restoration framework (LDR) to alleviate the aforementioned
issues. First, we leverage the power of pre-trained vision-language (PVL)
models to enrich the diversity of weather-specific knowledge by reasoning about
the occurrence, type, and severity of degradation, generating description-based
degradation priors. Then, with the guidance of degradation prior, we sparsely
select restoration experts from a candidate list dynamically based on a
Mixture-of-Experts (MoE) structure. This enables us to adaptively learn the
weather-specific and shared knowledge to handle various weather conditions
(e.g., unknown or mixed weather). Experiments on extensive restoration
scenarios show our superior performance (see Fig. 1). The source code will be
made available.
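Schematically, the degradation-prior-guided sparse expert selection can be pictured as a top-k gated mixture; all names, dimensions, and the linear "experts" below are illustrative placeholders, not the paper's architecture.

```python
# Toy sparse Mixture-of-Experts routing: a degradation-prior embedding
# gates a top-k subset of experts whose outputs are blended by renormalized
# softmax weights. Purely schematic.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 64, 2
W_gate = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # stand-ins

def moe_forward(feat, prior):
    logits = prior @ W_gate                     # gate on the degradation prior
    top = np.argsort(logits)[-k:]               # sparse top-k selection
    g = np.exp(logits[top] - logits[top].max())
    g /= g.sum()                                # softmax over selected experts
    return sum(gi * (feat @ experts[i]) for gi, i in zip(g, top))

feat, prior = rng.normal(size=d), rng.normal(size=d)
print(moe_forward(feat, prior).shape)           # (64,)
```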
|
We propose a Target Conditioned Representation Independence (TCRI) objective
for domain generalization. TCRI addresses the limitations of existing domain
generalization methods due to incomplete constraints. Specifically, TCRI
implements regularizers motivated by conditional independence constraints that
are sufficient to strictly learn complete sets of invariant mechanisms, which
we show are necessary and sufficient for domain generalization. Empirically, we
show that TCRI is effective on both synthetic and real-world data. TCRI is
competitive with baselines in average accuracy while outperforming them in
worst-domain accuracy, indicating desired cross-domain stability.
|
We investigate the interplay of electronic correlations and spin-orbit
coupling (SOC) for a one-band and a two-band honeycomb lattice model. The main
difference between the two models concerning SOC is that in the one-band case
the SOC is a purely non-local term in the basis of the $p_z$ orbitals, whereas
in the two-band case with $p_x$ and $p_y$ as basis functions it is purely
local. In order to grasp the correlation effects on non-local spin-orbit
coupling, we apply the TRILEX approach, which allows one to calculate non-local
contributions to the self-energy approximately. For the two-band case we
apply dynamical mean-field theory. In agreement with previous studies, we find
that for all parameter values in our study, the effect of correlations on the
spin-orbit coupling strength is that the bare effective SOC parameter is
increased. However, this increase is much weaker in the non-local than in the
local SOC case. Concerning the TRILEX method, we introduce the necessary
formulas for calculations with broken SU(2) symmetry.
|
We present PHORHUM, a novel, end-to-end trainable, deep neural network
methodology for photorealistic 3D human reconstruction given just a monocular
RGB image. Our pixel-aligned method estimates detailed 3D geometry and, for the
first time, the unshaded surface color together with the scene illumination.
Observing that 3D supervision alone is not sufficient for high fidelity color
reconstruction, we introduce patch-based rendering losses that enable reliable
color reconstruction on visible parts of the human, and detailed and plausible
color estimation for the non-visible parts. Moreover, our method specifically
addresses methodological and practical limitations of prior work in terms of
representing geometry, albedo, and illumination effects, in an end-to-end model
where factors can be effectively disentangled. In extensive experiments, we
demonstrate the versatility and robustness of our approach. Our
state-of-the-art results validate the method qualitatively and for different
metrics, for both geometric and color reconstruction.
|
We study explicit solutions for orientifolds of Matrix theory compactified on
noncommutative torus. As quotients of torus, cylinder, Klein bottle and
M\"obius strip are applicable as orientifolds. We calculate the solutions using
Connes, Douglas and Schwarz's projective module solution, and investigate
twisted gauge bundle on quotient spaces as well. They are Yang-Mills theory on
noncommutative torus with proper boundary conditions which define the geometry
of the dual space.
|
H_alpha spectropolarimetry on Herbig Ae/Be stars shows that the innermost
regions of intermediate mass (2 -- 15 M_sun) Pre-Main Sequence stars are
flattened. This may be the best evidence to date that the higher mass Herbig Be
stars are embedded in circumstellar discs.
A second outcome of our study is that the spectropolarimetric signatures for
the lower mass Herbig Ae stars differ from those of the higher mass Herbig Be
stars. Depolarisations across H_alpha are observed in the Herbig Be group,
whereas line polarisations are common amongst the Herbig Ae stars in our
sample. These line polarisation effects can be understood in terms of a compact
H_alpha source that is polarised by a rotating disc-like configuration. The
difference we detect between the Herbig Be and Ae stars may be the first
indication that there is a transition in the Hertzsprung-Russell Diagram from
magnetic accretion at spectral type A to disc accretion at spectral type B.
However, it is also possible that the compact polarised line component, present
in the Herbig Ae stars, is masked in the Herbig Be stars due to their higher
levels of H_alpha emission.
|
Some aspects of integrable field theories possessing purely transmitting
defects are described. The main example is the sine-Gordon model and several
striking features of a classical field theory containing one or more defects
are pointed out. Similar features appearing in the associated quantum field
theory are also reviewed briefly.
|
The INTEGRAL Science Data Centre (ISDC) provides the INTEGRAL data and means
to analyse them to the scientific community. The ISDC runs a gamma ray burst
alert system that provides the position of gamma ray bursts on the sky within
seconds to the community. It operates a quick-look analysis of the data within
a few hours that detects new and unexpected sources and monitors the
instruments. The ISDC also processes the data through a standard analysis, the
results of which are provided to the observers together with their data.
|
Given a symplectic surface $(\Sigma, \omega)$ of genus $g \ge 4$, we show
that the free group with two generators embeds into every asymptotic cone of
$(\mathrm{Ham}(\Sigma, \omega), d_\mathrm{H})$, where $d_\mathrm{H}$ is the
Hofer metric. The result stabilizes to products with symplectically aspherical
manifolds.
|
By considering a Moran-type construction of fractals on $[0,1]$, we show that
for any $0\le s\le 1$, there exists a perfect Moran fractal set with Hausdorff
dimension $s$ whose Fourier dimension is zero and which contains arbitrarily
long arithmetic progressions.
|
The fusion rules and modular matrix of a rational conformal field theory obey
a list of properties. We use these properties to classify rational conformal
field theories with not more than six primary fields and small values of the
fusion coefficients. We give a catalogue of fusion rings which can arise for
these field theories. It is shown that all such fusion rules can be realized by
current algebras. Our results support the conjecture that all rational
conformal field theories are related to current algebras.
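A typical consistency requirement exploited in such classifications is that the Verlinde formula, applied to a candidate modular S-matrix, must return non-negative integer fusion coefficients; the check below uses the Ising model S-matrix (primaries ordered 1, epsilon, sigma) as an example.

```python
# Verlinde formula N_ij^k = sum_m S_im S_jm conj(S_km) / S_0m, checked on
# the modular S-matrix of the Ising model.
import numpy as np

s2 = np.sqrt(2.0)
S = 0.5 * np.array([[1.0, 1.0,  s2],
                    [1.0, 1.0, -s2],
                    [s2, -s2,  0.0]])

def fusion(i, j, k):
    return float(np.sum(S[i] * S[j] * np.conj(S[k]) / S[0]).real)

N = np.array([[[fusion(i, j, k) for k in range(3)]
               for j in range(3)] for i in range(3)])
print(np.round(N[2, 2], 6))   # sigma x sigma = 1 + epsilon -> [1. 1. 0.]
```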
|
We present a simple continuous-time model of clearing in financial networks.
Financial firms are represented as "tanks" filled with fluid (money), flowing
in and out. Once "pipes" connecting "tanks" are open, the system reaches the
clearing payment vector in finite time. This approach provides a simple
recursive solution to a classical static model of financial clearing in
bankruptcy, and suggests a practical payment mechanism. With sufficient
resources, a system of mutual obligations can be restructured into an
equivalent system that has a cascade structure: there is a group of banks that
paid off their debts, another group that owes money only to banks in the first
group, and so on. Technically, we use the machinery of Markov chains to analyze
the evolution of a deterministic dynamical system.
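The static problem that the tank dynamics resolves is the classical clearing-vector fixed point; a minimal sketch for a three-bank example (our toy numbers) computes it by monotone iteration.

```python
# Clearing vector as the fixed point p = min(pbar, e + Pi^T p), computed by
# monotone iteration from the full-payment vector (toy 3-bank example).
import numpy as np

L = np.array([[0.0, 2.0, 1.0],   # L[i, j]: nominal liability of bank i to j
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.5, 0.5, 2.0])    # outside cash of each bank
pbar = L.sum(axis=1)             # total nominal obligations
Pi = L / pbar[:, None]           # relative liabilities matrix

p = pbar.copy()
for _ in range(100):
    p_next = np.minimum(pbar, e + Pi.T @ p)
    if np.allclose(p_next, p):
        break
    p = p_next
print("clearing payments:", p)   # banks that cannot pay in full are scaled down
```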
|
In this paper we consider the acoustic metric obtained from an Abelian Higgs
model extended with higher derivative terms in the bosonic sector which
describes an acoustic rotating black hole and analyze the phenomena of
superresonance, analogue Aharonov-Bohm effect and absorption. We investigate
these effects by computing the scalar wave scattering by a draining bathtub
vortex and discuss the physical interpretation of higher derivative terms.
Furthermore, we show that the absorption does not vanish as the draining
parameter tends to zero.
|
Deep Learning (DL) applications are gaining momentum in the realm of
Artificial Intelligence, particularly after GPUs have demonstrated remarkable
skills for accelerating their challenging computational requirements. Within
this context, Convolutional Neural Network (CNN) models constitute a
representative example of success on a wide set of complex applications,
particularly on datasets where the target can be represented through a
hierarchy of local features of increasing semantic complexity. In most real
scenarios, the roadmap to improved results relies on CNN settings involving
brute-force computation, and researchers have lately proven Nvidia GPUs to be
one of the best hardware counterparts for acceleration. Our work complements
those findings with an energy study on critical parameters for the deployment
of CNNs on flagship image and video applications: object recognition and people
identification by gait, respectively. We evaluate energy consumption on four
different networks based on the two most popular ones (ResNet/AlexNet): ResNet
(167 layers), a 2D CNN (15 layers), a CaffeNet (25 layers) and a ResNetIm (94
layers) using batch sizes of 64, 128 and 256, and then correlate those with
speed-up and accuracy to determine optimal settings. Experimental results on a
multi-GPU server endowed with twin Maxwell and twin Pascal Titan X GPUs
demonstrate that energy correlates with performance and that Pascal may have up
to 40% gains versus Maxwell. Larger batch sizes extend performance gains and
energy savings, but we have to keep an eye on accuracy, which sometimes shows a
preference for small batches. We expect this work to provide a preliminary
guidance for a wide set of CNN and DL applications in modern HPC times, where
the GFLOPS/W ratio constitutes the primary goal.
|
We analyze the most natural formulations of the minimal lepton flavour
violation hypothesis compatible with a type-I seesaw structure with three heavy
singlet neutrinos N, and satisfying the requirement of being predictive, in the
sense that all LFV effects can be expressed in terms of low energy observables.
We find a new interesting realization based on the flavour group $SU(3)_e\times
SU(3)_{\ell+N}$ ($e$ and $\ell$ being, respectively, the SU(2) singlet and
doublet leptons). An intriguing feature of this realization is that, in the
normal hierarchy scenario for neutrino masses, it allows for sizeable
enhancements of $\mu \to e$ transitions with respect to LFV processes involving
the $\tau$ lepton. We also discuss how the symmetries of the type-I seesaw
allow for a strong suppression of the N mass scale with respect to the scale of
lepton number breaking, without implying a similar suppression for possible
mechanisms of N production.
|
In Machine to Machine (M2M) networks, a robust Medium Access Control (MAC)
protocol is crucial to enable numerous machine-type devices to concurrently
access the channel. Most literatures focus on developing simplex (reservation
or contention based)MAC protocols which cannot provide a scalable solution for
M2M networks with large number of devices. In this paper, a frame-based Hybrid
MAC scheme, which consists of a contention period and a transmission period, is
proposed for M2M networks. In the proposed scheme, the devices firstly contend
the transmission opportunities during the contention period, only the
successful devices will be assigned a time slot for transmission during the
transmission period. To balance the tradeoff between the contention and
transmission periods in each frame, an optimization problem is formulated to
maximize the system throughput by finding the optimal contention probability
during the contention period and the optimal number of devices that can
transmit during the transmission period. A practical hybrid MAC protocol is
designed to implement
the proposed scheme. The analytical and simulation results demonstrate the
effectiveness of the proposed Hybrid MAC protocol.
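The flavor of the contention-period optimization can be conveyed by the textbook single-slot computation below: with n devices each transmitting in a slot with probability p, the success probability n p (1-p)^{n-1} peaks at p = 1/n. The paper's problem couples this with the split between contention and transmission periods, which is not reproduced here.

```python
# Per-slot success probability of n contending devices and its maximizer,
# illustrating why the optimal contention probability scales as 1/n.
import numpy as np

def slot_success(n, p):
    return n * p * (1.0 - p)**(n - 1)

for n in (10, 100, 1000):
    ps = np.linspace(1e-4, 0.5, 20000)
    p_star = ps[np.argmax(slot_success(n, ps))]
    print(f"n={n:4d}: p* ~ {p_star:.4f} (1/n = {1/n:.4f}), "
          f"max success ~ {slot_success(n, p_star):.3f}")   # -> 1/e for large n
```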
|
This study investigates the vulnerabilities of data-driven offline signature
verification (DASV) systems to generative attacks and proposes robust
countermeasures. Specifically, we explore the efficacy of Variational
Autoencoders (VAEs) and Conditional Generative Adversarial Networks (CGANs) in
creating deceptive signatures that challenge DASV systems. Using the Structural
Similarity Index (SSIM) to evaluate the quality of forged signatures, we assess
their impact on DASV systems built with Xception, ResNet152V2, and DenseNet201
architectures. Initial results showed False Accept Rates (FARs) ranging from 0%
to 5.47% across all models and datasets. However, exposure to synthetic
signatures significantly increased FARs, with rates ranging from 19.12% to
61.64%. The proposed countermeasure, i.e., retraining the models with combined
real and synthetic datasets, was very effective, reducing FARs to between 0% and 0.99%.
These findings emphasize the necessity of investigating vulnerabilities in
security systems like DASV and reinforce the role of generative methods in
enhancing the security of data-driven systems.
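For clarity on the headline metric: the False Accept Rate is simply the fraction of forged (here, generated) signatures that the verifier accepts as genuine; the scores and threshold below are synthetic placeholders.

```python
# FAR computation sketch with synthetic verifier scores (placeholders).
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.5
scores_on_forgeries = rng.beta(2, 5, size=1000)   # stand-in verifier outputs
far = float((scores_on_forgeries > threshold).mean()) * 100
print(f"FAR = {far:.2f}%")
```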
|
We improve the resolvent estimate in the Kreiss matrix theorem for a set of
matrices that generate uniformly bounded semigroups. The new resolvent estimate
is proved to be equivalent to Kreiss's resolvent condition, and it better
describes the behavior of the resolvents at infinity.
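For reference, the semigroup form of the Kreiss resolvent condition at issue (standard statement, our notation) is:

```latex
% Kreiss resolvent condition for generators of uniformly bounded semigroups:
\begin{equation}
  \left\| (zI - A)^{-1} \right\| \le \frac{K}{\operatorname{Re} z}
  \qquad \text{for all } z \text{ with } \operatorname{Re} z > 0,
\end{equation}
% which, for N x N matrices, bounds sup_{t >= 0} ||e^{tA}|| by a constant of
% order K N (Kreiss matrix theorem); the improved estimate concerns the
% behavior of the resolvent as |z| -> infinity.
```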
|
We present the first detailed analysis of the statistical properties of jump
processes bounded by a saturation function and driven by Poisson white noise,
being a random sequence of delta pulses. The Kolmogorov-Feller equation for the
probability density function (PDF) of such processes is derived and its
stationary solutions are found analytically in the case of the symmetric
uniform distribution of pulse sizes. Surprisingly, these solutions can exhibit
very complex behavior arising from the boundedness of both the pulses and the
processes. We show that all features of the stationary PDF (number of branches,
their form, extreme values probability, etc.) are completely determined by the
ratio of the saturation function width to the half-width of the pulse-size
distribution. We verify all theoretical results by direct numerical
simulations.
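In the spirit of the verification mentioned above, the toy Monte Carlo below generates Poisson-arriving delta pulses with symmetric uniform sizes and clips the state with a saturation function after each jump; the specific update rule is our illustrative guess, not necessarily the model analyzed in the paper.

```python
# Toy simulation: Poisson-driven jumps with uniform sizes on [-a, a], state
# clipped by a saturation function of half-width m after each jump. The
# stationary histogram depends on the ratio m/a, echoing the result above.
import numpy as np

rng = np.random.default_rng(0)
a, m = 1.0, 0.6                      # pulse half-width, saturation half-width
n_jumps = 200_000

x, samples = 0.0, np.empty(n_jumps)
for i in range(n_jumps):
    x = np.clip(x + rng.uniform(-a, a), -m, m)   # jump, then saturate
    samples[i] = x

pdf, edges = np.histogram(samples, bins=60, range=(-m, m), density=True)
print("probability mass pinned at the boundaries |x| = m:",
      float(np.mean(np.abs(samples) >= m - 1e-12)))
```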
|
We study the receptive metric entropy for semigroup actions on probability
spaces, inspired by a similar notion of topological entropy introduced by
Hofmann and Stoyanov. We analyze its basic properties and its relation with the
classical metric entropy. In the case of semigroup actions on compact metric
spaces we compare the receptive metric entropy with the receptive topological
entropy looking for a Variational Principle. With this aim we propose several
characterizations of the receptive topological entropy. Finally we introduce a
receptive local metric entropy, inspired by a notion of Bowen generalized to the
classical setting of amenable group actions by Zheng and Chen, and we prove
partial versions of the Brin-Katok Formula and the local Variational Principle.
|