Magnetic skyrmions are topologically protected spin textures, stabilised in
systems with strong Dzyaloshinskii-Moriya interaction (DMI). Several studies
have shown that electrical currents can move skyrmions efficiently through
spin-orbit torques. While promising for technological applications,
current-driven skyrmion motion is intrinsically collective and accompanied by
undesired heating effects. Here we demonstrate a new approach to control
individual skyrmion positions precisely, which relies on the magnetic
interaction between the sample and a magnetic force microscopy (MFM) probe. We
investigate perpendicularly magnetised X/CoFeB/MgO multilayers, where for X = W
or Pt the DMI is sufficiently strong to allow for skyrmion nucleation in an
applied field. We show that these skyrmions can be manipulated individually
through the local field gradient generated by the scanning MFM probe with an
unprecedented level of accuracy. Furthermore, we show that the probe stray
field can assist skyrmion nucleation. Our proof-of-concept results offer a
current-free paradigm for efficient individual skyrmion control.
|
We consider the construction of asymptotically distribution-free
goodness-of-fit tests for several models of stochastic processes. The null
hypothesis for all models is composite parametric. All tests are based on
score-function processes in which the unknown parameter is replaced by the
MLE. We show that a special change of time transforms the limit score-function
processes into the Brownian bridge. This property allows us to construct
asymptotically distribution-free tests for the following models of stochastic
processes: dynamical systems with small noise, ergodic diffusion processes,
inhomogeneous Poisson processes, and nonlinear AR time series.
|
We investigate the phase diagram and, in particular, the nature of the
multicritical point in three-dimensional frustrated $N$-component spin models
with noncollinear order in the presence of an external field, for instance
easy-axis stacked triangular antiferromagnets in a magnetic field along the
easy axis. For this purpose we study the renormalization-group flow in a
Landau-Ginzburg-Wilson $\phi^4$ theory with symmetry
$O(2)\times[Z_2+O(N-1)]$ that is expected to describe the multicritical
behavior. We compute its MS $\beta$ functions to five loops. For $N\ge 4$,
their analysis does not support the hypothesis of an effective enlargement of
the symmetry at the multicritical point, from $O(2)\times[Z_2+O(N-1)]$ to
$O(2)\times O(N)$. For the physically interesting case $N=3$, the analysis
does not allow us to exclude the corresponding symmetry enlargement controlled
by the $O(2)\times O(3)$ fixed point. Moreover, it does not provide evidence
for any other stable fixed point. Thus, on the basis of our field-theoretical
results, the transition at the multicritical point is expected either to be
continuous and controlled by the $O(2)\times O(3)$ fixed point or to be of
first order.
|
The rise of online social networks has facilitated the evolution of social
recommender systems, which incorporate social relations to enhance users'
decision-making process. With the great success of Graph Neural Networks in
learning node representations, GNN-based social recommendations have been
widely studied to model user-item interactions and user-user social relations
simultaneously. Despite their great successes, recent studies have shown that
these advanced recommender systems are highly vulnerable to adversarial
attacks, in which attackers can inject well-designed fake user profiles to
disrupt recommendation performance. While most existing studies mainly focus
on targeted attacks that promote target items on vanilla recommender systems,
untargeted attacks that degrade the overall prediction performance are less
explored on social recommendations under a black-box scenario. To perform
untargeted attacks on social recommender systems, attackers can construct
malicious social relationships for fake users to enhance the attack
performance. However, coordinating social relations and item profiles is
challenging when attacking black-box social recommendations. To address this
limitation, we first conduct several preliminary studies demonstrating the
effectiveness of cross-community connections and cold-start items in degrading
recommendation performance. Building on these findings, we propose
Multiattack, a novel framework based on multi-agent reinforcement learning
that coordinates the generation of cold-start item profiles and
cross-community social relations for conducting untargeted attacks on
black-box social recommendations.
Comprehensive experiments on various real-world datasets demonstrate the
effectiveness of our proposed attacking framework under the black-box setting.
|
We introduced and analyzed robust recovery-based a posteriori error
estimators for various lower order finite element approximations to interface
problems in [9, 10], where the recoveries of the flux and/or gradient are
implicit (i.e., requiring solutions of global problems with mass matrices). In
this paper, we develop fully explicit recovery-based error estimators for
lower-order conforming, mixed, and nonconforming finite element approximations
to diffusion problems with a full coefficient tensor. When the diffusion
coefficient is a piecewise constant scalar and its distribution is locally
quasi-monotone, it is shown theoretically that the estimators developed in
this paper are robust
with respect to the size of jumps. Numerical experiments are also performed to
support the theoretical results.
|
The principles of statistical mechanics and information theory play an
important role in learning and have inspired both theory and the design of
numerous machine learning algorithms. The new aspect in this paper is a focus
on integrating feedback from the learner. A quantitative approach to
interactive learning and adaptive behavior is proposed, integrating
model-making and decision-making into one theoretical framework. This paper
follows simple
principles by requiring that the observer's world model and action policy
should result in maximal predictive power at minimal complexity. Classes of
optimal action policies and of optimal models are derived from an objective
function that reflects this trade-off between prediction and complexity. The
resulting optimal models then summarize, at different levels of abstraction,
the process's causal organization in the presence of the learner's actions. A
fundamental consequence of the proposed principle is that the learner's optimal
action policies balance exploration and control as an emerging property.
Interestingly, the explorative component is present even in the absence of
policy randomness, i.e., in the optimal deterministic behavior. This is a
direct result
of requiring maximal predictive power in the presence of feedback.
|
We report high-resolution single-crystal inelastic neutron scattering
measurements on the spin-1/2 antiferromagnet Ba(TiO)Cu$_4$(PO$_4$)$_4$. This
material is formed from layers of four-site "cupola" structures, oriented
alternately upwards and downwards, which constitute a rather special
realization of two-dimensional (2D) square-lattice magnetism. The strong
Dzyaloshinskii-Moriya (DM) interaction within each cupola, or plaquette, unit
has a geometry largely unexplored among the numerous studies of magnetic
properties in 2D Heisenberg models with spin and spatial anisotropies. We have
measured the magnetic excitations at zero field and in fields up to 5 T,
finding a complex mode structure with multiple characteristic features that
allow us to extract all the relevant magnetic interactions by modelling within
the linear spin-wave approximation. We demonstrate that
Ba(TiO)Cu$_4$(PO$_4$)$_4$ is a checkerboard system with almost equal intra- and
inter-plaquette couplings, in which the intra-plaquette DM interaction is
instrumental both in enforcing robust magnetic order and in opening a large gap
at the Brillouin-zone center. We place our observations in the perspective of
generalized phase diagrams for spin-1/2 square-lattice models and materials,
where exploring anisotropies and frustration as routes to quantum disorder
remains a frontier research problem.
|
We show how to efficiently count and generate uniformly at random finitely
generated subgroups of the modular group $\textsf{PSL}(2,\mathbb{Z})$ of a
given isomorphism type. The method to achieve these results relies on a natural
map of independent interest, which associates with any finitely generated
subgroup of $\textsf{PSL}(2,\mathbb{Z})$ a graph which we call its silhouette,
and which can be interpreted as a conjugacy class of free finite index
subgroups of $\textsf{PSL}(2,\mathbb{Z})$.
|
We obtain estimates for the nonlinear variational capacity of annuli in
weighted R^n and in metric spaces. We introduce four different (pointwise)
exponent sets, show that they all play fundamental roles for capacity
estimates, and also demonstrate that whether an end point of an exponent set is
attained or not is important. As a consequence of our estimates we obtain, for
instance, criteria for points to have zero (resp. positive) capacity. Our
discussion holds in rather general metric spaces, including Carnot groups and
many manifolds, but it is just as relevant on weighted R^n. Indeed, to
illustrate the sharpness of our estimates, we give several examples of radially
weighted R^n, which are based on quasiconformality of radial stretchings in
R^n.
|
We calculate the thermal conductivity of electrons produced by
electron-electron Coulomb scattering in a strongly degenerate electron gas
taking into account the Landau damping of transverse plasmons. The Landau
damping strongly reduces this conductivity in the domain of ultrarelativistic
electrons at temperatures below the electron plasma temperature. In the inner
crust of a neutron star at temperatures T < 1e7 K this thermal conductivity
completely dominates over the electron conductivity due to electron-ion
(electron-phonon) scattering and becomes competitive with the electron
conductivity due to scattering of electrons by impurity ions.
|
In this article, several 2+1 dimensional lattice hierarchies proposed by
Blaszak and Szum [J. Math. Phys. {\bf 42}, 225 (2001)] are further investigated.
We first describe their discrete zero curvature representations. Then, by means
of solving the corresponding discrete spectral equation, we demonstrate the
existence of infinitely many conservation laws for them and obtain explicit
formulae for the corresponding conserved densities and associated fluxes.
Thus, their integrability is further confirmed.
|
We perform a global leading-order QCD fit to recent polarized structure
function data in order to extract a consistent set of spin-dependent parton
distributions. Assuming that there is no significant intrinsic polarization of
the quark sea, the data are consistent with a modest amount of the proton's
spin carried by the gluon, although the shape of the gluon distribution is not
well constrained. We show how inelastic $J/\psi$ production in polarized
photon-hadron scattering can, in principle, provide definitive information on
the shape of the gluon distribution. (Talk presented by W.J.Stirling at the
27th International Conference on High Energy Physics, Glasgow, July 1994)
|
The CH$_3$O and CH$_2$OH radicals can be important precursors of complex
organic molecules (COMs) on interstellar dust. COMs presumably originating
from these radicals have been found abundantly in various astronomical
objects.
Because each radical leads to different types of COMs, determining the
abundance ratio of CH$_3$O to CH$_2$OH is crucial for a better understanding of
the chemical evolution to various COMs. Recent work suggested that the reaction
between CH$_3$OH and OH on ice dust plays an important role in forming CH$_3$O
and CH$_2$OH radicals. However, quantitative details on the abundance of these
radicals have not been presented to date. Herein, we experimentally determined
the branching ratio (CH$_3$O/CH$_2$OH) resulting from the CH$_3$OH + OH
reaction on the water ice surface at 10 K to be 4.3 $\pm$ 0.6. Furthermore, the
CH$_3$O product in the reaction would participate in subsequent diffusive
reactions even at a temperature as low as 10 K. This fact should provide
critical information for COMs formation models in cold molecular clouds.
|
We describe the asymptotic behavior of the number $Z_n[a_n,\infty)$ of
individuals with a large value in a stable bifurcating autoregressive process.
The study of the associated first moment $\mathbb{E}(Z_n[a_n,\infty))$ is
equivalent to the annealed large deviation problem $\mathbb{P}(Y_n\geq a_n)$,
where $Y$ is an autoregressive process in a random environment and
$a_n\rightarrow \infty$.
The population with large values and the trajectorial behavior of
$Z_n[a_n,\infty)$ are obtained from the ancestral paths associated with the
large deviations of $Y$ together with its environment. The study of large
deviations of autoregressive processes in a random environment is of
independent interest and is achieved here for the first time. The proofs of
the trajectorial estimates for the bifurcating autoregressive process then
involve a law of large numbers for non-homogeneous trees.
Two regimes appear in the stable case, depending on whether one of the
autoregressive parameters is greater than one. This yields two different
asymptotic behaviors for the large local densities and the maximal value of
the bifurcating autoregressive process.
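As a toy illustration (not from the paper), a bifurcating autoregressive process assigns to each individual $k$ two children $2k$ and $2k+1$ whose values follow autoregressive recursions with parameters $a$ and $b$; $Z_n[a_n,\infty)$ then counts generation-$n$ individuals above the threshold $a_n$. All parameter values below are made up for the demo:

```python
import numpy as np

# Toy stable BAR process: the two children of individual k with value X_k
# follow X_{2k} = a*X_k + noise and X_{2k+1} = b*X_k + noise'.
a, b, sigma = 0.5, 0.9, 1.0        # both |a|, |b| < 1: stable case
rng = np.random.default_rng(0)

gen = np.array([0.0])              # generation 0: the root individual
for _ in range(12):                # grow 12 generations of the binary tree
    noise = rng.normal(0.0, sigma, size=2 * gen.size)
    gen = np.concatenate([a * gen, b * gen]) + noise

a_n = 4.0                          # large-value threshold
z_n = int(np.sum(gen >= a_n))      # Z_n[a_n, infinity): count of exceedances
```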
|
Porous materials are used in a variety of industrial applications owing to
their large surface areas, large pore volumes, hierarchical porosities, and low
densities. The motion of particles inside the pores of porous materials has
attracted considerable attention. We investigated nano-particle motion in a
porous material using the single-particle tracking method. Particle motion
such as adsorption and desorption at the wall was observed. The displacement
probability distribution deviated from the Gaussian distribution at the tail,
indicating non-Gaussian motion of the particles. Moreover, an analysis of the
relative angle between three consecutive particle positions revealed that the
probability of the particle moving backward was approximately twice that of the
particle moving forward. These results indicate that particle motion inside
porous materials is highly complex and that single-particle studies are
essential for fabricating structures suitable for applications.
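The relative-angle analysis mentioned above can be sketched as follows: for three consecutive positions, the turning angle between the two successive displacement vectors distinguishes forward motion (angle near 0°) from backward motion (angle near 180°). A minimal illustration on synthetic data (not the authors' code; the anti-persistent trajectory below is made up to mimic wall rebounds):

```python
import numpy as np

def turning_angles(positions):
    """Angles (degrees) between successive displacement vectors of a 2D track."""
    steps = np.diff(positions, axis=0)
    v1, v2 = steps[:-1], steps[1:]
    cos = np.einsum('ij,ij->i', v1, v2) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Synthetic anti-persistent walk: each step partially reverses the previous
# one, mimicking a particle bouncing back from pore walls.
rng = np.random.default_rng(1)
pos = np.zeros((5001, 2))
step = rng.normal(size=2)
for i in range(1, 5001):
    step = -0.5 * step + rng.normal(scale=0.5, size=2)
    pos[i] = pos[i - 1] + step

ang = turning_angles(pos)
backward = np.mean(ang > 90)   # fraction of backward moves
forward = np.mean(ang < 90)    # fraction of forward moves
# For this anti-persistent track, backward moves dominate forward ones.
```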
|
The direct URCA process of rapid neutrino emission can occur in nonuniform
nuclear pasta phases that are expected in the inner crust of neutron stars.
Here, the periodic potential for a nucleon in nuclear pasta allows momentum
conservation to be satisfied for direct URCA reactions. We improve on earlier
work by modeling a rich variety of pasta phases (gnocchi, waffle, lasagna, and
anti-spaghetti) with large-scale molecular dynamics simulations. We find that
the neutrino luminosity due to direct URCA reactions in nuclear pasta can be 3
to 4 orders of magnitude larger than that from the modified URCA process in the
NS core. Thus neutrino radiation from pasta could dominate radiation from the
core, and this could significantly impact the cooling of neutron stars.
|
We investigate spike-timing-dependent plasticity (STDP) in the case of a
synapse connecting two neural cells. We develop a theoretical analysis of
several STDP rules using Markovian theory. In this context there are two
different timescales, fast neural activity and slower synaptic weight updates.
Exploiting this timescale separation, we derive the long-time limits of a
single synaptic weight subject to STDP. We show that the pairing model of
presynaptic and postsynaptic spikes controls the synaptic weight dynamics for
small external input, on an excitatory synapse. This result implies in
particular that mean-field analysis of plasticity may miss some important
properties of STDP. Anti-Hebbian STDP seems to favor the emergence of a stable
synaptic weight, but only for high external input. In the case of an
inhibitory synapse the pairing schemes matter less, and we observe convergence
of the synaptic weight to a non-null value only for Hebbian STDP. We
extensively study different asymptotic regimes for STDP rules, raising
interesting questions for future work on adaptive neural networks and, more
generally, on adaptive systems.
|
Given a fluid equation with reduced Lagrangian $l$ which is a functional of
velocity $\mathbf{u}$ and advected density $D$ given in Eulerian coordinates,
we
give a general method for semidiscretising the equations to give a canonical
Hamiltonian system; this system may then be integrated in time using a
symplectic integrator. The method is Lagrangian, with the variables being a set
of Lagrangian particle positions and their associated momenta. The canonical
equations obtained yield a discrete form of Euler-Poincar\'e equations for $l$
when projected onto the grid, with a new form of discrete calculus to represent
the gradient and divergence operators. Practical symplectic time integrators
are suggested for a large family of equations which include the shallow-water
equations, the EP-Diff equations and the 3D compressible Euler equations, and
we illustrate the technique by showing results from a numerical experiment for
the EP-Diff equations.
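As background (not from the paper), a canonical Hamiltonian system obtained by such a semidiscretisation can be advanced with any standard symplectic scheme. A minimal sketch of the symplectic (semi-implicit) Euler step for particle positions $q$ and momenta $p$ under a separable Hamiltonian $H = p^2/2m + V(q)$; the harmonic potential here is a stand-in, not the fluid system itself:

```python
import numpy as np

def symplectic_euler(q, p, force, m=1.0, dt=0.01):
    """One semi-implicit Euler step: kick momenta first, then drift positions."""
    p = p + dt * force(q)      # p_{n+1} = p_n + dt * F(q_n)
    q = q + dt * p / m         # q_{n+1} = q_n + dt * p_{n+1} / m
    return q, p

# Harmonic-oscillator stand-in: the energy stays bounded over long runs,
# which is the point of using a symplectic integrator.
force = lambda q: -q
q, p = np.array([1.0]), np.array([0.0])
for _ in range(10000):
    q, p = symplectic_euler(q, p, force)
energy = 0.5 * p**2 + 0.5 * q**2   # stays near the initial energy 0.5
```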
|
The existence of a universal learning architecture in human cognition is a
widespread conjecture supported by experimental findings from neuroscience.
While no low-level implementation can be specified yet, an abstract outline of
human perception and learning is believed to entail three basic properties: (a)
hierarchical attention and processing, (b) memory-based knowledge
representation, and (c) progressive learning and knowledge compaction. We
approach the design of such a learning architecture from a system-theoretic
viewpoint, developing a closed-loop system with three main components: (i) a
multi-resolution analysis pre-processor, (ii) a group-invariant feature
extractor, and (iii) a progressive knowledge-based learning module.
Multi-resolution feedback loops are used for learning, i.e., for adapting the
system parameters to online observations. To design (i) and (ii), we build upon
the established theory of wavelet-based multi-resolution analysis and the
properties of group convolution operators. Regarding (iii), we introduce a
novel learning algorithm that constructs progressively growing knowledge
representations in multiple resolutions. The proposed algorithm is an extension
of the Online Deterministic Annealing (ODA) algorithm based on annealing
optimization, solved using gradient-free stochastic approximation. ODA has
inherent robustness and regularization properties and provides a means to
progressively increase the complexity of the learning model, i.e., the number
of neurons, as needed, through an intuitive bifurcation phenomenon. The
proposed multi-resolution approach is hierarchical, progressive,
knowledge-based, and interpretable. We illustrate the properties of the
proposed architecture in the context of the state-of-the-art learning
algorithms and deep learning methods.
|
We study the Chow group of zero-cycles of smooth projective varieties over
local and strictly local fields. We prove in particular the injectivity of the
cycle class map to integral l-adic cohomology for a large class of surfaces
with positive geometric genus, over local fields of residue characteristic
different from l. The same statement holds for semistable K3 surfaces defined
over C((t)), but does not hold in general for surfaces over strictly local
fields.
|
Variational quantum algorithms dominate gate-based applications of modern
quantum processors. The so-called {\it layer-wise trainability conjecture}
appears in various works throughout the variational quantum computing
literature. The conjecture asserts that a quantum circuit can be trained
piece-wise, e.g.~that a few layers can be trained in sequence to minimize an
objective function. Here we prove this conjecture false. Counterexamples are
found by considering objective functions that are exponentially close (in the
number of qubits) to the identity matrix. In the finite setting, we found
abrupt transitions in the ability of quantum circuits to be trained to minimize
these objective functions. Specifically, we found that below a critical
(target-gate-dependent) threshold, circuit training terminates close to the
identity and remains near the identity for subsequently added blocks trained
piece-wise. Above a critical layer depth, training abruptly converges
arbitrarily close to the target, thereby minimizing the objective function.
These findings shed new
light on the divide-and-conquer trainability of variational quantum circuits
and apply to a wide collection of contemporary literature.
|
This extended abstract presents our recent work on the leader-following
consensus control for generic linear multi-agent systems. An improved dynamic
event-triggered control framework is proposed, based on a moving-average
approach. The proposed methods involve model-based estimation and clock-like
auxiliary dynamic variables to make the inter-event times as long as possible.
Compared to the static event-triggered strategy and the
existing state-of-the-art dynamic event-triggered mechanism, the proposed
approach significantly reduces the communication frequency while still
guaranteeing asymptotic convergence. Numerical simulations demonstrate the
validity of the proposed theoretical results.
|
A minimally constructed $\Lambda$-nucleus density-dependent optical potential
is used to calculate binding energies of observed $1s_{\Lambda}$ and
$1p_{\Lambda}$ states across the periodic table, leading to a repulsive
$\Lambda NN$ contribution $D_{\Lambda}^{(3)}\approx 14$ MeV to the
phenomenological $\Lambda$-nucleus potential depth $D_{\Lambda}\approx -30$
MeV. This value is significant in connection with the so-called 'hyperon
puzzle'.
|
We present the first attempt to fit the light curve of the interstellar
visitor `Oumuamua using a physical model which includes optional torque. We
consider both conventional (Lommel-Seeliger triaxial ellipsoid) and alternative
("black-and-white ball", "solar sail") brightness models. With all the
brightness models, some torque is required to explain the timings of the most
conspicuous features -- deep minima -- of the asteroid's light curve. Our
best-fitting models are a thin disc (aspect ratio 1:6) and a thin cigar (aspect
ratio 1:8) which are very close to being axially symmetric. Both models are
tumbling and require some torque which has the same amplitude in relation to
`Oumuamua's linear non-gravitational acceleration as in Solar System comets
whose dynamics are affected by outgassing. Assuming random orientation of the
angular momentum vector, we compute probabilities for our best-fitting models.
We show that cigar-shaped models suffer from a fine-tuning problem and have
only a 16 per cent probability of producing light curve minima as deep as the
ones
present in `Oumuamua's light curve. Disc-shaped models, on the other hand, are
very likely (at 91 per cent) to produce minima of the required depth. From our
analysis, the most likely model for `Oumuamua is a thin disc (slab)
experiencing moderate torque from outgassing.
|
In multiview geometry, when correspondences among multiple views are unknown,
the image points can be understood as being unlabeled. This is a common
problem
in computer vision. We give a novel approach to handle such a situation by
regarding unlabeled point configurations as points on the Chow variety
$\text{Sym}_m(\mathbb{P}^2)$. For two unlabeled points we design an algorithm
that solves the triangulation problem with unknown correspondences. Further,
the unlabeled multiview variety $\text{Sym}_m(V_A)$ is studied.
|
The frequency-domain Kalman filter (FKF) has been utilized in many audio
signal processing applications due to its fast convergence speed and
robustness. However, the performance of the FKF in under-modeling situations
has not been investigated. This paper presents an analysis of the steady-state
behavior of the commonly used diagonalized FKF and reveals that it suffers from
a biased solution in under-modeling scenarios. Two efficient improvements of
the FKF are proposed, both having the benefit of guaranteed optimal
steady-state behavior at the cost of a very limited increase in the
computational burden. The convergence behavior of the proposed algorithms is
also compared analytically. Computer simulations are conducted to validate the
improved performance of the proposed methods.
|
We propose a self-supervised Gaussian ATtention network for image Clustering
(GATCluster). Rather than extracting intermediate features first and then
performing the traditional clustering algorithm, GATCluster directly outputs
semantic cluster labels without further post-processing. Theoretically, we
give a Label Feature Theorem to guarantee that the learned features are
one-hot encoded vectors and that trivial solutions are avoided. To train
GATCluster in a
completely unsupervised manner, we design four self-learning tasks with the
constraints of transformation invariance, separability maximization, entropy
analysis, and attention mapping. Specifically, the transformation invariance
and separability maximization tasks learn the relationships between sample
pairs. The entropy analysis task aims to avoid trivial solutions. To capture
the object-oriented semantics, we design a self-supervised attention mechanism
that includes a parameterized attention module and a soft-attention loss. All
the guiding signals for clustering are self-generated during the training
process. Moreover, we develop a two-step learning algorithm that is
memory-efficient for clustering large-size images. Extensive experiments
demonstrate the superiority of our proposed method in comparison with the
state-of-the-art image clustering benchmarks. Our code has been made publicly
available at https://github.com/niuchuangnn/GATCluster.
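For illustration only (not the authors' implementation), the entropy-analysis idea above — penalising degenerate clusterings in which all samples collapse into a single cluster — can be sketched as a loss on the batch-averaged assignment probabilities; the function name and values are hypothetical:

```python
import numpy as np

def entropy_balance_loss(probs, eps=1e-12):
    """Negative entropy of the mean cluster-assignment distribution.

    probs: (batch, n_clusters) soft assignments, each row summing to 1.
    Minimising this loss pushes the batch to spread over all clusters,
    ruling out the trivial one-cluster solution.
    """
    mean_assign = probs.mean(axis=0)
    return float(np.sum(mean_assign * np.log(mean_assign + eps)))

collapsed = np.tile([1.0, 0.0, 0.0], (8, 1))   # trivial solution
balanced = np.tile([1/3, 1/3, 1/3], (8, 1))    # uniform cluster usage
# The collapsed assignment incurs a strictly larger loss than the
# balanced one, so the penalty steers training away from it.
```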
|
We give a "modern" version, based on Mori theory, of the classification of
birational involutions of P^2 up to conjugacy. The result has been known for
more than one century but the classical proofs are not always convincing.
|
Based on the experimental finding that the mass-square of the neutrino is
negative, a quantum equation for the superluminal neutrino is proposed in
comparison with the Dirac equation and the Dirac equation with imaginary mass.
A virtual particle may
also be viewed as superluminal one. Both the basic symmetry of space-time
inversion and the maximum violation of space-inversion symmetry are emphasized.
|
Three entanglement concentration protocols (ECPs) are proposed. The first ECP
and a modified version of that are shown to be useful for the creation of
maximally entangled cat and GHZ-like states from their non-maximally entangled
counterparts. The last two ECPs are designed for the creation of maximally
entangled $(n+1)$-qubit state
$\frac{1}{\sqrt{2}}\left(|\Psi_{0}\rangle|0\rangle+|\Psi_{1}\rangle|1\rangle\right)$
from the partially entangled $(n+1)$-qubit normalized state
$\alpha|\Psi_{0}\rangle|0\rangle+\beta|\Psi_{1}\rangle|1\rangle$, where
$\langle\Psi_{1}|\Psi_{0}\rangle=0$ and $|\alpha|\neq\frac{1}{\sqrt{2}}$. It is
also shown that W, GHZ, GHZ-like, Bell and cat states and specific states from
the 9 SLOCC-nonequivalent families of 4-qubit entangled states can be expressed
as
$\frac{1}{\sqrt{2}}\left(|\Psi_{0}\rangle|0\rangle+|\Psi_{1}\rangle|1\rangle\right)$
and consequently the last two ECPs proposed here are applicable to all these
states. Quantum circuits for implementation of the proposed ECPs are provided
and it is shown that the proposed ECPs can be realized using linear optics.
Efficiency of the ECPs is studied using a recently introduced quantitative
measure (Phys. Rev. A $\textbf{85}$, 012307 (2012)). Limitations of the measure
are also reported.
|
A novel approach to the class of cosmic barotropic fluids in which the speed
of sound squared is defined as a function of the equation-of-state parameter,
the so-called $c_s^2(w)$ models, is examined. For this class of models, a new
analytical reconstruction method is introduced for finding their equivalent
purely kinetic k-essence formulation. The method is explicitly demonstrated for
several $c_s^2(w)$ models. The application of the obtained explicit or closed
form solutions in understanding dark sector unification models is discussed.
|
In the Daya Bay Reactor Neutrino Experiment 960 20-cm-diameter waterproof
photomultiplier tubes are used to instrument three water pools as Cherenkov
detectors for detecting cosmic-ray muons. Of these 960 photomultiplier tubes,
341 are recycled from the MACRO experiment. A systematic program was undertaken
to refurbish them as waterproof assemblies. A success rate better than 97%
was achieved in the water leakage check. Details of the
design, fabrication, testing, operation, and performance of these waterproofed
photomultiplier-tube assemblies are presented.
|
It is well-known that staggered fermions do not necessarily satisfy the same
global symmetries as the continuum theory. We analyze the mechanism behind this
phenomenon for arbitrary dimension and gauge group representation. For this
purpose we vary the number of lattice sites between even and odd parity in each
single direction. Since the global symmetries are manifest in the lowest
eigenvalues of the Dirac operator, the spectral statistics and also the
symmetry breaking pattern will be affected. We analyze these effects and
compare our predictions with Monte-Carlo simulations of naive Dirac operators
in the strong coupling limit.
|
Federated learning is a promising distributed training paradigm that
effectively safeguards data privacy. However, it may involve significant
communication costs, which hinders training efficiency. In this paper, we aim
to enhance communication efficiency from a new perspective. Specifically, we
request the distributed clients to find optimal model updates relative to
global model parameters within predefined random noise. For this purpose, we
propose Federated Masked Random Noise (FedMRN), a novel framework that enables
clients to learn a 1-bit mask for each model parameter and apply masked random
noise (i.e., the Hadamard product of random noise and masks) to represent model
updates. To make FedMRN feasible, we propose an advanced mask training
strategy, called progressive stochastic masking (PSM). After local training,
each client only needs to transmit its local masks and a random seed to the
server.
Additionally, we provide theoretical guarantees for the convergence of FedMRN
under both strongly convex and non-convex assumptions. Extensive experiments
are conducted on four popular datasets. The results show that FedMRN exhibits
superior convergence speed and test accuracy compared to relevant baselines,
while attaining a similar level of accuracy as FedAvg.
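The core representation in FedMRN — a model update expressed as the Hadamard product of seeded random noise and a learned 1-bit mask — can be sketched as follows. This is a simplified illustration based only on the abstract: the sign-agreement rule below is a stand-in for the PSM mask training, and all names and sizes are made up:

```python
import numpy as np

def decode_update(seed, mask, shape, scale=0.01):
    """Reconstruct a model update from a random seed and a 1-bit mask."""
    noise = np.random.default_rng(seed).normal(0.0, scale, size=shape)
    return noise * mask            # Hadamard product: masked random noise

# Client side: choose the mask whose masked noise best matches the true
# local update (here: keep entries whose noise sign agrees with it).
rng = np.random.default_rng(42)
true_update = rng.normal(0.0, 0.01, size=100)
seed = 7
noise = np.random.default_rng(seed).normal(0.0, 0.01, size=100)
mask = (np.sign(noise) == np.sign(true_update)).astype(np.float64)

# Server side: only `seed` and the 1-bit-per-parameter `mask` are needed
# to reconstruct the update, instead of full-precision parameters.
update = decode_update(seed, mask, shape=100)
# The reconstructed masked noise correlates positively with the true update.
```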
|
Armchair biphenylene nanoribbons are investigated by using density functional
theory. The nanoribbon that contains one biphenylene subunit in a unit cell is
a semiconductor with a direct band gap larger than 1 eV, while that containing
four biphenylene subunits is a metal. The semiconducting nanoribbon has a
high electron mobility of 57174 cm$^2$V$^{-1}$s$^{-1}$, superior to armchair
graphene nanoribbons. Negative differential resistance behavior is observed
in two electronic devices composed of the semiconducting and metallic
nanoribbons. The on/off ratios are on the order of $10^3$. All these indicate
that armchair
biphenylene nanoribbons are potential candidates for ultra-small logic devices.
|
We introduce PathGAN, a deep neural network for visual scanpath prediction
trained on adversarial examples. A visual scanpath is the sequence of
fixation points that a human observer's gaze follows over an image.
PathGAN is composed of two parts, the generator and the discriminator. Both
parts extract features from images using off-the-shelf networks, and train
recurrent layers to generate or discriminate scanpaths accordingly. In scanpath
prediction, the stochastic nature of the data makes it very difficult to
generate realistic predictions using supervised learning strategies, but we
adopt adversarial training as a suitable alternative. Our experiments show that
PathGAN improves the state of the art in visual scanpath prediction on the iSUN
and Salient360! datasets. Source code and models are available at
https://imatge-upc.github.io/pathgan/
|
Estimating the motion of the camera together with the 3D structure of the
scene from a monocular vision system is a complex task that often relies on the
so-called scene rigidity assumption. When observing a dynamic environment, this
assumption is violated which leads to an ambiguity between the ego-motion of
the camera and the motion of the objects. To solve this problem, we present a
self-supervised learning framework for 3D object motion field estimation from
monocular videos. Our contributions are two-fold. First, we propose a two-stage
projection pipeline to explicitly disentangle the camera ego-motion and the
object motions with a dynamics attention module, called DAM. Specifically, we
design an integrated motion model that estimates the motion of the camera and
object in the first and second warping stages, respectively, controlled by the
attention module through a shared motion encoder. Second, we propose object
motion field estimation through contrastive sample consensus, called CSAC,
taking advantage of weak semantic prior (bounding box from an object detector)
and geometric constraints (each object respects the rigid body motion model).
Experiments on KITTI, Cityscapes, and Waymo Open Dataset demonstrate the
relevance of our approach and show that our method outperforms state-of-the-art
algorithms for the tasks of self-supervised monocular depth estimation, object
motion segmentation, monocular scene flow estimation, and visual odometry.
|
The equation of motion for the two-fermion two-time correlation function in
the pairing channel is considered at finite temperature. Within the Matsubara
formalism, the Dyson-type Bethe-Salpeter equation (Dyson-BSE) with the
frequency-dependent interaction kernel is obtained. Similarly to the case of
zero temperature, it is decomposed into the static and dynamical components,
where the former is given by the contraction of the bare interaction with the
two-fermion density and the latter is represented by the double contraction of
the four-fermion two-time correlation function, or propagator, with two
interaction matrix elements. The dynamical kernel with the four-body
propagator, being formally exact, requires approximations to avoid generating
a prohibitively complicated hierarchy of equations. We focus on the approximation
where the dynamical interaction kernel is truncated on the level of two-body
correlation functions, neglecting the irreducible three-body and higher-rank
correlations. Such a truncation leads to the dynamical kernel with the coupling
between correlated fermionic pairs, which can be interpreted as emergent
bosonic quasibound states, or phonons, of normal and superfluid nature. The
latter ones are, thus, the mediators of the dynamical superfluid pairing. In
this framework, we obtain a closed system of equations for the fermionic
particle-hole and particle-particle propagators. This allows us to study the
temperature dependence of the pairing gap beyond the Bardeen-Cooper-Schrieffer
approximation, which is implemented for medium-heavy nuclear systems. The cases
of 68Ni and 44,46Ca are discussed in detail.
|
Savage and Sagan have recently defined a notion of st-Wilf equivalence for
any permutation statistic st and any two sets of permutations $\Pi$ and $\Pi'$.
In this paper we give a thorough investigation of st-Wilf equivalence for the
charge statistic on permutations and use a bijection between the charge
statistic and the major index to prove a conjecture of Dokos, Dwyer, Johnson,
Sagan and Selsor regarding powers of 2 and the major index.
|
The objective of this project is to solve one of the major problems faced by
people with word-processing difficulties caused by trauma or mild mental
disability. "ARTH" is short for Algorithm for Reading Handily. ARTH is
a self-learning set of algorithms that is an intelligent way of fulfilling the
need for "reading and understanding the text effortlessly" which adjusts
according to the needs of every user. The research project propagates in two
steps. In the first step, the algorithm tries to identify the difficult words
present in the text based on two features -- the number of syllables and usage
frequency -- using a clustering algorithm. After the analysis of the clusters,
the algorithm labels these clusters, according to their difficulty level. In
the second step, the algorithm interacts with the user. It aims to test the
user's comprehension of the text and his/her vocabulary level through an
automatically generated quiz. The algorithm identifies the clusters which are
difficult for the user, based on the result of the analysis. The meaning of
perceived difficult words is displayed next to them. The technology "ARTH"
focuses on the revival of the joy of reading among those people, who have a
poor vocabulary or any word processing issues.
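The first-step clustering on the two stated features -- syllable count and usage frequency -- might look like the sketch below. This is a toy 2-means on hand-picked words with invented per-million frequencies; ARTH's actual feature scaling and clustering algorithm may differ.

```python
import math

def syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    count, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev:
            count += 1
        prev = is_vowel
    return max(count, 1)

def difficult_words(freq_per_million):
    """2-means on (syllable count, rarity); the cluster with the higher
    mean syllable count is labeled 'difficult'."""
    feats = {w: (syllables(w), 1.0 / f) for w, f in freq_per_million.items()}
    pts = list(feats.values())
    a, b = min(pts), max(pts)          # initialize centroids from extremes
    for _ in range(10):
        ca, cb = [], []
        for p in pts:
            (ca if math.dist(p, a) <= math.dist(p, b) else cb).append(p)
        if ca:
            a = tuple(sum(x) / len(ca) for x in zip(*ca))
        if cb:
            b = tuple(sum(x) / len(cb) for x in zip(*cb))
    hard, easy = (a, b) if a[0] > b[0] else (b, a)
    return {w for w, p in feats.items()
            if math.dist(p, hard) <= math.dist(p, easy)}

words = {"the": 5000, "cat": 1000, "dog": 800,
         "extraordinary": 15, "incomprehensible": 2, "unintelligible": 1}
hard = difficult_words(words)          # long, rare words end up here
```

After this unsupervised step, the quiz results would decide which of the labeled clusters are difficult for a particular user.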
|
The ubiquity of smart phones and electronic devices has placed a wealth of
information at the fingertips of consumers as well as creators of digital
content. This has led to millions of notifications being issued each second
from alerts about posted YouTube videos to tweets, emails and personal
messages. Add work-related notifications, and we can see how quickly the
number of notifications increases. Not only does this reduce
productivity and concentration, but it has also been shown to cause alert fatigue.
This condition makes users desensitized to notifications, causing them to
ignore or miss important alerts. Depending on what domain users work in, the
cost of missing a notification can vary from a mere inconvenience to life and
death. Therefore, in this work, we propose an alert and notification framework
that intelligently issues, suppresses and aggregates notifications, based on
event severity, user preferences, or schedules, to minimize the need for users
to ignore, or snooze their notifications and potentially forget about
addressing important ones. Our framework can be deployed as a backend service,
but is better suited to being integrated into proactive conversational agents (a
field receiving much attention in the digital transformation era), email
services, news services and others. However, the main challenge lies in
developing the right machine learning algorithms that can learn models from a
wide set of users while customizing these models to individual users'
preferences.
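A minimal sketch of the issue/suppress/aggregate policy described above follows. The severity levels, sources, and digest format are invented for illustration; the proposed framework would learn such thresholds and preferences per user rather than hard-coding them.

```python
from collections import defaultdict

SEVERITY = {"info": 0, "warning": 1, "critical": 2}

def triage(notifications, min_severity="warning", pass_through="critical"):
    """Suppress alerts below min_severity, issue pass_through-level alerts
    immediately, and aggregate the remainder into one digest per source."""
    issued, digests = [], defaultdict(list)
    for note in notifications:
        level = SEVERITY[note["severity"]]
        if level < SEVERITY[min_severity]:
            continue                                  # suppressed
        if level >= SEVERITY[pass_through]:
            issued.append(note)                       # issued immediately
        else:
            digests[note["source"]].append(note["text"])
    for source, texts in digests.items():
        issued.append({"severity": "digest", "source": source,
                       "text": f"{len(texts)} alerts: " + "; ".join(texts)})
    return issued

notes = [{"severity": "info", "source": "video", "text": "new upload"},
         {"severity": "warning", "source": "ops", "text": "disk 80% full"},
         {"severity": "warning", "source": "ops", "text": "disk 85% full"},
         {"severity": "critical", "source": "ops", "text": "service down"}]
out = triage(notes)   # the critical alert plus one two-item ops digest
```

The info alert is dropped, the two warnings collapse into a single digest, and the critical alert passes through untouched -- reducing four interruptions to two.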
|
Partial classification popularly known as nugget discovery comes under
descriptive knowledge discovery. It involves mining rules for a target class of
interest. Classification "If-Then" rules are the most sought after by decision
makers since they are the most comprehensible form of knowledge mined by data
mining techniques. The rules have certain properties namely the rule metrics
which are used to evaluate them. Mining rules with user specified properties
can be considered as a multi-objective optimization problem since the rules
have to satisfy more than one property to be used by the user. The cultural
algorithm (CA), with its knowledge sources, has been used in solving many
optimization problems. However, a research gap exists in using cultural
algorithms for the multi-objective optimization of rules. In the current study, a
multi-objective cultural algorithm is proposed for partial classification.
Results of experiments on benchmark data sets reveal good performance.
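The multi-objective requirement -- a mined rule should not be dominated on any metric -- can be illustrated with a plain Pareto-front filter over rule metrics. The rules and their (support, confidence) values below are invented; the cultural algorithm uses its knowledge sources to search this space rather than enumerating it.

```python
def dominates(a, b):
    """Rule metrics a dominate b if no worse on every objective and
    strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(rules):
    """Keep only rules not dominated by any other rule."""
    return [r for r in rules
            if not any(dominates(o["metrics"], r["metrics"])
                       for o in rules if o is not r)]

rules = [
    {"rule": "IF age>60 THEN class=target", "metrics": (0.30, 0.90)},
    {"rule": "IF bmi>35 THEN class=target", "metrics": (0.20, 0.95)},
    {"rule": "IF smoker THEN class=target", "metrics": (0.25, 0.85)},
]
front = pareto_front(rules)   # the smoker rule is dominated by the first
```

Only the first two rules survive: each trades support against confidence, whereas the third is worse than the first on both metrics.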
|
This paper takes up the problem of medical resource sharing through
MicroService architecture without compromising patient privacy. To achieve this
goal, we suggest refactoring the legacy EHR systems into autonomous
MicroServices communicating via unified techniques such as RESTful web
services. This lets us handle clinical data queries directly and far more
efficiently for both internal and external queries. The novelty of the proposed
approach lies in avoiding the data de-identification process often used as a
means of preserving patient privacy. The implemented toolkit combines software
engineering technologies such as Java EE, RESTful web services, and JSON Web Tokens
to allow exchanging medical data in an unidentifiable XML and JSON format as
well as restricting users to the need-to-know principle. Our technique also
inhibits retrospective processing of data such as attacks by an adversary on a
medical dataset using advanced computational methods to reveal Protected Health
Information (PHI). The approach is validated on an endoscopic reporting
application based on openEHR and MST standards. From the usability perspective,
the approach can be used to query datasets by clinical researchers,
governmental or non-governmental organizations in monitoring health care and
medical record services to improve quality of care and treatment.
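The need-to-know restriction can be pictured as role-based field whitelisting applied before any record leaves a MicroService. The roles and field names below are invented for illustration (they are not the openEHR/MST schema), and a real deployment would pair this filtering with JWT-based authentication as described above.

```python
# Role-based field whitelists enforcing the need-to-know principle.
# Roles and fields are hypothetical, for illustration only.
NEED_TO_KNOW = {
    "researcher": {"procedure", "findings", "age_band"},
    "clinician":  {"procedure", "findings", "age_band", "patient_id", "name"},
}

def redact(record, role):
    """Return only the fields the given role is allowed to see."""
    allowed = NEED_TO_KNOW.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"patient_id": "P-1", "name": "J. Doe",
          "procedure": "colonoscopy", "findings": "polyp", "age_band": "60-69"}
assert "name" not in redact(record, "researcher")
assert redact(record, "clinician")["name"] == "J. Doe"
```

Because identifying fields never leave the service for unauthorized roles, there is no de-identified dataset for an adversary to re-identify retrospectively.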
|
This paper is concerned with the evolution of the periodic boundary value
problem and the mixed boundary value problem for a compressible mixture of
binary fluids modeled by the Navier-Stokes-Cahn-Hilliard system in one
dimensional space. The global existence and the large time behavior of the
strong solutions for these two systems are studied. The solutions are proved to
be asymptotically stable even for the large initial disturbance of the density
and the large velocity data. We show that the average concentration difference
for the two components of the initial state determines the long time behavior
of the diffusive interface for the two-phase flow.
|
During the total solar eclipse of 11 July 2010, multi-slit spectroscopic
observations of the solar corona were performed from Easter Island, Chile. To
search for high-frequency waves, observations were taken at a high cadence in
the green line at 5303 A due to [Fe xiv] and the red line at 6374 A due to [Fe
x]. The data are analyzed to study the periodic variations in the intensity,
Doppler velocity and line width using wavelet analysis. The data with high
spectral and temporal resolution enabled us to study the rapid dynamical
changes within coronal structures. We find that at certain locations each
parameter shows significant oscillation with periods ranging from 6 - 25 s. For
the first time, we could detect damping of high-frequency oscillations with
periods of the order of 10 s. If the observed damped oscillations are due to
magnetohydrodynamic (MHD) waves then they can contribute significantly in the
heating of the corona. From a statistical study we try to characterize the
nature of the observed oscillations by examining the distribution of power
in different line parameters.
|
In this paper we have considered the problem of parametric sound generation
in an acoustic resonator filled with a fluid, taking explicitly into account
the influence of the nonlinearly generated second harmonic. A simple model is
presented, and its stationary solutions obtained. The main feature of these
solutions is the appearance of bistable states of the fundamental field
resulting from the coupling to the second harmonic. An experimental setup was
designed to check the predictions of the theory. The results are consistent
with the predicted values for the mode amplitudes and parametric thresholds. At
higher driving values a self-modulation of the amplitudes is observed. We
identify this phenomenon with a secondary instability previously reported in
the framework of the theoretical model.
|
We derive a general expression for the expectation value of the phase
acquired by a time dependent wave function in a multi component system, as
excursions are made in its coordinate space. We then obtain the mean phase for
the (linear dynamic $E \otimes \epsilon$) Jahn-Teller situation in an
electronically degenerate system. We interpret the phase-change as an
observable measure of the {\it effective} nodal structure of the wave function.
|
In factorization formulae for cross sections of scattering processes,
final-state jets are described by jet functions, which are a crucial ingredient
in the resummation of large logarithms. We present an approach to calculate
generic one-loop jet functions, by using the geometric subtraction scheme. This
method leads to local counterterms generated from a slicing procedure, whose
analytic integration is particularly simple. The poles are obtained
analytically, up to an integration over the azimuthal angle for the
observable-dependent soft counterterm. The poles depend only on the soft limit
of the observable, characterized by a power law, and the finite term is written
as a numerical integral. We illustrate our method by reproducing the known
expressions for the jet function for angularities, the jet shape, and jets
defined through a cone or $k_T$ algorithm. As a new result, we obtain the
one-loop jet function for an angularity measurement in $e^+e^-$ collisions,
that accounts for the formally power-suppressed but potentially large effect of
recoil. An implementation of our approach is made available as the GOJet
Mathematica package accompanying this paper.
|
Up to now the origin of the 102 lead ingots of the Comacchio (Ferrara, Italy)
wreck, found in 1981, has been difficult to establish, and different hypotheses
have been proposed. Recently 20 representative ingots were analysed at the
European JRC Laboratory of Ispra (Italy) and their lead isotope signatures
determined. From the results we suggest the ores of Cartagena-Mazarron or
Sierra Almagrera (south-east Spain) as the probable origin of the ingots' lead.
Archaeological examination and epigraphic arguments indicate
Cartagena-Mazarron as the more probable of the two mining regions. The role of
the different persons whose names are reported on the ingots is discussed from a
commercial point of view. We try to reconstruct the commercial voyage of the
ship and explain the presence of the ingots in the North Adriatic Sea.
|
In the field of cavity nano-optomechanics, the nanoresonator-in-the-middle
approach consists in inserting a sub-wavelength sized deformable resonator,
here a nanowire, in the small mode volume of a fiber microcavity. Internal
resonances in the nanowire enhance the light-nanowire interaction, which provides
giant coupling strengths -- sufficient to enter the single-photon regime of
cavity optomechanics -- provided the nanowire is precisely positioned
within the cavity field. Here we present a theoretical description that combines
an analytical formulation of the Mie-scattering of the intracavity light by the
nanowire and an input-output formalism describing the dynamics of the
intracavity optical eigenmodes. We investigate both facets of the
optomechanical interaction describing the position dependent parametric and
dissipative optomechanical coupling strengths, as well as the optomechanical
force field experienced by the nanowire. We find quantitative agreement with
a recent experimental realization. We discuss the specific phenomenology of the
optomechanical interaction which acquires a vectorial character since the
nanowire can identically vibrate along both transverse directions: the
optomechanical force field presents a non-zero curl, while anomalous
positive cavity shifts are expected. Taking advantage of the large Kerr-like
nonlinearity, this work opens perspectives in the field of quantum optics with
nanoresonators, with for instance broadband squeezing of the outgoing cavity
fields close to the single-photon level.
|
We explore entanglement entropy of a cap-like region for a generic quantum
field theory residing in the Bunch-Davies vacuum on de Sitter space.
Entanglement entropy in our setup is identical with the thermal entropy in the
static patch of de Sitter, and we derive a simple relation between the vacuum
expectation value of the energy-momentum tensor trace and the RG flow of
entanglement entropy. In particular, renormalization of the cosmological
constant and logarithmic divergence of the entanglement entropy are
interrelated in our setup. We confirm our findings by recovering known
universal contributions for a free field theory deformed by a mass operator as
well as obtain correct universal behaviour at the fixed points. Simple examples
of entanglement entropy flows are elaborated in $d=2,3,4$. In three dimensions
we find that while the renormalized entanglement entropy is stationary at the
fixed points, it is not monotonic. We provide computational evidence that the
universal `area law' for a conformally coupled scalar is different from the
known result in the literature, and argue that this difference survives in the
limit of flat space. Finally, we carry out the spectral decomposition of
entanglement entropy flow and discuss its application to the F-theorem.
|
KIC8462852 is a completely ordinary F3 main-sequence star, except that the
light curve from Kepler shows episodes of unique and inexplicable day-long dips
with up to 20% dimming. Here, I provide a light curve of 1338 Johnson B-band
magnitudes from 1890 to 1989 taken from archival photographic plates at
Harvard. KIC8462852 displays a secular dimming at an average rate of
0.164+-0.013 magnitudes per century. From the early-1890s to the late-1980s,
KIC8462852 faded by 0.193+-0.030 mag. The decline is not an artifact because
nearby check stars have essentially flat light curves. This century-long dimming is
unprecedented for any F-type main sequence star. Thus the Harvard light curve
provides the first confirmation (past the several dips seen in the Kepler light
curve alone) that KIC8462852 has anything unusual. The century-long dimming and
the day-long dips are both just extreme ends of a spectrum of timescales for
unique dimming events. By Ockham's Razor, two such unique and similar effects
are very likely produced by one physical mechanism. This one mechanism does not
appear as any isolated catastrophic event in the last century, but rather must
be some ongoing process with continuous effects. Within the context of
dust-occultation models, the century-long dimming trend requires 10^4 to 10^7
times as much dust as for the deepest Kepler dip. Within the context of the
comet-family idea, the century-long dimming trend requires an estimated 648,000
giant comets (each with 200 km diameter) all orchestrated to pass in front of
the star within the last century.
|
Let X be a 2-sphere with n punctures. We classify all conjugacy classes of
Zariski-dense representations $$\rho: \pi_1(X)\to SL_2(\mathbb{C})$$ with
finite orbit under the mapping class group of X, such that the local monodromy
at one or more punctures has infinite order. We show that all such
representations are "of pullback type" or arise via middle convolution from
finite complex reflection groups. In particular, we classify all rank 2 local
systems of geometric origin on the projective line with n generic punctures,
and with local monodromy of infinite order about at least one puncture.
|
We measured the angular rotation and proper motion of the Triangulum Galaxy
(M33) with the Very Long Baseline Array by observing two H2O masers on opposite
sides of the galaxy. By comparing the angular rotation rate with the
inclination and rotation speed, we obtained a distance of 730 +/- 168
kiloparsecs. This distance is consistent with the most recent Cepheid distance
measurement. M33 is moving with a velocity of 190 +/- 59 km/s relative to the
Milky Way. These measurements promise a new method to determine dynamical
models for the Local Group and the mass and dark matter halos of M31, M33 and
the Milky Way.
|
A concept of using Neural Ordinary Differential Equations (NODE) for transfer
learning has been introduced. In this paper we use EfficientNets to explore
transfer learning on the CIFAR-10 dataset, employing NODE blocks for
fine-tuning our model. Using NODE for fine-tuning provides more stability
during training and validation. These continuous-depth blocks also offer a
trade-off between numerical precision and speed. Using neural ODEs for transfer
learning has resulted in much more stable convergence of the loss function.
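The precision/speed trade-off of a continuous-depth block can be illustrated with a fixed-step solver: fewer integration steps run faster but deviate more from the exact trajectory. This is a stand-in Euler integrator on a toy scalar ODE, not the adaptive solvers typically used with NODEs.

```python
import math

def ode_block(y0, t1, steps, f):
    """Fixed-step Euler integration of dy/dt = f(y): a crude stand-in for a
    continuous-depth block, where `steps` trades precision for speed."""
    y, h = y0, t1 / steps
    for _ in range(steps):
        y = y + h * f(y)
    return y

f = lambda y: -y                         # dy/dt = -y, exact solution e^{-t}
exact = math.exp(-1.0)
coarse = abs(ode_block(1.0, 1.0, 4, f) - exact)     # fast, less precise
fine = abs(ode_block(1.0, 1.0, 400, f) - exact)     # slow, more precise
assert fine < coarse
```

In a NODE layer the same dial exists at inference time: the solver tolerance (or step count) can be loosened for speed without retraining the weights.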
|
We study the spectrum of the bremsstrahlung photons coming from the electrons
and positrons, which are produced in the strong electromagnetic fields present
in peripheral relativistic heavy ion collisions. We compare different
approaches, making use of the exact pair production cross section in heavy ion
collisions as well as the double equivalent photon approximation.
|
Analytic predictions have been derived recently by V. Dohm and S. Wessel,
Phys. Rev. Lett. {\bf 126}, 060601 (2021) from anisotropic $\varphi^4$ theory
and conformal field theory for the amplitude ${\cal F}_c$ of the critical free
energy of finite anisotropic systems in the two-dimensional Ising universality
class. These predictions employ the hypothesis of multiparameter universality.
We test these predictions by means of high-precision Monte Carlo (MC)
simulations for ${\cal F}_c$ of the Ising model on a square lattice with
isotropic ferromagnetic couplings between nearest neighbors and with an
anisotropic coupling between next-nearest neighbors along one diagonal. We find
remarkable agreement between the MC data and the analytical prediction. This
agreement supports the validity of multiparameter universality and invalidates
two-scale-factor universality as ${\cal F}_c$ is found to exhibit a
nonuniversal dependence on the microscopic couplings of the scalar $\varphi^4$
model and the Ising model. Our results are compared with the exact result for
${\cal F}_c$ in the three-dimensional $\varphi^4$ model with a planar
anisotropy in the spherical limit. The critical Casimir amplitude is briefly
discussed.
|
The Shapes Constraint Language (SHACL) is a formal language for validating
RDF graphs against a set of conditions. Following this idea and implementing a
subset of the language, the Metadata Quality Assessment Framework provides
Shacl4Bib: a mechanism to define SHACL-like rules for data sources in non-RDF
based formats, such as XML, CSV and JSON. QA catalogue extends this concept
further to MARC21, UNIMARC and PICA data. The criteria can be defined either
with YAML or JSON configuration files or with Java code. Libraries can validate
their data against criteria expressed in a unified language, which improves the
clarity and reusability of custom validation processes.
|
We present a brief overview of test-bed observations on accreting neutron
star binaries for the Simbol-X mission. We show that Simbol-X will provide
unique observations able to disclose the physical mechanisms responsible for
their high energy emission.
|
This thesis applies entropy as a model independent measure to address three
research questions concerning financial time series. In the first study we
apply transfer entropy to drawdowns and drawups in foreign exchange rates, to
study their correlation and cross correlation. When applied to daily and hourly
EUR/USD and GBP/USD exchange rates, we find evidence of dependence among the
largest draws (i.e. 5% and 95% quantiles), but not as strong as the correlation
between the daily returns of the same pair of FX rates. In the second study we
use state space models (Hidden Markov Models) of volatility to investigate
volatility spillovers between exchange rates. Among the currency pairs, the
co-movement of EUR/USD and CHF/USD volatility states show the strongest
observed relationship. With the use of transfer entropy, we find evidence for
information flows between the volatility state series of AUD, CAD and BRL. The
third study uses the entropy of S&P realised volatility in detecting changes of
volatility regime in order to re-examine the theme of market volatility timing
of hedge funds. A one-factor model is used, conditioned on information about
the entropy of market volatility, to measure the dynamic of hedge funds equity
exposure. On a cross section of around 2500 hedge funds with a focus on the US
equity markets we find that, over the period from 2000 to 2014, hedge funds
adjust their exposure dynamically in response to changes in volatility regime.
This adds to the literature on the volatility timing behaviour of hedge fund
managers, using entropy as a model-independent measure of volatility regime.
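As a model-independent dependence measure, transfer entropy can be estimated by plug-in probabilities over discretized series. The sketch below handles binary series with a single lag; the thesis works with quantile-based draws and HMM volatility states rather than this toy encoding.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy T(X->Y) for binary series:
    how much the past of X reduces uncertainty about the next value of Y."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))       # (y_next, y_past, x_past)
    n = len(triples)
    p_yyx = Counter(triples)
    p_yx = Counter((yp, xp) for _, yp, xp in triples)
    p_yy = Counter((yn, yp) for yn, yp, _ in triples)
    p_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yn, yp, xp), c in p_yyx.items():
        # p(yn|yp,xp) / p(yn|yp), weighted by the joint probability
        te += (c / n) * log2((c / p_yx[(yp, xp)]) / (p_yy[(yn, yp)] / p_y[yp]))
    return te

# Y copies X with a one-step lag, so the past of X is highly informative.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
assert transfer_entropy(x, y) > 0.5      # strong information flow X -> Y
```

When the target series is already fully predictable from its own past, the estimator returns zero, which is what makes the measure directional.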
|
Knowledge Graph Question Answering (KGQA) involves retrieving entities as
answers from a Knowledge Graph (KG) using natural language queries. The
challenge is to learn to reason over question-relevant KG facts that traverse
KG entities and lead to the question answers. To facilitate reasoning, the
question is decoded into instructions, which are dense question representations
used to guide the KG traversals. However, if the derived instructions do not
exactly match the underlying KG information, they may lead to reasoning under
irrelevant context. Our method, termed ReaRev, introduces a new way to KGQA
reasoning with respect to both instruction decoding and execution. To improve
instruction decoding, we perform reasoning in an adaptive manner, where
KG-aware information is used to iteratively update the initial instructions. To
improve instruction execution, we emulate breadth-first search (BFS) with graph
neural networks (GNNs). The BFS strategy treats the instructions as a set and
allows our method to decide on their execution order on the fly. Experimental
results on three KGQA benchmarks demonstrate ReaRev's effectiveness
compared with previous state-of-the-art, especially when the KG is incomplete
or when we tackle complex questions. Our code is publicly available at
https://github.com/cmavro/ReaRev_KGQA.
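The BFS-style execution of an instruction set can be pictured on a toy KG. The entities, relations, and question below are invented, and exact string matching on relation names stands in for ReaRev's soft, GNN-scored instruction-relation matching; only the set-based, order-on-the-fly traversal idea is illustrated.

```python
from collections import deque

# Toy KG as (head, relation, tail) triples; names are hypothetical.
KG = [("Paris", "capital_of", "France"),
      ("France", "in_continent", "Europe"),
      ("Paris", "has_landmark", "Eiffel Tower"),
      ("Berlin", "capital_of", "Germany")]

def bfs_execute(seed, instructions):
    """Follow the instruction *set* breadth-first from a seed entity:
    at each depth, expand along any relation named in the instructions,
    so the execution order of instructions is decided on the fly."""
    graph = {}
    for h, r, t in KG:
        graph.setdefault(h, []).append((r, t))
    frontier, visited, answers = deque([seed]), {seed}, []
    while frontier:
        e = frontier.popleft()
        for r, t in graph.get(e, []):
            if r in instructions and t not in visited:
                visited.add(t)
                frontier.append(t)
                answers.append(t)
    return answers

# A two-hop question decoded into two instructions, applied as a set.
assert bfs_execute("Paris", {"capital_of", "in_continent"}) == ["France", "Europe"]
```

Treating the instructions as a set means the traversal does not have to commit to a hop order in advance -- whichever instruction matches an outgoing edge fires first.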
|
The subject of this textbook is the analysis of Boolean functions. Roughly
speaking, this refers to studying Boolean functions $f : \{0,1\}^n \to \{0,1\}$
via their Fourier expansion and other analytic means. Boolean functions are
perhaps the most basic object of study in theoretical computer science, and
Fourier analysis has become an indispensable tool in the field. The topic has
also played a key role in several other areas of mathematics, from
combinatorics, random graph theory, and statistical physics, to Gaussian
geometry, metric/Banach spaces, and social choice theory.
The intent of this book is both to develop the foundations of the field and
to give a wide (though far from exhaustive) overview of its applications. Each
chapter ends with a "highlight" showing the power of analysis of Boolean
functions in different subject areas: property testing, social choice,
cryptography, circuit complexity, learning theory, pseudorandomness, hardness
of approximation, concrete complexity, and random graph theory.
The book can be used as a reference for working researchers or as the basis
of a one-semester graduate-level course. The author has twice taught such a
course at Carnegie Mellon University, attended mainly by graduate students in
computer science and mathematics but also by advanced undergraduates, postdocs,
and researchers in adjacent fields. In both years most of Chapters 1-5 and 7
were covered, along with parts of Chapters 6, 8, 9, and 11, and some additional
material on additive combinatorics. Nearly 500 exercises are provided at the
ends of the book's chapters.
|
Borophene, a monoatomic layer of boron atoms, stands out among
two-dimensional (2D) materials, with its versatile properties of polymorphism,
metallicity, plasmonics, superconductivity, tantalizing for physics exploration
and next-generation devices. Yet its phases are all synthesized on and stay
bound to metal substrates, hampering both characterization and use. Growth
on an inert insulator would allow post-synthesis exfoliation of borophene, but
its weak adhesion to such a substrate results in a very high 2D-nucleation
barrier, preventing clean borophene growth. This challenge can be circumvented
by a strategy devised and demonstrated here with ab initio calculations.
Naturally present 1D defects -- the step-edges on the h-BN substrate surface --
enable epitaxial boron assembly, reduce the nucleation dimensionality and lower
the barrier by an order of magnitude (to 1.1 eV or less), yielding the v1/9
phase. Weak
borophene adhesion to the insulator makes it readily accessible for
comprehensive property tests or transfer into the device setting.
|
We present lattice-gas modeling of the steady-state behavior in CO oxidation
on the facets of nanoscale metal clusters, with coupling via inter-facet CO
diffusion. The model incorporates the key aspects of the reaction process, such as
rapid CO mobility within each facet and strong nearest-neighbor repulsion
between adsorbed O. The former justifies our use of a "hybrid" simulation approach
treating the CO coverage as a mean-field parameter. For an isolated facet,
there is one bistable region where the system can exist in either a reactive
state (with high oxygen coverage) or a (nearly CO-poisoned) inactive state.
Diffusion between two facets is shown to induce complex multistability in the
steady states of the system. The bifurcation diagram exhibits two regions with
bistabilities due to the difference between adsorption properties of the
facets. We explore the role of enhanced fluctuations in the proximity of a cusp
bifurcation point associated with one facet in producing transitions between
stable states on that facet, as well as their influence on fluctuations on the
other facet. The results are expected to shed more light on the reaction
kinetics for supported catalysts.
|
We obtain exact traveling-wave solutions of the coupled nonlinear partial
differential equations that describe the dynamics of two classical scalar
fields in 1+1 dimensions. The solutions are kinks interpolating between
neighboring vacua. We compute the classical kink mass and show that it
saturates a Bogomol'nyi-type bound. We also present exact traveling-wave
solutions of a more general class of models. Examples include coupled $\phi^4$
and sine-Gordon models.
|
The diffusion of Electric Vehicles (EVs) plays a pivotal role in mitigating
greenhouse gas emissions, particularly in the U.S., where ambitious
zero-emission and carbon neutrality objectives have been set. In pursuit of
these goals, many states have implemented a range of incentive policies aimed
at stimulating EV adoption and charging infrastructure development, especially
public EV charging stations (EVCS). This study examines the indirect network
effect observed between EV adoption and EVCS deployment within urban
landscapes. We developed a two-sided log-log regression model with historical
data on EV purchases and EVCS development to quantify this effect. To test the
robustness, we then conducted a case study of the EV market in Los Angeles (LA)
County, which suggests that a 1% increase in EVCS correlates with a 0.35%
increase in EV sales. Additionally, we forecasted the future EV market dynamics
in LA County, revealing a notable disparity between current policies and the
targeted 80% EV market share for private cars by 2045. To bridge this gap, we
proposed a combined policy recommendation that enhances EV incentives by 60%
and EVCS rebates by 66%, facilitating the achievement of future EV market
objectives.
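In a two-sided log-log model, the coefficient on log(EVCS) is directly the elasticity -- e.g. the reported 0.35% sales increase per 1% more stations. A minimal sketch on synthetic, noiseless data (the constant and station counts are invented, not the LA County figures):

```python
from math import log

def loglog_slope(xs, ys):
    """OLS slope of log(y) on log(x): the elasticity in a log-log model."""
    lx, ly = [log(v) for v in xs], [log(v) for v in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic data generated with elasticity 0.35 (the abstract's estimate):
# sales = c * stations^0.35, so the fitted slope should recover 0.35.
stations = [100, 150, 220, 300, 420, 600]
sales = [50 * s ** 0.35 for s in stations]
assert abs(loglog_slope(stations, sales) - 0.35) < 1e-6
```

Because both variables enter in logs, the slope is unit-free: it reads directly as "percent change in sales per percent change in stations", which is why the two-sided specification is convenient for quantifying the indirect network effect.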
|
We present a new measurement of the $\alpha$-spectroscopic factor
($S_\alpha$) and the asymptotic normalization coefficient (ANC) for the 6.356
MeV 1/2$^+$ subthreshold state of $^{17}$O through the $^{13}$C($^{11}$B,
$^{7}$Li)$^{17}$O transfer reaction and we determine the $\alpha$-width of this
state. This is believed to have a strong effect on the rate of the
$^{13}$C($\alpha$, $n$)$^{16}$O reaction, the main neutron source for {\it
slow} neutron captures (the $s$-process) in asymptotic giant branch (AGB)
stars. Based on the new width we derive the astrophysical S-factor and the
stellar rate of the $^{13}$C($\alpha$, $n$)$^{16}$O reaction. At a temperature
of 100 MK our rate is roughly two times larger than that by \citet{cau88} and
two times smaller than that recommended by the NACRE compilation. We use the
new rate and different rates available in the literature as input in
simulations of AGB stars to study their influence on the abundances of selected
$s$-process elements and isotopic ratios. There are no changes in the final
results using the different rates for the $^{13}$C($\alpha$, $n$)$^{16}$O
reaction when the $^{13}$C burns completely in radiative conditions. When the
$^{13}$C burns in convective conditions, as in stars of initial mass lower than
$\sim$2 $M_\sun$ and in post-AGB stars, some changes are to be expected, e.g.,
of up to 25% for Pb in our models. These variations will have to be carefully
analyzed when more accurate stellar mixing models and more precise
observational constraints are available.
|
We introduce and theoretically analyze a scheme to prepare and detect
non-Gaussian quantum states of an optically levitated particle via the
interaction with a light pulse that generates cubic and inverted potentials. We
show that this allows operating on short time- and lengthscales, which
significantly reduces the demands on decoherence rates in such experiments.
Specifically, our scheme predicts the observation of interference of
nanoparticles with a mass above $10^8$ atomic mass units delocalised over
several nanometers, on timescales of milliseconds, when operated at vacuum
levels around $10^{-10}$~mbar and at room temperature. We discuss the prospect
of using this approach for coherently splitting the wavepacket of massive
dielectric objects using neither projective measurements nor an internal level
structure.
|
Short-range lattice superstructures have been studied with high-energy x-ray
diffuse scattering in underdoped, optimally doped, and overdoped $\rm
(Y,Ca)Ba_2 Cu_3 O_{6+x}$. A new four-unit-cell superstructure was observed in
compounds with $x\sim 0.95$. Its temperature, doping, and material dependence
was used to attribute its origin to short-range oxygen vacancy ordering, rather
than electronic instabilities in the $\rm CuO_2$ layers. No significant diffuse
scattering is observed in YBa$_2$Cu$_4$O$_{8}$. The oxygen superstructures must
be taken into account when interpreting spectral anomalies in $\rm (Y,Ca)Ba_2
Cu_3 O_{6+x}$.
|
Although introduced in the case of Poisson random measures, the lent particle
method applies as well in other situations. We study here the case of marked
point processes. In this case the Malliavin calculus (here in the sense of
Dirichlet forms) operates on the marks and the point process doesn't need to be
Poisson. The proof of the method is much simpler than in the case of
Poisson random measures. We give applications to isotropic processes and to
processes whose jumps are modified by independent diffusions.
|
The problem of describing the analytic functions $g$ on the unit disc such
that the integral operator $T_g(f)(z)=\int_0^zf(\zeta)g'(\zeta)\,d\zeta$ is
bounded (or compact) from a Banach space (or complete metric space) $X$ of
analytic functions to the Hardy space $H^\infty$ is a tough problem and remains
unsettled in many cases. For analytic functions $g$ with non-negative Maclaurin
coefficients, we describe the boundedness and compactness of $T_g$ acting from
a weighted Dirichlet space $D^p_\omega$, induced by an upper doubling weight
$\omega$, to $H^\infty$. We also characterize, in terms of neat conditions on
$\omega$, the upper doubling weights for which $T_g: D^p_\omega\to H^\infty$ is
bounded (or compact) only if $g$ is constant.
|
This paper presents a method of constructing Parseval frames from any
collection of complex envelopes. The resulting Enveloped Sinusoid Parseval
(ESP) frames can represent a wide variety of signal types as specified by their
physical morphology. Since the ESP frame retains its Parseval property even
when generated from a variety of envelopes, it is compatible with large scale
and iterative optimization algorithms. ESP frames are constructed by applying
time-shifted enveloping functions to the discrete Fourier Transform basis, and
in this way are similar to the short-time Fourier Transform.
This work provides examples of ESP frame generation for both synthetic and
experimentally measured signals. Furthermore, the frame's compatibility with
distributed sparse optimization frameworks is demonstrated, and efficient
implementation details are provided. Numerical experiments on acoustics data
reveal that the flexibility of this method allows it to be simultaneously
competitive with the STFT in time-frequency processing and also with Prony's
Method for time-constant parameter estimation, surpassing the shortcomings of
each individual technique.
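A minimal sketch of the construction idea (our own illustration, not the authors' code): applying circularly time-shifted envelopes to the DFT basis and renormalizing per sample yields a tight (Parseval) frame, whose frame operator is exactly the identity regardless of the envelope chosen.

```python
import numpy as np

# Build a Parseval frame from time-shifted envelopes applied to the DFT basis.
N, hop = 16, 4
t = np.arange(N)
w = np.exp(-0.5 * ((t - N / 2) / 3.0) ** 2)       # arbitrary positive envelope

shifts = [np.roll(w, n) for n in range(0, N, hop)]
p = np.sum([s**2 for s in shifts], axis=0)        # per-sample envelope energy
assert np.all(p > 0)                              # shifts must cover every sample

rows = []
for s in shifts:
    for m in range(N):
        # enveloped sinusoid, renormalized so the frame operator is identity
        atom = s * np.exp(-2j * np.pi * m * t / N) / np.sqrt(N * p)
        rows.append(atom)
A = np.array(rows)                                # analysis operator (atoms as rows)

S = A.conj().T @ A                                # frame operator
print(np.allclose(S, np.eye(N)))                  # True -> Parseval frame
```

The per-sample renormalization by sqrt(N * p) is what keeps the Parseval property for any covering envelope, which is the flexibility the abstract emphasizes.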
|
It follows from an observation of A. Coble in 1919 that the automorphism
group of an unnodal Enriques surface contains the $2$-congruence subgroup of
the Weyl group of the $E_{10}$-lattice. In this article, we determine how much
bigger the automorphism group of an unnodal Enriques surface can be.
Furthermore, we show that the automorphism group is in fact equal to the
$2$-congruence subgroup for generic Enriques surfaces in arbitrary
characteristic (under the additional assumption that the Enriques surface is
ordinary if the characteristic is $2$), improving the corresponding result of
W. Barth and C. Peters for very general Enriques surfaces over the complex
numbers.
|
Second-order topological insulators are crystalline insulators with a gapped
bulk and gapped crystalline boundaries, but topologically protected gapless
states at the intersection of two boundaries. Without further spatial
symmetries, five of the ten Altland-Zirnbauer symmetry classes allow for the
existence of such second-order topological insulators in two and three
dimensions. We show that reflection symmetry can be employed to systematically
generate examples of second-order topological insulators and superconductors,
although the topologically protected states at corners (in two dimensions) or
at crystal edges (in three dimensions) continue to exist if reflection symmetry
is broken. A three-dimensional second-order topological insulator with broken
time-reversal symmetry shows a Hall conductance quantized in units of $e^2/h$.
|
We report fully momentum dependent, self-consistent calculations of the gap
symmetry, Fermi surface (FS) anisotropy and Tc of superconducting (SC) LiFeAs
using the experimental band structure and a realistic small-q electron phonon
interaction within the framework of Migdal-Eliashberg theory. In the
stoichiometric regime, we find the exact s++ gap as reported by ARPES. For
slight deviations from stoichiometry towards electron doping, we find that a
chiral triplet p_x+ip_y state stabilizes near Tc and that at lower temperatures
a transition from the triplet to singlet s+- SC takes place. Further doping
stabilizes the chiral p-wave SC down to T=0. Precisely the same behavior was
observed recently by NMR. Our results provide a natural and universal
understanding of the conflicting experimental observations in LiFeAs.
|
We explore a Leviathan analogy between neurons in a brain and human beings in
society, asking ourselves whether individual intelligence is necessary for
collective intelligence to emerge and, most importantly, what sort of
individual intelligence is conducive to greater collective intelligence. We
first review disparate insights from connectionist cognitive science,
agent-based modeling, group psychology, economics and physics. Subsequently, we
apply these insights to the sort and degrees of intelligence that in the
Lotka-Volterra model lead to either co-existence or global extinction of
predators and prey.
We find several individual behaviors -- particularly of predators -- that are
conducive to co-existence, eventually with oscillations around an equilibrium.
However, we also find that if both prey and predators are sufficiently
intelligent to extrapolate one another's behavior, co-existence comes along with
indefinite growth of both populations. Since the Lotka-Volterra model is also
interpreted to represent the business cycle, we understand this finding as a
condition for economic growth around oscillations. Specifically, we hypothesize
that pre-modern societies may not have exhibited limitless growth also because
capitalistic future-oriented thinking based on saving and investing concerned
at most a fraction of the population.
|
We study a static black hole localized on a brane in the Randall-Sundrum (RS)
II braneworld scenario. To solve this problem numerically, we develop a code
with almost fourth-order accuracy. This code yields highly accurate results
for the case where the brane tension is zero, i.e., the spherically
symmetric case. However, a nonsystematic error is detected in the cases where
the brane tension is nonzero. This error cannot be removed by any systematic
method, such as increasing the resolution, setting the outer boundary at a more
distant location, or improving the convergence of the numerical relaxation. We
discuss the possible origins for the nonsystematic error, and conclude that our
result is naturally interpreted as the evidence for the nonexistence of
solutions to this setup, although an "approximate" solution exists for
sufficiently small brane tension. We discuss the possibility that the black
holes produced on a brane may be unstable and lead to two interesting
consequences: the event horizon pinch and the brane pinch.
|
The origin of the slow solar wind is still an open issue. It has been
suggested that upflows at the edge of active regions (AR) can contribute to the
slow solar wind. Here, we compared the upflow region and the AR core and
studied how the plasma properties change from the chromosphere via the
transition region to the corona. We studied limb-to-limb observations of AR
NOAA 12687 (14th-25th Nov 2017). We analysed spectroscopic data simultaneously
obtained from IRIS and Hinode/EIS in six spectral lines. We studied the mutual
relationships between the plasma properties for each emission line, as well as
comparing the plasma properties between the neighbouring formation temperature
lines. To find the most characteristic spectra, we classified the spectra in
each wavelength using the machine learning technique k-means. We found that in
the upflow region the Doppler velocities of the coronal lines are strongly
correlated, but the transition region and coronal lines show no correlation.
However, their fluxes are strongly correlated. The upflow region has lower
density and lower temperature than the AR core. In the upflow region, the
Doppler and non-thermal velocity show a strong correlation in the coronal
lines, but the correlation is not seen in the AR core. At the boundary between
the upflow region and the AR core, the upflow region shows an increase in the
coronal non-thermal velocity, the emission obtained from the DEM, and the
domination of the redshifted regions in the chromosphere. The obtained results
suggest that at least three parallel mechanisms generate the plasma upflow: (1)
the reconnection between closed loops and open magnetic field lines in the
lower corona or upper chromosphere; (2) the reconnection between the
chromospheric small-scale loops and open magnetic field; (3) the expansion of
the magnetic field lines that allows the chromospheric plasma to escape to the
solar corona.
|
Privacy-minded Internet service operators anonymize IPv6 addresses by
truncating them to a fixed length, perhaps due to long-standing use of this
technique with IPv4 and a belief that it's "good enough." We claim that simple
anonymization by truncation is suspect since it does not entail privacy
guarantees nor does it take into account some common address assignment
practices observed today. To investigate, with standard activity logs as input,
we develop a counting method to determine a lower bound on the number of active
IPv6 addresses that are simultaneously assigned, such as those of clients that
access World-Wide Web services. In many instances, we find that these empirical
measurements offer no evidence that truncating IPv6 addresses to a fixed number
of bits, e.g., 48 in common practice, protects individuals' privacy.
To remedy this problem, we propose kIP anonymization, an aggregation method
that ensures a certain level of address privacy. Our method adaptively
determines variable truncation lengths using parameter k, the desired number of
active (rather than merely potential) addresses, e.g., 32 or 256, that cannot
be distinguished from each other once anonymized. We describe our
implementation and present first results of its application to millions of real
IPv6 client addresses active over a week's time, demonstrating both feasibility
at large scale and ability to automatically adapt to each network's address
assignment practice and synthesize a set of anonymous aggregates (prefixes),
each of which is guaranteed to cover (contain) at least k of the active
addresses. Each address is anonymized by truncating it to the length of its
longest matching prefix in that set.
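The adaptive truncation can be sketched as a recursive prefix split; this is a simplified toy on 8-bit "addresses" of our own making, not the authors' implementation. A prefix is split only while every emitted side still covers at least k active addresses, so each resulting aggregate is guaranteed to contain k or more of them.

```python
# Toy kIP-style aggregation: emit the longest prefixes that each still
# cover at least k active addresses.
BITS = 8  # toy 8-bit "addresses"; real IPv6 would use 128

def aggregate(addrs, k, prefix=0, plen=0):
    if plen < BITS:
        bit = BITS - plen - 1
        left = [a for a in addrs if not (a >> bit) & 1]
        right = [a for a in addrs if (a >> bit) & 1]
        if not right:                              # narrow into the only used half
            return aggregate(left, k, prefix << 1, plen + 1)
        if not left:
            return aggregate(right, k, (prefix << 1) | 1, plen + 1)
        if len(left) >= k and len(right) >= k:     # split only if both sides keep k
            return (aggregate(left, k, prefix << 1, plen + 1)
                    + aggregate(right, k, (prefix << 1) | 1, plen + 1))
    return [(prefix, plen)]                        # aggregate covering >= k addresses

addrs = [0b00000001, 0b00000011, 0b00000111, 0b10000000, 0b10100000, 0b11100000]
prefixes = aggregate(addrs, k=3)
print(prefixes)  # [(0, 5), (1, 1)]: a /5 covering {1,3,7}, a /1 covering the rest

def anonymize(a):
    # truncate an address to its longest matching emitted prefix
    for prefix, plen in prefixes:
        if plen == 0 or a >> (BITS - plen) == prefix:
            return (prefix << (BITS - plen), plen)
```

Note how the two aggregates get different truncation lengths, which is the point of the adaptive scheme versus a fixed 48-bit cut.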
|
We present a conditional space-time proper orthogonal decomposition (POD)
formulation that is tailored to the eduction of the average, rare or
intermittent event from an ensemble of realizations of a fluid process. By
construction, the resulting spatio-temporal modes are coherent in space and
over a pre-defined finite time horizon and optimally capture the variance, or
energy of the ensemble. For the example of intermittent acoustic radiation from
a turbulent jet, we introduce a conditional expectation operator that focuses
on the loudest events, as measured by a pressure probe in the far-field and
contained in the tail of the pressure signal's probability distribution.
Applied to high-fidelity simulation data, the method identifies a statistically
significant `prototype', or average acoustic burst event that is tracked over
time. Most notably, the burst event can be traced back to its precursor, which
opens up the possibility of prediction of an imminent burst. We furthermore
investigate the mechanism underlying the prototypical burst event using linear
stability theory and find that its structure and evolution is accurately
predicted by optimal transient growth theory. The jet-noise problem
demonstrates that the conditional space-time POD formulation applies even for
systems with probability distributions that are not heavy-tailed, i.e. for
systems in which events overlap and occur in rapid succession.
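A minimal sketch of the conditional space-time POD pipeline, on synthetic data of our own (the probe location and tail threshold are illustrative assumptions): each realization is flattened over space and a finite time horizon, only realizations whose probe signal falls in the tail are retained, and the space-time modes are the left singular vectors of the conditioned ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)
n_space, n_time, n_real = 20, 15, 200
ensemble = rng.normal(size=(n_real, n_space, n_time))   # synthetic realizations

probe = ensemble[:, 0, :].max(axis=1)            # peak amplitude at a "probe" point
threshold = np.quantile(probe, 0.9)
loud = ensemble[probe >= threshold]              # condition on tail events only

X = loud.reshape(len(loud), -1).T                # space-time snapshots as columns
X = X - X.mean(axis=1, keepdims=True)            # subtract the conditional mean
U, svals, _ = np.linalg.svd(X, full_matrices=False)
mode1 = U[:, 0].reshape(n_space, n_time)         # leading coherent space-time mode
energy = svals[0] ** 2 / (svals ** 2).sum()      # fraction of ensemble variance
print(mode1.shape)
```

By construction each mode is coherent over the whole finite time horizon, which is what allows a burst "prototype" to be tracked back to its precursor.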
|
We investigate a self-gravitating thick domain wall for a $\lambda \Phi^4$
potential. The system of scalar and Einstein equations admits two types of
non-trivial solutions: domain wall solutions and false vacuum-de Sitter
solutions. The existence and stability of these solutions depends on the
strength of the gravitational interaction of the scalar field, which is
characterized by the number $\epsilon$. For $\epsilon \ll 1$, we find a domain
wall solution by expanding the equations of motion around the flat spacetime
kink. For ``large'' $\epsilon$, we show analytically and numerically that only
the de Sitter solution exists, and that there is a phase transition at some
$\epsilon_{\rm max}$ which separates the two kinds of solution. Finally, we
comment on the existence of this phase transition and its relation to the
topology of the domain wall spacetime.
|
Conventional sorting algorithms make use of such data structures as arrays,
files and lists, which define the access methods for the items to be sorted.
Traditional methods such as exchange sort, divide-and-conquer sort, selection
sort and insertion sort require a supervisory control program. The supervisory
control program has access to the items and is responsible for arranging them
in the proper order. This paper presents a different sorting algorithm that
does not require a supervisory control program. The objects sort themselves
and are able to terminate when sorting is completed. The algorithm also
employs parallel processing mechanisms to increase its efficiency and
effectiveness. The paper reviews the traditional sorting methods, identifying
their pros and cons, and proposes a different design based on a conceptual
combination of these algorithms. The algorithms designed were implemented and
tested in a Java desktop application.
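As a hedged illustration of local, parallelizable compare-exchange behavior in the spirit of self-sorting objects, the classical odd-even transposition sort uses only neighbor comparisons in disjoint pairs; this is a standard algorithm, not the paper's exact design.

```python
# Odd-even transposition sort: each element compares and swaps only with a
# neighbor, and all pairs in a phase are disjoint, hence parallelizable.
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):                       # n phases suffice to sort
        start = phase % 2                        # alternate even/odd pairings
        for i in range(start, n - 1, 2):         # disjoint compare-exchange pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 2, 9, 1, 7]))            # [1, 2, 5, 7, 9]
```

Termination after n phases is a known property of the algorithm, echoing the paper's requirement that the objects detect completion without a supervisor.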
|
The task of compressed sensing is to recover a sparse vector from a small
number of linear and non-adaptive measurements, and the problem of finding a
suitable measurement matrix is very important in this field. While most recent
works focused on random matrices with entries drawn independently from certain
probability distributions, in this paper we show that a partial random
symmetric Bernoulli matrix whose entries are not independent, can be used to
recover signal from observations successfully with high probability. The
experimental results also show that the proposed matrix is a suitable
measurement matrix.
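A sketch of the sensing setup (our illustration, not the paper's experiments): a partial random symmetric Bernoulli matrix measures a sparse vector, and a standard greedy recovery algorithm, orthogonal matching pursuit, attempts to reconstruct it.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 64, 32, 4                             # dimension, measurements, sparsity

# Symmetric Bernoulli (+-1) matrix; keep m randomly chosen rows, normalize.
B = np.triu(rng.choice([-1.0, 1.0], size=(n, n)))
B = B + np.triu(B, 1).T                         # entries are dependent: B = B^T
rows = rng.choice(n, size=m, replace=False)
Phi = B[rows] / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
y = Phi @ x                                     # non-adaptive linear measurements

# Orthogonal matching pursuit
support, r = [], y.copy()
for _ in range(s):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x_hat - x))
```

Recovery succeeds with high probability in this regime, matching the abstract's claim that independence of the matrix entries is not required.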
|
NGC 4477 is a low-mass lenticular galaxy in the Virgo Cluster, residing at
100\,kpc to the north of M87. Using a total of 116\,ks {\sl Chandra}
observations, we study the interplay between its hot ($\sim$0.3\,keV) gas halo
and the central supermassive black hole. A possible cool core is indicated by
the short cooling time of the gas at the galaxy centre. We identify a pair of
symmetric cavities lying 1.1\,kpc southeast and 0.9\,kpc northwest of the
galaxy centre with diameters of 1.3\,kpc and 0.9\,kpc, respectively. We
estimate that these cavities are newly formed with an age of $\sim$4\,Myr. No
radio emission is detected at the positions of the cavities with the existing
VLA data. The total energy required to produce the two cavities is
$\sim$$10^{54}$\,erg, at least two orders of magnitude smaller than that of
typical X-ray cavities. NGC 4477 is arguably by far the smallest system and the
only lenticular galaxy in which AGN X-ray cavities have been found. It falls on
the scaling relation between the cavity power and the AGN radio luminosity,
calibrated for groups and clusters. Our findings suggest that AGN feedback is
universal among all cool core systems. Finally, we note the presence of
molecular gas in NGC~4477 in the shape of a regular disk with ordered rotation,
which may not be related to the feedback loop.
|
Based on the properties of Lie algebras, in this work we develop a general
framework to linearize the von-Neumann equation rendering it in a suitable form
for quantum simulations. We show that one of these linearizations of the
von-Neumann equation corresponds to the standard case in which the state vector
becomes the column stacked elements of the density matrix and the Hamiltonian
superoperator takes the form $I\otimes H-H^\top \otimes I$ where $I$ is the
identity matrix and $H$ is the standard Hamiltonian. It is proven that this
particular form belongs to a wider class of ways of linearizing the von Neumann
equation that can be categorized by the algebra from which they originated.
Particular attention is paid to Hermitian algebras that yield real density
matrix coefficients, substantially simplifying the quantum tomography of the
state vector. Based on these ideas, a quantum algorithm to simulate the
dynamics of the density matrix is proposed. It is shown that this method,
together with the unique properties of the algebra formed by Pauli strings,
makes it possible to avoid Trotterization, considerably reducing the circuit
depth. Even
though we have used the special case of the algebra formed by the Pauli
strings, the algorithm can be readily adapted to other algebras. The algorithm
is demonstrated for two toy Hamiltonians using the IBM noisy quantum circuit
simulator.
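The standard linearization described above can be checked numerically. A small sketch, using the column-stacking convention vec(AXB) = (B^T x A) vec(X), verifies that the commutator in the von Neumann equation is reproduced by the superoperator I x H - H^T x I:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                      # random Hermitian "Hamiltonian"
Bm = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = Bm @ Bm.conj().T
rho /= np.trace(rho)                          # random density matrix

I = np.eye(n)
L = np.kron(I, H) - np.kron(H.T, I)           # Hamiltonian superoperator

vec = lambda M: M.flatten(order="F")          # column stacking
lhs = vec(H @ rho - rho @ H)                  # commutator [H, rho]
rhs = L @ vec(rho)
print(np.allclose(lhs, rhs))                  # True
```

The check follows from vec(H rho) = (I x H) vec(rho) and vec(rho H) = (H^T x I) vec(rho), the identity underlying the standard case mentioned above.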
|
With 2.5x the previously reported exposure, the Daya Bay experiment has
improved the measurement of the neutrino mixing parameter sin^2(2theta_13) =
0.089+-0.010(stat)+-0.005(syst). Reactor anti-neutrinos were produced by six
2.9 GW(th) commercial power reactors, and measured by six 20-ton target-mass
detectors of identical design. A total of 234,217 anti-neutrino candidates were
detected in 127 days of exposure. An anti-neutrino rate of
0.944+-0.007(stat)+-0.003(syst) was measured by three detectors at a
flux-weighted average distance of 1648 m from the reactors, relative to two
detectors at 470 m and one detector at 576 m. Detector design and depth
underground limited the background to 5+-0.3% (far detectors) and 2+-0.2% (near
detectors) of the candidate signals. The improved precision confirms the
initial measurement of reactor anti-neutrino disappearance, and continues to be
the most precise measurement of theta_13.
|
We consider a class of one-dimensional nonhermitian oscillators and discuss
the relationship between the real eigenvalues of PT-symmetric oscillators and
the resonances obtained by different authors. We also show the relationship
between the strong-coupling expansions for the eigenvalues of those
oscillators. Comparison of the results of the complex rotation and the
Riccati-Pad\'{e} methods reveals that the optimal rotation angle converts the
oscillator into either a PT-symmetric or an Hermitian one. In addition to the
real positive eigenvalues the PT-symmetric oscillators exhibit real positive
resonances under different boundary conditions. They can be calculated by means
of the straightforward diagonalization method. The Riccati-Pad\'e method yields
not only the resonances of the nonhermitian oscillators but also the
eigenvalues of the PT-symmetric ones.
|
Bone quality is associated with changes in the bone's dielectric properties
(permittivity and conductivity). The feasibility of detecting changes in these
properties is evaluated using a tomographic array of 16 monopole antennas with
z-polarized microwaves at 1.3 GHz. The direct problem was evaluated
computationally with the finite-difference time-domain (FDTD) method. Local and
global sensitivity analyses were performed to identify the parameters that
most affect the detection. We observed that the direct problem is highly
sensitive to the conductivity of the tissues that surround the calcaneus and
to that of the calcaneus itself. The global and local sensitivity methods
provide evidence that variations in the dielectric properties of bone can
feasibly be detected.
|
We propose a general theory for the analytical description of versatile
hysteretic phenomena in a graphene field effect transistor (GFET) allowing for
the existence of the external dipoles on graphene free surface and the
localized states at the graphene-surface interface. We demonstrated that the
absorbed dipole molecules (e.g. dissociated or highly polarized water
molecules) can cause hysteretic form of carrier concentration as a function of
gate voltage and corresponding dependence of graphene conductivity in GFET on
the substrate of different types, including the most common SiO2 and
ferroelectric ones. It was shown that the increase of the gate voltage sweeping
rate leads to the complete vanishing of hysteresis for GFET on SiO2 substrate,
as well as for GFET on ferroelectric substrate for applied electric fields E
less than the critical value Ec. For E>Ec the crossover from the hysteresis to
antihysteresis takes place. These results correlate well with the available
experimental data, up to quantitative agreement. The proposed model takes
into account carrier trapping from the graphene channel by the interface
states and adequately describes the antihysteresis in GFETs on a PZT
substrate. The obtained results clarify the fundamental principles of GFET
operation and can be directly applied to describe the basic characteristics
of advanced nonvolatile ultra-fast memory devices using GFETs on versatile
substrates.
|
We report the results of a 50 ks Chandra observation of the recently
discovered radio object G141.2+5.0, presumed to be a pulsar-wind nebula. We
find a moderately bright unresolved X-ray source which we designate CXOU
J033712.8 615302 coincident with the central peak of the radio emission. An absorbed
power-law fit to the 241 counts describes the data well, with absorbing column
$N_H = 6.7 (4.0, 9.7) \times 10^{21}$ cm$^{-2}$ and photon index $\Gamma = 1.8
(1.4, 2.2)$. For a distance of 4 kpc, the unabsorbed luminosity between 0.5 and
8 keV is $ 1.7^{+0.4}_{-0.3} \times 10^{32}$ erg s$^{-1}$ (90\% confidence
intervals). Both $L_X$ and $\Gamma$ are quite typical of pulsars in PWNe. No
extended emission is seen; we estimate a conservative $3 \sigma$ upper limit to
the surface brightness of any X-ray PWN near the point source to be $3 \times
10^{-17}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$ between 0.5 and 8 keV, assuming
the same spectrum as the point source; for a nebula of diameter $13"$, the flux
limit is 6\% of the flux of the point source. The steep radio spectrum of the
PWN ($\alpha \sim -0.7$), if continued to the X-ray without a break, predicts
$L_X\ \rm{(nebula)} \sim 1 \times 10^{33}$ erg s$^{-1}$, so additional spectral
steepening between radio and X-rays is required, as is true of all known PWNe.
The high Galactic latitude gives a $z$-distance of 350 pc above the Galactic
plane, quite unusual for a Population I object.
|
We describe a device (adapter) for off-axis guiding and photometric
calibration of wide-angle spectrographs operating in the prime focus of the 6-m
telescope of the Special Astrophysical Observatory of the Russian Academy of
Sciences. To compensate for coma in off-axis star images, an achromatic lens
corrector is used, which maintains the image quality (FWHM) at a level of
about 1'' within 15' from the optical axis. The device has two 54'-diameter
movable guiding fields, which can move in 10' x 4'.5 rectangular areas. The
device can perform automatic search for guiding stars, use them to control the
variations of atmospheric transmittance, and focus the telescope during
exposure. The limiting magnitude of potential guiding stars is mR ~17 mag. The
calibration path whose optical arrangement meets the telecentrism condition
allows the spectrograph to be illuminated both by a source of line spectrum (a
He-Ne-Ar filled lamp) and by a source of continuum spectrum. The latter is
usually represented either by a halogen lamp or a set of light-emitting diodes,
which provide illumination of approximately uniform intensity over the
wavelength interval from 350 to 900 nm. The adapter is used for observations
with SCORPIO-2 multimode focal reducer.
|
We give a complete proof of the fact that the trace of the curvature of the
connection associated to a planar d-web (d>3) is the sum of the Blaschke
curvatures of its sub 3-webs.
|
Solar coronal mass ejections (CMEs) are large-scale eruptions of plasma and
magnetic field from the Sun into the corona and interplanetary space. They are
the most significant drivers of adverse space weather at Earth and other
locations in the heliosphere, so it is important to understand the physics
governing their eruption and propagation. However, the diffuse morphology and
transient nature of CMEs make them difficult to identify and track using
traditional image processing techniques. In this thesis the implementation of
multiscale image processing techniques to identify and track the CME front
through coronagraph images is detailed. An ellipse characterisation of the CME
front is used to determine the CME kinematics and morphology with increased
precision as compared to techniques used in current CME catalogues, and efforts
are underway to automate this procedure for applying to a large number of CME
observations for future analysis. It was found that CMEs do not simply undergo
constant acceleration, but rather tend to show a higher acceleration early in
their propagation. The angular width of CMEs was also found to change as they
propagate, normally increasing with height from the Sun. However these results
were derived from plane-of-sky measurements with no correction for how the true
CME geometry and direction affect the kinematics and morphology observed. With
the advent of the unique dual perspectives of the STEREO spacecraft, the
multiscale methods were extended to an elliptical tie-pointing technique in
order to reconstruct the front of a CME in three dimensions. Applying this
technique to the Earth-directed CME of 12 December 2008 allowed an accurate
determination of its true kinematics and morphology, and the CME was found to
undergo early acceleration, non-radial motion, angular width expansion, and
aerodynamic drag in the solar wind as it propagated towards Earth.
|
Recent IoT applications gradually adopt more complicated end systems with
commodity software. Ensuring the runtime integrity of this software is a
challenging task for the remote controller or cloud services. A popular
enforcement is runtime remote attestation, which requires the end system
(prover) to generate evidence for its runtime behavior and a remote trusted
verifier to attest the evidence. Control-flow attestation is a kind of runtime
attestation that diagnoses remote control-flow hijacking at the
prover. Most of these attestation approaches focus on small or embedded
software. Recent advances in attesting complicated software depend on the
source code and CFG traversal to measure checkpoint-separated subpaths; the
source code may be unavailable for commodity software, and context may be
missed between consecutive subpaths in the measurements. In this work, we
propose a resilient control-flow attestation (ReCFA), which does not need the
offline measurement of all legitimate control-flow paths, thus scalable to be
used on complicated commodity software. Our main contribution is a multi-phase
approach to condensing the runtime control-flow events; as a result, the vast
amount of control-flow events are abstracted into a deliverable size. The
condensing approach consists of filtering skippable call sites, folding
program-structure related control-flow events, and a greedy compression. Our
approach is implemented with binary-level static analysis and instrumentation.
We employ a shadow stack mechanism at the verifier to enforce context-sensitive
control-flow integrity and diagnose the compromised control-flow events
violating the security policy. The experimental results on real-world
benchmarks show both the efficiency of the control-flow condensing and the
effectiveness of security enforcement.
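The verifier-side shadow-stack enforcement can be illustrated with a toy replay; this is our simplification of the general idea, not ReCFA's implementation, and the event format is invented for the example.

```python
# Toy shadow-stack check: replay (kind, site, target) control-flow events and
# flag a return whose target does not match the saved return site.
def verify(events):
    shadow = []
    for kind, site, target in events:
        if kind == "call":
            shadow.append(site + 1)          # push expected return site
        elif kind == "ret":
            if not shadow or shadow.pop() != target:
                return False                 # context-sensitive CFI violation
    return True

benign = [("call", 10, 100), ("call", 102, 200), ("ret", 205, 103), ("ret", 110, 11)]
hijack = [("call", 10, 100), ("ret", 105, 999)]  # return redirected elsewhere
print(verify(benign), verify(hijack))            # True False
```

Matching returns against the pushed call sites is what makes the check context-sensitive, as opposed to merely verifying that each edge exists in the CFG.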
|
We study the prediction with expert advice setting, where the aim is to
produce a decision by combining the decisions generated by a set of experts,
e.g., independently running algorithms. We achieve the min-max optimal dynamic
regret under the prediction with expert advice setting, i.e., we can compete
against time-varying (not necessarily fixed) combinations of expert decisions
in an optimal manner. Our end-algorithm is truly online with no prior
information, such as the time horizon or loss ranges, which are commonly used
by different algorithms in the literature. Both our regret guarantees and the
min-max lower bounds are derived with the general consideration that the expert
losses can have time-varying properties and are possibly unbounded. Our
algorithm can be adapted for restrictive scenarios regarding both loss feedback
and decision making. Our guarantees are universal, i.e., our end-algorithm can
provide regret guarantee against any competitor sequence in a min-max optimal
manner with logarithmic complexity. Note that, to our knowledge, for the
prediction with expert advice problem, our algorithms are the first to produce
such universally optimal, adaptive and truly online guarantees with no prior
knowledge.
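As background for the setting (not the paper's min-max optimal dynamic-regret algorithm, which we do not reproduce), the classical exponentially weighted average forecaster combines expert decisions with a static-regret guarantee of sqrt(T ln(K) / 2) for losses in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 1000, 5
losses = rng.uniform(0, 1, size=(T, K))          # arbitrary bounded expert losses
losses[:, 0] *= 0.5                              # make expert 0 the best on average

eta = np.sqrt(8 * np.log(K) / T)                 # standard tuning for [0,1] losses
w = np.ones(K)
alg_loss = 0.0
for t in range(T):
    p = w / w.sum()                              # distribution over experts
    alg_loss += p @ losses[t]                    # play the weighted combination
    w *= np.exp(-eta * losses[t])                # multiplicative weight update

best = losses.sum(axis=0).min()                  # best fixed expert in hindsight
regret = alg_loss - best
print(regret <= np.sqrt(T * np.log(K) / 2))      # True: the Hedge regret bound
```

Note that this baseline needs the horizon T and the loss range up front, exactly the prior knowledge the paper's truly online algorithm dispenses with.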
|
In this paper, we almost completely settle the existence of almost
resolvable cycle systems with odd cycle length. We also use almost resolvable
cycle systems as well as other combinatorial structures to give some new
solutions to the Hamilton-Waterloo problem.
|
Signatures from Pop III massive stars of $140$--$260\,{\rm M_\odot}$ that end
their lives as pair-instability supernovae (PISNe) are expected to be seen in
very metal-poor (VMP) stars of ${\rm [Fe/H]}\leq -2$. Although thousands of VMP
stars have been discovered, the identification of a VMP star with a PISN
signature has been elusive. Recently, the VMP star LAMOST J1010+2358 was
claimed to be the first star with a clear PISN signature. A subsequent study
showed that ejecta from low-mass core-collapse supernovae (CCSNe) can also fit
the abundance pattern equally well and additional elements such as C and Al are
required to differentiate the two sources. Follow-up observations of LAMOST
J1010+2358 by two independent groups were able to detect both C and Al.
Additionally, key odd elements such as Na and Sc were also detected whose
abundances were found to be higher than the upper limits found in the original
detection. We perform a detailed analysis of the newly observed abundance
patterns by exploring various possible formation channels for VMP stars. We
find that purely low-mass CCSN ejecta as well as the combination of CCSN and
Type Ia SN ejecta can provide an excellent fit to the newly observed abundance
pattern. Our results confirm earlier analysis that the newly observed abundance
pattern is peculiar but has no signatures of PISN.
|
A graph $ G $ is minimally $ t $-tough if the toughness of $ G $ is $ t $ and
deletion of any edge from $ G $ decreases its toughness. Katona et al.
conjectured that the minimum degree of any minimally $ t $-tough graph is $
\lceil 2t\rceil $ and gave some upper bounds on the minimum degree of the
minimally $ t $-tough graphs in \cite{Katona, Gyula}. In this paper, we show
that a minimally 1-tough graph $ G $ with girth $ g\geq 5 $ has minimum degree
at most $ \lfloor\frac{n}{g+1}\rfloor+g-1$, and a minimally $ 1 $-tough graph
with girth $ 4 $ has minimum degree at most $ \frac{n+6}{4}$. We also prove
that the minimum degree of minimally $\frac{3}2$-tough claw-free graphs is $ 3
$.
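The toughness $t(G)$ used above is $\min |S|/c(G-S)$ over vertex cutsets $S$, where $c(G-S)$ counts components after deleting $S$. A brute-force sketch (assumed helper names, exponential time, small graphs only; not from the paper) makes the definition concrete:

```python
from itertools import combinations

def num_components(n, edges, removed):
    """Count connected components of the graph on {0,...,n-1}
    after deleting the vertex set `removed` (DFS-based)."""
    verts = set(range(n)) - removed
    adj = {v: set() for v in verts}
    for u, v in edges:
        if u in verts and v in verts:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), 0
    for v in verts:
        if v in seen:
            continue
        comps += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)
    return comps

def toughness(n, edges):
    """Brute-force toughness: min |S| / c(G - S) over all vertex
    sets S whose removal disconnects G."""
    best = float("inf")
    for k in range(1, n - 1):
        for S in combinations(range(n), k):
            c = num_components(n, edges, set(S))
            if c >= 2:
                best = min(best, k / c)
    return best
```

For example, every cycle $C_n$ ($n \ge 4$) has toughness exactly $1$; minimality as in the abstract would additionally require that deleting any single edge lowers this value.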
|
Quantum fluctuations are ubiquitous in physics. Ranging from conventional
examples like the harmonic oscillator to intricate theories on the origin of
the universe, they alter virtually all aspects of matter -- including
superconductivity, phase transitions and nanoscale processes. As a rule of
thumb, the smaller the object, the larger their impact. This poses a serious
challenge to modern nanotechnology, which aims at total control via atom-by-atom
engineered devices. In magnetic nanostructures, high stability of the magnetic
signal is crucial when targeting realistic applications in information
technology, e.g. miniaturized bits. Here, we demonstrate that zero-point
spin-fluctuations are paramount in determining the fundamental magnetic
exchange interactions that dictate the nature and stability of the magnetic
state. Hinging on the fluctuation-dissipation theorem, we establish that
quantum fluctuations correctly account for the large overestimation of the
interactions as obtained from conventional static first-principles frameworks,
filling in a crucial gap between theory and experiment [1,2]. Our analysis
further reveals that zero-point spin-fluctuations tend to promote the
non-collinearity and stability of chiral magnetic textures such as skyrmions --
a counter-intuitive quantum effect that inspires practical guidelines for
designing disruptive nanodevices.
|
Recent advancements in diffusion models have significantly impacted the
trajectory of generative machine learning research, with many adopting the
strategy of fine-tuning pre-trained models using domain-specific text-to-image
datasets. Notably, this method has been readily employed for medical
applications, such as X-ray image synthesis, leveraging the plethora of
associated radiology reports. Yet, a prevailing concern is the lack of
assurance on whether these models genuinely comprehend their generated content.
With the evolution of text-conditional image generation, these models have
grown potent enough to facilitate object localization scrutiny. Our research
underscores this advancement in the critical realm of medical imaging,
emphasizing the crucial role of interpretability. We further unravel a
consequential trade-off between image fidelity as gauged by conventional
metrics and model interpretability in generative diffusion models.
Specifically, the adoption of learnable text encoders when fine-tuning results
in diminished interpretability. Our in-depth exploration uncovers the
underlying factors responsible for this divergence. Consequently, we present a
set of design principles for the development of truly interpretable generative
models. Code is available at https://github.com/MischaD/chest-distillation.
|