We explicitly write down the Eisenstein elements inside the space of modular
symbols for Eisenstein series with integer coefficients for the congruence
subgroups $\Gamma_0(N)$ with $N$ odd and square-free. We also compute the
winding elements explicitly for these congruence subgroups. This answers a
question of Merel in these cases. Our results are explicit versions of the
Manin-Drinfeld Theorem [Thm. 6] and generalize the results of [1] to odd
square-free level.
|
A dilemma worthy of Shakespeare's Hamlet is increasingly haunting companies and
security researchers: ``to update or not to update, that is the question``.
From the perspective of the common practices recommended by software vendors,
the answer is unambiguous: you should keep your software up-to-date. But is
common sense always good sense? We argue it is not.
|
We calculate the amplitude for exclusive electroweak production of a
pseudoscalar $D_s$ or a vector $D^*_s$ charmed strange meson on an unpolarized
nucleon, through a charged current, in leading order in $\alpha_s$. We work in
the framework of the collinear QCD approach where generalized gluon
distributions factorize from perturbatively calculable coefficient functions.
We include both $O(m_c)$ terms in the coefficient functions and $O(M_D)$ mass
term contributions in the heavy meson distribution amplitudes. We show that
this process may be accessed at future electron-ion colliders.
|
The recent advancements in generative language models have demonstrated their
ability to memorize knowledge from documents and recall knowledge to respond to
user queries effectively. Building upon this capability, we propose to enable
multimodal large language models (MLLMs) to memorize and recall images within
their parameters. Given a user query for visual content, the MLLM is
anticipated to "recall" the relevant image from its parameters as the response.
Achieving this target presents notable challenges, including building visual
memory and visual recall schemes into MLLMs. To address these challenges, we
introduce a generative cross-modal retrieval framework, which assigns unique
identifier strings to represent images and involves two training steps:
learning to memorize and learning to retrieve. The first step focuses on
training the MLLM to memorize the association between images and their
respective identifiers. The latter step teaches the MLLM to generate the
corresponding identifier of the target image, given the textual query input. By
memorizing images in MLLMs, we introduce a new paradigm to cross-modal
retrieval, distinct from previous discriminative approaches. The experiments
demonstrate that the generative paradigm performs effectively and efficiently
even with large-scale image candidate sets.
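As an illustration of the two-step recipe, the following minimal sketch (our
toy construction, not the paper's code; the tiny GRU decoder, identifier
length, and random features are hypothetical stand-ins for an MLLM) first
trains the model to map image features to identifier strings ("learning to
memorize") and then to map query features to the same identifiers ("learning
to retrieve"):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    vocab_size, d, id_len, n_images = 100, 32, 5, 8

    class TinyGenerativeRetriever(nn.Module):
        """Toy stand-in for an MLLM: encodes features, decodes an identifier."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d)       # identifier tokens
            self.encoder = nn.Linear(d, d)                 # features -> initial state
            self.decoder = nn.GRU(d, d, batch_first=True)  # autoregressive decoder
            self.head = nn.Linear(d, vocab_size)           # next-token logits

        def forward(self, feats, id_prefix):
            h0 = torch.tanh(self.encoder(feats)).unsqueeze(0)
            out, _ = self.decoder(self.embed(id_prefix), h0)
            return self.head(out)

    model = TinyGenerativeRetriever()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    ids = torch.randint(0, vocab_size, (n_images, id_len))  # id strings (assumed unique)
    img_feats = torch.randn(n_images, d)   # stand-in for image embeddings
    txt_feats = torch.randn(n_images, d)   # stand-in for query embeddings

    # Step 1: memorize (image -> id); step 2: retrieve (query -> id).
    # Both steps reduce to next-token prediction on the identifier string.
    for feats in (img_feats, txt_feats):
        for _ in range(200):
            logits = model(feats, ids[:, :-1])             # teacher forcing
            loss = nn.functional.cross_entropy(
                logits.reshape(-1, vocab_size), ids[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()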
|
Feature selection problems arise in a variety of applications, such as
microarray analysis, clinical prediction, text categorization, image
classification and face recognition, multi-label learning, and classification
of internet traffic. Among the various classes of methods, forward feature
selection methods based on mutual information have become very popular and are
widely used in practice. However, comparative evaluations of these methods have
been limited by being based on specific datasets and classifiers. In this
paper, we develop a theoretical framework that allows evaluating the methods
based on their theoretical properties. Our framework is grounded on the
properties of the target objective function that the methods try to
approximate, and on a novel categorization of features, according to their
contribution to the explanation of the class; we derive upper and lower bounds
for the target objective function and relate these bounds with the feature
types. Then, we characterize the types of approximations taken by the methods,
and analyze how these approximations cope with the good properties of the
target objective function. Additionally, we develop a distributional setting
designed to illustrate the various deficiencies of the methods, and provide
several examples of wrong feature selections. Based on our work, we clearly
identify the methods that should be avoided, and the methods that currently
have the best performance.
|
We discuss the fermionization of fusion category symmetries in
two-dimensional topological quantum field theories (TQFTs). When the symmetry
of a bosonic TQFT is described by the representation category $\mathrm{Rep}(H)$
of a semisimple weak Hopf algebra $H$, the fermionized TQFT has a superfusion
category symmetry $\mathrm{SRep}(\mathcal{H}^u)$, which is the supercategory of
super representations of a weak Hopf superalgebra $\mathcal{H}^u$. The weak
Hopf superalgebra $\mathcal{H}^u$ depends not only on $H$ but also on a choice
of a non-anomalous $\mathbb{Z}_2$ subgroup of $\mathrm{Rep}(H)$ that is used
for the fermionization. We derive a general formula for $\mathcal{H}^u$ by
explicitly constructing fermionic TQFTs with $\mathrm{SRep}(\mathcal{H}^u)$
symmetry. We also construct lattice Hamiltonians of fermionic gapped phases
when the symmetry is non-anomalous. As concrete examples, we compute the
fermionization of finite group symmetries, the symmetries of finite gauge
theories, and duality symmetries. We find that the fermionization of duality
symmetries depends crucially on $F$-symbols of the original fusion categories.
The computation of the above concrete examples suggests that our fermionization
formula of fusion category symmetries can also be applied to non-topological
QFTs.
|
It has been found that the topology effect and the possible emergent scale
and hidden local flavor symmetries at high density reveal a novel structure of
the compact star matter. The $N_f\geq2$ baryons can be described by the
skyrmion in the large $N_c$ limit and there is a robust topology change in the
skyrmion matter approach to dense nuclear matter. The hidden scale and local
flavor symmetries, which are the sources introducing the lightest scalar meson
-- the dilaton -- and the lowest-lying vector mesons into nonlinear chiral
effective theory, are seen to play important roles in understanding the
nuclear force. We review in this paper the generalized nuclear effective
theory (G$n$EFT), which is applicable to nuclear matter from low density to
compact star density,
constructed with the robust conclusion from the topology approach to dense
matter and emergent scale and hidden local flavor symmetries. The topology
change at densities larger than twice the saturation density $n_0$, encoded in
the parameters of the effective field theory, is interpreted as the
hadron-quark continuity in the sense of the Cheshire Cat Principle. A novel
feature predicted in
this theory that has not been found before is the precocious appearance of the
conformal sound velocity in the cores of massive stars, although the trace of
the energy-momentum tensor of the system is not zero. That is, in contrast to
the usual picture, the cores of massive stars are composed of quasiparticles of
fractional baryon charges, neither baryons nor deconfined quarks. Hidden scale
and local flavor symmetries emerge and give rise to a resolution of the
longstanding $g_A$ quenching problem in nuclear transitions. To illustrate the
soundness of the G$n$EFT, we finally confront it with the global properties of
neutron stars and the data from gravitational wave detections.
|
Off-center stellar tidal disruption flares have been suggested to be a
powerful probe of recoiling supermassive black holes (SMBHs) out of galactic
centers due to anisotropic gravitational wave radiation. However, off-center
tidal flares can also be produced by SMBHs in merging galaxies. In this paper,
we compute the tidal flare rates by dual SMBHs in two merging galaxies before
the SMBHs become gravitationally bound to each other. We employ an analytical
model to
calculate the tidal loss-cone feeding rates for both SMBHs, taking into account
two-body relaxation of stars, tidal perturbations by the companion galaxy, and
chaotic stellar orbits in triaxial gravitational potential. We show that for
typical SMBHs with mass 10^7 M_\sun, the loss-cone feeding rates are enhanced
by mergers up to \Gamma ~ 10^{-2} yr^{-1}, about two orders of magnitude higher
than those by single SMBHs in isolated galaxies and about four orders of
magnitude higher than those by recoiling SMBHs. The enhancements are mainly due
to tidal perturbations by the companion galaxy. We suggest that off-center
tidal flares are overwhelmed by those from merging galaxies, making the
identification of recoiling SMBHs challenging. Based on the calculated rates,
we estimate the relative contributions of tidal flare events by single, binary,
and dual SMBH systems during cosmic time. Our calculations show that the
off-center tidal disruption flares by unbound SMBHs in merging galaxies
contribute a fraction comparable to that by single SMBHs in isolated galaxies.
We conclude that off-center tidal disruptions are powerful tracers of the
merging history of galaxies and SMBHs.
|
Advances in Web technology enable personalization proxies that assist users
in satisfying their complex information monitoring and aggregation needs
through the repeated querying of multiple volatile data sources. Such proxies
face a scalability challenge when trying to maximize the number of clients
served while at the same time fully satisfying clients' complex user profiles.
In this work we use an abstraction of complex execution intervals (CEIs)
constructed over simple execution intervals (EIs) to represent user profiles,
and use an existing offline approximation as a baseline for maximizing the
completeness of
capturing CEIs. We present three heuristic solutions for the online problem of
query scheduling to satisfy complex user profiles. The first only considers
properties of individual EIs while the other two exploit properties of all EIs
in the CEI. We use an extensive set of experiments on real traces and synthetic
data to show that heuristics that exploit knowledge of the CEIs dominate across
multiple parameter settings.
|
We study the direct detection of supersymmetric dark matter in the light of
recent experimental results. In particular, we show that regions in the
parameter space of several scenarios with a neutralino-nucleon cross section of
the order of $10^{-6}$ pb, i.e., where current dark matter detectors are
sensitive, can be obtained. These are supergravity scenarios with intermediate
unification scale, and superstring scenarios with D-branes.
|
A protocol named Threshold Bipolar (TB) is proposed as a fetching strategy for
the startup stage of p2p live streaming systems. In this protocol, chunks are
initially fetched consecutively from the buffer head. After the buffer fills
up to a threshold, chunks at the buffer tail are fetched first, while the
contiguously filled part of the buffer is kept above the threshold even as the
buffer drains at the playback rate. High download rate, small startup latency,
and natural strategy handover can be achieved simultaneously by this protocol.
Important parameters of the protocol are identified. The buffer progress under
this protocol is then expressed as piecewise-linear curves specified by those
parameters. Startup traces of peers measured from PPLive are studied to show
the real performance of the TB protocol in a deployed system. A simple design
model of the TB protocol is proposed to reveal important considerations for a
practical design.
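A minimal sketch of our reading of the fetching rule (the function and its
parameters are illustrative, not the protocol specification):

    # Toy Threshold Bipolar chunk selection: fill the buffer head until the
    # contiguous prefix reaches threshold T, then prefer the buffer tail as
    # long as playback keeps the contiguous prefix above T.
    def next_chunk_to_fetch(buffer, playhead, T, window):
        """buffer: set of chunk indices already fetched; playhead: next chunk
        to play; window: newest chunk index available from neighbors."""
        contiguous = 0                            # contiguously filled prefix
        while playhead + contiguous in buffer:
            contiguous += 1
        if contiguous < T:
            return playhead + contiguous          # fill head up to threshold
        for c in range(window, playhead - 1, -1): # above threshold: tail first
            if c not in buffer:
                return c
        return None                               # nothing missing in window

    buf = {10, 11, 12}
    print(next_chunk_to_fetch(buf, playhead=10, T=5, window=40))  # -> 13 (head)
    buf |= {13, 14}
    print(next_chunk_to_fetch(buf, playhead=10, T=5, window=40))  # -> 40 (tail)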
|
We consider the resummation of soft gluon emission for squark and gluino
hadroproduction at next-to-leading-logarithmic (NLL) accuracy in the framework
of the minimal supersymmetric standard model. We present analytical results for
squark-squark and squark-gluino production and provide numerical predictions
for all squark and gluino pair-production processes at the Tevatron and at the
LHC. The size of the soft-gluon corrections and the reduction in the scale
uncertainty are most significant for processes involving gluino production. At
the LHC, where the sensitivity to squark and gluino masses ranges up to 3 TeV,
the corrections due to NLL resummation over and above the NLO predictions can
be as high as 35% in the case of gluino-pair production, whereas at the
Tevatron, the NLL corrections are close to 40% for squark-gluino final states
with sparticle masses around 500 GeV.
|
It is shown that, for kernel-based classification with univariate
distributions and two populations, optimal bandwidth choice has a dichotomous
character. If the two densities cross at just one point, where their curvatures
have the same signs, then minimum Bayes risk is achieved using bandwidths which
are an order of magnitude larger than those which minimize pointwise estimation
error. On the other hand, if the curvature signs are different, or if there are
multiple crossing points, then bandwidths of conventional size are generally
appropriate. The range of different modes of behavior is narrower in
multivariate settings. There, the optimal size of bandwidth is generally the
same as that which is appropriate for pointwise density estimation. These
properties motivate empirical rules for bandwidth choice.
|
In supervised learning, automatically assessing the quality of the labels
before any learning takes place remains an open research question. In certain
particular cases, hypothesis testing procedures have been proposed to assess
whether a given instance-label dataset is contaminated with class-conditional
label noise, as opposed to uniform label noise. The existing theory builds on
the asymptotic properties of the Maximum Likelihood Estimate for parametric
logistic regression. However, the parametric assumptions on top of which these
approaches are constructed are often too strong and unrealistic in practice. To
alleviate this problem, in this paper we propose an alternative path by showing
how similar procedures can be followed when the underlying model is a product
of Local Maximum Likelihood Estimation that leads to more flexible
nonparametric logistic regression models, which in turn are less susceptible to
model misspecification. This different view allows for wider applicability of
the tests by offering users access to a richer model class. Similarly to
existing works, we assume we have access to anchor points which are provided by
the users. We introduce the necessary ingredients for the adaptation of the
hypothesis tests to the case of nonparametric logistic regression, empirically
compare against the parametric approach on both synthetic and real-world case
studies, and discuss the advantages and limitations of the proposed approach.
|
Much work has studied effective interactions between micron-sized particles
carrying linkers that form reversible, inter-particle linkages. These studies
have clarified the equilibrium properties of colloids interacting
through ligand-receptor interactions. Nevertheless, understanding the kinetics
of multivalent interactions remains an open problem. Here, we study how
molecular details of the linkers, such as the reaction rates at which
inter-particle linkages form/break, affect the relative dynamics of pairs of
cross-linked colloids. Using a simulation method tracking single
binding/unbinding events between complementary linkers, we rationalize recent
experiments and prove that particles' interfaces can move across each other
while being cross-linked. We clarify how, starting from diffusing colloids, the
dynamics become arrested when increasing the number of inter-particle linkages
or decreasing the reaction rates. Before getting arrested, particles diffuse
through rolling motion. The ability to detect rolling motion will be useful to
shed new light on host-pathogen interactions.
|
In this paper we study the universal lifting spaces of local Galois
representations valued in arbitrary reductive group schemes when $\ell \neq p$.
In particular, under certain technical conditions applicable to any root datum
we construct a canonical smooth component in such spaces, generalizing the
minimally ramified deformation condition previously studied for classical
groups. Our methods involve extending the notion of isotypic decomposition for
a $\textrm{GL}_n$-valued representation to general reductive group schemes. To
deal with certain scheme-theoretic issues coming from this notion, we are led
to a detailed study of certain families of disconnected reductive groups, which
we call weakly reductive group schemes. Our work can be used to produce
geometric lifts for global Galois representations, and we illustrate this for
$\mathrm{G}_2$-valued representations.
|
Kepler has identified over 600 multiplanet systems, many of which have
several planets with orbital distances smaller than that of Mercury -- quite
different from the Solar System. Because these systems may be difficult to
explain in the paradigm of core accretion and disk migration, it has been
suggested that they formed in situ within protoplanetary disks with high solid
surface densities. The strong connection between giant planet occurrence and
stellar metallicity is thought to be linked to enhanced solid surface densities
in disks around metal-rich stars, so the presence of a giant planet can be a
detectable sign of planet formation in a high solid surface density disk. I
formulate quantitative predictions for the frequency of long-period giant
planets in these in situ models of planet formation by translating the proposed
increase in disk mass into an equivalent metallicity enhancement. I rederive
the scaling of giant planet occurrence with metallicity as P_gp =
0.05_{-0.02}^{+0.02} x 10^{(2.1 +/- 0.4) [M/H]} = 0.08_{-0.03}^{+0.02} x
10^{(2.3 +/- 0.4) [Fe/H]} and show that there is significant tension between
the frequency of giant planets suggested by the minimum mass extrasolar nebula
scenario and the observational upper limits. This fact suggests that high-mass
disks alone cannot explain the observed properties of the close-in Kepler
multiplanet systems and that migration is still a necessary contributor to
their formation. More speculatively, I combine the metallicity scaling of giant
planet occurrence with recently published small planet occurrence rates to
estimate the number of Solar System analogs in the Galaxy. I find that in the
Milky Way there are perhaps 4 x 10^6 true Solar System analogs with an FGK star
hosting both a terrestrial planet in the habitable zone and a long-period giant
planet companion.
|
We propose a new two-stage pre-training framework for video-to-text
generation tasks such as video captioning and video question answering: A
generative encoder-decoder model is first jointly pre-trained on massive
image-text data to learn fundamental vision-language concepts, and then adapted
to video data in an intermediate video-text pre-training stage to learn
video-specific skills such as spatio-temporal reasoning. As a result, our
VideoOFA model achieves new state-of-the-art performance on four Video
Captioning benchmarks, beating prior art by an average of 9.7 points in CIDEr
score. It also outperforms existing models on two open-ended Video Question
Answering datasets, showcasing its generalization capability as a universal
video-to-text model.
|
When modeling the three-dimensional hydrodynamics of interstellar material
rotating in a galactic gravitational potential, it is useful to have an
analytic expression for gravitational perturbations due to stellar spiral arms.
We present such an expression for which changes in the assumed characteristics
of the arms can be made easily and the sensitivity of the hydrodynamics to
those characteristics examined. This analytic expression also makes it easy to
rotate the force field at the pattern angular velocity with little overhead on
the calculations.
|
Nowadays the Lyapunov exponents and Lyapunov dimension have become so
widespread and common that they are often used without references to the
rigorous definitions or pioneering works. This may lead to confusion, since
there are at least two well-known definitions used in computations:
the upper bounds of the exponential growth rates of the norms of
linearized-system solutions (Lyapunov characteristic exponents, LCEs) and the
upper bounds of the exponential growth rates of the singular values of the
fundamental matrix of the linearized system (Lyapunov exponents, LEs). In this
work the relation
between Lyapunov exponents and Lyapunov characteristic exponents is discussed.
The invariance of Lyapunov exponents for regular and irregular linearizations
under the change of coordinates is demonstrated.
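For the second definition (growth rates of the singular values of the
fundamental matrix), the standard numerical recipe re-orthogonalizes the
propagated tangent vectors with QR decompositions along a trajectory. A
minimal sketch for the Henon map, our illustrative choice of system:

    import numpy as np

    a, b = 1.4, 0.3
    x, y = 0.1, 0.1
    Q = np.eye(2)
    le_sums = np.zeros(2)
    n_steps = 100_000

    for _ in range(n_steps):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian of (x, y) -> (1 - a x^2 + y, b x)
                      [b, 0.0]])
        x, y = 1.0 - a * x * x + y, b * x    # advance the trajectory
        Q, R = np.linalg.qr(J @ Q)           # re-orthogonalize tangent vectors
        le_sums += np.log(np.abs(np.diag(R)))

    print(le_sums / n_steps)  # approx [0.42, -1.62] on the Henon attractor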
|
Model discovery aims at autonomously discovering differential equations
underlying a dataset. Approaches based on Physics Informed Neural Networks
(PINNs) have shown great promise, but a fully-differentiable model which
explicitly learns the equation has remained elusive. In this paper we propose
such an approach by integrating neural network-based surrogates with Sparse
Bayesian Learning (SBL). This combination yields a robust model discovery
algorithm, which we showcase on various datasets. We then identify a connection
with multitask learning, and build on it to construct a Physics Informed
Normalizing Flow (PINF). We present a proof-of-concept using a PINF to directly
learn a density model from single particle data. Our work expands PINNs to
various types of neural network architectures, and connects neural
network-based surrogates to the rich field of Bayesian parameter inference.
|
The Heisenberg uncertainty relation is at the origin of our understanding of
minimum uncertainty states (MUS) and squeezed states of light. In the recent
past, a sum
uncertainty relation was formulated by Maccone and Pati [Phys. Rev. Lett. 113,
260401 (2014)] which is claimed to be stronger than the existing
Heisenberg-Robertson product uncertainty relation for the set of two
incompatible observables. We deduce a different sum uncertainty relation that
is weaker than the previous one but is necessary and sufficient to define MUS
for sum uncertainty relations. We claim that the MUS for the sum uncertainty
relation
is always the MUS for the traditional product uncertainty relation. This means
that the definition of squeezed states remains completely unaffected by the
stronger sum uncertainty relation.
|
Let X be a closed Riemannian manifold and let H\hookrightarrow X be an
embedded hypersurface. Let X=X_+ \cup_H X_- be a decomposition of X into two
manifolds with boundary, with X_+ \cap X_- = H. In this expository article,
surgery -- or gluing -- formul\ae for several geometric and spectral invariants
associated to a Dirac-type operator \eth_X on X are presented. Considered in
detail are: the index of \eth_X, the index bundle and the determinant bundle
associated to a family of such operators, the eta invariant and the analytic
torsion. In each case the precise form of the surgery theorems, as well as the
different techniques used to prove them, are surveyed.
|
By using the relation between CP-violation phase and the mixing angles in
Cabibbo-Kobayashi-Maskawa matrix postulated by us before, the rephasing
invariant is recalculated. Furthermore, the problem about maximal CP violation
is discussed. We find that the maximal value of Jarlskog's invariant is about
0.038, occurring at alpha=1.239, beta=1.574 and gamma=0.327 in the triangle
db.
|
In recent years, explainable methods for artificial intelligence (XAI) have
tried to reveal and describe models' decision mechanisms in the case of
classification tasks. However, XAI for semantic segmentation and in particular
for single instances has been little studied to date. Understanding the process
underlying automatic segmentation of single instances is crucial to reveal what
information was used to detect and segment a given object of interest. In this
study, we propose two instance-level explanation maps for semantic
segmentation based on SmoothGrad and Grad-CAM++ methods. Then, we investigated
their relevance for the detection and segmentation of white matter lesions
(WML), a magnetic resonance imaging (MRI) biomarker in multiple sclerosis (MS).
A total of 4043 FLAIR and MPRAGE MRI scans from 687 patients diagnosed with MS
were collected at the University Hospital of Basel, Switzerland. Data were
randomly split into training, validation and test sets to train a 3D U-Net for
MS lesion segmentation. We observed 3050 true positive (TP), 1818 false
positive (FP), and 789 false negative (FN) cases. We generated instance-level
explanation maps for semantic segmentation, by developing two XAI methods based
on SmoothGrad and Grad-CAM++. We investigated: 1) the distribution of gradients
in saliency maps with respect to both input MRI sequences; 2) the model's
response in the case of synthetic lesions; 3) the amount of perilesional tissue
needed by the model to segment a lesion. Saliency maps (based on SmoothGrad) in
FLAIR showed positive values inside a lesion and negative in its neighborhood.
Peak values of saliency maps generated for these four groups of volumes
presented distributions that differ significantly from one another, suggesting
a quantitative nature of the proposed saliency. Contextual information of 7 mm
around the lesion border was required for lesion segmentation.
|
Given a family $\mathcal{F}$ of subsets of $[n]$, we say two sets $A, B \in
\mathcal{F}$ are comparable if $A \subset B$ or $B \subset A$. Sperner's
celebrated theorem gives the size of the largest family without any comparable
pairs. This result was later generalised by Kleitman, who gave the minimum
number of comparable pairs appearing in families of a given size.
In this paper we study a complementary problem posed by Erd\H{o}s and Daykin
and Frankl in the early '80s. They asked for the maximum number of comparable
pairs that can appear in a family of $m$ subsets of $[n]$, a quantity we denote
by $c(n,m)$. We first resolve an old conjecture of Alon and Frankl, showing
that $c(n,m) = o(m^2)$ when $m = n^{\omega(1)} 2^{n/2}$. We also obtain more
accurate bounds for $c(n,m)$ for sparse and dense families, characterise the
extremal constructions for certain values of $m$, and sharpen some other known
results.
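For very small $n$, the quantity $c(n,m)$ can be checked by exhaustive search.
A brute-force sketch, usable only as a sanity check since it enumerates all
families of $m$ subsets:

    from itertools import combinations

    def comparable_pairs(family):
        # A pair (A, B) is comparable if A is a subset of B or vice versa.
        return sum(1 for A, B in combinations(family, 2)
                   if A <= B or B <= A)

    def c(n, m):
        subsets = [frozenset(s) for r in range(n + 1)
                   for s in combinations(range(n), r)]
        return max(comparable_pairs(f) for f in combinations(subsets, m))

    # A chain of 4 nested subsets of [3] makes all 6 pairs comparable:
    print(c(3, 4))  # -> 6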
|
We present proofs for the existence of distributional potentials
$F\in{\mathcal D}'(\Omega)$ for distributional vector fields $G\in{\mathcal
D}'(\Omega)^n$, i.e. $\operatorname{grad} F=G$, where $\Omega$ is an open
subset of ${\mathbb R}^n$. The hypothesis in these proofs is the compatibility
condition $\partial_jG_k=\partial_kG_j$ for all $j,k\in\{1,\dots,n\}$, if
$\Omega$ is simply connected, and a stronger condition in the general case. A
key ingredient of our treatment is the use of the Bogovskii formula, assigning
vector fields $v\in{\mathcal D}(\Omega)^n$ with $\operatorname{div} v=\varphi$
to functions $\varphi\in{\mathcal D}(\Omega)$ with $\int
\varphi(x)\,\mathrm{d}x=0$. The results are applied to properties of Hilbert
spaces of functions occurring in the treatment of the Stokes operator and the
Navier--Stokes equations.
|
We investigate the 3rd term of spectral heat content for killed subordinate
and subordinate killed Brownian motions on a bounded open interval D = (a, b)
in the real line when the underlying subordinators are stable subordinators
with index \alpha in (1, 2) or \alpha = 1. We prove that in the 3rd term of
spectral heat content, one can observe the length b-a of the interval D.
|
The CDF and D0 experiments at the Tevatron have used p-pbar collisions at
sqrt(s)=1.96 TeV to measure the cross section of W and Z boson production using
several leptonic final states. An indirect measurement of the total W width has
been extracted, and the lepton charge asymmetry in Drell-Yan production has
been studied up to invariant masses of 600 GeV/c^2.
|
We examine the chiral corrections to exotic meson masses calculated in
lattice QCD. In particular, we ask whether the non-linear chiral behavior at
small quark masses, which has been found in other hadronic systems, could lead
to large corrections to the predictions of exotic meson masses based on linear
extrapolations to the chiral limit. We find that our present understanding of
exotic meson decay dynamics suggests that open channels may not make a
significant contribution to such non-linearities whereas the virtual, closed
channels may be important.
|
Let p be a prime number. In [9], I introduced the Roquette category R_p of
finite p-groups, which is an additive tensor category containing all finite
p-groups among its objects. In R_p, every finite p-group P admits a canonical
direct summand dP, called the edge of P. Moreover P splits uniquely as a direct
sum of edges of Roquette p-groups. In this note, I would like to describe a
fast algorithm to obtain such a decomposition when p is odd. Reference: [9]
The Roquette category of finite p-groups, J. Eur. Math. Soc. (to appear).
|
In this paper, we are interested in the minimal null control time of
one-dimensional first-order linear hyperbolic systems by one-sided boundary
controls. Our main result is an explicit characterization of the smallest and
largest values that this minimal null control time can take with respect to the
internal coupling matrix. In particular, we obtain a complete description of
the situations where the minimal null control time is invariant with respect to
all the possible choices of internal coupling matrices. The proof relies on the
notion of equivalent systems, in particular the backstepping method, a
canonical $LU$-decomposition for boundary coupling matrices and a
compactness-uniqueness method adapted to the null controllability property.
|
Realizing a spatial superposition with massive objects is one of the most
fundamental challenges, as it will test quantum theory in new regimes, probe
quantum gravity, and enable tests of exotic theories such as gravitationally
induced collapse. A natural extension of the successful implementation of an
atomic Stern-Gerlach interferometer (SGI) is an SGI with a nano-diamond (ND)
which a single spin is embedded in the form of a nitrogen-vacancy center (NV).
As the ND rotation, and with it the rotation of the NV spin direction, may
inhibit such a realization, both in terms of Newtonian trajectories and quantum
phases, we analyze here the role of rotations in the SGI. We take into account
fundamental limits, such as those imposed by the quantum angular uncertainty
relation and thermal fluctuations. We provide a detailed recipe that enables a
superposition of massive objects. This may open the door not only to
fundamental tests, but also to new forms of quantum technology.
|
The Whitehead asphericity problem, regarded as a problem of combinatorial
group theory, asks whether any subpresentation of an aspherical group
presentation is also aspherical. This is a long standing open problem which has
attracted a lot of attention. Related to it, throughout the years there have
been given several useful characterizations of asphericity which are either
combinatorial or topological in nature. The aim of this paper is twofold.
First, it brings in methods from semigroup theory to give a new combinatorial
characterization of asphericity in terms of what we define here to be the weak
dominion of a submonoid of a monoid, and uses this to give a sufficient and
necessary condition under which a subpresentation of an aspherical group
presentation is aspherical.
|
We have studied the branching ratios of doubly charged Higgs bosons at the
LHC using a version of the $SU(3)_L \otimes U(1)_N$ electroweak model. At the
end of this work we present a simple plot comparing the total cross section of
this model, using Drell-Yan and gluon-gluon fusion production, with that of
the left-right symmetric model.
|
The discovery of topological insulators (TIs) and their unique electronic
properties has motivated research into a variety of applications, including
quantum computing. It has been proposed that TI surface states will be
energetically discretized in a quantum dot nanoparticle. These discretized
states could then be used as basis states for a qubit that is more resistant to
decoherence. In this work, prototypical TI Bi2Se3 nanoparticles are grown on
GaAs (001) using the droplet epitaxy technique, and we demonstrate the control
of nanoparticle height, area, and density by changing the duration of bismuth
deposition and substrate temperature. Within the growth window studied,
nanoparticles ranged from 5 to 15 nm in height with an 8-18 nm equivalent circular
radius, and the density could be relatively well controlled by changing the
substrate temperature and bismuth deposition time.
|
We present the first findings of the spin transistor effect in the Rashba
gate-controlled ring embedded in the p-type self-assembled silicon quantum well
that is prepared on the n-type Si (100) surface. The coherence and phase
sensitivity of the spin-dependent transport of holes are studied by varying the
value of the external magnetic field and the top gate voltage that are applied
perpendicularly to the plane of the double-slit ring and revealed by the
Aharonov-Bohm (AB) and Aharonov-Casher (AC) conductance oscillations,
respectively. Firstly, the amplitude and phase sensitivity of the 0.7(2e^2/h)
feature of the hole quantum conductance staircase revealed by the quantum
point contact inserted in one of the arms of the double-slit ring are found to
result from the interplay of the spontaneous spin polarization and the Rashba
spin-orbit interaction (SOI). Secondly, the values of the AC conductance
oscillations caused by the Rashba SOI are found to take the fractional form
with both the plateaus and steps as a function of the top gate voltage.
|
Edge machine learning involves the development of learning algorithms at the
network edge to leverage massive distributed data and computation resources.
Among others, the framework of federated edge learning (FEEL) is particularly
promising for its data-privacy preservation. FEEL coordinates global model
training at a server and local model training at edge devices over wireless
links. In this work, we explore the new direction of energy-efficient radio
resource management (RRM) for FEEL. To reduce devices' energy consumption, we
propose energy-efficient strategies for bandwidth allocation and scheduling.
They adapt to devices' channel states and computation capacities so as to
reduce their sum energy consumption while guaranteeing learning performance. In
contrast with the traditional rate-maximization designs, the derived optimal
policies allocate more bandwidth to those scheduled devices with weaker
channels or poorer computation capacities, which are the bottlenecks of
synchronized model updates in FEEL. On the other hand, the scheduling priority
function derived in closed form gives preferences to devices with better
channels and computation capacities. Substantial energy reduction contributed
by the proposed strategies is demonstrated in learning experiments.
|
In recent years, the question of the reliability of Machine Learning (ML)
methods has acquired significant importance, and the analysis of the associated
uncertainties has motivated a growing amount of research. However, most of
these studies have applied standard error analysis to ML models, and in
particular Deep Neural Network (DNN) models, which represent a rather
significant departure from standard scientific modelling. It is therefore
necessary to integrate the standard error analysis with a deeper
epistemological analysis of the possible differences between DNN models and
standard scientific modelling and the possible implications of these
differences in the assessment of reliability. This article offers several
contributions. First, it emphasises the ubiquitous role of model assumptions
(both in ML and traditional Science) against the illusion of theory-free
science. Secondly, model assumptions are analysed from the point of view of
their (epistemic) complexity, which is shown to be language-independent. It is
argued that the high epistemic complexity of DNN models hinders the estimate of
their reliability and also their prospect of long-term progress. Some potential
ways forward are suggested. Thirdly, this article identifies the close relation
between a model's epistemic complexity and its interpretability, as introduced
in the context of responsible AI. This clarifies in which sense, and to what
extent, the lack of understanding of a model (black-box problem) impacts its
interpretability in a way that is independent of individual skills. It also
clarifies how interpretability is a precondition for assessing the reliability
of any model, which cannot be based on statistical analysis alone. This article
focuses on the comparison between traditional scientific models and DNN models.
However, Random Forest and Logistic Regression models are also briefly considered.
|
We give a completely explicit formula for all harmonic maps of finite uniton
number from a Riemann surface to the unitary group U(n) in any dimension, and
so all harmonic maps from the 2-sphere, in terms of freely chosen meromorphic
functions on the surface and their derivatives, using only combinations of
projections and avoiding the usual dbar-problems or loop group factorizations.
We interpret our constructions using Segal's Grassmannian model, giving an
explicit factorization of the algebraic loop group, and showing how to obtain
harmonic maps into a Grassmannian.
|
Current deep-learning models for object recognition are known to be heavily
biased toward texture. In contrast, human visual systems are known to be biased
toward shape and structure. What could be the design principles in human visual
systems that led to this difference? How could we introduce more shape bias
into the deep learning models? In this paper, we report that sparse coding, a
ubiquitous principle in the brain, can in itself introduce shape bias into the
network. We found that enforcing the sparse coding constraint using a
non-differentiable Top-K operation can lead to the emergence of structural
encoding in neurons in convolutional neural networks, resulting in a smooth
decomposition of objects into parts and subparts and endowing the networks with
shape bias. We demonstrated this emergence of shape bias and its functional
benefits for different network structures with various datasets. For object
recognition convolutional neural networks, the shape bias leads to greater
robustness against style and pattern change distraction. For the image
synthesis generative adversarial networks, the emergent shape bias leads to
more coherent and decomposable structures in the synthesized images. Ablation
studies suggest that sparse codes tend to encode structures, whereas more
distributed codes tend to favor texture. Our code is hosted at the GitHub
repository: \url{https://github.com/Crazy-Jack/nips2023_shape_vs_texture}
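A minimal sketch of the Top-K constraint as we understand it (our
illustration; the repository above contains the authors' actual
implementation): keep the K largest channel activations at each spatial
position, zero the rest, and let gradients flow only through the survivors.

    import torch

    def topk_activation(x, k):
        """Keep the k largest entries along the channel dimension, zero the
        rest. x: (batch, channels, height, width) feature map."""
        vals, idx = x.topk(k, dim=1)
        mask = torch.zeros_like(x).scatter_(1, idx, 1.0)
        return x * mask

    feats = torch.randn(2, 64, 8, 8, requires_grad=True)
    sparse = topk_activation(feats, k=8)     # 8 of 64 channels survive per pixel
    sparse.sum().backward()                  # gradient is nonzero only on survivors
    print((feats.grad != 0).float().mean())  # ~ 8/64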
|
For most service architectures, such as OSGi and Spring,
architecture-specific tools allow software developers and architects to
visualize otherwise obscure configurations hidden in the project files. Such
visualization tools are often used for documentation purposes and help
developers understand programs better than source code alone does. However,
such tools
often do not address project-specific peculiarities or do not exist at all for
less common architectures, requiring developers to use different visualization
and analysis tools within the same architecture. Furthermore, many generic
modeling tools and architecture visualization tools require their users to
create and maintain models manually.
We here propose a DSL-driven approach that allows software architects to
define and adapt their own project visualization tool. The approach, which we
refer to as Software Project Visualization (SPViz), uses two DSLs, one to
describe architectural elements and their relationships, and one to describe
how these should be visualized. We demonstrate how SPViz can then automatically
synthesize a customized, project-specific visualization tool that can adapt to
changes in the underlying project automatically.
We implemented our approach in an open-source library, also termed SPViz, and
discuss and analyze four different tools that follow this concept, including
open-source projects and projects from an industrial partner in the railway
domain.
|
Assuming Majorana neutrinos, we infer from oscillation data the expected
values of the parameters m_{nu_e} and m_{ee} probed by beta and 0nu2beta-decay
experiments. If neutrinos have a `normal hierarchy' we get the 90% CL ranges
|m_{ee}| = (0.7 - 4.6) meV, and discuss in which cases future experiments can
test this possibility. For `inverse hierarchy', we get |m_{ee}| = (12 - 57) meV
and m_{\nu_e} = (40 - 57) meV. The 0nu2beta data imply that almost degenerate
neutrinos are lighter than 1.05 h eV at 90% CL, competitive with the beta-decay
bound. We critically reanalyse the data that were recently used to claim an
evidence for 0nu2beta, and discuss their implications. Finally, we review the
predictions of flavour models for m_{ee} and theta_{13}.
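For reference, the two probed quantities are the standard combinations of the
neutrino masses $m_i$ and the PMNS mixing-matrix elements $U_{ei}$ (quoted for
context, not a result of the paper):

$$ m_{\nu_e}^2 = \sum_i |U_{ei}|^2\, m_i^2, \qquad
   |m_{ee}| = \Big|\sum_i U_{ei}^2\, m_i\Big|. $$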
|
Federated Deep Learning (FDL) is helping to realize distributed machine
learning in the Internet of Vehicles (IoV). However, FDL's global model needs
multiple clients to upload learning model parameters, and thus still suffers
from unavoidable communication overhead and data privacy risks. The recently
proposed Swarm Learning (SL) provides a decentralized machine-learning approach
uniting edge computing and blockchain-based coordination without the need for a
central coordinator. This paper proposes a Swarm-Federated Deep Learning
framework in the IoV system (IoV-SFDL) that integrates SL into the FDL
framework. The IoV-SFDL organizes vehicles to generate local SL models with
adjacent vehicles based on the blockchain empowered SL, then aggregates the
global FDL model among different SL groups with a proposed credibility weights
prediction algorithm. Extensive experimental results demonstrate that compared
with the baseline frameworks, the proposed IoV-SFDL framework achieves a 16.72%
reduction in edge-to-global communication overhead while improving model
performance by about 5.02% with the same number of training iterations.
|
We study classical solutions of a low energy effective theory of a string
theory with tachyons. With a certain ansatz, we obtain all possible solutions
which are weakly coupled and weakly curved. We find, in addition to the
interpolating solutions studied in our previous paper, black hole solutions and
solutions including the geometry of a capped cylinder. Some possible
implications of the solutions to closed string tachyon condensation are
discussed.
|
Strongly correlated electron systems at the border of magnetism are of active
current interest, particularly because the accompanying quantum criticality
provides a route towards both strange-metal non-Fermi liquid behavior and
unconventional superconductivity. Among the many important questions is whether
the magnetism acts simply as a source of fluctuations in the textbook Landau
framework, or instead serves as a proxy for some unexpected new physics. We put
into this general context the recent developments on quantum phase transitions
in antiferromagnetic heavy fermion metals. Among these are the extensive recent
theoretical and experimental studies on the physics of Kondo destruction in a
class of beyond-Landau quantum critical points. Also discussed are the
theoretical basis for a global phase diagram of antiferromagnetic heavy fermion
metals, and the recent surge of materials suitable for studying this phase
diagram. Furthermore, we address the generalization of this global phase
diagram to the case of Kondo insulators, and consider the future prospect to
study the interplay among Kondo coherence, magnetism and topological states.
Finally, we touch upon related issues beyond the antiferromagnetic settings,
arising in mixed valent, ferromagnetic, quadrupolar, or spin glass f-electron
systems, as well as some general issues on emergent phases near quantum
critical points.
|
In this work we present a mimetic spectral element discretization for the 2D
incompressible Navier-Stokes equations that in the limit of vanishing
dissipation exactly preserves mass, kinetic energy, enstrophy and total
vorticity on unstructured grids. The essential ingredients to achieve this are:
(i) a velocity-vorticity formulation in rotational form, (ii) a sequence of
function spaces capable of exactly satisfying the divergence free nature of the
velocity field, and (iii) a conserving time integrator. Proofs for the exact
discrete conservation properties are presented together with numerical test
cases on highly irregular grids.
|
In this paper, for a henselian valued field $(K,v)$ of arbitrary rank and an
extension $w$ of $v$ to $K(X),$ we use abstract key polynomials for $w$ to
obtain distinguished pairs and saturated distinguished chains.
|
We try to retrieve power spectra with certainty up to the highest spatial
frequencies allowed by current instrumentation. For this, we use a 2D
inversion code that is able to recover information up to the instrumental
diffraction limit. The retrieved power spectra have shallow slopes extending
down to much smaller scales than found before, and they do not seem to follow
any power law. The observed slopes at subgranular scales agree with those
obtained from recent local dynamo simulations. Small differences are found for
the vertical component of the kinetic energy, suggesting that the observations
suffer from an instrumental effect that is not taken into account.
|
In the sixth chapter of his notebooks Ramanujan introduced a method of
summing divergent series which assigns to the series the value of the
associated Euler-MacLaurin constant that arises by applying the Euler-MacLaurin
summation formula to the partial sums of the series. This method is now called
the Ramanujan summation process. In this paper we calculate the Ramanujan sum
of the exponential generating functions $\sum_{n\geq 1}\log n e^{nz}$ and
$\sum_{n\geq 1}H_n^{(j)} e^{-nz}$ where $H_n^{(j)}=\sum_{m=1}^n \frac{1}{m^j}$.
We find a surprising relation between the two sums when $j=1$ from which
follows a formula that connects the derivatives of the Riemann zeta function
at the negative integers to the Ramanujan summation of the divergent Euler sums
$\sum_{n\ge 1} n^kH_n, k \ge 0$, where $H_n= H_n^{(1)}$. Further, we express
our results on the Ramanujan summation in terms of the classical summation
process called the Borel sum.
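For orientation, the constant in question arises by applying the
Euler-MacLaurin formula to the partial sums (standard form, quoted for
context, with $B_{2r}$ the Bernoulli numbers):

$$ \sum_{k=1}^{n} f(k) \sim C_f + \int_1^n f(x)\,\mathrm{d}x + \frac{f(n)}{2}
   + \sum_{r\ge 1}\frac{B_{2r}}{(2r)!}\, f^{(2r-1)}(n), $$

and the Ramanujan summation process assigns the constant $C_f$ as the value of
the (possibly divergent) series $\sum_{n\ge 1} f(n)$.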
|
String-net condensation can give rise to non-Abelian anyons whereas loop
condensation usually gives rise to Abelian anyons. It has been proposed that
generalized quantum loop gases with non-orthogonal inner products can produce
non-Abelian anyons. We detail an exact mapping between the string-net and the
generalized loop models and explain how the non-orthogonal products arise. We
also introduce a loop model of double-stranded nets where quantum loops with an
orthogonal inner product and local interactions support non-Abelian Fibonacci
anyons. Finally we emphasize the origin of the sign problem in such systems and
its consequences on the complexity of their ground state wave functions.
|
In Lorentz-violating electrodynamics a steady current (and similarly a static
charge) generates both static magnetic and electric fields. These induced
fields, acting on interfering particles, change the interference pattern. We
find that particle interference experiments are sensitive to small Lorentz
violating effects, and thus they can be used to improve current bounds on some
Lorentz-violating parameters.
|
We present a new procedure using on-shell recursion to determine coefficients
of integral functions appearing in one-loop scattering amplitudes of gauge
theories, including QCD. With this procedure, coefficients of integrals,
including bubbles and triangles, can be determined without resorting to
integration. We give criteria for avoiding spurious singularities and boundary
terms that would invalidate the recursion. As an example where the criteria are
satisfied, we obtain all cut-constructible contributions to the one-loop
n-gluon scattering amplitude, A_n^{oneloop}(...--+++...), with split-helicity
from an N=1 chiral multiplet and from a complex scalar. Using the
supersymmetric decomposition, these are ingredients in the construction of QCD
amplitudes with the same helicities. This method requires prior knowledge of
amplitudes with sufficiently large numbers of legs as input. In many cases,
these are already known in compact forms from the unitarity method.
|
A simplified Heisenberg spin model is studied in order to examine the idea of
decoherence in closed quantum systems. For this purpose, we present a
quantifiable definition of quantum coherence $\Xi$, and discuss in some detail
a general coherence theory and its elementary results. As expected, decoherence
is understood as a statistical process that is caused by the dynamics of the
system, similar to the growth of entropy. It appears that coherence is an
important measure that helps to understand quantum properties of a system,
e.g., the decoherence time can be derived from the coherence function $\Xi(t)$,
but not from the entropy dynamics. Moreover, the concept of decoherence time is
applicable in closed and finite systems. However, in most cases, the decay of
off-diagonal elements differs from the usual $\exp(-t/\tau_{\rm d})$ behaviour.
For concreteness, we report the form of decoherence time $\tau_{\rm d}$ in a
finite Heisenberg model with respect to the number of particles $N$, density
$n_{\rho}$, spatial dimension $D$ and $\epsilon$ in a $\eta/r^{\epsilon}$-type
of potential.
|
Motivated by the fact that humans like some level of unpredictability or
novelty, and might therefore get quickly bored when interacting with a
stationary policy, we introduce a novel non-stationary bandit problem, where
the expected reward of an arm is fully determined by the time elapsed since the
arm last took part in a switch of actions. Our model generalizes previous
notions of delay-dependent rewards, and also relaxes most assumptions on the
reward function. This enables the modeling of phenomena such as progressive
satiation and periodic behaviours. Building upon the Combinatorial Semi-Bandits
(CSB) framework, we design an algorithm and prove a bound on its regret with
respect to the optimal non-stationary policy (which is NP-hard to compute).
Similarly to previous works, our regret analysis is based on defining and
solving an appropriate trade-off between approximation and estimation.
Preliminary experiments confirm the superiority of our algorithm over both the
oracle greedy approach and a vanilla CSB solver.
|
We present results of the search for supersolid 4He using low-frequency,
low-level mechanical excitation of a solid sample grown and cooled at fixed
volume. We have observed low frequency non-linear resonances that constitute
anomalous features. These features, which appear below about 0.8 K, are absent
in 3He. The frequency, the amplitude at which the nonlinearity sets in, and the
upper temperature limit of existence of these resonances depend markedly on the
sample history.
|
We analyze nonequilibrium fluctuations of the averaging process on $\mathbb
T_\varepsilon^d$, a continuous degenerate Gibbs sampler running over the edges
of the discrete $d$-dimensional torus. We show that, if we start from a smooth
deterministic non-flat interface, recenter, blow-up by a non-standard
CLT-scaling factor $\theta_\varepsilon=\varepsilon^{-(d/2+1)}$, and rescale
diffusively, Gaussian fluctuations emerge in the limit $\varepsilon\to 0$.
These fluctuations are purely dynamical, zero at times $t=0$ and $t=\infty$,
and non-trivial for $t\in (0,\infty)$. We fully determine the correlation
matrix of the limiting noise, non-diagonal as soon as $d\ge 2$. The main
technical challenge in this stochastic homogenization procedure lies in an LLN
for a weighted space-time average of squared discrete gradients. We accomplish
this through a Poincar\'e inequality with respect to the underlying randomness
of the edge updates, a tool from Malliavin calculus in Poisson space. This
inequality, combined with sharp second-moment estimates for the gradients, yields
quantitative variance bounds without prior knowledge of the limiting mean. Our
method avoids higher (e.g., fourth) moment bounds, which seem inaccessible with
the present techniques.
|
We discuss how renormalisation group equations can be consistently formulated
using the algebraic renormalisation framework, in the context of a
dimensionally-renormalised chiral field theory in the BMHV scheme, where the
BRST symmetry, originally broken at the quantum level, is restored via finite
counterterms. We compare it with the more standard multiplicative
renormalisation approach, whose application would be more cumbersome in this
setting. Both procedures are applied and compared on the example of a massless
chiral right-handed QED model, and the beta function and anomalous dimensions
are evaluated up to two-loop order.
|
We investigate the lowest energy configurations for string - antistring pairs
at fixed separations by numerically minimizing the energy. We show that for
separations smaller than a critical value, a region of false vacuum develops in
the middle due to large gradient energy density. Consequently, well defined
string - antistring pairs do not exist for such separations. We present an
example of vortex - antivortex production by vacuum bubbles where this effect
seems to play a dynamical role in the annihilation of the pair. We also study
the dependence of the energy of a string-antistring pair on the separation
and find deviations from a simple logarithmic dependence for small separations.
|
We show that transparent dielectrics with strong optical anisotropy support a
new class of electromagnetic waves that combine the properties of propagating
and evanescent fields. These "ghost waves" are created in tangent bifurcations
that "annihilate" pairs of positive- and negative-index modes, and represent
the optical analogue of the "ghost orbits" in the quantum theory of
non-integrable dynamical systems. Similarly to the regular evanescent fields,
ghost waves support high transverse wavenumbers, but in addition to the
exponential decay show oscillatory behavior in the direction of propagation.
Ghost waves can be resonantly coupled to the incident evanescent waves, which
then grow exponentially through the anisotropic media - as in the case of
negative index materials. As ghost waves are supported by transparent dielectric
media, they are free from the "curse" of material loss that is inherent to
conventional negative index composites.
|
We consider a model of heat conduction which consists of a finite nonlinear
chain coupled to two heat reservoirs at different temperatures. We study the
low temperature asymptotic behavior of the invariant measure. We show that, in
this limit, the invariant measure is characterized by a variational principle.
We relate the heat flow to the variational principle. The main technical
ingredient is an extension of Freidlin-Wentzell theory to a class of degenerate
diffusions.
|
Mutual information $I(X;Y)$ is a useful quantity in information theory for
estimating how much information the random variable $Y$ holds about the random
variable $X$. One way to define the mutual information is by comparing the
joint distribution of $X$ and $Y$ with the product of the marginals through the
KL-divergence. If the two distributions are close to each other there will be
almost no leakage of $X$ from $Y$ since the two variables are close to being
independent. In the discrete setting the mutual information has the nice
interpretation of how many bits $Y$ reveals about $X$ and if $I(X;Y)=H(X)$ (the
Shannon entropy of $X$) then $X$ is completely revealed. However, in the
continuous case we do not have the same reasoning. For instance the mutual
information can be infinite in the continuous case. This fact enables us to try
different metrics or divergences to define the mutual information. In this
paper, we evaluate different metrics and divergences, such as the
Kullback-Leibler (KL) divergence, Wasserstein distance, Jensen-Shannon
divergence, and total variation distance, as alternatives to the mutual
information in the continuous case. We deploy different methods to estimate or
bound these metrics and divergences and evaluate their performance.
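A toy numerical comparison in this spirit (our sketch; the binning, sample
size, and plug-in estimators are illustrative choices): for a bivariate
Gaussian with correlation $\rho$, the exact mutual information is
$-\tfrac12\log(1-\rho^2)$, which we compare against binned estimates of the
KL, total variation, and Jensen-Shannon discrepancies between the joint and
the product of marginals (a 2D Wasserstein distance needs an
optimal-transport solver and is omitted here).

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(0)
    rho, n, bins = 0.8, 200_000, 30
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    joint, xe, ye = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()                       # empirical joint distribution
    q = np.outer(p.sum(axis=1), p.sum(axis=0))    # product of the marginals

    mask = p > 0
    kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))         # plug-in MI estimate
    tv = 0.5 * np.abs(p - q).sum()                           # total variation
    js = jensenshannon(p.ravel(), q.ravel(), base=np.e) ** 2  # JS divergence

    print(f"true MI      {-0.5 * np.log(1 - rho**2):.3f}")
    print(f"KL (MI est.) {kl:.3f},  TV {tv:.3f},  JS {js:.3f}")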
|
Motivated by recent observational constraints on dust reprocessed emission in
star forming galaxies at $z\sim 6$ and above, we use the very large cosmological
hydrodynamical simulation BlueTides to explore predictions for the amount of
dust-obscured star formation in the early Universe ($z>8$). BlueTides matches
current observational constraints on both the UV luminosity function and galaxy
stellar mass function and predicts that approximately $90\%$ of the star
formation in high-mass ($M_{*}>10^{10}\,{\rm M_{\odot}}$) galaxies at $z=8$ is
already obscured by dust. The relationship between dust attenuation and stellar
mass predicted by BlueTides is consistent with that observed at lower
redshift. However, observations of several individual objects at $z>6$ are
discrepant with the predictions, though their uncertainties may
have been underestimated. We find that the predicted surface density of $z\ge
8$ sub-mm sources is below that accessible to current Herschel, SCUBA-2,
and ALMA sub-mm surveys. However, as ALMA continues to accrue additional
surface area the population of $z>8$ dust-obscured galaxies may become
accessible in the near future.
|
We present a finite blocklength performance bound for a DNA storage channel
with insertions, deletions, and substitutions. The considered bound -- the
dependency testing (DT) bound, introduced by Polyanskiy et al. in 2010 --
provides an upper bound on the achievable frame error probability and can be
used to benchmark coding schemes in the practical short-to-medium blocklength
regime. In particular, we consider a concatenated coding scheme where an inner
synchronization code deals with insertions and deletions and the outer code
corrects the remaining (mostly substitution) errors. The bound depends on the
inner synchronization code and can thus guide its choice. We then consider
low-density parity-check codes for the outer code, which we optimize based on
extrinsic information transfer charts. Our optimized coding schemes achieve a
normalized rate of $88\%$ to $96\%$ with respect to the DT bound for code
lengths up to $2000$ DNA symbols for a frame error probability of $10^{-3}$ and
code rate 1/2.
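For reference, the DT bound of Polyanskiy et al. takes the following form (stated here schematically for $M$ equiprobable messages; the dependence on the inner synchronization code mentioned above enters through the channel law used in the bound):

$$\epsilon \le \mathbb{E}\left[\exp\left(-\left[\imath(X;Y)-\log\frac{M-1}{2}\right]^{+}\right)\right], \qquad \imath(x;y)=\log\frac{dP_{Y\mid X=x}}{dP_{Y}}(y),$$

where $[z]^{+}=\max(z,0)$ and $\imath$ is the information density, so evaluating the bound reduces to estimating the distribution of $\imath(X;Y)$ over the channel.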
|
In learning-based functionality stealing, the attacker is trying to build a
local model based on the victim's outputs. The attacker has to make choices
regarding the local model's architecture, optimization method and, specifically
for NLP models, subword vocabulary, such as BPE. On the machine translation
task, we explore (1) whether the choice of the vocabulary plays a role in model
stealing scenarios and (2) if it is possible to extract the victim's
vocabulary. We find that the vocabulary itself does not have a large effect on
the local model's performance. Given gray-box model access, it is possible to
recover the victim's vocabulary by collecting the detokenized subwords that
appear in its outputs. The finding that the vocabulary choice has only a
minimal effect is important more broadly for black-box knowledge distillation.
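A minimal sketch of the extraction step under gray-box access, assuming only that the victim can be queried and that subword boundaries are visible in its raw outputs (the `victim_translate` callable and the query set are placeholders for illustration):

```python
# Gray-box vocabulary extraction: accumulate every subword piece observed
# in the victim's outputs. Assumes the outputs expose subword boundaries,
# e.g. BPE pieces separated by whitespace before detokenization.
def collect_vocabulary(victim_translate, queries):
    vocab = set()
    for query in queries:
        output = victim_translate(query)   # victim's raw (subword-level) output
        vocab.update(output.split())       # record each observed piece
    return vocab
```

Coverage of the victim's vocabulary then grows with the number and diversity of queries.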
|
Quantum algorithms profit from the interference of quantum states in an
exponentially large Hilbert space and the fact that unitary transformations on
that Hilbert space can be broken down to universal gates that act only on one
or two qubits at the same time. The former aspect renders the direct classical
simulation of quantum algorithms difficult. Here we introduce higher-order
partial derivatives of a probability distribution of particle positions as a
new object that shares these basic properties of quantum mechanical states
needed for a quantum algorithm. Discretization of the positions allows one to
represent the quantum mechanical state of $n_\text{bit}$ qubits by
$2(n_\text{bit}+1)$ classical stochastic bits. Based on this, we demonstrate
many-particle interference and representation of pure entangled quantum states
via derivatives of probability distributions and find the universal set of
stochastic maps that correspond to the quantum gates in a universal gate set.
We prove that the propagation via the stochastic map built from those universal
stochastic maps reproduces up to a prefactor exactly the evolution of the
quantum mechanical state with the corresponding quantum algorithm, leading to
an automated translation of a quantum algorithm to a stochastic classical
algorithm. We implement several well-known quantum algorithms, analyse the
scaling of the needed number of realizations with the number of qubits, and
highlight the role of destructive interference for the cost of the emulation.
Foundational questions raised by the new representation of a quantum state are
discussed.
|
We investigate the possibility that the process of $\rho^{0}$-meson
photoproduction on the proton, $\gamma+p\to p+\rho^{0}$, in the near-threshold
region $E_{\gamma}< 2$ GeV, can be described in the framework of a model with
$\pi$-, $\sigma$- and N-exchanges. This suggestion is based on a study of the
t-dependence of differential cross section, $d\sigma(\gamma p \to p
\rho^{0})/dt$, which has been measured by SAPHIR Collaboration. We find that
the suggested model provides a good description of the experimental data with
new values of $\rho NN$-coupling constants in the region of the time-like
$\rho^{0}$-meson momentum. Our results suggest that such a model can be
considered as a suitable nonresonant background mechanism for the future
discussion of the possible role of nucleon resonance contributions. Our predictions
for $\rho^{0}$-meson photoproduction on neutron target and for beam asymmetry
on both proton and neutron targets are presented.
|
With the introduction of educational robotics (ER) and computational thinking
(CT) in classrooms, there is a rising need for operational models that help
ensure that CT skills are adequately developed. One such model is the Creative
Computational Problem Solving Model (CCPS) which can be employed to improve the
design of ER learning activities. Following the first validation with students,
the objective of the present study is to validate the model with teachers,
specifically considering how they may employ the model in their own practices.
The Utility, Usability and Acceptability framework was leveraged for the
evaluation through a survey analysis with 334 teachers. Teachers found the CCPS
model useful to foster transversal skills but could not recognise the impact of
specific intervention methods on CT-related cognitive processes. Similarly,
teachers perceived the model to be usable for activity design and intervention,
although felt unsure about how to use it to assess student learning and adapt
their teaching accordingly. Finally, the teachers accepted the model, as shown
by their intent to replicate the activity in their classrooms, but were less
willing to modify it or create their own activities, suggesting that they need
time to appropriate the model and underlying tenets.
|
The binary star HD 45166 has been observed since 1922 but its orbital period
has not yet been found. It is considered a peculiar Wolf-Rayet star, and its
classification has varied over the years. High-resolution spectroscopic
observations show that the spectrum, in emission and in absorption, is quite
rich. The emission lines show a great diversity of widths and profiles. The
hydrogen and helium lines are systematically broader than the CNO lines.
Assuming that HD 45166 is a double-line spectroscopic binary, it presents an
orbital period of P = 1.596 days, with an eccentricity of e = 0.18. In
addition, a search for periodicity using standard techniques reveals that the
emission lines present at least two other periods, of 5 hours and of 15 hours.
The secondary star has a spectral type of B7 V and, therefore, should have a
mass of about 4.8 solar masses. Given the radial velocity amplitudes, we
determined the mass of the hot (primary) star as being 4.2 solar masses and the
inclination angle of the system, i = 0.77 degrees. As the eccentricity of the
orbit is non zero, the Roche lobes increase and decrease as a function of the
orbital phase. At periastron, the secondary star fills its Roche lobe. The
distance to the star has been re-determined as d = 1.3 kpc and a color excess
of E(B-V)=0.155 has been derived. This implies an absolute B magnitude of -0.6
for the primary star and -0.7 for the B7 star. We suggest that the discrete
absorption components (DACs) observed in the ultraviolet with a periodicity
similar to the orbital period may be induced by periastron events.
|
We consider electroweak singlet dark matter with a mass comparable to the
Higgs mass. The singlet is assumed to couple to standard matter through a
perturbative coupling to the Higgs particle. The annihilation of a singlet with
a mass comparable to the Higgs mass is dominated by proximity to the W, Z and
Higgs peaks in the annihilation cross section. We find that the continuous
photon spectrum from annihilation of a perturbatively coupled singlet in the
galactic halo can reach a level of several per mil of the EGRET diffuse
gamma-ray flux.
|
In the literature on projection-based nonlinear model order reduction for
fluid dynamics problems, it is often claimed that due to modal truncation, a
projection-based reduced-order model (PROM) does not resolve the dissipative
regime of the turbulent energy cascade and therefore is numerically unstable.
Efforts at addressing this claim have ranged from attempting to model the
effects of the truncated modes to enriching the classical subspace of
approximation in order to account for the truncated phenomena. This paper
challenges this claim. Exploring the relationship between projection-based
model order reduction and semi-discretization and using numerical evidence from
three relevant flow problems, it argues in an orderly manner that the real
culprit behind most if not all reported numerical instabilities of PROMs for
turbulence and convection-dominated turbulent flow problems is the Galerkin
framework that has been used for constructing the PROMs. The paper also shows
that, alternatively, a Petrov-Galerkin framework can be used to construct
PROMs for convection-dominated laminar as well as turbulent flow problems
that are numerically stable and accurate, without resorting to additional
closure models or tailoring of the subspace of approximation. It
also shows that such alternative PROMs deliver significant speedup factors.
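The contrast between the two frameworks can be seen already on a steady linear full-order model; the sketch below, with a random operator and a random orthonormal basis standing in for a POD basis, is only meant to illustrate that the Petrov-Galerkin (least-squares) projection minimizes the full-order residual while the Galerkin projection generally does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 10
A = rng.normal(size=(n, n)) - 5.0 * np.eye(n)   # non-symmetric full-order operator
b = rng.normal(size=n)
V, _ = np.linalg.qr(rng.normal(size=(n, k)))    # trial basis (POD stand-in)

# Galerkin PROM: test space equals trial space, residual orthogonal to V.
y_gal = np.linalg.solve(V.T @ A @ V, V.T @ b)

# Petrov-Galerkin (LSPG-style) PROM: test basis W = A V, i.e. the reduced
# coordinates minimize the full-order residual in the least-squares sense.
y_pg = np.linalg.lstsq(A @ V, b, rcond=None)[0]

for y in (y_gal, y_pg):
    print(np.linalg.norm(A @ (V @ y) - b))      # PG residual is never larger
```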
|
We propose a natural $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded generalisation
of $d=2$, $\mathcal{N}=(1,1)$ supersymmetry and construct a
$\mathbb{Z}_2^2$-space realisation thereof. Due to the grading, the
supercharges close with respect to, in the classical language, a commutator
rather than an anticommutator. This is then used to build classical (linear and
non-linear) sigma models that exhibit this novel supersymmetry via mimicking
standard superspace methods. The fields in our models are bosons, right-handed
and left-handed Majorana-Weyl spinors, and exotic bosons. The bosons commute
with all the fields, the spinors belong to different sectors that cross-commute
rather than anticommute, while the exotic bosons anticommute with the spinors.
As a particular example of one of the models, we present a `double-graded'
version of supersymmetric sine-Gordon theory.
|
Quasi-equilibrium states that can be prepared in solids through Nuclear
Magnetic Resonance (NMR) techniques are out-of-equilibrium states that slowly
relax towards thermodynamic equilibrium with the lattice. In this work, we use
the quantum discord dynamics as a witness of the quantum correlation in this
kind of state. The studied system is a dipole interacting spin pair whose
initial state is prepared with the NMR Jeener-Broekaert pulse sequence,
starting from equilibrium at high temperature and high external magnetic field.
It then evolves as an open quantum system within two different dynamic
scenarios: adiabatic decoherence driven by the coupling of the pairs to a
common phonon field, described within a non-Markovian approach; and
spin-lattice relaxation represented by a Markovian master equation, and driven
by thermal fluctuations. In this way, the studied model is endowed with the
dynamics of a realistic solid sample. The quantum discord rapidly increases
during the preparation of the initial state, escalating several orders of
magnitude compared with thermal equilibrium at room temperature. Despite the
vanishing of coherences during decoherence, the quantum discord oscillates
around this high value and undergoes a minor attenuation, holding the same
order of magnitude as the initial state. Finally, the quantum discord
dissipates within a time scale shorter than but comparable to spin-lattice
relaxation.
|
Herbig Ae stars are young A-type stars in the pre-main sequence evolutionary
phase with masses of ~1.5-3 M_o. They show rather intense surface activity
(Dunkin et al. 1997) and infrared excess related to the presence of
circumstellar disks. Because of their youth, primordial magnetic fields
inherited from the parent molecular cloud may be expected, but no direct
evidence for the presence of magnetic fields on their surface, except in one
case (Donati et al. 1997), has been found until now. Here we report
observations of optical circular polarization with FORS 1 at the VLT in the
three Herbig Ae stars HD 139614, HD 144432 and HD 144668. A definite
longitudinal magnetic field at the 4.8 sigma level, <B_z>=-450+-93 G, has been
detected in the Herbig Ae star HD 139614. This is the largest magnetic field
ever diagnosed for a Herbig Ae star. A hint of a weak magnetic field is found
in the other two Herbig Ae stars, HD 144432 and HD 144668, for which magnetic
fields are measured at the ~1.6 sigma and ~2.5 sigma levels, respectively.
Further, we report the presence of circular polarization signatures in the Ca
II K line in the V Stokes spectra of HD 139614 and HD 144432, which appear
unresolved at the low spectral resolution achievable with FORS 1. We suggest
that models involving accretion of matter from the disk to the star along a
global stellar magnetic field of a specific geometry can account for the
observed Zeeman signatures.
|
One-shot neural architecture search (NAS) has played a crucial role in making
NAS methods computationally feasible in practice. Nevertheless, there is still
a lack of understanding of how these weight-sharing algorithms actually work,
owing to the many factors controlling the dynamics of the process. In order to allow
a scientific study of these components, we introduce a general framework for
one-shot NAS that can be instantiated to many recently-introduced variants and
introduce a general benchmarking framework that draws on the recent large-scale
tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS
methods. To showcase the framework, we compare several state-of-the-art
one-shot NAS methods, examine how sensitive they are to their hyperparameters
and how they can be improved by tuning their hyperparameters, and compare their
performance to that of blackbox optimizers for NAS-Bench-101.
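The benchmarking idea can be sketched in a few lines: because the tabular benchmark makes every architecture evaluation a cheap lookup, anytime comparisons between one-shot methods and blackbox baselines become trivial. The sampler and lookup below are placeholders for the NAS-Bench-101 search space and API:

```python
def random_search(sample_architecture, query_benchmark, budget=100):
    # Blackbox baseline: sample architectures and query their tabular
    # (pre-computed) validation accuracy; no training is performed.
    best_arch, best_acc, anytime_curve = None, float("-inf"), []
    for _ in range(budget):
        arch = sample_architecture()          # draw from the search space
        acc = query_benchmark(arch)           # cheap tabular lookup
        if acc > best_acc:
            best_arch, best_acc = arch, acc
        anytime_curve.append(best_acc)        # incumbent at each step
    return best_arch, anytime_curve
```

One-shot methods plug into the same loop by replacing the sampler with architectures proposed from the shared supernetwork, so their anytime curves are directly comparable.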
|
We discuss strategies to make inferences on the thermal relic abundance of a
Weakly Interacting Massive Particle (WIMP) when the same effective
dimension-six operator that explains an experimental excess in direct detection
is assumed to drive decoupling at freeze-out, and apply them to the
proton-philic Spin-dependent Inelastic Dark Matter (pSIDM) scenario, a
phenomenological set-up containing two states $\chi_1$ and $\chi_2$ with
$m_{\chi_2}>m_{\chi_1}$ that we have shown in a previous paper to explain the
DAMA effect in compliance with the constraints from other detectors. We update
experimental constraints on pSIDM, extend the analysis to the most general
spin-dependent momentum-dependent interactions allowed by non-relativistic
Effective Field Theory (EFT), and consider for the WIMP velocity distribution
in our Galaxy both a halo-independent approach and a standard Maxwellian. The
problem of calculating the relic abundance by using direct detection data to
fix the model parameters is affected by a strong sensitivity to $f(v)$ and by
the degeneracy between the WIMP local density and the WIMP-nucleon scattering
cross section. As a consequence, a DM direct detection experiment is not
directly sensitive to the physical cut-off scale of the EFT, but only to some
dimensional combination that does not depend on the actual value of the relic
abundance. However, such degeneracy can be used to develop a consistency test
on the possibility that the WIMP is a thermal relic in the first place. When we
apply it to the pSIDM scenario we find that only a WIMP with a standard
spin-dependent interaction ${\cal O}_{spin}$ with quarks can be a thermal
relic, for a galactic velocity distribution that departs from a Maxwellian.
However all the $\chi_2$ states must have already decayed today, and this
requires some additional mechanism besides that provided by the ${\cal
O}_{spin}$ operator.
|
We tackle the problem of computing counterfactual explanations -- minimal
changes to the features that flip an undesirable model prediction. We propose a
solution to this problem for linear Support Vector Machine (SVM) models.
Moreover, we introduce a way to account for weighted actions that allow for
more changes in certain features than others. In particular, we show how to
find counterfactual explanations with the purpose of increasing model
interpretability. These explanations are valid, change only actionable
features, are close to the data distribution, sparse, and take into account
correlations between features. We cast this as a mixed integer programming
optimization problem. Additionally, we introduce two novel scale-invariant cost
functions for assessing the quality of counterfactual explanations and use them
to evaluate the quality of our approach with a real medical dataset. Finally,
we build a support vector machine model to predict whether law students will
pass the Bar exam using protected features, and use our algorithms to uncover
the inherent biases of the SVM.
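For the plain linear case the minimal counterfactual has a closed form, which the sketch below implements; the diagonal cost weighting is our reading of the paper's weighted actions, and the small margin that pushes the point just past the boundary is an implementation detail:

```python
import numpy as np

def linear_svm_counterfactual(w, b, x, weights=None, margin=1e-6):
    # Minimize ||W d|| subject to w.(x + d) + b = 0 for the linear model
    # f(x) = sign(w.x + b); W is a diagonal action-cost matrix (larger
    # weight = harder-to-change feature). Closed form: d ~ -f(x) W^-2 w.
    w, x = np.asarray(w, float), np.asarray(x, float)
    winv2 = np.ones_like(w) if weights is None else 1.0 / np.square(weights)
    f = w @ x + b
    d = -(f + np.sign(f) * margin) * (winv2 * w) / (w @ (winv2 * w))
    return x + d

x_cf = linear_svm_counterfactual(w=[2.0, -1.0], b=0.5, x=[1.0, 1.0])
print(x_cf)   # lands just across the decision boundary
```

Sparsity and data-distribution constraints do not admit such a closed form; they are what motivates the mixed integer programming formulation.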
|
Recent trackers adopt the Transformer to combine or replace the widely used
ResNet as their new backbone network. Although these trackers work well in
regular scenarios, they simply flatten the 2D features into a sequence
to better match the Transformer. We believe these operations ignore the spatial
prior of the target object, which may lead to sub-optimal results. In
addition, many works demonstrate that self-attention is actually a low-pass
filter, independent of the input features or keys/queries. That is to say,
it may suppress the high-frequency component of the input features and preserve
or even amplify the low-frequency information. To handle these issues, in this
paper, we propose a unified Spatial-Frequency Transformer that models the
Gaussian spatial Prior and High-frequency emphasis Attention (GPHA)
simultaneously. To be specific, Gaussian spatial prior is generated using dual
Multi-Layer Perceptrons (MLPs) and injected into the similarity matrix produced
by multiplying Query and Key features in self-attention. The output will be fed
into a Softmax layer and then decomposed into two components, i.e., the direct
signal and high-frequency signal. The low- and high-pass branches are rescaled
and combined to achieve all-pass, therefore, the high-frequency features will
be protected well in stacked self-attention layers. We further integrate the
Spatial-Frequency Transformer into the Siamese tracking framework and propose a
novel tracking algorithm, termed SFTransT. The cross-scale fusion based
SwinTransformer is adopted as the backbone, and also a multi-head
cross-attention module is used to boost the interaction between search and
template features. The output will be fed into the tracking head for target
localization. Extensive experiments on both short-term and long-term tracking
benchmarks all demonstrate the effectiveness of our proposed framework.
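A stripped-down sketch of the attention modification, with NumPy standing in for the actual network: the prior here comes from fixed centers rather than the dual MLPs, and the DC/high-frequency split of the softmax output follows our reading of the abstract:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

def gpha_attention(Q, K, V, centers, sigma, w_low=1.0, w_high=1.5):
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                    # standard similarity matrix
    pos = np.arange(K.shape[0])[None, :]             # key positions
    prior = np.exp(-0.5 * ((pos - centers[:, None]) / sigma) ** 2)
    attn = softmax(logits + np.log(prior + 1e-9))    # inject Gaussian prior
    low = attn.mean(axis=-1, keepdims=True)          # direct (DC) component
    high = attn - low                                # high-frequency component
    return (w_low * low + w_high * high) @ V         # rescale and recombine

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 16)) for _ in range(3))
centers = np.arange(6, dtype=float)                  # expected target positions
print(gpha_attention(Q, K, V, centers, sigma=2.0).shape)
```

With `w_high > w_low`, the high-frequency component of the attention map is amplified rather than suppressed across stacked layers.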
|
Fingerprint is widely used in a variety of applications. Security measures
have to be taken to protect the privacy of fingerprint data. Cancelable
biometrics is proposed as an effective mechanism of using and protecting
biometrics. In this paper, we propose a new method of constructing cancelable
fingerprint templates by combining a real template with a synthetic template.
Specifically, each user is given one synthetic minutia template generated with
random number generator. Every minutia point from the real template is
individually thrown into the synthetic template, from which its k-nearest
neighbors are found. The verification template is constructed by combining an
arbitrary set of the k-nearest neighbors. To prove the validity of the scheme,
testing is carried out on three databases. The results show that the
constructed templates satisfy the requirements of cancelable biometrics.
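A minimal sketch of the construction on plain (x, y) minutia coordinates (orientation attributes and the exact rule for combining the k neighbors are simplified away):

```python
import numpy as np

def cancelable_template(real_minutiae, synthetic_minutiae, k=3, rng=None):
    # For each real minutia, find its k nearest synthetic minutiae and put
    # an arbitrarily chosen one of them into the verification template.
    rng = rng or np.random.default_rng()
    template = []
    for m in real_minutiae:
        dists = np.linalg.norm(synthetic_minutiae - m, axis=1)
        knn = np.argsort(dists)[:k]                 # k-nearest synthetic points
        template.append(synthetic_minutiae[rng.choice(knn)])
    return np.asarray(template)

rng = np.random.default_rng(7)
real = rng.uniform(0, 500, size=(30, 2))    # real minutiae (x, y), in pixels
synth = rng.uniform(0, 500, size=(60, 2))   # per-user random synthetic template
print(cancelable_template(real, synth, k=3, rng=rng).shape)
```

Revocation is immediate: issuing a new synthetic template yields an entirely different verification template from the same finger.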
|
A semi-classical nonlinear-dynamics model is used to show that the results of
standard quantum spin analysis can be reproduced. The model includes a
classically driven nonlinear differential equation with dissipation and a
semi-classical interpretation of the torque on a spin magnetic moment in the
presence of a realistic magnetic field, which gives rise to two equilibrium
positions. The highly complicated driven nonlinear dissipative semi-classical
model is used to introduce chaos, which is necessary to produce the correct
statistical quantum results. The resemblance between this semi-classical spin
model and the thoroughly studied classical driven-damped nonlinear pendulum is
shown and discussed.
|
We show that warped de Sitter compactifications are possible under
certain conditions in D-dimensional gravitational theory coupled to a dilaton,
a form field strength, and a cosmological constant. We find that the solutions
of field equations give de Sitter spacetime with the warped structure, and
discuss cosmological models directly obtained from these solutions. We also
construct a cosmological model in the lower-dimensional effective theory. If
there is a field strength having non-vanishing components along the internal
space, the moduli can be fixed at the minimum of the effective potential where
a de Sitter vacuum can be obtained.
|
We use parallax data from the Gaia second data release (GDR2), combined with
parallax data based on Hipparcos and HST data, to derive the
period-luminosity-metallicity (PLZ) relation for Galactic classical cepheids
(CCs) in the V, K, and Wesenheit WVK bands. An initial sample of 452 CCs is
extracted from the literature with spectroscopically derived iron abundances.
Reddening values, pulsation periods, and mean magnitudes are taken from the
literature.
Based on nine CCs with a goodness-of-fit (GOF) statistic <8 and with an
accurate non-Gaia parallax, a parallax zero-point offset of -0.049 +- 0.018 mas
is derived. Selecting a GOF statistic <8 removes about 40\% of the sample, most
likely owing to binarity. Excluding first-overtone and multi-mode
cepheids and applying some other criteria reduces the sample to about 200
stars.
The derived PL(Z) relations depend strongly on the parallax zero-point
offset. The slope of the PL relation is found to be different from the
relations in the LMC at the 3 sigma level. Fixing the slope to the value found
in the LMC leads to a distance modulus (DM) to the LMC of order 18.7 mag,
larger than the canonical distance. The canonical DM of around 18.5 mag would
require a parallax zero-point offset of order $-0.1$ mas.
Given the strong correlation between zero point, period and metallicity
dependence of the PL relation, and the parallax zero-point offset, there is no
evidence for a metallicity term in the PLZ relation.
The GDR2 release does not allow us to improve on the current distance scale
based on CCs. The value of and the uncertainty on the parallax zero-point
offset leads to uncertainties of order 0.15 mag on the distance scale. The
parallax zero-point offset will need to be known at a level of 3 micro-arcsec or
better to have a 0.01 mag or smaller effect on the zero point of the PL
relation and the DM to the LMC.
|
The factors contributing to the persistence and stability of life are
fundamental for understanding complex living systems. Organisms are commonly
challenged by harsh and fluctuating environments that are suboptimal for growth
and reproduction, which can lead to extinction. Species often contend with
unfavorable and noisy conditions by entering a reversible state of reduced
metabolic activity, a phenomenon known as dormancy. Here, we develop Spore
Life, a model to investigate the effects of dormancy on population dynamics. It
is based on Conway's Game of Life, a deterministic cellular automaton where
simple rules govern the metabolic state of an individual based on the metabolic
state of its neighbors. For individuals that would otherwise die, Spore Life
provides a refuge in the form of an inactive state. These dormant individuals
(spores) can resuscitate when local conditions improve. The model includes a
parameter alpha that controls the survival probability of spores, interpolating
between Game of Life (alpha = 0) and Spore Life (alpha = 1), while capturing
stochastic dynamics in the intermediate regime (0 < alpha < 1). In addition to
identifying the emergence of unique periodic configurations, we find that spore
survival increases the average number of active individuals and buffers
populations from extinction. Contrary to expectations, the stabilization of the
population is not the result of a large and long-lived seed bank. Instead, the
demographic patterns in Spore Life only require a small number of resuscitation
events. Our approach yields novel insight into what is minimally required for
the emergence of complex behaviors associated with dormancy and the seed banks
that they generate.
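A compact sketch of one Spore Life update on a toroidal grid; the exact resuscitation rule (a spore wakes when its neighborhood would support a birth) is our reading of "local conditions improve":

```python
import numpy as np

def spore_life_step(active, spore, alpha, rng):
    # Count live neighbors on a torus.
    n = sum(np.roll(np.roll(active, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    survive = active & ((n == 2) | (n == 3))        # standard Life survival
    birth = ~active & ~spore & (n == 3)             # standard Life birth
    dying = active & ~survive
    new_spore = dying & (rng.random(active.shape) < alpha)   # dormancy refuge
    wake = spore & (n == 3)                         # conditions improved: revive
    return survive | birth | wake, (spore & ~wake) | new_spore

rng = np.random.default_rng(0)
active = rng.random((64, 64)) < 0.2
spore = np.zeros_like(active)
for _ in range(200):
    active, spore = spore_life_step(active, spore, alpha=0.5, rng=rng)
print(active.sum(), spore.sum())
```

Setting alpha = 0 recovers the deterministic Game of Life, while intermediate values give the stochastic regime studied in the model.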
|
In this paper we consider the time dependent Peierls-Nabarro model in
dimension one. This model is a semi-linear integro-differential equation
associated to the half Laplacian. This model describes the evolution of phase
transitions associated to dislocations. At large scale with well separated
dislocations, we show that the dislocations move at a velocity proportional to
the effective stress. This implies Orowan's law which claims that the plastic
strain velocity is proportional to the product of the density of dislocations
by the effective stress.
|
We present a decomposition scheme based on Lie-Trotter-Suzuki product
formulae to represent an ordered operator exponential as a product of ordinary
operator exponentials. We provide a rigorous proof that does not use a
time-displacement superoperator, and can be applied to non-analytic functions.
Our proof provides explicit bounds on the error and includes cases where the
functions are not infinitely differentiable. We show that Lie-Trotter-Suzuki
product formulae can still be used for functions that are not infinitely
differentiable, but that arbitrary order scaling may not be achieved.
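Schematically, the simplest (first-order) instance of such a decomposition replaces the ordered exponential by a product of ordinary exponentials on a grid,

$$\mathcal{T}\exp\!\left(\int_{0}^{T} A(t)\,dt\right)\;\approx\;\prod_{j=N-1}^{0}\exp\!\left(A(t_j)\,\Delta t\right),\qquad t_j=j\,\Delta t,\quad \Delta t=\frac{T}{N},$$

with overall error $O(\Delta t)$ under mild smoothness assumptions; higher-order Suzuki formulae improve the exponent, but, as the results above indicate, only when $A(t)$ is sufficiently differentiable.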
|
Whole-plane SLE$_\kappa$ is a random fractal curve between two points on the
Riemann sphere. Zhan established for $\kappa \leq 4$ that whole-plane
SLE$_\kappa$ is reversible, meaning invariant in law under conformal
automorphisms swapping its endpoints. Miller and Sheffield extended this to
$\kappa \leq 8$. We prove whole-plane SLE$_\kappa$ is reversible for $\kappa >
8$, resolving the final case and answering a conjecture of Viklund and Wang.
Our argument depends on a novel mating-of-trees theorem of independent
interest, where Liouville quantum gravity on the disk is decorated by an
independent radial space-filling SLE curve.
|
Causal models and methods have great promise, but their progress has been
stalled. Proposals using causality get squeezed between two opposing
worldviews. Scientific perfectionism--an insistence on only using "correct"
models--slows the adoption of causal methods in knowledge-generating
applications. Pushing in the opposite direction, the academic discipline of
computer science prefers algorithms with no or few assumptions, and
technologies based on automation and scalability are often selected for
economic and business applications. We argue that these system-centric
inductive biases should be replaced with a human-centric philosophy we refer to
as scientific pragmatism. The machine learning community must strike the right
balance to make space for the causal revolution to prosper.
|
We construct non-K\"ahler Calabi-Yau manifolds of dimension $\ge$ 4 with
arbitrarily large 2nd Betti numbers by smoothing normal crossing varieties. The
examples have K3 fibrations over smooth projective varieties and their
algebraic dimensions are of codimension 2.
|
TCSPs (Temporal Constraint Satisfaction Problems), as defined in [Dechter et
al., 1991], get rid of unary constraints by binarizing them after having added
an "origin of the world" variable. In this work, we look at the constraints
between the "origin of the world" variable and the other variables, as the
(binarized) domains of these other variables. With this in mind, we define a
notion of arc-consistency for TCSPs, which we will refer to as
binarized-domains Arc-Consistency, or bdArc-Consistency for short. We provide
an algorithm achieving bdArc-Consistency for a TCSP, which we will refer to as
bdAC-3, for it is an adaptation of Mackworth's [1977] well-known
arc-consistency algorithm AC-3. We show that if a convex TCSP, referred to in
[Dechter et al., 1991] as an STP (Simple Temporal Problem), is
bdArc-Consistent, and its "origin of the world" variable disconnected from none
of the other variables, its binarized domains are minimal. We provide two
polynomial backtrack-free procedures: one for the task of getting, from a
bdArc-Consistent STP, either that it is inconsistent or, in case of
consistency, a bdArc-Consistent STP refinement whose "origin of the world"
variable is disconnected from none of the other variables; the other for the
task of getting a solution from a bdArc-Consistent STP whose "origin of the
world" variable is disconnected from none of the other variables. We then show
how to use our results both in a general TCSP solver and in a TCSP-based job
shop scheduler. A one-to-all all-to-one shortest-paths algorithm for an
IR-labelled directed graph can be extracted from our work. Finally, we show that an
existing adaptation to TCSPs of Mackworth's [1977] path-consistency algorithm
PC-2 is not guaranteed to always terminate, and correct it.
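The propagation underlying bdAC-3 can be sketched in AC-3 style on interval-represented binarized domains; the representation below (one interval per variable, relative to the "origin of the world", and both directions of each difference constraint listed explicitly) is an assumption made for illustration:

```python
from collections import deque

def bd_arc_consistency(domains, constraints):
    # domains[i] = (lo, hi): binarized domain of variable i.
    # constraints[(i, j)] = (a, b): x_j - x_i in [a, b]; include the inverse
    # arc (j, i) with bounds (-b, -a) in the input.
    queue = deque(constraints)
    while queue:
        i, j = queue.popleft()
        a, b = constraints[(i, j)]
        lo_i, hi_i = domains[i]
        lo_j, hi_j = domains[j]
        revised = (max(lo_j, lo_i + a), min(hi_j, hi_i + b))
        if revised != (lo_j, hi_j):
            if revised[0] > revised[1]:
                return None                      # empty domain: inconsistent
            domains[j] = revised
            queue.extend(arc for arc in constraints if arc[0] == j)
    return domains

domains = {0: (0.0, 10.0), 1: (0.0, 10.0)}
constraints = {(0, 1): (2.0, 4.0), (1, 0): (-4.0, -2.0)}   # x1 - x0 in [2, 4]
print(bd_arc_consistency(domains, constraints))
```

On a convex problem (an STP) whose "origin of the world" variable is connected to every other variable, the resulting intervals are, per the result above, minimal.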
|
The goal of this work is to introduce a local and a global interpolator in
Jacobi-weighted spaces, with optimal order of approximation in the context of
the $p$-version of finite element methods. Then, an a posteriori error
indicator of the residual type is proposed for a model problem in two
dimensions and, in the mathematical framework of the Jacobi-weighted spaces,
the equivalence between the estimator and the error is obtained in an
appropriate weighted norm.
|
We make a rigorous analysis of the existence and characterization of the free
boundary related to the optimal stopping problem that maximizes the mean of an
Ornstein--Uhlenbeck bridge. The result includes the Brownian bridge problem as
a limit case. The methodology hereby presented relies on a time-space
transformation that casts the original problem into a more tractable one with
an infinite horizon and a Brownian motion underneath. We comment on two
different numerical algorithms to compute the free-boundary equation and
discuss illustrative cases that shed light on the boundary's shape. In
particular, the free boundary generally does not share the monotonicity of the
Brownian bridge case.
|
The maximum genus $\gamma_M(G)$ of a graph G is the largest genus of an
orientable surface into which G has a cellular embedding. Combinatorially, it
coincides with the maximum number of disjoint pairs of adjacent edges of G
whose removal results in a connected spanning subgraph of G. In this paper we
prove that removing pairs of adjacent edges from G arbitrarily while retaining
connectedness leads to at least $\gamma_M(G)/2$ pairs of edges removed. This
allows us to describe a greedy algorithm for the maximum genus of a graph; our
algorithm returns an integer k such that $\gamma_M(G)/2\le k \le \gamma_M(G)$,
providing a simple method to efficiently approximate maximum genus. As a
consequence of our approach we obtain a 2-approximate counterpart of Xuong's
combinatorial characterisation of maximum genus.
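A direct sketch of the greedy procedure (using networkx; exhaustive search over incident edge pairs, so only suitable for small graphs):

```python
import itertools
import networkx as nx

def greedy_max_genus(G):
    # Repeatedly remove any pair of adjacent edges whose removal keeps the
    # spanning subgraph connected; the number k of removed pairs satisfies
    # gamma_M(G)/2 <= k <= gamma_M(G) by the result above.
    H = nx.MultiGraph(G)
    k = 0
    while True:
        removed = False
        for v in list(H.nodes):
            for e1, e2 in itertools.combinations(H.edges(v, keys=True), 2):
                H.remove_edge(*e1)
                H.remove_edge(*e2)
                if nx.is_connected(H):
                    k, removed = k + 1, True
                    break
                H.add_edge(*e1)                  # restore and try next pair
                H.add_edge(*e2)
            if removed:
                break
        if not removed:
            return k

# K5 has maximum genus 3, so the greedy value here is guaranteed to be 2 or 3.
print(greedy_max_genus(nx.complete_graph(5)))
```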
|
Oceanic tides are a major source of tidal dissipation. They drive the
evolution of planetary systems and the rotational dynamics of planets. However,
2D models commonly used for the Earth cannot be applied to extrasolar telluric
planets hosting potentially deep oceans because they ignore the
three-dimensional effects related to the ocean vertical structure. Our goal is
to investigate in a consistent way the importance of the contribution of
internal gravity waves to the oceanic tidal response and to propose a model
able to treat a wide range of cases, from shallow to deep oceans. A 3D ab
initio model is developed to study the dynamics of a global planetary ocean.
This model takes into account compressibility, stratification and sphericity
terms, which are usually ignored in 2D approaches. An analytic solution is
computed and used to study the dependence of the tidal response on the tidal
frequency and on the ocean depth and stratification. In the 2D asymptotic
limit, we recover the frequency-resonant behaviour due to surface
inertial-gravity waves identified by early studies. As the ocean depth and
Brunt-V\"ais\"al\"a frequency increase, the contribution of internal gravity
waves grows in importance and the tidal response becomes three-dimensional. In
the case of deep oceans, the stable stratification induces resonances that can
increase the tidal dissipation rate by several orders of magnitude. It can thus
significantly affect the evolution time scale of the planetary rotation.
|
Identifying and quantifying factors influencing human decision making remains
an outstanding challenge, impacting the performance and predictability of
social and technological systems. In many cases, system failures are traced to
human factors including congestion, overload, miscommunication, and delays.
Here we report results of a behavioral network science experiment, targeting
decision making in a natural disaster. In each scenario, individuals are faced
with a forced "go" versus "no go" evacuation decision, based on information
available on competing broadcast and peer-to-peer sources. In this controlled
setting, all actions and observations are recorded prior to the decision,
enabling development of a quantitative decision making model that accounts for
the disaster likelihood, severity, and temporal urgency, as well as competition
between networked individuals for limited emergency resources. Individual
differences in behavior within this social setting are correlated with
individual differences in inherent risk attitudes, as measured by standard
psychological assessments. Identification of robust methods for quantifying
human decisions in the face of risk has implications for policy in disasters
and other threat scenarios.
|
We discuss the evolution of purity in mixed quantum/classical approaches to
electronic nonadiabatic dynamics in the context of the Ehrenfest model. As it
is impossible to exactly determine initial conditions for a realistic system,
we choose to work in the statistical Ehrenfest formalism that we introduced in
Ref. 1. From it, we develop a new framework to determine exactly the change in
the purity of the quantum subsystem along the evolution of a statistical
Ehrenfest system. In a simple case, we verify how and to what extent Ehrenfest
statistical dynamics makes a system with more than one classical trajectory and
an initial quantum pure state become a quantum mixed one. We demonstrate this
numerically, showing how the evolution of purity depends on time, on the
dimension of the quantum state space $D$, and on the number of classical
trajectories $N$ of the initial distribution. The results in this work open new
perspectives for studying decoherence with Ehrenfest dynamics.
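The basic mechanism is easy to reproduce: averaging the projectors of the pure states attached to different classical trajectories yields a mixed density matrix. In the sketch below, random pure states stand in for Ehrenfest-propagated ones:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 4, 50                      # quantum dimension, classical trajectories

def random_pure_state(rng, D):
    psi = rng.normal(size=D) + 1j * rng.normal(size=D)
    return psi / np.linalg.norm(psi)

# Each classical trajectory carries its own pure quantum state; the
# statistical ensemble is described by the averaged density matrix.
states = [random_pure_state(rng, D) for _ in range(N)]
rho = sum(np.outer(s, s.conj()) for s in states) / N
print(np.trace(rho @ rho).real)   # purity < 1: the ensemble is mixed
```

How fast the purity of the true Ehrenfest ensemble decays with time, D, and N is precisely what the framework in the paper quantifies.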
|
Implications of recently well-measured neutron star masses, particularly near
and above 2 solar masses, for the equation of state (EOS) of neutron star
matter are highlighted. Model-independent upper limits to thermodynamic
properties in neutron stars, which only depend on the neutron star maximum
mass, established from causality considerations are presented. The need for
non-perturbative treatments of quark matter in neutron stars is stressed
through studies of self-bound quark matter stars, and of nucleon-quark hybrid
stars. The extent to which several well-measured masses and radii of individual
neutron stars can establish a model-independent EOS through an inversion of the
stellar structure equations is briefly discussed.
|
We establish a positivity property for a class of semilinear elliptic
problems involving indefinite sublinear nonlinearities. Namely, we show that
any nontrivial nonnegative solution is positive for a class of problems to which
the strong maximum principle does not apply. Our approach is based on a
continuity argument combined with variational techniques, the sub and
supersolutions method and some a priori bounds. Both Dirichlet and Neumann
homogeneous boundary conditions are considered. As a byproduct, we deduce some
existence and uniqueness results. Finally, as an application, we derive some
positivity results for indefinite concave-convex type problems.
|
By imposing special compatible similarity constraints on a class of
integrable partial $q$-difference equations of KdV-type we derive a hierarchy
of second-degree ordinary $q$-difference equations. The lowest (non-trivial)
member of this hierarchy is a second-order second-degree equation which can be
considered as an analogue of equations in the class studied by Chazy. We
present corresponding isomonodromic deformation problems and discuss the
relation between this class of difference equations and other equations of
Painleve type.
|
The aim of this paper is to deal with the asymptotics of generalized Orlicz
norms when the lower growth rate tends to infinity. $\Gamma$-convergence
results and related representation theorems in terms of $L^\infty$ functionals
are proven for sequences of generalized Orlicz energies under mild convexity
assumptions. This latter hypothesis is removed in the variable exponent
setting.
|
A fraction of AGN producing VHE gamma-rays are located in galaxy clusters.
The magnetic field present in the intra-cluster medium would lead to
conversions of VHE photons into axion-like particles (ALPs), which are a
generic prediction of several extensions of the Standard Model. ALPs produced
in this way would traverse cosmological distances unaffected by the
extragalactic background light at variance with VHE photons which undergo a
substantial absorption. Eventually, a nontrivial fraction of ALPs would
re-convert into VHE photons in the magnetic field of the Milky Way. This
mechanism produces a significant hardening of the VHE spectrum of AGN in galaxy
clusters. As a specific example we consider the energy spectra of two observed
VHE gamma-ray sources located in galaxy clusters, namely 1ES 0414+009 at
redshift z=0.287 and Mkn 501 at z=0.034. We find that the hardening in the
observed spectra becomes relevant at E > 1 TeV. The detection of this signature
would allow to indirectly probe the existence of ultra-light ALPs with mass m_a
< 10^{-8} eV and photon-ALP coupling g_{a gamma} < 10^{-10} GeV^{-1} with the
presently operating Imaging Atmospheric Cherenkov Telescopes like H.E.S.S.,
MAGIC, VERITAS and CANGAROO-III and even more likely with the planned detectors
like CTA, HAWC and HiSCORE. An independent laboratory check of ultra-light ALPs
invoked in this mechanism can be performed with the planned upgrade of the
photon regeneration experiment ALPS at DESY and with the next generation solar
axion detector IAXO.
|