This contribution to the proceedings collects recent results on preheating after inflation. We discuss tachyonic preheating in SUSY-motivated hybrid inflation; the development of equilibrium after preheating; the theory
of fermionic preheating and the problem of gravitino overproduction from
preheating.
|
We introduce a "system-wide safety staffing" (SWSS) parameter for multiclass
multi-pool networks of any tree topology, Markovian or non-Markovian, in the
Halfin-Whitt regime. This parameter can be regarded as the optimal reallocation
of the capacity fluctuations (positive or negative) of order $\sqrt{n}$ when
each server pool employs a square-root staffing rule. We provide an explicit
form of the SWSS as a function of the system parameters, which is derived using
a graph theoretic approach based on Gaussian elimination.
For Markovian networks, we give an equivalent characterization of the SWSS
parameter via the drift parameters of the limiting diffusion. We show that if
the SWSS parameter is negative, the limiting diffusion and the diffusion-scaled
queueing processes are transient under any Markov control, and cannot have a
stationary distribution when this parameter is zero. If it is positive, we show
that the diffusion-scaled queueing processes are uniformly stabilizable, that
is, there exists a scheduling policy under which the stationary distributions
of the controlled processes are tight over the size of the network. In
addition, there exists a control under which the limiting controlled diffusion
is exponentially ergodic. Thus we have identified a necessary and sufficient
condition for the uniform stabilizability of such networks in the Halfin-Whitt
regime.
We use a constant control resulting from the leaf elimination algorithm to
stabilize the limiting controlled diffusion, while a family of Markov
scheduling policies which are easy to compute are used to stabilize the
diffusion-scaled processes. Finally, we show that under these controls the
processes are exponentially ergodic and the stationary distributions have
exponential tails.
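To make the square-root staffing rule referenced above concrete, the following minimal sketch (not taken from the paper; the parameter names and the ceiling convention are assumptions) staffs a single pool with N = R + beta*sqrt(R) servers, where R is the offered load and beta plays the role of the pool-level safety-staffing parameter of order sqrt(n).

import math

def square_root_staffing(arrival_rate, service_rate, beta):
    """Square-root staffing for a single server pool (illustrative sketch).

    The offered load is R = arrival_rate / service_rate, and the pool is
    staffed with N = R + beta * sqrt(R) servers, where beta is the
    pool-level safety-staffing parameter.
    """
    R = arrival_rate / service_rate
    return math.ceil(R + beta * math.sqrt(R))

# Example: a pool facing 400 jobs per unit time, unit service rate, beta = 1.5
print(square_root_staffing(400.0, 1.0, 1.5))  # -> 430 servers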
|
We classify the subvarieties of infinite dimensional affine space that are
stable under the infinite symmetric group. We determine the defining equations
and point sets of these varieties as well as the containments between them.
|
We consider a two-dimensional waveguide with a rectangular obstacle symmetric about the axis of the waveguide. We study the behaviour of the Neumann eigenvalues located below the first threshold when the sides of the obstacle approach the edges of the waveguide. We show that only one of the eigenvalues converges to the first threshold, and the rate of convergence depends on whether the length of the obstacle divided by the width of the waveguide is an integer or not.
|
We present phenomenological results for the inclusive cross section for the
production of a lepton-pair via virtual photon exchange at
next-to-next-to-next-to-leading order (N$^3$LO) in perturbative QCD. In line
with the case of Higgs production, we find that the hadronic cross section
receives corrections at the percent level, and the residual dependence on the
perturbative scales is reduced. However, unlike in the Higgs case, we observe
that the uncertainty band derived from scale variation is no longer contained
in the band of the previous order.
|
The aim of this work is to revisit the phenomenological theory of the
interaction between membrane inclusions, mediated by the membrane fluctuations.
We consider the case where the inclusions are separated by distances larger
than their characteristic size. Within our macroscopic approach the physical nature of such inclusions is not essential; however, we always have in mind two prototypes of such inclusions: proteins and RNA macromolecules. Because the interaction is driven by the membrane fluctuations and by the coupling between the inclusions and the membrane, it is possible to change the interaction potential by external actions affecting these factors. As an example of such an external action we consider an electric field. Under an external electric field (either dc or ac), we propose a new coupling mechanism between the membrane and inclusions possessing dipole moments (as is the case for most protein macromolecules). We find, quite unexpectedly and presumably for the first time, that this new coupling mechanism leads to a giant enhancement of the pairwise potential of the inclusions. This result opens up a way to purposefully tune the interaction energy, and also to test the theory set forth in our article.
|
Metamaterials and meta-surfaces represent a remarkably versatile platform for
light manipulation, biological and chemical sensing, nonlinear optics, and even
spaser lasing. Many of these applications rely on the resonant nature of
metamaterials, which is the basis for extreme spectrally selective
concentration of optical energy in the near field. The simplicity of free-space
light coupling into sharply-resonant meta-surfaces with high resonance quality
factors Q>>1 is a significant practical advantage over the extremely
angle-sensitive diffractive structures or inherently inhomogeneous high-Q
photonic structures such as toroid or photonic crystal microcavities. Such
spectral selectivity is presently impossible for the overwhelming majority of
metamaterials that are made of metals and suffer from high plasmonic losses.
Here, we propose and demonstrate Fano-resonant all-semiconductor optical
meta-surfaces supporting optical resonances with quality factors Q>100 that are
almost an order of magnitude sharper than those supported by their plasmonic
counterparts. These silicon-based structures are shown to be planar chiral,
opening exciting possibilities for efficient ultra-thin circular polarizers and
narrow-band thermal emitters of circularly polarized radiation.
|
Crystallographic anisotropy of the spin-dependent conductivity tensor can be
exploited to generate transverse spin-polarized current in a ferromagnetic
film. This ferromagnetic spin Hall effect is analogous to the spin-splitting
effect in altermagnets and does not require spin-orbit coupling.
First-principles screening of 41 non-cubic ferromagnets revealed that many of
them, when grown as a single crystal with tilted crystallographic axes, can
exhibit large spin Hall angles comparable with the best available
spin-orbit-driven spin Hall sources. Macroscopic spin Hall effect is possible
for uniformly magnetized ferromagnetic films grown on some low-symmetry
substrates with epitaxial relations that prevent cancellation of contributions
from different orientation domains. Macroscopic response is also possible for
any substrate if magnetocrystalline anisotropy is strong enough to lock the
magnetization to the crystallographic axes in different orientation domains.
|
Optimal transport (OT) distances are finding evermore applications in machine
learning and computer vision, but their widespread use in larger-scale problems is impeded by their high computational cost. In this work we develop a family of fast and practical stochastic algorithms for solving the optimal transport problem with an entropic penalization. This work extends the recently developed Greenkhorn algorithm, in the sense that the Greenkhorn algorithm is a limiting case of this family. We also provide a simple and general convergence theorem for all algorithms in the class, with rates that match the best known rates of the Greenkhorn and Sinkhorn algorithms, and conclude with
numerical experiments that show under what regime of penalization the new
stochastic methods are faster than the aforementioned methods.
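For reference, the two algorithms that this family interpolates between can be sketched as follows. This is a generic illustration of entropically penalized optimal transport (matrix scaling), not the stochastic algorithms introduced in the paper; the variable names, fixed iteration counts, and absence of a stopping rule are assumptions made for brevity.

import numpy as np

def sinkhorn(C, r, c, eps, n_iter=1000):
    """Entropic OT via Sinkhorn iterations: alternately rescale the rows and
    columns of K = exp(-C/eps) to match the marginals r and c."""
    K = np.exp(-C / eps)
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]   # approximate transport plan

def greenkhorn(C, r, c, eps, n_iter=5000):
    """Greedy (Greenkhorn-style) variant: update only the single row or
    column whose marginal constraint is currently most violated."""
    K = np.exp(-C / eps)
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        P = u[:, None] * K * v[None, :]
        row_err = np.abs(P.sum(axis=1) - r)
        col_err = np.abs(P.sum(axis=0) - c)
        i, j = np.argmax(row_err), np.argmax(col_err)
        if row_err[i] >= col_err[j]:
            u[i] = r[i] / (K[i, :] @ v)
        else:
            v[j] = c[j] / (K[:, j] @ u)
    return u[:, None] * K * v[None, :]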
|
Let $\frak{n}$ be a square-free ideal of $\mathbb{F}_q[T]$. We study the
rational torsion subgroup of the Jacobian variety $J_0(\frak{n})$ of the
Drinfeld modular curve $X_0(\frak{n})$. We prove that for any prime number
$\ell$ not dividing $q(q-1)$, the $\ell$-primary part of this group coincides
with that of the cuspidal divisor class group. We further determine the
structure of the $\ell$-primary part of the cuspidal divisor class group for
any prime $\ell$ not dividing $q-1$.
|
The derivation of Warburg's impedance presented in several books and
scientific papers is reconsidered. It has been obtained by assuming that the
total electric current across the sample is just due to the diffusion, and that
the external potential applied to the electrode is responsible for an increase
of the bulk density of charge described by Nernst's model. We show that these assumptions are not correct, and hence the proposed derivations are questionable. A correct determination of the electrochemical impedance of a cell made of an insulating material into which external charges of a given sign are injected, with the diffusion and displacement currents taken into account, does not predict, in the high-frequency region, the trends for the real and imaginary parts of the impedance that are given by Warburg's impedance in the Nernstian approximation. The presented model can be generalized to the case of an asymmetric cell, assuming physically sound boundary conditions.
|
We calculate the number of open walks of fixed length and algebraic area on a
square planar lattice by an extension of the operator method used for the
enumeration of closed walks. The open walk area is defined by closing the walks
with a straight line across their endpoints and can assume half-integer values
in lattice cell units. We also derive the length and area counting of walks
with endpoints on specific straight lines and outline an approach for dealing
with walks with fully fixed endpoints.
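The area convention described above can be illustrated with a brute-force enumeration (an illustrative check, not the operator method of the paper): closing the walk with a straight segment between its endpoints is exactly what the shoelace formula does when the vertex list wraps around, and half-integer areas in lattice-cell units appear naturally.

from itertools import product

STEPS = {'R': (1, 0), 'L': (-1, 0), 'U': (0, 1), 'D': (0, -1)}

def algebraic_area(walk):
    """Signed area enclosed by a lattice walk closed with a straight line
    between its endpoints (shoelace formula); may be half-integer."""
    pts = [(0, 0)]
    for s in walk:
        dx, dy = STEPS[s]
        pts.append((pts[-1][0] + dx, pts[-1][1] + dy))
    area2 = 0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):  # wrap-around closes the walk
        area2 += x1 * y2 - x2 * y1
    return area2 / 2

# Count open walks of length n by algebraic area (brute force, small n only)
n = 4
counts = {}
for walk in product('RLUD', repeat=n):
    a = algebraic_area(walk)
    counts[a] = counts.get(a, 0) + 1
print(sorted(counts.items()))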
|
We present a multimodal framework to learn general audio representations from
videos. Existing contrastive audio representation learning methods mainly focus
on using the audio modality alone during training. In this work, we show that
additional information contained in video can be utilized to greatly improve
the learned features. First, we demonstrate that our contrastive framework does
not require high resolution images to learn good audio features. This allows us
to scale up the training batch size, while keeping the computational load
incurred by the additional video modality to a reasonable level. Second, we use
augmentations that mix together different samples. We show that this is
effective to make the proxy task harder, which leads to substantial performance
improvements when increasing the batch size. As a result, our audio model
achieves a state-of-the-art 42.4 mAP on the AudioSet classification
downstream task, closing the gap between supervised and self-supervised methods
trained on the same dataset. Moreover, we show that our method is advantageous
on a broad range of non-semantic audio tasks, including speaker identification,
keyword spotting, language identification, and music instrument classification.
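As a generic illustration of the sample-mixing augmentation mentioned above (not the exact augmentation or training framework used in the paper; the Beta-distributed mixing weight and the spectrogram shapes are assumptions):

import numpy as np

def mix_samples(batch, alpha=0.4, rng=None):
    """Mix each example with a randomly chosen other example in the batch
    (generic mixup-style augmentation on, e.g., log-mel spectrograms)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(batch.shape[0],) + (1,) * (batch.ndim - 1))
    perm = rng.permutation(batch.shape[0])
    return lam * batch + (1.0 - lam) * batch[perm]

# Example: a batch of 8 log-mel spectrograms of shape (64 mel bins, 100 frames)
batch = np.random.default_rng(0).normal(size=(8, 64, 100))
print(mix_samples(batch).shape)  # (8, 64, 100)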
|
This paper details a simple approach to the implementation of Optimality
Theory (OT, Prince and Smolensky 1993) on a computer, in part reusing standard
system software. In a nutshell, OT's GENerating source is implemented as a
BinProlog program interpreting a context-free specification of a GEN structural
grammar according to a user-supplied input form. The resulting set of textually
flattened candidate tree representations is passed to the CONstraint stage.
Constraints are implemented by finite-state transducers specified as `sed'
stream editor scripts that typically map ill-formed portions of the candidate
to violation marks. EVALuation of candidates reduces to simple sorting: the
violation-mark-annotated output leaving CON is fed into `sort', which orders
candidates on the basis of the violation vector column of each line, thereby
bringing the optimal candidate to the top. This approach gave rise to OT
SIMPLE, the first freely available software tool for the OT framework to
provide generic facilities for both GEN and CONstraint definition. Its
practical applicability is demonstrated by modelling the OT analysis of
apparent subtractive pluralization in Upper Hessian presented in Golston and
Wiese (1996).
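The sort-based EVALuation step can be illustrated with a tiny stand-in (written here in Python rather than with the BinProlog/sed/sort pipeline of OT SIMPLE; the candidate forms and violation counts are hypothetical): each candidate carries a violation vector ordered by constraint ranking, and lexicographic ordering brings the optimal candidate to the top.

# Each candidate carries a violation vector whose columns are ordered by
# constraint ranking; lexicographic comparison implements strict domination,
# so the minimum is the optimal candidate.
candidates = [
    ("kind-e", (0, 1, 1)),   # hypothetical candidates and violation counts
    ("kind",   (1, 0, 0)),
    ("kinde",  (0, 0, 2)),
]
optimal = min(candidates, key=lambda cand: cand[1])
print(optimal[0])   # -> "kinde" under this hypothetical ranking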
|
This paper is the second in the series Commutative Scaling of Width and Depth
(WD) about commutativity of infinite width and depth limits in deep neural
networks. Our aim is to understand the behaviour of neural functions (functions
that depend on a neural network model) as width and depth go to infinity (in
some sense), and eventually identify settings under which commutativity holds,
i.e. the neural function tends to the same limit no matter how width and depth
limits are taken. In this paper, we formally introduce and define the
commutativity framework, and discuss its implications on neural network design
and scaling. We study commutativity for the neural covariance kernel which
reflects how network layers separate data. Our findings extend previous results
established in [55] by showing that taking the width and depth to infinity in a
deep neural network with skip connections, when branches are suitably scaled to
avoid exploding behaviour, results in the same covariance structure no matter
how that limit is taken. This has a number of theoretical and practical
implications that we discuss in the paper. The proof techniques in this paper
are novel and rely on tools that are more accessible to readers who are not
familiar with stochastic calculus (used in the proofs of WD(I)).
|
Under the current policy decision-making paradigm, we make or evaluate a policy decision by intervening on different socio-economic parameters and analyzing the impact of those interventions. This process involves identifying the causal relation between interventions and outcomes. The matching method is one of the popular techniques for identifying such causal relations. However, in one-to-one matching, when a treatment or control unit has multiple pair assignment options with similar match quality, different matching algorithms often assign different pairs. Since all the matching algorithms assign pairs without considering the outcomes, it is possible that, with the same data and the same hypothesis, different experimenters reach different conclusions. This problem becomes more prominent in the case of large-scale observational studies. Recently, a robust approach was proposed to tackle this uncertainty, which uses discrete optimization techniques to explore all possible assignments. Though optimization techniques are very efficient in their own way, they are not scalable to big data. In this work, we consider causal inference testing with binary outcomes and propose computationally efficient algorithms that are scalable to large-scale observational studies. By leveraging the structure of the optimization model, we propose a robustness condition which further reduces the computational burden. We validate the efficiency of the proposed algorithms by testing the causal relation between the Hospital Readmission Reduction Program (HRRP) and readmission to a different hospital (non-index readmission) on the State of California Patient Discharge Database from 2010 to 2014. Our results show that HRRP has a causal relation with the increase in non-index readmissions, and the proposed algorithms prove to be highly scalable in testing causal relations on large-scale observational studies.
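The pair-assignment ambiguity discussed above can be seen in a minimal greedy nearest-neighbor matching sketch (an illustration only, not the optimization-based procedure of the paper; matching on a one-dimensional propensity-like score is an assumption): when two controls are nearly equidistant from a treated unit, the chosen pair depends on the processing order.

import numpy as np

def greedy_nn_match(ps_treated, ps_control):
    """Greedy one-to-one nearest-neighbor matching on a (propensity) score.

    Each treated unit is paired with the closest still-unmatched control.
    When several controls are (almost) equally close, the pairing depends
    on the processing order -- the ambiguity discussed above.
    """
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        j = min(available, key=lambda c: abs(ps_control[c] - p))
        pairs.append((i, j))
        available.remove(j)
    return pairs

treated = np.array([0.30, 0.50])
control = np.array([0.29, 0.31, 0.52])   # two near-equivalent matches for 0.30
print(greedy_nn_match(treated, control))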
|
For the description of quantum evolution, the use of a manifestly time-dependent quantum Hamiltonian $\mathfrak{h}(t) =\mathfrak{h}^\dagger(t)$ is shown to be equivalent to working with its simplified, time-independent alternative $G\neq G^\dagger$. A tradeoff analysis is performed, recommending the latter option. The physical unitarity requirement is shown to be fulfilled in a suitable ad hoc representation of Hilbert space.
|
In the current paper we present some new data on the issue of quasi-normal
modes (QNMs) of uniform, neutron and quark stars. These questions have already
been addressed in the literature before, but we have found some interesting
features that have not been discussed so far. We have increased the range of
frequency values for the scalar and axial perturbations of such stars and made
a comparison between such QNMs and those of the very well-known Schwarzschild
black holes. Also addressed in this work was the interesting feature of
competing modes, which appear not only for uniform stars, but for quark stars
as well.
|
We demonstrate a method to create potential barriers with polarized light
beams for polaritons in semiconductor microcavities. The form of the barriers
is engineered via the real-space shape of a focused beam on the sample. Their
height can be determined by the visibility of the scattering waves generated in
a polariton fluid interacting with them. This technique opens up the way to the
creation of dynamical potentials and defects of any shape in semiconductor
microcavities.
|
The conservative dephasing effects of gravitational self forces for extreme
mass-ratio inspirals are studied. Both secular and non-secular conservative
effects may have a significant effect on LISA waveforms that is independent of
the mass ratio of the system. Such effects need to be included in generated
waveforms to allow for accurate gravitational wave astronomy that requires
integration times as long as a year.
|
Phase measurement using a lossless Mach-Zehnder interferometer with certain
entangled $N$-photon states can lead to a phase sensitivity of the order of
1/N, the Heisenberg limit. However, previously considered output measurement
schemes are different for different input states to achieve this limit. We show
that it is possible to achieve this limit just by the parity measurement for
all the commonly proposed entangled states. Based on the parity measurement
scheme, the reductions of the phase sensitivity in the presence of photon loss
are examined for the various input states.
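As a concrete illustration (a standard textbook computation, not the paper's general argument for all input states), for an $N$-photon NOON state the expectation value of the parity operator $\hat{\Pi}$ oscillates as $\langle\hat{\Pi}\rangle=\cos(N\varphi)$ (up to a constant phase offset), and since $\langle\hat{\Pi}^2\rangle=1$, error propagation gives
$$\Delta\varphi=\frac{\Delta\hat{\Pi}}{\big|\partial_\varphi\langle\hat{\Pi}\rangle\big|}=\frac{\sqrt{1-\cos^2(N\varphi)}}{N\,|\sin(N\varphi)|}=\frac{1}{N},$$
which is the Heisenberg limit quoted above.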
|
We study the asymptotic behaviour of the following linear growth-fragmentation equation
$$\dfrac{\partial}{\partial t} u(t,x) + \dfrac{\partial}{\partial x} \big(x\, u(t,x)\big) + B(x)\, u(t,x) = 4 B(2x)\, u(t,2x),$$
and prove that, under fairly general assumptions on the division rate $B(x)$, its solution converges towards an oscillatory function, explicitly given by the projection of the initial state on the space generated by the countable set of the dominant eigenvectors of the operator. Despite the lack of hypo-coercivity of the operator, the proof relies on a general relative entropy argument in a convenient weighted $L^2$ space, where well-posedness is obtained via semigroup analysis. We also propose a non-dissipative numerical scheme able to capture the oscillations.
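For illustration only, a naive explicit upwind discretization of the equation above can be written in a few lines; this simple scheme is dissipative, so it is not the non-dissipative scheme proposed in the paper, and the division rate B(x) = x^2, the grid, the time step and the initial condition are all assumptions.

import numpy as np

# Explicit upwind scheme for  du/dt + d/dx (x u) + B(x) u = 4 B(2x) u(t, 2x)
J, xmax, T = 400, 4.0, 2.0
dx = xmax / J
x = np.arange(J) * dx
B = x ** 2                            # assumed division rate
dt = 0.5 * dx / xmax                  # CFL-type restriction for the transport term
u = np.exp(-(x - 1.0) ** 2 / 0.05)    # assumed initial condition

idx2 = 2 * np.arange(J)               # grid index of 2*x_j
inside = idx2 < J                     # u(t, 2x) is taken as 0 beyond the domain

t = 0.0
while t < T:
    flux = x * u
    div = np.empty_like(u)
    div[0] = flux[0] / dx             # incoming flux at x = 0 is zero
    div[1:] = (flux[1:] - flux[:-1]) / dx
    gain = np.zeros_like(u)
    gain[inside] = 4.0 * B[idx2[inside]] * u[idx2[inside]]
    u = u + dt * (-div - B * u + gain)
    t += dt

print("total mass:", u.sum() * dx)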
|
Patients with atrial fibrillation have a 5-7 fold increased risk of having an
ischemic stroke. In these cases, the most common site of thrombus localization
is inside the left atrial appendage (LAA) and studies have shown a correlation
between the LAA shape and the risk of ischemic stroke. These studies make use
of manual measurement and qualitative assessment of shape and are therefore
prone to large inter-observer discrepancies, which may explain the
contradictions between the conclusions in different studies. We argue that
quantitative shape descriptors are necessary to robustly characterize LAA
morphology and relate to other functional parameters and stroke risk.
Deep Learning methods are becoming widely available for segmenting cardiovascular structures from high-resolution images such as computed tomography (CT), but only a few have been tested for LAA segmentation. Furthermore, the majority of segmentation algorithms produce non-smooth 3D models that are not ideal for further processing, such as statistical shape analysis or computational fluid modelling. In this paper we present a fully automatic pipeline for image segmentation, mesh model creation and statistical shape modelling of the LAA. The LAA anatomy is implicitly represented as a signed distance field (SDF), which is directly regressed from the CT image using Deep Learning. The SDF is further used to register the LAA shapes to a common template and to build a statistical shape model (SSM). Based on 106
automatically segmented LAAs, the built SSM reveals that the LAA shape can be
quantified using approximately 5 PCA modes and allows the identification of two
distinct shape clusters corresponding to the so-called chicken-wing and
non-chicken-wing morphologies.
|
We study the entropy of the set traced by an $n$-step random walk on $\mathbb{Z}^d$.
We show that for $d \geq 3$, the entropy is of order $n$. For $d = 2$, the
entropy is of order $n/\log^2 n$. These values are essentially governed by the
size of the boundary of the trace.
|
So far, magneto-ionics, understood as voltage-driven ion transport in
magnetic materials, has largely relied on the controlled migration of oxygen ions/vacancies and, to a lesser extent, lithium and hydrogen. Here, we demonstrate
efficient, room-temperature, voltage-driven nitrogen transport (i.e., nitrogen
magneto-ionics) by electrolyte-gating of a single CoN film (without an
ion-reservoir layer). Nitrogen magneto-ionics in CoN is compared to oxygen
magneto-ionics in Co3O4, both layers showing a nanocrystalline
face-centered-cubic structure and reversible voltage-driven ON-OFF
ferromagnetism. In contrast to oxygen, nitrogen transport occurs uniformly
creating a plane-wave-like migration front, without assistance of diffusion
channels. Nitrogen magneto-ionics requires lower threshold voltages and
exhibits enhanced rates and cyclability. This is due to the lower activation
energy for ion diffusion and the lower electronegativity of nitrogen compared
to oxygen. These results are appealing for the use of magneto-ionics in nitride
semiconductor devices, in applications requiring endurance and moderate speeds
of operation, such as brain-inspired computing.
|
In this paper we consider a robot patrolling problem in which events arrive
randomly over time at the vertices of a graph. When an event arrives it remains
active for a random amount of time. If that active time exceeds a certain
threshold, then we say that the event is a true event; otherwise it is a false
event. The robot(s) can traverse the graph to detect newly arrived events, and
can revisit these events in order to classify them as true or false. The goal
is to plan robot paths that maximize the number of events that are correctly
classified, with the constraint that there are no false positives. We show that
the offline version of this problem is NP-hard. We then consider a simple
patrolling policy based on the traveling salesman tour, and characterize the
probability of correctly classifying an event. We investigate the problem when
multiple robots follow the same path, and we derive the optimal (and not
necessarily uniform) spacing between robots on the path.
|
Photo-induced edge states in low dimensional materials have attracted
considerable attention due to the tunability of topological properties and
dispersion. Specifically, graphene nanoribbons have been predicted to host
chiral edge modes upon irradiation with circularly polarized light. Here, we
present numerical calculations of time-resolved angle resolved photoemission
spectroscopy (trARPES) and time-resolved resonant inelastic x-ray scattering
(trRIXS) of a graphene nanoribbon. We characterize pump-probe spectroscopic
signatures of photo-induced edge states, illustrate the origin of distinct
spectral features that arise from Floquet topological edge modes, and
investigate the roles of incoming photon energies and finite core-hole lifetime
in RIXS. With momentum, energy, and time resolution, pump-probe spectroscopies
can play an important role in understanding the behavior of photo-induced
topological states of matter.
|
The idea of a possible anomalous contribution of non-perturbative origin to the nucleon spin was examined by analysing data on spin asymmetries in polarized deep inelastic scattering of leptons on nucleons. The region of high Bjorken x was explored. It was shown that the experimental data available at present do not provide evidence for this effect.
|
In preparation for the nuclear physics Long Range Plan exercise, a group of
104 neutrino physicists met in Seattle September 21-23 to discuss both the
present state of the field and the new opportunities of the next decade. This
report summarizes the conclusions of that meeting and presents its
recommendations. Further information is available at the workshop's web site.
This report will be further reviewed at the upcoming Oakland Town Meeting.
|
Besicovitch showed that if a set is null for the Hausdorff measure associated
to a given dimension function, then it is still null for the Hausdorff measure
corresponding to a smaller dimension function. We prove that this is not true
for packing measures. Moreover, we consider the corresponding questions for
sets of non-$\sigma$-finite packing measure, and for pre-packing measure
instead of packing measure.
|
We use recently developed efficient versions of the configuration interaction
method to perform {\em ab initio} calculations of the spectra of superheavy
elements seaborgium (Sg, $Z=106$), bohrium (Bh, $Z=107$), hassium (Hs, $Z=108$)
and meitnerium (Mt, $Z=109$). We calculate energy levels, ionization
potentials, isotope shifts and electric dipole transition amplitudes.
Comparison with lighter analogs reveals significant differences caused by
strong relativistic effects in superheavy elements. Very large spin-orbit
interaction distinguishes subshells containing orbitals with a definite total
electron angular momentum $j$. This effect replaces Hund's rule holding for
lighter elements.
|
A novel approach that combines visible light communication (VLC) with
unmanned aerial vehicles (UAVs) to simultaneously provide flexible
communication and illumination is proposed. To minimize the power consumption,
the locations of UAVs and the cell associations are optimized under
illumination and communication constraints. An efficient sub-optimal solution
that divides the original problem into two sub-problems is proposed. The first
sub-problem is modeled as a classical smallest enclosing disk problem to obtain
the optimal locations of UAVs, given the cell association. Then, assuming fixed
UAV locations, the second sub-problem is modeled as a min-size clustering
problem to obtain the optimized cell association. In addition, the obtained UAV
locations and cell associations are iteratively optimized multiple times to
reduce the power consumption. Numerical results show that the proposed approach
can reduce the total transmit power consumption by at least 53.8% compared to
two baseline algorithms with fixed UAV locations.
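The smallest-enclosing-disk sub-problem mentioned above can be approximated with a very short iterative scheme (a generic Badoiu-Clarkson-style sketch, not necessarily the algorithm used in the paper; the user coordinates below are synthetic).

import numpy as np

def approx_smallest_enclosing_disk(points, n_iter=1000):
    """Approximate the smallest enclosing disk of 2-D points (e.g. user
    locations in a cell) by repeatedly moving the center towards the
    farthest point with a decaying step size."""
    c = points.mean(axis=0)
    for i in range(1, n_iter + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c = c + (far - c) / (i + 1)
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

# Example: place a UAV above the center of the disk covering its served users
users = np.random.default_rng(0).uniform(0, 100, size=(30, 2))
center, radius = approx_smallest_enclosing_disk(users)
print(center, radius)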
|
Pulmonary hemorrhage (P-Hem) occurs among multiple species and can have
various causes. Cytology of bronchoalveolar lavage fluid (BALF) using a 5-tier
scoring system of alveolar macrophages based on their hemosiderin content is
considered the most sensitive diagnostic method. We introduce a novel, fully
annotated multi-species P-Hem dataset which consists of 74 cytology whole slide
images (WSIs) with equine, feline and human samples. To create this
high-quality and high-quantity dataset, we developed an annotation pipeline
combining human expertise with deep learning and data visualisation techniques.
We applied a deep learning-based object detection approach trained on 17
expertly annotated equine WSIs, to the remaining 39 equine, 12 human and 7
feline WSIs. The resulting annotations were semi-automatically screened for
errors on multiple types of specialised annotation maps and finally reviewed by a trained pathologist. Our dataset contains a total of 297,383 hemosiderophages classified into five grades. It is one of the largest publicly available WSI datasets with respect to the number of annotations, the scanned
area and the number of species covered.
|
Precision timing of large arrays (>50) of millisecond pulsars will detect the
nanohertz gravitational-wave emission from supermassive binary black holes
within the next ~3-7 years. We review the scientific opportunities of these
detections, the requirements for success, and the synergies with
electromagnetic instruments operating in the 2020s.
|
The SparseStep algorithm is presented for the estimation of a sparse
parameter vector in the linear regression problem. The algorithm works by
adding an approximation of the exact counting norm as a constraint on the model
parameters and iteratively strengthening this approximation to arrive at a
sparse solution. Theoretical analysis of the penalty function shows that the
estimator yields unbiased estimates of the parameter vector. An iterative
majorization algorithm is derived which has a straightforward implementation
reminiscent of ridge regression. In addition, the SparseStep algorithm is
compared with similar methods through a rigorous simulation study which shows
it often outperforms existing methods in both model fit and prediction
accuracy.
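A sketch of the iterative-majorization idea is given below, assuming the smooth counting-norm approximation lam * sum_j beta_j^2 / (beta_j^2 + gamma^2); each iteration then reduces to a reweighted ridge solve. This is an illustration under that assumed penalty, not the reference SparseStep implementation.

import numpy as np

def sparsestep_like(X, y, lam=1.0, gamma=1.0, n_iter=100):
    """Iterative majorization for an assumed counting-norm approximation.

    The penalty lam * sum_j beta_j^2 / (beta_j^2 + gamma^2) is majorized at
    the current iterate by a quadratic with weights gamma^2/(beta_j^2+gamma^2)^2,
    so every iteration is a ridge-like linear solve.
    """
    n, p = X.shape
    beta = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        w = gamma ** 2 / (beta ** 2 + gamma ** 2) ** 2   # majorizer weights
        beta = np.linalg.solve(XtX + lam * np.diag(w), Xty)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ true + 0.1 * rng.normal(size=200)
print(np.round(sparsestep_like(X, y, lam=5.0, gamma=0.1), 2))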
|
It has long been discussed that cosmic rays may contain signals of dark matter. In the last couple of years an anomaly in cosmic-ray positrons has drawn a lot of attention, and recently an excess in cosmic-ray anti-protons has been reported by the AMS-02 collaboration. Both excesses may point towards decaying or annihilating dark matter with a mass of around 1-10 TeV. In this article we study the gamma rays from dark matter and the constraints from cross correlations with the distribution of galaxies, particularly in the local volume. We find that the gamma rays due to the inverse-Compton process have a large intensity, and hence they give stringent constraints on dark matter scenarios in the TeV-scale mass regime. Taking into account the recent developments in modeling astrophysical gamma-ray sources, as well as the comprehensive possibilities for the final-state products of dark matter decay or annihilation, we show that the parameter regions of decaying dark matter that are suggested to explain the excesses are excluded. We also discuss the constraints on annihilating scenarios.
|
This paper extends the Karhunen-Loeve representation from classical Gaussian
random processes to quantum Wiener processes which model external bosonic
fields for open quantum systems. The resulting expansion of the quantum Wiener
process in the vacuum state is organised as a series of sinusoidal functions on
a bounded time interval with statistically independent coefficients consisting
of noncommuting position and momentum operators in a Gaussian quantum state. A
similar representation is obtained for the solution of a linear quantum
stochastic differential equation which governs the system variables of an open
quantum harmonic oscillator. This expansion is applied to computing a
quadratic-exponential functional arising as a performance criterion in the
framework of risk-sensitive control for this class of open quantum systems.
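For orientation, the classical Karhunen-Loeve expansion of a standard Wiener process on $[0,T]$, of which the expansion described above is the quantum (noncommutative) counterpart, reads
$$W_t=\sum_{k=0}^{\infty} Z_k\,\frac{\sqrt{2T}}{\left(k+\tfrac12\right)\pi}\,\sin\!\Big(\frac{\left(k+\tfrac12\right)\pi t}{T}\Big),\qquad Z_k\ \text{i.i.d.}\ \mathcal{N}(0,1),$$
with sinusoidal basis functions on a bounded time interval and statistically independent coefficients, mirroring the structure of the quantum expansion (this is the standard textbook formula, not the paper's quantum result).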
|
We propose a method to map the conventional optical interferometry setup into
quantum circuits. The unknown phase shift inside a Mach-Zehnder interferometer
in the presence of photon loss is estimated by simulating the quantum circuits.
For this aim, we use the Bayesian approach in which the likelihood functions
are needed, and they are obtained by simulating the appropriate quantum
circuits. The precision of four different definite photon-number states of
light, which all possess six photons, is compared. In addition, the fisher
information for the four definite photon-number states in the setup is also
estimated to check the optimality of the chosen measurement scheme.
|
We construct general anisotropic cosmological scenarios governed by an
$f(R)=R^n$ gravitational sector. Focusing then on some specific geometries, and
modelling the matter content as a perfect fluid, we perform a phase-space
analysis. We analyze the possibility of accelerating expansion at late times,
and additionally, we determine conditions for the parameter $n$ for the
existence of phantom behavior, contracting solutions as well as of cyclic
cosmology. Furthermore, we analyze if the universe evolves towards the future
isotropization without relying on a cosmic no-hair theorem. Our results
indicate that anisotropic geometries in modified gravitational frameworks
present radically different cosmological behaviors compared to the simple
isotropic scenarios.
|
We study the theory of the Lorentz group (1/2,0)+(0,1/2) representation in
the helicity basis of the corresponding 4-spinors. As Berestetski, Lifshitz and
Pitaevskii mentioned, the helicity eigenstates are not the parity eigenstates.
Relations with the Gelfand-Tsetlin-Sokolik-type quantum field theory are
discussed. Finally, a new form of the parity operator (which commutes with the
Hamiltonian) is proposed in the Fock space.
|
In service-oriented architectures, accurately predicting the Quality of
Service (QoS) is crucial for maintaining reliability and enhancing user
satisfaction. However, significant challenges remain due to existing methods
always overlooking high-order latent collaborative relationships between users
and services and failing to dynamically adjust feature learning for every
specific user-service invocation, which are critical for learning accurate
features. Additionally, reliance on RNNs for capturing QoS evolution hampers
models' ability to detect long-term trends due to difficulties in managing
long-range dependencies. To address these challenges, we propose the
\underline{T}arget-Prompt \underline{O}nline \underline{G}raph
\underline{C}ollaborative \underline{L}earning (TOGCL) framework for
temporal-aware QoS prediction. TOGCL leverages a dynamic user-service
invocation graph to model historical interactions, providing a comprehensive
representation of user-service relationships. Building on this graph, it
develops a target-prompt graph attention network to extract online deep latent
features of users and services at each time slice, simultaneously considering
implicit collaborative relationships between target users/services and their
neighbors, as well as relevant historical QoS values. Additionally, a
multi-layer Transformer encoder is employed to uncover temporal feature
evolution patterns of users and services, leading to temporal-aware QoS
prediction. Extensive experiments conducted on the WS-DREAM dataset demonstrate
that our proposed TOGCL framework significantly outperforms state-of-the-art
methods across multiple metrics, achieving improvements of up to 38.80\%. These
results underscore the effectiveness of the TOGCL framework for precise
temporal QoS prediction.
|
In this paper we explore the functional correlation approach to operational
risk. We consider networks with heterogeneous a-priori conditional and
unconditional failure probability. In the limit of sparse connectivity,
self-consistent expressions for the dynamical evolution of order parameters are
obtained. Under equilibrium conditions, expressions for the stationary states
are also obtained. The consequences of the analytical theory developed are
analyzed using phase diagrams. We find co-existence of operational and
non-operational phases, much as in liquid-gas systems. Such systems are
susceptible to discontinuous phase transitions from the operational to
non-operational phase via catastrophic breakdown. We find this feature to be
robust against variation of the microscopic modelling assumptions.
|
Sentiment analysis is the most basic NLP task to determine the polarity of
text data. There has been a significant amount of work in the area of
multilingual text as well. Still, hate and offensive speech detection faces a challenge due to the inadequate availability of data, especially for Indian languages like Hindi and Marathi. In this work, we consider hate and offensive speech detection in Hindi and Marathi texts. The problem is formulated as a text classification task using state-of-the-art deep learning approaches. We explore different deep learning architectures like CNN, LSTM, and variations of BERT like multilingual BERT, IndicBERT, and monolingual RoBERTa. The basic models based on CNN and LSTM are augmented with FastText word embeddings. We use the HASOC 2021 Hindi and Marathi hate speech datasets to compare these algorithms. The Marathi dataset consists of binary labels and the Hindi dataset consists of binary as well as more fine-grained labels. We show that the
transformer-based models perform the best and even the basic models along with
FastText embeddings give a competitive performance. Moreover, with normal
hyper-parameter tuning, the basic models perform better than BERT-based models
on the fine-grained Hindi dataset.
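A minimal sketch of fine-tuning a multilingual BERT-style classifier with the HuggingFace Trainer is given below for orientation; the example sentences, binary label scheme, and hyperparameters are placeholders and do not reproduce the HASOC 2021 setup or the models compared in the paper.

import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

texts = ["udaharan vakya 1", "udaharan vakya 2"]   # hypothetical sentences
labels = [0, 1]                                    # binary: non-offensive / offensive

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tok(texts, truncation=True, padding=True, max_length=128)

class HateSpeechDataset(torch.utils.data.Dataset):
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=HateSpeechDataset(enc, labels)).train()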
|
Standard approaches to the theory of financial markets are based on
equilibrium and efficiency. Here we develop an alternative based on concepts
and methods developed by biologists, in which the wealth invested in a
financial strategy is like the abundance of a species. We study a toy model of
a market consisting of value investors, trend followers and noise traders. We
show that the average returns of strategies are strongly density dependent,
i.e. they depend on the wealth invested in each strategy at any given time. In
the absence of noise the market would slowly evolve toward an efficient
equilibrium, but the statistical uncertainty in profitability (which is
adjusted to match real markets) makes this noisy and uncertain. Even in the
long term, the market spends extended periods of time away from perfect
efficiency. We show how core concepts from ecology, such as the community
matrix and food webs, give insight into market behavior. The wealth dynamics of
the market ecology explain how market inefficiencies spontaneously occur and
give insight into the origins of excess price volatility and deviations of
prices from fundamental values.
|
We introduce the notation $Q(1) * ... * Q(n) * L(F_r)$ for von Neumann algebra II_1 factors, where $r$ is allowed to be negative. This notation is defined by rescalings of free products of II_1 factors, and is proved to be consistent with known results and natural operations. We also give two statements which we prove are equivalent to the isomorphism of the free group factors.
|
In task fMRI analysis, OLS is typically used to estimate task-induced
activation in the brain. Since task fMRI residuals often exhibit temporal
autocorrelation, it is common practice to perform prewhitening prior to OLS to
satisfy the assumption of residual independence, equivalent to GLS. While
theoretically straightforward, a major challenge in prewhitening in fMRI is
accurately estimating the residual autocorrelation at each location of the
brain. Assuming a global autocorrelation model, as in several fMRI software
programs, may under- or over-whiten particular regions and fail to achieve
nominal false positive control across the brain. Faster multiband acquisitions
require more sophisticated models to capture autocorrelation, making
prewhitening more difficult. These issues are becoming more critical now
because of a trend towards subject-level analysis, where prewhitening has a
greater impact than in group-average analyses. In this article, we first
thoroughly examine the sources of residual autocorrelation in multiband task
fMRI. We find that residual autocorrelation varies spatially throughout the
cortex and is affected by the task, the acquisition method, modeling choices,
and individual differences. Second, we evaluate the ability of different
AR-based prewhitening strategies to effectively mitigate autocorrelation and
control false positives. We find that allowing the prewhitening filter to vary
spatially is the most important factor for successful prewhitening, even more
so than increasing AR model order. To overcome the computational challenge
associated with spatially variable prewhitening, we developed a computationally
efficient R implementation based on parallelization and fast C++ backend code.
This implementation is included in the open source R package BayesfMRI.
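The AR-based prewhitening step discussed above can be summarized, for a single voxel, by the following sketch (an illustration, not the BayesfMRI implementation; the AR order and the Yule-Walker fit are assumptions): fit OLS, estimate AR(p) coefficients from the residuals, apply the whitening filter to both the data and the design, and refit.

import numpy as np

def ar_prewhiten(y, X, p=3):
    """Voxel-level AR(p) prewhitening sketch (equivalent to GLS under AR(p)).

    1. Fit OLS and take residuals r.
    2. Solve the Yule-Walker equations for the AR(p) coefficients a.
    3. Filter data and design with e_t = v_t - sum_k a_k v_{t-k} and refit OLS.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    acov = np.array([np.dot(r[:len(r) - k], r[k:]) / len(r) for k in range(p + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, acov[1:])           # Yule-Walker AR coefficients

    def whiten(v):
        w = v.astype(float).copy()
        for k in range(1, p + 1):
            w[k:] -= a[k - 1] * v[:-k]
        return w[p:]                           # drop the first p samples

    Xw = np.column_stack([whiten(X[:, j]) for j in range(X.shape[1])])
    yw = whiten(y)
    return np.linalg.lstsq(Xw, yw, rcond=None)[0]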
|
Model waveforms are used in gravitational wave data analysis to detect and
then to measure the properties of a source by matching the model waveforms to
the signal from a detector. This paper derives accuracy standards for model
waveforms which are sufficient to ensure that these data analysis applications
are capable of extracting the full scientific content of the data, but without
demanding excessive accuracy that would place undue burdens on the model
waveform simulation community. These accuracy standards are intended primarily
for broad-band model waveforms produced by numerical simulations, but the
standards are quite general and apply equally to such waveforms produced by
analytical or hybrid analytical-numerical methods.
|
By using the Zubarev nonequilibrium statistical operator method, and the
Liouville equation with fractional derivatives, a generalized diffusion
equation with fractional derivatives is obtained within the Renyi statistics.
Averaging in the generalized diffusion coefficient is performed with a power
distribution with the Renyi parameter $q$.
|
We show how to improve the efficiency of the computation of fast Fourier
transforms over F_p where p is a word-sized prime. Our main technique is
optimisation of the basic arithmetic, in effect decreasing the total number of
reductions modulo p, by making use of a redundant representation for integers
modulo p. We give performance results showing a significant improvement over
Shoup's NTL library.
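For reference, a plain radix-2 number-theoretic transform over F_p is sketched below; every "% p" in the butterfly is one of the modular reductions whose number the redundant-representation trick decreases. The optimization itself is not implemented here, and the small example prime and root of unity are assumptions chosen for illustration.

def ntt(a, p, w):
    """Radix-2 Cooley-Tukey NTT of a over F_p, where w is a primitive
    len(a)-th root of unity mod p and len(a) is a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], p, w * w % p)
    odd = ntt(a[1::2], p, w * w % p)
    out = [0] * n
    wk = 1
    for k in range(n // 2):
        t = wk * odd[k] % p            # reductions like these are the target
        out[k] = (even[k] + t) % p     # of the redundant-representation trick
        out[k + n // 2] = (even[k] - t) % p
        wk = wk * w % p
    return out

# Example: p = 17, and 9 is a primitive 8th root of unity mod 17
print(ntt([1, 2, 3, 4, 0, 0, 0, 0], 17, 9))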
|
Accidental exposure to overdose ionizing radiation will inevitably lead to
severe biological damage, thus detecting and localizing radiation is essential.
Traditional measurement techniques are generally restricted to a limited detection range of a few centimeters, posing a great risk to operators. The potential for remote sensing makes femtosecond laser filament technology a great candidate for addressing this challenge. Here we propose a novel
filament-based ionizing radiation sensing technology (FIRST), and clarify the
interaction mechanism between filaments and ionizing radiation. Specifically,
it is demonstrated that the energetic electrons and ions produced by {\alpha}
radiation in air can be effectively accelerated within the filament, serving as
seed electrons, which will markedly enhance nitrogen fluorescence. The extended
nitrogen fluorescence lifetime of ~1 ns is also observed. These findings
provide insights into the intricate interaction among an ultra-strong light field, plasma and an energetic particle beam, and pave the way for the remote sensing of
ionizing radiation.
|
A chiral extrapolation of the light vector meson masses in the up, down and
strange quark masses of QCD is presented. We apply an effective chiral
Lagrangian based on the hadrogenesis conjecture to QCD lattice ensembles of
PACS-CS, QCDSF-UKQCD and HSC in the strict isospin limit. The leading-order low-energy constants are determined from a global fit to the lattice data set. We use the pion and kaon masses as well as the size of the finite volume as lattice ensemble parameters only. The quark-mass ratios on the various ensembles are then predicted in terms of our set of low-energy constants. An accurate
reproduction of the vector meson masses and quark-mass ratios as provided by
the lattice collaborations and the Particle Data Group (PDG) is achieved.
Particular attention is paid to the $\omega$-$\phi$ mixing phenomenon, which is
demonstrated to show a strong quark mass dependence.
|
The Halting problem of a quantum computer is considered. It is shown that if
halting of a quantum computer takes place the associated dynamics is described
by an irreversible operator.
|
Synchrotron radiation sources are immensely useful tools for scientific research and many practical applications. Currently, state-of-the-art
synchrotrons rely on conventional accelerators, where electrons are accelerated
in a straight line and radiate in bending magnets or other insertion devices.
However, these facilities are usually large and costly. Here, we propose a
compact all-optical synchrotron-like radiation source based on laser-plasma
acceleration either in a straight or in a curved plasma channel. With the laser
pulse off-axially injected in a straight channel, the centroid oscillation of
the pulse causes a wiggler motion of the whole accelerating structure including
the trapped electrons, leading to strong synchrotron-like radiations with
tunable spectra. It is further shown that a ring-shaped synchrotron is possible
in a curved plasma channel. Due to the intense acceleration and bending fields
inside plasmas, the central part of the source can be made palm-sized. With their potential for high flexibility and tunability, such compact light sources, once realized, would find applications in a wide range of areas and make up for the shortage of large synchrotron radiation facilities.
|
This paper considers variational inequalities (VI) defined by the conditional
value-at-risk (CVaR) of uncertain functions and provides three stochastic
approximation schemes to solve them. All methods use an empirical estimate of
the CVaR at each iteration. The first algorithm constrains the iterates to the
feasible set using projection. To overcome the computational burden of
projections, the second one handles inequality and equality constraints
defining the feasible set differently. Particularly, projection onto the
affine subspace defined by the equality constraints is achieved by matrix
multiplication and inequalities are handled by using penalty functions.
Finally, the third algorithm discards projections altogether by introducing
multiplier updates. We establish asymptotic convergence of all our schemes to
any arbitrary neighborhood of the solution of the VI. A simulation example
concerning a network routing game illustrates our theoretical findings.
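The first (projection-based) scheme can be sketched as follows, with an empirical CVaR estimate recomputed from fresh samples at every iteration; the operator F, the Gaussian uncertainty model, the box feasible set, and the step-size schedule are all assumptions made only for illustration.

import numpy as np

def empirical_cvar(samples, alpha):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of samples,
    computed componentwise along axis 0."""
    m = samples.shape[0]
    k = max(1, int(np.ceil((1 - alpha) * m)))
    return np.mean(np.sort(samples, axis=0)[-k:], axis=0)

def cvar_vi_projection(F, project, x0, alpha=0.9, n_iter=2000, m=200, step0=1.0):
    """Projected stochastic approximation for a VI defined by the CVaR of an
    uncertain operator F(x, xi) (illustrative sketch of the first scheme)."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        xi = rng.normal(size=(m,) + x.shape)      # assumed uncertainty model
        v = empirical_cvar(F(x, xi), alpha)       # CVaR estimate of the operator
        x = project(x - (step0 / k) * v)          # projected SA step
    return x

# Toy example: F(x, xi) = x + xi on the box [-1, 1]^2
F = lambda x, xi: x + xi
project = lambda z: np.clip(z, -1.0, 1.0)
print(cvar_vi_projection(F, project, x0=np.zeros(2)))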
|
There are many cases in collider physics and elsewhere where a calibration
dataset is used to predict the known physics and/or noise of a target region
of phase space. This calibration dataset usually cannot be used out-of-the-box
but must be tweaked, often with conditional importance weights, to be maximally
realistic. Using resonant anomaly detection as an example, we compare a number
of alternative approaches based on transporting events with normalizing flows
instead of reweighting them. We find that the accuracy of the morphed
calibration dataset depends on the degree to which the transport task is set up
to carry out optimal transport, which motivates future research into this area.
|
We use a combination of dynamical mean-field model calculations and LDA+U
material specific calculations to investigate the low temperature phase
transition in the compounds from the (Pr$_{1-y}$R$_y$)$_x$Ca$_{1-x}$CoO$_3$
(R=Nd, Sm, Eu, Gd, Tb, Y) family (PCCO). The transition, marked by a sharp peak
in the specific heat, leads to an exponential increase of dc resistivity and a
drop of the magnetic susceptibility, but no order parameter has been identified
yet. We show that condensation of spin-triplet, atomic-size excitons provides a
consistent explanation of the observed physics. In particular, it explains the
exchange splitting on the Pr sites and the simultaneous Pr valence transition.
The excitonic condensation in PCCO is an example of a general behavior expected
in certain systems in the proximity of a spin-state transition.
|
The paper develops Newton's method of finding multiple eigenvalues with one
Jordan block and corresponding generalized eigenvectors for matrices dependent
on parameters. It computes the nearest value of a parameter vector with a
matrix having a multiple eigenvalue of given multiplicity. The method also
works in the whole matrix space (in the absence of parameters). The approach is
based on the versal deformation theory for matrices. Numerical examples are
given. The implementation of the method in MATLAB code is available.
|
We investigate a stability equation involving two-component eigenfunctions
which is associated with a potential model in terms of two coupled real scalar
fields, which presents a non-BPS topological defect.
|
Among systems that display generic scale invariance, those whose asymptotic
properties are anisotropic in space (strong anisotropy, SA) have received relatively less attention, especially in the context of kinetic roughening
for two-dimensional surfaces. This is in contrast with their experimental
ubiquity, e.g. in the context of thin film production by diverse techniques.
Based on exact results for integrable (linear) cases, here we formulate a SA
Ansatz that, albeit equivalent to existing ones borrowed from equilibrium
critical phenomena, is more naturally adapted to the type of observables that
are measured in experiments on the dynamics of thin films, such as one and
two-dimensional height structure factors. We test our Ansatz on a paradigmatic
nonlinear stochastic equation displaying strong anisotropy like the Hwa-Kardar
equation [Phys. Rev. Lett. 62, 1813 (1989)], that was initially proposed to
describe the interface dynamics of running sand piles. A very important role to
elucidate its SA properties is played by an accurate (Gaussian) approximation
through a non-local linear equation that shares the same asymptotic properties.
|
We study the performance of a commercially available large language model
(LLM) known as ChatGPT on math word problems (MWPs) from the dataset DRAW-1K.
To our knowledge, this is the first independent evaluation of ChatGPT. We found
that ChatGPT's performance changes dramatically based on the requirement to
show its work, failing 20% of the time when it provides work compared with 84%
when it does not. Further, several factors about MWPs, relating to the number of unknowns and the number of operations, lead to a higher probability of failure when compared with the prior; specifically, we note (across all experiments) that the probability of failure increases linearly with the number of addition and subtraction operations. We have also released the dataset of ChatGPT's responses to the MWPs to support further work on the characterization of LLM performance, and we present baseline machine learning models to predict whether ChatGPT can correctly answer an MWP.
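A baseline of the kind mentioned above (predicting whether ChatGPT fails an MWP from simple problem features) can be sketched with scikit-learn; the feature columns and the synthetic data below are placeholders, not the released dataset's schema.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features: one row per math word problem with counts of unknowns
# and of each operation type (hypothetical columns), and a binary label
# indicating whether the model failed the problem.
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(500, 4))        # [unknowns, additions, subtractions, multiplications]
y = (rng.random(500) < 0.1 + 0.1 * (X[:, 1] + X[:, 2])).astype(int)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("per-feature coefficients:", clf.coef_)   # additions/subtractions dominate here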
|
The excursion set of a $C^2$ smooth random field carries relevant information
in its various geometric measures. From a computational viewpoint, one never
has access to the continuous observation of the excursion set, but rather to
observations at discrete points in space. It has been reported that for
specific regular lattices of points in dimensions 2 and 3, the usual estimate
of the surface area of the excursions remains biased even when the lattice
becomes dense in the domain of observation. In the present work, under the key
assumptions of stationarity and isotropy, we demonstrate that this limiting
bias is invariant to the locations of the observation points. Indeed, we
identify an explicit formula for the bias, showing that it only depends on the
spatial dimension $d$. This enables us to define an unbiased estimator for the
surface area of excursion sets that are approximated by general tessellations
of polytopes in $\mathbb{R}^d$, including Poisson-Voronoi tessellations. We
also establish a joint central limit theorem for the surface area and volume
estimates of excursion sets observed over hypercubic lattices.
|
Ultrasonic agitation is a proven method for breaking down layered materials
such as MoS2 into single or few layer nanoparticles. In this experiment, MoS2
powder is sonicated in isopropanol for an extended period of time in an attempt
to create particles of the smallest possible size. As expected, the process
yielded a significant quantity of nanoscale MoS2 in the form of finite layer
sheets with lateral dimensions as small as a few tens of nanometers. Although
no evidence was found to indicate a larger the longer sonication times resulted
in a significant increase in yield of single layer MoS2, the increased
sonication did result in the formation of several types of carbon allotropes in
addition to the sheets of MoS2. These carbon structures appear to originate
from the breakdown of the isopropanol and consist of finite layer graphite
platelets as well as a large number of multi-walled fullerenes, also known as
carbon onions. Both the finite layer graphite and MoS2 nanoplatelets were both
found to be heavily decorated with carbon onions. However, isolated clusters of
carbon onions could also be found. Our results show that liquid exfoliation of
MoS2 is not only useful for forming finite-layer MoS2, but also for creating carbon onions at room temperature.
|
The thick-target yield (TTY) is a macroscopic quantity reflected by nuclear
reactions and matter properties of targets. In order to evaluate TTYs on
radioactive targets, we suggest a conversion method from inverse kinematics
corresponding to the reaction of radioactive beams on stable targets. The
method to deduce the TTY is theoretically derived from inverse kinematics. We
apply the method to the natCu(12C,X)24Na reaction to confirm its applicability. In addition, it is applied to the 137Cs + 12C reaction as an example of a radioactive system, and a conversion coefficient for a TTY measurement is discussed.
|
Motivated by a recent experiment [J. H. Han, et al., Phys. Rev. Lett. 122, 065303 (2019)], we investigate the many-body physics of interacting fermions in a synthetic Hall tube, using the state-of-the-art density-matrix renormalization-group numerical method. Since the inter-leg couplings of this synthetic Hall tube generate an interesting spin-tensor Zeeman field, exotic topological and magnetic properties occur. In particular, four new quantum phases,
such as nontopological spin-vector and -tensor paramagnetic insulators, and
topological and nontopological spin-mixed paramagnetic insulators, are
predicted by calculating entanglement spectrum, entanglement entropies, energy
gaps, and local magnetic orders with 3 spin-vectors and 5 spin-tensors.
Moreover, the topological magnetic phase transitions induced by the
interaction as well as the inter-leg couplings are also revealed. Our results
pave a new way to explore many-body (topological) states induced by both the
spiral spin-vector and -tensor Zeeman fields.
|
The complementarity of LTE (4G) and Wi-Fi networks establishes a heterogeneous system of wireless and mobile networks. We study and analyze the optimal performance of this heterogeneous system based on the bit rate, the blocking probability and user connection loss. Random Waypoint (RWP) is the user mobility model. Users are provided with mobile terminals equipped with multiple access interfaces. We developed a Markov chain to estimate the performance obtained from the heterogeneous network system, which allowed us to propose an average bit rate value in a sub-zone of this system and then the average blocking probability of user connections in this zone. We also propose a sensitivity factor for the maximal decrease of these network selection parameters; this factor provides information about congestion and decongestion in the heterogeneous network system.
|
Stress granules (SG) are droplets of proteins and RNA that form in the cell
cytoplasm during stress conditions. We consider minimal models of stress
granule formation based on the mechanism of phase separation regulated by
ATP-driven chemical reactions. Motivated by experimental observations, we
identify a minimal model of SG formation triggered by ATP depletion. Our
analysis indicates that ATP is continuously hydrolysed to deter SG formation
under normal conditions, and we provide specific predictions that can be tested
experimentally.
|
We study entanglement entropy in a non-relativistic Schr\"{o}dinger field
theory at finite temperature and electric charge using the principle of
gauge/gravity duality. The spacetime geometry is obtained from a charged AdS
black hole by a null Melvin twist. By using an appropriate modification of the
holographic Ryu-Takayanagi formula, we calculate the entanglement entropy,
mutual information, and entanglement wedge cross-section for the simplest strip
subsystem. The entanglement measures show non-trivial dependence on the black
hole parameters.
|
We study two and three meson decays of the tau lepton within the framework of
the Resonance Chiral Theory, that is based on the following properties of QCD:
its chiral symmetry in the massless case, its large-N_C limit, and the
asymptotic behaviour it demands of the relevant form factors. Most of the
couplings in the Lagrangian are determined this way rendering the theory
predictive. Our outcomes can be tested thanks to the combination of a very good
experimental effort (current and forthcoming, at B- and tau-charm-factories)
and the very accurate devoted Monte Carlo generators.
|
In the last decade, it has been understood that quantum networks involving
several independent sources of entanglement, distributed and measured by
several parties, allow for completely novel forms of nonclassical quantum
correlations when entangled measurements are performed. Here, we
experimentally obtain quantum correlations in a triangle network structure, and
provide solid evidence of its nonlocality. Specifically, we first obtain the
elegant distribution proposed in (Entropy 21, 325) by performing a six-photon
experiment. Then, we justify its nonlocality based on machine learning tools to
estimate the distance of the experimentally obtained correlation to the local
set, and through the violation of a family of conjectured inequalities tailored
for the triangle network.
|
Data-driven learning of partial differential equations' solution operators
has recently emerged as a promising paradigm for approximating the underlying
solutions. The solution operators are usually parameterized by deep learning
models that are built upon problem-specific inductive biases. An example is a
convolutional or a graph neural network that exploits the local grid structure
where functions' values are sampled. The attention mechanism, on the other
hand, provides a flexible way to implicitly exploit the patterns within inputs,
and furthermore, the relationship between arbitrary query locations and inputs. In
this work, we present an attention-based framework for data-driven operator
learning, which we term Operator Transformer (OFormer). Our framework is built
upon self-attention, cross-attention, and a set of point-wise multilayer
perceptrons (MLPs), and thus it makes few assumptions on the sampling pattern
of the input function or query locations. We show that the proposed framework
is competitive on standard benchmark problems and can flexibly be adapted to
randomly sampled inputs.
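To make the attention-based idea concrete, here is a toy numpy sketch of
cross-attention between encoded input samples and arbitrary query coordinates;
the single-head design, random projections, and dimensions are illustrative
assumptions and not the actual OFormer architecture.

# Minimal sketch of the cross-attention idea behind attention-based operator
# learning: encoded input-function values attend to arbitrary query
# coordinates, so no fixed grid is assumed. Shapes, dimensions, and the
# single-head design are illustrative; the real architecture uses multi-head
# attention, point-wise MLPs, and learned encodings.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """queries: (Nq, d); keys/values: (Nin, d) -> (Nq, d)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)       # (Nq, Nin)
    return softmax(scores, axis=-1) @ values     # (Nq, d)

rng = np.random.default_rng(0)
d = 16
x_in = rng.uniform(size=(100, 1))                # sampled input locations
u_in = np.sin(2 * np.pi * x_in)                  # input function values
x_q = rng.uniform(size=(37, 1))                  # arbitrary query locations

# toy "learnable" projections (random here) lifting coordinates/values to d dims
W_q = rng.normal(size=(1, d))
W_k = rng.normal(size=(1, d))
W_v = rng.normal(size=(1, d))
out = cross_attention(x_q @ W_q, x_in @ W_k, u_in @ W_v)
print(out.shape)   # (37, 16): latent features at the query locations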
|
The dynamics of a quantum particle bound by an accelerating delta-functional
potential is investigated. Three cases are considered, using the reference
frame moving along with the {\delta}-function, in which the acceleration is
converted into the additional linear potential. (i) A stationary regime, which
corresponds to a resonance state, with a minimum degree of delocalization,
supported by the accelerating potential trap. (ii) A pulling scenario: an
initially bound particle follows the accelerating delta-functional trap, within
a finite time. (iii) The pushing scenario: the particle, which was initially
localized to the right of the repulsive delta-function, is shoved to the right
by the accelerating potential. For the latter two scenarios, the lifetime of
the trapped particle and the largest velocity to which it can be accelerated
while staying trapped are found. Analytical approximations are developed for
the cases of small and large accelerations in the pulling regime, and also for
a small acceleration in the stationary situation, and in the regime of pushing.
The same regimes may be realized by Airy-like planar optical beams guided by a
narrow bending potential channel or crest. Physical estimates are given for an
atom steered by a stylus of a scanning tunneling microscope (STM), and for the
optical beam guided by a bending stripe.
|
We study gluon scattering amplitudes in N=4 super Yang-Mills theory at strong
coupling via the AdS/CFT correspondence. We solve numerically the discretized
Euler-Lagrange equations on the square worldsheet for the minimal surface with
light-like boundaries in AdS spacetime. We evaluate the area of the surface for
the 4, 6 and 8-point amplitudes using worldsheet and radial cut-off
regularizations. Their infrared singularities in the cut-off regularization are
found to agree with the analytical results near the cusp to within 5% at
520x520 lattice points.
|
Frequency comb based multidimensional coherent spectroscopy is a novel
optical method that enables high resolution measurement in a short acquisition
time. The method's resolution makes multidimensional coherent spectroscopy
relevant for atomic systems that have narrow resonances. We use double-quantum
multidimensional coherent spectroscopy to reveal collective hyperfine
resonances in rubidium vapor at 100 °C induced by dipole-dipole interactions.
We observe tilted lineshapes in the double-quantum 2D spectra, which have never
been reported for Doppler-broadened systems. The tilted lineshapes suggest that
the signal is predominantly from interacting atoms that have near-zero
relative velocity.
|
We present a complete one-loop computation of the $H^\pm \rightarrow W^\pm Z$
decay in the aligned two-Higgs-doublet model. The constraints from the
electroweak precision observables, perturbative unitarity, vacuum stability and
flavour physics are all taken into account along with the latest Large Hadron
Collider searches for the charged Higgs. It is observed that a large
enhancement of the branching ratio can be obtained in the limit where there is
a large splitting between the charged and pseudo-scalar Higgs masses as well as
for the largest allowed values of the alignment parameter $\varsigma_u$. We
find that the maximum possible branching ratio in the case of a large mass
splitting between $m_{H^\pm}$ and $m_A$ is $\approx 10^{-3}$ for $m_{H^\pm} \in
(200,700)$ GeV which is in the reach of the high luminosity phase of the Large
Hadron Collider.
|
We study the flux parameter spaces for semi-realistic supersymmetric
Pati-Salam models in the AdS vacua on Type IIA orientifold and realistic
supersymmetric Pati-Salam models in the Minkowski vacua on Type IIB
orientifold. Because the fluxes can be very large, we show explicitly that
there indeed exists a huge number of semi-realistic Type IIA and realistic Type
IIB flux models. In the Type IIA flux models, in the very large flux limit, the
theory can become weakly coupled and the AdS vacua can approach the
Minkowski vacua. In a series of realistic Type IIB flux models, at the string
scale, the gauge symmetry can be broken down to the Standard Model (SM) gauge
symmetry, the gauge coupling unification can be achieved naturally, all the
extra chiral exotic particles can be decoupled, and the observed SM fermion
masses and mixings can be obtained as well. In particular, the real parts of
the dilaton, K\"ahler moduli, and the unified gauge coupling are independent of
the very large fluxes. The very large fluxes only affect the real and/or
imaginary parts of the complex structure moduli, and/or the imaginary parts of
the dilaton and K\"ahler moduli. However, these semi-realistic Type IIA and
realistic Type IIB flux models cannot be populated in the string landscape.
|
In this paper we propose to interpret the large discretization artifacts
affecting the neutral pion mass in maximally twisted lattice QCD simulations as
O(a^2) effects whose magnitude is roughly proportional to the modulus square of
the (continuum) matrix element of the pseudoscalar density operator between
vacuum and one-pion state. The numerical size of this quantity is determined by
the dynamical mechanism of spontaneous chiral symmetry breaking and turns out
to be substantially larger than its natural magnitude set by the value of
Lambda_QCD.
|
We match the electroweak chiral Lagrangian with two singlet scalars to the
next-to-minimal composite Higgs model with $ SO(6)/SO(5) $ coset structure and
extract the scalar divergences to one loop. Assuming the additional scalar to
be heavy, we integrate it out and perform a matching to the well-established
electroweak chiral Lagrangian with one light Higgs.
|
Superoscillations are band-limited functions with the peculiar characteristic
that they can oscillate with a frequency arbitrarily faster than their fastest
Fourier component. First anticipated in different contexts, such as optics or
radar physics, superoscillations have recently garnered renewed interest after
more modern studies have successfully linked their properties to a novel
quantum measurement theory, the weak value scheme. Under this framework,
superoscillations have quickly developed into a fruitful area of mathematical
study whose applications have evolved from the theoretical to the practical
world. Their mathematical understanding, though still incomplete, recognises
such oscillations will only arise in regions where the function is extremely
small, establishing an inherent limitation to their applicability. This paper
aims to provide a detailed look into the current state of research, both
theoretical and practical, on the topic of superoscillations, as well as
introducing the two-state vector formalism under which the weak value scheme
may be realised.
|
In this paper, we prove the normal scalar curvature conjecture and the
B\"ottcher-Wenzel conjecture.
|
We return to the well-known problem of the interpretation of the Schr\"odinger
operator with the pseudopotential being the first derivative of the Dirac delta
function. We show that the problem in its conventional formulation contains
hidden parameters and that the choice of the proper selfadjoint operator is
ambiguous. We study the asymptotic behavior of spectra and
eigenvectors of the Hamiltonians with increasing smooth potentials perturbed by
short-range potentials. Appropriate solvable models are constructed and the
corresponding approximation theorems are proved. We introduce the concepts of
the resonance set and the coupling function, which are spectral characteristics
of the shape of squeezed potentials. The selfadjoint operators in the solvable
models are determined by means of the resonance set and the coupling function.
|
With the astonishing rate at which genomic and metagenomic sequence data sets
are accumulating, there are many reasons to constrain the data analyses. One
approach to such constrained analyses is to focus on select subsets of gene
families that are particularly well suited for the tasks at hand. Such gene
families have generally been referred to as marker genes. We are particularly
interested in identifying and using such marker genes for phylogenetic and
phylogeny-driven ecological studies of microbes and their communities. We
therefore refer to these as PhyEco (for phylogenetic and phylogenetic ecology)
markers. The dual use of these PhyEco markers means that we needed to develop
and apply a set of somewhat novel criteria for identification of the best
candidates for such markers. The criteria we focused on included universality
across the taxa of interest, ability to be used to produce robust phylogenetic
trees that reflect as much as possible the evolution of the species from which
the genes come, and low variation in copy number across taxa. We describe here
an automated protocol for identifying potential PhyEco markers from a set of
complete genome sequences. The protocol combines rapid searching, clustering
and phylogenetic tree building algorithms to generate protein families that
meet the criteria listed above. We report here the identification of PhyEco
markers for different taxonomic levels including 40 for all bacteria and
archaea, 114 for all bacteria, and many more for some of the individual phyla
of bacteria. This new list of PhyEco markers should allow much more detailed
automated phylogenetic and phylogenetic ecology analyses of these groups than
possible previously.
|
Existence and uniqueness as well as the iterative approximation of fixed
points of enriched almost contractions in Banach spaces are studied. The
obtained results are generalizations of the great majority of metric fixed
point theorems, in the setting of a Banach space. The main tool used in the
investigations is to work with the averaged operator $T_\lambda$ instead of the
original operator $T$. The effectiveness of the new results thus derived is
illustrated by appropriate examples. An application of the strong convergence
theorems to solving a variational inequality is also presented.
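For context, the averaging construction referred to above is of Krasnoselskij
type; a minimal sketch (the precise value of $\lambda$ used in the paper
depends on the enrichment constants) is
\[
  T_{\lambda}x=(1-\lambda)\,x+\lambda\,Tx,\qquad \lambda\in(0,1],
\]
so that $T_{\lambda}$ and $T$ share the same fixed points, which are then
approximated by the Krasnoselskij iteration $x_{n+1}=T_{\lambda}x_{n}$.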
|
Archiving Web pages into themed collections is a method for ensuring these
resources are available for posterity. Services such as Archive-It exist to
allow institutions to develop, curate, and preserve collections of Web
resources. Understanding the contents and boundaries of these archived
collections is a challenge for most people, resulting in the paradox that the
larger the collection, the harder it is to understand. Meanwhile, as the sheer
volume of data grows on the Web, "storytelling" is becoming a popular technique
in social media for selecting Web resources to support a particular narrative
or "story". There are multiple stories that can be generated from an archived
collection with different perspectives about the collection. For example, a
user may want to see a story that is composed of the key events from a specific
Web site, a story that is composed of the key events regardless of
the sources, or how a specific event at a specific point in time was covered by
different Web sites, etc. In this paper, we provide different case studies for
possible types of stories that can be extracted from a collection. We also
provide the definitions and models of these types of stories.
|
We study distributed computing of the truncated singular value decomposition
problem. We develop an algorithm that we call \texttt{LocalPower} for improving
communication efficiency. Specifically, we uniformly partition the dataset
among $m$ nodes and alternate between multiple (precisely $p$) local power
iterations and one global aggregation. In the aggregation, we propose to weight
each local eigenvector matrix with orthogonal Procrustes transformation (OPT).
As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with
$\pm 1$ entries as weights, has better computation complexity and stability in
experiments. We theoretically show that under certain assumptions
\texttt{LocalPower} lowers the required number of communications by a factor of
$p$ to reach a constant accuracy. We also show that the strategy of
periodically decaying $p$ helps obtain high-precision solutions. We conduct
experiments to demonstrate the effectiveness of \texttt{LocalPower}.
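A minimal numpy sketch of the LocalPower loop described above, assuming a
uniform row partition, sign-fixing aggregation, and illustrative values for the
number of nodes, local iterations, rank, and communication rounds; the paper's
exact update and theoretical conditions are not reproduced here.

# Minimal sketch of the LocalPower idea: each of m nodes runs p local power
# iterations on its own data partition, then a global aggregation averages the
# local eigenvector matrices after sign-fixing (a +/-1 diagonal weighting used
# as a cheap surrogate for the orthogonal Procrustes transformation).
import numpy as np

def local_power(A_i, V, p):
    """p local power iterations of A_i^T A_i applied to the basis V."""
    for _ in range(p):
        V, _ = np.linalg.qr(A_i.T @ (A_i @ V))
    return V

def sign_fix(V_i, V_ref):
    """Flip column signs of V_i to align with a reference basis V_ref."""
    signs = np.sign(np.sum(V_i * V_ref, axis=0))
    signs[signs == 0] = 1.0
    return V_i * signs

def local_power_svd(A, m=4, p=4, k=5, rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(A, m, axis=0)               # uniform row partition
    V = np.linalg.qr(rng.normal(size=(A.shape[1], k)))[0]
    for _ in range(rounds):
        locals_ = [local_power(A_i, V, p) for A_i in parts]
        aligned = [sign_fix(V_i, locals_[0]) for V_i in locals_]
        V, _ = np.linalg.qr(np.mean(aligned, axis=0))  # global aggregation
    return V                                           # approx. top-k right singular vectors

A = np.random.default_rng(1).normal(size=(200, 50))
print(local_power_svd(A).shape)  # (50, 5)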
|
In spintronic devices, the two main approaches to actively control the
electrons' spin degree of freedom involve either static magnetic or electric
fields. An alternative avenue relies on the application of optical fields to
generate spin currents, which promises to bolster spin-device performance
allowing for significantly faster and more efficient spin logic. To date,
research has mainly focused on the optical injection of spin currents through
the photogalvanic effect, and little is known about the direct optical control
of the intrinsic spin splitting. Here, to explore the all-optical manipulation
of a material's spin properties, we consider the Rashba effect at a
semiconductor interface. The Rashba effect has long been a staple in the field
of spintronics owing to its superior tunability, which allows the observation
of fully spin-dependent phenomena, such as the spin-Hall effect, spin-charge
conversion, and spin-torque in semiconductor devices. In this work, by means of
time and angle-resolved photoemission spectroscopy (TR-ARPES), we demonstrate
that an ultrafast optical excitation can be used to manipulate the
Rashba-induced spin splitting of a two-dimensional electron gas (2DEG)
engineered at the surface of the topological insulator Bi$_{2}$Se$_{3}$. We
establish that light-induced photovoltage and charge carrier redistribution --
which in concert modulate the spin-orbit coupling strength on a sub-picosecond
timescale -- can offer an unprecedented platform for achieving all-optically
driven THz spin logic devices.
|
In this paper, we introduce a new graph whose vertices are the nonzero
zero-divisors of a commutative ring $R$: for distinct elements $x$ and $y$ in
the set $Z(R)^{\star}$ of nonzero zero-divisors of $R$, $x$ and $y$ are
adjacent if and only if $xy=0$ or $x+y\in Z(R)$. We present some properties and
examples of this graph, and we study its relation with the zero-divisor graph
and with a subgraph of the total graph of a commutative ring.
|
We study an online linear classification problem, in which the data is
generated by strategic agents who manipulate their features in an effort to
change the classification outcome. In each round, the learner deploys a classifier,
and an adversarially chosen agent arrives, possibly manipulating her features
to optimally respond to the learner. The learner has no knowledge of the
agents' utility functions or "real" features, which may vary widely across
agents. Instead, the learner is only able to observe their "revealed
preferences" --- i.e. the actual manipulated feature vectors they provide. For
a broad family of agent cost functions, we give a computationally efficient
learning algorithm that is able to obtain diminishing "Stackelberg regret" ---
a form of policy regret that guarantees that the learner is obtaining loss
nearly as small as that of the best classifier in hindsight, even allowing for
the fact that agents will best-respond differently to the optimal classifier.
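As a toy illustration of the revealed-preferences setting, the following
Python sketch computes an agent's best response to a deployed linear classifier
under a quadratic manipulation cost; the unit value of a positive label and the
cost weight are illustrative assumptions, since the paper allows agent
utilities to vary widely.

# Minimal sketch: an agent with true features x best-responds to a linear
# classifier (w, b) by moving to the closest positively-classified point,
# provided the quadratic manipulation cost does not exceed the (assumed) value
# of a positive label.
import numpy as np

def best_response(x, w, b, cost_weight=1.0, value_of_positive=1.0):
    margin = w @ x + b
    if margin >= 0:
        return x.copy()                        # already classified positive
    # closest point on the decision boundary (orthogonal projection)
    z = x - (margin / (w @ w)) * w
    cost = cost_weight * np.sum((z - x) ** 2)
    return z if cost <= value_of_positive else x.copy()

w = np.array([1.0, -2.0]); b = 0.5
x_true = np.array([-1.0, 0.3])
x_revealed = best_response(x_true, w, b)
print(x_revealed, w @ x_revealed + b)          # lands on the boundary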
|
In this paper, we give a polytopal estimate of Mirkovi\'c-Vilonen polytopes
lying in a Demazure crystal in terms of Minkowski sums of extremal
Mirkovi\'c-Vilonen polytopes. As an immediate consequence of this result, we
provide a necessary (but not sufficient) polytopal condition for a
Mirkovi\'c-Vilonen polytope to lie in a Demazure crystal.
|
Given two Lie $\infty$-algebras $E$ and $V$, any Lie $\infty$-action of $E$
on $V$ defines a Lie $\infty$-algebra structure on $E\oplus V$. Some
compatibility between the action and the Lie $\infty$-structure on $V$ is
needed to obtain a particular Loday $\infty$-algebra, the non-abelian
hemisemidirect product. These are the coherent actions. For coherent actions it
is possible to define non-abelian homotopy embedding tensors as Maurer-Cartan
elements of a convenient Lie $\infty$-algebra. Generalizing the classical case,
we see that a non-abelian homotopy embedding tensor defines a Loday
$\infty$-structure on $V$ and is a morphism between this new Loday
$\infty$-algebra and $E$.
|
As a continuation of the previously published work [Velychko O. V., Stasyuk
I. V., Phase Transitions, 2019, 92, 420], a phenomenological framework for the
relaxation dynamics of a quantum lattice model with multi-well potentials is
given for the case of the deformed Sn$_{2}$P$_{2}$S$_{6}$ ferroelectric lattice. The
framework is based on the combination of statistical equilibrium theory and
irreversible thermodynamics. In order to study these dynamics in a connected
way we assume that the dipole ordering or polarization ($\eta$) and volume
deformation ($u$) can be treated as fluxes and forces in the sense of Onsager
theory. From the linear relations between the forces and fluxes, the rate
equations are derived and characterized by two relaxation times ($\tau_{S},
\tau_{F}$) which describe the irreversible process near the equilibrium states.
The behaviors of $\tau_{S}$ and $\tau_{F}$ in the vicinity of ferroelectric
phase transitions are studied.
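For illustration (the paper's actual kinetic coefficients and free-energy
functional are not reproduced here), linear Onsager-type rate equations for the
pair $(\eta,u)$ take the form
\[
  \dot{\eta}=-L_{\eta\eta}\,\frac{\partial F}{\partial \eta}
             -L_{\eta u}\,\frac{\partial F}{\partial u},\qquad
  \dot{u}=-L_{u\eta}\,\frac{\partial F}{\partial \eta}
          -L_{uu}\,\frac{\partial F}{\partial u},
\]
and linearization near equilibrium yields two relaxation modes whose
characteristic times, identified with $\tau_{S}$ and $\tau_{F}$, are set by the
eigenvalues of the kinetic matrix.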
|
The problem of noncooperative resource allocation in a
multipoint-to-multipoint cellular network is considered in this paper. The
considered scenario is general enough to represent several key instances of
modern wireless networks such as a multicellular network, a peer-to-peer
network (interference channel), and a wireless network equipped with
femtocells. In particular, the problem of joint transmit waveforms adaptation,
linear receiver design, and transmit power control is examined. Several utility
functions to be maximized are considered; among them are the received SINR and
the transmitter energy efficiency, which is measured in bit/Joule and
represents the number of successfully delivered bits for each energy unit
used for transmission. Resorting to the theory of potential games,
noncooperative games admitting Nash equilibria in multipoint-to-multipoint
cellular networks regardless of the channel coefficient realizations are
designed. Computer simulations confirm that the considered games are
convergent, and show the huge benefits that resource allocation schemes can
bring to the performance of wireless data networks.
|
We study a class of perturbative scalar quantum field theories where dynamics
is characterized by Lorentz-invariant or Lorentz-breaking non-local operators
of fractional order and the underlying spacetime has a varying spectral
dimension. These theories are either ghost free or power-counting
renormalizable but they cannot be both at the same time. However, some of them
are one-loop unitary and finite, and possibly unitary and finite at all orders.
|
In this paper we study the number of bound states for potentials in one and
two spatial dimensions. We first show that in addition to the well-known fact
that an arbitrarily weak attractive potential has a bound state, it is easy to
construct examples where weak potentials have an infinite number of bound
states. These examples have potentials which decrease at infinity faster than
expected. Using somewhat stronger conditions, we derive explicit bounds on the
number of bound states in one dimension, using known results for the
three-dimensional zero angular momentum. A change of variables which allows us
to go from the one-dimensional case to that of two dimensions results in a
bound for the zero angular momentum case. Finally, we obtain a bound on the
total number of bound states in two dimensions, first for the radial case and
then, under stronger conditions, for the non-central case.
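For orientation, the classical Bargmann bound for the number of
zero-angular-momentum bound states in three dimensions (in units
$\hbar=2m=1$) is the kind of known result alluded to above, although the
abstract does not specify which bound is actually used:
\[
  N_{\ell=0}\;\le\;\int_{0}^{\infty} r\,\lvert V_{-}(r)\rvert\,dr,
\]
where $V_{-}$ denotes the attractive part of the potential.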
|
Cosmic rays (CRs) propagate in the Milky Way and interact with the
interstellar medium and magnetic fields. These interactions produce emissions
that span the electromagnetic spectrum, and are an invaluable tool for
understanding the intensities and spectra of CRs in distant regions, far beyond
those probed by direct CR measurements. We present updates on the study of CR
properties by combining multi-frequency observations of the interstellar
emission and latest CR direct measurements with propagation models.
|
Cooperative behaviors are ubiquitous in nature, which is a puzzle to
evolutionary biology because the defector always gains more benefit than the
cooperator; thus, cooperators should decrease and vanish over time. This
typical "prisoner's dilemma" phenomenon has been widely researched in recent
years. The interaction strength between cooperators and defectors is introduced
in this paper (in human society, it can be understood as the tolerance of
cooperators). We find that the cooperator and defector can coexist only when
the maximum interaction strength lies between two critical values; otherwise,
1) if it is greater than the upper value, the cooperator will vanish, and 2) if
it is less than the lower value, a bistable state will appear.
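As a purely illustrative sketch of how an interaction strength could enter such
a model (the paper's actual dynamics are not specified in the abstract), the
following Python snippet runs replicator dynamics for a prisoner's dilemma in
which the cross payoffs between cooperators and defectors are scaled by a
strength w:

# Minimal sketch: replicator dynamics for a prisoner's dilemma where the
# cooperator-defector cross payoffs are scaled by an interaction strength w.
# How w enters the payoffs here is an illustrative guess, not the paper's model.
import numpy as np

def replicator(x0, R=3.0, S=0.0, T=5.0, P=1.0, w=0.5, dt=0.01, steps=20000):
    """x: fraction of cooperators; cross payoffs S and T are scaled by w."""
    x = x0
    for _ in range(steps):
        f_c = R * x + w * S * (1 - x)        # cooperator's average payoff
        f_d = w * T * x + P * (1 - x)        # defector's average payoff
        x += dt * x * (1 - x) * (f_c - f_d)  # replicator equation
    return x

for w in (0.2, 0.5, 0.9):
    print(w, round(replicator(0.5, w=w), 3))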
|
Experimental measurements of electron transport properties of molecular
junctions are often performed in solvents. Solvent-molecule coupling and
physical properties of the solvent can be used as the external stimulus to
control electric current through a molecule. In this paper, we propose a model,
which includes dynamical effects of solvent-molecule interaction in the
non-equilibrium Green's function calculations of electric current. The solvent
is considered as a macroscopic dipole moment that reorients stochastically and
interacts with the electrons tunnelling through the molecular junction. The
Keldysh-Kadanoff-Baym equations for the electronic Green's functions are solved
in the time domain, with subsequent averaging over random realisations of the
rotational variables using the Furutsu-Novikov method for exact closure of the
infinite hierarchy of stochastic correlation functions. The developed theory
requires the use of the wide-band approximation as well as a classical
treatment of the solvent degrees of freedom. The theory is applied to a model
molecular junction. It is
demonstrated that not only electrostatic interaction between molecular junction
and solvent but also solvent viscosity can be used to control electrical
properties of the junction. Aligning of the rotating dipole moment breaks
particle-hole symmetry of the transmission favouring either hole or electron
transport channels depending upon the aligning potential.
|
Virasoro constraints for orbifold Gromov-Witten theory are described. These
constraints are applied to the degree zero, genus zero orbifold Gromov-Witten
potentials of the weighted projective stacks $\mathbb{P}(1,N)$,
$\mathbb{P}(1,1,N)$ and $\mathbb{P}(1,1,1,N)$ to obtain formulas of descendant
cyclic Hurwitz-Hodge integrals.
|
Accurate hand joints detection from images is a fundamental topic which is
essential for many applications in computer vision and human computer
interaction. This paper presents a two-stage network for hand joint detection
from a single unmarked image using serial-parallel multi-scale feature fusion.
In stage I, the hand regions are located by a pre-trained network, and the
features of each detected hand region are extracted by a shallow spatial hand
features representation module. The extracted hand features are then fed into
stage II, which consists of serially connected feature extraction modules with
similar structures, called "multi-scale feature fusion" (MSFF). A MSFF contains
parallel multi-scale feature extraction branches, which generate initial hand
joint heatmaps. The initial heatmaps are then mutually reinforced by the
anatomic relationship between hand joints. Experimental results on five hand
joint datasets show that the proposed network outperforms state-of-the-art
methods.
|
Reducing traffic accidents is an important public safety challenge. However,
the majority of studies on traffic accident analysis and prediction have used
small-scale datasets with limited coverage, which limits their impact and
applicability; and existing large-scale datasets are either private, old, or do
not include important contextual information such as environmental stimuli
(weather, points-of-interest, etc.). In order to help the research community
address these shortcomings we have - through a comprehensive process of data
collection, integration, and augmentation - created a large-scale publicly
available database of accident information named US-Accidents. US-Accidents
currently contains data about $2.25$ million instances of traffic accidents
that took place within the contiguous United States over the last three years.
Each accident record consists of a variety of intrinsic and contextual
attributes such as location, time, natural language description, weather,
period-of-day, and points-of-interest. We present this dataset in this paper,
along with a wide range of insights gleaned from this dataset with respect to
the spatiotemporal characteristics of accidents. The dataset is publicly
available at https://smoosavi.org/datasets/us_accidents.
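A minimal pandas sketch of loading and summarizing the data follows; the CSV
file name and the column names used below ("Start_Time", "Weather_Condition")
are assumptions for illustration, so consult the dataset page for the actual
schema and download link.

# Minimal sketch of exploring the US-Accidents data with pandas.
import pandas as pd

df = pd.read_csv("US_Accidents.csv", parse_dates=["Start_Time"])

# accidents per hour of day and per weather condition (spatiotemporal flavour)
by_hour = df["Start_Time"].dt.hour.value_counts().sort_index()
by_weather = df["Weather_Condition"].value_counts().head(10)

print(by_hour)
print(by_weather)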
|
The MAJORANA DEMONSTRATOR was a search for neutrinoless double-beta decay
($0\nu\beta\beta$) in the $^{76}$Ge isotope. It was staged at the 4850-foot
level of the Sanford Underground Research Facility (SURF) in Lead, SD. The
experiment consisted of 58 germanium detectors housed in a low background
shield and was calibrated once per week by deploying a $^{228}$Th line source
for 1 to 2 hours. The energy-scale calibration of the detector array was
automated using custom analysis tools. We describe the offline
procedure for calibration of the Demonstrator germanium detectors, including
the simultaneous fitting of multiple spectral peaks, estimation of energy scale
uncertainties, and the automation of the calibration procedure.
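As a simplified stand-in for the simultaneous multi-peak fits described above,
the sketch below fits a single synthetic calibration peak with a Gaussian plus
linear background using scipy; the peak-shape model, the choice of the
2614.5 keV line, and all numbers are illustrative assumptions, and the
experiment's actual model includes low-energy tails and joint fits across many
peaks.

# Minimal sketch: fit one calibration peak with a Gaussian plus linear
# background using scipy curve_fit on synthetic Poisson-distributed counts.
import numpy as np
from scipy.optimize import curve_fit

def peak_model(e, amp, mu, sigma, b0, b1):
    return amp * np.exp(-0.5 * ((e - mu) / sigma) ** 2) + b0 + b1 * e

rng = np.random.default_rng(0)
energy = np.linspace(2600.0, 2630.0, 300)
truth = peak_model(energy, 500.0, 2614.5, 1.2, 20.0, -0.005)
counts = rng.poisson(truth).astype(float)

p0 = [counts.max(), energy[np.argmax(counts)], 1.0, counts.min(), 0.0]
popt, pcov = curve_fit(peak_model, energy, counts, p0=p0,
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
mu_fit, mu_err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted peak position: {mu_fit:.2f} +/- {mu_err:.2f} keV")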
|