In this paper we systematically classify and describe bosonic symmetry
protected topological (SPT) phases in all physical spatial dimensions using
semiclassical nonlinear Sigma model (NLSM) field theories. All the SPT phases
on a $d-$dimensional lattice discussed in this paper can be described by the
same NLSM, which is an O(d+2) NLSM in $(d+1)-$dimensional space-time, with a
topological $\Theta-$term. The field in the NLSM is a semiclassical Landau
order parameter with a unit length constraint. The classification of SPT phases
discussed in this paper based on their NLSMs is consistent with the more
mathematical classification based on group cohomology. Besides the
classification, the formalism used in this paper also allows us to explicitly
discuss the physics at the boundary of the SPT phases, and it reveals the
relation between SPT phases with different symmetries. For example, it gives
many of these SPT states a natural "decorated defect" construction.
|
The eclipse mapping method is researched and implemented in Python and
essential scientific libraries, with a particular focus on Scipy's minimize
function. Several optimization techniques are used, including Sequential Least
Squares Programming (SLSQP), Nelder-Mead, and Conjugate Gradient (CG). For the
purpose of examining photometric light curves, these methods seek to solve the
maximum entropy equation under a chi-squared constraint. The techniques are
therefore first evaluated on two-dimensional Gaussian data without a
chi-squared restriction, and then used to map the accretion disc and uncover
the Gaussian structure of the Cataclysmic Variable KIC 201325107. Critical
analysis is performed on the code structure to find possible faults and design
problems. Additionally, the analysis examines several factors impacting
computing time and image quality, including the variance in Gaussian
weighting, disc image resolution, number of data points in the light curve,
and degree of constraint.
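A minimal sketch of the constrained optimisation described above, using
scipy.optimize.minimize with SLSQP. The linear response matrix, pixel count,
and noise level are illustrative assumptions standing in for a real
eclipse-mapping kernel; the structure (maximise entropy subject to a
chi-squared limit) follows the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-in for eclipse mapping: recover a positive "image" from a
# linear light-curve model, maximising entropy under a chi-squared limit.
n_pix, n_obs, sigma = 16, 32, 0.05
true_img = np.exp(-0.5 * ((np.arange(n_pix) - 8.0) / 2.0) ** 2)
A = rng.random((n_obs, n_pix))                 # illustrative response matrix
data = A @ true_img + rng.normal(0.0, sigma, n_obs)

def neg_entropy(img):
    p = img / img.sum()
    return float(np.sum(p * np.log(p + 1e-12)))  # minimising -S maximises S

def chi2(img):
    return float(np.sum(((A @ img - data) / sigma) ** 2))

x0 = np.full(n_pix, true_img.mean())
res = minimize(
    neg_entropy, x0, method="SLSQP",
    bounds=[(1e-8, None)] * n_pix,
    # Inequality constraint: chi-squared no larger than the number of points.
    constraints={"type": "ineq", "fun": lambda img: n_obs - chi2(img)},
)
```

Swapping `method="SLSQP"` for `"Nelder-Mead"` or `"CG"` requires folding the
constraint into the objective, since those methods do not accept constraints.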
|
We study the quench dynamics of non-Hermitian topological models with
non-Hermitian skin effects. Adopting the non-Bloch band theory and projecting
quench dynamics onto the generalized Brillouin zone, we find that emergent
topological structures, in the form of dynamic skyrmions, exist in the
generalized momentum-time domain, and are correlated with the non-Bloch
topological invariants of the static Hamiltonians. The skyrmion structures
anchor on the fixed points of dynamics whose existence is conditional on the
coincidence of generalized Brillouin zones of the pre- and post-quench
Hamiltonians. Global signatures of dynamic skyrmions, however, persist well
beyond such a condition, thus offering a general dynamic detection scheme for
non-Bloch topology in the presence of non-Hermitian skin effects. Applying our
theory to an experimentally relevant, non-unitary quantum walk, we explicitly
demonstrate how the non-Bloch topological invariants can be revealed through
the non-Bloch quench dynamics.
|
We investigate the constraints imposed by the first-year WMAP CMB data
extended to higher multipole by data from ACBAR, BOOMERANG, CBI and the VSA and
by the LSS data from the 2dF galaxy redshift survey on the possible amplitude
of primordial isocurvature modes. A flat universe with CDM and Lambda is
assumed, and the baryon, CDM (CI), and neutrino density (NID) and velocity
(NIV) isocurvature modes are considered. Constraints on the allowed
isocurvature contributions are established from the data for various
combinations of the adiabatic mode and one, two, and three isocurvature modes,
with intermode cross-correlations allowed. Since baryon and CDM isocurvature
are observationally virtually indistinguishable, these modes are not considered
separately. We find that when just a single isocurvature mode is added, the
present data allow an isocurvature fraction as large as 13+-6, 7+-4, and 13+-7
percent for adiabatic plus the CI, NID, and NIV modes, respectively. When two
isocurvature modes plus the adiabatic mode and cross-correlations are allowed,
these percentages rise to 47+-16, 34+-12, and 44+-12 for the combinations
CI+NID, CI+NIV, and NID+NIV, respectively. Finally, when all three isocurvature
modes and cross-correlations are allowed, the admissible isocurvature fraction
rises to 57+-9 per cent. The sensitivity of the results to the choice of prior
probability distribution is examined.
|
These notes rigorously construct the stochastic integral of a Hilbert Space
valued process driven by a Cylindrical Brownian Motion. We expand upon this
stochastic calculus to present an introduction to stochastic differential
equations in infinite dimensions, with a particular focus on Stratonovich
equations due to their physical importance as well as unbounded noise operators
(with applications to transport noise). Furthermore we explore techniques in
the existence theory for nonlinear stochastic partial differential equations.
|
We employ a population synthesis method to model the double neutron star (DNS)
population and test various possibilities for the natal kick velocities
imparted to neutron stars at their formation. We first choose natal kicks after standard
core collapse SN from a Maxwellian distribution with velocity dispersion of
sigma=265 km/s as proposed by Hobbs et al. (2005) and then modify this
distribution by changing the velocity dispersion towards smaller and larger
kick values. We also take into account the possibility of NS formation through
electron capture supernova. In this case we test two scenarios: zero natal kick
or small natal kick, drawn from Maxwellian distribution with sigma = 26.5 km/s.
We calculate the present-day orbital parameters of binaries and compare the
resulting eccentricities with those known for observed DNSs. As an additional
test we calculate Galactic merger rates for our model populations and confront
them with observational limits. We do not find any model unequivocally
consistent with both observational constraints simultaneously. The models with
low kicks after CCSN for binaries with the second NS forming through core
collapse SN are marginally consistent with the observations. This means that
either 14 observed DNSs are not representative of the intrinsic Galactic
population, or that our modeling of DNS formation needs revision.
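The two kick prescriptions above can be sampled directly with scipy, assuming
its `maxwell` parameterisation in which the scale equals the one-dimensional
velocity dispersion sigma; the sample size is illustrative.

```python
import numpy as np
from scipy.stats import maxwell

# Kick prescriptions from the abstract: standard core-collapse SN kicks
# (Hobbs et al. 2005) and the low-kick electron-capture SN scenario.
sigma_ccsn = 265.0   # km/s
sigma_ecsn = 26.5    # km/s

rng = np.random.default_rng(42)
kicks_ccsn = maxwell.rvs(scale=sigma_ccsn, size=100_000, random_state=rng)
kicks_ecsn = maxwell.rvs(scale=sigma_ecsn, size=100_000, random_state=rng)

# Mean speed of a Maxwellian is 2*sigma*sqrt(2/pi), i.e. ~1.6 sigma.
mean_ccsn = kicks_ccsn.mean()
```

Testing smaller and larger dispersions, as the paper does, is a matter of
rescaling `scale`.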
|
The proportional hazards model has been extensively used in many fields such
as biomedicine to estimate and perform statistical significance testing on the
effects of covariates influencing the survival time of patients. The classical
theory of maximum partial-likelihood estimation (MPLE) is used by most software
packages to produce inference, e.g., the coxph function in R and the PHREG
procedure in SAS. In this paper, we investigate the asymptotic behavior of the
MPLE in the regime in which the number of parameters p is of the same order as
the number of samples n. The main results are (i) existence of the MPLE
undergoes a sharp 'phase transition'; (ii) the classical MPLE theory leads to
invalid inference in the high-dimensional regime. We show that the asymptotic
behavior of the MPLE is governed by a new asymptotic theory. These findings are
further corroborated through numerical studies. The main technical tool in our
proofs is the Convex Gaussian Min-max Theorem (CGMT), which has not been
previously used in the analysis of partial likelihood. Our results thus extend
the scope of CGMT and shed new light on the use of CGMT for examining the
existence of MPLE and non-separable objective functions.
|
We point out that the ability of some models of inflation, such as Higgs
inflation and the universal attractor models, to reproduce the available data
is due to their relation to the Starobinsky model of inflation. For large field
values, where the inflationary phase takes place, all these classes of models
are indeed identical to the Starobinsky model. Nevertheless, the inflaton is
just an auxiliary field in the Jordan frame of the Starobinsky model and this
leads to two important consequences: first, the inflationary predictions of the
Starobinsky model and its descendants are slightly different (albeit not
measurably); secondly the theories have different small-field behaviour,
leading to different ultra-violet cut-off scales. In particular, one
interesting descendant of the Starobinsky model is the non-minimally coupled
quadratic chaotic inflation. Although the standard quadratic chaotic inflation
is ruled out by the recent Planck data, its non-minimally coupled version is in
agreement with observational data and valid up to Planckian scales.
|
The critical
exponents of the metal--insulator transition in disordered systems have been
the subject of much published work containing often contradictory results.
Values ranging between $\frac{1}{2}$ and $2$ can be found even in the recent
literature. In this paper the results of a long term study of the transition
are presented. The data have been calculated with sufficient accuracy (0.2\%)
that the calculated exponent can be quoted as $s=\nu=1.54 \pm 0.08$ with
confidence. The reasons for the previous scatter of results are discussed.
|
We study the stellar and gas kinematics of the brightest group galaxies
(BGGs) in dynamically relaxed and unrelaxed galaxy groups for a sample of 154
galaxies in the SAMI galaxy survey. We characterize the dynamical state of the
groups using the luminosity gap between the two most luminous galaxies and the
BGG offset from the luminosity centroid of the group. We find that the
misalignment between the rotation axis of gas and stellar components is more
frequent in the BGGs in unrelaxed groups, although with quite low statistical
significance. Meanwhile galaxies whose stellar dynamics would be classified as
`regular rotators' based on their kinemetry are more common in relaxed groups.
We confirm that this dependency on group dynamical state remains valid at fixed
stellar mass and Sersic index. The observed trend could potentially originate
from a differing BGG accretion history in virialised and evolving groups.
Amongst the halo relaxation probes, the group BGG offset appears to play a
stronger role than the luminosity gap on the stellar kinematic differences of
the BGGs. However, both the group BGG offset and luminosity gap appear to
roughly equally drive the misalignment between the gas and stellar component of
the BGGs in one direction. This study offers the first evidence that the
dynamical state of galaxy groups may influence the BGG's stellar and gas
kinematics and calls for further studies using a larger sample with higher
signal-to-noise.
|
The Large Hadron Collider (LHC) is expected to provide proton-proton
collisions at a centre-of-mass energy of 14 TeV, yielding millions of top
quark events. The top-physics potential of the two general purpose experiments,
ATLAS and CMS, is discussed according to state-of-the-art simulation of both
physics and detectors. An overview is given of the most important results with
emphasis on the expected improvements in our understanding of physics connected
to the top quark.
|
Assuming hierarchical neutrino masses we calculate the heavy neutrino mass
scale in the seesaw mechanism from experimental data on oscillations of solar
and atmospheric neutrinos and quark-lepton symmetry. The resulting scale is
around or above the unification scale, unless the two lightest neutrinos have
masses of opposite sign, in which case the resulting scale can be intermediate.
|
Despite their great success in recent years, deep neural networks (DNN) are
mainly black boxes: the results obtained by running through the network are
difficult to understand and interpret. Compared to e.g. decision trees or
Bayesian classifiers, DNN suffer from poor interpretability, where by
interpretability we mean that a human can easily derive the relations modeled
by the network. A reasonable way to provide interpretability for humans is
logical rules. In this paper we propose neural logic rule layers (NLRL) which
are able to represent arbitrary logic rules in terms of their conjunctive and
disjunctive normal forms. Using various NLRL within one layer and
correspondingly stacking various layers, we are able to represent arbitrarily
complex rules by the resulting neural network architecture. The NLRL are
end-to-end trainable allowing to learn logic rules directly from available data
sets. Experiments show that NLRL-enhanced neural networks can learn to model
arbitrarily complex logic and perform arithmetic operations over the input values.
|
In this paper, we quantify the non-linear effects from $k$-essence dark
energy through an effective parameter $\mu$ that encodes the additional
contribution of a dark energy fluid or a modification of gravity to the Poisson
equation. This is a first step toward quantifying non-linear effects of dark
energy/modified gravity models in a more general approach. We compare our
$N$-body simulation results from $k$-evolution with predictions from the linear
Boltzmann code $\texttt{CLASS}$, and we show that for the $k$-essence model one
can safely neglect the difference between the two potentials, $ \Phi -\Psi$,
and short wave corrections appearing as higher order terms in the Poisson
equation, which allows us to use single parameter $\mu$ for characterizing this
model. We also show that for a large $k$-essence speed of sound the
$\texttt{CLASS}$ results are sufficiently accurate, while for a low speed of
sound non-linearities in matter and in the $k$-essence field are
non-negligible. We propose a $\tanh$-based parameterisation for $\mu$,
motivated by the results for two cases with low ($c_s^2=10^{-7}$) and high
($c_s^2=10^{-4}$) speed of sound, to include the non-linear effects based on
the simulation results. This parametric form of $\mu$ can be used to improve
Fisher forecasts or Newtonian $N$-body simulations for $k$-essence models.
|
In this work, we investigate the current flaws with identifying
network-related errors, and examine how K-Means and Long Short-Term Memory
networks solve these problems. We demonstrate that K-Means is able to classify
messages, but does not necessarily provide meaningful clusters. Long
Short-Term Memory networks, however, are able to meet our goal of providing an
intelligent clustering of messages by grouping messages that are temporally
related. Additionally, Long Short-Term Memory networks can provide the ability
to understand and visualize temporal causality, which unlocks the ability to
warn about errors before they happen. We show that LSTMs achieve 70% accuracy
in classifying network errors, and provide some suggestions for future work.
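As a rough illustration of the K-Means step, clustering can be run on
vectorised log messages. The messages, the TF-IDF features, and the cluster
count below are hypothetical stand-ins for the real logs; this sketch also
says nothing about whether the clusters are meaningful, which is precisely the
limitation noted above.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical network error messages standing in for real logs.
messages = [
    "connection timeout to host alpha",
    "connection timeout to host beta",
    "TLS handshake failed with peer",
    "TLS handshake failed: certificate expired",
    "packet loss detected on interface eth0",
    "packet loss detected on interface eth1",
]

# Bag-of-words features; K-Means then groups lexically similar messages,
# with no notion of the temporal relations an LSTM could capture.
X = TfidfVectorizer().fit_transform(messages)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
```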
|
We illustrate the stochastic method for solving the Schwinger-Dyson equations
in large-N quantum field theories, described in arXiv:1009.4033, using the example
of the Gross-Witten unitary matrix model. In the strong-coupling limit, this
method can be applied directly, while in the weak-coupling limit we change the
variables from compact to noncompact ones in order to cast the Schwinger-Dyson
equations in the stochastic form. This leads to a new action with an infinite
number of higher-order interaction terms. Nevertheless, such an action can be
efficiently handled. This suggests a way to apply the method of
arXiv:1009.4033 to field theories with U(N) field variables as well as to
effective field theories in the large-N limit.
|
We characterize the points that satisfy Birkhoff's ergodic theorem under
certain computability conditions in terms of algorithmic randomness. First, we
use the method of cutting and stacking to show that if an element x of the
Cantor space is not Martin-Löf random, there is a computable measure-preserving
transformation and a computable set that witness that x is not typical with
respect to the ergodic theorem, which gives us the converse of a theorem by
V'yugin. We further show that if x is weakly 2-random, then it satisfies the
ergodic theorem for all computable measure-preserving transformations and all
lower semi-computable functions.
|
Class-Incremental Learning (CIL) aims to build classification models from
data streams. At each step of the CIL process, new classes must be integrated
into the model. Due to catastrophic forgetting, CIL is particularly challenging
when examples from past classes cannot be stored, the case on which we focus
here. To date, most approaches are based exclusively on the target dataset of
the CIL process. However, the use of models pre-trained in a self-supervised
way on large amounts of data has recently gained momentum. The initial model of
the CIL process may only use the first batch of the target dataset, or also use
pre-trained weights obtained on an auxiliary dataset. The choice between these
two initial learning strategies can significantly influence the performance of
the incremental learning model, but has not yet been studied in depth.
Performance is also influenced by the choice of the CIL algorithm, the neural
architecture, the nature of the target task, the distribution of classes in the
stream and the number of examples available for learning. We conduct a
comprehensive experimental study to assess the roles of these factors. We
present a statistical analysis framework that quantifies the relative
contribution of each factor to incremental performance. Our main finding is
that the initial training strategy is the dominant factor influencing the
average incremental accuracy, but that the choice of CIL algorithm is more
important in preventing forgetting. Based on this analysis, we propose
practical recommendations for choosing the right initial training strategy for
a given incremental learning use case. These recommendations are intended to
facilitate the practical deployment of incremental learning.
|
In a recent Nature Materials article, Brown et al. reported a generality of
shear thickening in dense suspensions and demonstrated that shear thickening
can be masked by a yield stress and can be recovered when the yield stress is
decreased to below a threshold. However, the generality of the shear
thickening reported in the article may not necessarily hold when a high
electric/magnetic field is applied to an ER/MR fluid. Shear thickening of ER
and MR fluids under high electric/magnetic fields at low shear rates has been
observed, indicating an obvious phase change inside the dense suspensions.
|
Hyperbolic metamaterials were initially proposed in optics to boost radiation
efficiencies of quantum emitters. Adopting this concept for antenna design can
allow approaching long-standing challenges in radio physics. For example,
impedance matching and gain are among the most challenging antenna parameters
to achieve when broadband operation is needed. Here we propose and
numerically analyse a new compact antenna design, based on a hyperbolic
metamaterial slab with a patterned layer on top. Energy from a subwavelength
loop antenna is shown to be efficiently harvested into bulk modes of the
metamaterial over a broad frequency range. Highly localized propagating waves
within the medium have a well-resolved spatial-temporal separation owing to the
hyperbolic type of effective permeability tensor. This strong interplay between
chromatic and modal dispersions enables routing different frequencies into
different spatial locations within compact subwavelength geometry. An array of
non-overlapping resonant elements is placed on the metamaterial layer and
provides a superior matching of localized electromagnetic energy to the free
space radiation. As a result, a two-order-of-magnitude improvement in the
linear gain of the device is predicted. The proposed new architecture can find
use in applications where multiband or broadband compact devices are required.
|
We study solutions of high codimension mean curvature flow defined for all
negative times, usually referred to as ancient solutions. We show that any
compact ancient solution whose second fundamental form satisfies a certain
natural pinching condition must be a family of shrinking spheres. Andrews and
Baker have shown that initial submanifolds satisfying this pinching condition,
which generalises the notion of convexity, converge to round points under the
flow. As an application, we use our result to simplify their proof.
|
Grassmann (or anti-commuting) variables are extensively used in theoretical
physics. In this paper we use Grassmann variable calculus to give new proofs of
celebrated combinatorial identities such as the Lindstr\"om-Gessel-Viennot
formula for graphs with cycles and the Jacobi-Trudi identity. Moreover, we
define a one parameter extension of Schur polynomials that obey a natural
convolution identity.
|
We present deep, large area B and r' imaging for a sample of 49 brightest
cluster galaxies (BCGs). The clusters were selected by their X-ray luminosity
and redshift to form two volume limited samples, one with mean redshift ~ 0.07
and one at a mean redshift ~ 0.17. For each cluster the data cover 41' by 41'.
We discuss our data reduction techniques in detail, and show that we can
reliably measure the surface brightness at the levels of mu_B ~ 29 and mu_r' ~
28. For each galaxy we present the B and r' images together with the surface
brightness profile, B-r' colour, eccentricity and position angle as a function
of radius. We investigate the distribution of positional offsets between the
optical centroid of the BCG and the centre of the X-ray emission, and conclude
that the mass profiles are cuspy, and do not have extended cores. We also
introduce a method to objectively identify the transition from BCG to extended
envelope of intra-cluster light, using the Petrosian index as a function of
radius.
|
The inequivalence of thermodynamical ensembles related by a Legendre
transformation is manifest in self-gravitating systems and in black hole
thermodynamics. Using Poincaré's method of the linear series, we describe
the mathematical reasons which lead to this inequivalence, which in turn
induces a hierarchy of ensembles: the most stable ensemble describes the most
isolated system. Moreover, we prove that one can obtain the degree of
stability of all equilibrium configurations in any ensemble related by
Legendre transformations to the most stable one if one knows the degree of
stability in the most stable ensemble.
|
We investigate the possibility of the gravitational-wave event GW170817 being
a light, solar-mass black hole (BH) - neutron star (NS) merger. We explore two
exotic scenarios involving primordial black holes (PBH) that could produce such
an event, taking into account available observational information on NGC 4993.
First, we entertain the possibility of dynamical NS-PBH binary formation where
a solar-mass PBH and a NS form a binary through gravitational interaction. We
find that while dynamical NS-PBH formation could account for the GW170817
event, the rate is highly dependent on unknown density contrast factors and
could potentially be affected by galaxy mergers. We also find that PBH-PBH
binaries would likely have a larger merger rate, assuming the density contrast
boost factor of an order similar to the NS-PBH case. These exotic merger
formations could provide new channels to account for the volumetric rate of
compact-object mergers reported by LIGO/Virgo. Secondly, we consider the case
where one of the NSs in a binary NS system is imploded by a microscopic PBH.
We find that the predicted rate for NS implosion into a BH is very small, at
least for the specific environment of NGC 4993. We point out that similar
existing (e.g. GW190425 and GW190814) and future observations will shed
additional light on these scenarios.
|
As a maintainer of an open source software project, you are usually happy
about contributions in the form of pull requests that bring the project a step
forward. Past studies have shown that when reviewing a pull request, not only
its content is taken into account, but also, for example, the social
characteristics of the contributor. Whether a contribution is accepted and how
long this takes therefore depends not only on the content of the contribution.
What we only have indications for so far, however, is that pull requests from
bots may be prioritized lower, even if the bots are explicitly deployed by the
development team and are considered useful.
One goal of the bot research and development community is to design helpful
bots to effectively support software development in a variety of ways. To get
closer to this goal, in this GitHub mining study, we examine the measurable
differences in how maintainers interact with manually created pull requests
from humans compared to those created automatically by bots.
About one third of all pull requests on GitHub currently come from bots.
While pull requests from humans are accepted and merged in 72.53% of all cases,
this applies to only 37.38% of bot pull requests. Furthermore, it takes
significantly longer for a bot pull request to be interacted with and for it to
be merged, even though they contain fewer changes on average than human pull
requests. These results suggest that bots have yet to realize their full
potential.
|
Let \Gamma be the convex set consisting of all marginal tracial states on the
tensor product B \otimes B of the algebra B of nxn matrices over the complex
numbers. We find necessary and sufficient conditions for such a state to be
extremal in \Gamma. We also give a characterization of those extreme points in
\Gamma which are pure states. We conjecture that all extremal marginal tracial
states are pure states.
|
For the standard symplectic forms on Jacobi and CMV matrices, we compute
Poisson brackets of OPRL and OPUC, and relate these to other basic Poisson
brackets and to Jacobians of basic changes of variable.
|
The expressive power of neural networks is important for understanding deep
learning. Most existing works consider this problem from the view of the depth
of a network. In this paper, we study how width affects the expressiveness of
neural networks. Classical results state that depth-bounded (e.g. depth-$2$)
networks with suitable activation functions are universal approximators. We
show a universal approximation theorem for width-bounded ReLU networks:
width-$(n+4)$ ReLU networks, where $n$ is the input dimension, are universal
approximators. Moreover, except for a measure zero set, all functions cannot be
approximated by width-$n$ ReLU networks, which exhibits a phase transition.
Several recent works demonstrate the benefits of depth by proving the
depth-efficiency of neural networks. That is, there are classes of deep
networks which cannot be realized by any shallow network whose size is no more
than an exponential bound. Here we pose the dual question on the
width-efficiency of ReLU networks: Are there wide networks that cannot be
realized by narrow networks whose size is not substantially larger? We show
that there exist classes of wide networks which cannot be realized by any
narrow network whose depth is no more than a polynomial bound. On the other
hand, we demonstrate by extensive experiments that narrow networks whose size
exceeds the polynomial bound by a constant factor can approximate wide and
shallow networks with high accuracy. Our results provide more comprehensive
evidence that depth is more effective than width for the expressiveness of ReLU
networks.
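The role of width can be made concrete with a toy example (this is not the
paper's width-(n+4) construction, just an illustration of a fixed-width ReLU
network representing a simple function exactly): |x| = ReLU(x) + ReLU(-x) is a
width-2, one-hidden-layer ReLU network.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A width-2 ReLU network computing |x| exactly: |x| = relu(x) + relu(-x).
W1 = np.array([[1.0], [-1.0]])   # input -> 2 hidden units
w2 = np.array([1.0, 1.0])        # hidden -> output

def narrow_net(x):
    return float(w2 @ relu(W1 @ np.atleast_1d(x)))

xs = np.linspace(-3.0, 3.0, 101)
outputs = np.array([narrow_net(x) for x in xs])
```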
|
In this paper, we present an equitable partition theorem of tensors, which
gives the relations between $H$-eigenvalues of a tensor and its quotient
equitable tensor and extends the equitable partitions of graphs to hypergraphs.
Furthermore, with the aid of it, some properties and $H$-eigenvalues of the
generalized power hypergraphs are obtained, which extends some known results,
including some results of Yuan, Qi and Shao.
|
Scientists routinely compare gene expression levels in cases versus controls
in part to determine genes associated with a disease. Similarly, detecting
case-control differences in co-expression among genes can be critical to
understanding complex human diseases; however, statistical methods have been
limited by the high dimensional nature of this problem. In this paper, we
construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two
high-dimensional covariance matrices. By focusing on the spectrum of the
differential matrix, sLED provides a novel perspective that accommodates what
we assume to be common, namely sparse and weak signals in gene expression data,
and it is closely related with Sparse Principal Component Analysis. We prove
that sLED achieves full power asymptotically under mild assumptions, and
simulation studies verify that it outperforms other existing procedures under
many biologically plausible scenarios. Applying sLED to the largest
gene-expression dataset obtained from post-mortem brain tissue from
Schizophrenia patients and controls, we provide a novel list of genes
implicated in Schizophrenia and reveal intriguing patterns in gene
co-expression change for Schizophrenia subjects. We also illustrate that sLED
can be generalized to compare other gene-gene "relationship" matrices that are
of practical interest, such as the weighted adjacency matrices.
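The core idea, stripped of sLED's sparsity penalty and permutation calibration
(both essential to the actual test), is to examine the leading eigenvalue of
the difference of the two sample covariance matrices. The dimensions and the
planted three-gene signal below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 200

# Two groups whose covariances differ only on the first 3 "genes".
Sigma1 = np.eye(p)
Sigma2 = np.eye(p)
Sigma2[:3, :3] += 0.8            # sparse, weak differential signal

X1 = rng.multivariate_normal(np.zeros(p), Sigma1, size=n)
X2 = rng.multivariate_normal(np.zeros(p), Sigma2, size=n)

# Differential matrix and its leading eigenvalue (the heart of the statistic);
# for this planted signal the population leading eigenvalue is 0.8 * 3 = 2.4.
D = np.cov(X2, rowvar=False) - np.cov(X1, rowvar=False)
leading = float(np.linalg.eigvalsh(D)[-1])
```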
|
Plasma jets belong to the category of remote plasmas. This means that the
discharge conditions and the chemical effect on samples can be tuned
separately, this being a big advantage compared to standard low-pressure
reactors. The inductive coupling brings the advantage of a pure and dense
plasma. The microwave excitation allows furthermore miniaturization and
generation of low temperature plasmas. The present paper shows the state of the
art of the research on such sources, demonstrating their operation up to
atmospheric pressure.
|
We found that three types of tethered surface model undergo a first-order
phase transition between the smooth and the crumpled phase. The first and the
third are discrete models of Helfrich, Polyakov, and Kleinert, and the second
is that of Nambu and Goto. These are curvature models for biological membranes
including artificial vesicles. The results obtained in this paper indicate that
the first-order phase transition is universal in the sense that the order of
the transition is independent of discretization of the Hamiltonian for the
tethered surface model.
|
Nanoscale systems offer key capabilities for quantum technologies that
include single qubit control and readout, multiple qubit gate operation,
extremely sensitive and localized sensing and imaging, as well as the ability
to build hybrid quantum systems. To fully exploit these functionalities,
multiple degrees of freedom are highly desirable: in this respect, nanoscale
systems that coherently couple to light and possess spins, allow for storage of
photonic qubits or light-matter entanglement together with processing
capabilities. In addition, all-optical control of spins can be possible for
faster gate operations and higher spatial selectivity compared to direct RF
excitation. Such systems are therefore of high interest for quantum
communications and processing. However, an outstanding challenge is to preserve
properties, and especially optical and spin coherence lifetimes, at the
nanoscale. Indeed, interactions with surfaces related perturbations strongly
increase as sizes decrease, although the smallest objects present the highest
flexibility for integration with other systems. Here, we demonstrate optically
controlled nuclear spins with long coherence lifetimes (T2) in rare earth doped
nanoparticles. We observed spin echoes and measured a T2 of 2.9 +/- 0.3 ms at 5
K and under a magnetic field of 9 mT, a value comparable to those obtained in
bulk single crystals. Moreover, we achieve, for the first time, spin T2
extension using all-optical spin dynamical decoupling and observe high fidelity
between excitation and echo phases. Rare-earth doped nanoparticles are thus the
only reported nano-materials in which optically controlled spins with
millisecond coherence lifetimes have been observed. These results open the way
to providing quantum light-atom-spin interfaces with long storage time within
hybrid architectures.
|
We perform coupled-cluster and diffusion Monte Carlo calculations of the
energies of circular quantum dots up to 20 electrons. The coupled-cluster
calculations include triples corrections and a renormalized Coulomb interaction
defined for a given number of low-lying oscillator shells. Using such a
renormalized Coulomb interaction brings the coupled-cluster calculations with
triples corrections into excellent agreement with the diffusion Monte Carlo
calculations. This opens up perspectives for doing ab initio calculations for
much larger systems of electrons.
|
In recent years, deep learning has been at the center of analytics due to its
impressive empirical success in analyzing complex data objects. Despite this
success, most of the existing tools behave like black-box machines, hence the
increasing interest in interpretable, reliable, and robust deep learning models
applicable to a broad class of applications. Feature-selected deep learning has
emerged as a promising tool in this realm. However, recent developments do not
accommodate ultra-high-dimensional, highly correlated features, nor high noise
levels. In this article, we propose a novel screening
and cleaning method with the aid of deep learning for a data-adaptive
multi-resolutional discovery of highly correlated predictors with a controlled
error rate. Extensive empirical evaluations over a wide range of simulated
scenarios and several real datasets demonstrate the effectiveness of the
proposed method in achieving high power while keeping the false discovery rate
at a minimum.
|
Given a compact and H-convex subset $K$ of the Heisenberg group ${\mathbb
H}$, with the origin $e$ in its interior, we are interested in finding a
homogeneous H-convex function $f$ such that $f(e)=0$ and $f\bigl|_{\partial
K}=1$; we will call this function $f$ the ${\mathbb H}$-cone-function of vertex
$e$ and base $\partial K$. While the equivalent version of this problem in the
Euclidean framework has an easy solution, in our context this investigation
turns out to be quite entangled, and the problem can be unsolvable. The
approach we follow makes use of an extension of the notion of convex family
introduced by Fenchel. We provide the precise, albeit cumbersome, condition
required on $K$ so that $\partial K$ is the base of an ${\mathbb
H}$-cone-function of vertex $e$. Via a suitable employment of this condition,
we prove two interesting binding constraints on the shape of the set $K,$
together with several examples.
|
The unbound nature of pure neutron matter (PNM) requires intrinsic
correlations between the symmetric nuclear matter (SNM) equation of state
(EOS) parameters (incompressibility $K_0$, skewness $J_0$ and kurtosis $I_0$)
and those (slope
$L$, curvature $K_{\rm{sym}}$ and skewness $J_{\rm{sym}}$) characterizing the
symmetry energy independent of any nuclear many-body theory. We investigate
these intrinsic correlations and their applications in better constraining the
poorly known high-density behavior of nuclear symmetry energy. Several novel
correlations connecting the characteristics of SNM EOS with those of nuclear
symmetry energy are found. In particular, at the lowest-order of
approximations, the bulk parts of the slope $L$, curvature $K_{\rm{sym}}$ and
skewness $J_{\rm{sym}}$ of the symmetry energy are found to be $L\approx K_0/3,
K_{\rm{sym}}\approx LJ_0/2K_0$ and $J_{\rm{sym}}\approx I_0L/3K_0$,
respectively. High-order corrections to these simple relations can be written
in terms of the small ratios of high-order EOS parameters. The resulting
intrinsic correlations among some of the EOS parameters reproduce very nicely
their relations predicted by various microscopic nuclear many-body theories and
phenomenological models constrained by available data of terrestrial
experiments and astrophysical observations in the literature. The unbound
nature of PNM is fundamental and the required intrinsic correlations among the
EOS parameters characterizing both the SNM EOS and symmetry energy are
universal. These intrinsic correlations provide a novel and model-independent
tool not only for consistency checks but also for investigating the poorly
known high-density properties of neutron-rich matter by using those with
smaller uncertainties.
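As an aside, the lowest-order relations quoted above are simple enough to evaluate directly. The following Python sketch (an illustration only; the numerical input values are hypothetical placeholders, not values from the paper) makes them concrete:

```python
# Sketch of the lowest-order relations quoted above:
#   L ~ K0/3,  Ksym ~ L*J0/(2*K0),  Jsym ~ I0*L/(3*K0).
# The numerical inputs below are hypothetical, not taken from the paper.

def symmetry_energy_estimates(K0, J0, I0):
    """Lowest-order estimates of (L, Ksym, Jsym), all in MeV."""
    L = K0 / 3.0
    Ksym = L * J0 / (2.0 * K0)
    Jsym = I0 * L / (3.0 * K0)
    return L, Ksym, Jsym

# Example with assumed SNM EOS parameters (MeV):
L, Ksym, Jsym = symmetry_energy_estimates(K0=240.0, J0=-300.0, I0=1000.0)
print(L, Ksym, Jsym)  # 80.0, -50.0, ~111.1
```

High-order corrections, expressed in the paper through small ratios of high-order EOS parameters, would modify these bulk estimates.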
|
Limit cycles of planar polynomial vector fields have been an active area of
research for decades; the interest in periodic-orbit related dynamics comes
from Hilbert's 16th problem and the fact that oscillatory states are often
found in applications. We study the existence of limit cycles and their
coexistence with invariant algebraic curves in two families of Kukles systems,
via Lyapunov quantities and Melnikov functions of first and second order. We
show center conditions, as well as a connection between small- and
large-amplitude limit cycles arising in one of the families, in which the first
coefficients of the Melnikov function correspond to the first Lyapunov
quantities. We also provide an example of a planar polynomial system in which
the cyclicity is not fully controlled by the first nonzero Melnikov function.
|
Local and long-range structure and the optical and photoluminescence
properties of sol-gel-synthesized Ce1-xNixO2 nanostructures have been studied.
The crystal
structure, lattice strain and crystallite size have been analyzed. A decrease
in lattice parameter may be attributed to substitution of Ce with smaller Ni
ion. UV-Vis measurements are used to study the effect of Ni substitution on
the bandgap and disorder. The bandgap decreases with Ni substitution while
disorder
increases. The PL spectra show five major peaks attributed to various defect
states. The PL emission decreases with Ni substitution owing to an increase in
defects, which act as emission-quenching centers. The lattice disorder and
defects have been studied using Raman spectroscopy. Raman measurements show
that oxygen-vacancy-related defects increase with Ni substitution, which
causes changes in the optical and PL properties. Local structure measurements
show that Ni substitution leads to oxygen vacancies, which do change the host
lattice structure notably. Ce4+ to Ce3+ conversion increases with Ni
substitution.
|
When a novel treatment has successfully passed phase I, different options to
design subsequent phase II trials are available. One approach is a single-arm
trial, comparing the response rate in the intervention group against a fixed
proportion. Another alternative is to conduct a randomized phase II trial,
comparing the new treatment with placebo or the current standard. A significant
problem arises in both approaches when the investigated patient population is
very heterogeneous regarding prognostic factors. For the situation that a
substantial dataset of historical controls exists, we propose an approach to
enhance the classic single-arm trial design by including matched control
patients. The outcome of the observed study population can be adjusted based on
the matched controls with a comparable distribution of known confounders. We
propose an adaptive two-stage design with the options of early stopping for
futility and recalculation of the sample size taking the matching rate, number
of matching partners, and observed treatment effect into account. The
performance of the proposed design in terms of type I error rate, power, and
expected sample size is investigated via simulation studies based on a
hypothetical phase II trial investigating a novel therapy for patients with
acute myeloid leukemia.
|
A case is made that in encounters with the earth's atmosphere, astrophysical
little black holes (LBH) can manifest themselves as the core energy source of
ball lightning (BL). Relating the LBH incidence rate on earth to BL occurrence
has the potential of shedding light on the distribution of LBH in the universe,
and their velocities relative to the earth. Most BL features can be explained
by a testable LBH model. Analyses are presented to support this model. LBH
produce complex and many-faceted interactions in air directly and via their
exhaust, resulting in excitation, ionization, and radiation due to processes
such as gravitational and electrostatic tidal force, bremsstrahlung, pair
production and annihilation, orbital electron near-capture by interaction with
a charged LBH. Gravitational interaction of atmospheric atoms with LBH can
result in an enhanced cross-section for polarization and ionization. An
estimate for the power radiated by BL ~ Watts is in agreement with observation.
An upper limit is found for the largest masses that can produce ionization and
polarization excitation. It is shown that the LBH high power exhaust radiation
is not prominent and its effects are consistent with observations.
|
Video frame interpolation aims to synthesize one or multiple frames between
two consecutive frames in a video. It has a wide range of applications
including slow-motion video generation, frame-rate up-scaling and developing
video codecs. Some older works tackled this problem by assuming per-pixel
linear motion between video frames. However, objects often follow a non-linear
motion pattern in the real domain and some recent methods attempt to model
per-pixel motion by non-linear models (e.g., quadratic). A quadratic model can
also be inaccurate, especially in the case of motion discontinuities over time
(i.e. sudden jerks) and occlusions, where some of the flow information may be
invalid or inaccurate.
In our paper, we propose to approximate the per-pixel motion using a
space-time convolution network that is able to adaptively select the motion
model to be used. Specifically, we are able to softly switch between a linear
and a quadratic model. Towards this end, we use an end-to-end 3D CNN
encoder-decoder architecture over bidirectional optical flows and occlusion
maps to estimate the non-linear motion model of each pixel. Further, a motion
refinement module is employed to refine the non-linear motion and the
interpolated frames are estimated by a simple warping of the neighboring frames
with the estimated per-pixel motion. Through a set of comprehensive
experiments, we validate the effectiveness of our model and show that our
method outperforms state-of-the-art algorithms on four datasets (Vimeo, DAVIS,
HD and GoPro).
|
We have obtained broad-band near-infrared photometry for seven Galactic star
clusters (M92, M15, M13, M5, NGC1851, M71 and NGC6791) using the WIRCam
wide-field imager on the Canada-France-Hawaii Telescope, supplemented by images
of NGC1851 taken with HAWK-I on the VLT. In addition, 2MASS observations of the
[Fe/H] ~ 0.0 open cluster M67 were added to the cluster database. From the
resultant (V-J)-V and (V-Ks)-V colour-magnitude diagrams (CMDs), fiducial
sequences spanning the range in metallicity, -2.4 < [Fe/H] < +0.3, have been
defined which extend (for most clusters) from the tip of the red-giant branch
(RGB) to ~ 2.5 magnitudes below the main-sequence turnoff. These fiducials
provide a valuable set of empirical isochrones for the interpretation of
stellar population data in the 2MASS system. We also compare our newly derived
CMDs to Victoria isochrones that have been transformed to the observed plane
using recent empirical and theoretical colour-Teff relations. The models are
able to reproduce the entire CMDs of clusters more metal rich than [Fe/H] ~
-1.4 quite well, on the assumption of the same reddenings and distance moduli
that yield good fits of the same isochrones to Johnson-Cousins BV(RI)C
photometry. However, the predicted giant branches become systematically redder
than the observed RGBs as the cluster metallicity decreases. Possible
explanations for these discrepancies are discussed.
|
The detection of a nuclear spin in an individual molecule represents a key
challenge in physics and biology whose solution has been pursued for many
years. The small magnetic moment of a single nucleus and the unavoidable
environmental noise present the key obstacles for its realization. Here, we
demonstrate theoretically that a single nitrogen-vacancy (NV) center in diamond
can be used to construct a nano-scale single molecule spectrometer that is
capable of detecting the position and spin state of a single nucleus and can
determine the distance and alignment of a nuclear or electron spin pair. The
proposed device will find applications in single molecule spectroscopy in
chemistry and biology, such as in determining protein structure or monitoring
macromolecular motions, and can thus provide a tool to help unravel the
microscopic mechanisms underlying bio-molecular function.
|
Let $\mathcal{L}$ be a pencil of plane curves defined over $\mathbb{F}_q$
with no $\mathbb{F}_q$-points in its base locus. We investigate the number of
curves in $\mathcal{L}$ whose $\mathbb{F}_q$-points form a blocking set. When
the degree of the pencil is allowed to grow with respect to $q$, we show that
the geometric problem can be translated into a purely combinatorial problem
about disjoint blocking sets. We also study the same problem when the degree of
the pencil is fixed.
|
We pursue a classification of low-rank super-modular categories parallel to
that of modular categories. We classify all super-modular categories up to
rank $6$, and spin modular categories up to rank $11$. In particular, we show
that, up to fusion rules, there is exactly one non-split super-modular category
of rank $2$, $4$, and $6$, namely $PSU(2)_{4k+2}$ for $k=0,1$ and $2$. This
classification is facilitated by adapting and extending well-known constraints
from modular categories to super-modular categories, such as Verlinde and
Frobenius-Schur indicator formulae.
|
In effective models of loop quantum cosmology, the holonomy corrections are
associated with deformations of space-time symmetries. The most evident
manifestation of the deformations is the emergence of a Euclidean phase
accompanying the non-singular bouncing dynamics of the scale factor. In this
article, we compute the power spectrum of scalar perturbations generated in
this model, with a massive scalar field as the matter content. Instantaneous
and adiabatic vacuum-type initial conditions for scalar perturbations are
imposed in the contracting phase. The evolution through the Euclidean region is
calculated based on the extrapolation of the time direction pointed by the
vectors normal to the Cauchy hypersurface in the Lorentzian domains. The
obtained power spectrum is characterized by a suppression in the IR regime and
oscillations in the intermediate energy range. Furthermore, the speculative
extension of the analysis in the UV reveals a specific rise of the power.
|
This paper deals with the classical problem of density estimation on the real
line. Most of the existing papers devoted to minimax properties assume that the
support of the underlying density is bounded and known. But this assumption may
be very difficult to handle in practice. In this work, we show that, exactly as
a curse of dimensionality exists when the data lie in $\mathbb{R}^d$, there exists a
curse of support as well when the support of the density is infinite. As for
the dimensionality problem where the rates of convergence deteriorate when the
dimension grows, the minimax rates of convergence may deteriorate as well when
the support becomes infinite. This problem is not purely theoretical since the
simulations show that the support-dependent methods are really affected in
practice by the size of the density support, or by the weight of the density
tail. We propose a method based on a biorthogonal wavelet thresholding rule
that is adaptive with respect to the nature of the support and the regularity
of the signal, but that is also robust in practice to this curse of support.
The threshold proposed here is very accurately calibrated so that the
gap between optimal theoretical and practical tuning parameters is almost
filled.
|
In this paper we introduce and study a categorical action of the positive
part of the Heisenberg Lie algebra on categories of modules over rational
Cherednik algebras associated to symmetric groups. We show that the generating
functor for this action is exact. We then produce a categorical Heisenberg
action on the categories $\mathcal{O}$ and show it is the same as one
constructed by Shan and Vasserot. Finally, we reduce modulo a large prime $p$.
We show that the functors constituting the action of the positive half of the
Heisenberg algebra send simple objects to semisimple ones, and we describe
these semisimple objects.
|
Creating sound zones has been an active research field since the idea was
first proposed. So far, most sound zone control methods rely on either an
optimization of physical metrics such as acoustic contrast and signal
distortion or a mode decomposition of the desired sound field. By using these
types of methods, approximately 15 dB of acoustic contrast between the
reproduced sound field in the target zone and its leakage to other zone(s) has
been reported in practical set-ups, but this is typically not high enough to
satisfy the people inside the zones. In this paper, we propose a sound zone
control method shaping the leakage errors so that they are as inaudible as
possible for a given acoustic contrast. The shaping of the leakage errors is
performed by taking the time-varying input signal characteristics and the human
auditory system into account when the loudspeaker control filters are
calculated. We show how this shaping can be performed using variable span
trade-off filters, and we show theoretically how these filters can be used for
trading signal distortion in the target zone for acoustic contrast. The
proposed method is evaluated based on physical metrics such as acoustic
contrast and perceptual metrics such as STOI. The computational complexity and
processing time of the proposed method for different system set-ups are also
investigated. Lastly, the results of a MUSHRA listening test are reported. The
test results show that the proposed method provides more than 20% perceptual
improvement compared to existing sound zone control methods.
|
We calculate the operating parameters of a transition edge sensor that is
mounted on a thin dielectric membrane with the assumption that the phononic
heat transport in the membrane is ballistic. Our treatment uses the correct
phonon modes from elasticity theory (Lamb-modes), and spans the transition from
3D to 2D behavior. The phonon cooling power and conductance have a global
minimum as function of membrane thickness, which leads to an optimal value for
the membrane thickness with respect to noise equivalent power at a fixed
operating temperature. The energy resolution of a calorimeter will not be
affected strongly, but, somewhat counterintuitively, the effective time
constant can be reduced by decreasing the membrane thickness in the 2D limit.
|
We use momentum transfer arguments to predict the friction factor $f$ in
two-dimensional turbulent soap-film flows with rough boundaries (an analogue of
three-dimensional pipe flow) as a function of Reynolds number Re and roughness
$r$, considering separately the inverse energy cascade and the forward
enstrophy cascade. At intermediate Re, we predict a Blasius-like friction
factor scaling of $f\propto\textrm{Re}^{-1/2}$ in flows dominated by the
enstrophy cascade, distinct from the energy cascade scaling of
$\textrm{Re}^{-1/4}$. For large Re, $f \sim r$ in the enstrophy-dominated case.
We use conformal map techniques to perform direct numerical simulations that
are in satisfactory agreement with theory, and exhibit data collapse scaling of
roughness-induced criticality, previously shown to arise in the 3D pipe data of
Nikuradse.
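The two intermediate-Re scalings quoted above can be stated compactly in code. The following sketch is an illustration only; the prefactor `C` is a hypothetical placeholder, not a fitted value from the simulations:

```python
# Blasius-like friction-factor scalings at intermediate Re, as quoted above:
#   enstrophy cascade: f ~ C * Re**(-1/2)
#   energy cascade:    f ~ C * Re**(-1/4)
# The prefactor C is a hypothetical placeholder, not a value from the paper.

def friction_factor(Re, cascade, C=1.0):
    exponent = {"enstrophy": -0.5, "energy": -0.25}[cascade]
    return C * Re ** exponent

# At Re = 1e4 the enstrophy-cascade friction factor has decayed further:
print(friction_factor(1.0e4, "enstrophy"))  # ~0.01
print(friction_factor(1.0e4, "energy"))     # ~0.1
```

In the large-Re, enstrophy-dominated regime the abstract instead predicts the roughness-controlled behavior $f \sim r$, which this intermediate-Re sketch does not cover.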
|
Pulsar Wind Nebulae (PWNe) shine at multi-wavelengths and are expected to
constitute the largest class of gamma-ray sources in our Galaxy. They are known
to be very efficient particle accelerators: the Crab nebula, the prototype of
the PWN class, is the only firmly identified leptonic PeVatron in the Galaxy to
date, and most of the PeVatrons recently detected by LHAASO appear to be
compatible with a pulsar origin. PWNe have been shown to be associated with
the formation of misaligned X-ray tails and TeV halos, a sign of the efficient
escape of energetic particles from the PWN into the surrounding medium. With
the advent of the Cherenkov Telescope Array we expect that ~200 new PWNe will
be detected. Being able to correctly model their multi-wavelength spectral
properties and their spatial and spectral morphology at gamma-rays is therefore
timely. This in particular means we should be able to account for their
different evolutionary phases, and to correctly determine the influence they
have on the spectral properties of the source. This indeed reflects directly on
the expectation of how many PWNe will be detected at gamma-rays. Finally, the
identification of PWNe in future gamma-ray data is relevant not only for their
scientific importance, but also for allowing the identification of less
prominent sources that might be hidden by the background of non-identified
PWNe.
|
The antiProton Unstable Matter Annihilation experiment (PUMA) at CERN aims at
investigating the nucleon composition in the matter density tail of radioactive
as well as stable isotopes by use of low-energy antiproton-nucleon annihilation
processes. For this purpose, antiprotons provided by the Extra Low ENergy
Antiproton (ELENA) facility will be trapped together with the ions of interest.
While exotic ions will be obtained by the Isotope mass Separator On-Line DEvice
(ISOLDE), stable ions will be delivered from an offline ion source setup
designed for this purpose. This allows the proposed technique to be applied to
a variety of stable nuclei and for reference measurements. For beam
purification, the ion source setup includes a multi-reflection time-of-flight
mass spectrometer (MR-ToF MS). Supported by SIMION simulations, an earlier
MR-ToF MS design has been modified to meet the requirements of PUMA. During
commissioning of the new MR-ToF device with Ar$^+$ ions, mass resolving powers
in excess of 50,000 have been obtained after 150 revolutions, limited by the
chopping of the continuous beam from an electron impact ionisation source.
|
Negative index metamaterials (NIMs) give rise to unusual and intriguing
properties and phenomena, which may lead to important applications such as
superlens, subwavelength cavity and slow light devices. However, the negative
refractive index in metamaterials normally requires a stringent condition of
simultaneously negative permittivity and negative permeability. A new class of
negative index metamaterials - chiral NIMs, have been recently proposed. In
contrast to the conventional NIMs, chiral NIMs do not require the above
condition, thus presenting a very robust route toward negative refraction. Here
we present the first experimental demonstration of a chiral metamaterial
exhibiting negative refractive index down to n=-5 at terahertz frequencies,
with only a single chiral resonance. The strong chirality present in the
structure lifts the degeneracy for the two circularly polarized waves and
relieves the double negativity requirement. Chiral NIMs are predicted to possess
intriguing electromagnetic properties that go beyond the traditional NIMs, such
as opposite signs of refractive indices for the two circular polarizations and
negative reflection. The realization of terahertz chiral NIMs offers new
opportunities for investigations of their novel electromagnetic properties, as
well as important terahertz device applications.
|
We give a new proof, independent of Lin's theorem, of the Segal conjecture
for the cyclic group of order two. The key input is a calculation, as a Hopf
algebroid, of the Real topological Hochschild homology of $\mathbb{F}_2$. This
determines the $\mathrm{E}_2$-page of the descent spectral sequence for the map
$\mathrm{N}\mathbb{F}_2 \to \mathbb{F}_2$, where $\mathrm{N}\mathbb{F}_2$ is
the $C_2$-equivariant Hill--Hopkins--Ravenel norm of $\mathbb{F}_2$. The
$\mathrm{E}_2$-page represents a new upper bound on the $RO(C_2)$-graded
homotopy of $\mathrm{N}\mathbb{F}_2$, from which the Segal conjecture is an
immediate corollary.
|
For first order differential equations of the form $y'=\sum_{p=0}^P
F_p(x)y^p$ and second order homogeneous linear differential equations
$y''+a(x)y'+b(x)y=0$ with locally integrable coefficients having asymptotic
(possibly divergent) power series when $|x|\to\infty$ on a ray $\arg(x)=$const,
under some further assumptions, it is shown that, on the given ray, there is a
one-to-one correspondence between true solutions and (complete) formal
solutions. The correspondence is based on asymptotic inequalities which are
required to be uniform in $x$ and optimal with respect to certain weights.
|
Finite smooth digraphs, that is, finite directed graphs without sources and
sinks, can be partially ordered via pp-constructability. We give a complete
description of this poset and, in particular, we prove that it is a
distributive lattice. Moreover, we show that in order to separate two smooth
digraphs in our poset it suffices to show that the polymorphism clone of one of
the digraphs satisfies a prime cyclic loop condition that is not satisfied by
the polymorphism clone of the other. Furthermore, we prove that the poset of
cyclic loop conditions, ordered by their strength for clones, is a distributive
lattice, too.
|
We systematically investigated the phonon and electron transport properties
of monolayer InSe and its Janus derivatives including monolayer In2SSe and
In2SeTe by first-principles calculations. The breaking of mirror symmetry
produce a distinguishable A1 peak in the Raman spectra of monolayer In2SSe and
In2SeTe. The room-temperature thermal conductivity (\k{appa}) of monolayer
InSe, In2SSe and In2SeTe is 44.6, 46.9, and 29.9 W/(m K), respectively. There
is a competition effect between atomic mass, phonon group velocity and phonon
lifetime. The $\kappa$ can be further effectively modulated by sample size for
thermoelectric applications. Meanwhile, monolayer In2SeTe exhibits a direct
band gap and higher electron mobility than monolayer
InSe, due to the smaller electron effective mass caused by tensile strain on
the Se side. These results indicate that 2D Janus group-III chalcogenides can
provide a platform to design new electronic, optoelectronic and
thermoelectric devices.
|
Rich and massive clusters of galaxies at intermediate redshift are capable of
magnifying and distorting the images of background galaxies. A comparison of
different mass estimators among these clusters can provide useful information
about the distribution and composition of cluster matter and their dynamical
evolution. Using the largest sample of lensing clusters yet drawn from the
literature, we compare the gravitating masses of clusters derived from the
strong/weak gravitational lensing phenomena, from the X-ray measurements based
on the assumption of hydrostatic equilibrium, and from the conventional
isothermal sphere model for the dark matter profile characterized by the
velocity dispersion and core radius of galaxy distributions in clusters. While
there is an excellent agreement between the weak lensing, X-ray and isothermal
sphere model determined cluster masses, these methods are likely to
underestimate the gravitating masses enclosed within the central cores of
clusters by a factor of 2--4 as compared with the strong lensing results. Such
a mass discrepancy has probably arisen from the inappropriate applications of
the weak lensing technique and the hydrostatic equilibrium hypothesis to the
central regions of clusters as well as an unreasonably large core radius for
both luminous and dark matter profiles. Nevertheless, it is pointed out that
these cluster mass estimators may be safely applied on scales greater than the
core sizes. Namely, the overall clusters of galaxies at intermediate redshift
can still be regarded as the dynamically relaxed systems, in which the velocity
dispersion of galaxies and the temperature of X-ray emitting gas are good
indicators of the underlying gravitational potentials of clusters.
|
We present an analysis of the ENEAR sample of peculiar velocities of
elliptical galaxies, obtained with D_n-\sigma distances. We use the velocity
correlation function to analyze the statistics of the field objects'
velocities, while the analysis of the cluster data is based on the estimate of
their rms peculiar velocity, Vrms. The statistics of the model velocity field
is parameterized by the amplitude, \eta_8=\sigma_8 \Omega_m^{0.6}, and by the
shape parameter, \Gamma. From the velocity correlation statistics we obtain
\eta_8=0.51_{-0.09}^{+0.24} for \Gamma=0.25 at the 2\sigma level. Even though
less constraining, a consistent result is obtained by comparing the measured
Vrms of clusters to linear theory predictions. For \Gamma=0.25 we find
\eta_8=0.63_{-0.19}^{+0.22} at 1\sigma. Overall, our results point toward a
statistical concordance of the cosmic flows traced by spirals and early-type
galaxies, with galaxy distances estimated using TF and D_n-\sigma distance
indicators, respectively.
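The amplitude parameterization above, \eta_8=\sigma_8 \Omega_m^{0.6}, is a one-line relation. As a hypothetical illustration (the Omega_m value below is an assumption for the example, not a result of the paper), sigma_8 follows from a measured eta_8 as:

```python
# The amplitude parameter quoted above, eta_8 = sigma_8 * Omega_m**0.6,
# inverted to recover sigma_8 for an assumed matter density Omega_m.
# The Omega_m value used below is a hypothetical placeholder.

def sigma8_from_eta8(eta8, Omega_m):
    return eta8 / Omega_m ** 0.6

# e.g. the central value eta_8 = 0.51 with an assumed Omega_m = 0.3:
print(round(sigma8_from_eta8(0.51, 0.3), 2))  # 1.05
```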
|
We collect in one place a variety of known and folklore results in enriched
model category theory and add a few new twists. The central theme is a general
procedure for constructing a Quillen adjunction, often a Quillen equivalence,
between a given V-model category and a category of enriched presheaves in V,
where V is any good enriching category. For example, we rederive the result of
Schwede and Shipley that reasonable stable model categories are Quillen
equivalent to presheaf categories of spectra (alias categories of module
spectra) under more general hypotheses. The technical improvements and
modifications of general model categorical results given here are applied to
equivariant contexts in a pair of sequels, where we indicate various directions
of application.
|
We show in this Letter that the spectral details of the FUV radiation fields
have a large impact on the chemistry of protoplanetary disks surrounding T
Tauri stars. We show that the strength of a realistic stellar FUV field is
significantly lower than typically assumed in chemical calculations and that
the radiation field is dominated by strong line emission, most notably Lyman
alpha radiation. The effects of the strong Lyman alpha emission on the chemical
equilibrium in protoplanetary disks have previously gone unrecognized. We
discuss the impact of this radiation on molecular observations in the context
of a radiative transfer model that includes both direct attenuation and
scattering. In particular, Lyman alpha radiation will directly dissociate water
vapor and may contribute to the observed enhancements of CN/HCN in disks.
|
Quantum optical metrology aims to identify ultimate sensitivity bounds for
the estimation of parameters encoded into quantum states of the electromagnetic
field. In many practical applications, including imaging, microscopy, and
remote sensing, the parameter of interest is not only encoded in the quantum
state of the field, but also in its spatio-temporal distribution, i.e. in its
mode structure. In this mode-encoded parameter estimation setting, we derive an
analytical expression for the quantum Fisher information valid for arbitrary
multimode Gaussian fields. To illustrate the power of our approach, we apply
our results to the estimation of the transverse displacement of a beam and to
the temporal separation between two pulses. For these examples, we show how the
estimation sensitivity can be enhanced by adding squeezing into specific modes.
|
In this work, we investigate the spectra of gravitational waves produced by
chiral symmetry breaking in a dark quantum chromodynamics (dQCD) sector. The
dark pion ($\pi$) can be a dark matter candidate as a weakly interacting
massive particle (WIMP) or a strongly interacting massive particle (SIMP). For
a WIMP
scenario, we introduce the dQCD sector coupled to the standard model (SM)
sector with classical scale invariance and investigate the annihilation process
of the dark pion via the $2\pi \to 2\,\text{SM}$ process. For a SIMP scenario,
we investigate the $3\pi \to 2\pi$ annihilation process of the dark pion as a
SIMP using chiral perturbation theory. We find that in the WIMP scenario the
gravitational wave background spectra can be observed by future space
gravitational wave antennas. On the other hand, when the dark pion is the SIMP
dark matter with the constraints for the chiral perturbative limit and
pion-pion scattering cross section, the chiral phase transition becomes
crossover and then the gravitational waves are not produced.
|
Radio frequency (RF) wireless power transfer (WPT) is a promising technology
for sustainable support of massive Internet of Things (IoT). However, RF-WPT
systems are characterized by low efficiency due to channel attenuation, which
can be mitigated by precoders that adjust the transmission directivity. This
work considers a multi-antenna RF-WPT system with multiple non-linear energy
harvesting (EH) nodes with energy demands changing over discrete time slots.
This leads to the charging scheduling problem, which involves choosing the
precoders at each slot to minimize the total energy consumption and meet the EH
requirements. We model the problem as a Markov decision process and propose a
solution relying on a low-complexity beamforming and deep deterministic policy
gradient (DDPG). The results show that the proposed beamforming achieves
near-optimal performance with low computational complexity, and the DDPG-based
approach converges with the number of episodes and reduces the system's power
consumption, while the outage probability and the power consumption increase
with the number of devices.
|
The Higgs boson couplings to bottom and top quarks have been measured and
agree well with the Standard Model predictions. Decays to lighter quarks and
gluons, however, remain elusive. Observing these decays is essential to
complete the picture of the Higgs boson interactions. In this work, we present
the perspectives for the 14 TeV LHC to observe the Higgs boson decay to gluon
jets by combining convolutional neural networks, trained to recognize abstract
jet images built from particle-flow information, with boosted decision trees
using kinematic information from Higgs-strahlung $ZH\to \ell^+\ell^- + gg$
events. We show that this approach might be able to observe Higgs to gluon
decays with a significance of around $2.4\sigma$, significantly improving
previous prospects based on cut-and-count analyses. An upper
bound of $BR(H\to gg)\leq 1.74\times BR^{SM}(H\to gg)$ at 95\% confidence level
after 3000 fb$^{-1}$ of data is obtained using these machine learning
techniques.
|
We have made experimental measurements of the electrical conductivity, pH and
relative magnetic susceptibility of aqueous solutions of 24 Indian spices. The
measured values of electrical conductivity of these spices are found to be
linearly related to their ash content and bulk calorific values reported in the
literature. The physiological relevance of the pH and diamagnetic
susceptibility of spices consumed as food or medicine is also discussed.
|
We studied the hydration of a single methanol molecule in aqueous solution by
first-principles DFT-based molecular dynamics simulation. The calculations show
that the local structural and short-time dynamical properties of the water
molecules remain almost unchanged by the presence of the methanol, confirming
the observation from recent experimental structural data for dilute solutions.
We also see, in accordance with this experimental work, a distinct shell of
water molecules that consists of about 15 molecules. We found no evidence for a
strong tangential ordering of the water molecules in the first hydration shell.
|
Matrix field theory is a combinatorially non-local field theory which has
recently been found to be a non-trivial but solvable QFT example. To generalize
such non-perturbative structures to other models, a more combinatorial
understanding of Dyson-Schwinger equations and their solutions is of high
interest. To this end we consider combinatorial Dyson-Schwinger equations
manifestly relying on the Hopf-algebraic structure of perturbative
renormalization. We find that these equations are fully compatible with
renormalization, relying only on the superficially divergent diagrams which are
planar ribbon graphs, i.e. decompleted dual combinatorial maps. Still, they are
of a similar kind as in realistic models of local QFT, featuring in particular
an infinite number of primitive diagrams as well as graph-dependent
combinatorial factors.
|
Most representation learning algorithms for language and image processing are
local, in that they identify features for a data point based on surrounding
points. Yet in language processing, the correct meaning of a word often depends
on its global context. As a step toward incorporating global context into
representation learning, we develop a representation learning algorithm that
incorporates joint prediction into its technique for producing features for a
word. We develop efficient variational methods for learning Factorial Hidden
Markov Models from large texts, and use variational distributions to produce
features for each word that are sensitive to the entire input sequence, not
just to a local context window. Experiments on part-of-speech tagging and
chunking indicate that the features are competitive with or better than
existing state-of-the-art representation learning methods.
|
We discuss and derive the continuous Becchi-Rouet-Stora-Tyutin (BRST) and
anti-BRST symmetry transformations for the Jackiw-Pi (JP) model of
(2+1)-dimensional (3D) massive non-Abelian 1-form gauge theory by exploiting
the standard technique of the (anti-)chiral superfield approach (ACSA) to the
BRST formalism, where a few appropriate and specific sets of (anti-)BRST
invariant quantities (i.e. physical quantities at the quantum level) play a
very important role. We provide an explicit derivation of the nilpotency and
absolute anticommutativity properties of the (anti-)BRST conserved charges and
of the existence of the Curci-Ferrari (CF) condition within the realm of the
ACSA to BRST formalism, where we take only a single Grassmannian variable into
account. We also provide a clear proof of the (anti-)BRST invariances of the
coupled (but equivalent) Lagrangian densities within the framework of the ACSA
to BRST approach, where the emergence of the CF-condition is observed.
|
Composites of the metal-dielectric and superconductor-dielectric type are
studied in the quasistatic approximation. The dielectric response is described
by the spectral function $G(n,x)$, which captures the effects of the
concentration $x$ (of the metallic or superconducting particles) on the
dielectric function, as well as shape effects. The parameter $n$ plays the role
of the depolarisation factor for dielectric materials; for metals it is a
factor encoding effects such as the shape and the topology of the composite.
There exists a percolation transition at $x_{c}= \frac{1}{3}$ which leads to
metallic-like absorption for the composite at concentrations $x > x_{c}$. At
low frequencies the divergence with frequency persists even in the presence of
dielectric particles above the percolation concentration. In the superconductor
case the spectral function $G(n,x)$ may also include Josephson junction
effects. In both types of composite we assume two kinds of spheroidal
particles: metallic (superconducting) ones and dielectric ones. The dielectric
function of the dielectric material is constant in both cases, while the
dielectric functions of the metal and of the superconductor take the
well-known forms for metals and for a classical superconductor. Below the
percolation threshold the dielectric properties are modified by the metallic
particles. At very low temperatures and low concentrations $x$ of the
superconductor we obtain the effective dielectric constant; the absorptive part
vanishes in our simple case. The real part of the dielectric function increases
with the concentration of the superconducting spheres, and its frequency
dependence is quadratic, giving a low-frequency tail.
|
The construction of a new detector is proposed to extend the capabilities of
ALICE in the high transverse momentum (pT) region. This Very High Momentum
Particle Identification Detector (VHMPID) performs charged hadron
identification on a track-by-track basis in the 5 GeV/c < p < 25 GeV/c momentum
range and provides ALICE with new opportunities to study parton-medium
interactions at LHC energies. The VHMPID covers up to 30% of the ALICE central
barrel and presents sufficient acceptance for triggered- and tagged-jet
studies, allowing for the first time identified charged hadron measurements in
jets. This Letter of Intent summarizes the physics motivations for such a
detector as well as its layout and integration into ALICE.
|
This paper is devoted to a complete unified study of several weak forms of the
$\partial\bar{\partial}$-Lemma on compact complex manifolds.
|
We study the geometry of 4d N=1 SCFT's arising from compactification of 6d
(1,0) SCFT's on a Riemann surface. We show that the conformal manifold of the
resulting theory is characterized, in addition to moduli of complex structure
of the Riemann surface, by the choice of a connection for a vector bundle on
the surface arising from flavor symmetries in 6d. We exemplify this by
considering the case of 4d N=1 SCFT's arising from M5 branes probing Z_k
singularity compactified on a Riemann surface. In particular, we study in
detail the four dimensional theories arising in the case of two M5 branes on
Z_2 singularity. We compute the conformal anomalies and indices of such
theories in 4d and find that they are consistent with expectations based on
anomaly and the moduli structure derived from the 6 dimensional perspective.
|
Light-fidelity (LiFi) is a wireless communication technology that employs
both infrared and visible light spectra to support multiuser access and user
mobility. Considering the small wavelength of light, the optical channel is
affected by the random orientation of a user equipment (UE). In this paper, a
random process model for changes in the UE orientation is proposed based on
data measurements. We show that the coherence time of the random orientation is
in the order of hundreds of milliseconds. Therefore, an indoor optical wireless
channel can be treated as a slowly-varying channel as its delay spread is
typically in the order of nanoseconds. A study of the orientation model on the
performance of direct-current-biased orthogonal frequency-division multiplexing
(DC-OFDM) is also presented. The performance analysis of the DC-OFDM system
incorporates the effect of diffuse link due to reflection and blockage by the
user. The results show that the diffuse link and the blockage have significant
effects, especially if the UE is located relatively far away from an access
point (AP). It is shown that the effect is notable if the horizontal distance
between the UE and the AP is greater than $1.5$ m in a typical
$5\times3.5\times3$ m$^3$ indoor room.
|
In this note, we show that any distributive lattice is isomorphic to the set
of reachable configurations of an Edge Firing Game. Together with the result of
James Propp, saying that the set of reachable configurations of any Edge Firing
Game is always a distributive lattice, this shows that the two concepts are
equivalent.
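The distributive law that characterizes such lattices can be checked directly on a small example. The sketch below is illustrative only and is not tied to the Edge Firing Game construction of the note: it verifies distributivity for the lattice of divisors of 12, with meet = gcd and join = lcm.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# The divisors of 12 form a lattice under divisibility,
# with meet = gcd and join = lcm.
divisors = [d for d in range(1, 13) if 12 % d == 0]

# Distributivity: a ∧ (b ∨ c) == (a ∧ b) ∨ (a ∧ c) for all triples.
distributive = all(
    gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
    for a in divisors for b in divisors for c in divisors
)
print(distributive)  # True
```

Divisor lattices are a standard family of distributive lattices; the same exhaustive check applies to any finite lattice given its meet and join operations.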
|
I use Bridgeland's definition of a stability condition on a triangulated
category to investigate the stability of D-branes on Calabi-Yau cones given by
the canonical line bundle over a del Pezzo surface. In this context, I prove
the existence of the decay of a D3-brane into a set of fractional branes. This
is an important aspect of the derivation of quiver gauge theories from branes
at singularities via the technique of equivalences of categories. Some
important technical aspects of this equivalence are discussed. I also prove
that the representations corresponding to skyscraper sheaves supported off the
zero section are simple.
|
The small but measurable effect of weak gravitational lensing on the cosmic
microwave background radiation provides information about the large-scale
distribution of matter in the universe. We use the all-sky distribution of
matter, as represented by the {\em convergence map} inferred from the CMB
lensing measurement by the Planck survey, to test the fundamental assumption of
Statistical Isotropy (SI) of the universe. For the analysis we use the $\alpha$
statistic that is devised from the contour Minkowski tensor, a tensorial
generalization of the scalar Minkowski functional, the contour length. In
essence, the $\alpha$ statistic captures the ellipticity of isofield contours
at any chosen threshold value of a smooth random field and provides a measure
of anisotropy. The SI of the observed convergence map is tested against the
suite of realistic simulations of the convergence map provided by the Planck
collaboration. We first carry out a global analysis using the full sky data
after applying the galactic and point sources mask. We find that the observed
data is consistent with SI. Further, we carry out a local search for departure
from SI in small patches of the sky using $\alpha$. This analysis reveals
several sky patches which exhibit deviations from simulations with statistical
significance higher than 95\% confidence level (CL). Our analysis indicates
that the source of the anomalous behaviour of most of the outlier patches is
inaccurate estimation of noise. We identify two outlier patches which exhibit
anomalous behaviour originating from departure from SI at higher than 95\% CL.
Most of the anomalous patches are found to be located roughly along the
ecliptic plane or in proximity to the ecliptic poles.
|
Segmentation algorithms based on an energy minimisation framework often
depend on a scale parameter which balances a fit to data against a regularising
term. Irregular pyramids are defined as a stack of successively reduced graphs.
Within this framework, the scale is often defined implicitly as the height in
the pyramid. However, each level of an irregular pyramid cannot usually be
readily associated with the global optimum of an energy or a global criterion
on the base-level graph. This drawback is addressed by the scale-set framework
designed by Guigues. The methods designed by this author allow one to build a
hierarchy and to design cuts within this hierarchy which globally minimise an
energy. This paper studies the influence of the construction scheme of the
initial hierarchy on the resulting optimal cuts. We propose one sequential and
one parallel method, with two variations of each. Our sequential methods
provide partitions near the global optima, while the parallel methods require
less execution time than the sequential method of Guigues, even on sequential
machines.
|
We present a new realization of inverted neutrino mass hierarchy based on
$S_3 \times {\cal U}(1)$ flavor symmetry. In this scenario, the deviation of
the solar oscillation angle from $\pi/4$ is correlated with the value of
$\theta_{13}$, as they are both induced by a common mixing angle in the charged
lepton sector. We find several interesting predictions: $\theta_{13}\geq
0.13$, $\sin^2\theta_{12}\geq 0.31$, $\sin^2\theta_{23}\simeq 0.5$ and $0\leq
\cos \delta \leq 0.7$ for the neutrino oscillation parameters, and $0.01~{\rm
eV} \lesssim m_{\beta\beta} \lesssim 0.02~{\rm eV}$ for the effective neutrino
mass in neutrinoless double $\beta$-decay. We show that our
scenario can also explain naturally the observed baryon asymmetry of the
universe via resonant leptogenesis. The masses of the decaying right--handed
neutrinos can be in the range $(10^3 - 10^7)$ GeV, which would avoid the
generic gravitino problem of supersymmetric models.
|
In this work, we present and study a new framework for online learning in
systems with multiple users that provide user anonymity. Specifically, we
extend the notion of bandits to obey the standard $k$-anonymity constraint by
requiring each observation to be an aggregation of rewards for at least $k$
users. This provides a simple yet effective framework where one can learn a
clustering of users in an online fashion without observing any user's
individual decision. We initiate the study of anonymous bandits and provide the
first sublinear regret algorithms and lower bounds for this setting.
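A minimal sketch of the aggregation constraint described above (illustrative only: the Bernoulli arms, their means, and the value of $k$ are assumptions, and this is not the paper's algorithm). Each observation is the aggregate reward of at least $k$ users, so no individual decision is revealed:

```python
import random

random.seed(0)
k = 5                    # anonymity parameter: observe sums over >= k users
true_means = [0.2, 0.8]  # hypothetical Bernoulli arm means

def aggregated_pull(arm, users=k):
    """Return only the sum of rewards over `users` individuals.

    The learner never sees any single user's reward, so each
    individual decision stays k-anonymous."""
    return sum(random.random() < true_means[arm] for _ in range(users))

# The aggregate still identifies the better arm in expectation:
pulls = 2000
avg = sum(aggregated_pull(1) for _ in range(pulls)) / (pulls * k)
print(avg)  # close to the arm mean 0.8 in expectation
```

The point of the setting is that learning a clustering of users remains possible even though only such $k$-wise sums are ever observed.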
|
We present the first determination of the hadronic decays of the lightest
exotic $J^{PC}=1^{-+}$ resonance in lattice QCD. Working with SU(3) flavor
symmetry, where the up, down and strange quark masses approximately match the
physical strange-quark mass giving $m_\pi \sim 700$ MeV, we compute
finite-volume spectra on six lattice volumes which constrain a scattering
system featuring eight coupled channels. Analytically continuing the scattering
amplitudes into the complex energy plane, we find a pole singularity
corresponding to a narrow resonance which shows relatively weak coupling to the
open pseudoscalar--pseudoscalar, vector--pseudoscalar and vector--vector decay
channels, but large couplings to at least one kinematically-closed
axial-vector--pseudoscalar channel. Attempting a simple extrapolation of the
couplings to physical light-quark mass suggests a broad $\pi_1$ resonance
decaying dominantly through the $b_1 \pi$ mode with much smaller decays into
$f_1 \pi$, $\rho \pi$, $\eta' \pi$ and $\eta \pi$. A large total width is
potentially in agreement with the experimental $\pi_1(1564)$ candidate state,
observed in $\eta \pi$, $\eta' \pi$, which we suggest may be heavily suppressed
decay channels.
|
The IceCube Neutrino Observatory, which instruments 1$\,$km$^3$ of clear ice
at the geographic South Pole, was mainly designed to detect particles with
energies in the multi-GeV to PeV range. Due to ice temperatures between
$-20^\circ$C and $-43^\circ$C and the low radioactivity of the ice, the dark
noise rates of the 5160 photomultiplier tubes forming the IceCube lattice are
of order 500 Hz, which is particularly low for 10 inch photomultipliers.
Therefore, IceCube can extend its searches to bursts of
$\mathcal{O}$(10$\,$MeV) neutrinos lasting several seconds, which are expected
to be produced by Galactic core collapse supernovae. By observing a uniform
rise in all photomultiplier rates, IceCube can provide a particularly high
statistical precision for the neutrino rate from supernovae in the inner part
of our Galaxy ($<$ 20 kpc). In this paper, the tools and the method to study
potential obscured or failed core collapse supernovae in our Galaxy are
presented. The analysis will be based on 3911 days of IceCube data taken
between April 17, 2008 and December 31, 2018.
|
This is an overview of mathematical heritage of Sergey Naboko in the area of
functional models of non-self-adjoint operators. It covers the works by Sergey
in model construction, the analysis of absolutely continuous and singular
spectra and the construction of the scattering theory in model terms.
|
With the rise of geospatial big data, new narratives of cities based on
spatial networks and flows have replaced the traditional focus on locations.
While plenty of research has empirically analyzed network structures, a
state-of-the-art synthesis of applicable insights and methods of spatial
networks in the planning context is lacking. In this chapter, we reviewed the
theories, concepts, methods, and applications of spatial network analysis in
cities and their insights for planners from four areas of concern: spatial
structures; urban infrastructure optimization; indications of economic wealth,
social capital, and residential mobility; and public health control (especially
COVID-19). We also outlined four challenges that planners face when turning
planning knowledge from spatial networks into action: data openness and
privacy, linkage to direct policy implications, lack of civic engagement, and
the difficulty of visualizing and integrating with GIS. Finally, we envisioned how
spatial networks can be integrated into a collaborative planning framework.
|
We present {\AE}THEL, a semantic compositionality dataset for written Dutch.
{\AE}THEL consists of two parts. First, it contains a lexicon of supertags for
about 900 000 words in context. The supertags correspond to types of the simply
typed linear lambda-calculus, enhanced with dependency decorations that capture
grammatical roles supplementary to function-argument structures. On the basis
of these types, {\AE}THEL further provides 72 192 validated derivations,
presented in four formats: natural-deduction and sequent-style proofs, linear
logic proofnets and the associated programs (lambda terms) for meaning
composition. {\AE}THEL's types and derivations are obtained by means of an
extraction algorithm applied to the syntactic analyses of LASSY Small, the gold
standard corpus of written Dutch. We discuss the extraction algorithm and show
how `virtual elements' in the original LASSY annotation of unbounded
dependencies and coordination phenomena give rise to higher-order types. We
suggest some example use cases highlighting the benefits of a type-driven
approach at the syntax-semantics interface. The following resources are
open-sourced with {\AE}THEL: the lexical mappings between words and types, a
subset of the dataset consisting of 7 924 semantic parses, and the Python code
that implements the extraction algorithm.
|
With the growing prevalence of large language models, it is increasingly
common to annotate datasets for machine learning using pools of crowd raters.
However, these raters often work in isolation as individual crowdworkers. In
this work, we regard annotation not merely as inexpensive, scalable labor, but
rather as a nuanced interpretative effort to discern the meaning of what is
being said in a text. We describe a novel, collaborative, and iterative
annotator-in-the-loop methodology for annotation, resulting in a 'Bridging
Benchmark Dataset' of comments relevant to bridging divides, annotated from
11,973 textual posts in the Civil Comments dataset. The methodology differs
from popular anonymous crowd-rating annotation processes due to its use of an
in-depth, iterative engagement with seven US-based raters to (1)
collaboratively refine the definitions of the to-be-annotated concepts and then
(2) iteratively annotate complex social concepts, with check-in meetings and
discussions. This approach addresses some shortcomings of current anonymous
crowd-based annotation work, and we present empirical evidence of the
performance of our annotation process in the form of inter-rater reliability.
Our findings indicate that collaborative engagement with annotators can enhance
annotation methods, as opposed to relying solely on isolated work conducted
remotely. We provide an overview of the input texts, attributes, and annotation
process, along with the empirical results and the resulting benchmark dataset,
categorized according to the following attributes: Alienation, Compassion,
Reasoning, Curiosity, Moral Outrage, and Respect.
|
The RHESSI experiment uses rotational modulation for X-ray and gamma-ray
imaging of solar eruptions. In order to disentangle rotational modulation from
intrinsic time variation, an unbiased linear estimator for the spatially
integrated photon flux is proposed. The estimator mimics a flat instrumental
response under a Gaussian prior, with achievable flatness depending on the
counting noise. The amount of regularization is primarily given by the
modulation-to-Poisson levels of fluctuations, and is only weakly affected by
the Bayesian prior. Monte Carlo simulations demonstrate that the mean relative
error of the estimator reaches the Poisson limit, and real-data applications
are shown.
|
We show that the stress-energy tensor for a superstring in the AdS5xS5
background is written in a supersymmetric generalized "Sugawara" form. It is
the "supertrace" of the square of the right-invariant current which is the
Noether current satisfying the flatness condition. The Wess-Zumino term is
taken into account through the supersymmetric gauge connection in the
right-invariant currents, therefore the obtained stress-energy tensor is kappa
invariant. The integrability of the AdS superstring provides an infinite number
of the conserved "local" currents which are supertraces of the n-th power of
the right-invariant current. For even n the "local" current reduces to terms
proportional to the Virasoro constraint and the kappa symmetry constraint,
while for odd n it reduces to a term proportional to the kappa symmetry
constraint.
|
Theoretical remarks are offered regarding recent hadron collider results on
the mixing and decays of $B_s$ mesons. Topics covered include: (1) CP-violating
mixing in $B_s(\bar{B}_s) \to J/\psi \phi$, (2) the D0 dimuon charge asymmetry, (3)
information from triple products, (4) $B_s \to J/\psi f_0$, (5) new physics
constraints, (6) some illustrative new physics scenarios.
|
Answering science questions posed in natural language is an important AI
challenge. Answering such questions often requires non-trivial inference and
knowledge that goes beyond factoid retrieval. Yet, most systems for this task
are based on relatively shallow Information Retrieval (IR) and statistical
correlation techniques operating on large unstructured corpora. We propose a
structured inference system for this task, formulated as an Integer Linear
Program (ILP), that answers natural language questions using a semi-structured
knowledge base derived from text, including questions requiring multi-step
inference and a combination of multiple facts. On a dataset of real, unseen
science questions, our system significantly outperforms (+14%) the best
previous attempt at structured reasoning for this task, which used Markov Logic
Networks (MLNs). It also improves upon a previous ILP formulation by 17.7%.
When combined with unstructured inference methods, the ILP system significantly
boosts overall performance (+10%). Finally, we show our approach is
substantially more robust to a simple answer perturbation compared to
statistical correlation methods.
|
We present a two-dimensional classical stochastic differential equation for a
displacement field of a point particle in two dimensions and show that its
components define real and imaginary parts of a complex field satisfying the
Schroedinger equation of a harmonic oscillator. In this way we derive the
discrete oscillator spectrum from classical dynamics. The model is then
generalized to an arbitrary potential. This opens up the possibility of
efficiently simulating quantum computers with the help of classical systems.
|
A cover of an associative (not necessarily commutative nor unital) ring $R$
is a collection of proper subrings of $R$ whose set-theoretic union equals $R$.
If such a cover exists, then the covering number $\sigma(R)$ of $R$ is the
cardinality of a minimal cover, and a ring $R$ is called $\sigma$-elementary if
$\sigma(R) < \sigma(R/I)$ for every nonzero two-sided ideal $I$ of $R$. If $R$
is a ring with unity, then we define the unital covering number $\sigma_u(R)$
to be the size of a minimal cover of $R$ by subrings that contain $1_R$ (if
such a cover exists), and $R$ is $\sigma_u$-elementary if $\sigma_u(R) <
\sigma_u(R/I)$ for every nonzero two-sided ideal of $R$. In this paper, we
classify all $\sigma$-elementary unital rings and determine their covering
numbers. Building on this classification, we are further able to classify all
$\sigma_u$-elementary rings and prove $\sigma_u(R) = \sigma(R)$ for every
$\sigma_u$-elementary ring $R$. We also prove that, if $R$ is a ring without
unity with a finite cover, then there exists a unital ring $R'$ such that
$\sigma(R) = \sigma_u(R')$, which in turn provides a complete list of all
integers that are the covering number of a ring. Moreover, if \[\mathscr{E}(N)
:= \{m : m \le N, \sigma(R) = m \text{ for some ring } R\},\] then we show that
$|\mathscr{E}(N)| = \Theta(N/\log(N))$, which proves that almost all integers
are not covering numbers of a ring.
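As a toy illustration of these definitions (a brute-force check on a four-element ring, not a computation from the paper), one can compute the covering number of $\mathbb{F}_2 \times \mathbb{F}_2$, whose three nontrivial proper subrings together cover it:

```python
from itertools import product, combinations

# Elements of R = F2 x F2 with componentwise mod-2 operations.
R = list(product([0, 1], repeat=2))
add = lambda a, b: (a[0] ^ b[0], a[1] ^ b[1])
mul = lambda a, b: (a[0] & b[0], a[1] & b[1])

def is_subring(S):
    # Closure under + and * suffices here (characteristic 2, so -a = a).
    return all(add(a, b) in S and mul(a, b) in S for a in S for b in S)

# Enumerate all proper subrings: subsets containing 0, closed under + and *.
subrings = []
for r in range(1, len(R)):
    for S in combinations(R, r):
        S = set(S)
        if (0, 0) in S and is_subring(S):
            subrings.append(S)

# sigma(R) = size of a minimal cover by proper subrings.
sigma = next(
    k for k in range(1, len(subrings) + 1)
    if any(set().union(*c) == set(R) for c in combinations(subrings, k))
)
print(sigma)  # 3
```

Here the proper subrings are $\{0\}$, the two factor copies of $\mathbb{F}_2$, and the diagonal; no two of them cover $R$, so $\sigma(\mathbb{F}_2 \times \mathbb{F}_2) = 3$.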
|
The devil's staircase is a fractal structure that characterizes the ground
state of one-dimensional classical lattice gases with long-range repulsive
convex interactions. Its plateaus mark regions of stability for specific
filling fractions which are controlled by a chemical potential. Typically such
staircase has an explicit particle-hole symmetry, i.e., the staircase at more
than half-filling can be trivially extracted from the one at less than half
filling by exchanging the roles of holes and particles. Here we introduce a
quantum spin chain with competing short-range attractive and long-range
repulsive interactions, i.e. a non-convex potential. In the classical limit the
ground state features generalized Wigner crystals that --- depending on the
filling fraction --- are either composed of dimer particles or dimer holes
which results in an emergent complete devil's staircase without explicit
particle-hole symmetry of the underlying microscopic model. In our system the
particle-hole symmetry is lifted due to the fact that the staircase is
controlled through a two-body interaction rather than a one-body chemical
potential. The introduction of quantum fluctuations through a transverse field
melts the staircase and ultimately makes the system enter a paramagnetic phase.
For intermediate transverse field strengths, however, we identify a region,
where the density-density correlations suggest the emergence of quasi
long-range order. We discuss how this physics can be explored with
Rydberg-dressed atoms held in a lattice.
|
The competing nature of the app market motivates us to shift our focus on
apps that provide similar functionalities and directly compete with each other
(i.e., peer apps). In this work, we study the ratings and the review text of
100 Android apps across 10 peer app groups. We highlight the importance of
performing peer-app analysis by showing that it can provide a unique
perspective over performing a global analysis of apps (i.e., mixing apps from
multiple categories). First, we observe that comparing user ratings within peer
groups can provide very different results from comparing user ratings from a
global perspective. Then, we show that peer-app analysis provides a different
perspective to spot the dominant topics in the user reviews, and to understand
the impact of the topics on user ratings. Our findings suggest that future
efforts may pay more attention to performing and supporting app analysis from a
peer group context. For example, app store owners may consider an additional
rating mechanism that normalizes app ratings within peer groups, and future
research may help developers understand the characteristics of specific peer
groups and prioritize their efforts.
|
We study the synchronization of two chaotic maps with unidirectional
(master-slave) coupling. Both maps have an intrinsic delay $n_1$, and coupling
acts with a delay $n_2$. Depending on the sign of the difference $n_1-n_2$, the
slave map can synchronize to a future or a past state of the master system. The
stability properties of the synchronized state are studied analytically, and we
find that they are independent of the coupling delay $n_2$. These results are
compared with numerical simulations of a delayed map that arises from
discretization of the Ikeda delay-differential equation. We show that the
critical value of the coupling strength above which synchronization is stable
becomes independent of the delay $n_1$ for large delays.
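A minimal numerical sketch of this master-slave scheme (assuming a logistic nonlinearity and illustrative values of $n_1$, $n_2$ and the coupling strength $\epsilon$, not the Ikeda-derived map of the abstract): with strong coupling the slave locks onto $x(t + n_1 - n_2)$, an anticipated state of the master when $n_1 > n_2$.

```python
import numpy as np

def f(x):                       # assumed chaotic nonlinearity (logistic map)
    return 4.0 * x * (1.0 - x)

n1, n2, eps = 6, 2, 0.9         # intrinsic delay, coupling delay, coupling
T = 4000
rng = np.random.default_rng(1)
x = rng.random(T)               # master trajectory (random initial history)
y = rng.random(T)               # slave trajectory

for t in range(max(n1, n2), T - 1):
    x[t + 1] = f(x[t - n1])                                   # master
    y[t + 1] = (1 - eps) * f(y[t - n1]) + eps * f(x[t - n2])  # delayed coupling

# After a transient the slave follows x(t + n1 - n2): here n1 - n2 = 4,
# so y anticipates the master's state by 4 steps.
shift = n1 - n2
err = np.max(np.abs(y[T - 500:T - shift] - x[T - 500 + shift:T]))
print(err < 1e-8)  # True: anticipated synchronization is reached
```

With this ansatz, $y(t) = x(t + n_1 - n_2)$ solves the slave equation exactly, and for $(1-\epsilon)\,\sup|f'| < 1$ the error contracts at a rate independent of $n_2$, consistent with the stability result stated above.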
|
It is argued that if cosmic rays penetrate into molecular clouds, the total
energy they lose can exceed the energy supplied by galactic supernova shocks.
It is shown that galactic cosmic rays interacting with the surface layers of
molecular clouds are most likely efficiently reflected and do not penetrate
into the cloud interior. Low-energy cosmic rays ($E<1$ GeV) that provide the primary
ionization of the molecular cloud gas can be generated inside such clouds by
multiple shocks arising due to supersonic turbulence.
|