The increasing popularity of smart mobile phones and their powerful sensing
capabilities have enabled the collection of rich contextual information and
mobile phone usage records through the device logs. This paper formulates the
problem of mining behavioral association rules of individual mobile phone users
utilizing their smartphone data. Association rule learning is the most popular
technique to discover rules utilizing large datasets. However, it is well-known
that a large proportion of the generated association rules are redundant. This
redundancy not only makes the rule-set unnecessarily large but also makes the
decision-making process more complex and ineffective. In this paper,
we propose an approach that effectively identifies the redundancy in
associations and extracts a concise set of behavioral association rules that
are non-redundant. The effectiveness of the proposed approach is examined by
considering the real mobile phone datasets of individual users.
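For readers unfamiliar with the setting, the toy sketch below (not the authors'
method; the transactions, items, and thresholds are invented) mines simple rules
from phone-usage-style transactions and then drops a rule whenever a more general
rule with the same consequent already reaches at least the same confidence, which
is one common notion of redundancy.

```python
from itertools import combinations

# Toy phone-usage "transactions": contextual items logged together (hypothetical data).
transactions = [
    {"evening", "home", "wifi_on", "social_app"},
    {"evening", "home", "wifi_on", "video_app"},
    {"morning", "office", "wifi_off", "email_app"},
    {"evening", "home", "wifi_on", "social_app"},
    {"morning", "office", "wifi_on", "email_app"},
]

MIN_SUP, MIN_CONF = 0.4, 0.8

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Enumerate frequent itemsets by brute force (fine for a toy example).
items = sorted(set().union(*transactions))
frequent = [frozenset(c) for r in range(1, 4)
            for c in combinations(items, r) if support(frozenset(c)) >= MIN_SUP]

# Generate rules X -> y with a single-item consequent y and confidence >= MIN_CONF.
rules = []
for fs in frequent:
    for y in fs:
        X = fs - {y}
        if X and support(fs) / support(X) >= MIN_CONF:
            rules.append((X, y, support(fs) / support(X)))

# A rule X -> y is treated as redundant if some more general rule X' -> y
# (X' a proper subset of X) already reaches at least the same confidence.
concise = [(X, y, c) for (X, y, c) in rules
           if not any(Xp < X and yp == y and cp >= c for (Xp, yp, cp) in rules)]

for X, y, c in concise:
    print(f"{set(X)} -> {y}  (conf={c:.2f})")
```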
|
We prove that the Leech lattice is the unique densest lattice in R^24. The
proof combines human reasoning with computer verification of the properties of
certain explicit polynomials. We furthermore prove that no sphere packing in
R^24 can exceed the Leech lattice's density by a factor of more than
1+1.65*10^(-30), and we give a new proof that E_8 is the unique densest lattice
in R^8.
|
We consider the Poisson Boolean continuum percolation model in n-dimensional
hyperbolic space. In 2 dimensions we show that there are intensities for the
underlying Poisson process for which there are infinitely many unbounded
components in the covered and vacant regions. In n dimensions we show that if
the radius of the balls is big enough, then there are intensities for the underlying
Poisson process for which there are infinitely many unbounded components.
|
We present an efficient implementation of the equation of motion oscillator
strengths for the closed-shell multilevel coupled cluster singles and doubles
with perturbative triples method (MLCC3) in the electronic structure program
eT. The orbital space is split into an active part treated with CC3 and an
inactive part computed at the coupled cluster singles and doubles (CCSD) level
of theory. Asymptotically, the CC3 contribution scales as $O(n_\text{V}
n^3_\text{v} n^3_\text{o})$ floating-point operations (FLOP), where $n_V$ is
the total number of virtual orbitals while $n_\text{v}$ and $n_\text{o}$ are
the number of active virtual and occupied orbitals, respectively. The CC3
contribution, thus, only scales linearly with the full system size and can
become negligible compared to the cost of CCSD. We demonstrate the capabilities
of our implementation by calculating the UV-VIS spectrum of azobenzene and a
core excited state of betaine 30 with more than 1000 molecular orbitals.
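To see why the CC3 correction becomes negligible, a back-of-the-envelope FLOP
comparison helps; it assumes the standard roughly $O(n_\text{O}^2 n_\text{V}^4)$
scaling for canonical CCSD on the full system, and all orbital counts below are
invented for illustration.

```python
# Rough FLOP-count illustration (prefactors ignored): the CC3 part of MLCC3 scales as
# n_V * n_v**3 * n_o**3 (n_V = total virtuals, n_v/n_o = active virtual/occupied),
# while full CCSD scales roughly as n_O**2 * n_V**4 (n_O = total occupied).
def cc3_flops(n_V, n_v, n_o):
    return n_V * n_v**3 * n_o**3

def ccsd_flops(n_O, n_V):
    return n_O**2 * n_V**4

n_v, n_o = 60, 20          # fixed active space (hypothetical sizes)
for n_O, n_V in [(100, 400), (200, 800), (400, 1600)]:  # growing full system
    ratio = cc3_flops(n_V, n_v, n_o) / ccsd_flops(n_O, n_V)
    print(f"n_O={n_O:4d}, n_V={n_V:5d}: CC3/CCSD cost ratio ~ {ratio:.2e}")
# The ratio shrinks as the full system grows: the CC3 correction becomes
# relatively cheap compared to the CCSD treatment of the whole system.
```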
|
Survival analysis is widely used as a technique to model time-to-event data
when some data is censored, particularly in healthcare for predicting future
patient risk. In such settings, survival models must be both accurate and
interpretable so that users (such as doctors) can trust the model and
understand model predictions. While most of the literature focuses on
discrimination, interpretability is equally important. A successful interpretable model
should be able to describe how changing each feature impacts the outcome, and
should only use a small number of features. In this paper, we present DyS
(pronounced ``dice''), a new survival analysis model that achieves both strong
discrimination and interpretability. DyS is a feature-sparse Generalized
Additive Model, combining feature selection and interpretable prediction into
one model. While DyS works well for all survival analysis problems, it is
particularly useful for large (in $n$ and $p$) survival datasets such as those
commonly found in observational healthcare studies. Empirical studies show that
DyS competes with other state-of-the-art machine learning models for survival
analysis, while being highly interpretable.
|
Basic physics concepts of the QGSJET model are discussed, starting from the
general picture of high energy hadronic interactions and addressing in some
detail the treatment of multiple scattering processes, contributions of
``soft'' and ``semihard'' parton dynamics, and the implementation of non-linear
interaction effects. The predictions of the new model version (QGSJET II.03)
are compared to selected accelerator data. Future developments are outlined and
the expected input from the LHC collider for constraining model predictions is
discussed.
|
We develop protocols for Hastings-Haah Floquet codes in the presence of dead
qubits.
|
Large-scale learning of transformer language models has yielded improvements
on a variety of natural language understanding tasks. Whether they can be
effectively adapted for summarization, however, has been less explored, as the
learned representations are less seamlessly integrated into existing neural
text production architectures. In this work, we propose two solutions for
efficiently adapting pretrained transformer language models as text
summarizers: source embeddings and domain-adaptive training. We test these
solutions on three abstractive summarization datasets, achieving new state of
the art performance on two of them. Finally, we show that these improvements
are achieved by producing more focused summaries with fewer superfluous phrases, and
that performance improvements are more pronounced on more abstractive datasets.
|
We examine the many-body localization (MBL) phase transition in
one-dimensional quantum systems with quenched randomness and short-range
interactions. Following recent works, we use a strong-randomness
renormalization group (RG) approach where the phase transition is due to the
so-called avalanche instability of the MBL phase. We show that the critical
behavior can be determined analytically within this RG. On a rough
$\textit{qualitative}$ level the RG flow near the critical fixed point is
similar to the Kosterlitz-Thouless (KT) flow as previously shown, but there are
important differences in the critical behavior. Thus we show that this MBL
transition is in a new universality class that is different from KT. The
divergence of the correlation length corresponds to critical exponent $\nu
\rightarrow \infty$, but the divergence is weaker than for the KT transition.
|
Resonant lines are powerful probes of the interstellar and circumgalactic
medium of galaxies. Because their transfer in gas is a complex process, the
interpretation of their observational signatures, either in absorption or in
emission, is often not straightforward. Numerical radiative transfer
simulations are needed to accurately describe the travel of resonant line
photons in real and in frequency space, and to produce realistic mock
observations. This paper introduces RASCAS, a new public 3D radiative transfer
code developed to perform the propagation of any resonant line in numerical
simulations of astrophysical objects. RASCAS was designed to be easily
customisable and to process simulations of arbitrarily large sizes on large
supercomputers. RASCAS performs radiative transfer on an adaptive mesh with an
octree structure using the Monte Carlo technique. RASCAS features full MPI
parallelisation, domain decomposition, adaptive load-balancing, and a standard
peeling algorithm to construct mock observations. The radiative transport of
resonant line photons through different mixes of species (e.g. \ion{H}{i},
\ion{Si}{ii}, \ion{Mg}{ii}, \ion{Fe}{ii}), including their interaction with
dust, is implemented in a modular fashion to allow new transitions to be easily
added to the code. RASCAS is very accurate and efficient. It shows perfect
scaling up to at least a thousand cores. It has been fully tested against
radiative transfer problems with analytic solutions and against various test
cases proposed in the literature. Although it was designed to describe
accurately the many scatterings of line photons, RASCAS may also be used to
propagate photons at any wavelength (e.g. stellar continuum or fluorescent
lines), or to cast millions of rays to integrate the optical depths of ionising
photons, making it highly versatile.
|
To increase the number of sources with Very Long Baseline Interferometry
(VLBI) astrometry available for comparison with the Gaia results, we have
observed 31 young stars with recently reported radio emission. These stars are
all in the Gaia DR3 catalog and were suggested, on the basis of conventional
interferometry observations, to be non-thermal radio emitters and, therefore,
good candidates for VLBI detections. The observations were carried out with the
Very Long Baseline Array (VLBA) at two epochs separated by a few days and
yielded 10 detections (a $\sim$30\% detection rate). Using the astrometric Gaia
results, we have extrapolated the target positions to the epochs of our radio
observations and compared them with the position of the radio sources. For
seven objects, the optical and radio positions are coincident within five times
their combined position errors. Three targets, however, have position
discrepancies above eight times the position errors, indicating different
emitting sources at optical and radio wavelengths. In one case, the VLBA
emission is very likely associated with a known companion of the primary
target. In the other two cases, we associate the VLBA emission with previously
unknown companions, but further observations will be needed to confirm this.
|
We study the decay B0 --> rho0 rho0 in a sample of about 427 million
Upsilon(4S) --> BBbar decays collected with the BABAR detector at the PEP-II
asymmetric-energy e+e- collider at SLAC. We find the branching fraction
B = (0.84 +/- 0.29 +/- 0.17) x 10^-6 and a longitudinal polarization fraction of
f_L = 0.70 +/- 0.14 +/- 0.05, where the first uncertainty is statistical, and
the second is systematic. The evidence for the B0 --> rho0 rho0 signal has 3.6
sigma significance. We investigate the proper-time dependence of the
longitudinal component in the decay and measure the CP-violating coefficients
S^{00}_L = 0.5 +/- 0.9 +/- 0.2 and C^{00}_L = 0.4 +/- 0.9 +/- 0.2,
corresponding to the sine and cosine terms in the time evolution of asymmetry.
We study the implication of these results for penguin contributions in B -->
rho rho decays and for the CKM unitarity angle alpha.
|
The presence of luminous hot X-ray coronae in the dark matter halos of
massive spiral galaxies is a basic prediction of galaxy formation models.
However, observational evidence for such coronae is very scarce, with the first
few examples having only been detected recently. In this paper, we study the
large-scale diffuse X-ray emission associated with the massive spiral galaxy
NGC266. Using ROSAT and Chandra X-ray observations we argue that the diffuse
emission extends to at least ~70 kpc, whereas the bulk of the stellar light is
confined to within ~25 kpc. Based on X-ray hardness ratios, we find that most
of the diffuse emission is released at energies <1.2 keV, which indicates that
this emission originates from hot X-ray gas. Adopting a realistic gas
temperature and metallicity, we derive that in the (0.05-0.15)r_200 region
(where r_200 is the virial radius) the bolometric X-ray luminosity of the hot
gas is (4.3 +/- 0.8) x 10^40 erg/s and the gas mass is (9.1 +/- 0.9) x 10^9
M_sun. These values are comparable to those observed for the two other
well-studied X-ray coronae in spiral galaxies, suggesting that the physical
properties of such coronae are similar. This detection offers an excellent
opportunity for comparison of observations with detailed galaxy formation
simulations.
|
We found that the numbers of fully connected clusters in Barab\'asi-Albert (BA)
networks follow an exponential distribution with characteristic exponent
$\kappa=2/m$. The critical temperature for the Ising model on the BA network is
determined by the critical temperature of the largest fully connected cluster
within the network. The result explains the logarithmic dependence of the
critical temperature on the size of the network $N$.
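As a rough numerical companion (not the paper's procedure), one can generate a BA
network with networkx and tabulate the sizes of its fully connected clusters
(maximal cliques); the network size and attachment parameter below are arbitrary.

```python
import math
from collections import Counter
import networkx as nx

n, m = 10000, 3                      # network size and BA attachment parameter (arbitrary)
G = nx.barabasi_albert_graph(n, m, seed=0)

# Count maximal cliques ("fully connected clusters") by size.
sizes = Counter(len(c) for c in nx.find_cliques(G))
print("size  count  log(count)")
for s in sorted(sizes):
    print(f"{s:4d} {sizes[s]:6d} {math.log(sizes[s]):10.2f}")
# An exponential distribution of the counts shows up as a roughly linear decrease
# of log(count) with clique size; the abstract relates the decay constant to
# kappa = 2/m.
```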
|
We propose a new method of discovering causal structures, based on the
detection of local, spontaneous changes in the underlying data-generating
model. We analyze the classes of structures that are equivalent relative to a
stream of distributions produced by local changes, and devise algorithms that
output graphical representations of these equivalence classes. We present
experimental results, using simulated data, and examine the errors associated
with detection of changes and recovery of structures.
|
We examine the renormalization group flow in the vicinity of the free-field
fixed point for effective field theories in the presence of a constant,
nondynamical vector potential background. The interaction with this vector
potential represents the simplest possible form of Lorentz violation. We search
for any normal modes of the flow involving nonpolynomial interactions. For
scalar fields, the inclusion of the vector potential modifies the known modes
only through a change in the field strength renormalization. For fermionic
theories, where an infinite number of particle species are required in order
for nonpolynomial interactions to be possible, we find no evidence for any
analogous relevant modes. These results are consistent with the idea that the
vector potential interaction, which may be eliminated from the action by a
gauge transformation, should have no physical effects.
|
In this paper, by inputting the Bessel identities over the complex field in
previous work of the authors, the Waldspurger formula of Baruch and Mao is
extended from totally real fields to arbitrary number fields. This is applied
to give a non-trivial bound towards the Ramanujan conjecture for automorphic
forms of the metaplectic group $\widetilde{\mathrm{SL}}_2$ for the first time
in the generality of arbitrary number fields.
|
This paper describes our approach in DSTC 8 Track 4: Schema-Guided Dialogue
State Tracking. The goal of this task is to predict the intents and slots in
each user turn to complete the dialogue state tracking (DST) based on the
information provided by the task's schema. Different from traditional
stage-wise DST, we propose an end-to-end DST system to avoid error accumulation
between the dialogue turns. The DST system consists of a machine reading
comprehension (MRC) model for non-categorical slots and a Wide & Deep model for
categorical slots. As far as we know, this is the first time that MRC and Wide
& Deep models are applied to the DST problem in a fully end-to-end way. Experimental
results show that our framework achieves an excellent performance on the test
dataset including 50% zero-shot services with a joint goal accuracy of 0.8652
and a slot tagging F1-Score of 0.9835.
|
In this paper we completely characterize the words with second minimum weight
in the $p$-ary linear code generated by the rows of the incidence matrix of
points and hyperplanes of $PG(n,q)$, with $q=p^h$ and $p$ prime, proving that
they are the scalar multiples of the difference of the incidence vectors of two
distinct hyperplanes of $PG(n,q)$.
|
Binary duadic codes are an interesting subclass of cyclic codes since they
have large dimensions and their minimum distances may have a square-root bound.
In this paper, we present several families of binary duadic codes of length
$2^m-1$ and develop some lower bounds on their minimum distances by using the
BCH bound on cyclic codes, which partially solves one case of the open problem
proposed in \cite{LLD}. It is shown that the lower bounds on their minimum
distances are close to the square root bound. Moreover, the parameters of the
dual and extended codes of these binary duadic codes are investigated.
|
We review the calculation of the LSP relic density in alternative
cosmological scenarios where contributions from direct decay production are
important. We study supersymmetric models with intermediate unification scale.
We find concrete scenarios where the reheating temperature is of order one GeV
($M_I\sim 10^{12}$ GeV) and below. In the case that this reheating temperature is
associated with the decay of oscillating moduli fields appearing in string
theories, we show that the LSP relic density considerably increases with
respect to the standard radiation-dominated case by the effect of the direct
non-thermal production by the modulus field. The LSP can become a good dark
matter candidate ($0.01-0.1 \lesssim \Omega h^2 \lesssim 0.3-1$) for $M_I\sim
10^{12}-10^{14}$ GeV and $m_\phi\sim 1-10$ TeV.
|
Remnant radio galaxies represent the final dying phase of radio galaxy
evolution, in which the jets are no longer active. Due to their rarity in flux
limited samples and the difficulty of identification, this dying phase remains
poorly understood and the luminosity evolution largely unconstrained. Here we
present the discovery and detailed analysis of a large (700 kpc) remnant radio
galaxy with a low surface brightness that has been identified in LOFAR images
at 150 MHz. By combining LOFAR data with new follow-up Westerbork observations
and archival data at higher frequencies, we investigated the source morphology
and spectral properties from 116 to 4850 MHz. By modelling the radio spectrum
we probed characteristic timescales of the radio activity. The source has a
relatively smooth, diffuse, amorphous appearance together with a very weak
central compact core which is associated with the host galaxy located at
z=0.051. From our ageing and morphological analysis it is clear that the
nuclear engine is currently switched off or, at most, active at a very low
power state. The host galaxy is currently interacting with another galaxy
located at a projected separation of 15 kpc and a radial velocity offset of 300
km/s. This interaction may have played a role in the triggering and/or shut
down of the radio jets. The spectral shape of this remnant radio galaxy differs
from the majority of the previously identified remnant sources, which show
steep or curved spectra at low to intermediate frequencies. In light of this
finding and in preparation for new-generation deep low-frequency surveys, we
discuss the selection criteria to be used to select representative samples of
these sources.
|
We propose a general mechanism based on soil mechanics concepts, such as
dilatancy and friction, to explain the fact that avalanches stop at an angle
smaller than the one at which they start: the mechanism involved is linked to
the fact that the stress field near the free surface of a pile built with
inclined strata always obeys the plasticity criteria, even when the slope is
smaller than the friction angle. It follows that the larger the slope angle,
the smaller the mean stress and the smaller the maximum principal stress. So
when the pile rotates to generate the next instability, the granular material
is subjected to a decrease of the mean stress, resulting in an increase of its
yielding angle, which becomes larger than the friction angle. The slope then
starts flowing at an angle larger than the friction angle.
|
Topological defects in active polar fluids can organise spontaneous flows and
influence macroscopic density patterns. Both of them play, for example, an
important role during animal development. Yet the influence of density on
active flows is poorly understood. Motivated by experiments on cell monolayers
confined to discs, we study the coupling between density and polar order for a
compressible active polar fluid in the presence of a +1 topological defect. As in
the experiments, we find a density-controlled spiral-to-aster transition. In
addition, biphasic orientational phases emerge as a generic outcome of such
coupling. Our results highlight the importance of density gradients as a
potential mechanism for controlling flow and orientational patterns in
biological systems.
|
We present nuclear spectral energy distributions (SEDs) in the range
0.4-16 micron for an expanded CfA sample of Seyfert galaxies. The spectral
indices from 1-16 micron range from alpha_IR = 0.9 to 3.8. The shapes of the
spectra are correlated with Seyfert type in the sense that steeper nuclear SEDs
(nu*f_nu increasing with increasing wavelength) tend to be found in Seyfert 2s
and flatter SEDs (nu *f_nu constant) in Seyfert 1-1.5s. The galaxies optically
classified as Seyferts 1.8s and 1.9s display values of alpha_IR as in type 1
objects, or values intermediate between those of Seyfert 1s and Seyfert 2s. The
intermediate SEDs of many Seyfert 1.8-1.9s may be consistent with the presence
of a pure Seyfert 1 viewed through a moderate amount (A_V <5mag) of foreground
galaxy extinction. We find, however, that between 10 and 20% of galaxies with
broad optical line components have steep infrared SEDs.
Torus models usually adopt high equatorial opacities to reproduce the
infrared properties of Seyfert 1s and 2s, resulting in a dichotomy of infrared
SEDs (flat for type 1s, and steep for type 2s). Such a dichotomy, however, is
not observed in our sample. The wide range of spectral indices observed in the
type 2 objects, the lack of extremely steep SEDs, and the large numbers of
objects with intermediate spectral indices cannot be reconciled with
predictions from existing optically thick torus models. We discuss possible
modifications to improve torus models, including low optical depth tori, clumpy
dusty tori, and high-optical-depth tori with an extended optically thin
component.
|
Generative Adversarial Networks (GANs) are a popular formulation to train
generative models for complex high dimensional data. The standard method for
training GANs involves a gradient descent-ascent (GDA) procedure on a minimax
optimization problem. This procedure is hard to analyze in general due to the
nonlinear nature of the dynamics. We study the local dynamics of GDA for
training a GAN with a kernel-based discriminator. This convergence analysis is
based on a linearization of a non-linear dynamical system that describes the
GDA iterations, under an \textit{isolated points model} assumption from [Becker
et al. 2022]. Our analysis brings out the effect of the learning rates,
regularization, and the bandwidth of the kernel discriminator, on the local
convergence rate of GDA. Importantly, we show phase transitions that indicate
when the system converges, oscillates, or diverges. We also provide numerical
simulations that verify our claims.
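The flavor of these phase transitions can be reproduced even on a toy
quadratic-bilinear minimax problem; the sketch below is illustrative only and is
not the kernel-discriminator setting analyzed in the paper.

```python
import math

# Toy minimax problem f(x, y) = (r/2) x^2 + c x y - (r/2) y^2, solved by
# gradient descent-ascent: x <- x - eta * df/dx,  y <- y + eta * df/dy.
def gda_final_norm(r, c, eta, steps=200, x=1.0, y=1.0):
    for _ in range(steps):
        gx = r * x + c * y          # df/dx
        gy = c * x - r * y          # df/dy
        x, y = x - eta * gx, y + eta * gy
    return math.hypot(x, y)

# r plays the role of regularization, c of the min-max coupling strength.
# For this linear map the iterates converge iff eta < 2*r / (r**2 + c**2).
for eta in (0.1, 0.8, 1.2):
    norm = gda_final_norm(r=0.5, c=1.0, eta=eta)
    regime = "converges" if norm < 1e-3 else ("diverges" if norm > 1e3 else "oscillates")
    print(f"step size {eta:3.1f}: |(x, y)| after 200 steps = {norm:9.3e}  -> {regime}")
```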
|
In this paper, we consider two-state mean-field games and their dual
formulation. We then discuss numerical methods for these problems. Finally, we
present various numerical experiments, exhibiting different behaviours,
including shock formation, lack of invertibility, and monotonicity loss.
|
We adapt the Kolmogorov-Sinai entropy to the non-extensive perspective
recently advocated by Tsallis. The resulting expression is an average on the
invariant distribution, which should be used to detect the genuine entropic
index Q. We argue that the condition Q >1 is time dependent.
|
We observe significant dust-correlated emission outside of H II regions in
the Green Bank Galactic Plane Survey (-4 < b < 4 degrees) at 8.35 and 14.35
GHz. The rising spectral slope rules out synchrotron and free-free emission as
majority constituents at 14 GHz, and the amplitude is at least 500 times higher
than expected thermal dust emission. When combined with the Rhodes (2.326 GHz),
and WMAP (23-94 GHz) data it is possible to fit dust-correlated emission at
2.3-94 GHz with only soft synchrotron, free-free, thermal dust, and an
additional dust-correlated component similar to Draine & Lazarian spinning
dust. The rising component generally dominates free-free and synchrotron for
\nu >~ 14 GHz and is overwhelmed by thermal dust at \nu > 60 GHz. The current
data fulfill most of the criteria laid out by Finkbeiner et al. (2002) for
detection of spinning dust.
|
The policy gradient theorem describes the gradient of the expected discounted
return with respect to an agent's policy parameters. However, most policy
gradient methods drop the discount factor from the state distribution and
therefore do not optimize the discounted objective. What do they optimize
instead? This has been an open question for several years, and this lack of
theoretical clarity has led to an abundance of misstatements in the
literature. We answer this question by proving that the update direction
approximated by most methods is not the gradient of any function. Further, we
argue that algorithms that follow this direction are not guaranteed to converge
to a "reasonable" fixed point by constructing a counterexample wherein the
fixed point is globally pessimal with respect to both the discounted and
undiscounted objectives. We motivate this work by surveying the literature and
showing that there remains a widespread misunderstanding regarding discounted
policy gradient methods, with errors present even in highly-cited papers
published at top conferences.
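A compact way to see the issue on a toy example (not taken from the paper) is to
compute, for a small MDP with a tabular softmax policy, both the exact gradient
of the discounted return, which weights states by the discounted visitation
distribution, and the common update that swaps in the undiscounted on-policy
state distribution; the two directions generally differ. The MDP below is
randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 2, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition probabilities
R = rng.uniform(0, 1, size=(nS, nA))            # R[s, a] expected rewards
mu = np.array([1.0, 0.0])                       # start-state distribution
theta = rng.normal(size=(nS, nA))               # tabular softmax policy parameters

pi = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)
P_pi = np.einsum('sa,sat->st', pi, P)           # state transition matrix under pi
r_pi = (pi * R).sum(axis=1)

v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)   # state values
q = R + gamma * np.einsum('sat,t->sa', P, v)           # action values

# Discounted state-visitation weights (what the policy gradient theorem uses).
d_disc = np.linalg.solve(np.eye(nS) - gamma * P_pi.T, mu)
# Undiscounted stationary distribution (what most implementations implicitly use).
d_stat = mu.copy()
for _ in range(10000):
    d_stat = d_stat @ P_pi
d_stat /= d_stat.sum()

adv = q - v[:, None]
true_grad = d_disc[:, None] * pi * adv          # gradient of the discounted return
common_dir = d_stat[:, None] * pi * adv         # "discount-dropped" update direction

cos = (true_grad * common_dir).sum() / (np.linalg.norm(true_grad) * np.linalg.norm(common_dir))
print("true discounted gradient:\n", true_grad)
print("common (undiscounted-state) direction:\n", common_dir)
print(f"cosine similarity between the two directions: {cos:.3f}")
```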
|
We analyze the inclusive $b(c) \to s(u) \mu^+ \mu^-$ and the exclusive
$B(D^+) \to K(\pi^+) \mu^+ \mu^-$ flavour changing neutral current decays in
the light of HyperCP boson $X^0$ of mass 214 MeV recently observed in the
hyperon decay $\Sigma^+ \to p \mu^+ \mu^-$. Using the branching ratio data of
the above inclusive and exclusive decays, we obtain constraints on $g_1 (h_1)$
and $g_2 (h_2)$, the scalar and pseudo-scalar coupling constants of the
$b-s-X^0 (c-u-X^0)$ vertices.
|
We try to design a quantum neural network with qubits instead of classical
neurons with deterministic states, and also with quantum operators replacing
the classical action potentials. With our choice of gates interconnecting the
neural lattice, it appears that the state of the system behaves in ways
reflecting both the strengths of coupling between neurons as well as initial
conditions. We find that, depending on whether there is a threshold for emission
from excited to ground state, the system shows either aperiodic oscillations or
coherent ones with periodicity depending on the strength of coupling.
|
A Maxwell-Stefan system for fluid mixtures with driving forces depending on
Cahn-Hilliard-type chemical potentials is analyzed. The corresponding parabolic
cross-diffusion equations contain fourth-order derivatives and are considered
in a bounded domain with no-flux boundary conditions. The main difficulty of
the analysis is the degeneracy of the diffusion matrix, which is overcome by
proving the positive definiteness of the matrix on a subspace and using the
Bott--Duffin matrix inverse. The global existence of weak solutions and a
weak-strong uniqueness property are shown by a careful combination of
(relative) energy and entropy estimates, yielding $H^2(\Omega)$ bounds for the
densities, which cannot be obtained from the energy or entropy inequalities
alone.
|
We show that by having the auxiliary tethers made partly or completely of conducting
material and by controlling their voltages, it is possible to control the spin
rate of the electric solar wind sail by using the electric sail effect itself.
The proposed intrinsic spin rate control scheme has enough control authority to
overcome the secular change of the spin rate due to the orbital Coriolis effect.
|
Synthetic data has emerged as a promising source for 3D human research as it
offers low-cost access to large-scale human datasets. To advance the diversity
and annotation quality of human models, we introduce a new synthetic dataset,
SynBody, with three appealing features: 1) a clothed parametric human model
that can generate a diverse range of subjects; 2) the layered human
representation that naturally offers high-quality 3D annotations to support
multiple tasks; 3) a scalable system for producing realistic data to facilitate
real-world tasks. The dataset comprises 1.2M images with corresponding accurate
3D annotations, covering 10,000 human body models, 1,187 actions, and various
viewpoints. The dataset includes two subsets for human pose and shape
estimation as well as human neural rendering. Extensive experiments on SynBody
indicate that it substantially enhances both SMPL and SMPL-X estimation.
Furthermore, the incorporation of layered annotations offers a valuable
training resource for investigating the Human Neural Radiance Fields (NeRF).
|
Both dark energy models and modified gravity theories could lead to
cosmological evolutions different from either the recollapse into a Big Crunch
or exponential de Sitter expansion. The newly arising singularities may
represent true endpoints of the evolution or allow for the extension of
geodesics through them. In the latter case the components of the Riemann tensor
representing tidal forces diverge. Sudden Future Singularities (SFS) occur at
finite time, finite scale factor and finite Hubble parameter, only the
deceleration parameter diverges. The energy density of a perfect fluid is
regular and its pressure diverges at the SFS. A particular SFS, the Big Brake
occurs when the energy density vanishes and the expansion arrives at a full
stop at the singularity. Such scenarios are generated by either a particular
scalar field (the tachyon field) or the anti-Chaplygin gas. By adding matter to
these models, an unwanted feature appears: at the finite scale factor of the
SFS the energy density of matter remains finite, implying (for a spatially flat
universe) a finite Hubble parameter, hence a finite expansion rate, rather than
a full stop. The universe would then expand further through the singularity;
this nevertheless seems forbidden, as the energy density of the tachyonic field /
anti-Chaplygin gas would become ill-defined. This paradox is resolved in the
case of the anti-Chaplygin gas by redefining its energy density and pressure in
terms of distributions peaked on the singularity. The regular cosmological
quantities, continuous across the SFS are the energy density and the square of
the Hubble parameter; those allowing for a jump at the SFS are the Hubble
parameter and expansion rate (both being mirror-symmetric). The pressure and
the deceleration parameter will contain Dirac delta-function contributions
peaked on the SFS; however, they would diverge at the singularity anyway.
|
Processors may find some elementary operations to be faster than others.
Although an operation may be conceptually as simple as some other operation,
the processing speeds of the two can vary. A clever programmer will always try
to choose the faster instructions for the job. This paper presents an algorithm
to display the squares of the first N natural numbers without using
multiplication (the * operator). Instead, the same work can be done using
addition (the + operator). The results can also be used to compute the sum of
those squares. If we compare the normal method of computing the squares of the
first N natural numbers with this
method, we can conclude that the algorithm discussed in the paper is more
optimized in terms of time complexity.
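The abstract does not spell out the algorithm; a minimal sketch, using only the
standard identity that k^2 is the sum of the first k odd numbers, computes each
square from the previous one with additions alone and accumulates their sum.

```python
def squares_by_addition(n):
    """Print k*k for k = 1..n using only the + operator, and return the sum of the squares."""
    square, odd, total = 0, 1, 0
    for k in range(1, n + 1):
        square = square + odd      # k^2 = (k-1)^2 + (2k - 1), the k-th odd number
        odd = odd + 2              # next odd number, again obtained by addition
        total = total + square
        print(f"{k}^2 = {square}")
    return total

print("sum of squares:", squares_by_addition(10))   # 385 = 10*11*21/6
```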
|
A variational approach is used in order to study the stationary states of
Hall devices. Charge accumulation, electric potentials and electric currents
are investigated on the basis of the Kirchhoff-Helmholtz principle of least
heat dissipation. A simple expression for the state of minimum power dissipated
-- that corresponds to zero transverse current and harmonic chemical potential
-- is derived. It is shown that a longitudinal surface current proportional to
the charge accumulation is flowing near the edges of the device. Charge
accumulation and surface currents define a boundary layer over a distance of
the order of the Debye-Fermi length.
|
We propose a regression algorithm that utilizes a learned dictionary
optimized for sparse inference on a D-Wave quantum annealer. In this regression
algorithm, we concatenate the independent and dependent variables as a combined
vector, and encode the high-order correlations between them into a dictionary
optimized for sparse reconstruction. On a test dataset, the dependent variable
is initialized to its average value and then a sparse reconstruction of the
combined vector is obtained in which the dependent variable is typically
shifted closer to its true value, as in a standard inpainting or denoising
task. Here, a quantum annealer, which can presumably exploit a fully entangled
initial state to better explore the complex energy landscape, is used to solve
the highly non-convex sparse coding optimization problem. The regression
algorithm is demonstrated on lattice quantum chromodynamics simulation data
using a D-Wave 2000Q quantum annealer, and good prediction performance is
achieved. The regression test is performed using six different values for the
number of fully connected logical qubits, between 20 and 64. The scaling
results indicate that a larger number of qubits gives better prediction
accuracy.
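A purely classical stand-in for the procedure (scikit-learn dictionary learning
with OMP sparse coding in place of the quantum annealer, on synthetic data) shows
the regression-as-inpainting structure described above; the dataset, dictionary
size, and sparsity level are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic data: x in R^8, y a nonlinear function of x (purely illustrative).
X = rng.normal(size=(500, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

# Train a dictionary on the *combined* vectors z = [x, y].
Z_train = np.hstack([X[:400], y[:400, None]])
dico = DictionaryLearning(n_components=32, transform_algorithm='omp',
                          transform_n_nonzero_coefs=5, random_state=0)
dico.fit(Z_train)

# At test time: append the *training mean* of y, sparse-code, and read off the
# reconstructed last coordinate as the prediction (inpainting-style regression).
Z_test = np.hstack([X[400:], np.full((100, 1), y[:400].mean())])
codes = dico.transform(Z_test)
y_pred = (codes @ dico.components_)[:, -1]

rmse = np.sqrt(np.mean((y_pred - y[400:]) ** 2))
baseline = np.sqrt(np.mean((y[400:] - y[:400].mean()) ** 2))
print(f"test RMSE: {rmse:.3f}   (predict-the-mean baseline: {baseline:.3f})")
```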
|
Existing state-of-the-art methods for Video Object Segmentation (VOS) learn
low-level pixel-to-pixel correspondences between frames to propagate object
masks across video. This requires a large amount of densely annotated video
data, which is costly to annotate, and largely redundant since frames within a
video are highly correlated. In light of this, we propose HODOR: a novel method
that tackles VOS by effectively leveraging annotated static images for
understanding object appearance and scene context. We encode object instances
and scene information from an image frame into robust high-level descriptors
which can then be used to re-segment those objects in different frames. As a
result, HODOR achieves state-of-the-art performance on the DAVIS and
YouTube-VOS benchmarks compared to existing methods trained without video
annotations. Without any architectural modification, HODOR can also learn from
video context around single annotated video frames by utilizing cyclic
consistency, whereas other methods rely on dense, temporally consistent
annotations. Source code is available at: https://github.com/Ali2500/HODOR
|
We propose a new query application for the well-known Trapezoidal Search DAG
(TSD) of a set of $n$~line segments in the plane, where queries are allowed to
be {\em vertical line segments}.
We show that a simple Depth-First Search reports the $k$ trapezoids that are
intersected by the query segment in $O(k+\log n)$ expected time, regardless of
the spatial location of the query. This bound is optimal and matches known data
structures with $O(n)$ size. In the important case of edges from a connected,
planar graph, our simplistic approach yields an expected $O(n \log^*\!n)$
construction time, which improves on the construction time of known structures
for vertical segment-queries. Also for connected input, a simple extension
allows the TSD approach to directly answer axis-aligned window-queries in $O(k
+ \log n)$ expected time, where $k$ is the result size.
|
Deep learning has had a far-reaching impact in robotics. Specifically, deep
reinforcement learning algorithms have been highly effective in synthesizing
neural-network controllers for a wide range of tasks. However, despite this
empirical success, these controllers still lack theoretical guarantees on their
performance, such as Lyapunov stability (i.e., all trajectories of the
closed-loop system are guaranteed to converge to a goal state under the control
policy). This is in stark contrast to traditional model-based controller
design, where principled approaches (like LQR) can synthesize stable
controllers with provable guarantees. To address this gap, we propose a generic
method to synthesize a Lyapunov-stable neural-network controller, together with
a neural-network Lyapunov function to simultaneously certify its stability. Our
approach formulates the Lyapunov condition verification as a mixed-integer
linear program (MIP). Our MIP verifier either certifies the Lyapunov condition,
or generates counterexamples that can help improve the candidate controller
and the Lyapunov function. We also present an optimization program to compute
an inner approximation of the region of attraction for the closed-loop system.
We apply our approach to robots including an inverted pendulum, a 2D and a 3D
quadrotor, and showcase that our neural-network controller outperforms a
baseline LQR controller. The code is open sourced at
\url{https://github.com/StanfordASL/neural-network-lyapunov}.
|
Recent results on the particle detector R&D for new accelerators are
reviewed. Different approaches for the muon systems, hadronic and
electromagnetic calorimeters, particle identification devices, and central
trackers are discussed. The main emphasis is on the detectors for the
International Linear Collider and the Super B-factory. A detailed description
of a novel photodetector, the so-called Silicon Photomultiplier, and its applications
in scintillator detectors is presented.
|
For a rational map $f$ and a H\"older continuous potential $\phi$ satisfying
$$ \sup\phi< P(f,\phi)$$ the uniqueness and stochastic properties of the
corresponding equilibrium states have been extensively studied. For a given
rational map $f$, in this paper we characterize those H\"older continuous
potentials $\phi$ for which this property is satisfied for some iterate of $f$.
|
In this paper, we characterize weakly pseudoconvex domains of finite type in
$\mathbb C^n$ in terms of the boundary behavior of automorphism orbits by using
the scaling method.
|
In this article, we study a class of fully nonlinear double-divergence
systems with free boundaries associated with a minimization problem. The
variational structure of Hessian-dependent functional plays a fundamental role
in proving the existence of the minimizers and then the existence of the
solutions for the system. In addition, we establish gains of the integrability
for the double-divergence equation. Consequently, we improve the regularity for
the fully nonlinear equation in Sobolev and H\"older spaces.
|
Fine scale elastic structures are widespread in nature, for instance in
plants or bones, whenever stiffness and low weight are required. These patterns
frequently refine towards a Dirichlet boundary to ensure an effective load
transfer. The paper discusses the optimization of such supporting structures in
a specific class of domain patterns in 2D, which is composed of periodic and
branching period transitions on subdomain facets. These investigations can be
considered as a case study to display examples of optimal branching domain
patterns.
Explicitly, a rectangular domain is decomposed into rectangular subdomains,
which share facets with neighbouring subdomains or with facets which split on
one side into equally sized facets of two different subdomains. On each
subdomain one considers an elastic material phase with stiff elasticity
coefficients and an approximate void phase with orders of magnitude softer
material. For given load on the outer domain boundary, which is distributed on
a prescribed fine scale pattern representing the contact area of the shape, the
interior elastic phase is optimized with respect to the compliance cost. The
elastic stress is supposed to be continuous on the domain and a stress based
finite volume discretization is used for the optimization. If in one direction
equally sized subdomains with equal adjacent subdomain topology line up, these
subdomains are considered as equal copies, including the enforced boundary
conditions for the stress, and form a locally periodic substructure.
An alternating descent algorithm is employed for a discrete characteristic
function describing the stiff elastic subset on the subdomains and the solution
of the elastic state equation. Numerical experiments are shown for compression
and shear load on the boundary of a quadratic domain.
|
Determining the optimal configuration of adsorbates on a slab (adslab) is
pivotal in the exploration of novel catalysts across diverse applications.
Traditionally, the quest for the lowest energy adslab configuration involves
placing the adsorbate onto the slab followed by an optimization process. Prior
methodologies have relied on heuristics, problem-specific intuitions, or
brute-force approaches to guide adsorbate placement. In this work, we propose a
novel framework for adsorbate placement using denoising diffusion. The model is
designed to predict the optimal adsorbate site and orientation corresponding to
the lowest energy configuration. Further, we have an end-to-end evaluation
framework where the diffusion-predicted adslab configuration is optimized with a
pretrained machine learning force field and finally evaluated with Density
Functional Theory (DFT). Our findings demonstrate an acceleration of up to 5x
or 3.5x improvement in accuracy compared to the previous best approach. Given
the novelty of this framework and application, we provide insights into the
impact of pre-training and model architectures, and conduct extensive experiments
to underscore the significance of this approach.
|
This paper studies how to achieve the maximal link throughputs in a CSMA
wireless network through offered-load control. First, we propose an analytical
model, contention-graph-combination (CGC), to describe the relationship between
the offered-load and the output link throughputs of an unsaturated CSMA
network. Based on CGC, we then formulate a linear optimization model to improve
the aggregate link throughput through properly setting the occurrence
probabilities of each sub-network, based on which we can obtain the optimal
offered-load of each link. Simulation results bear out the accuracy of our CGC
analysis and show that the maximal link throughputs can be closely achieved. Different
from prior work in which CSMA protocol parameters are adaptively adjusted to
achieve better performance, in this paper we propose to achieve maximal link
throughputs by adjusting the rates of the traffic pumped into the source nodes
of links, which can be done purely in software and is more practical to
implement in real networks.
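The CGC model itself is beyond the scope of the abstract, but the final
optimization step has the flavor of a small linear program; the toy below (not
the paper's formulation) maximizes the aggregate throughput of three links
subject to invented airtime-sharing constraints between contending links.

```python
import numpy as np
from scipy.optimize import linprog

# Toy: three links with normalized throughputs x1, x2, x3 in [0, 1].
# Contention constraints (illustrative): links 1 and 2 share a channel (x1 + x2 <= 1),
# and links 2 and 3 share a channel (x2 + x3 <= 1).
c = -np.ones(3)                       # maximize x1 + x2 + x3 == minimize -(x1 + x2 + x3)
A_ub = np.array([[1, 1, 0],
                 [0, 1, 1]], dtype=float)
b_ub = np.array([1.0, 1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")

print("optimal per-link throughputs:", np.round(res.x, 3))
print("aggregate throughput:", round(-res.fun, 3))
# Here the optimum saturates links 1 and 3 and starves link 2 (x = [1, 0, 1]),
# the classic outcome for a middle link contending with both neighbours.
```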
|
Potentially Hazardous Asteroids (PHAs) are a special subset of Near-Earth
Objects (NEOs) that can come close to the Earth and are large enough to cause
significant damage in the event of an impact. Observations and research on
Earth-PHAs have been underway for decades. Here, we extend the concept of PHAs
to Mars and study the feasibility of detecting Mars-PHAs in the near future. We
focus on PHAs that truly undergo close approaches with a planet (dubbed CAPHAs)
and aim to compare the actual quantities of Earth-CAPHAs and Mars-CAPHAs by
conducting numerical simulations incorporating the Yarkovsky effect, based on
observed data of the main asteroid belt. The estimated numbers of Earth-CAPHAs
and Mars-CAPHAs are 4675 and 16910, respectively. The occurrence frequency of
Mars-CAPHAs is about 52 per year, which is 2.6 times that of Earth-CAPHAs,
indicating significant potential for future Mars-based observations.
Furthermore, a few Mars-CAPHAs are predicted to be observable even from Earth
around the time of the next Mars opposition in 2025.
|
Ramanujan in his notebook recorded two modular equations involving the
multiplier with moduli of degrees (1,7) and (1,23). In this paper, we find some
new modular equations of Ramanujan type involving the multiplier with moduli of
degrees (3,5) and (1,15), and give concise proofs by employing Ramanujan's multiplier
function equation.
|
Active nematics are a class of far-from-equilibrium materials characterized
by local orientational order of force-generating, anisotropic constituents.
Traditional methods for predicting the dynamics of active nematics rely on
hydrodynamic models, which accurately describe idealized flows and many of the
steady-state properties, but do not capture certain detailed dynamics of
experimental active nematics. We have developed a deep learning approach that
uses a Convolutional Long-Short-Term-Memory (ConvLSTM) algorithm to
automatically learn and forecast the dynamics of active nematics. We
demonstrate our purely data-driven approach on experiments of 2D unconfined
active nematics of extensile microtubule bundles, as well as on data from
numerical simulations of active nematics.
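A minimal ConvLSTM forecaster in Keras illustrates the kind of model referred
to; the layer widths, the two input channels, and the training setup are
placeholders rather than the authors' architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

T, H, W, C = 8, 32, 32, 2            # frames per clip, grid size, channels (placeholders)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, H, W, C)),
    layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True),
    layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
    layers.Conv2D(C, (3, 3), padding="same"),    # predict the next frame's fields
])
model.compile(optimizer="adam", loss="mse")

# Dummy data with the right shapes: learn to map T frames -> frame T+1.
clips = np.random.rand(8, T + 1, H, W, C).astype("float32")
model.fit(clips[:, :T], clips[:, T], epochs=1, batch_size=4, verbose=0)
print(model.predict(clips[:1, :T]).shape)        # (1, 32, 32, 2)
```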
|
To investigate the mass generation problem without the Higgs mechanism, we present
a model in which a new scalar gauge coupling is naturally introduced. Because
of the existence of production and annihilation of particles in quantum field
theory, we extend the number of independent variables from the conventional
four space-time dimensions to five in order to describe all degrees of freedom
for field functions while the conventional space-time is still retained to be
the background. The potential fifth variable is nothing but the proper time of
particles. In response, a mass operator $(\hat{m}=-i\hbar
\frac{\partial}{\partial\tau})$ should be introduced. After that, the
lagrangian for free fermion fields in terms of five independent variables and
mass operator is written down. By applying the gauge principle, three kinds of
vector gauge couplings and one kind of scalar gauge coupling are naturally
introduced. In the current scenario, the mass spectrum for all fundamental
particles is accounted for in principle by solving the eigenvalue problem of
the mass operator under the action of all kinds of interactions. Moreover, no
auxiliary mechanism, including spontaneous symmetry breaking, is involved in
the model. Therefore, traditional problems in the standard model such as the
vacuum energy problem are removed from our model, as well as the hierarchy
problem on the mass spectrum for fundamental particles.
|
Measuring Compton scattered photons and recoil neutrons in coincidence,
quasi-free Compton scattering by the neutron has been investigated at MAMI
(Mainz) at $\theta^{\mathrm{lab}}_\gamma=136^\circ$ in an energy range from 200 to 400 MeV.
From the data a polarizability difference of $\alpha_n - \beta_n = 9.8 \pm
3.6(stat)^{+2.1}_{-1.1}(syst)\pm 2.2(model)$ in units of $10^{-4}fm^3$ has been
determined. In combination with the polarizability sum $\alpha_n+\beta_n=
15.2\pm 0.5$ deduced from photo absorption data, the neutron electric and
magnetic polarizabilities, $\alpha_n=12.5\pm 1.8(stat)^{+1.1}_{-0.6}(syst)\pm
1.1(model)$ and $\beta_n = 2.7\mp 1.8(stat)^{+0.6}_{-1.1}(syst)\mp 1.1(model)$,
are obtained.
|
Spin-crossover has a wide range of applications from memory devices to
sensors. This has to do mainly with the nature of the transition, which may be
abrupt, gradual or incomplete and may also present hysteresis. This transition
alters the properties of a given sample, such as magnetic moment, color and
electric resistance, to name a few. Yet, a thorough understanding of the
phenomenon is still lacking. In this work a simple model is provided to mimic
some of the properties known to occur in spin-crossover. A detailed study of
the model parameters is presented using a mean field approach and exhaustive
Monte Carlo simulations. A good agreement is found between the analytical
results and the simulations for certain regions in the parameter-space. This
mean field approach breaks down in parameter regions where the correlations and
cooperativity may no longer be averaged over.
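For concreteness, a generic Metropolis Monte Carlo sketch of an Ising-like
two-state (high-spin/low-spin) model of the Wajnflasz-Pick type is given below,
in which the high-spin degeneracy enters as a temperature-dependent effective
field; this is a standard minimal spin-crossover model, not necessarily the
specific model of the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, Delta, ln_g = 20, 0.15, 1.0, 3.0     # lattice size, coupling, HS-LS gap, ln(degeneracy)
spins = -np.ones((L, L), dtype=int)        # start in the low-spin state (s = -1)

def sweep(spins, T):
    """One Metropolis sweep of a Wajnflasz-Pick-type Hamiltonian
    H = -J sum_<ij> s_i s_j + (Delta - T*ln_g)/2 * sum_i s_i   (k_B = 1)."""
    h_eff = (Delta - T * ln_g) / 2.0
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * (J * nn - h_eff)   # energy change for flipping s_ij
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

for T in np.linspace(0.1, 1.0, 10):
    for _ in range(200):                   # crude equilibration + measurement
        sweep(spins, T)
    n_HS = (spins.mean() + 1) / 2          # high-spin fraction
    print(f"T = {T:4.2f}   HS fraction = {n_HS:.2f}")
```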
|
The question of how and why the phenomenon of mode connectivity occurs in
training deep neural networks has gained remarkable attention in the research
community. From a theoretical perspective, two possible explanations have been
proposed: (i) the loss function has connected sublevel sets, and (ii) the
solutions found by stochastic gradient descent are dropout stable. While these
explanations provide insights into the phenomenon, their assumptions are not
always satisfied in practice. In particular, the first approach requires the
network to have one layer with order of $N$ neurons ($N$ being the number of
training samples), while the second one requires the loss to be almost
invariant after removing half of the neurons at each layer (up to some
rescaling of the remaining ones). In this work, we improve both conditions by
exploiting the quality of the features at every intermediate layer together
with a milder over-parameterization condition. More specifically, we show that:
(i) under generic assumptions on the features of intermediate layers, it
suffices that the last two hidden layers have order of $\sqrt{N}$ neurons, and
(ii) if subsets of features at each layer are linearly separable, then no
over-parameterization is needed to show the connectivity. Our experiments
confirm that the proposed condition ensures the connectivity of solutions found
by stochastic gradient descent, even in settings where the previous
requirements do not hold.
|
In \cite{CC1}, Cheeger-Colding considered manifolds with lower Ricci
curvature bound and gave some almost rigidity results about warped products
including almost metric cone rigidity and quantitative splitting theorem. As a
generalization of manifolds with lower Ricci curvature bound, for metric
measure spaces in $RCD(K, N)$, $1<N<\infty$, the splitting theorem \cite{Gi13} and
the "volume cone implies metric cone" rigidity for balls and annuli of a point
\cite{PG} have been proved. In this paper we will generalize Cheeger-Colding's
\cite{CC1} result about "almost volume cone implies almost metric cone for
annuli of a compact subset" to $RCD(K, N)$-spaces. More precisely, consider
an $RCD(K, N)$-space $(X, d, \mathfrak m)$ and a Borel subset $\Omega\subset X$.
If the closed subset $S=\partial \Omega$ has finite outer curvature, the
diameter ${diam}(S)\leq D$ and the mean curvature of $S$ satisfies $$m(x)\leq
m, \, \forall x\in S,$$ and \begin{equation*}\mathfrak m(A_{a, b}(S))\geq
(1-\epsilon)\int_a^b \left({sn}'_H(r)+ \frac{m}{n-1}{sn}_H(r)\right)^{n-1}dr
\mathfrak m_S(S)\end{equation*} then $A_{a', b'}(S)$ is measured
Gromov-Hausdorff close to a warped product $(a', b')\times_{{sn}'_H(r)+
\frac{m}{n-1}{sn}_H(r)}Y,$ $A_{a, b}(S)=\{x\in X\setminus \Omega, \, a<d(x,
S)<b\}$, $a<a'<b'<b$, and $Y$ is a metric space with finitely many components,
each of which is an $RCD(0, N-1)$-space when $m=0, K=0$ and an $RCD(N-2,
N-1)$-space in the other cases, with $H=\frac{K}{N-1}$. Note that when $m=0, K=0$,
our result is a kind of quantitative splitting theorem and in other cases it is
an almost metric cone rigidity.
To prove this result, different from \cite{Gi13, PG}, we will use
\cite{GiT}'s second order differentiation formula and a method similar to that of
\cite{CC1}.
|
Clarifying the impact of quantumness in the operation and properties of
thermal machines represents a major challenge. Here we envisage a toy model
acting either as an information-driven fridge or as a heat-powered information
eraser in which coherences can be naturally introduced by means of squeezed
thermal reservoirs. We study the validity of the transient entropy production
fluctuation theorem in the model with and without squeezing as well as its
decomposition into adiabatic and non-adiabatic contributions. Squeezing
modifies fluctuations and introduces extra mechanisms of entropy exchange. This
leads to enhancements in the cooling performance of the refrigerator, and to
overcoming Landauer's bound in the setup.
|
A Chandra study of pulsar wind nebula around the young energetic pulsar PSR
B1509-58 is presented. The high resolution X-ray image with total exposure time
of 190 ks reveals a ring-like feature 10'' from the pulsar. This feature
is analogous to the inner ring seen in the Crab nebula and thus may correspond
to a wind termination shock. The shock radius enables us to constrain the wind
magnetization, sigma>= 0.01. The obtained sigma is one order of magnitude
larger than that of the Crab nebula. In the pulsar vicinity, the southern jet
appears to extend beyond the wind termination shock, in contrast to the narrow
jet of the Crab. The revealed morphology of the broad jet is consistent with
the recently proposed theoretical model in which a magnetic hoop stress diverts
and squeezes the post-shock equatorial flow towards the poloidal direction
generating a jet.
|
We summarize JWST's measured telescope performance across science Cycle 1.
The stability of segment alignments is typically better than 10 nanometers RMS
between measurements every two days, leading to highly stable point spread
functions. The frequency of segment "tilt events" decreased significantly, and
larger tilt events ceased entirely, as structures gradually equilibrated after
cooldown. Mirror corrections every 1-2 months now maintain the telescope below
70 nm RMS wavefront error. Observed micrometeoroid impacts during cycle 1 had
negligible effect on science performance, consistent with preflight
predictions. As JWST begins Cycle 2, its optical performance and stability are
equal to, and in some ways better than, the performance reported at the end of
commissioning.
|
Constructing Driver Hamiltonians and Mixing Operators such that they satisfy
constraints is an important ansatz construction for quantum algorithms. We give
general algebraic expressions for finding Hamiltonian terms and analogously
unitary primitives, that satisfy constraint embeddings and use these to give
complexity characterizations of the related problems. Finding operators that
enforce classical constraints is proven to be NP-Complete in the general case;
we give algorithmic procedures with worst-case polynomial runtime for finding
operators with a constant locality bound, applicable to many constraints. We then give
algorithmic procedures to turn these algebraic primitives into Hamiltonian
drivers and unitary mixers that can be used for Constrained Quantum Annealing
(CQA) and Quantum Alternating Operator Ansatz (QAOA) constructions by tackling
practical problems related to finding an appropriate set of reduced generators
and defining corresponding drivers and mixers accordingly. We then apply these
concepts to the construction of ansaetze for 1-in-3 SAT instances. We consider
the ordinary x-mixer QAOA, a novel QAOA approach based on the maximally
disjoint subset, and a QAOA approach based on the disjoint subset as well as
higher order constraint satisfaction terms. We empirically benchmark these
approaches on instances sized between 12 and 22, showing the best relative
performance for the tailored ansaetze and that exponential curve fits on the
results are consistent with a quadratic speedup by utilizing alternative
ansaetze to the x-mixer. We provide very general algorithmic prescriptions for
finding driver or mixing terms that satisfy embedded constraints that can be
utilized to probe quantum speedups for constraint problems with linear,
quadratic, or even higher order polynomial constraints.
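As a tiny illustration of what it means for a mixer to satisfy a constraint
embedding (this is not the paper's 1-in-3 SAT construction), the check below
verifies numerically that an XY-type mixing term commutes with a Hamming-weight
constraint operator on three qubits, whereas a single x-mixer term does not.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(paulis, n=3):
    """Tensor product of single-qubit operators `paulis` = {site: matrix} on n qubits."""
    return reduce(np.kron, [paulis.get(k, I) for k in range(n)])

# Constraint operator: total Z, whose eigenvalue fixes the Hamming weight
# (the one-hot strings |100>, |010>, |001> all have sum(Z) eigenvalue +1).
C = sum(op({k: Z}) for k in range(3))

# XY mixer term on qubits 0 and 1: swaps excitations, preserving the Hamming weight.
xy_01 = op({0: X, 1: X}) + op({0: Y, 1: Y})
# Ordinary x-mixer term on qubit 0: flips a bit, changing the Hamming weight.
x_0 = op({0: X})

comm = lambda A, B: A @ B - B @ A
print("|| [C, XY_01] || =", np.linalg.norm(comm(C, xy_01)))   # 0.0 -> constraint-preserving
print("|| [C, X_0]   || =", np.linalg.norm(comm(C, x_0)))     # > 0 -> leaves the feasible subspace
```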
|
We consider first-order bosonic string theory, perturbed by the primary
operator, corresponding to deformation of the target-space complex structure.
We compute the effective action in this theory and find that its consistency
with the world-sheet conformal invariance requires necessarily the
Kodaira-Spencer equations to be satisfied by target-space Beltrami
differentials. We discuss the symmetries of the theory and its reformulation in
terms of the vielbein background fields.
|
We demonstrate remote entanglement of trapped-ion qubits via a
quantum-optical fiber link with fidelity and rate approaching those of local
operations. Two ${}^{88}$Sr${}^{+}$ qubits are entangled via the polarization
degree of freedom of two photons which are coupled by high-numerical-aperture
lenses into single-mode optical fibers and interfere on a beamsplitter. A novel
geometry allows high-efficiency photon collection while maintaining unit
fidelity for ion-photon entanglement. We generate remote Bell pairs with
fidelity $F=0.940(5)$ at an average rate $182\,\mathrm{s}^{-1}$ (success
probability $2.18\times10^{-4}$).
|
We report spectral and variability analysis of two quiescent low-mass X-ray
binaries (previously identified with ROSAT HRI as X5 and X7) in the globular
cluster 47 Tuc, from a Chandra ACIS-I observation. X5 demonstrates sharp
eclipses with an 8.666+-0.008 hr period, as well as dips showing an increased
N_H column. Their thermal spectra are well-modelled by unmagnetized hydrogen
atmospheres of hot neutron stars, most likely heated by transient accretion,
with little to no hard power law component.
|
We derive the 1-loop effective action of the cubic Galileon coupled to
quantum-gravitational fluctuations in a background and gauge-independent
manner, employing the covariant framework of DeWitt and Vilkovisky. Although
the bare action respects shift symmetry, the coupling to gravity induces an
effective mass to the scalar, of the order of the cosmological constant, as a
direct result of the non-flat field-space metric, the latter ensuring the
field-reparametrization invariance of the formalism. Within a gauge-invariant
regularization scheme, we discover novel, gravitationally induced non-Galileon
higher-derivative interactions in the effective action. These terms, previously
unnoticed within standard, non-covariant frameworks, are not Planck suppressed.
Unless tuned to be sub-dominant, their presence could have important
implications for the classical and quantum phenomenology of the theory.
|
A model of aluminium has been developed and implemented in an Ocean General
Circulation Model (NEMO-PISCES). In the model, aluminium enters the ocean by
means of dust deposition. The internal oceanic processes are described by
advection, mixing and reversible scavenging. The model has been evaluated
against a number of selected high-quality datasets covering much of the world
ocean, especially those from the West Atlantic Geotraces cruises of 2010 and
2011. Generally, the model results are in fair agreement with the observations.
However, the model does not describe well the vertical distribution of
dissolved Al in the North Atlantic Ocean. The model may require changes in the
physical forcing and the vertical dependence of the sinking velocity of
biogenic silica to account for other discrepancies. To explore the model
behaviour, sensitivity experiments have been performed, in which we changed the
key parameters of the scavenging process as well as the input of aluminium into
the ocean. These experiments give a better understanding of aluminium in the
ocean, clarifying which parameters affect the dissolved aluminium distribution
and which processes might be missing from the model, notably boundary
scavenging and the biological incorporation of aluminium into diatoms.
|
We introduce the software tool NTRFinder to find the complex repetitive
structure in DNA we call a nested tandem repeat (NTR). An NTR is a recurrence
of two or more distinct tandem motifs interspersed with each other. We propose
that nested tandem repeats can be used as phylogenetic and population markers.
We have tested our algorithm on both real and simulated data, and present some
real nested tandem repeats of interest. We discuss how the NTR found in the
ribosomal DNA of taro (Colocasia esculenta) may assist in determining the
cultivation prehistory of this ancient staple food crop. NTRFinder can be
downloaded from http://www.maths.otago.ac.nz/~aamatroud/.
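
To give readers unfamiliar with the terminology a feel for the underlying
pattern, the toy sketch below detects exact tandem repeats of a fixed-length
motif; it is purely illustrative and is not NTRFinder's algorithm, which
handles approximate and nested repeats.

    # Toy illustration only: report maximal runs of an exactly repeated motif.
    def exact_tandem_repeats(seq, motif_len, min_copies=3):
        hits, i = [], 0
        while i + motif_len <= len(seq):
            motif = seq[i:i + motif_len]
            copies = 1
            while seq[i + copies * motif_len:i + (copies + 1) * motif_len] == motif:
                copies += 1
            if copies >= min_copies:
                hits.append((i, motif, copies))   # (start, motif, copy number)
                i += copies * motif_len
            else:
                i += 1
        return hits

    print(exact_tandem_repeats("ACGTACGTACGTTTGCA", 4))   # [(0, 'ACGT', 3)]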
|
A/B testing is an important decision-making tool in product development
because it can provide an accurate estimate of the average treatment effect of
a new feature, which allows developers to understand the business impact of
changes to products or algorithms. However, an important assumption of A/B
testing, the Stable Unit Treatment Value Assumption (SUTVA), is not always
valid, especially for products that facilitate interactions between
individuals. In contexts like one-to-one messaging we should expect network
interference: if an experimental manipulation is effective, members of the
treatment group are likely to influence members of the control group by
sending them messages, violating this assumption. In this paper, we propose a
novel method that can be used to account for network effects when A/B testing
changes to one-to-one interactions. Our method is an edge-based analysis that
can be applied to standard Bernoulli randomized experiments to retrieve an
average treatment effect that is not influenced by network interference. We
develop a theoretical model, and methods for computing point estimates and
variances of effects of interest via network-consistent permutation testing. We
then apply our technique to real data from experiments conducted on the
messaging product at LinkedIn. We find empirical support for our model, and
evidence that the standard method of analysis for A/B tests underestimates the
impact of new features in one-to-one messaging contexts.
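
As a rough, hedged sketch of the edge-level idea (not LinkedIn's exact
estimator), one can classify each message edge by the treatment status of its
endpoints, contrast outcomes across edge types, and permute the vertex-level
treatment labels to obtain a null distribution. All data structures and names
below are hypothetical.

    # Hedged sketch of an edge-based, permutation-style analysis; the paper's
    # network-consistent estimator is more involved than this illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def edge_contrast(treated, edges, outcome):
        """Mean outcome on treated-treated edges minus control-control edges."""
        tt = [outcome[e] for e in edges if treated[e[0]] and treated[e[1]]]
        cc = [outcome[e] for e in edges if not treated[e[0]] and not treated[e[1]]]
        return np.mean(tt) - np.mean(cc)   # assumes both edge types are present

    def permutation_pvalue(treated, edges, outcome, n_perm=2000):
        observed = edge_contrast(treated, edges, outcome)
        users = list(treated)
        labels = np.array([treated[u] for u in users])
        exceed = 0
        for _ in range(n_perm):
            perm = dict(zip(users, rng.permutation(labels)))
            if abs(edge_contrast(perm, edges, outcome)) >= abs(observed):
                exceed += 1
        return observed, (exceed + 1) / (n_perm + 1)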
|
We prove a "multiple colored Tverberg theorem" and a "balanced colored
Tverberg theorem", by applying different methods, tools and ideas. The proof of
the first theorem uses multiple chessboard complexes (as configuration spaces)
and Eilenberg-Krasnoselskii theory of degrees of equivariant maps for non-free
actions. The proof of the second result relies on high connectivity of the
configuration space, established by discrete Morse theory.
|
The equivalence of time-optimal and distance-optimal control problems is
shown for a class of parabolic control systems. Based on this equivalence, an
approach for the efficient algorithmic solution of time-optimal control
problems is investigated. Numerical examples are provided to illustrate that
the approach works well in practice.
|
We provide a survey of non-stationary surrogate models which utilize Gaussian
processes (GPs) or variations thereof, including non-stationary kernel
adaptations, partition and local GPs, and spatial warpings through deep
Gaussian processes. We also review publicly available software
implementations and conclude with a bake-off involving an 8-dimensional
satellite drag computer experiment. Code for this example is provided in a
public git repository.
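
To give a flavour of the spatial-warping idea surveyed here, the short numpy
sketch below fits an ordinary squared-exponential GP after passing the inputs
through a fixed nonlinear warp; the warp, hyperparameters, and test function
are illustrative assumptions, not the benchmarked implementations from the
bake-off.

    # Minimal sketch: non-stationary behaviour via a fixed input warp feeding a
    # stationary squared-exponential GP (all hyperparameters are illustrative).
    import numpy as np

    def sq_exp_kernel(a, b, lengthscale=0.2, variance=1.0):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def warp(x):
        return x ** 3                      # assumed warp; a deep GP learns this

    rng = np.random.default_rng(1)
    x_train = np.linspace(-1.0, 1.0, 30)
    y_train = np.sin(5.0 * x_train ** 3) + 0.05 * rng.standard_normal(30)
    x_test = np.linspace(-1.0, 1.0, 200)

    K = sq_exp_kernel(warp(x_train), warp(x_train)) + 1e-4 * np.eye(30)
    K_star = sq_exp_kernel(warp(x_test), warp(x_train))
    posterior_mean = K_star @ np.linalg.solve(K, y_train)   # GP predictive mean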
|
Interest in two dimensional materials has exploded in recent years. Not only
are they studied due to their novel electronic properties, such as the emergent
Dirac Fermion in graphene, but also as a new paradigm in which stacking layers
of distinct two dimensional materials may enable different functionality or
devices. Here, through first-principles theory, we reveal a large new class of
two dimensional materials which are derived from traditional III-V, II-VI, and
I-VII semiconductors. It is found that in the ultra-thin limit all of the
traditional binary semiconductors studied (a series of 26 semiconductors)
stabilize in a two dimensional double layer honeycomb (DLHC) structure, as
opposed to the wurtzite or zinc-blende structures associated with three
dimensional bulk. Not only does this greatly increase the landscape of
two-dimensional materials, but it is shown that in the double layer honeycomb
form, even ordinary semiconductors, such as GaAs, can exhibit exotic
topological properties.
|
The article deals with the development, implementation and study of spectrum
analyzers that can be used in sensor networks and Internet of Things systems.
The 2.4-2.5 GHz ISM band is selected as the operating frequency range. At the
hardware selection stage, a comparative analysis of available microcontrollers
for spectrum analysis is carried out, along with the choice of hardware
interfaces, the ordering of the required modules and electrical components,
and incoming inspection of the parts. During development, several variants of
spectrum analyzers have been implemented, based on a microcontroller paired
with the TI Chipcon CC2500 transceiver with a USB interface, as well as on
Cypress CYWUSB6935 modules with LPT and USB interfaces. The development stage
covered printed circuit board design and fabrication, component assembly,
microcontroller programming, verification of the assembly's robustness,
corrections, connection to a personal computer and assembly into a case. An
analysis of existing software for collecting information on the state of the
wireless spectrum is also conducted. Comparative experiments with the
assembled spectrum analyzers produced spectrograms for different types of
signals, and these typical spectrograms were used to compare the performance
of the various prototypes. The proposed approaches to building sensors based
on spectrum analyzers make it possible to create low-power modules for
embedding in existing enterprise wireless networks, preventing inter-channel
interference and ensuring the integrity of data transmission.
|
These are the notes of the lectures delivered by the author at CIME in June
2018. The main purpose of the notes is to provide an overview of the techniques
used in the construction of the triply graded link homology. The homology is
the space of global sections of a particular sheaf on the Hilbert scheme of
points on the plane. Our construction relies on the existence of a natural
push-forward functor for equivariant matrix factorizations; we explain the
subtleties of this construction in these notes. We also outline a proof of the
Markov moves for our homology and present some explicit localization formulas
for the knot homology of a large class of links.
|
We investigate two types of boundedness criteria for bilinear Fourier
multiplier operators with symbols with bounded partial derivatives of all (or
sufficiently many) orders. Theorems of the first type explicitly prescribe only
a certain rate of decay of the symbol itself while theorems of the second type
require, in addition, the same rate of decay of all derivatives of the symbol.
We show that even though these two types of bilinear multiplier theorems are
closely related, there are some fundamental differences between them which
arise in limiting cases. Also, since theorems of the latter type have so far
been studied mainly in connection with the more general class of bilinear
pseudodifferential operators, we revisit them in the special case of bilinear
Fourier multipliers, providing also some improvements of the existing results
in this setting.
|
Mean field game theory has been developed largely following two routes. One
of them, called the direct approach, starts by solving a large-scale game and
next derives a set of limiting equations as the population size tends to
infinity. The second route is to apply mean field approximations and formalize
a fixed point problem by analyzing the best response of a representative
player. This paper addresses the connection and difference of the two
approaches in a linear quadratic (LQ) setting. We first introduce an asymptotic
solvability notion for the direct approach, which means that for all
sufficiently large population sizes, the corresponding game has a set of
feedback Nash strategies satisfying, in addition, a mild regularity
requirement. We provide a necessary
and sufficient condition for asymptotic solvability and show that in this case
the solution converges to a mean field limit. This is accomplished by
developing a re-scaling method to derive a low dimensional ordinary
differential equation (ODE) system, where a non-symmetric Riccati ODE has a
central role. We next compare with the fixed point approach which determines a
two point boundary value (TPBV) problem, and show that asymptotic solvability
implies feasibility of the fixed point approach, but the converse is not true.
We further address non-uniqueness in the fixed point approach and examine the
long time behavior of the non-symmetric Riccati ODE in the asymptotic
solvability problem.
|
We study the effects produced by D-brane instantons on the holomorphic
quantities of a D-brane gauge theory at an orbifold singularity. These effects
are not limited to reproducing the well known contributions of the gauge theory
instantons but also generate extra terms in the superpotential or the
prepotential. On these brane instantons there are some neutral fermionic
zero-modes in addition to the ones expected from broken supertranslations. They
are crucial in correctly reproducing effects which are dual to gauge theory
instantons, but they may make some other interesting contributions vanish. We
analyze how orientifold projections can remove these zero-modes and thus allow
for new superpotential terms. These terms contribute to the dynamics of the
effective gauge theory, for instance in the stabilization of runaway
directions.
|
In this work, an extensive review of the literature in the field of gesture
recognition was carried out, along with the implementation of a simple
classification system for hand hygiene stages based on deep learning
solutions. A subset of a robust dataset consisting of two-hand handwashing
gestures as well as one-hand gestures, such as linear hand movement, was
utilized. A pretrained neural network model, ResNet-50, with ImageNet weights
was used for the classification of three categories: linear hand movement, rub
hands palm to palm, and rub hands with fingers interlaced. Correct predictions
were made for the first two classes with > 60% accuracy. The complete dataset,
along with an increased number of classes and training steps, will be explored
in future work.
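
A minimal sketch of the kind of transfer-learning setup described (an
ImageNet-weighted ResNet-50 backbone with a three-class head) is given below;
the input size, frozen backbone, optimizer, and data pipeline are assumptions
for illustration, not the authors' exact configuration.

    # Hedged sketch: ResNet-50 with ImageNet weights and a 3-class gesture head.
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                 # assumption: freeze the backbone

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),   # 3 gesture classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical data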
|
Hot subluminous stars of spectral type B and O are core helium-burning stars
at the blue end of the horizontal branch or have evolved even beyond that
stage. Strikingly, the distribution in the Hertzsprung-Russell diagram of
He-rich vs. He-poor hot subdwarf stars of the globular clusters omega Cen and
NGC~2808 differs from that of their field counterparts. The metal-abundance
patterns of hot subdwarfs are typically characterized by strong deficiencies of
some lighter elements as well as large enrichments of heavy elements. A large
fraction of sdB stars are found in close binaries with white dwarf or very
low-mass main sequence companions, which must have gone through a
common-envelope phase of evolution. They provide a clean-cut laboratory to
study this important but as yet poorly understood phase of stellar evolution.
Substellar
companions to sdB stars have also been found. For HW~Vir systems the companion
mass distribution extends from the stellar into the brown dwarf regime. A giant
planet to the pulsator V391 Peg was the first discovery of a planet that
survived the red giant evolution of its host star. Several types of pulsating
star have been discovered among hot subdwarf stars; the most common are the
gravity-mode sdB pulsators (V1093 Her) and their hotter siblings, the p-mode
pulsating V361 Hya stars. Another class of multi-periodic pulsating hot
subdwarfs has been found in the globular cluster omega Cen that is unmatched by
any field star. The masses of hot subdwarf stars are the key to understanding
the
stars' evolution. A few pulsating sdB stars in eclipsing binaries have been
found that allow mass determination. The results are in good agreement with
predictions from binary population synthesis. New classes of binaries, hosting
extremely low mass (ELM) white dwarfs (M<0.3 Msun), have recently been
discovered, filling a gap in the mosaic of binary stellar evolution.
(abbreviated)
|
For the Schr\"odinger map problem from 2+1 dimensions into the 2-sphere we
prove the existence of equivariant finite time blow up solutions that are close
to a dynamically rescaled lowest energy harmonic map, the scaling parameter
being given by $t^{-\nu}$ with $\nu>3/2$.
|
Space weather observations and modeling at Mars have begun, but they must be
significantly increased to support the future of human exploration of the Red
Planet. A comprehensive space weather understanding of a planet with no global
magnetosphere and only a thin atmosphere is very different from our situation
at Earth, so substantial fundamental research remains. It is expected
that the development of suitable models will lead to a comprehensive
operational Mars space weather alert (MSWA) system that would provide rapid
dissemination of information to Earth controllers, astronauts in transit, and
those in the exploration zone (EZ) on the surface by producing alerts that are
delivered rapidly and are actionable. To illustrate the importance of such a
system, we use a magnetohydrodynamic code to model an extreme Carrington-type
coronal mass ejection (CME) event at Mars. The results show a significant
induced surface field of nearly 3000 nT on the dayside that could radically
affect unprotected electrical systems and thereby dramatically impact human
survival on Mars. Other associated problems include CME shock-driven
acceleration of solar energetic particles, producing large doses of
ionizing radiation at the Martian surface. In summary, along with working more
closely with international partners, the next Heliophysics Decadal Survey must
include a new initiative to meet expected demands for space weather forecasting
in support of humans living and working on the surface of Mars. It will require
significant effort to coordinate NASA and the international community
contributions.
|
This paper is aimed at extending the H-infinity Bounded Real Lemma to
stochastic systems under random disturbances with imprecisely known probability
distributions. The statistical uncertainty is measured in entropy theoretic
terms using the mean anisotropy functional. The disturbance attenuation
capabilities of the system are quantified by the anisotropic norm which is a
stochastic counterpart of the H-infinity norm. A state-space sufficient
criterion for the anisotropic norm of a linear discrete time invariant system
to be bounded by a given threshold value is derived. The resulting Strict
Anisotropic Norm Bounded Real Lemma involves an inequality on the determinant
of a positive definite matrix and a linear matrix inequality. It is shown that
a slight reformulation of these conditions allows the anisotropic norm of a
system to be efficiently computed via convex optimization.
|
The functions studied in the paper are quaternion-valued functions of a
quaternionic variable. It is shown that the left slice regular functions and
right slice regular functions are related by a particular involution. The
relation between left slice regular functions, right slice regular functions
and intrinsic regular functions is revealed. The classical Laplace transform
can be naturally generalized to quaternions in two different ways, which
transform a quaternion-valued function of a real variable to a left or right
slice regular quaternion-valued function of a quaternionic variable. The usual
properties of the classical Laplace transforms are generalized to quaternionic
Laplace transforms.
|
We revisit the critical behaviour and phase structure of charged anti-de
Sitter (AdS) dilaton black holes for arbitrary values of the dilaton coupling
$\alpha$, and find several novel phase behaviours for this system. We adopt
the viewpoint that the cosmological constant (pressure) is fixed and treat the
charge
of the black hole as a thermodynamical variable. We study critical behaviour
and phase structure by analyzing the phase diagrams in $T-S$ and $ q-T$ planes.
We numerically derive the critical point in terms of $\alpha$ and observe that
for $\alpha =1$ and $\alpha \geq \sqrt{3}$, the system does not admit any
critical point, while for $0<\alpha <1$, the critical quantities are not
significantly affected by $\alpha$. We find that unstable behavior of the Gibbs
free energy for $q<q_{c}$ exhibits a \textit{first order} (discontinuous) phase
transition between small and large black holes for $0\leq\alpha<1$. For
$1<\alpha <\sqrt{3}$ and $q>q_{c}$, however, a novel first order phase
transition occurs between small and large black holes, which has not been
observed in the previous studies on phase transition of charged AdS black
holes.
|
Database fingerprinting has been widely used to discourage unauthorized
redistribution of data by providing means to identify the source of data
leakages. However, there is no fingerprinting scheme aiming at achieving
liability guarantees when sharing genomic databases. Thus, we are motivated to
fill in this gap by devising a vanilla fingerprinting scheme specifically for
genomic databases. Moreover, since malicious genomic database recipients may
compromise the embedded fingerprint by launching effective correlation attacks
which leverage the intrinsic correlations among genomic data (e.g., Mendel's
law and linkage disequilibrium), we also augment the vanilla scheme by
developing mitigation techniques to achieve robust fingerprinting of genomic
databases against correlation attacks.
We first show that correlation attacks against fingerprinting schemes for
genomic databases are very powerful. In particular, the correlation attacks can
distort more than half of the fingerprint bits by causing a small utility loss
(e.g., database accuracy and consistency of SNP-phenotype associations
measured
via p-values). Next, we experimentally show that the correlation attacks can be
effectively mitigated by our proposed mitigation techniques. We validate that
the attacker can hardly compromise a large portion of the fingerprint bits even
if it pays a higher cost in terms of degradation of the database utility. For
example, with around 24% loss in accuracy and 20% loss in the consistency of
SNP-phenotype associations, the attacker can only distort about 30% fingerprint
bits, which is insufficient for it to avoid being accused. We also show that
the proposed mitigation techniques preserve the utility of the shared
genomic databases.
|
We develop an algorithm solving the 3x3 real symmetric eigenproblem. This is
a common problem, and in certain applications it must be solved many
thousands of times; see, for example, \cite{tripref}, where each element in a
finite element grid generates one such problem. Because of this, it is useful
to have a tailored method that
is easily coded and compact. Furthermore, the method described is fully
compatible with development as a GPU based code that would allow the
simultaneous solution of a large number of these small eigenproblems.
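
For concreteness, the sketch below implements the standard closed-form
(trigonometric) eigenvalue formula for a 3x3 real symmetric matrix, which
illustrates why a compact, tailored routine is possible; it is not necessarily
the authors' exact method, and eigenvector computation and careful edge-case
handling are omitted.

    # Standard trigonometric closed form for the eigenvalues of a 3x3 real
    # symmetric matrix; eigenvectors and special-case tuning are omitted.
    import numpy as np

    def eig3_symmetric(A):
        p1 = A[0, 1]**2 + A[0, 2]**2 + A[1, 2]**2
        if p1 == 0.0:                                  # already diagonal
            return np.sort(np.diag(A))
        q = np.trace(A) / 3.0
        p2 = sum((A[i, i] - q)**2 for i in range(3)) + 2.0 * p1
        p = np.sqrt(p2 / 6.0)
        B = (A - q * np.eye(3)) / p
        r = np.clip(np.linalg.det(B) / 2.0, -1.0, 1.0)
        phi = np.arccos(r) / 3.0
        e_max = q + 2.0 * p * np.cos(phi)
        e_min = q + 2.0 * p * np.cos(phi + 2.0 * np.pi / 3.0)
        return np.array([e_min, 3.0 * q - e_max - e_min, e_max])  # ascending

    A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    print(eig3_symmetric(A), np.linalg.eigvalsh(A))    # both give [1, 2, 4]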
|
In this VERTICO early science paper we explore in detail how environmental
mechanisms, identified in HI, affect the resolved properties of molecular gas
reservoirs in cluster galaxies. The molecular gas is probed using ALMA ACA
(+TP) observations of 12CO(2-1) in 51 spiral galaxies in the Virgo cluster (of
which 49 are detected), all of which are included in the VIVA HI survey. The
sample spans a stellar mass range of 9 < log M*/Msol < 11. We study molecular
gas radial profiles, isodensity radii, and surface densities as a function of
galaxy HI deficiency and morphology. There is a weak correlation between global
HI and H2 deficiencies, and resolved properties of molecular gas correlate with
HI deficiency: galaxies that have large HI deficiencies have relatively steep
and truncated molecular gas radial profiles, which is due to the removal of
low-surface density molecular gas on the outskirts. Therefore, while the
environmental mechanisms observed in HI also affect molecular gas reservoirs,
there is only a moderate reduction of the total amount of molecular gas.
|
Solid solutions of perovskite oxides (KNbO3)1-x+(La2NiMnO6)x (x=0, 0.1, 0.2
and 0.3) with band gaps varying from 1.33 to 3.6 eV have been introduced as
promising photovoltaic absorbers. The structural characterization of the
prepared samples was carried out using X-ray diffraction (followed by Rietveld
refinement) and Raman spectroscopy. As the doping percentage of the monoclinic
La2NiMnO6 in the solid-solution increases, the crystal structure of host KNbO3
becomes more symmetric from orthorhombic to cubic. A large reduction in the
particle size has also been observed in the solid solutions in comparison to
the pure KNbO3. The band gap (~1.33 eV) of the synthesized solid solution
with x=0.1 is found to be very close to the Shockley-Queisser band gap value
of 1.34 eV, suggesting promising photovoltaic potential in this material.
Photoluminescence (PL) emission spectra reveal strong PL quenching in the
solid solutions in comparison to pure KNbO3. Overall, the structural and
optical studies indicate that the KNbO3/La2NiMnO6 solid solutions are
promising for photovoltaic applications.
|
Superconductivity in iron-based materials, magnesium diboride, and other
novel superconducting materials has a strong multi-band and multi-gap
character. Recent experiments support the possibility of a BCS-BEC crossover
induced by
strong-coupling and proximity of the chemical potential to the band edge of one
of the bands. Here we study the simplest theoretical model which accounts for
the BCS-BEC crossover in a two-band superconductor, considering tunable
interactions and tunable energy separations between the bands. Mean-field
results for condensate fraction, correlation length, and superconducting gap
are reported in typical crossover diagrams to locate the boundaries of the BCS,
crossover, and BEC regimes. When the superconducting gap is of the order of the
local chemical potential, superconductivity is in the crossover regime of the
BCS-BEC crossover and the Fermi surface of the small band is smeared by the gap
opening. In this situation, small and large Cooper pairs coexist in the total
condensate, which is the optimal condition for high-Tc superconductivity. The
ratio between the gap and the Fermi energy in a given band turns out to be the
best detection parameter for experiments to locate the system in the BCS-BEC
crossover. Using available experimental data, our analysis shows that
iron-based superconductors have the partial condensate of the small Fermi
surface in the crossover regime of the BCS-BEC crossover, supporting the recent
ARPES findings.
|
We demonstrate the ability of a phase-sensitive amplifier (PSA) to
pre-amplify a selected quadrature of one mode of a two-mode squeezed state in
order to improve the measurement of two-mode quantum correlations that exist
before degradation due to optical and detection losses. We use four-wave mixing
(4WM) in $^{85}$Rb vapor to generate bright beams in a two-mode squeezed state.
One of these two modes then passes through a second 4WM interaction in a PSA
configuration to noiselessly pre-amplify the desired quadrature of the mode
before loss is intentionally introduced. We demonstrate an enhancement in the
measured degree of intensity correlation and intensity-difference squeezing
between the two modes.
|
The central engines of Seyfert galaxies are thought to be enshrouded by
geometrically thick gas and dust structures. In this article, we derive
observable properties for a self-consistent model of such toroidal gas and dust
distributions, where the geometrical thickness is achieved and maintained with
the help of X-ray heating and radiation pressure due to the central engine.
Spectral energy distributions (SEDs) and images are obtained with the help of
dust continuum radiative transfer calculations with RADMC-3D. For the first
time, we are able to present time-resolved SEDs and images for a physical model
of the central obscurer. Temporal changes are mostly visible at shorter
wavelengths, close to the combined peak of the dust opacity as well as the
central source spectrum and are caused by variations in the column densities of
the generated outflow. Due to the three-component morphology of the
hydrodynamical models -- a thin disc with high density filaments, a surrounding
fluffy component (the obscurer) and a low density outflow along the rotation
axis -- we find dramatic differences depending on wavelength: whereas the
mid-infrared images are dominated by the elongated appearance of the outflow
cone, the long wavelength emission is mainly given by the cold and dense disc
component. Overall, we find good agreement with observed characteristics,
especially for those models, which show clear outflow cones in combination with
a geometrically thick distribution of gas and dust, as well as a geometrically
thin, but high column density disc in the equatorial plane.
|
We derive braided $C^*$-tensor categories from gapped ground states on
two-dimensional quantum spin systems satisfying some additional condition which
we call the approximate Haag duality.
|
We consider the classical evolution of the inflaton field $\phi(t)$ and the
Hubble parameter $H(t)$ in homogeneous and isotropic single-field inflation
models. Under an extremely broad assumption, we show that the universe
generically emerges from an initial singularity in a non-inflating state where
the kinetic energy of the inflaton dominates its potential energy,
$\dot{\phi}^2 \gg V(\phi)$. In this kinetically-dominated regime, the dynamical
equations admit simple analytic solutions for $\phi(t)$ and $H(t)$, which are
independent of the form of $V(\phi)$. In such models, these analytic solutions
thus provide a simple way of setting the initial conditions from which to start
the (usually numerical) integration of the coupled equations of motion for
$\phi(t)$ and $H(t)$. We illustrate this procedure by applying it to
spatially-flat models with polynomial and exponential potentials, and determine
the background evolution in each case; generically $H(t)$ and $|\phi(t)|$ as
well as their time derivatives decrease during kinetic dominance until
$\dot{\phi}^2\sim V(\phi)$, marking the onset of a brief period of fast-roll
inflation prior to a slow roll phase. We also calculate the approximate
spectrum of scalar perturbations produced in each model and show that it
exhibits a generic damping of power on large scales. This may be relevant to
the apparent low-$\ell$ falloff in the CMB power spectrum.
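
For reference, a hedged sketch of the kinetically dominated solutions referred
to above, with $M_{\rm Pl}$ the reduced Planck mass and integration constants
$t_p$, $\phi_p$ (conventions may differ from those adopted in the paper):
neglecting $V(\phi)$, the Friedmann and Klein-Gordon equations reduce to
\[
3 M_{\rm Pl}^2 H^2 \simeq \tfrac{1}{2}\dot{\phi}^2 , \qquad
\ddot{\phi} + 3 H \dot{\phi} \simeq 0 ,
\]
which integrate to
\[
a(t) \propto t^{1/3}, \qquad H(t) = \frac{1}{3t}, \qquad
\phi(t) = \phi_p \pm \sqrt{\tfrac{2}{3}}\, M_{\rm Pl}
\ln\!\left(\frac{t}{t_p}\right),
\]
independent of the form of $V(\phi)$.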
|
The existence of the tiny neutrino masses and the flavor mixing can be
naturally explained by the type-I seesaw model, which is probably the simplest
extension of the Standard Model (SM), using Majorana-type SM-gauge-singlet
heavy Right Handed Neutrinos (RHNs). If the RHNs are around the electroweak
(EW) scale and have sizable mixings with the SM light neutrinos, they can be
produced at high energy colliders such as the Large Hadron Collider (LHC) and
a future $100$ TeV proton-proton (pp) collider through the characteristic
same-sign di-lepton signatures that introduce lepton number violation (LNV).
On the other hand, seesaw models with a small LNV parameter, namely the
inverse seesaw, can accommodate EW-scale pseudo-Dirac neutrinos with sizable
mixings with the SM light neutrinos while satisfying the neutrino oscillation
data. Due to the smallness of the LNV parameter in such models, the
`smoking-gun' same-sign di-lepton signature is suppressed, whereas the RHNs in
the model manifest at the LHC and a future $100$ TeV pp collider dominantly
through the lepton number conserving (LNC) trilepton final state with Missing
Transverse Energy (MET). Studying various production channels of such RHNs, we
give updated upper bounds on the light-heavy neutrino mixing parameters at the
13 TeV LHC and a future 100 TeV pp collider.
|
We present 0.8-mm band molecular images and spectra obtained with the Atacama
Large Millimeter/submillimeter Array (ALMA) toward one of the nearest galaxies
with an active galactic nucleus (AGN), NGC 1068. Distributions of CO isotopic
species ($^{13}$CO and C$^{18}$O) $\it{J}$ = 3--2, CN $\it{N}$ = 3--2 and CS
$\it{J}$ = 7--6 are observed toward the circumnuclear disk (CND) and a part of
the starburst ring with an angular resolution of $\sim$1.$^{\prime\prime}$3
$\times$ 1.$^{\prime\prime}$2. The physical properties of these molecules and
shock-related molecules such as HNCO, CH$_{3}$CN, SO, and CH$_{3}$OH detected
in the 3-mm band were estimated using rotation diagrams under the assumption of
local thermodynamic equilibrium. The rotational temperatures of the CO isotopic
species and the shock-related molecules in the CND are, respectively, 14--22 K
and upper limits of 20--40 K. Although the column densities of the CO isotopic
species in the CND are only from one-fifth to one-third of that in the
starburst ring, those of the shock-related molecules are enhanced by a factor
of 3--10 in the CND. We also discuss the chemistry of each species, and compare
the fractional abundances in the CND and starburst ring with those of Galactic
sources such as cold cores, hot cores, and shocked molecular clouds in order to
study the overall characteristics. We find that the abundances of shock-related
molecules are more similar to abundances in hot cores and/or shocked clouds
than to cold cores. The CND hosts relatively complex molecules, which are often
associated with shocked molecular clouds or hot cores. Because a high X-ray
flux can dissociate these molecules, they must also reside in regions shielded
from X-rays.
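
For readers less familiar with the technique, the rotation-diagram analysis
mentioned above relies, for optically thin emission in LTE, on the standard
relation (written here in common notation, which may differ slightly from the
paper's conventions)
\[
\ln\!\left(\frac{N_u}{g_u}\right)
= \ln\!\left(\frac{N_{\rm tot}}{Q(T_{\rm rot})}\right)
- \frac{E_u}{k\,T_{\rm rot}} ,
\]
so that a straight-line fit of $\ln(N_u/g_u)$ against $E_u/k$ yields
$T_{\rm rot}$ from the slope and the total column density $N_{\rm tot}$ from
the intercept.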
|
Collective spins in thermal gases are at the core of a multitude of science
and technology applications. In most of them, the random thermal motion of the
particles is considered detrimental as it is responsible for decoherence and
noise. In conditions of diffusive propagation, thermal atoms can potentially
occupy various stable spatial modes in a glass cell. Extended or localized,
diffusive modes have different magnetic properties, depending on the boundary
conditions of the atomic cell, and can react differently to external
perturbations. Here we demonstrate that a few of these modes can be
selectively excited, manipulated, and interrogated in atomic thermal vapours
using laser light. In particular, we identify the conditions for the
generation of modes
that are exceptionally resilient to undesirable effects introduced by optical
pumping, such as light shifts and power-broadening, which are often the
dominant sources of systematic errors in atomic magnetometers and
co-magnetometers. Moreover, we show that the presence of spatial inhomogeneity
in the pump, on top of the random diffusive atomic motion, introduces a
coupling that leads to a coherent exchange of excitation between the two
longest-lived modes. Our results indicate that systematic engineering of the
multi-mode nature of diffusive gases has great potential for improving the
performance of quantum technology applications based on alkali-metal thermal
gases, and promote these simple experimental systems as versatile tools for
quantum information applications.
|
Given integers $g,n \geq 0$ satisfying $2-2g-n < 0$, let $\mathcal{M}_{g,n}$
be the moduli space of connected, oriented, complete, finite area hyperbolic
surfaces of genus $g$ with $n$ cusps. We study the global behavior of the
Mirzakhani function $B \colon \mathcal{M}_{g,n} \to \mathbf{R}_{\geq 0}$ which
assigns to $X \in \mathcal{M}_{g,n}$ the Thurston measure of the set of
measured geodesic laminations on $X$ of hyperbolic length $\leq 1$. We improve
bounds of Mirzakhani describing the behavior of this function near the cusp of
$\mathcal{M}_{g,n}$ and deduce that $B$ is square-integrable with respect to
the Weil-Petersson volume form. We relate this knowledge of $B$ to statistics
of counting problems for simple closed hyperbolic geodesics.
|
New measurements of the $\eta$ and $K^0$ masses have been performed using
decays to 3$\pi^0$ with the NA48 detector at the CERN SPS. Using symmetric
decays to reduce systematic effects, the results $M(\eta) = 547.843\pm0.051$
MeV/c$^2$ and $M(K^0) = 497.625\pm0.031$ MeV/c$^2$ were obtained.
|
A short historical review is made of some recent literature in the field of
noncommutative geometry, especially the efforts to add a gravitational field to
noncommutative models of space-time and to use it as an ultraviolet regulator.
An extensive bibliography has been added containing references to recent
review
articles as well as to part of the original literature.
|
We reply to the comment of Harada, Sannino and Schechter (hep-ph/9609428) on
our recent paper in the Phys. Rev. Letters on the sigma meson. This concerns
the question, raised by Isgur and Speth in another comment, of whether a
detailed crossing symmetric form is necessary to understand the data. The
discussion gives further support for the existence of the broad sigma near 500
MeV.
|