The Large Synoptic Survey Telescope (LSST) has been designed to satisfy
several scientific objectives that can be addressed by a ten-year synoptic sky
survey. However, LSST will also provide a large amount of
data that can then be exploited for additional science beyond its primary
goals. We demonstrate the potential of using LSST data to search for transiting
exoplanets, and in particular to find planets orbiting host stars that are
members of stellar populations that have been less thoroughly probed by current
exoplanet surveys. We find that existing algorithms can detect, in simulated
LSST light curves, the transits of Hot Jupiters around solar-type stars, Hot
Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic
Cloud. We also show that LSST would have the sensitivity to potentially detect
Super-Earths orbiting red dwarfs, including those in habitable zone orbits, if
they are present in some fields that LSST will observe. From these results, we
make the case that LSST has the ability to provide a valuable contribution to
exoplanet science.
|
This paper presents a new construction of Maximum-Distance Separable (MDS)
Reed-Solomon erasure codes based on Fermat Number Transform (FNT). Thanks to
FNT, these codes support practical coding and decoding algorithms with
complexity O(n log n), where n is the number of symbols of a codeword. An
open-source implementation shows that the encoding speed can reach 150 Mbps for
codes of lengths up to several tens of thousands of symbols. These codes can be
used as the basic component of Information Dispersal Algorithm (IDA) systems
used in several P2P systems.
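
To make the FNT construction concrete: arithmetic over the Fermat prime $F_4 = 2^{16}+1$ admits a transform of any power-of-two length up to $2^{16}$, which plays the role of the FFT in encoding and decoding. The sketch below is a naive $O(n^2)$ version for illustration only (a practical codec uses an $O(n \log n)$ butterfly); 3 is a primitive root modulo 65537, so the round trip is exact:

```python
P = 2**16 + 1  # the Fermat prime F_4 = 65537

def fnt(x, inverse=False):
    """Naive Fermat Number Transform over GF(P); length must divide 2^16."""
    n = len(x)
    assert (P - 1) % n == 0
    omega = pow(3, (P - 1) // n, P)          # primitive n-th root of unity
    if inverse:
        omega = pow(omega, P - 2, P)         # modular inverse of omega
    out = [sum(x[j] * pow(omega, j * k, P) for j in range(n)) % P
           for k in range(n)]
    if inverse:
        n_inv = pow(n, P - 2, P)             # 1/n mod P, by Fermat's little theorem
        out = [(v * n_inv) % P for v in out]
    return out

data = [1, 2, 3, 4, 0, 0, 0, 0]              # a length-8 codeword slot
assert fnt(fnt(data), inverse=True) == data  # transform round-trips exactly
```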
|
We apply the OPE inversion formula to thermal two-point functions of bosonic
and fermionic CFTs in general odd dimensions. This allows us to analyze in
detail the operator spectrum of these theories. We find that nontrivial thermal
CFTs arise when the thermal mass satisfies an algebraic transcendental equation
that ensures the absence of an infinite set of operators from the spectrum. The
solutions of these gap equations for general odd dimensions are in general
complex numbers and follow a particular pattern. We argue that this pattern
unveils the large-$N$ vacuum structure of the corresponding theories at zero
temperature.
|
In several applications, input samples are more naturally represented in
terms of similarities between each other, rather than in terms of feature
vectors. In these settings, machine-learning algorithms can become very
computationally demanding, as they may require matching the test samples
against a very large set of reference prototypes. To mitigate this issue,
different approaches have been developed to reduce the number of required
reference prototypes. Current reduction approaches select a small subset of
representative prototypes in the space induced by the similarity measure, and
then separately train the classification function on the reduced subset.
However, decoupling these two steps may not allow reducing the number of
prototypes effectively without compromising accuracy. We overcome this
limitation by jointly learning the classification function along with an
optimal set of virtual prototypes, whose number can be either fixed a priori or
optimized according to application-specific criteria. Creating a super-sparse
set of virtual prototypes provides much sparser solutions, drastically reducing
complexity at test time, at the expense of a slightly increased complexity
during training. A much smaller set of prototypes also results in
easier-to-interpret decisions. We empirically show that our approach can reduce
the test-time complexity of Support Vector Machines, LASSO and ridge regression
by up to ten times, with almost no loss in classification accuracy.
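
A minimal sketch of the joint-learning idea (our illustration, not the authors' implementation): parameterize the decision function by a small set of virtual prototypes in an RBF similarity space and descend the logistic loss with respect to the weights and the prototype positions simultaneously; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, gamma):
    """Pairwise RBF similarities, shape (n_samples, n_prototypes)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_virtual_prototypes(X, y, m=5, gamma=1.0, lam=1e-3, lr=0.1, iters=500, seed=0):
    """Jointly learn m virtual prototypes Z and weights w for the decision
    function f(x) = sum_j w_j exp(-gamma ||x - z_j||^2), logistic loss,
    labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), m, replace=False)].copy()  # init from data points
    w = np.zeros(m)
    for _ in range(iters):
        K = rbf(X, Z, gamma)                      # (n, m)
        g = -y / (1.0 + np.exp(y * (K @ w)))      # per-sample dLoss/df
        grad_w = K.T @ g / len(X) + 2 * lam * w
        grad_Z = np.stack([                       # chain rule through the kernel
            (g[:, None] * w[j] * 2 * gamma * K[:, j, None] * (X - Z[j])).mean(0)
            for j in range(m)])
        w -= lr * grad_w
        Z -= lr * grad_Z
    return Z, w
```

Because the prototypes themselves move during training, far fewer of them are typically needed than when a reduced set must be selected from the training data in a separate step.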
|
Trigonometric and trigonometric-algebraic entropies are introduced.
Regularity increases the entropy and the maximal entropy is shown to result
when a regular $n$-gon is inscribed in a circle. A regular $n$-gon
circumscribing a circle gives the largest entropy reduction, or the smallest
change in entropy from the state of maximum entropy which occurs in the
asymptotic infinite-$n$ limit. The equations of motion (EOM) are shown to correspond to minimum perimeter
and maximum area in the theory of convex bodies, and can be used in the
prediction of new inequalities for convex sets. These expressions are shown to
be related to the phase functions obtained from the WKB approximation for
Bessel and Hermite functions.
|
The main goal of this paper is to develop a concept of approximate
differentiability of higher order for subsets of Euclidean space that allows
one to characterize higher-order rectifiable sets, extending well-known facts
for functions. We emphasize that for every subset $ A $ of Euclidean space and
for every integer $ k \geq 2 $ we introduce the approximate differential of
order $ k $ of $ A $ and we prove it is a Borel map whose domain is a (possibly
empty) Borel set. This concept could be helpful for dealing with higher-order
rectifiable sets in applications.
|
This paper provides a new formulation of the path-following problem for
mechanical systems that requires neither time parameterization nor guidance
laws; namely, we express the control objective as an orbital stabilization
problem. It is shown that it is possible to adapt the immersion and invariance
technique to design static state-feedback controllers that solve the problem.
In particular, we
select the target dynamics adopting the recently introduced Mexican sombrero
energy assignment method. To demonstrate the effectiveness of the proposed
method we apply it to control underactuated marine surface vessels.
|
In recent years, a range of problems within the broad umbrella of automatic,
computer-vision-based analysis of ancient coins has been attracting an
increasing amount of attention. Notwithstanding this research effort, the
results achieved by the state of the art in the published literature remain
poor and far from performing well enough for any practical purpose. In
the present paper we present a series of contributions which we believe will
benefit the interested community. Firstly, we explain that the approach of
visual matching of coins, universally adopted in all existing published papers
on the topic, is not of practical interest because the number of ancient coin
types exceeds by far the number of those types which have been imaged, be it in
digital form (e.g. online) or otherwise (traditional film, in print, etc.).
Rather, we argue that the focus should be on the understanding of the semantic
content of coins. Hence, we describe a novel method which uses real-world
multimodal input to extract semantic concepts and associate them with the
correct coin images, and then uses a novel convolutional neural network to
learn the appearance of these concepts. Using empirical evidence on a
real-world data set of ancient coins, by far the largest of its kind, we
demonstrate highly promising results.
|
We study the homogeneous Dirichlet problem for the doubly nonlinear equation
$u_t = \Delta_p u^m$, where $p>1$ and $m>0$, posed in a bounded domain in
$\mathbb{R}^N$ with homogeneous boundary conditions and with non-negative and
integrable data. In this paper we consider the degenerate case $m(p-1)>1$ and
the quasilinear case $m(p-1)=1$. We establish the large-time behaviour by
proving the uniform convergence to a unique asymptotic profile and we also give
rates for this convergence.
|
We establish the upper bound in the multiplicity conjecture of Herzog, Huneke
and Srinivasan for codimension-three almost complete intersections (acis). We
also give some partial results in the case where $I$ is the aci linked to a
complete intersection in one step.
|
We study numerically the ground state properties of the Cooper problem in the
three-dimensional Anderson model. It is shown that an attractive interaction
creates localized pairs in the metallic noninteracting phase. This localization
is destroyed at sufficiently weak disorder. The phase diagram for the
delocalization transition in the presence of disorder and interaction is
determined.
|
We study the ground state properties of spin-half bosons subjected to the
Rashba spin-orbit coupling in two dimensions. Due to the enhancement of the low
energy density of states, it is expected that the effect of interaction becomes
more important. After reviewing several possible ideal condensed states, we
carry out an exact diagonalization calculation for a cluster of the bosons in
the presence of strong spin-orbit coupling on a two-dimensional disk and reveal
strong correlations in its ground state. We derive a low-energy effective
Hamiltonian to understand how states with strong correlations become
energetically more favorable than the ideal condensed states.
|
Discrete Choice Experiments (DCEs) investigate the attributes that influence
individuals' choices when selecting among various options. To enhance the
quality of the estimated choice models, researchers opt for Bayesian optimal
designs that utilize existing information about the attributes' preferences.
Given the nonlinear nature of choice models, the construction of an appropriate
design requires efficient algorithms. Among these, the Coordinate-Exchange (CE)
algorithm is most commonly employed for constructing designs based on the
multinomial logit model. Since this is a hill-climbing algorithm, obtaining
better designs necessitates multiple random starting designs. This approach
increases the algorithm's run-time, but may not lead to a significant
improvement in results. We propose the use of a Simulated Annealing (SA)
algorithm to construct Bayesian D-optimal designs. This algorithm accepts both
superior and inferior solutions, avoiding premature convergence and allowing a
more thorough exploration of potential designs. Consequently, it ultimately
obtains higher-quality choice designs within the same time-frame. Our work
represents the first application of an SA algorithm in constructing Bayesian
optimal designs for DCEs. Through computational experiments and a real-life
case study, we demonstrate that the SA designs consistently outperform the CE
designs in terms of Bayesian D-efficiency, especially when the prior preference
information is highly uncertain.
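
The essential difference from coordinate exchange is the acceptance rule. A generic SA skeleton for design construction is sketched below (our illustration, not the authors' implementation; `d_error` stands for the Bayesian D-error of a design and `neighbor` for a move that changes one attribute level, both hypothetical placeholders):

```python
import math, random

def simulated_annealing(init_design, d_error, neighbor,
                        T0=1.0, alpha=0.95, steps_per_T=50, T_min=1e-4):
    """Generic SA skeleton for design optimization; lower d_error is better."""
    current, best = init_design, init_design
    f_cur = f_best = d_error(init_design)
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            cand = neighbor(current)
            delta = d_error(cand) - f_cur
            # accept improvements always, deteriorations with prob exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / T):
                current, f_cur = cand, f_cur + delta
                if f_cur < f_best:
                    best, f_best = current, f_cur
        T *= alpha  # geometric cooling schedule
    return best, f_best
```

Because deteriorations are accepted with probability exp(-delta/T), the search can escape the local optima at which a single hill-climbing run of coordinate exchange would stop.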
|
General Relativity is able to describe the dynamics of galaxies and larger
cosmic structures only if most of the matter in the Universe is dark, that is,
if it does not emit any electromagnetic radiation. Intriguingly, on the scale of
galaxies, there is strong observational evidence that the presence of dark
matter appears to be necessary only when the gravitational field inferred from
the distribution of the luminous matter falls below an acceleration of the
order of 10^(-10) m/s^2. In the standard model, which combines Newtonian
gravity with dark matter, the origin of this acceleration scale is challenging
to explain and remains unsolved. On the contrary, the full set of observations can be
neatly described, and were partly predicted, by a modification of Newtonian
dynamics, dubbed MOND, that does not resort to the existence of dark matter. On
the scale of galaxy clusters and beyond, however, MOND is not as successful as
on the scale of galaxies, and the existence of some dark matter appears
unavoidable. A model combining MOND with hot dark matter made of sterile
neutrinos seems to be able to describe most of the astrophysical phenomenology,
from the power spectrum of the cosmic microwave background anisotropies to the
dynamics of dwarf galaxies. Whether there exists a yet unknown covariant theory
that contains General Relativity and Newtonian gravity in the weak-field limit,
and MOND as the ultra-weak-field limit, is still an open question.
|
Normal-mode coupling is a helioseismic technique that uses measurements of
mode eigenfunctions to infer the interior structure of the Sun. This technique
has led to insights into the evolution and structure of toroidal flows in the
solar interior. Here, we validate an inversion algorithm for normal-mode
coupling by generating synthetic seismic measurements associated with input
flows and comparing the input and inverted velocities. We study four different
cases of input toroidal flows and compute synthetics that take into account the
partial visibility of the Sun. We invert the synthetics using Subtractive
Optimally Localized Averages (SOLA) and also try to mitigate the systematics of
mode leakage. We demonstrate that, ultimately, inversions are only as good as
the model we assume for the correlation between flow velocities.
|
I discuss the nature of the compact X-ray source inside the supernova remnant
RCW 103. Several models, based on accretion onto a compact object such as a
neutron star or a black hole (isolated or binary), are analyzed. I show that it
is more likely that the X-ray source is an accreting neutron star than an
accreting black hole. I also argue that models of a binary system with an old
accreting neutron star are most favored.
|
We report on the formation of surface instabilities in a layer of
thermoreversible ferrogel when exposed to a vertical magnetic field. Both
static and time-dependent magnetic fields are employed. Under variations of
temperature, the viscoelastic properties of our soft magnetic matter can be
tuned. Stress-relaxation experiments unveil a stretched-exponential scaling of
the shear modulus, with an exponent of $\beta = 1/3$. The resulting magnetic
threshold for the formation of Rosensweig cusps is measured for different
temperatures and compared with theoretical predictions by Bohlius et al. in
J. Phys.: Condens. Matter, 2006, 18, 2671-2684.
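
For reference, the stretched-exponential scaling referred to here is commonly written as $G(t) = G_0 \exp[-(t/\tau)^{\beta}]$ with $\beta = 1/3$, where $G_0$ is the instantaneous modulus and $\tau$ a (temperature-dependent) relaxation time; this is our rendering of the standard Kohlrausch form, not necessarily the authors' exact notation.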
|
Concepts of graph theory have applications in many areas of computer science
including data mining, image segmentation, clustering, image capturing,
networks, etc. An interval-valued fuzzy set is a generalization of the notion
of a fuzzy set. Interval-valued fuzzy models give more precision, flexibility
and compatibility to the system as compared to fuzzy models. In this paper, we
introduce the concepts of the antipodal interval-valued fuzzy graph and the
self-median interval-valued fuzzy graph of a given interval-valued fuzzy graph.
We investigate isomorphism properties of antipodal interval-valued fuzzy graphs.
|
Recently, laser cooling methods have been extended from atoms to molecules.
The complex rotational and vibrational energy level structure of molecules
makes laser cooling difficult, but these difficulties have been overcome and
molecules have now been cooled to a few microkelvin and trapped for several
seconds. This opens many possibilities for applications in quantum science and
technology, controlled chemistry, and tests of fundamental physics. This
article explains how molecules can be decelerated, cooled and trapped using
laser light, reviews the progress made in recent years, and outlines some
future applications.
|
We consider the problem of computing the integrable sub-distributions of the
non-integrable Vessiot distribution of multi-dimensional second order partial
differential equations (PDEs). We use Vessiot theory and solvable structures to
find the largest integrable distributions contained in the Vessiot distribution
associated to second order PDEs. In particular, we show how the solvable
symmetry structure of the original PDE can be used to construct integrable
sub-distributions leading to group invariant solutions of the PDE in two and
more than two independent variables.
|
Despite the existence of numerous Optical Character Recognition (OCR) tools,
the lack of comprehensive open-source systems hampers the progress of document
digitization in various low-resource languages, including Bengali. Low-resource
languages, especially those with an alphasyllabary writing system, suffer from
the lack of large-scale datasets for various document OCR components such as
word-level OCR, document layout extraction, and distortion correction; which
are available as individual modules in high-resource languages. In this paper,
we introduce Bengali.AI-BRACU-OCR (bbOCR): an open-source, scalable document
OCR system that can reconstruct Bengali documents into a structured, searchable
digitized format, leveraging a novel Bengali text recognition model and two
novel synthetic datasets. We present extensive component-level and system-level
evaluations, both using a novel diversified evaluation dataset and
comprehensive evaluation metrics. Our extensive evaluation suggests that our proposed
solution is preferable over the current state-of-the-art Bengali OCR systems.
The source codes and datasets are available here:
https://bengaliai.github.io/bbocr.
|
The measurement of $R_D$ ($R_{D^*}$), the ratio of the branching fraction of
$\overline{B} \to D \tau \bar{\nu}_\tau (\overline{B} \to D^* \tau
\bar{\nu}_\tau)$ to that of $\overline{B} \to D l \bar{\nu}_l (\overline{B} \to
D^* l \bar{\nu}_l)$, shows $1.9 \sigma$ $(3.3 \sigma)$ deviation from its
Standard Model (SM) prediction. The combined deviation is at the level of $4
\sigma$ according to the Heavy Flavour Averaging Group (HFAG). We perform an
effective field theory analysis (at the dimension 6 level) of these potential
New Physics (NP) signals assuming $ SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$
gauge invariance. We first show that, in general, $R_D$ and $R_{D^*}$ are
theoretically independent observables and hence, their theoretical predictions
are not correlated. We identify the operators that can explain the experimental
measurements of $R_D$ and $R_{D^*}$ individually and also together. Motivated
by the recent measurement of the $\tau$ polarisation in $\overline{B} \to D^*
\tau \bar{\nu}_\tau$ decay, $P_\tau^{D^*}$ by the {\sc Belle} collaboration, we
study the impact of a more precise measurement of $P_\tau^{D^*}$ (and a
measurement of $P_\tau^D$) on the various possible NP explanations.
Furthermore, we show that the measurement of $R_{D^*}$ in bins of $q^2$, the
square of the invariant mass of the lepton-neutrino system, along with the
information on $\tau$ polarisation, can completely distinguish the various
operator structures.
|
The photon PDF of the proton is needed for precision comparisons of LHC cross
sections with theoretical predictions. In a recent paper, we showed how the
photon PDF could be determined in terms of the electromagnetic proton structure
functions $F_2$ and $F_L$ measured in electron-proton scattering experiments,
and gave an explicit formula for the PDF including all terms up to
next-to-leading order. In this paper we give details of the derivation. We
obtain the photon PDF using the factorisation theorem and applying it to
suitable BSM hard scattering processes. We also obtain the same PDF in a
process-independent manner using the usual definition of PDFs in terms of
light-cone Fourier transforms of products of operators. We show how our method
gives an exact representation for the photon PDF in terms of $F_2$ and $F_L$,
valid to all orders in QED and QCD, and including all non-perturbative
corrections. This representation is then used to give an explicit formula for
the photon PDF to one order higher than our previous result. We also generalise
our results to obtain formul\ae\ for the polarised photon PDF, as well as the
photon TMDPDF. Using our formula, we derive the $P_{\gamma i}$ subset of DGLAP
splitting functions to order $\alpha \alpha_s$ and $\alpha^2$, which agree with
known results. We give a detailed explanation of the approach that we follow to
determine a photon PDF and its uncertainty within the above framework.
|
In this article we study the conditions under which a Hamiltonian that is an
element of the complex $ \left\{ h(1) \oplus h(1) \right\} \uplus u(2) $ Lie
algebra admits ladder operators that are also elements of this algebra. The
algebra eigenstates of the lowering operator constructed in this
way are computed and from them both the energy spectrum and the energy
eigenstates of this Hamiltonian are generated in the usual way with the help of
the corresponding raising operator. Thus, several families of generalized
Hamiltonian systems are found, which, under a suitable similarity
transformation, reduce to a basic set of systems, among which we find the 1:1,
2:1, 1:2, $su(2)$ and some other non-commensurate and commensurate anisotropic
2D quantum oscillator systems. Explicit expressions for the normalized
eigenstates of the Hamiltonian and its associated lowering operator are given,
which show the classical structure of two-mode separable and non-separable
generalized coherent and squeezed states. Finally, based on all the above
results, a proposal for new ladder operators for the $p:q$ coprime commensurate
anisotropic quantum oscillator is made, which leads us to a class of Chen
$SU(2)$ coherent states.
|
Current quantum computer designs will not scale. To scale beyond small
prototypes, quantum architectures will likely adopt a modular approach with
clusters of tightly connected quantum bits and sparser connections between
clusters. We exploit this clustering and the statically-known control flow of
quantum programs to create tractable partitioning heuristics which map quantum
circuits to modular physical machines one time slice at a time. Specifically,
we create optimized mappings for each time slice, accounting for the cost to
move data from the previous time slice and using a tunable lookahead scheme to
reduce the cost to move to future time slices. We compare our approach to a
traditional statically-mapped, owner-computes model. Our results show strict
improvement over the static mapping baseline. We reduce the non-local
communication overhead by 89.8\% in the best case and by 60.9\% on average. Our
techniques, unlike many exact solver methods, are computationally tractable.
|
We present results from the PARallaxes of Southern Extremely Cool objects
(PARSEC) program, an observational program begun in April 2007 to determine
parallaxes for 122 L and 28 T southern hemisphere dwarfs using the Wide Field
Imager on the ESO 2.2m telescope. The results presented here include parallaxes
of 10 targets from observations over 18 months and a first-version
proper-motion catalog. The proper motions were obtained by combining PARSEC
observations, astrometrically reduced with respect to the UCAC2 Catalog, with
the 2MASS Catalog. The resulting median proper-motion precision is 5 mas/yr for
195,700 sources. The 140 fields, each covering 0.3 deg^2, sample the southern
hemisphere in an unbiased fashion with the exception of the galactic plane,
owing to the small number of targets in that region. We present preliminary
parallaxes with a 4.2 mas median precision for 10 brown dwarfs, 2 of which are
within 10 pc. These increase the present number of L dwarfs with published
parallaxes by 20%. Of the 10 targets, 7 have been previously discussed in the
literature: two were thought to be binary but the PARSEC observations show them
to be single, one has been confirmed as a binary companion and another has been
found to be part of a binary system, both of which will make good benchmark
systems. Observations for the PARSEC program will end in early 2011, providing
3-4 years of coverage for all targets. The main expected outputs are: a more
than 100% increase in the number of L dwarfs with parallaxes; an increase, in
conjunction with published results, of the number of objects per spectral
subclass up to L9 to at least 10; and sensible limits on the general binary
fraction of brown dwarfs. We aim to contribute significantly to the
understanding of the faint end of the H-R diagram and of the L/T transition
region.
|
Ideal MHD relaxation is the topology-conserving reconfiguration of a magnetic
field into a lower energy state where the net force is zero. This is achieved
by modeling the plasma as a perfectly conducting, viscous fluid. It is an
important tool for investigating plasma equilibria and is often used to study
the magnetic configurations in fusion devices and astrophysical plasmas. We
study the equilibrium reached by a localized magnetic field through the
topology conserving relaxation of a magnetic field based on the Hopf fibration
in which magnetic field lines are closed circles that are all linked with one
another. Magnetic fields with this topology have recently been shown to occur
in non-ideal numerical simulations. Our results show that any localized field
can only attain equilibrium if there is a finite external pressure, and that
for such a field a Taylor state is unattainable. We find an equilibrium plasma
configuration that is characterized by a lowered pressure in a toroidal region,
with field lines lying on surfaces of constant pressure. Therefore, the field
is in a Grad-Shafranov equilibrium. Localized helical magnetic fields are found
when plasma is ejected from astrophysical bodies and subsequently relaxes
against the background plasma, as well as on Earth in plasmoids generated by
e.g.\ a Marshall gun. This work shows under which conditions an equilibrium can
be reached and identifies a toroidal depression as the characteristic feature
of such a configuration.
|
Artificial Neural Networks (ANNs) are being deployed for an increasing number
of safety-critical applications, including autonomous cars and medical
diagnosis. However, concerns about their reliability have been raised due to
their black-box nature and apparent fragility to adversarial attacks. These
concerns are amplified when ANNs are deployed on restricted systems, which limit
the precision of mathematical operations and thus introduce additional
quantization errors. Here, we develop and evaluate a novel symbolic
verification framework using software model checking (SMC) and satisfiability
modulo theories (SMT) to check for vulnerabilities in ANNs. More specifically,
we propose several ANN-related optimizations for SMC, including invariant
inference via interval analysis, slicing, expression simplifications, and
discretization of non-linear activation functions. With this verification
framework, we can provide formal guarantees on the safe behavior of ANNs
implemented both in floating- and fixed-point arithmetic. In this regard, our
verification approach was able to verify and produce adversarial examples for
$52$ test cases spanning image classification and general machine learning
applications. Furthermore, for small- to medium-sized ANNs, our approach
completes most of its verification runs in minutes. Moreover, in contrast to
most state-of-the-art methods, our approach is not restricted to specific
choices regarding activation functions and non-quantized representations. Our
experiments show that our approach can analyze larger ANN implementations and
substantially reduce the verification time compared to state-of-the-art
techniques that use SMT solving.
|
Accurate time-series forecasting is vital for numerous areas of application
such as transportation, energy, finance, economics, etc. However, while modern
techniques are able to explore large sets of temporal data to build forecasting
models, they typically neglect valuable information that is often available
in the form of unstructured text. Although this data is in a radically
different format, it often contains contextual explanations for many of the
patterns that are observed in the temporal data. In this paper, we propose two
deep learning architectures that leverage word embeddings, convolutional layers
and attention mechanisms for combining text information with time-series data.
We apply these approaches for the problem of taxi demand forecasting in event
areas. Using publicly available taxi data from New York, we empirically show
that by fusing these two complementary cross-modal sources of information, the
proposed models are able to significantly reduce the error in the forecasts.
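
As an illustration of the kind of architecture described (a hedged sketch; the branch shapes, layer sizes and fusion by concatenation are our assumptions, not the paper's exact design), the following PyTorch module fuses a convolutional text branch with attention pooling and a recurrent time-series branch:

```python
import torch
import torch.nn as nn

class TextTSFusion(nn.Module):
    """Sketch of a cross-modal forecaster: conv text branch with attention
    pooling, GRU time-series branch, fused by concatenation."""
    def __init__(self, vocab, emb=64, hid=64, ts_feats=1, horizon=1):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, hid, kernel_size=3, padding=1)
        self.attn = nn.Linear(hid, 1)           # attention score per token
        self.rnn = nn.GRU(ts_feats, hid, batch_first=True)
        self.out = nn.Linear(2 * hid, horizon)

    def forward(self, tokens, series):
        # tokens: (B, L) integer ids; series: (B, T, ts_feats)
        h = torch.relu(self.conv(self.embed(tokens).transpose(1, 2)))  # (B, hid, L)
        h = h.transpose(1, 2)                                          # (B, L, hid)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)             # (B, L)
        text_vec = (a.unsqueeze(-1) * h).sum(1)                        # (B, hid)
        _, ts_vec = self.rnn(series)                                   # (1, B, hid)
        return self.out(torch.cat([text_vec, ts_vec[-1]], dim=-1))
```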
|
While quantum devices rely on interactions between constituent subsystems and
with their environment to operate, native interactions alone often fail to
deliver targeted performance. Coherent pulsed control provides the ability to
tailor effective interactions, known as Hamiltonian engineering. We propose a
Hamiltonian engineering method that maximizes desired interactions while
mitigating deleterious ones by conducting a pulse sequence search using
constrained optimization. The optimization formulation incorporates pulse
sequence length and cardinality penalties consistent with linear or integer
programming. We apply the general technique to magnetometry with solid state
spin ensembles in which inhomogeneous interactions between sensing spins limit
coherence. Defining figures of merit for broadband Ramsey magnetometry, we
present novel pulse sequences which outperform known techniques for homonuclear
spin decoupling in both spin-1/2 and spin-1 systems. When applied to nitrogen
vacancy (NV) centers in diamond, this scheme partially preserves the Zeeman
interaction while zeroing the dipolar coupling between negatively charged
NV$^-$ centers. Such a scheme is of interest for NV$^-$ magnetometers which
have reached the NV$^-$-NV$^-$ coupling limit. We discuss experimental
implementation in NV ensembles, as well as
applicability of the current approach to more general spin bath decoupling and
superconducting qubit control.
|
In the presence of a certain class of functions we show that there exists a
smooth solution to the Navier-Stokes equation. This solution has the property
of being nonconvective. We introduce a definition for any possible solution to
the problem with minimal assumptions on the existence and regularity of such a
solution. Then we prove that the proposed class of functions represents the
unique solution to the problem, and consequently we conclude that there exist
no convective solutions to the problem in the sense of the given definition.
|
During NOvA operations it is planned to run the Fermilab Recycler in a 12-batch
slip-stacking mode. In preparation for this, measurements of the tune during a
six-batch injection, and then as the beam is slipped by changing the RF
frequency but without a 7th injection, have been carried out in the Main
Injector. The coherent tune shifts due to the changing beam intensity were
measured and compared well with the theoretically expected tune shift. The tune
shifts due to changing RF frequency, required for slip stacking, also compare
well with the linear theory, although some nonlinear effects are apparent at
large frequency changes. These results give us confidence that the expected
tune shifts during 12-batch slip-stacking Recycler operations can be
accommodated.
|
We report our observations of very bright prompt optical and reverse shock
(RS) optical emission of GRB 140512A and analyze its multi-wavelength data
observed with the {\em Swift} and {\em Fermi} missions. It is found that the
joint optical-X-ray-gamma-ray spectrum with our first optical detection
(R=13.09 mag) at $T_0+136$ seconds during the second episode of the prompt
gamma-rays can be fit by a single power-law with index $-1.32\pm 0.01$. Our
empirical fit to the afterglow lightcurves indicates that the observed bright
optical afterglow with R=13.00 mag at the peak time is consistent with
predictions of the RS and forward shock (FS) emission of external shock models.
Joint optical-X-ray afterglow spectrum is well fit with an absorbed single
power-law, with an index evolving with time from $-1.86\pm 0.01$ at the peak
time to $-1.57\pm 0.01$ at late epoch, which could be due to the evolution of
the ratio of the RS to FS emission fluxes. We fit the lightcurves with standard
external models, and derive the physical properties of the outflow. It is found
that the ratio $R_B\equiv\epsilon_{\rm B, r}/\epsilon_{\rm B, f}$ is 8187,
indicating a high magnetization degree in the RS region. Measuring the relative
radiation efficiency with $R_e\equiv\epsilon_{\rm e, r}/\epsilon_{\rm e, f}$,
we have $R_e= 0.02$, implying the radiation efficiency of the RS is much lower
than that in FS. We also show that the $R_B$ of GRBs 990123, 090102, and
130427A are similar to that of GRB 140512A and their apparent difference may be
mainly attributed to the difference of the jet kinetic energy, initial Lorentz
factor, and medium density among them.
|
We consider the solution of complex symmetric shifted linear systems. Such
systems arise in large-scale electronic structure simulations and there is a
strong need for the fast solution of the systems. With the aim of solving the
systems efficiently, we consider a special case of the QMR method for
non-Hermitian shifted linear systems and propose its weighted quasi-minimal
residual approach. A numerical algorithm, referred to as shifted QMR\_SYM($B$),
is given by the choice of a particularly cost-effective weight. Numerical
examples are presented to show the performance of the shifted QMR\_SYM($B$)
method.
|
The birthday problem is a well-known classic problem in probability theory,
widely applied in cryptography. Although bubble sort is a popular algorithm
leading to some interesting theoretical problems in computer science, its
relation to the birthday problem has not previously been established. This
paper shows how the Rayleigh distribution naturally arises in bubble sort by
relating it to the birthday problem, presenting a novel direction for analysing
both problems. Asymptotic distributions and statistical characteristics of
bubble sort and the birthday problem are then derived, with very small absolute
errors. Moreover, this paper proves that some common optimizations of bubble
sort can lead to average-performance degradation.
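
The birthday-problem side of the connection is easy to check numerically: the waiting time until the first repeated "birthday" among $n$ equally likely days is asymptotically Rayleigh with scale $\sqrt{n}$, with mean $\sqrt{\pi n/2}$ to leading order. A quick Monte Carlo check (our illustration, not from the paper):

```python
import math, random

def first_collision(n, rng):
    """Draw uniformly from {0,...,n-1} until a repeat; return the draw count."""
    seen = set()
    while True:
        v = rng.randrange(n)
        if v in seen:
            return len(seen) + 1
        seen.add(v)

n, trials = 365, 20000
rng = random.Random(1)
mean = sum(first_collision(n, rng) for _ in range(trials)) / trials
# Rayleigh(scale=sqrt(n)) gives sqrt(pi*n/2) ~ 23.9; the exact expectation
# carries a +2/3 correction, so the simulated mean lands near 24.6.
print(mean, math.sqrt(math.pi * n / 2))
```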
|
We explore the fundamental limits to which reionization histories can be
constrained using only large-scale cosmic microwave background (CMB) anisotropy
measurements. The redshift distribution of the fractional ionization $x_e(z)$
affects the angular distribution of CMB polarization. We project constraints on
the reionization history of the universe using low-noise full-sky temperature
and E-mode measurements of the CMB. We show that the measured TE power
spectrum, $\hat C_\ell^\mathrm{TE}$, has roughly one quarter of the
constraining power of $\hat C_\ell^\mathrm{EE}$ on the reionization optical
depth $\tau$, and its addition improves the precision on $\tau$ by 20% over
using $\hat C_\ell^\mathrm{EE}$ only. We also use a two-step reionization model
with an additional high redshift step, parametrized by an early ionization
fraction $x_e^\mathrm{min}$, and a late reionization step at $z_\mathrm{re}$.
We find that future high signal-to-noise measurements of the multipoles
$10\leqslant\ell<20$ are especially important for breaking the degeneracy
between $x_e^\mathrm{min}$ and $z_\mathrm{re}$. In addition, we show that the
uncertainties on these parameters determined from a map with sensitivity
$10\,\mathrm{\mu K\,arcmin}$ are less than 5% larger than the uncertainties in
the noiseless case, making this noise level a natural target for future large
sky area E-mode measurements.
|
Thomass\'{e} conjectured the following strengthening of the well-known
Caccetta-H\"{a}ggkvist Conjecture: any digraph with minimum out-degree $\delta$ and
girth $g$ contains a directed path of length $\delta(g-1)$. Bai and Manoussakis
gave counterexamples to Thomass\'{e}'s conjecture for every even $g\geq 4$. In
this note, we first generalize their counterexamples to show that
Thomass\'{e}'s conjecture is false for every $g\geq 4$. We also obtain the
positive result that any digraph with minimum out-degree $\delta$ and girth $g$
contains a directed path of length $2\delta(1-\frac{1}{g})$. For small $g$ we
obtain better bounds; e.g., for $g=3$ we show that any oriented graph with
minimum out-degree $\delta$ contains a directed path of length $1.5\delta$.
Furthermore, we show that each $d$-regular digraph with girth $g$ contains a
directed path of length $\Omega(dg/\log d)$. Our results give the first
non-trivial bounds for these problems.
|
The first calculation of the kaonic deuterium $1s$ level shift using
Faddeev-type equations was performed. The obtained results were compared with
commonly used approximate approaches.
|
Edge intelligence refers to a set of connected systems and devices for data
collection, caching, processing, and analysis, based on artificial
intelligence, in locations close to where the data is captured. The aim of edge
intelligence is to enhance the quality and speed of data processing and to
protect the privacy and
security of the data. Although it emerged only recently, around 2011, this
field of research has shown explosive growth over the past five years. In this
paper, we present a thorough and comprehensive survey of the
literature surrounding edge intelligence. We first identify four fundamental
components of edge intelligence, namely edge caching, edge training, edge
inference, and edge offloading, based on theoretical and practical results
pertaining to proposed and deployed systems. We then aim for a systematic
classification of the state of the solutions by examining research results and
observations for each of the four components and present a taxonomy that
includes practical problems, adopted techniques, and application goals. For
each category, we elaborate, compare and analyse the literature from the
perspectives of adopted techniques, objectives, performance, advantages and
drawbacks, etc. This survey article provides a comprehensive introduction to
edge intelligence and its application areas. In addition, we summarise the
development of this emerging research field and the current state of the art,
and discuss important open issues and possible theoretical and technical
solutions.
|
Unlike parametric regression, machine learning (ML) methods do not generally
require precise knowledge of the true data generating mechanisms. As such,
numerous authors have advocated for ML methods to estimate causal effects.
Unfortunately, ML algorithms can perform worse than parametric regression. We
demonstrate the performance of ML-based single- and double-robust estimators.
We use 100 Monte Carlo samples with sample sizes of 200, 1200, and 5000 to
investigate bias and confidence interval coverage under several scenarios. In a
simple confounding scenario, confounders were related to the treatment and the
outcome via parametric models. In a complex confounding scenario, the simple
confounders were transformed to induce complicated nonlinear relationships. In
the simple scenario, when ML algorithms were used, double-robust estimators
were superior to single-robust estimators. In the complex scenario,
single-robust estimators with ML algorithms were at least as biased as
estimators using misspecified parametric models. Double-robust estimators were
less biased, but coverage was well below nominal. The combination of sample
splitting, inclusion of confounder interactions, reliance on a richly specified
ML algorithm, and use of doubly-robust estimators was the only explored
approach that yielded negligible bias and nominal coverage. Our results suggest
that ML-based singly-robust methods should be avoided.
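
For concreteness, the following is a minimal sketch of an ML-based doubly-robust (AIPW) estimator of the average treatment effect, deliberately without the sample splitting / cross-fitting that the simulations above found essential, so this bare version should be expected to undercover; the learners are illustrative stand-ins:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def aipw_ate(X, a, y):
    """Augmented inverse-probability-weighted (doubly robust) estimate of the
    average treatment effect E[Y(1) - Y(0)]; a: 0/1 treatment, y: outcome."""
    ps = GradientBoostingClassifier().fit(X, a).predict_proba(X)[:, 1]
    m1 = GradientBoostingRegressor().fit(X[a == 1], y[a == 1]).predict(X)
    m0 = GradientBoostingRegressor().fit(X[a == 0], y[a == 0]).predict(X)
    psi = (m1 - m0
           + a * (y - m1) / ps            # residual correction, treated arm
           - (1 - a) * (y - m0) / (1 - ps))  # residual correction, control arm
    return psi.mean()
```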
|
A prototypical example of a rogue wave structure in a two-dimensional model
is presented in the context of the Davey-Stewartson~II (DS~II) equation arising
in water waves. The analytical methodology involves a Taylor expansion of an
eigenfunctionof the model's Lax pair which is used to form a hierarchy of
infinitely many new eigenfunctions. These are used for the construction of
two-dimensional (2D) rogue waves (RWs) of the DS~II equation by the even-fold
Darboux transformation (DT). The obtained 2D RWs, which are localized in both
space and time, can be viewed as a 2D analogue of the Peregrine soliton and are
thus natural candidates to describe oceanic RW phenomena,as well as ones in 2D
fluid systems and water tanks.
|
We investigate the hydrostatic equilibrium of white dwarfs within the
framework of Rastall-Rainbow gravity, aiming to explore the effects of this
modified gravitational theory on their properties. By employing the
Chandrasekhar equation of state in conjunction with the modified
Tolman-Oppenheimer-Volkoff equation, we derive the mass-radius relations for
white dwarfs. Our results show that the maximum mass of white dwarfs deviates
significantly from the predictions of general relativity, potentially exceeding
the Chandrasekhar limit. Furthermore, we discuss other properties of white
dwarfs, such as the gravitational redshift, compactness and dynamical
stability, shedding light on their behavior within the context of this modified
gravitational framework.
|
We consider families of charged rotating asymptotically AdS5 Extremal black
holes with Vanishing Horizon (EVH black holes) whose near horizon geometries
develop locally AdS3 throats. Using the AdS3/CFT2 duality, we propose an
EVH/CFT2 correspondence to describe the near-horizon low energy IR dynamics of
near-EVH black holes involving a specific large N limit of the 4d N = 4 SYM. We
give a map between the UV and IR near-EVH excitations, showing that the UV
first law of thermodynamics reduces to the IR first law satisfied by the near
horizon BTZ black holes in this near-EVH limit. We also discuss the connection
between our EVH/CFT proposal and the Kerr/CFT correspondence in the cases where
the two overlap.
|
The monopole map defines an element in an equivariant stable cohomotopy group
refining the Seiberg-Witten invariant. This first of two articles presents the
details of the definition of the stable cohomotopy invariant and discusses its
relation to the integer valued Seiberg-Witten invariant.
|
Invisibility cloaks for flexural waves have been mostly examined in the
continuous-wave regime, while invisibility is likely to deteriorate for short
pulses. Here, we propose the practical realization of a unidirectional
invisibility cloak for flexural waves based on an area-preserving coordinate
transformation. Time-resolved experiments reveal how the invisibility cloak
deflects a pulsed plane wave from its initial trajectory, and how the initial
wavefront perfectly recombines behind the cloak, leaving the diamond-shaped
hole invisible, notwithstanding the appearance of a forerunner.
Three-dimensional full-elasticity simulations support our experimental
observations.
|
The timeliness of detection of a sepsis event in progress is a crucial factor
in the outcome for the patient. Machine learning models built from data in
electronic health records can be used as an effective tool for improving this
timeliness, but so far the potential for clinical implementations has been
largely limited to studies in intensive care units. This study will employ a
richer data set that will expand the applicability of these models beyond
intensive care units. Furthermore, we will circumvent several important
limitations that have been found in the literature: 1) Models are evaluated
shortly before sepsis onset without considering interventions already
initiated. 2) Machine learning models are built on a restricted set of clinical
parameters, which are not necessarily measured in all departments. 3) Model
performance is limited by current knowledge of sepsis, as feature interactions
and time dependencies are hardcoded into the model. In this study, we present a
model to overcome these shortcomings using a deep learning approach on a
diverse multicenter data set. We used retrospective data from multiple Danish
hospitals over a seven-year period. Our sepsis detection system is constructed
as a combination of a convolutional neural network and a long short-term memory
network. We suggest a retrospective assessment of interventions by looking at
intravenous antibiotics and blood cultures preceding the prediction time.
Results show performance ranging from AUROC 0.856 (3 hours before sepsis onset)
to AUROC 0.756 (24 hours before sepsis onset). We present a deep learning
system for early detection of sepsis that is able to learn characteristics of
the key factors and interactions from the raw event-sequence data itself,
without relying on labor-intensive feature extraction.
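
A hedged sketch of such a detector (a convolutional front end over raw event sequences feeding a long short-term memory network; all sizes, depths and the embedding of event codes are our assumptions, not the study's configuration):

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """CNN + LSTM classifier over sequences of integer-coded clinical events."""
    def __init__(self, n_events, emb=32, conv=64, hid=64):
        super().__init__()
        self.embed = nn.Embedding(n_events, emb)
        self.conv = nn.Sequential(
            nn.Conv1d(emb, conv, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(conv, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, events):                   # events: (B, T) event codes
        x = self.embed(events).transpose(1, 2)   # (B, emb, T)
        x = self.conv(x).transpose(1, 2)         # (B, T, conv)
        _, (h, _) = self.lstm(x)                 # h: (1, B, hid)
        return torch.sigmoid(self.head(h[-1]))   # sepsis risk in [0, 1]
```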
|
The Higgs particle can decay dominantly into an invisible channel in the
Majoron models. We have explored the prospect of detecting such a Higgs
particle at the LHC via its associated production with a gluon, Z or W boson. While
the signal/background ratio is too small for the first process, the latter two
provide viable signatures for detecting such a Higgs particle.
|
We consider the problem of regulating by means of external control inputs the
ratio of two cell populations. Specifically, we assume that these two cellular
populations are composed of cells belonging to the same strain which embeds
some bistable memory mechanism, e.g. a genetic toggle switch, allowing them to
switch role from one population to another in response to some inputs. We
present three control strategies to regulate the populations' ratio to
arbitrary desired values which take also into account realistic physical and
technological constraints occurring in experimental microfluidic platforms. The
designed controllers are then validated in silico using stochastic agent-based
simulations.
|
This paper investigates the thermoelectric properties of solid polymer
electrolytes (SPE) containing lithium bis(trifluoromethanesulfonyl)imide
(LiTFSI) and sodium bis(trifluoromethanesulfonyl)imide (NaTFSI) salts, along
with carbon-based additives of various dimensionalities. Increasing salt
concentration leads to higher Seebeck coefficients as a result of the
increasing number of free charge carriers and additional, superimposed effects
by ion-ion and ion-polymer interactions. NaTFSI-based electrolytes exhibit
negative Seebeck coefficients (up to $S = -1.5\,\mathrm{mV\,K^{-1}}$),
indicating dominant mobility of $\mathrm{TFSI^-}$ ions. Quasi-one-dimensional
carbon nanotubes (CNTs) increase the Seebeck coefficient by a factor of 3.
Planar, two-dimensional graphite flakes (GF) moderately enhance it, affecting
$\mathrm{Na^+}$ and $\mathrm{TFSI^-}$ ion mobilities and electronic
conductivity. Bulky, three-dimensional carbon black (CB) additives induce a
unique behavior where the sign of the Seebeck coefficient changes with
temperature, presumably due to interaction with $\mathrm{TFSI^-}$ ions within
the CB structure. Changes in activation energy and Vogel temperature with salt
concentration suggest structural and mechanical modifications in the polymer
matrix. The choice of carbon-based additives and salt concentration
significantly influences the thermoelectric properties of SPEs, providing
insights into their potential for thermoelectric applications. Sodium-based
electrolytes emerge as promising, sustainable
alternatives to lithium-based systems, aligning with sustainable energy
research demands.
|
We devise an explicit method for computing combinatorial formulae for
Hadamard products of certain rational generating functions. The latter arise
naturally when studying so-called ask zeta functions of direct sums of modules
of matrices or class- and orbit-counting zeta functions of direct products of
nilpotent groups. Our method relies on shuffle compatibility of coloured
permutation statistics and coloured quasisymmetric functions, extending recent
work of Gessel and Zhuang.
|
Quantum Monte Carlo approaches such as the diffusion Monte Carlo (DMC) method
are among the most accurate many-body methods for extended systems. Their
scaling makes them well suited for defect calculations in solids. We review the
various approximations needed for DMC calculations of solids and the results of
previous DMC calculations for point defects in solids. Finally, we present
estimates of how approximations affect the accuracy of calculations for
self-interstitial formation energies in silicon and predict DMC values of
4.4(1), 5.1(1) and 4.7(1) eV for the X, T and H interstitial defects,
respectively, in a 16(+1)-atom supercell.
|
This paper introduces a semi-supervised contrastive learning framework and
its application to text-independent speaker verification. The proposed
framework employs generalized contrastive loss (GCL). GCL unifies losses from
two different learning frameworks, supervised metric learning and unsupervised
contrastive learning, and thus it naturally determines the loss for
semi-supervised learning. In experiments, we applied the proposed framework to
text-independent speaker verification on the VoxCeleb dataset. We demonstrate
that GCL enables the learning of speaker embeddings in three manners
(supervised learning, semi-supervised learning, and unsupervised learning)
without any changes in the definition of the loss function.
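
The following sketch conveys the unifying idea in a generic form (our rendering, not necessarily the paper's exact GCL): a softmax contrastive loss in which the positive set for each embedding is its same-speaker samples when a label is available and only its own augmented view otherwise, so the same expression covers the supervised, semi-supervised and unsupervised regimes.

```python
import torch
import torch.nn.functional as F

def generalized_contrastive_loss(z, labels, tau=0.1):
    """z: (2B, d) embeddings, two augmented views per utterance stored as
    [x_1..x_B, x'_1..x'_B]; labels: (2B,) speaker ids, -1 when unlabeled."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(eye, float('-inf'))   # drop self-similarity
    other = (torch.arange(n) + n // 2) % n      # index of the other view
    pos = torch.zeros(n, n, dtype=torch.bool)
    pos[torch.arange(n), other] = True          # other view is always positive
    labeled = labels >= 0
    same = (labels[:, None] == labels[None, :]) & labeled[:, None] & labeled[None, :]
    pos |= same & ~eye                          # same-speaker pairs when labeled
    log_p = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_logp = log_p.masked_fill(~pos, 0.0)     # keep only positive pairs
    return -(pos_logp.sum(1) / pos.sum(1).clamp(min=1)).mean()
```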
|
The typical temporal resolution used in modern simulations significantly
exceeds characteristic time scales at which the system is driven. This is
especially so when systems are simulated over time-scales that are much longer
than the typical temporal scales of forcing factors. We investigate the impact
of space-time upscaling on reactive transport in porous media driven by
time-dependent boundary conditions whose characteristic time scale is much
smaller than that at which transport is studied or observed at the macroscopic
level. The focus is on transport of a reactive solute undergoing diffusion,
advection and heterogeneous reaction on the solid grain boundaries. We first
introduce the concept of spatiotemporal upscaling in the context of
homogenization by multiple-scale expansions, and demonstrate the impact of
time-dependent forcings and boundary conditions on macroscopic reactive
transport. We then derive the macroscopic equation as well as the corresponding
applicability conditions based on the order of magnitude of the P\'{e}clet and
Damk\"{o}hler dimensionless numbers. Finally, we demonstrate that the dynamics
at the continuum scale is strongly influenced by the interplay between signal
frequency at the boundary and transport processes at the pore level.
|
We inject a sequence of 1 ms current pulses into uniformly magnetized
patterns of the itinerant ferromagnet SrRuO3 until a magnetization reversal is
detected. We detect the effective temperature during the pulse and find that
the cumulative pulse time required to induce magnetization reversal depends
exponentially on 1/T. In addition, we find that the cumulative pulse time also
depends exponentially on the current amplitude. These observations indicate
current-induced magnetization reversal assisted by thermal fluctuations.
|
We examine whether the inverted hierarchical model of neutrinos is compatible
with the large-mixing-angle (LMA) MSW solution of the solar neutrino problem.
The left-handed Majorana neutrino mass matrix for the inverted hierarchical
model is generated through the seesaw mechanism using the diagonal form of the
Dirac neutrino mass matrix and a non-diagonal texture of the right-handed
Majorana mass matrix. In a model-independent way, we construct a specific form
of the charged-lepton mass matrix having a special structure in the 1-2 block,
whose contribution to the leptonic mixing (MNS) matrix leads to the predictions
$\sin^{2}2\theta_{12}=0.8517$, $\sin^{2}2\theta_{23}=0.9494$, and
$|V_{e3}|=0.159$ at the unification scale.
These predictions are found to be consistent with the LMA MSW solution of the
solar neutrino problem. The inverted hierarchical model is also found to be
stable against the quantum radiative corrections in the MSSM. A numerical
analysis of the renormalisation group equations (RGEs) in the MSSM shows a mild
decrease of the mixing angles with the decrease of energy scale and the
corresponding values of the neutrino mixings at the top-quark mass scale are
found as $\sin^{2}2\theta_{12}=0.8472$, $\sin^{2}2\theta_{23}=0.9399$,
$|V_{e3}|=0.1509$ respectively.
|
We have used available intermediate degree p-mode frequencies for the solar
cycle 23 to check the validity of previously derived empirical relations for
frequency shifts (Jain et al.: 2000, Solar Phys., 192, 487). We find that the
calculated and observed frequency shifts during the rising phase of the cycle
23 are in good agreement. The observed frequency shift from minimum to maximum
of this cycle as calculated from MDI frequency data sets is 251 $\pm$ 7 nHz and
from GONG data is 238 $\pm$ 11 nHz. These values are in close agreement with
the empirically predicted value of 271 $\pm$ 22 nHz.
|
The characteristic sizes of penumbral structures are still below the resolution
limit of modern solar telescopes. Though we have seen significant progress in
theoretical work over the last decades, no tight constraints can be placed on
the size of penumbral structures that would favor either models with relatively
large and thick magnetic flux elements, just at or below the current resolution
limit, or, on the other hand, clusters of optically thin micro-structures.
Based on a macroscopic 2-component inversion and the approach of polarized
radiative transfer in stochastic media, we have estimated the characteristic
length scale of the magnetic fluctuation in a sunspot penumbra from observed
Stokes spectra. The results yield a coherent picture for the entire magnetic
neutral line of the penumbra and indicate that the magnetic fluctuations have a
typical length scale between 30 km and 70 km.
|
Transformer-based architectures have become competitive across a variety of
visual domains, most notably images and videos. While prior work studies these
modalities in isolation, having a common architecture suggests that one can
train a single unified model for multiple visual modalities. Prior attempts at
unified modeling typically use architectures tailored for vision tasks, or
obtain worse performance compared to single modality models. In this work, we
show that masked autoencoding can be used to train a simple Vision Transformer
on images and videos, without requiring any labeled data. This single model
learns visual representations that are comparable to or better than
single-modality representations on both image and video benchmarks, while using
a much simpler architecture. Furthermore, this model can be learned by dropping
90% of the image and 95% of the video patches, enabling extremely fast training
of huge model architectures. In particular, we show that our single ViT-Huge
model can be finetuned to achieve 86.6% on ImageNet and 75.5% on the
challenging Something Something-v2 video benchmark, setting a new state of the
art.
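
The core of the masking step can be sketched generically as follows (a simplified rendering of per-sample random patch dropping, not the paper's exact code):

```python
import torch

def random_masking(patches, mask_ratio):
    """Keep a random subset of patch tokens per sample; return the kept tokens
    and the permutation needed to restore order."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                 # iid score per patch
    ids_shuffle = noise.argsort(dim=1)       # random permutation per sample
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1,
                        ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids_shuffle

x = torch.randn(2, 196, 768)      # e.g. 14x14 image patches, ViT width 768
kept, _ = random_masking(x, 0.9)  # drop 90% as for images; 95% for video
print(kept.shape)                 # torch.Size([2, 19, 768])
```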
|
In the two-dimensional framework, the surface gravity of a (classical) black
hole is independent of its mass $M$. As a consequence, the Hawking temperature
and outflux are also independent of $M$ at the large-$M$ limit. (This contrasts
with the four-dimensional framework, in which the surface gravity and
temperature scale as 1/M.) However, when the semiclassical backreaction effects
on the black-hole geometry are taken into account, the surface gravity is no
longer $M$-independent, and the same applies to the Hawking temperature and
outflux. This effect, which vanishes at the large-$M$ limit, increases with
decreasing $M$. Here we analyze the semiclassical field equations for a
two-dimensional static black hole, and calculate the leading-order backreaction
effect ($\propto 1/M$) on the Hawking temperature and outflux. We then confirm
our analytical result by numerically integrating the semiclassical field
equations.
|
In this survey, we give an introduction to nearly K\"ahler geometry, and list
some results on submanifolds of these spaces. This survey makes no claim to
completeness.
|
Graph embedding methods are becoming increasingly popular in the machine
learning community, where they are widely used for tasks such as node
classification and link prediction. Embedding graphs in geometric spaces should
aid the identification of network communities as well, because nodes in the
same community should be projected close to each other in the geometric space,
where they can be detected via standard data clustering algorithms. In this
paper, we test the ability of several graph embedding techniques to detect
communities on benchmark graphs. We compare their performance against that of
traditional community detection algorithms. We find that the performance is
comparable, provided the parameters of the embedding techniques are suitably
chosen. However, the optimal parameter set varies with the specific features of
the benchmark graphs, such as their size, whereas popular community detection
algorithms do not require any parameters. So it is not possible to indicate
beforehand good parameter sets for the analysis of real networks. This finding,
along with the high computational cost of embedding a network and grouping the
points, suggests that, for community detection, current embedding techniques do
not represent an improvement over network clustering algorithms.
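
For concreteness, a minimal sketch of the embed-then-cluster pipeline the abstract evaluates, using a Laplacian eigenmap as a stand-in embedding (the abstract does not name specific techniques) and k-means for the clustering step; assumes networkx and scikit-learn:

```python
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Embed the graph in a geometric space, then recover communities with a
# standard clustering algorithm. A Laplacian eigenmap stands in for the
# embedding step here.
G = nx.karate_club_graph()
L = nx.normalized_laplacian_matrix(G).toarray()
eigvals, eigvecs = np.linalg.eigh(L)

dim = 2                        # embedding dimension: one of the tunable parameters
X = eigvecs[:, 1:dim + 1]      # skip the trivial eigenvector

communities = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(communities)
```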
|
In this paper, the author obtains the continuity of a class of linear
operators on variable anisotropic Hardy-Lorentz spaces.
In addition, the author also shows that the dual space of the variable
anisotropic Hardy-Lorentz spaces is the anisotropic BMO-type space with
variable exponents.
This result is new even when the exponent function $p(\cdot)$ is a constant $p$.
|
Configurations of rigid collections of saddle connections are connected
component invariants for strata of the moduli space of quadratic differentials.
They have been classified for strata of Abelian differentials by Eskin, Masur
and Zorich. Similar work for strata of quadratic differentials has been done in
Masur and Zorich, although in that case the connected components were not
distinguished. We classify the configurations for quadratic differentials on
the Riemann sphere and on hyperelliptic connected components of the moduli
space of quadratic differentials. We show that, in genera greater than five,
any configuration that appears in the hyperelliptic connected component of a
stratum also appears in the non-hyperelliptic one.
|
In a static gravitational field, the intersection of a worldline with a global
hypersurface of simultaneity t=const gives an invariant constraint relating the
proper time of this event to t. Since at any finite t such constrained
proper-time intervals are less than required for crossing a horizon, general
relativity predicts the gravitational freezing of proper times in stars with
time-like or null geodesics everywhere. The time dilation stabilizes
contracting massive stars by freezing, which is maximal but finite at the
centre, while the surface is frozen near the gravitational radius. The frozen
stars (frozars) slowly defrost due to emissions and external interactions;
internal phase transitions can initiate refreezing with bursts and explosions.
|
When a two-dimensional Ising ferromagnet is quenched from above the critical
temperature to zero temperature, the system eventually converges to either a
ground state (all spins aligned) or an infinitely long-lived metastable stripe
state. By applying results from percolation theory, we analytically determine
the probability to reach the stripe state as a function of the aspect ratio and
the form of the boundary conditions. These predictions agree with simulation
results. Our approach generally applies to coarsening dynamics of non-conserved
scalar fields in two dimensions.
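
A minimal numpy sketch of the setup, assuming zero-temperature Glauber dynamics on a periodic square lattice (flips that lower the energy are accepted, ties with probability 1/2); lattice size and step count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
Lx, Ly = 24, 24                          # the aspect ratio enters the stripe probability
s = rng.choice([-1, 1], size=(Lx, Ly))   # T = infinity initial condition

def local_field(s, i, j):
    # Sum of the four neighbours, periodic boundary conditions.
    return (s[(i + 1) % Lx, j] + s[(i - 1) % Lx, j]
            + s[i, (j + 1) % Ly] + s[i, (j - 1) % Ly])

for _ in range(1_000_000):               # zero-temperature Glauber updates
    i, j = rng.integers(Lx), rng.integers(Ly)
    h = local_field(s, i, j)
    if s[i, j] * h < 0:                   # flip lowers the energy: accept
        s[i, j] = -s[i, j]
    elif h == 0 and rng.random() < 0.5:   # degenerate flip: probability 1/2
        s[i, j] = -s[i, j]

print("magnetization per spin:", s.mean())  # +-1 in a ground state, ~0 in a stripe
```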
|
Let $C$ be a curve of genus 2 and $\psi_1:C \to E_1$ a map of degree $n$,
from $C$ to an elliptic curve $E_1$, both curves defined over $\mathbb{C}$. This map
induces a degree $n$ map $\phi_1:\mathbb{P}^1 \to \mathbb{P}^1$ which we call a Frey-Kani
covering. We determine all possible ramifications for $\phi_1$. If $\psi_1:C
\to E_1$ is maximal then there exists a maximal map $\psi_2:C \to E_2$, of
degree $n$, to some elliptic curve $E_2$ such that there is an isogeny of
degree $n^2$ from the Jacobian $J_C$ to $E_1 \times E_2$. We say that $J_C$ is
$(n,n)$-decomposable. If the degree $n$ is odd the pair $(\psi_2, E_2)$ is
canonically determined. For $n=3, 5$, and 7, we give arithmetic examples of
curves whose Jacobians are $(n,n)$-decomposable.
|
We prove that the group G=Hom(P,Z) of all homomorphisms from the Baer-Specker
group P to the group Z of integers, endowed with the topology of pointwise
convergence, contains no infinite compact subsets. We deduce from this
fact that the second Pontryagin dual of G is discrete. As G is non-discrete, it
is not reflexive. Since G can be viewed as a closed subgroup of the Tychonoff
product of continuum many copies of the integers Z, this provides an example of
a group described in the title, thereby answering Problem 11 from [J.Galindo,
L.Recorder-N\'{u}\~{n}ez, M.Tkachenko, Reflexivity of prodiscrete topological
groups, J. Math. Anal. Appl. 384 (2011), 320--330.] It follows that an inverse
limit of finitely generated (torsion-)free discrete abelian groups need not be
reflexive.
|
Clouds affected by solar eclipses could influence the reflection of sunlight
back into space and might change local precipitation patterns. Satellite cloud
retrievals have so far not taken into account the lunar shadow, hindering a
reliable spaceborne assessment of the eclipse-induced cloud evolution. Here we
use satellite cloud measurements during three solar eclipses between 2005 and
2016 that have been corrected for the partial lunar shadow together with
large-eddy simulations to analyze the eclipse-induced cloud evolution. Our
corrected data reveal that, over cooling land surfaces, shallow cumulus clouds
start to disappear at very small solar obscurations. Our simulations explain
that the cloud response was delayed and was initiated at even smaller solar
obscurations. We demonstrate that neglecting the disappearance of clouds during
a solar eclipse could lead to a considerable overestimation of the
eclipse-related reduction of net incoming solar radiation. These findings
should spur cloud model simulations of the direct consequences of
sunlight-intercepting geoengineering proposals, for which our results serve as
a unique benchmark.
|
The COmpact detectoR for the Eic (CORE) Proposal was submitted to the EIC
"Call for Collaboration Proposals for Detectors". CORE comprehensively covers
the physics scope of the EIC Community White Paper and the National Academies
of Science 2018 report. The design exploits advances in detector precision and
granularity to minimize size. The central detector includes a 3 T, 2.5 m
solenoid. Tracking is primarily silicon-based. Electromagnetic calorimetry is
based on high-performance crystals. Ring-imaging Cherenkov detectors provide
hadronic particle identification.
|
A new concept is proposed to solve the solar neutrino problem, that is based
on a hypothesis about the existence of a new interaction of electron neutrinos
with nucleons mediated by massless pseudoscalar bosons. At every collision of a
neutrino with nucleons of the Sun, its handedness changes from left to right
and vice versa, and its energy decreases. The postulated hypothesis, having
only one free parameter, provides a good agreement between the calculated and
experimental characteristics of all five observed processes with solar
neutrinos.
|
The launch of ${\it JWST}$ opens a new window for studying the connection
between metal-line absorbers and galaxies at the end of the Epoch of
Reionization (EoR). Previous studies have detected absorber-galaxy pairs in
limited quantities through ground-based observations. To enhance our
understanding of the relationship between absorbers and their host galaxies at
$z>5$, we utilized the NIRCam Wide Field Slitless Spectroscopy (WFSS) to search
for absorber-associated galaxies by detecting their rest-frame optical emission
lines (e.g., [OIII] + H$\beta$). We report the discovery of a MgII-associated
galaxy at $z=5.428$ using data from the ${\it JWST}$ ASPIRE program. The MgII
absorber is detected in the spectrum of quasar J0305--3150 with a rest-frame
equivalent width of 0.74 $\mathrm{\AA}$. The associated galaxy has an [OIII]
luminosity of $10^{42.5}\ {\rm erg\ s^{-1}}$ with an impact parameter of 24.9
proper kiloparsecs (pkpc). The joint ${\it HST}$-${\it JWST}$ spectral energy
distribution (SED) implies a stellar mass and star-formation rate of ${\rm M_*
\approx 10^{8.8}}$ ${\rm M_{\odot}}$, ${\rm SFR}\approx 10\ {\rm M_{\odot}\
yr^{-1}}$. Its [OIII] equivalent width and stellar mass are typical of [OIII]
emitters at this redshift. Furthermore, connecting the outflow starting time to
the SED-derived stellar age, the outflow velocity of this galaxy is $\sim300\
{\rm km\ s^{-1}}$, consistent with theoretical expectations. We identified six
additional [OIII] emitters with impact parameters of up to $\sim300$ pkpc at
similar redshifts ($|dv|<1000\ {\rm km\ s^{-1}}$). The observed number is
consistent with that in cosmological simulations. This pilot study suggests
that systematically investigating the absorber-galaxy connection within the
ASPIRE program will provide insights into the metal-enrichment history in the
early universe.
|
We propose an opinion dynamics model that combines processes of vanity and
opinion propagation. The interactions take place between randomly chosen pairs.
During an interaction, the agents propagate their opinions about themselves and
about other people they know. Moreover, each individual is subject to vanity:
if her interlocutor seems to value her highly, then she increases her opinion
about this interlocutor. On the contrary, she tends to decrease her opinion
about those who seem to undervalue her. The combination of these dynamics with
the hypothesis that the opinion propagation is more efficient when coming from
highly valued individuals, leads to different patterns when varying the
parameters. For instance, for some parameters the positive opinion links
between individuals generate a small world network. In one of the patterns,
absolute dominance of one agent alternates with a state of generalised
distrust, where all agents have a very low opinion of all the others (including
themselves). We provide some explanations of the mechanisms behind these
emergent behaviors and finally discuss their implications.
|
Directed graphs are a natural model for many phenomena, in particular
scientific knowledge graphs such as molecular interaction or chemical reaction
networks that define cellular signaling relationships. In these situations,
source nodes typically have distinct biophysical properties from sinks. Due to
their ordered and unidirectional relationships, many such networks also have
hierarchical and multiscale structure. However, the majority of methods
performing node- and edge-level tasks in machine learning do not take these
properties into account, and thus have not been leveraged effectively for
scientific tasks such as cellular signaling network inference. We propose a new
framework called Directed Scattering Autoencoder (DSAE) which uses a directed
version of a geometric scattering transform, combined with the non-linear
dimensionality reduction properties of an autoencoder and the geometric
properties of the hyperbolic space to learn latent hierarchies. We show this
method outperforms numerous others on tasks such as embedding directed graphs
and learning cellular signaling networks.
|
This paper studies how long it takes the orbit of the chaos game to reach a
certain density inside the attractor of a strictly contracting iterated
function system of which we only assume that its lower dimension is positive.
We show that the rate of growth of this cover time is determined by the
Minkowski dimension of the push-forward of the shift invariant measure with
exponential decay of correlations driving the chaos game. Moreover, we bound
the expected value of the cover time from above and below with multiplicative
logarithmic correction terms. As an application, for Bedford-McMullen carpets
we completely characterise the family of probability vectors which minimise the
Minkowski dimension of Bernoulli measures. Interestingly, these vectors have
not appeared in any other aspect of Bedford-McMullen carpets before.
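
A crude empirical illustration of the cover-time quantity (not the paper's analysis), assuming the Sierpinski triangle IFS with uniformly chosen maps and grid boxes of side delta:

```python
import numpy as np

# Sierpinski triangle IFS: three contractions of ratio 1/2.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
rng = np.random.default_rng(2)

def boxes_visited(n_steps, delta, burn_in=100):
    """Run the chaos game; record the delta-boxes visited after burn-in."""
    x = np.array([0.3, 0.3])
    boxes = []
    for t in range(n_steps):
        x = 0.5 * (x + verts[rng.integers(3)])   # apply a uniformly random map
        if t >= burn_in:
            boxes.append((int(x[0] / delta), int(x[1] / delta)))
    return boxes

delta = 1 / 32
target = set(boxes_visited(500_000, delta))      # long run approximates the attractor

# Cover time: steps a fresh orbit needs before it has entered every target box.
seen, cover_time = set(), None
for t, box in enumerate(boxes_visited(500_000, delta)):
    seen.add(box)
    if target <= seen:
        cover_time = t
        break
print(f"{len(target)} boxes at delta = {delta}; covered after {cover_time} steps")
```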
|
In this work, we investigate the problem of simultaneous blind demixing and
super-resolution. Leveraging the subspace assumption regarding unknown point
spread functions, this problem can be reformulated as a low-rank matrix
demixing problem. We propose a convex recovery approach that utilizes the
low-rank structure of each vectorized Hankel matrix associated with the target
matrix. Our analysis reveals that for achieving exact recovery, the number of
samples needs to satisfy the condition $n\gtrsim Ksr \log (sn)$. Empirical
evaluations demonstrate the recovery capabilities and the computational
efficiency of the convex method.
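
A minimal sketch of the structural fact the recovery program exploits: a superposition of s complex sinusoids has a rank-s Hankel lift. The signal model and sizes are illustrative; this is not the proposed convex solver:

```python
import numpy as np
from scipy.linalg import hankel

n, s = 64, 3                                   # samples, number of point sources
rng = np.random.default_rng(3)
freqs = rng.uniform(size=s)
x = np.exp(2j * np.pi * np.outer(np.arange(n), freqs)).sum(axis=1)

# Hankel lift: H[i, j] = x[i + j], constant along anti-diagonals.
n1 = n // 2
H = hankel(x[:n1], x[n1 - 1:])

# A superposition of s complex sinusoids gives a rank-s Hankel matrix;
# this is the low-rank structure the convex program exploits per component.
print(H.shape, np.linalg.matrix_rank(H, tol=1e-8))  # (32, 33) 3
```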
|
We introduce a new induction scheme for non-uniformly expanding maps $f$ of
compact Riemannian manifolds, proving that the existence of a
Gibbs-Markov-Young structure is a necessary condition for $f$ to preserve an
absolutely continuous probability with all its Lyapunov exponents positive.
|
Multiple viable theoretical models predict heavy dark matter particles with a
mass close to the Planck mass, a range relatively unexplored by current
experimental measurements. We use 219.4 days of data collected with the XENON1T
experiment to conduct a blind search for signals from Multiply-Interacting
Massive Particles (MIMPs). Their unique track signature allows a targeted
analysis with only 0.05 expected background events from muons. Following
unblinding, we observe no signal candidate events. This work places strong
constraints on spin-independent interactions of dark matter particles with a
mass between 1$\times$10$^{12}\,$GeV/c$^2$ and 2$\times$10$^{17}\,$GeV/c$^2$.
In addition, we present the first exclusion limits on spin-dependent
MIMP-neutron and MIMP-proton cross-sections for dark matter particles with
masses close to the Planck scale.
|
DSLR cameras can achieve multiple zoom levels via shifting lens distances or
swapping lens types. However, these techniques are not possible on smartphone
devices due to space constraints. Most smartphone manufacturers adopt a hybrid
zoom system: commonly a Wide (W) camera at a low zoom level and a Telephoto (T)
camera at a high zoom level. To simulate zoom levels between W and T, these
systems crop and digitally upsample images from W, leading to significant
detail loss. In this paper, we propose an efficient system for hybrid zoom
super-resolution on mobile devices, which captures a synchronous pair of W and
T shots and leverages machine learning models to align and transfer details
from T to W. We further develop an adaptive blending method that accounts for
depth-of-field mismatches, scene occlusion, flow uncertainty, and alignment
errors. To minimize the domain gap, we design a dual-phone camera rig to
capture real-world inputs and ground-truths for supervised training. Our method
generates a 12-megapixel image in 500ms on a mobile platform and compares
favorably against state-of-the-art methods under extensive evaluation on
real-world scenarios.
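
A minimal numpy sketch of the adaptive-blending idea, assuming the telephoto frame has already been warped onto the wide frame; the confidence heuristic and function names are illustrative stand-ins for the learned occlusion and uncertainty cues described above:

```python
import numpy as np

def blend(wide, tele_aligned, confidence):
    """Per-pixel alpha blend: use telephoto detail where alignment is trusted.

    wide, tele_aligned: float images in [0, 1] of shape (H, W, 3).
    confidence: (H, W) weights in [0, 1], low wherever occlusion,
    depth-of-field mismatch, or flow uncertainty was detected.
    """
    alpha = confidence[..., None]
    return alpha * tele_aligned + (1.0 - alpha) * wide

rng = np.random.default_rng(4)
wide = rng.uniform(size=(128, 128, 3))
tele = rng.uniform(size=(128, 128, 3))      # assumed already warped onto wide

# Crude stand-in for learned confidence: down-weight pixels where the
# two frames disagree after alignment.
err = np.abs(wide - tele).mean(axis=-1)
confidence = np.clip(1.0 - 5.0 * err, 0.0, 1.0)

fused = blend(wide, tele, confidence)
print(fused.shape)
```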
|
We derive a new unidirectional evolution equation for photonic nanowires made
of silica. Contrary to previous approaches, our formulation simultaneously
takes into account both the vector nature of the electromagnetic field and the
full variations of the effective modal profiles with wavelength. This leads to
the discovery of new, previously unexplored nonlinear effects which have the
potential to affect soliton propagation considerably. We specialize our
theoretical considerations to the case of perfectly circular silica strands in
air, and we support our analysis with detailed numerical simulations.
|
We investigate the cohomology rings of regular semisimple Hessenberg
varieties whose Hessenberg functions are of the form $h=(h(1),n,\dots,n)$ in Lie
type $A_{n-1}$. The main result of this paper gives an explicit presentation of
the cohomology rings in terms of generators and their relations. Our
presentation naturally specializes to Borel's presentation of the cohomology
ring of the flag variety and it is compatible with the representation of the
symmetric group $\mathfrak{S}_n$ on the cohomology constructed by J. Tymoczko.
As a corollary, we also give an explicit presentation of the
$\mathfrak{S}_n$-invariant subring of the cohomology ring.
|
The evolution equation for the mean parallel current density is presented using
the electromagnetic (EM) gyrokinetic equation. Two types of intrinsic
current-driving mechanisms result from EM electron temperature gradient (ETG)
turbulence. The first type is the divergence of the residual turbulent flux,
which includes a residual stress-like term and a kinetic stress-like term. The
second type, termed the residual turbulent source, is driven by the
correlation between density and parallel electric field fluctuations. The
intrinsic current density driven by the residual turbulent source is negligible
compared to that driven by the residual turbulent flux. The ratio of the
intrinsic current density driven by EM ETG turbulence to the background
bootstrap current density is estimated. The local intrinsic current density
driven by the residual turbulent flux, for mesoscale variation of the turbulent
flux, can reach about 80% of the bootstrap current density in the core region
of the ITER standard scenario, although there is no net intrinsic current on a
global scale. Accordingly, the local intrinsic current driven by EM
micro-turbulence, and its effect on the local modification of the safety-factor
profile, may need to be carefully taken into account in future devices with
high $\beta_e$, the ratio of electron pressure to magnetic pressure.
|
It is shown that the operad maps $E_n\to E_{n+k}$ are formal over the reals
for $k\geq 2$ and non-formal for $k=1$. Furthermore we compute the cohomology
of the deformation complex of the operad maps $E_{n}\to E_{n+1}$, proving an
algebraic version of the Cerf Lemma.
|
High Utility Itemset (HUI) mining problem is one of the important problems in
the data mining literature. The problem offers greater flexibility to a
decision maker to incorporate her/his notion of utility into the pattern mining
process. The problem, however, requires the decision maker to choose a minimum
utility threshold value for discovering interesting patterns. This is quite
challenging due to the disparate itemset characteristics and their utility
distributions. In order to address this issue, Top-K High Utility Itemset
(THUI) mining problem was introduced in the literature. THUI mining problem is
primarily a variant of the HUI mining problem that allows a decision maker to
specify the desired number of HUIs rather than the minimum utility threshold
value. Several algorithms have been introduced in the literature to efficiently
mine top-k HUIs. This paper systematically analyses the top-k HUI mining
methods in the literature, describes the methods, and performs a comparative
analysis. The data structures, threshold raising strategies, and pruning
strategies adopted for efficient top-k HUI mining are also presented and
analysed. Furthermore, the paper reviews several extensions of the top-k HUI
mining problem such as data stream mining, sequential pattern mining and
on-shelf utility mining. The paper is likely to be useful for researchers to
examine the key methods in top-k HUI mining, evaluate the gaps in literature,
explore new research opportunities and enhance the state-of-the-art in high
utility pattern mining.
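
To fix ideas, a brute-force top-k HUI miner with a min-heap whose smallest utility acts as the rising threshold; the surveyed algorithms replace this exhaustive enumeration with the data structures and pruning strategies discussed above. Transactions and profits are toy data:

```python
import heapq
from itertools import combinations

# Toy data: transactions map item -> purchased quantity; unit_profit is
# the external utility of each item.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1, "d": 1}]
unit_profit = {"a": 5, "b": 2, "c": 1, "d": 4}

def utility(itemset, tx):
    """Utility of an itemset in one transaction (0 if not fully contained)."""
    if not all(i in tx for i in itemset):
        return 0
    return sum(tx[i] * unit_profit[i] for i in itemset)

items = sorted(unit_profit)
k, heap = 3, []                        # min-heap holds the current top-k
for r in range(1, len(items) + 1):
    for itemset in combinations(items, r):
        u = sum(utility(itemset, tx) for tx in transactions)
        if len(heap) < k:
            heapq.heappush(heap, (u, itemset))
        elif u > heap[0][0]:           # beats the current threshold
            heapq.heapreplace(heap, (u, itemset))

# heap[0][0] is the raised minimum-utility threshold after the scan.
print("threshold:", heap[0][0])
for u, itemset in sorted(heap, reverse=True):
    print(itemset, u)
```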
|
We experimentally probe nonlinear wave propagation in weakly compressed
granular media, and observe a crossover from quasi-linear sound waves at low
impact, to shock waves at high impact. We show that this crossover grows with
the confining pressure $P_0$, whereas the shock wave speed is independent of
$P_0$ --- two hallmarks of granular shocks predicted recently. The shocks
exhibit power-law attenuation, which we model with a logarithmic law implying
that local dissipation is weak. We show that elastic and potential energy
balance in the leading part of the shocks.
|
This paper studies the pair production of the doubly charged Higgs boson of
the left-right symmetric models using multilepton final state in the vector
boson fusion (VBF)-like processes. The study is performed in the framework
consistent with the model's correction to the standard model $\rho_{EW}$
parameter. VBF topological cuts, number of leptons in the final state and $p_T$
cuts on the leptons are found to be effective in suppressing the background.
Significant mass reach can be achieved for exclusion/discovery of the doubly
charged Higgs boson for the upcoming LHC run with a luminosity of
$\mathcal{O}(10^3)$ fb$^{-1}$.
|
Differentially private stochastic gradient descent (DP-SGD) is the standard
algorithm for training machine learning models under differential privacy (DP).
The major drawback of DP-SGD is the drop in utility which prior work has
comprehensively studied. However, in practice another major drawback that
hinders the large-scale deployment is the significantly higher computational
cost. We conduct a comprehensive empirical study to quantify the computational
cost of training deep learning models under DP and benchmark methods that aim
at reducing the cost. Among these are more efficient implementations of DP-SGD
and training with lower precision. Finally, we study the scaling behaviour
using up to 80 GPUs.
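
A minimal numpy sketch of why DP-SGD costs more than SGD: it materializes one clipped gradient per example before summing and adding noise (shown here for least squares; hyperparameters are illustrative):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for least squares.

    The (B, d) array of per-example gradients is exactly what plain SGD
    never materializes, and it dominates DP-SGD's memory/compute cost.
    """
    residual = X @ w - y                      # (B,)
    per_example = residual[:, None] * X       # (B, d): one gradient per example
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip)
    noise = rng.normal(scale=noise_mult * clip, size=w.shape)
    grad = (clipped.sum(axis=0) + noise) / len(y)
    return w - lr * grad

rng = np.random.default_rng(5)
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=256)

w = np.zeros(10)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print("parameter error:", np.linalg.norm(w - w_true))
```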
|
In this paper, we make use of holographic Boundary Conformal Field Theory
(BCFT) to simulate the black hole information problem in the semi-classical
picture. We investigate the correlation between a portion of Hawking radiation
and entanglement islands by the area of an entanglement wedge cross-section.
Building on the understanding of the relationship between entanglement wedge
cross-sections and perfect tensor entanglement as discussed in reference [1],
we make an intriguing observation: in the semi-classical picture, the
positioning of an entanglement island automatically yields a pattern of perfect
tensor entanglement. Furthermore, the contribution of this perfect tensor
entanglement, combined with the bipartite entanglement contribution, precisely
determines the area of the entanglement wedge cross-section.
|
Quantum key distribution (QKD) has been developed for decades and several
different QKD protocols have been proposed. But two difficulties limit the
implementation of most QKD protocols. First, the involved participants are
required to have heavy quantum capabilities, such as quantum joint operation,
quantum register, and so on. Second, a hypothetical authenticated classical
channel is used in most of the existing QKD protocols and this assumed channel
does not exist in reality. To solve both the above limitations at the same
time, this study proposes three lightweight authenticated QKD protocols with
key recycling and shows that the proposed protocols are robust against
collective attacks.
|
Using vanilla NeuralODEs to model large and/or complex systems often fails
for two reasons: stability and convergence. NeuralODEs are capable of
describing stable as well as unstable dynamic systems. Selecting an appropriate
numerical solver is not trivial, because the properties of a NeuralODE change
during training. If the NeuralODE becomes stiffer, a suboptimal solver may need
to perform very small solver steps, which significantly slows down the training
process. If the NeuralODE becomes too unstable, the numerical solver might not
be able to solve it at all, which causes the training process to terminate.
Often, this is tackled by choosing a computationally expensive solver that is
robust to unstable and stiff ODEs, but at the cost of significantly decreased
training performance. Our method, on the other hand, allows one to enforce ODE
properties that fit a specific solver or application-related boundary
conditions. Concerning convergence behavior, NeuralODEs often tend to run
into local minima, especially if the system to be learned is highly dynamic
and/or oscillates over multiple periods. Because of the vanishing gradient at
a local minimum, the NeuralODE is often not capable of leaving it and
converging to the right solution. We present a technique to add knowledge of
ODE properties based on eigenvalues - such as (partial) stability, oscillation
capability, frequency, damping, and/or stiffness - to the training objective of
a NeuralODE. We exemplify our method on a linear as well as a nonlinear system
model and show that the presented training process is far more robust against
local minima, instabilities, and sparse data samples, and improves training
convergence and performance.
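
The abstract does not spell out the objective, so the following is only a schematic numpy sketch of an eigenvalue-based penalty of the kind described: eigenvalues of a (linearized) system matrix are pushed into a solver-friendly region, and the penalty would be added to the data-fit loss:

```python
import numpy as np

def eigenvalue_penalty(A, max_real=-0.01, max_freq=50.0):
    """Penalize eigenvalues outside a solver-friendly region.

    A is the system matrix of a linear model (for a nonlinear NeuralODE
    one would use the Jacobian of the vector field). Real parts above
    `max_real` (instability) and imaginary parts above `max_freq`
    (too-fast oscillation) contribute to the penalty.
    """
    lam = np.linalg.eigvals(A)
    stability = np.maximum(lam.real - max_real, 0.0).sum()
    frequency = np.maximum(np.abs(lam.imag) - max_freq, 0.0).sum()
    return stability + frequency

A_good = np.array([[-1.0, 2.0], [-2.0, -1.0]])  # eigenvalues -1 +- 2i
A_bad = np.array([[0.5, 0.0], [0.0, -900.0]])   # one unstable eigenvalue

# The penalty would be added, with some weight, to the data-fit loss.
print(eigenvalue_penalty(A_good))  # 0.0: already inside the target region
print(eigenvalue_penalty(A_bad))   # > 0: pushes training toward stability
```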
|
Mediums which do not support the propagation of plane waves with negative
phase velocity (NPV) when viewed at rest can support NPV propagation when they
are viewed in a reference frame which is uniformly translated at sufficiently
high velocity. Thus, relativistic negative refraction may be exploited in
astronomical scenarios.
|
For a given polarized toric variety, we define the notion of
$\lambda$-stability which is a natural generalization of uniform K-stability.
At the neighbourhoods of the vertices of the corresponding moment polytope
$\Delta$, we consider appropriate triangulations and give a sufficient criteria
for a $\lambda$-stable polarized toric variety $(X,L)$ to be asymptotically
Chow polystable when the obstruction of asymptotic Chow semistability (the
Futaki-Ono invariant) vanishes. As an application, we prove that any
K-semistable polarized smooth toric variety $(X,L)$ with the vanishing
Futaki-Ono invariant is asymptotically Chow polystable.
|
Large mass bolometers are used in particle physics experiments to search for
rare processes. By operating at low temperature, they are able to detect
particle energies from a few keV up to several MeV, measuring the temperature
rise produced by the energy released. This study was performed on the
bolometers of the CUORE experiment. The response function of these detectors is
not linear in the energy range of interest, and it changes with the operating
temperature. The non-linearity is found to be dominated by the thermistor and
its biasing circuit. A method to obtain a linear response is the result of this
work. It allows a great simplification of the data analysis.
|
Following the discovery by Quashnock and Lamb (1993) of an apparent excess of
$\gamma$-ray burst pairs with small angular separations, we reanalyze the
angular distribution of the bursts in the BATSE catalogue. We find that in
addition to an excess of close pairs, there is also a comparable excess of
antipodal bursts, i.e., pairs of bursts separated by about 180 degrees in the
sky. Both excesses have only modest statistical significance. We reject the
hypothesis put forward by Quashnock and Lamb that burst sources are repeaters,
since it is obvious that this hypothesis does not predict an excess of
antipodal coincidences. Lacking any physical model of bursts that can explain
the antipodal pairs, we suggest that the two excesses seen in the data are
either due to an unusual statistical fluctuation or caused by some unknown
selection effect.
|
We study the problem of generating, ranking and unranking of unlabeled
ordered trees whose nodes have maximum degree of $\Delta$. This class of trees
represents a generalization of chemical trees. A chemical tree is an unlabeled
tree in which no node has degree greater than 4. By allowing up to $\Delta$
children for each node of chemical tree instead of 4, we will have a
generalization of chemical trees. Here, we introduce a new encoding over an
alphabet of size 4 for representing unlabeled ordered trees with maximum degree
of $\Delta$. We use this encoding for generating these trees in A-order with
constant average time and O(n) worst-case time. Based on this encoding, with
a precomputation of O(n^2) time and space (assuming $\Delta$ is constant),
ranking and unranking algorithms are also designed, taking O(n) and O(n log n)
time, respectively.
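
As an illustration of the object class (not the paper's A-order generation or ranking algorithms), a short dynamic program counting unlabeled ordered trees with n nodes and maximum degree D:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, D):
    """Unlabeled ordered trees with n nodes and at most D children per node."""
    if n == 1:
        return 1
    # The root has j subtrees (1 <= j <= D) whose sizes sum to n - 1.
    return sum(arrangements(n - 1, j, D) for j in range(1, D + 1))

@lru_cache(maxsize=None)
def arrangements(m, j, D):
    """Ordered j-tuples of subtrees with sizes summing to m."""
    if j == 0:
        return 1 if m == 0 else 0
    return sum(count(s, D) * arrangements(m - s, j - 1, D)
               for s in range(1, m - j + 2))

# With D = 4 (chemical trees) the counts follow the Catalan numbers
# 1, 1, 2, 5, 14 until a node with 5 children first becomes possible.
print([count(n, 4) for n in range(1, 8)])
```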
|
We demonstrate that free graphene sheet edges can curl back on
themselves, reconstructing as nanotubes. This results in lower formation
energies than any other non-functionalised edge structure reported to date in
the literature. We determine the critical tube size and formation barrier and
compare with density functional simulations of other edge terminations
including a new reconstructed Klein edge. Simulated high resolution electron
microscopy images show why such rolled edges may be difficult to detect. Rolled
zigzag edges serve as metallic conduction channels, separated from the
neighbouring bulk graphene by a chain of insulating sp$^3$-carbon atoms, and
introduce Van Hove singularities into the graphene density of states.
|
We investigate the effects of pre-hydrodynamic evolution on final-state
observables in heavy-ion collisions using state-of-the-art event simulations
coupled to different pre-hydrodynamic scenarios, which include the
recently-developed effective kinetic transport theory evolution model KoMPoST.
Flow observables are found to be insensitive to the details of pre-hydrodynamic
evolution. The main effect we observe is in the $p_T$ spectra, particularly the
mean transverse momentum. However, at least part of this effect is a
consequence of the underlying conformal invariance assumption currently present
in such approaches, which is known to be violated in the temperature regime
probed in heavy-ion collisions. This assumption of early time conformal
invariance leads to an artificially large out-of-equilibrium bulk pressure when
switching from (conformal) pre-hydrodynamic evolution to hydrodynamics (using
the non-conformal QCD equation of state), which in turn increases the
transverse momentum. Our study indicates that a consistent treatment of
pre-hydrodynamic evolution in heavy-ion collisions requires the use of
non-conformal models of early time dynamics.
|
Block-based environments are visual programming environments, which are
becoming more and more popular because of their ease of use. The ease of use
comes thanks to their intuitive graphical representation and structural
metaphors (jigsaw-like puzzles) to display valid combinations of language
constructs to the users. Part of the current popularity of block-based
environments is thanks to Scratch. As a result they are often associated with
tools for children or young learners. However, it is unclear how these types of
programming environments are developed and used in general. So we conducted a
systematic literature review on block-based environments by studying 152 papers
published between 2014 and 2020, and a non-systematic tool review of 32
block-based environments. In particular, we provide a helpful inventory of
block-based editors for end-users on different topics and domains. Likewise, we
focused on identifying the main components of block-based environments, how
they are engineered, and how they are used. This survey should be equally
helpful for language engineering researchers and language engineers alike.
|
The present work is a brief review of the progressive search of improper
delta-functions which are of interest in Quantum Mechanics and in the problem
of motion in General Relativity Theory.
|
Speculative decoding (SD) has attracted a significant amount of research
attention due to the substantial speedup it can achieve for LLM inference.
However, despite the high speedups they offer, speculative decoding methods
often achieve optimal performance on high-end devices or with a substantial GPU
memory overhead. Given limited memory and the necessity of quantization, a
high-performing model on a high-end GPU can slow down by up to 7 times. To this
end, we propose Skippy Simultaneous Speculative Decoding (or S3D), a
cost-effective self-speculative SD method based on simultaneous multi-token
decoding and mid-layer skipping. When compared against recent effective
open-source SD systems, our method has achieved one of the top
performance-memory ratios while requiring minimal architecture changes and
training data. Leveraging our memory efficiency, we created a smaller yet more
effective SD model based on Phi-3. It is 1.4 to 2 times faster than the
quantized EAGLE model and operates in half-precision while using less VRAM.
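
For orientation, a toy sketch of the generic draft-then-verify loop that speculative decoding builds on (greedy acceptance variant); S3D's multi-token decoding and mid-layer skipping are not modeled, and the two model stubs are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)
VOCAB = 50

def target_next(tokens):
    """Stand-in for the large model: a deterministic greedy next token."""
    return int(np.sum(tokens) * 2654435761 % VOCAB)

def draft_next(tokens):
    """Stand-in for the cheap draft: agrees with the target ~80% of the time."""
    return target_next(tokens) if rng.random() < 0.8 else int(rng.integers(VOCAB))

def speculative_generate(prompt, n_new, k=4):
    tokens = list(prompt)
    while len(tokens) < len(prompt) + n_new:
        draft = []
        for _ in range(k):                    # draft k tokens cheaply
            draft.append(draft_next(tokens + draft))
        for t in draft:                       # verify (real systems batch this
            if target_next(tokens) == t:      # into one target forward pass)
                tokens.append(t)
            else:
                tokens.append(target_next(tokens))  # replace the first mismatch
                break
    return tokens[:len(prompt) + n_new]

print(speculative_generate([1, 2, 3], n_new=16))
```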
|
For any function $f: X \times Y \to Z$, we prove that $Q^{*\text{cc}}(f)
\cdot Q^{\text{OIP}}(f) \cdot (\log Q^{\text{OIP}}(f) + \log |Z|) \geq
\Omega(\log |X|)$. Here, $Q^{*\text{cc}}(f)$ denotes the bounded-error
communication complexity of $f$ using an entanglement-assisted two-way qubit
channel, and $Q^{\text{OIP}}(f)$ denotes the number of quantum queries needed
to learn $x$ with high probability given oracle access to the function $f_x(y)
\stackrel{\text{def}}{=} f(x, y)$. We show that this tradeoff is close to the
best possible. We also give a generalization of this tradeoff for
distributional query complexity.
As an application, we prove an optimal $\Omega(\log q)$ lower bound on the
$Q^{*\text{cc}}$ complexity of determining whether $x + y$ is a perfect square,
where Alice holds $x \in \mathbf{F}_q$, Bob holds $y \in \mathbf{F}_q$, and
$\mathbf{F}_q$ is a finite field of odd characteristic. As another application,
we give a new, simpler proof that searching an ordered size-$N$ database
requires $\Omega(\log N / \log \log N)$ quantum queries. (It was already known
that $\Theta(\log N)$ queries are required.)
|