In recent decades, microRNAs (miRNAs) have attracted much attention from
researchers at the interface between the life and theoretical sciences for
their involvement in post-transcriptional regulation and related diseases.
Thanks to increasingly sophisticated experimental techniques, the role of
miRNAs as "noise processing units" has been further elucidated, and two main
modes of miRNA noise control have emerged from combined theoretical and
experimental studies. While miRNAs were long thought to buffer gene expression
noise, it has recently been suggested that they can also increase the
cell-to-cell variability of their targets. In this Mini Review, we focus on
the role of miRNAs in noise processing and on the inference of the parameters
defined by the related theoretical models.
|
A recent approach based on Bayesian inverse planning for the "theory of mind"
has shown good performance in modeling human cognition. However, perfect
inverse planning differs from human cognition in one kind of complex task,
owing to humans' bounded rationality. One example is an environment in which
there are many available plans for achieving a specific goal. We propose a
"plan predictability oriented model" as a model of inferring other people's
goals in complex environments. This model adds the bias that people prefer
predictable plans, a bias calculated with simple plan prediction. We tested
this model with a behavioral experiment in which humans observed the partial
paths of goal-directed actions. Our model had a higher correlation with human
inference than perfect inverse planning. We also confirmed the robustness of
our model on complex tasks and determined that it can be improved by taking
into account individual differences in "bounded rationality".
|
Our understanding of physical systems generally depends on our ability to
match complex computational modelling with measured experimental outcomes.
However, simulations with large parameter spaces suffer from inverse problem
instabilities, where similar simulated outputs can map back to very different
sets of input parameters. While of fundamental importance, such instabilities
are seldom resolved due to the intractably large number of simulations required
to comprehensively explore parameter space. Here we show how Bayesian machine
learning can be used to address inverse problem instabilities, and apply it to
two popular experimental diagnostics in plasma physics. We find that the
extraction of information from measurements simply on the basis of agreement
with simulations is unreliable, and leads to a significant underestimation of
uncertainties. We describe how to statistically quantify the effect of unstable
inverse models, and present an approach to experimental design that mitigates
its impact.
|
A practical communication channel often suffers from constraints on the input
other than the average power, such as a peak power constraint. In order to
compare achievable rates with different constellations as well as the channel
capacity under such constraints, it is crucial to take these constraints into
consideration properly. In this paper, we propose a direct approach to compare
the achievable rates of practical input constellations and the capacity under
such constraints. As an example, we study the discrete-time complex-valued
additive white Gaussian noise (AWGN) channel and compare the capacity under the
peak power constraint with the achievable rates of phase shift keying (PSK) and
quadrature amplitude modulation (QAM) input constellations.
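As a hedged illustration of the comparison described above (my sketch, not the paper's method), the achievable rate of an M-PSK constellation over the complex AWGN channel can be estimated by Monte Carlo; PSK has a constant envelope, so it automatically satisfies a peak power constraint. The SNR and constellation size below are arbitrary.

import numpy as np

def psk_rate(M=8, snr_db=10.0, n=200_000, seed=0):
    """Monte Carlo estimate of I(X;Y) in bits per channel use for M-PSK."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    const = np.exp(2j * np.pi * np.arange(M) / M)   # unit peak (= average) power
    sigma2 = 1.0 / snr                              # complex noise variance
    x = const[rng.integers(M, size=n)]
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = x + noise
    # log p(y|x_m) up to a common constant, for every constellation point m
    log_pyx = -np.abs(y[:, None] - const[None, :]) ** 2 / sigma2
    log_py = np.logaddexp.reduce(log_pyx, axis=1) - np.log(M)   # log p(y), same constant
    log_pyx_true = -np.abs(y - x) ** 2 / sigma2
    return np.mean(log_pyx_true - log_py) / np.log(2)

print(f"8-PSK achievable rate at 10 dB SNR: {psk_rate():.3f} bit/channel use")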
|
Recent results applying resurgence theory to finite-temperature field
theories yield a detailed analytic structure determined by topological
excitations. We examine finite-temperature SU(N) lattice gauge theories in
light of these results. Double-trace Polyakov loop deformations move through
different regions of the confined phase characterized by continuous change in
the adjoint Polyakov loop. Lattice models show how the behavior of monopole
constituents of calorons can change in the different confining regions. We
conjecture that the pure SU(N) gauge theory is close to a special symmetric
point where monopole effects give rise to Casimir string-tension scaling.
|
We construct a bicovariant differential calculus on the quantum group
$GL_q(3)$, and discuss its restriction to $[SU(3) \otimes U(1)]_q$. The
$q$-algebra of Lie derivatives is found, as well as the Cartan-Maurer
equations. All the quantities characterizing the non-commutative geometry of
$GL_q(3)$ are given explicitly.
|
The probability amplitude for $N$ particles in a quantum gas with negligible
range of interparticle interaction potentials to come to a small region of size
$r$ scales like $r^\gamma$. It is shown that $\gamma$ is quantitatively related
to the ground state energy of these $N$ fermions in the unitarity limit,
confined by an isotropic harmonic potential. For large $N$, the short range
density distribution of these $N$ particles is predominantly the same as the
Thomas-Fermi profile of the gas in the unitarity limit confined by such a
harmonic potential. These results may shed light on strongly interacting
ultracold atomic Fermi gases, in a trap or on an optical lattice.
|
The separability problem for word languages of a class $\mathcal{C}$ by
languages of a class $\mathcal{S}$ asks, for two given languages $I$ and $E$
from $\mathcal{C}$, whether there exists a language $S$ from $\mathcal{S}$ that
includes $I$ and excludes $E$, that is, $I \subseteq S$ and $S\cap E =
\emptyset$. In this work, we assume some mild closure properties for
$\mathcal{C}$ and study for which such classes separability by a piecewise
testable language (PTL) is decidable. We characterize these classes in terms of
decidability of (two variants of) an unboundedness problem. From this, we
deduce that separability by PTL is decidable for a number of language classes,
such as the context-free languages and languages of labeled vector addition
systems. Furthermore, it follows that separability by PTL is decidable if and
only if one can compute, for any language of the class, its downward closure
with respect to the scattered substring ordering (i.e., if the set of scattered
substrings of any language of the class is effectively regular).
The obtained decidability results contrast with some undecidability results.
In fact, for all (non-regular) language classes that we present as examples
with decidable separability, it is undecidable whether a given language is a
PTL itself.
Our characterization involves a result of independent interest, which states
that for any kind of languages $I$ and $E$, non-separability by PTL is
equivalent to the existence of common patterns in $I$ and $E$.
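The pattern characterization admits a simple brute-force illustration on finite word samples (a sketch of the underlying notion, not of the paper's decision procedures): two sets of words are separable by a piecewise testable language built from subwords of length at most k exactly when no word from one side has the same set of scattered subwords of length at most k as a word from the other side.

from itertools import combinations

def subwords_upto(w, k):
    """All scattered substrings (subsequences) of w of length at most k."""
    return {tuple(w[i] for i in idx)
            for r in range(k + 1)
            for idx in combinations(range(len(w)), r)}

def ptl_separable_at_height(I, E, k):
    """Separability of finite samples by a PTL of 'height' k."""
    types_I = {frozenset(subwords_upto(w, k)) for w in I}
    types_E = {frozenset(subwords_upto(w, k)) for w in E}
    return types_I.isdisjoint(types_E)

I = ["ab", "aab", "aaab"]   # toy sample of the language to include
E = ["ba", "bba"]           # toy sample of the language to exclude
print(ptl_separable_at_height(I, E, k=2))   # True: the subword 'ab' separates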
|
Metrics based on percentile ranks (PRs) for measuring scholarly impact
involve complex treatment because of various defects, such as overvaluing or
devaluing an object caused by percentile ranking schemes, ignoring precise
citation variation among those ranked next to each other, and inconsistency
caused by additional papers or citations. These defects are especially obvious
in small datasets. To avoid the complicated treatment of PR-based metrics, we
propose two new indicators: the citation-based indicator (CBI) and the
combined impact indicator (CII). The document types of publications are taken
into account. With the two indicators, one is no longer bothered by the
complex issues encountered by PR-based indicators, and no special calculation
is needed for a small dataset with fewer than 100 papers. The CBI is based
solely on citation counts, while the CII measures the integrated contributions
of publications and citations. Both virtual and empirical data are used to
compare the effects of the related indicators. The CII and the PR-based
indicator I3 are highly correlated, but the former reflects citation impact
more, while the latter relates more to publications.
|
We use Gaia DR2 data to show that the globular cluster NGC5634 is physically
associated with an arm of the Sagittarius Stream, the huge system of tidal
tails created by the ongoing disruption of the Sagittarius dwarf spheroidal
galaxy (Sgr dSph). Two additional arms of the Stream are also detected along
the same line of sight, at different distances. We show that the Sgr Stream
stars surrounding NGC5634 are more metal-poor, on average, than those found in
the more distant Stream arm lying behind the cluster and in the main body of
Sgr~dSph, confirming that a significant metallicity (and, presumably, age)
gradient is present along the Stream. This analysis demonstrates the potential
of the Gaia DR2 catalogue to directly verify whether a cluster is physically
associated with the Stream, without the need to rely on models of the tidal
disruption of this system. [Withdrawn: see comments]
|
The problem of detecting changes with multiple sensors has received
significant attention in the literature. In many practical applications such as
critical infrastructure monitoring and modeling of disease spread, a useful
change propagation model is one where change eventually happens at all sensors,
but where not all sensors witness change at the same time instant. While prior
work considered the case of known change propagation dynamics, this paper
studies a more general setting of unknown change propagation pattern
(trajectory). A Bayesian formulation of the problem in both centralized and
decentralized settings is studied with the goal of detecting the first time
instant at which any sensor witnesses a change. Using the dynamic programming
(DP) framework, the optimal solution structure is derived and in the rare
change regime, several more practical change detection algorithms are proposed.
Under certain conditions, the first-order asymptotic optimality of a proposed
algorithm called multichart test is shown as the false alarm probability
vanishes. To further reduce the computational complexity, change detection
algorithms are proposed based on online estimation of the unknown change
propagation pattern. Numerical studies illustrate that the proposed detection
techniques offer near-optimal performance. Further, in the decentralized
setting, it is shown that if an event-triggered sampling scheme called
level-crossing sampling with hysteresis (LCSH) is used for sampling and
transmission of local statistics, the detection performance can be
significantly improved using the same amount of communication resources
compared to the conventional uniform-in-time sampling (US) scheme.
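For intuition, a minimal sketch of a multichart-style test follows (with a Gaussian pre-/post-change model and a threshold chosen arbitrarily; this is not the Bayesian statistic derived in the paper): one CUSUM chart is run per sensor, and an alarm is raised as soon as any chart crosses the threshold, i.e. at the first indication that some sensor has witnessed the change.

import numpy as np

def multichart_cusum(obs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """obs: array of shape (T, n_sensors). Returns the alarm time, or None."""
    W = np.zeros(obs.shape[1])                 # one CUSUM statistic per sensor
    for t, x in enumerate(obs):
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2   # Gaussian log-LR
        W = np.maximum(0.0, W + llr)
        if W.max() >= threshold:               # any chart crossing => alarm
            return t
    return None

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 3))
data[120:, 0] += 1.0    # the change propagates: sensor 0 at t=120,
data[140:, 1] += 1.0    # sensor 1 at t=140,
data[160:, 2] += 1.0    # sensor 2 at t=160
print("alarm raised at t =", multichart_cusum(data))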
|
Mobile video traffic is dominant in cellular and enterprise wireless
networks. With the advent of diverse applications, network administrators face
the challenge of providing high QoE in the face of diverse wireless conditions
and application contents. Yet, state-of-the-art networks lack analytics for
QoE, as this requires support from the application or user feedback. While
there are existing techniques that map QoS to QoE by training machine learning
models without requiring user feedback, these techniques are limited to only a
few applications, due to insufficient QoE ground-truth annotation for ML. To
address these limitations, we focus on video telephony applications and model
key artefacts of spatial and temporal video QoE. Our key contribution is
designing content- and device-independent metrics and training across diverse
WiFi conditions. We show that our metrics achieve a median 90% accuracy by
comparing with mean-opinion-score from more than 200 users and 800 video
samples over three popular video telephony applications -- Skype, FaceTime and
Google Hangouts. We further extend our metrics by using deep neural networks,
more specifically we use a combined CNN and LSTM model. We achieve a median
accuracy of 95% by combining our QoE metrics with the deep learning model,
a 38% improvement over well-known state-of-the-art techniques.
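A minimal PyTorch sketch of this kind of combined model is shown below (layer sizes and the regression head are illustrative, not the paper's architecture): a small CNN extracts per-frame features, and an LSTM aggregates them over time into a single QoE score.

import torch
import torch.nn as nn

class QoECnnLstm(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                # regress a scalar QoE score

    def forward(self, video):                           # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                    # score from last time step

model = QoECnnLstm()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)        # torch.Size([2, 1])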
|
In this paper we study the existence of solutions for a new class of
nonlinear differential equations with three-point boundary conditions. The
existence of solutions is obtained by using the Leray-Schauder degree.
|
Weak gravitational lensing is one of the most important probes of the nature
of dark matter and dark energy. In order to extract cosmological information
from next-generation weak lensing surveys (e.g., Euclid, Roman, LSST, and CSST)
as much as possible, accurate measurements of weak lensing shear are required.
There are existing algorithms to measure the weak lensing shear on imaging
data, which have been successfully applied in previous surveys. In the
meantime, machine learning (ML) has become widely used in various
astrophysical applications, in both modeling and observations. In this work, we
present a fully deep-learning-based approach to measuring weak lensing shear
accurately. Our approach comprises two modules. The first one contains a
convolutional neural network (CNN) with two branches that take the galaxy image
and the point spread function (PSF) simultaneously, and the output of this module
includes the galaxy's magnitude, size, and shape. The second module includes a
multiple-layer neural network (NN) to calibrate weak-lensing shear
measurements. We name the program Forklens and make it publicly available
online. Applying Forklens to CSST-like mock images, we achieve consistent
accuracy with traditional approaches (such as moment-based measurement and
forward model fitting) on the sources with high signal-to-noise ratios (S/N, >
20). For the sources with S/N < 10, Forklens exhibits an $\sim 36\%$ higher
Pearson coefficient on galaxy ellipticity measurements. After adopting galaxy
weighting, the shear measurements with Forklens reach an accuracy level of
$0.2\%$. The whole procedure of Forklens is automated and costs about $0.7$
milliseconds per galaxy, making it well suited to exploiting the sky coverage
and depth of the upcoming weak lensing surveys.
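The two-branch idea of the first module can be sketched as follows (Forklens itself is publicly available; this toy PyTorch version only illustrates the structure, and all layer sizes are placeholders): one branch ingests the galaxy image, the other the PSF image, and their fused features predict magnitude, size, and the two ellipticity components.

import torch
import torch.nn as nn

def branch():                                    # same architecture per input
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 32-dim feature vector

class TwoBranchShapeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.gal_branch, self.psf_branch = branch(), branch()
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 4))                    # magnitude, size, e1, e2

    def forward(self, gal, psf):
        z = torch.cat([self.gal_branch(gal), self.psf_branch(psf)], dim=1)
        return self.head(z)

net = TwoBranchShapeNet()
print(net(torch.randn(5, 1, 48, 48), torch.randn(5, 1, 48, 48)).shape)  # (5, 4)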
|
The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than
one hidden layer has multiple local minima, corresponding to matrices with
different ranks. In tasks such as matrix completion, the goal is to converge to
the local minimum with the smallest rank that still fits the training data.
While rank-underestimating minima can be avoided since they do not fit the
data, GD might get stuck at rank-overestimating minima. We show that with SGD,
there is always a non-zero probability of jumping from a higher-rank minimum to
a lower-rank one, but the probability of jumping back is zero. More precisely,
we define a sequence of sets $B_{1}\subset B_{2}\subset\cdots\subset B_{R}$ so
that $B_{r}$ contains all minima of rank $r$ or less (and no more) that are
absorbing for small enough ridge parameters $\lambda$ and learning rates
$\eta$: SGD has probability 0 of leaving $B_{r}$, and from any starting point
there is a non-zero probability for SGD to enter $B_{r}$.
|
Within the landscape of modified theories of gravity, progress in
understanding the behaviour of, and developing tests for, screening mechanisms
has been hindered by the complexity of the field equations involved, which are
nonlinear in nature and characterised by a large hierarchy of scales. This is
especially true of Vainshtein screening, where the fifth force is suppressed by
high-order derivative terms which dominate within a radius much larger than the
size of the source, known as the Vainshtein radius.
In this work, we present the numerical code $\varphi$enics, building on the
FEniCS library, to solve the full equations of motion from two theories of
interest for screening: a model containing high-order derivative operators in
the equation of motion and one characterised by nonlinear self-interactions in
two coupled scalar fields. We also include functionalities that allow the
computation of higher-order operators of the scalar fields in post-processing,
enabling us to check that the profiles we find are consistent solutions within
the effective field theory. These two examples illustrate the different
challenges experienced when trying to simulate such theories numerically, and
we show how these are addressed within this code. The examples in this paper
assume spherical symmetry, but the techniques may be straightforwardly
generalised to asymmetric configurations. This article therefore also provides
a worked example of how the finite element method can be employed to solve the
screened equations of motion. $\varphi$enics is publicly available and can be
adapted to solve other theories of screening.
|
We describe the action of the shifted Yangian of sl_2 on the cohomology
groups of the Quot schemes of 0-dimensional quotients on a smooth projective
curve. We introduce a commuting family of r operators in the positive half of
the Yangian, whose action yields a natural basis of the Quot cohomology. These
commuting operators further lead to formulas for the operators of
multiplication by the Segre classes of the universal bundle.
|
There has been a lot of research interest in modified gravity theories which
utilise the Vainshtein mechanism to recover standard general relativity in
regions with high matter density, such as the Dvali-Gabadadze-Porrati and
Galileon models. The strong nonlinearity in the field equations of these
theories implies that accurate theoretical predictions could only be made using
high-resolution cosmological simulations. Previously, such simulations were
usually done on regular meshes, which limits both their performance and their
accuracy. In this paper, we report the development of a new algorithm and code,
based on ECOSMOG, that uses adaptive mesh refinement to improve the efficiency
and precision of simulations of models with the Vainshtein mechanism. We have
performed various tests of the code's numerical reliability and found
consistency with previous simulations. We also studied the velocity field in the
self-accelerating branch of the DGP model. The code, parallelised using MPI, is
suitable for large cosmological simulations of Galileon-type modified gravity
theories.
|
Limit theorems for non-additive probabilities or non-linear expectations are
challenging issues which have attracted growing interest recently. The purpose
of this paper is to study the strong law of large numbers and the law of the
iterated logarithm for a sequence of random variables in a sub-linear
expectation space under a concept of extended independence which is much weaker
and easier to verify than the independence proposed by Peng (2008b). We
introduce a concept of extended negative dependence which is an extension of
this kind of weak independence and of the extended negative independence
relative to classical probability that has appeared in the recent literature.
Powerful tools such as the moment inequality and Kolmogorov's exponential
inequality are established for such extended negatively dependent random
variables, and they substantially improve those of Chen, Chen and Ng (2010).
The strong law of large numbers and the law of the iterated logarithm are then
obtained by applying these inequalities.
|
To rationalize the relatively high investment that industrial automation
systems entail, research in the field of intelligent machines should target
high value functions such as fettling, die-finishing, deburring, and
fixtureless manufacturing. For achieving this goal, past work has concentrated
on force control algorithms at the system level with limited focus on
performance expansion at the actuator level. We present a comprehensive
literature review on robot force control, including algorithms, specialized
actuators, and robot control software. A robot force control testbed was
developed using Schunk's PowerCube 6-DOF Arm and a six-axis ATI force/torque
sensor. Using parameter identification experiments, manipulator module inertias
and the motor torque constant were estimated. Experiments were conducted to
study the practical issues involved in implementing stable contact transitions
and programmable endpoint impedance. Applications to human augmentation,
virtual fixtures, and teleoperation are discussed. These experiments are used
as a vehicle to understand the performance improvement achievable at the
actuator level. The approach at UTRRG has been to maximize the choices within
the actuator to enhance its intelligence. Drawing on this 20-year research
history in electromechanical actuator architecture, we propose a new concept
that mixes two inputs, distinct in their velocity ratios, within the same dual
actuator called a Force/Motion Actuator (FMA). Detailed kinematic and dynamic
models of this dual actuator are developed. The actuator performance is
evaluated using simulations with an output velocity specification and resolving
input trajectories using a minimum-norm solution. It is shown that a design
choice of 14:1 motion scaling between the two inputs results in good
sensitivity to output force disturbances without compromising velocity tracking
performance.
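The minimum-norm resolution step can be illustrated with a toy model (my sketch; the paper's kinematic model is more elaborate, and the 1x2 Jacobian below is only an assumed stand-in for two inputs mixed with a 14:1 velocity ratio): the Moore-Penrose pseudoinverse selects the input rates of smallest Euclidean norm that realize a desired output velocity.

import numpy as np

ratio = 14.0
J = np.array([[1.0, 1.0 / ratio]])      # assumed mixing: v_out = q1_dot + q2_dot/14

def min_norm_inputs(v_out):
    """Minimum-norm input velocities realizing the desired output velocity."""
    return np.linalg.pinv(J) @ np.atleast_1d(v_out)

q_dot = min_norm_inputs(0.5)
print("input rates:", q_dot, " reconstructed output:", float(J @ q_dot))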
|
Let $p$ be a prime number with $p>3$, $p\equiv 3\pmod{4}$ and let $n$ be a
positive integer. In this paper, we prove that the Diophantine equation
$(5pn^{2}-1)^{x}+(p(p-5)n^{2}+1)^{y}=(pn)^{z}$ has only the positive integer
solution $(x,y,z)=(1,1,2)$ where $pn \equiv \pm1 \pmod 5$. As another
result, we show that the Diophantine equation
$(35n^{2}-1)^{x}+(14n^{2}+1)^{y}=(7n)^{z}$ has only the positive integer
solution $(x,y,z)=(1,1,2)$ where $n\equiv \pm 3 \pmod{5}$ or $5\mid n$. In the
proofs, we use properties of the Jacobi symbol and Baker's method.
|
Let $\Lambda$ be the path algebra of a finite quiver $Q$ over a
finite-dimensional algebra $A$. Then $\Lambda$-modules are identified with
representations of $Q$ over $A$. This yields the notion of monic
representations of $Q$ over $A$. If $Q$ is acyclic, then the
Gorenstein-projective $\Lambda$-modules can be explicitly determined via the
monic representations. As an application, $A$ is self-injective if and only if
the Gorenstein-projective $\Lambda$-modules are exactly the monic
representations of $Q$ over $A$.
|
Bohr's complementarity and Schr\"odinger's entanglement are two prominent
physical characters of quantum systems. In this letter, we formally connect
them. It is known that complementarity relations for wave-particle duality are
saturated only for pure, single-quanton, quantum states. For mixed states, the
wave-particle quantifiers never saturate a complementarity relation and can
even reach zero for a maximally mixed state. To fully characterize a quanton,
it is not enough to consider its wave-particle aspect; we must also take into
account its quantum correlations with other systems. Here we prove that for any
complete complementarity relation involving predictability and visibility
measures that satisfy the criteria established in the literature, these
corresponding quantum correlations are entanglement monotones.
|
The motion of a projectile with horizontal initial velocity $V_0$, moving under
the action of the gravitational field and a drag force, is studied
analytically. As is well known, the projectile reaches a terminal velocity
$V_{\mathrm{term}}$. There is a curious result concerning the minimum speed
$V_{\mathrm{min}}$: it turns out that the minimum speed is lower than the
terminal one if $V_0 > V_{\mathrm{term}}$, and is lower than the initial one if
$V_0 < V_{\mathrm{term}}$. These results show that the speed is not a monotonic
function of time. If the initial velocity is not horizontal, there is a range
of launch angles within which the speed shows the same behavior mentioned
previously; outside that range, the speed is a monotonic function. These latter
results come from numerical simulations.
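The non-monotonic behavior is easy to reproduce numerically; the sketch below assumes, for illustration, a drag force proportional to the square of the speed (the paper's drag law may differ) and verifies both stated inequalities for a horizontal launch.

import numpy as np

g, k = 9.81, 0.1                       # gravity, quadratic drag coefficient
v_term = np.sqrt(g / k)                # terminal speed for this drag law

def speed_history(v0, angle_deg=0.0, dt=1e-3, t_max=20.0):
    vx = v0 * np.cos(np.radians(angle_deg))
    vy = v0 * np.sin(np.radians(angle_deg))
    speeds = []
    for _ in range(int(t_max / dt)):
        v = np.hypot(vx, vy)
        speeds.append(v)
        vx += -k * v * vx * dt         # drag always opposes the velocity
        vy += (-g - k * v * vy) * dt
    return np.array(speeds)

for v0 in (1.5 * v_term, 0.5 * v_term):
    s = speed_history(v0)
    print(f"V0 = {v0:5.2f}, Vterm = {v_term:.2f}, Vmin = {s.min():.2f}")
# Expected: Vmin < Vterm when V0 > Vterm, and Vmin < V0 when V0 < Vterm.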
|
We investigate the properties of the deconfinement transition in SU(4) and
SU(6) gauge theories. We find that it is a `normal' first order transition in
both cases, from which we conclude that the transition is first order in the
N->infinity limit. Comparing our preliminary estimates of the continuum values
of Tc/sqrt(K) with existing values for SU(2) and SU(3) demonstrates a weak
dependence on N for all values of N.
|
For zero-balanced Gaussian hypergeometric functions $F(a,b;a+b;x)$, $a,b>0$,
we determine maximal regions of the $(a,b)$ plane in which the well-known
Landen identities for the complete elliptic integral of the first kind turn
into corresponding inequalities valid for each $x\in (0,1)$. Thereby an
exhaustive answer is given to an open problem.
|
Dependence on the graviton gauge enters the conventional effective field
equations because they fail to account for quantum gravitational correlations
with the source which excites the effective field and with the observer who
measures it. Including these correlations has been shown to eliminate gauge
dependence in flat space background. We generalize the technique to de Sitter
background for the case of the 1-loop graviton corrections to the exchange
potential of a massless, minimally coupled scalar.
|
The limited connectivity of current and next-generation quantum annealers
motivates the need for efficient graph-minor embedding methods. These methods
allow non-native problems to be adapted to the target annealer's architecture.
The overhead of the widely used heuristic techniques is quickly proving to be a
significant bottleneck for solving real-world applications. To alleviate this
difficulty, we propose a systematic and deterministic embedding method,
exploiting the structures of both the input graph of the specific problem and
the quantum annealer. We focus on the specific case of the Cartesian product of
two complete graphs, a regular structure that occurs in many problems. We
divide the embedding problem by first embedding one of the factors of the
Cartesian product in a repeatable pattern. The resulting simplified problem
consists of placing these copies and connecting them together to reach a
valid solution. Aside from the obvious advantage of a systematic and
deterministic approach with respect to speed and efficiency, the embeddings
produced are easily scaled for larger processors and show desirable properties
for the number of qubits used and the chain length distribution. To conclude,
we briefly address the problem of circumventing inoperable qubits by presenting
possible extensions of the method.
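For concreteness, the input structure considered here, the Cartesian product of two complete graphs (a rook's graph), is easy to generate, e.g. with networkx; the embedding algorithm itself depends on the annealer's hardware graph and is not reproduced in this sketch.

import networkx as nx

n, m = 4, 3
G = nx.cartesian_product(nx.complete_graph(n), nx.complete_graph(m))
# Vertices are pairs (i, j); two vertices are adjacent iff they agree in one
# coordinate and differ in the other (every "row" is a K_m, every "column" a K_n).
print(G.number_of_nodes(), G.number_of_edges())   # 12 nodes, 4*C(3,2) + 3*C(4,2) = 30 edges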
|
The caloric curve for mononuclear configurations is studied with a schematic
model. We investigate the dependence of the entropy on the density and
effective mass profiles. A plateau in the caloric curve is a direct result of
decreasing density and the destruction of correlations with increasing
excitation. The mononuclear regime is metastable with respect to binary fission
at low excitation energy and unstable with respect to multifragmentation at
high excitation. The statistical framework presented here is suitable to treat
scenarios where experimental conditions are set to favor population of highly
excited mononuclear states.
|
Recent work on the complete wetting transition for three dimensional systems
with short-ranged forces has emphasized the role played by the coupling of
order-parameter fluctuations near the wall and depinning interface. It has been
proposed that an effective two-field Hamiltonian, which predicts a
renormalisation of the wetting parameter, could explain the controversy between
RG analysis of the capillary-wave model and Monte Carlo simulations on the
Ising model. In this letter, results of extensive Monte Carlo simulations of
the two-field model are presented. The results are in agreement with the
prediction of a renormalized wetting parameter $\omega$.
|
One of the major and unfortunately unforeseen sources of background for the
current generation of X-ray telescopes is the population of few tens to
hundreds of keV (soft) protons concentrated by the mirrors. One such telescope
is the European Space Agency's (ESA) X-ray Multi-Mirror Mission (XMM-Newton).
Its observing time lost
due to background contamination is about 40\%. This loss of observing time
affects all the major broad science goals of this observatory, ranging from
cosmology to astrophysics of neutron stars and black holes. The soft proton
background could dramatically impact future large X-ray missions such as the
ESA planned Athena mission (http://www.the-athena-x-ray-observatory.eu/).
Physical processes that trigger this background are still poorly understood. We
use a Machine Learning (ML) approach to delineate related important parameters
and to develop a model to predict the background contamination using 12 years
of XMM observations. As predictors we use the location of the satellite and
solar and geomagnetic activity parameters. We find that the contamination is
most strongly related to the distance in the southern direction, $Z$ (the XMM
observations used were in the southern hemisphere), the solar wind radial
velocity, and the location on the magnetospheric magnetic field lines. We
derived simple
empirical models for the first two individual predictors and an ML model which
utilizes an ensemble of the predictors (Extra Trees Regressor) and gives better
performance. Based on our analysis, future missions should minimize
observations during times associated with high solar wind speed and avoid
closed magnetic field lines, especially at the dusk flank region in the
southern hemisphere.
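The regression step can be sketched as follows (synthetic data only; the feature columns are placeholders standing in for the satellite-location and solar-wind predictors, not the authors' exact feature set):

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(-20, 20, n),      # stand-in for the satellite's Z coordinate
    rng.uniform(300, 700, n),     # stand-in for solar wind radial velocity (km/s)
    rng.uniform(0, 1, n),         # stand-in for a magnetic field-line indicator
])
y = 0.05 * np.abs(X[:, 0]) + 0.002 * X[:, 1] + rng.normal(0, 0.1, n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
print("feature importances:", model.feature_importances_.round(3))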
|
This paper explores how reliable broadcast can be implemented when facing a
dual adversary that can both corrupt processes and remove messages. More
precisely, we consider an asynchronous $n$-process message-passing system in
which up to $t_b$ processes are Byzantine and where, at the network level, for
each message broadcast by a correct process, an adversary can prevent up to
$t_m$ processes from receiving it (the integer $t_m$ defines the power of the
message adversary). So, differently from previous works, this work considers
that not only can computing entities be faulty (Byzantine processes), but also
that the network can lose messages. To this end, the paper first introduces a
new basic communication abstraction denoted $k\ell$-cast, and studies its
properties in this new bi-dimensional adversary context. Then, the paper
deconstructs existing Byzantine-tolerant asynchronous broadcast algorithms and,
with the help of the $k\ell$-cast communication abstraction, reconstructs
versions of them that tolerate both Byzantine processes and message
adversaries. Interestingly, these reconstructed algorithms are also more
efficient than the Byzantine-tolerant-only algorithms from which they
originate. The paper also shows that the condition $n>3t_b+2t_m$ is necessary
and sufficient (with signatures) to design such reliable broadcast algorithms.
|
We propose a different approach to defining convex functions in the
sub-Riemannian setting. A function on a sub-Riemannian manifold is
nonholonomically geodesically convex if its restriction to any nonholonomic
(straightest) geodesic is convex. In the case of Carnot groups, this definition
coincides with that of Danielli-Garofalo-Nhieu (equivalent to that of
Lu-Manfredi-Stroffolini). Nonholonomic geodesics are defined using the
horizontal connection. A new distance corresponding to the horizontal
connection is introduced and proven, near regular points, to be equivalent to
the Carnot-Carath\'{e}odory distance. Some basic properties of convex functions
are studied. In particular, we prove that any nonholonomically geodesically
convex function that is locally bounded from above is locally Lipschitz with
respect to the Carnot-Carath\'{e}odory distance.
|
We hereby report the discovery of ATLAS17jrp as an extraordinary TDE in the
star-forming galaxy SDSSJ162034.99+240726.5, found in our recent sample of
mid-infrared outbursts in nearby galaxies. Its optical/UV light curves rise to
a peak luminosity $\sim1.06\times10^{44}\rm\,erg\,s^{-1}$ in about a month and
then decay as $\rm t^{-5/3}$ with a roughly constant temperature around
19000~K, and the optical spectra show a blue continuum and very broad Balmer
lines with FWHM$\sim$15000 km/s which gradually narrowed to 1400 km/s within 4
years, all agreeing well with other optical TDEs. A delayed and rapidly rising
X-ray flare with a peak luminosity $\rm \sim 1.27\times10^{43}\,erg\,s^{-1}$
was detected at $\rm \sim$ 170 days after the optical peak. The high MIR
luminosity of ATLAS17jrp ($\sim2\times10^{43} \rm\,erg\,s^{-1}$) reveals a
distinctive dusty environment, with a covering factor as high as $\sim0.2$,
which is comparable with that of the torus in active galactic nuclei but at
least one order of magnitude higher than in normal optical TDEs. Therefore,
ATLAS17jrp turns out to be one of the rare unambiguous TDEs found in
star-forming galaxies, and its high dust covering factor implies that dust
extinction could play an important role in the absence of optical TDEs in
star-forming galaxies.
|
We consider a Generalized Uncertainty Principle (GUP) framework which
predicts a maximal uncertainty in momentum and minimal uncertainties both in
position and momentum. We apply the supersymmetric quantum mechanics method and
the shape invariance condition to obtain the exact harmonic oscillator eigenvalues
in this GUP context. We find the supersymmetric partner Hamiltonians and show
that the harmonic oscillator belongs to a hierarchy of Hamiltonians with a
shift in momentum representation and different masses and frequencies. We also
study the effect of a uniform electric field on the harmonic oscillator energy
spectrum in this setup.
|
We propose angle-resolved photoelectron spectroscopy of aerosol particles as
an alternative way to determine the electron mean free path of low energy
electrons in solid and liquid materials. The mean free path is obtained from
fits of simulated photoemission images to experimental ones over a broad range
of different aerosol particle sizes. The principal advantage of the aerosol
approach is twofold. Firstly, aerosol photoemission studies can be performed
for many different materials, including liquids. Secondly, the size-dependent
anisotropy of the photoelectrons can be exploited in addition to size-dependent
changes in their kinetic energy. These finite size effects depend in different
ways on the mean free path and thus provide more information on the mean free
path than corresponding liquid jet, thin film, or bulk data. The present
contribution is a proof of principle employing a simple model for the
photoemission of electrons and preliminary experimental data for potassium
chloride aerosol particles.
|
This paper has a practical aim. For a long time, implementations of
pseudorandom number generators in the standard libraries of programming
languages were of poor quality. The situation started to improve only
recently. Up to now, a
large number of libraries and weakly supported mathematical packages use
outdated algorithms for random number generation. Four modern sets of
statistical tests that can be used for verifying random number generators are
described. It is proposed to use command line utilities, which makes it
possible to avoid low-level programming in such languages as C or C++. Only
free open source systems are considered.
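One way to follow the command-line advice (a sketch under my own assumptions, not the paper's scripts): stream raw generator output to stdout and pipe it into a test battery. The dieharder stdin flag shown in the comment is correct to the best of my knowledge, but consult your local manual page; PractRand and TestU01 front ends are driven similarly.

# Usage (hypothetical):  python gen.py | dieharder -a -g 200
# where -g 200 tells dieharder to read raw bits from stdin.
import os
import sys

def main():
    # Replace os.urandom with the generator under test; it is only a stand-in.
    while True:
        sys.stdout.buffer.write(os.urandom(65536))

if __name__ == "__main__":
    main()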
|
Part IA: We present numerical measurements of relativistic scaling relations
in $(2+1)$-dimensional conformal fluid turbulence, which compare favourably
with their non-relativistic versions. As seen with incompressible turbulence in
past studies, we find that the energy spectrum exhibits $k^{-2}$ scaling rather
than the Kolmogorov/Kraichnan expectation of $k^{-5/3}$.
Part IB: We compute the fractal dimension $D$ of a turbulent anti-deSitter
black brane reconstructed from boundary fluid data using the fluid-gravity
duality. Our value of $D=2.584(1)$ is consistent with the upper bound $D\leq
3$, resolving a recent claim that $D=3+1/3$. We describe how to covariantly
define the fractal dimension of spatial sections of the horizon, and we
speculate on assigning a `bootstrapped' value to the entire horizon.
Part II: We report progress implementing a fluid code with post-Newtonian
(PN) gravity in spherical symmetry. The PN formalism couples a fluid, its
self-gravity, and a black hole via elliptic equations. This eliminates
radiative modes, allowing larger time steps, which is helpful for studying
systems with very long time scales, e.g. tidal disruption events.
Part III: Asteroseismology of rotating core-collapse supernovae is possible
with a multimessenger strategy. We show an $l=2$, $m=0$, $n\gtrsim 2$,
$f\lesssim 280$ Hz mode of the core is responsible for emission in
gravitational waves and neutrinos. The angular harmonics of the neutrino
emission is consistent with the mode energy around the neutrinospheres, where
$r\sim 70$ km. Thus, neutrinos carry information about the mode in the outer
region of the core, whereas gravitational waves probe the deep inner core $r
\lesssim 30$ km.
|
We present Superbox, a particle-mesh code with high resolution sub-grids and
an NGP (nearest grid point) force-calculation scheme based on the second
derivatives of the potential. Superbox implements a fast low-storage
FFT algorithm, making it possible to work with millions of particles on
desktop computers. Test calculations show energy and angular momentum
conservation to one part in 10^5 per crossing-time. The effects of grid and
numerical relaxation remain negligible, even when these calculations cover a
Hubble-time of evolution. As the sub-grids follow the trajectories of
individual galaxies, the code allows a highly resolved treatment of
interactions in clusters of galaxies, such as high-velocity encounters between
elliptical galaxies and the tidal disruption of dwarf galaxies. Excellent
agreement is obtained in a comparison with a direct-summation N-body code
running on special-purpose Grape3 hardware. The orbital decay of satellite
galaxies due to dynamical friction obtained with Superbox agrees with
Chandrasekhar's treatment when the Coulomb logarithm is approximately 1.5.
|
We present a chemical abundance study of the brightest confirmed member star
of the ultrafaint dwarf galaxy Bootes II from Keck/HIRES high-resolution
spectroscopy at moderate signal-to-noise ratios. At [Fe/H] = -2.93 +/- 0.03
(stat.) +/- 0.17 (sys.) this star chemically resembles metal-poor halo field
stars and the signatures of other faint dwarf spheroidal galaxies at the same
metallicities in that it shows enhanced [alpha/Fe] ratios, Solar Fe-peak
element abundances, and low upper limits on the neutron-capture element Ba.
Moreover, this star shows no chemical peculiarities in any of the eight
elements we were able to measure. This implies that the chemical outliers found
in other systems remain outliers pertaining to the unusual enrichment histories
of the respective environments, while Bootes II appears to have experienced an
enrichment history typical of its very low mass. We also re-calibrated previous
measurements of the galaxy's metallicity from the calcium triplet (CaT) and
find a much lower value than reported before. The resulting broad metallicity
spread, in excess of one dex, the very metal poor mean, and the chemical
abundance patterns of the present star imply that Bootes II is a low-mass, old,
metal poor dwarf galaxy and not an overdensity associated with the Sagittarius
Stream as has been previously suggested based on its sky position and
kinematics. The low, mean CaT metallicity of -2.7 dex falls right on the
luminosity-metallicity relation delineated over four orders of magnitude from
the more luminous to the faintest galaxies. Thus Bootes II's chemical
enrichment appears representative of the galaxy's original mass, while tidal
stripping and other mass loss mechanisms were probably not significant, unlike
for other low-mass satellites.
|
In this paper, we consider the one-sided shift space on finitely many symbols
and extend the theory of what is known as rough analysis. We define difference
operators on an increasing sequence of subsets of the shift space that would
eventually render the Laplacian on the space of real-valued continuous
functions on the shift space. We then define the Green's function and the
Green's operator, which come in handy for solving the analogue of the Dirichlet
boundary value problem on the shift space.
|
The transition between gapped (semiconducting) and gapless (metallic) phases
and tunability of bandgap in materials is a very lucrative yet considerably
challenging goal for new-age device preparation. For bulk materials and for
two-dimensional layered systems, this is a rapidly expanding field. We
theoretically propose a one-dimensional pure carbon material with a tunable
bandgap. We find that two parallel coupled polyyne chains show metallic
behaviour with bands crossing at the Fermi level, unlike the single
semiconducting chain. The number of nodal points (two) is robust under
transverse and longitudinal strain, indicating the symmetry-protected nature of
the metallic phase. Sliding one chain with respect to the other breaks
reflection symmetry and a clear bandgap opens up at the nodes, leading to a
gapped phase. By varying the slide parameter, the bandgap can be tuned
efficiently. This work initiates the study of possible topological phases of
real one-dimensional materials without the involvement of edge modes.
|
Polar codes are the first capacity-achieving and efficiently implementable
codes for classical communication. Recently they have also been generalized to
communication over classical-quantum and quantum channels. In this work we
present our recent results for polar coding in quantum information theory,
including applications to classical-quantum multiple access channels,
interference channels and compound communication settings, including the first
proof of channel coding achieving the Han-Kobayashi rate region of the
interference channel without the need of a simultaneous decoder. Moreover we
add to the existing framework by extending polar codes to achieve the
asymmetric capacity and improving the block error probability for
classical-quantum channels. In addition we use polar codes to prove a new
achievable rate region for the classical-quantum broadcast channel. We also
discuss polar codes for quantum communication over quantum channels and state
results towards codes for compound quantum channels in this setting. We
conclude by stating a list of interesting open questions to invite further
research on the topic.
|
It was observed experimentally that after crossing a waveguide filled with a
neutral gas, a short powerful microwave pulse leaves a periodic glow of plasma
along the waveguide, persisting several tens of nanoseconds. A theoretical
model is presented which in combination with numerical simulations proposes a
possible explanation of this phenomenon.
|
We study the effect of a control beam on a Lambda electromagnetically induced
transparency (EIT) system in 87Rb. The control beam couples one ground state to
another excited state forming a four level N-system. Phase coherent beams to
drive the N-system are produced using a double injection scheme. We show that
the control beam can be used to Stark shift or split the EIT resonance.
Finally, we show that when the control beam is on resonance, one observes a
Doppler-free and sub-natural absorptive resonance with a width of order 100
kHz. Crucially, this narrow absorptive resonance only occurs when atoms with a
range of velocities are present, as is the case in a room-temperature vapour.
|
I consider the possibility that Ultra High Energy Cosmic Rays are accelerated
in Gamma Ray Bursts located in the Galactic corona, thus circumventing the
problem raised by the Greisen-Zatsepin-Kuz'min cutoff. The acceleration of UHECRs
could occur in the pulsars which, in the coronal GRB model, produce them: the
same parameters that permit fitting GRBs' observations in the model of
Podsiadlowski, Rees and Ruderman (1995) lead to an estimate of the highest
achievable energies corresponding to that of the Bird et al (1994) event, and
to very low luminosities in cosmic rays. I show that, if the observations of
Milgrom and Usov (1995a) are confirmed, the extragalactic GRBs' model for the
acceleration of UHECRs is untenable, but the same constraint does not apply to
the coronal model. Also, I show that the efficiency of particle acceleration
need be much smaller (and thus less demanding) than in cosmological models of GRBs.
Uncertainties remain about the ensuing cosmic ray spectral distribution. I also
briefly discuss observational strategies to distinguish between the two
possibilities.
|
We carefully re-examine the conditions of validity for the consistent
derivation of the Lifshitz-Matsubara sum formula for the Casimir pressure
between metallic plane mirrors. We recover the usual expression for the lossy
Drude model, but not for the lossless plasma model. We give an interpretation
of this new result in terms of the modes associated with the Foucault currents
which play a role in the limit of vanishing losses, in contrast to common
expectations.
|
We study the question of extracting a sequence of functions
$\{\boldsymbol{f}_i, \boldsymbol{g}_i\}_{i=1}^s$ from observing only the sum of
their convolutions, i.e., from $\boldsymbol{y} = \sum_{i=1}^s
\boldsymbol{f}_i\ast \boldsymbol{g}_i$. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
|
The lattice thermal expansion and conductivity in bulk Mo and W-based
transition metal dichalcogenides are investigated by means of density
functional and Boltzmann transport theory calculations. To this end, a recent
van der Waals density functional (vdW-DF-CX) is employed, which is shown to
yield excellent agreement with reference data for the structural parameters.
The calculated in-plane thermal conductivity compares well with experimental
room temperature values, when phonon-phonon and isotopic scattering are
included. To explain the behavior over the entire available temperature range
one must, however, include additional (temperature independent) scattering
mechanisms that limit the mean free path. Generally, the primary heat carrying
modes have mean free paths of $1\,\mu\text{m}$ or more, which makes these
materials very susceptible to structural defects. The conductivity of Mo and
W-based TMDs is primarily determined by the chalcogenide species and increases
in the order Te-Se-S. While for the tellurides and selenides the transition
metal element has a negligible effect, the conductivity of WS$_2$ is notably
higher than for MoS$_2$, which can be traced to the much larger phonon band gap
of the former. Overall the present study provides a consistent set of thermal
conductivities that reveal chemical trends and constitute the basis for future
investigations of van der Waals solids.
|
We examine the possibility of evolution with redshift in the mean rest-frame
ultraviolet (UV; <4500A) spectrum of Type Ia Supernovae (SNe Ia) sampling the
redshift range 0<z<1.3. We find new evidence for a decrease with redshift in
the strength of intermediate-mass element (IME) features, particularly Si II
and to a lesser extent Ca II "H&K" and Mg II blends, indicating lower IME
abundances in the higher redshift SNe. A larger fraction of luminous, wider
light-curve width (higher "stretch") SNe Ia are expected at higher redshift
than locally, so we compare our observed spectral evolution with that predicted
by a redshift-evolving stretch distribution (Howell et al. 2007) coupled with a
stretch-dependent SN Ia spectrum. We show that the sense of the spectral
evolution can be reproduced by this simple model, though the highest redshift
events seem additionally deficient in Si and Ca. We also examine the mean SN Ia
UV-optical colors as a function of redshift, thought to be sensitive to
variations in progenitor composition. We find that the expected stretch
variations are sufficient to explain the differences, although improved data at
z~0 will enable more precise tests. Thus, to the extent possible with the
available datasets, our results support the continued use of SNe Ia as
standardized candles.
|
Universal Dependencies is an open community effort to create
cross-linguistically consistent treebank annotation for many languages within a
dependency-based lexicalist framework. The annotation consists of a
linguistically motivated word segmentation; a morphological layer comprising
lemmas, universal part-of-speech tags, and standardized morphological features;
and a syntactic layer focusing on syntactic relations between predicates,
arguments and modifiers. In this paper, we describe version 2 of the guidelines
(UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of
the currently available treebanks for 90 languages.
|
The understanding of the phase structure and the fundamental properties of
QCD matter from its microscopic description requires appropriate
first-principle approaches. Here I review the progress towards a quantitative
first-principle continuum approach within the framework of the Functional
Renormalization group established by the fQCD collaboration. I focus on recent
quantitative results for quenched QCD and Yang-Mills theory in the vacuum
before addressing the calculation of dynamical quantities such as spectral
functions and transport coefficients in this framework.
|
Large Language Models (LLMs) have been shown to perform well for many
downstream tasks. Transfer learning can enable LLMs to acquire skills that were
not targeted during pre-training. In financial contexts, LLMs can sometimes
beat well-established benchmarks. This paper investigates how well LLMs perform
in the task of forecasting corporate credit ratings. We show that while LLMs
are very good at encoding textual information, traditional methods are still
very competitive when it comes to encoding numeric and multimodal data. For our
task, current LLMs perform worse than a more traditional XGBoost architecture
that combines fundamental and macroeconomic data with high-density text-based
embedding features.
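A schematic version of such a baseline appears below (synthetic data; the paper's features, labels, and embedding model are not reproduced): text embeddings are concatenated with numeric fundamental/macroeconomic features and fed to an XGBoost classifier over rating classes.

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n, emb_dim, n_ratings = 2000, 32, 5
text_emb = rng.normal(size=(n, emb_dim))      # stand-in for LLM text embeddings
fundamentals = rng.normal(size=(n, 10))       # stand-in for numeric features
X = np.hstack([text_emb, fundamentals])
y = rng.integers(0, n_ratings, size=n)        # synthetic rating labels

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", (clf.predict(X[1500:]) == y[1500:]).mean())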
|
Using the logarithmic capacity, we give quantitative estimates of the Green
function, as well as lower bounds of the Bergman kernel for bounded
pseudoconvex domains in $\mathbb C^n$ and the Bergman distance for bounded
planar domains. In particular, it is shown that the Bergman kernel satisfies
$K_\Omega(z)\gtrsim \delta_\Omega(z)^{-2}$ for any bounded pseudoconvex domain
with $C^0$-boundary. An application to holomorphic motions is given.
|
We present a setup for quantum secret sharing using pseudo-GHZ states based
on energy-time entanglement. In contrast to true GHZ states, our states do
not enable GHZ-type tests of nonlocality; however, they bear the same quantum
correlations. The relatively high coincidence count rates found in our setup
enable, for the first time, an application of a quantum communication protocol
based on more than two qubits.
|
The top quark was discovered in 1995. The top quark mass is now well measured
at the Tevatron, with an uncertainty below 1% of the top mass. The world
average from last year was 170.9 $\pm$ 1.8 GeV/$c^2$. The new CDF measurement
is 172 $\pm$ 1.2 (stat) $\pm$ 1.5 (sys) GeV/$c^2$, and D0 will soon present a
new measurement. The top quark mass is an important parameter in the Standard
Model, and should be measured as precisely as possible. To learn more about the
top quark and to study possible new physics, other properties should also be
measured. At the Tevatron, the charge of the top quark can be measured
directly. Examples of other properties studied and reported in this
presentation are W helicity, top decay branching ratio to b ($R_b$), searches
for $t \to H b$ and for flavor changing neutral current (FCNC). The results are
all consistent with the Standard Model within current statistics. With
significantly more data being collected at the Tevatron, precision measurements
of the top properties are just starting.
|
We present a transportable optical clock (TOC) with $^{87}$Sr. Its complete
characterization against a stationary lattice clock resulted in a systematic
uncertainty of ${7.4 \times 10^{-17}}$ which is currently limited by the
statistics of the determination of the residual lattice light shift. The
measurements confirm that the systematic uncertainty is reducible to below the
design goal of $1 \times 10^{-17}$. The instability of our TOC is $1.3 \times
10^{-15}/\sqrt{\tau/\mathrm{s}}$. Both the systematic uncertainty and the
instability are, to the best of our knowledge, currently the best achieved with
any type of transportable clock. For autonomous operation the TOC is installed
in an air-conditioned car trailer. It is suitable for chronometric leveling
with sub-meter resolution, as well as for intercontinental cross-linking of
optical clocks, which is essential for a redefinition of the SI second. In
addition, the TOC will be used for high-precision experiments in fundamental
science that are commonly tied to precise frequency measurements, and it is a
first step towards space-borne optical clocks.
|
We consider a double quantum dot coupled to two normal leads and one
superconducting lead, modeling the Cooper pair beam splitter studied in two
recent experiments. Starting from a microscopic Hamiltonian we derive a general
expression for the branching current and the noise crossed correlations in
terms of single and two-particle Green's function of the dot electrons. We then
study numerically how these quantities depend on the energy configuration of
the dots and the presence of direct tunneling between them, isolating the
various processes which come into play. In the absence of direct tunneling, the
antisymmetric case (the two levels have opposite energies with respect to the
superconducting chemical potential) optimizes the Crossed Andreev Reflection
(CAR) process while the symmetric case (the two levels have the same energies)
favors the Elastic Cotunneling (EC) process. Switching on the direct tunneling
tends to suppress the CAR process, leading to negative noise crossed
correlations over the whole voltage range for large enough direct tunneling.
|
By using a new bilinear estimate, a pointwise estimate of the generalized
Oseen kernel and an idea of fractional bootstrap, we show in this note that
solutions to the Navier-Stokes equations with fractional dissipation are
analytic in space variables.
|
We propose a model based on the $SU(5)$ grand unification with an extra
$Z_{2}\otimes Z_{2}^{\prime}\otimes Z_{2}^{\prime \prime}\otimes Z_{4}\otimes
Z_{12}$ flavor symmetry, which successfully describes the observed SM fermion
mass and mixing pattern. The observed quark mass and mixing pattern is caused
by the $Z_{4}$ and $Z_{12}$ symmetries, which are broken at very high scale by
the $SU(5)$ scalar singlets $\sigma $ and $\chi $, charged respectively under
these symmetries and which acquire VEVs at the GUT scale. The light neutrino
masses are generated via a type I seesaw mechanism with three heavy Majorana
neutrinos. The model has in total 17 effective free parameters, of which 2
are fixed and 15 are fitted to reproduce the experimental values of the 18
physical parameters in the quark and lepton sectors. The model predictions for
both quark and lepton sectors are in excellent agreement with the experimental
data.
|
Annotating lots of 3D medical images for training segmentation models is
time-consuming. The goal of weakly supervised semantic segmentation is to train
segmentation models without using any ground truth segmentation masks. Our work
addresses the case where only image-level categorical labels, indicating the
presence or absence of a particular region of interest (such as tumours or
lesions), are available. Most existing methods rely on class activation mapping
(CAM). We propose a novel approach, ToNNO, which is based on the Tomographic
reconstruction of a Neural Network's Output. Our technique extracts stacks of
slices with different angles from the input 3D volume, feeds these slices to a
2D encoder, and applies the inverse Radon transform in order to reconstruct a
3D heatmap of the encoder's predictions. This generic method makes it possible
to perform dense prediction tasks on 3D volumes using any 2D image encoder. We
apply it to
weakly supervised medical image segmentation by training the 2D encoder to
output high values for slices containing the regions of interest. We test it on
four large scale medical image datasets and outperform 2D CAM methods. We then
extend ToNNO by combining tomographic reconstruction with CAM methods,
proposing Averaged CAM and Tomographic CAM, which obtain even better results.
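A minimal sketch of the core reconstruction idea follows, in a deliberately
simplified 2D variant (names, shapes, and the stand-in encoder are ours, not the
authors' code; a trained 2D CNN logit would replace the toy encoder, and
skimage >= 0.19 is assumed for the filter_name argument): slices are extracted
at a range of angles, the encoder scores each slice, and the inverse Radon
transform of the resulting sinogram yields a heatmap.

import numpy as np
from scipy.ndimage import rotate
from skimage.transform import iradon

def tonno_heatmap_2d(volume, encoder, n_angles=60):
    # volume: (D, H, W); encoder: callable mapping a 2D slice (D, W) -> scalar logit.
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    n_off = volume.shape[1]                  # slice offsets along the rotated axis
    sinogram = np.zeros((n_off, n_angles))
    for j, theta in enumerate(thetas):
        rot = rotate(volume, theta, axes=(1, 2), reshape=False, order=1)
        for i in range(n_off):
            sinogram[i, j] = encoder(rot[:, i, :])   # one logit per slice
    return iradon(sinogram, theta=thetas, filter_name='ramp')

# Toy usage: mean intensity stands in for a trained 2D encoder.
vol = np.random.rand(8, 32, 32)
heatmap = tonno_heatmap_2d(vol, encoder=lambda s: float(s.mean()), n_angles=24)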
|
"TRAO FUNS" is a project to survey Gould Belt's clouds in molecular lines.
This paper presents its first results on the central region of the California
molecular cloud, L1478. We performed On-The-Fly mapping observations using the
Taeduk Radio Astronomy Observatory (TRAO) 14m single dish telescope equipped
with a 16 multi-beam array covering $\sim$1.0 square degree area of this region
using C$^{18}$O (1-0), which mainly traces the low-density cloud, and a
$\sim$460 square arcminute area using N$_{2}$H$^{+}$ (1-0), which mainly traces
dense cores. CS (2-1)
and SO $(3_{2}-2_{1})$ were also used simultaneously to map $\sim$440 square
arcminute area of this region. We identified 10 filaments by applying the
dendrogram technique to the C$^{18}$O data-cube and 8 dense N$_{2}$H$^{+}$
cores by using {\sc FellWalker}. Basic physical properties of filaments such as
mass, length, width, velocity field, and velocity dispersion are derived. It is
found that L1478 consists of several filaments with slightly different
velocities. In particular, the supercritical filaments are found to contain
dense cores detected in N$_{2}$H$^{+}$. A comparison of the non-thermal
velocity dispersions derived from C$^{18}$O and N$_{2}$H$^{+}$ for the
filaments and dense cores indicates that some dense cores share the kinematics
of their surrounding filaments, while several others have kinematics that
differ from those of their host filaments. This suggests that the formation
mechanism of dense cores and filaments can differ among individual filaments,
depending on their morphologies and environments.
|
We present the Sum-Product Probabilistic Language (SPPL), a new probabilistic
programming language that automatically delivers exact solutions to a broad
range of probabilistic inference queries. SPPL translates probabilistic
programs into sum-product expressions, a new symbolic representation and
associated semantic domain that extends standard sum-product networks to
support mixed-type distributions, numeric transformations, logical formulas,
and pointwise and set-valued constraints. We formalize SPPL via a novel
translation strategy from probabilistic programs to sum-product expressions and
give sound exact algorithms for conditioning on and computing probabilities of
events. SPPL imposes a collection of restrictions on probabilistic programs to
ensure they can be translated into sum-product expressions, which allow the
system to leverage new techniques for improving the scalability of translation
and inference by automatically exploiting probabilistic structure. We implement
a prototype of SPPL with a modular architecture and evaluate it on benchmarks
the system targets, showing that it obtains up to 3500x speedups over
state-of-the-art symbolic systems on tasks such as verifying the fairness of
decision tree classifiers, smoothing hidden Markov models, conditioning
transformed random variables, and computing rare event probabilities.
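The sum-product semantics can be conveyed with a small hand-built example (this
mirrors the idea, not SPPL's actual API): mixtures are sum nodes, independent
components are product nodes, and an exact query probability is obtained by
pushing the event down to the leaf primitives.

from scipy.stats import norm, poisson

# Sum node: two-component Gaussian mixture over X; product node: X independent of Y.
weights = [0.3, 0.7]
components = [norm(0.0, 1.0), norm(2.0, 1.0)]
y_dist = poisson(1.5)

# Exact P(X > 1, Y = 2): evaluate the event at the leaves, then combine.
p_x = sum(w * c.sf(1.0) for w, c in zip(weights, components))  # sum node
p_event = p_x * y_dist.pmf(2)                                  # product node
print(p_event)                                                 # exact, no sampling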
|
We show how the trajectories of $d$-dimensional cellular automata (CA) can be
used to determine the ground states of $(d+1)$-dimensional classical spin
models, and we characterise their quantum phase transitions in the presence of
a transverse magnetic field. For each of the 256 one-dimensional
elementary CA we explicitly construct the simplest local two-dimensional
classical spin model associated to the given CA, and we also describe this
method for $d>1$ through selected examples. We illustrate our general
observations with detailed studies of: (i) the $d=1$ CA Rule 150 and its $d=2$
four-body plaquette spin model, (ii) the $d=2$ CA whose associated model is the
$d=3$ square-pyramid plaquette model, and (iii) two counter-propagating $d=1$
Rule 60 CA that correspond to the two-dimensional Baxter-Wu spin model. For the
quantum spin models, we show that the connection to CAs implies that their
quantum phase transitions are sensitive to the approach to the thermodynamic
limit, as probed by finite-size scaling.
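As a concrete instance of the CA-to-spin-model dictionary for Rule 150: in bit
variables the update is $b_{t+1,i} = b_{t,i-1} \oplus b_{t,i} \oplus b_{t,i+1}$,
and writing $s = 1-2b$ turns this into the four-spin constraint
$s_{t+1,i} = s_{t,i-1}\,s_{t,i}\,s_{t,i+1}$, so CA trajectories are exactly the
ground states of the plaquette Hamiltonian
$-J\sum_{t,i} s_{t+1,i}\, s_{t,i-1}\, s_{t,i}\, s_{t,i+1}$. A short sketch (our
own minimal code, with periodic boundaries) verifies this:

import numpy as np

def rule150_step(row):
    # b_{t+1,i} = b_{t,i-1} XOR b_{t,i} XOR b_{t,i+1}, periodic boundaries.
    return np.roll(row, 1) ^ row ^ np.roll(row, -1)

def trajectory(row0, t_steps):
    rows = [np.asarray(row0, dtype=np.int64)]
    for _ in range(t_steps):
        rows.append(rule150_step(rows[-1]))
    return np.array(rows)

# In spin variables s = 1 - 2b, every plaquette constraint is satisfied.
traj = trajectory([0, 0, 1, 0, 0, 0, 0, 0], 5)
s = 1 - 2 * traj
plaq = s[1:, :] * np.roll(s[:-1, :], 1, axis=1) * s[:-1, :] * np.roll(s[:-1, :], -1, axis=1)
assert np.all(plaq == 1)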
|
The Kepler Mission used a 0.95-m aperture space-based telescope to
continuously observe more than 150 000 stars for 4 years. We model and analyze
most KOIs listed at the Exoplanet Archive using the Kepler data. This document
describes data products related to the reported planetary parameters and
uncertainties for the Kepler Objects of Interest (KOIs) based on a
Markov Chain Monte Carlo (MCMC) analysis. Reported parameters, uncertainties
and data products can be found at the NASA Exoplanet Archive (
http://exoplanetarchive.ipac.caltech.edu/docs/Kepler_KOI_docs.html ).
|
Plane adjustment (PA) is crucial for many 3D applications, involving
simultaneous pose estimation and plane recovery. Despite recent advancements,
it remains a challenging problem in the realm of multi-view point cloud
registration. Current state-of-the-art methods can achieve globally optimal
convergence only with good initialization. Furthermore, their high time
complexity renders them impractical for large-scale problems. To address these
challenges, we first exploit a novel optimization strategy termed
\textit{Bi-Convex Relaxation}, which decouples the original problem into two
simpler sub-problems, reformulates each sub-problem using a convex relaxation
technique, and alternately solves each one until the original problem
converges. Building on this strategy, we propose two algorithmic variants for
solving the plane adjustment problem, namely \textit{GlobalPointer} and
\textit{GlobalPointer++}, based on point-to-plane and plane-to-plane errors,
respectively. Extensive experiments on both synthetic and real datasets
demonstrate that our method can perform large-scale plane adjustment with
linear time complexity, a larger convergence region, and robustness to poor
initialization, while achieving similar accuracy as prior methods. The code is
available at https://github.com/wu-cvgl/GlobalPointer.
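The alternating spirit of the strategy can be conveyed with a deliberately
simplified toy (translation-only, two frames, no convex relaxation of rotations;
names and structure are ours, not the released implementation): with the pose
frozen, each plane is refit by SVD; with planes frozen, the translation has a
closed-form least-squares update.

import numpy as np

def fit_plane(points):
    # Best-fit plane via SVD: unit normal n and offset d with n.x + d = 0.
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return n, -n @ c

def update_translation(point_sets, planes):
    # Closed-form minimizer of sum_k sum_i (n_k.(x_i + t) + d_k)^2 over t.
    A, b = np.zeros((3, 3)), np.zeros(3)
    for pts, (n, d) in zip(point_sets, planes):
        A += len(pts) * np.outer(n, n)
        b -= n * (pts @ n + d).sum()
    return np.linalg.solve(A, b)        # assumes >= 3 non-parallel planes

def toy_plane_adjustment(pts0, pts1, n_iters=30):
    # pts0/pts1: per-plane point arrays in frames 0 and 1; frame 0 is reference.
    t = np.zeros(3)
    for _ in range(n_iters):
        planes = [fit_plane(np.vstack([a, b + t])) for a, b in zip(pts0, pts1)]
        t += update_translation([b + t for b in pts1], planes)
    return t, planes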
|
We introduce a deep learning architecture for structure-based virtual
screening that generates fixed-sized fingerprints of proteins and small
molecules by applying learnable atom convolution and softmax operations to each
compound separately. These fingerprints are further transformed non-linearly;
their inner product is then computed and used to predict the binding potential.
Moreover, we show that widely used benchmark datasets may be insufficient for
testing structure-based virtual screening methods that utilize machine
learning. Therefore, we introduce a new benchmark dataset, which we constructed
based on DUD-E and PDBBind databases.
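A toy forward pass conveys the architecture's shape (random weights stand in for
training, and the dimensions are arbitrary choices of ours): per-atom features
are linearly transformed, pooled with a softmax over atoms into a fixed-size
fingerprint, transformed non-linearly, and the protein-ligand inner product
gives the binding score.

import numpy as np

rng = np.random.default_rng(0)

def fingerprint(atom_feats, W, V):
    # atom_feats: (n_atoms, d_in); W plays the role of a learnable atom
    # "convolution", softmax pooling over atoms gives permutation invariance.
    h = np.tanh(atom_feats @ W)               # per-atom embeddings
    a = np.exp(h) / np.exp(h).sum(axis=0)     # softmax over atoms, per channel
    fp = (a * h).sum(axis=0)                  # weighted pooling -> (d_fp,)
    return np.tanh(fp @ V)                    # further non-linear transform

W_p, V_p = rng.normal(size=(16, 32)), rng.normal(size=(32, 32))
W_l, V_l = rng.normal(size=(12, 32)), rng.normal(size=(32, 32))
protein_atoms = rng.normal(size=(200, 16))
ligand_atoms = rng.normal(size=(30, 12))
score = fingerprint(protein_atoms, W_p, V_p) @ fingerprint(ligand_atoms, W_l, V_l)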
|
The set of all closed subgroups of a profinite group carries a natural profinite
topology. This space of subgroups can be classified up to homeomorphism in many
cases, and tight bounds can be placed on its complexity as expressed by its
scattered height.
|
Results from exoplanet surveys indicate that small planets (super-Earth size
and below) are abundant in our Galaxy. However, little is known about their
interiors and atmospheres. There is therefore a need to find small planets
transiting bright stars, which would enable a detailed characterisation of this
population of objects. We present the results of a search for the transit of
the Earth-mass exoplanet Alpha Centauri Bb with the Hubble Space Telescope
(HST). We observed Alpha Centauri B twice in 2013 and 2014 for a total of 40
hours. We achieve a precision of 115 ppm per 6-s exposure time in a
highly-saturated regime, which is found to be consistent across HST orbits. We
rule out the transiting nature of Alpha Centauri Bb with the orbital parameters
published in the literature at 96.6% confidence. We find in our data a single
transit-like event that could be associated with another Earth-sized planet in
the system on a longer-period orbit. Our program demonstrates the ability of HST
to obtain consistent, high-precision photometry of saturated stars over 26
hours of continuous observations.
|
A 2-club is a graph of diameter at most two. In the decision version of the
parametrized {\sc 2-Club Cluster Edge Deletion} problem, an undirected graph
$G$ is given along with an integer $k\geq 0$ as parameter, and the question is
whether $G$ can be transformed into a disjoint union of 2-clubs by deleting at
most $k$ edges. A simple fixed-parameter algorithm solves the problem in
$\mathcal{O}^*(3^k)$ time, and a decade-old algorithm was claimed to achieve an
improved running time of $\mathcal{O}^*(2.74^k)$ via a sophisticated case
analysis. Unfortunately, the latter algorithm suffers from a flawed branching
scenario. In this paper, an improved fixed-parameter algorithm is presented
with a running time in $\mathcal{O}^*(2.695^k)$.
|
Proton acceleration in nearby blazars can be diagnosed by measuring their
intense TeV $\gamma$-ray emission. Flux predictions for 1101+384 (Mrk421) and
1219+285 (ON231), both strong EGRET sources (0.1-10 GeV), are obtained from
model spectra of unsaturated synchrotron pair cascades fitted to publicly
available multifrequency data. An experimental effort to confirm the predicted
emission in the range 1-10 TeV would be of great importance for the problems of
the origin of cosmic rays, the era of galaxy formation and the cosmological
distance scale.
|
Possible maximal mixing seen in the oscillations of the atmospheric neutrinos
has led to the postulate of a $\mu$-$\tau$ symmetry which interchanges $\nu_\mu$
and $\nu_\tau$. We argue that such a symmetry need not be special to neutrinos
but can be extended to all fermions. The assumption that all fermion mass
matrices are approximately invariant under interchange of the second and the
third generation fields is shown to be phenomenologically viable and has
interesting consequences. In the quark sector, the smallness of $V_{ub}$ and
$V_{cb}$ can be a consequence of this approximate 2-3 symmetry. The same
approximate symmetry can simultaneously lead to large atmospheric mixing angle
and can describe the leptonic mixing quite well provided the neutrino spectrum
is quasi degenerate. We present this scenario, elaborate on its consequences
and discuss its realization.
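In the exact 2-3 symmetric limit the texture is easy to state (a standard
illustration, not the paper's full construction): invariance under the
interchange of the second- and third-generation fields forces a mass matrix of
the form

$$ M = \begin{pmatrix} A & B & B \\ B & C & D \\ B & D & C \end{pmatrix}, $$

which is diagonalized with $\theta_{23} = \pi/4$ and $\theta_{13} = 0$; small
breaking of this form can then yield small $V_{ub}$, $V_{cb}$ together with a
near-maximal atmospheric mixing angle.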
|
Let D be a directed graph with vertex set V and order n. An anti-directed
hamiltonian cycle H in D is a hamiltonian cycle in the graph underlying D such
that no pair of consecutive arcs in H form a directed path in D. An
anti-directed 2-factor in D is a vertex-disjoint collection of anti-directed
cycles in D that span V. It was proved in [3] that if the indegree and the
outdegree of each vertex of D is greater than (9/16)n then D contains an
anti-directed hamiltonian cycle. In this paper we prove that, given a directed
graph D, the problem of determining whether D has an anti-directed 2-factor is
NP-complete, and we use a proof technique similar to the one used in [3] to
prove that if the indegree and the outdegree of each vertex of D is greater
than (24/46)n then D contains an anti-directed 2-factor.
|
We present Giant Metrewave Radio Telescope (GMRT) observations for three
(viz., DDO 68, SDSS J2104-0035 and UGC 772) of the six most metal-deficient
actively star-forming galaxies known. Although there is a debate as to whether
these galaxies are undergoing their first episode of star formation or not,
they are `young' in the sense that their ISM is chemically unevolved. In this
regard, they are the nearest equivalents of young galaxies in the early
Universe. All three galaxies that we have observed have irregular HI
morphologies and kinematics, which we interpret as either due to tidal
interaction with neighbouring galaxies, or the consequences of a recent merger.
The remaining three of the six most metal-deficient galaxies are also known to
have highly disturbed HI distributions and are interacting. This is interesting
because these galaxies were chosen solely on the basis of their metallicity and
not for any particular signs of interaction. In this sense (i.e., their gas has
not yet had time to settle into a regular disc), one could regard these
extremely metal deficient (XMD) galaxies as `young'. The current star formation
episode is likely to have been triggered by interaction/merger. It is also
possible that the tidal interaction has led to enhanced mixing with metal-poor
gas in the outer disc, and hence to a low gas-phase metallicity in the central
star-forming regions. We also find that in general these galaxies do not show a
one-to-one correspondence between regions of high HI column density and regions
with current star formation. However, to the extent that one can define a
threshold density, its value (~10^{21} atoms cm^{-2}) is similar to that in
galaxies with much higher metallicity.
|
We prove that on a closed surface of genus $g$, the cardinality of a set of
simple closed curves in which any two are non-homotopic and intersect at most
once is $\lesssim g^2 \log(g)$. This bound matches the largest known
constructions to within a logarithmic factor. The proof uses a probabilistic
argument in graph theory. It generalizes as well to the case of curves that
intersect at most $k$ times in pairs.
|
The Belle II experiment at the SuperKEKB energy-asymmetric $e^+ e^-$ collider
is a substantial upgrade of the B factory facility at the Japanese KEK
laboratory. The design luminosity of the machine is $8\times 10^{35}$
cm$^{-2}$s$^{-1}$ and the Belle II experiment aims to record 50 ab$^{-1}$ of
data, a factor of 50 more than its predecessor. From February to July 2018, the
machine completed a commissioning run, achieving a peak luminosity of
$5.5\times 10^{33}$ cm$^{-2}$s$^{-1}$, during which Belle II recorded a data
sample of about 0.5 fb$^{-1}$. Main operation of SuperKEKB started in March 2019.
We use this dataset to characterize the performance of the detector regarding
the tracking of charged particles, the reconstruction of known resonances, and
the capability of identifying displaced decay vertices. To assess the B Physics
capabilities of the experiment, one of the first benchmarks consists in the
measurement of the lifetime of B mesons and of the $B^0-\bar B^0$ mixing
frequency. We present the first results, based on samples of B mesons that
decay to hadronic and semileptonic final states.
|
This paper presents a new control scheme, namely additive-state-decomposition
dynamic inversion stabilized control, which is used to stabilize a class of
multi-input multi-output (MIMO) systems subject to nonparametric time-varying
uncertainties with respect to both state and input. By additive state
decomposition and a new definition of output, the considered uncertain system
is transformed into a minimum-phase uncertainty-free system with relative
degree one, in which all uncertainties are lumped into a new disturbance at the
output. Subsequently, dynamic inversion control is applied to reject the lumped
disturbance. Performance analysis of the resulting closed-loop dynamics shows
that the stability can be ensured. Finally, to demonstrate its effectiveness,
the proposed control is applied to two existing problems by numerical
simulation. Furthermore, in order to show its practicability, the proposed
control is also performed on a real quadrotor to stabilize its attitude when
its inertia moment matrix is subject to a large uncertainty.
|
The AI4GCC competition presents a bold step forward in the direction of
integrating machine learning with traditional economic policy analysis. Below,
we highlight two potential areas for improvement that could enhance the
competition's ability to identify and evaluate proposed negotiation protocols.
Firstly, we suggest the inclusion of an additional index that accounts for
consumption/utility as part of the evaluation criteria. Secondly, we recommend
further investigation into the learning dynamics of agents in the simulator and
the game theoretic properties of outcomes from proposed negotiation protocols.
We hope that these suggestions can be of use for future iterations of the
competition/simulation.
|
It is known that turbulent energy is rapidly transferred in the direction of
the rotation axis in a rotating system, in comparison with the non-rotating
case. In this study, this phenomenon is investigated as a problem of energy
diffusion expressed by the Reynolds averaged Navier-Stokes (RANS) model. The
conventional gradient-diffusion approximation for the turbulent energy flux
cannot account for the enhanced energy transport observed in rotating
inhomogeneous turbulence. In order to adequately describe the phenomenon, we
propose a new model for the energy flux due to the pressure associated with the
rotational motion of a fluid. The model of the energy flux is expressed to be
proportional to the turbulent helicity. This property is closely related to the
group velocity of inertial waves in a rapidly rotating fluid. The validity of
the model is assessed using a direct numerical simulation (DNS) of
inhomogeneous turbulence under rotation. It is shown that most of the turbulent
energy transport enhanced by the system rotation is attributed to the pressure
diffusion term. The spatial distribution of the energy flux due to the pressure
related to the system rotation is similar to that of the turbulent helicity
with a negative coefficient. Hence, the new model, which is proportional to the
turbulent helicity is able to qualitatively account for the enhanced energy
flux due to the system rotation. Finally, the helical Rossby number is proposed
in order to estimate the relative importance of the energy flux enhanced by the
turbulent helicity and the rotation, in comparison to the conventional
gradient-diffusion approximation.
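A dimensionally consistent schematic of such a helicity-proportional flux (our
notation; the paper's precise coefficient and length scale may differ) is

$$ F^{P}_{i} \simeq - C_{H}\, \ell^{2}\, \Omega_{i}\, H, \qquad H = \langle \boldsymbol{u}'\cdot\boldsymbol{\omega}' \rangle, $$

with $C_H > 0$ and $\ell$ a turbulence length scale, so that energy is
transported parallel or anti-parallel to the rotation axis depending on the sign
of the turbulent helicity, mirroring the group velocity of inertial waves.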
|
This paper continues a series of investigations on converging representations
for the Riemann Zeta function. We generalize some identities which involve
Riemann's zeta function, and moreover we give new series and integrals for the
zeta function. The results originate from attempts to extend the zeta function
by classical means on the complex plane. This is particularly of interest for
representations which converge rapidly in a given area of the complex plane, or
for the purpose of making error bounds available.
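A classical example of the kind of representation in question is the alternating
series obtained from the Dirichlet eta function,

$$ \zeta(s) = \frac{1}{1-2^{1-s}} \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}, \qquad \operatorname{Re}(s) > 0,\; s \neq 1, $$

which extends the usual Dirichlet series beyond $\operatorname{Re}(s) > 1$ and,
being alternating, comes with elementary error bounds for its partial sums.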
|
We present an abstract result that characterizes the coincidence of certain
classes of linear operators with the class of Cohen strongly summing linear
operators. Our argument is extended to multilinear operators and, as a
consequence, we establish a few alternative characterizations for the class of
Cohen strongly summing multilinear operators.
|
I analyze several catalogs of known visual and spectroscopic binaries and
conclude that a large number of binaries is missing in current catalogs.
Samples of the best studied (nearby and bright) stars indicate that the true
binary fraction may be as high as 95%. A preliminary analysis indicates that
these binaries can affect the astrometry significantly.
|
Over the past decade alternative technologies have gained momentum as
conventional digital electronics continue to approach their limitations, due to
the end of Moore's Law and Dennard Scaling. At the same time, we are facing new
application challenges such as those due to the enormous increase in data.
Attention has therefore shifted from homogeneous computing to specialized
heterogeneous solutions. As an example, brain-inspired computing has re-emerged
as a viable solution for many applications. Such new processors, however, have
widened the abstraction gamut from device level to applications. Therefore,
efficient abstractions that can provide vertical design-flow tools for such
technologies became critical. Photonics in general, and neuromorphic photonics
in particular, are among the promising alternatives to electronics. While the
arsenal of device-level tools for photonics and of high-level neural network
platforms is rapidly expanding, there has not been much work to bridge this
gap. Here, we present a design methodology to mitigate this problem by
extending high-level hardware-agnostic neural network design tools with
functional and performance models of photonic components. In this paper we
detail this tool and methodology by using design examples and associated
results. We show that adopting this approach enables designers to efficiently
navigate the design space and devise hardware-aware systems with alternative
technologies.
|
We present the elemental abundances and ages of 19 massive quiescent galaxies
at $z\sim1.4$ and $z\sim2.1$ from the Keck Heavy Metal Survey. The ultra-deep
LRIS and MOSFIRE spectra were modeled using a full-spectrum stellar population
fitting code with variable abundance patterns. The galaxies have iron
abundances between [Fe/H] = -0.5 and -0.1 dex, with typical values of $-0.2$
[$-0.3$] at $z\sim1.4$ [$z\sim2.1$]. We also find a tentative
$\log\sigma_v$-[Fe/H] relation at $z\sim1.4$. The magnesium-to-iron ratios span
[Mg/Fe] = 0.1--0.6 dex, with typical values of $0.3$ [$0.5$] dex at $z\sim1.4$
[$z\sim2.1$]. The ages imply formation redshifts of $z_{\rm form}=2-8$.
Compared to quiescent galaxies at lower redshifts, we find [Fe/H] was $\sim0.2$
dex lower at $z=1.4-2.1$. We find no evolution in [Mg/Fe] out to $z\sim1.4$,
though the $z\sim2.1$ galaxies are $0.2$ dex enhanced compared to $z=0-0.7$. A
comparison of these results to a chemical evolution model indicates that
galaxies at higher redshift form at progressively earlier epochs and over
shorter star-formation timescales, with the $z\sim2.1$ galaxies forming the
bulk of their stars over 150 Myr at $z_{\rm form}\sim4$. This evolution cannot
be solely attributed to an increased number of quiescent galaxies at later
times; several Heavy Metal galaxies have extreme chemical properties not found
in massive galaxies at $z\sim0.0-0.7$. Thus, the chemical properties of
individual galaxies must evolve over time. Minor mergers also cannot fully
account for this evolution as they cannot increase [Fe/H], particularly in
galaxy centers. Consequently, the build-up of massive quiescent galaxies since
$z\sim2.1$ may require further mechanisms such as major mergers and/or central
star formation.
|
The traditional perturbative method is applied to the case of gravitational
lensing of planetary systems. A complete and detailed description of the
structure of caustics for a system with an arbitrary number of planets can be
obtained. I have also found precise analytical expressions for microlensing
light curves perturbed by the presence of planets.
|
The demand for quick and reliable DevOps operations pushed distributors of
repository platforms to implement workflows. Workflows allow automating code
management operations directly on the repository hosting the software. However,
this feature also introduces security issues that directly affect the
repository, its content, and all the software supply chains in which the hosted
code is involved. Hence, an attack exploiting vulnerable workflows can
disruptively affect large software ecosystems. To empirically assess the
importance of this problem, in this paper, we focus on the de-facto main
distributor (i.e., GitHub), and we developed a security assessment methodology
for GitHub Actions workflows, which are widely adopted in software supply
chains. We implemented the methodology in a tool (GHAST) and applied it to 50
open-source projects. The experimental results are worrisome as they allowed
identifying a total of 24,905 security issues (all reported to the
corresponding stakeholders), thereby indicating that the problem is open and
demands further research and investigation.
|
We propose that an optically excited heavy hole in a quantum dot can drive
the surrounding nuclear spins into a quiescent collective state, leading to
significantly prolonged coherence time for the electron spin qubit. This
provides a general paradigm to combat decoherence by environmental control
without involving the active qubit in quantum information processing. It also
serves as a unified solution to some open problems brought about by two recent
experiments [X. Xu et al., Nature 459, 1105 (2009) and C. Latta et al., Nature
Phys. 5, 758 (2009)].
|
Shell structures with a high stiffness-to-weight ratio are desirable in
various engineering applications. In such scenarios, topology optimization
serves as a popular and effective tool for shell structures design. Among the
topology optimization methods, the solid isotropic material with penalization
(SIMP) method is often chosen due to its simplicity and convenience. However,
the SIMP method is typically integrated with conventional finite element
analysis (FEA), which has limitations in computational accuracy. Achieving high
accuracy with FEA needs a substantial number of elements, leading to
computational burdens. In addition, the discrete representation of the material
distribution may result in rough boundaries and checkerboard structures. To
overcome these challenges, this paper proposes an isogeometric analysis (IGA)
based SIMP method for optimizing the topology of shell structures based on
Reissner-Mindlin theory. We use NURBS to represent both the shell structure and
the material distribution function with the same basis functions, allowing for
higher accuracy and smoother boundaries. The optimization model takes
compliance as the objective function with a volume fraction constraint and the
coefficients of the density function as design variables. The Method of Moving
Asymptotes is employed to solve the optimization problem, resulting in an
optimized shell structure defined by the material distribution function. To
obtain fair boundaries in the optimized shell structure, further processing is
conducted by automatically fitting the boundaries with fair B-spline curves.
Furthermore, the IGA-SIMP framework is applied to generate porous shell
structures by imposing different local volume fraction constraints. Numerical
examples are provided to demonstrate the feasibility and efficiency of the
IGA-SIMP method, showing that it outperforms the FEA-SIMP method and produces
smoother boundaries.
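For reference, the underlying SIMP statement (standard in the field, whether
discretized by FEA or IGA) penalizes intermediate densities through

$$ E(\rho) = E_{\min} + \rho^{p}\,(E_{0}-E_{\min}), \qquad p \approx 3, $$

and solves

$$ \min_{\rho}\; c(\rho) = \mathbf{u}^{T}\mathbf{K}(\rho)\,\mathbf{u} \quad \text{s.t.}\quad \mathbf{K}(\rho)\,\mathbf{u}=\mathbf{f},\quad V(\rho)/V_{0} \le f_{v},\quad 0 < \rho_{\min} \le \rho \le 1; $$

in the present setting $\rho$ is itself a NURBS function, so the design
variables are its control coefficients.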
|
Quantum entanglement swapping is one of the most promising ways to realize
the quantum connection among local quantum nodes. In this Letter, we present an
experimental demonstration of the entanglement swapping between two independent
multipartite entangled states, each of which involves a tripartite
Greenberger-Horne-Zeilinger (GHZ) entangled state of an optical field. The
entanglement swapping is implemented deterministically by means of a joint
measurement on two optical modes coming from the two multipartite entangled
states respectively and the classical feedforward of the measurement results.
After entanglement swapping the two independent multipartite entangled states
are merged into a large entangled state in which all unmeasured quantum modes
are entangled. The entanglement swapping between a tripartite GHZ state and an
Einstein-Podolsky-Rosen entangled state is also demonstrated and the dependence
of the resultant entanglement on transmission loss is investigated. The
presented experiment provides a feasible technical reference for constructing
more complicated quantum networks.
|
We repeat our original simulations of the hybrid meson spectrum using the
clover action, as a check on lattice artifacts. Our results for the 1-+ masses
do not substantially change. We present preliminary results for the wave
function of the 1-+ state in Coulomb gauge.
|
The present paper is concerned with properties of multiple Schramm--Loewner
evolutions (SLEs) labelled by a parameter $\kappa\in (0,8]$. Specifically, we
consider the solution of the multiple Loewner equation driven by a time change
of Dyson's Brownian motions in the non-colliding regime. Although it is often
considered that several properties of the solution can be studied by means of
commutation relations of SLEs and the absolute continuity, this method is
available only in the case that the curves generated by commuting SLEs are
separated. Beyond this restriction, it is not even obvious that the solution of
the multiple Loewner equation generates multiple curves. To overcome this
difficulty, we employ the coupling of Gaussian free fields and multiple SLEs.
Consequently, we prove the longstanding conjecture that the solution indeed
generates multiple continuous curves. Furthermore, these multiple curves are
(i) simple disjoint curves when $\kappa\in (0,4]$, (ii) intersecting curves
when $\kappa\in (4,8)$, and (iii) space-filling curves when $\kappa=8$.
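In one common normalization (stated here for orientation; conventions vary
across the literature), the multiple Loewner equation in the upper half-plane
reads

$$ \partial_t g_t(z) = \sum_{i=1}^{N} \frac{2}{g_t(z) - X^{i}_{t}}, \qquad g_0(z) = z, $$

with driving processes $X^{1}_{t} < \cdots < X^{N}_{t}$ given by a time change
of Dyson's Brownian motion with parameter $\beta = 8/\kappa$, which is
non-colliding precisely for $\beta \ge 1$, i.e. $\kappa \le 8$.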
|
We take the first step in extending the integrability approach to one-point
functions in AdS/dCFT to higher loop orders. More precisely, we argue that the
formula encoding all tree-level one-point functions of SU(2) operators in the
defect version of N=4 SYM theory, dual to the D5-D3 probe-brane system with
flux, has a natural asymptotic generalization to higher loop orders. The
asymptotic formula correctly encodes the information about the one-loop
correction to the one-point functions of non-protected operators once dressed
by a simple flux-dependent factor, as we demonstrate by an explicit computation
involving a novel object denoted as an amputated matrix product state.
Furthermore, when applied to the BMN vacuum state, the asymptotic formula gives
a result for the one-point function which in a certain double-scaling limit
agrees with that obtained in the dual string theory up to wrapping order.
|
We describe a computationally feather-light and intuitive, yet provably
efficient algorithm, named HALFADO. HALFADO is designed for detecting
suspicious events in a high-frequency stream of complex entries, based on a
relatively small number of examples of human judgement. Operating a
sufficiently accurate detection system is vital for {\em assisting} teams of
human experts in many different areas of the modern digital society. These
systems have intrinsically a far-reaching normative effect, and public
knowledge of the workings of such technology should be a human right.
On a conceptual level, the present approach extends one of the most classical
learning algorithms for classification, inheriting its theoretical properties.
It however works in a semi-supervised way integrating human and computational
intelligence. On a practical level, this algorithm transcends existing
approaches (expert systems) by managing and boosting their performance into a
single global detector.
We illustrate HALFADO's efficacy on two challenging applications: (1) for
detecting {\em hate speech} messages in a flow of text messages gathered from a
social media platform, and (2) for a Transaction Monitoring System (TMS) in
FinTech detecting fraudulent transactions in a stream of financial
transactions.
This algorithm illustrates that - contrary to popular belief - advanced
methods of machine learning require neither advanced levels of computational
power nor expensive annotation efforts.
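The flavor of such a semi-supervised aggregation can be conveyed by the
classical weighted-majority scheme that this family of algorithms extends (a
generic sketch under our own naming; the actual HALFADO update rule may
differ): expert filters vote on each entry, the global detector fires on the
weighted vote, and sparse human judgements down-weight the experts that were
wrong.

def weighted_majority_filter(experts, stream, human_label, eta=0.5, threshold=0.5):
    # experts: callables entry -> bool ("suspicious?"); human_label: entry -> bool.
    w = [1.0] * len(experts)
    flagged = []
    for entry in stream:
        votes = [e(entry) for e in experts]
        score = sum(wi for wi, v in zip(w, votes) if v) / sum(w)
        if score >= threshold:                  # global detector fires
            flagged.append(entry)
            truth = human_label(entry)          # sparse, expensive feedback
            w = [wi * (eta if v != truth else 1.0)
                 for wi, v in zip(w, votes)]    # penalize mistaken experts
    return flagged, w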
|
We phenomenologically study whether partonic collisions responsible for the
growth of hadron-hadron cross sections at high energy can be ascribed to
instanton-induced processes. Although non-perturbative in nature, these
interactions occur at the semi-hard scale $Q\sim 1-2$ GeV, and should therefore
be described using information from deep inelastic leptonic scattering on the
partonic constituents in nucleons, pions, and photons. After considering
shadowing corrections in nucleon-nucleon scattering, we fix a free instanton
tail suppression parameter and determine the effective quark-quark cross
section. The resulting contributions to $NN$, $\pi N$, $\gamma N$, and
$\gamma\gamma$ cross sections all $increase$ with energy differently, but in
reasonable agreement with experimental data. We then proceed to an estimate of
the number of such processes present in high energy Au-Au collisions at RHIC,
finding that the amount of entropy produced by instanton/sphaleron events
matches the observed amount.
|
We argue that the velocity dispersions and masses of galactic bulges and
spheroids are byproducts of the feedback that regulates rapid black hole growth
in protogalaxies. We suggest that the feedback energy liberated by accretion
must pass through the accreting material, in an energy-conserving flux close-in
and a momentum-conserving flux further out. If the inflowing gas dominates the
gravitational potential outside the Bondi radius, feedback from
Eddington-limited accretion drives the density profile of the gas to that of a
singular isothermal sphere. We find that the velocity dispersion associated
with the isothermal potential, sigma, increases with time as the black hole
mass M grows, in such a way that M is proportional to sigma^4. The coefficient
of this proportionality depends on the radius at which the flow switches from
energy conserving to momentum conserving, and gives the observed M-sigma
relation if the transition occurs at ~100 Schwarzschild radii. We associate
this transition with radiative cooling and show that bremsstrahlung, strongly
boosted by inverse Compton scattering in a two-temperature (T_p >> T_e) plasma,
leads to a transition at the desired radius.
According to this picture, bulge masses M_b are insensitive to the virial
masses of their dark matter haloes, but correlate linearly with black hole
mass. Our analytic model also explains the M_b-sigma (Faber-Jackson) relation
as a relic of black hole accretion. The model naturally explains why the
M-sigma relation has less scatter than either the M-M_b (Magorrian) or the
Faber-Jackson relation. It suggests that the M-sigma relation could extend down
to very low velocity dispersions, and predicts that the relation should not
evolve with redshift.
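The $M \propto \sigma^4$ scaling follows from a short momentum balance (the
standard momentum-driven argument, consistent with the abstract): for an
isothermal sphere, $M(r) = 2\sigma^{2} r/G$, so the weight of the gas is

$$ \frac{G\,M(r)\,f_{g}M(r)}{r^{2}} = \frac{4 f_{g}\sigma^{4}}{G}, $$

independent of radius; equating this to the momentum flux of an
Eddington-limited wind, $L_{\rm Edd}/c = 4\pi G M_{\rm BH}/\kappa$, gives

$$ M_{\rm BH} = \frac{f_{g}\,\kappa}{\pi G^{2}}\,\sigma^{4}. $$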
|
We introduce the idea of Centaur Programmer, based on the premise that a
collaborative approach between humans and AI will be more effective than AI
alone, as demonstrated in centaur chess tournaments where mixed teams of humans
and AI beat sole computers. The paper introduces several collaboration models
for programming alongside an AI, including the guidance model, the sketch
model, and the inverted control model, and suggests that universities should
prepare future programmers for a more efficient and productive programming
environment augmented with AI. We hope to contribute to the important
discussion about the diverse ways in which humans and AI can work together in
programming in the next decade, how universities should handle these changes,
and some of the legal implications surrounding this topic.
|
Globular clusters (GCs) are found in all types of galaxies and harbor some of
the most extreme stellar systems, including black holes that may dynamically
assemble into merging binaries (BBHs). Uncertain GC properties, including when
they formed, their initial masses and sizes, affect their production rate of
BBH mergers. Using the gravitational-wave catalog GWTC-3, we measure that
dynamically-assembled BBHs -- those that are consistent with isotropic spin
directions -- make up ${61^{+29}_{-44}\%}$ of the total merger rate, with a
local merger rate of ${10.9^{+16.8}_{-9.3}}$ Gpc$^{-3}$ yr$^{-1}$ rising to
${58.9^{+149.4}_{-46.0}}$ Gpc$^{-3}$ yr$^{-1}$ at $z = 1$. We assume this
inferred rate describes the contribution from GCs and compare it against the
Cluster Monte Carlo (CMC) simulation catalog to directly fit for the GC initial
mass function, virial radius distribution, and formation history. We find that
GC initial masses are consistent with a Schechter function with slope ${\beta_m
= -1.9^{+0.8}_{-0.8}}$. Assuming a mass function slope of $\beta_m = -2$ and a
mass range between $10^4$--$10^8\,M_\odot$, we infer a GC formation rate at $z
= 2$ of ${5.0^{+9.4}_{-4.0}}$ Gpc$^{-3}$ yr$^{-1}$, or
${2.1^{+3.9}_{-1.7}}\times 10^6\,M_\odot$ Gpc$^{-3}$ yr$^{-1}$ in terms of mass
density. We find that the GC formation rate probably rises more steeply than
the global star formation rate between $z = 0$ and $z = 3$ ({82\%} credibility)
and implies a local number density that is ${f_\mathrm{ev} =
22.6^{+29.9}_{-16.2}}$ times higher than the observed density of surviving GCs.
This is consistent with expectations for cluster evaporation, but may suggest
that other environments contribute to the rate of BBH mergers with
significantly tilted spins.
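Schematically, the fitted initial mass function takes the Schechter form

$$ \frac{dN}{dM_{\rm GC}} \propto M_{\rm GC}^{\beta_m}\, e^{-M_{\rm GC}/M_{\ast}}, \qquad 10^{4} \lesssim M_{\rm GC}/M_{\odot} \lesssim 10^{8}, $$

where the slope $\beta_m$ is the quantity constrained above
($\beta_m = -1.9^{+0.8}_{-0.8}$); the quoted mass range is the assumption stated
in the abstract, while the truncation mass $M_\ast$ is part of the standard
Schechter parametrization.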
|
Despite the remarkable ability of large language models (LLMs) in language
comprehension and generation, they often suffer from producing factually
incorrect information, also known as hallucination. A promising solution to
this issue is verifiable text generation, which prompts LLMs to generate
content with citations for accuracy verification. However, verifiable text
generation is non-trivial due to the focus-shifting phenomenon, the intricate
reasoning needed to align the claim with correct citations, and the dilemma
between the precision and breadth of retrieved documents. In this paper, we
present VTG, an innovative framework for Verifiable Text Generation with
evolving memory and self-reflection. VTG introduces evolving long short-term
memory to retain both valuable documents and recent documents. A two-tier
verifier equipped with an evidence finder is proposed to rethink and reflect on
the relationship between the claim and citations. Furthermore, active retrieval
and diverse query generation are utilized to enhance both the precision and
breadth of the retrieved documents. We conduct extensive experiments on five
datasets across three knowledge-intensive tasks and the results reveal that VTG
significantly outperforms baselines.
|
The rejection of forward jets originating from additional proton--proton
interactions (pile-up) is crucial for a variety of physics analyses at the LHC,
including Standard Model measurements and searches for physics beyond the
Standard Model. The identification of such jets is challenging due to the lack
of track and vertex information in the pseudorapidity range $|\eta|>2.5$. This
paper presents a novel strategy for forward pile-up jet tagging that exploits
jet shapes and topological jet correlations in pile-up interactions.
Measurements of the per-jet tagging efficiency are presented using a data set
of 3.2 fb$^{-1}$ of proton--proton collisions at a centre-of-mass energy of 13
TeV collected with the ATLAS detector. The fraction of pile-up jets rejected in
the range $2.5<|\eta|<4.5$ is estimated in simulated events with an average of
22 interactions per bunch-crossing. It increases with jet transverse momentum
and, for jets with transverse momentum between 20 and 50 GeV, it ranges between
49% and 67% with an efficiency of 85% for selecting hard-scatter jets. A case
study is performed in Higgs boson production via the vector-boson fusion
process, showing that these techniques mitigate the background growth due to
additional proton--proton interactions, thus enhancing the reach for such
signatures.
|
Two samples of the [Ca$_2$CoO$_{3-t}$]$_{0.62}$(CoO$_2$) misfit cobaltate,
often denoted as the Ca$_{3}$Co$_{3.93}$O$_{9}$ phase, were prepared from the
same ceramic material by the oxygen and argon annealing, resulting in different
carrier concentrations in the conducting CoO$_{2}$ layers, n=0.31 and 0.19
hole/Co, respectively. Electrical and thermal transport properties were studied
as functions of magnetic field up to 140 kOe. The magnetothermopower data
reveal an extra spin-entropy contribution to Seebeck coefficient that is not
expected for carriers of Fermi liquid character. Its magnitude is
unprecedentedly large, reaching at zero field up to 50$\%$ of the theoretical
limit $(k_B/e)\ln 2 = 59\ \mu\mathrm{V\,K}^{-1}$. This spin-entropy contribution is
gradually suppressed with increasing magnetic field, and the saturation is even
observed when temperatures are low enough. To understand the results, the
thermopower is treated in terms of the purely thermodynamic Kelvin formula, and
the so-called spin-liquid model is invoked, providing a reason for the
spin-entropy manifestation in the [Ca$_2$CoO$_{3-t}$]$_{0.62}$(CoO$_2$) misfits.
|