The BlueWalker 3 (BW3) satellite was folded into a compact object when
launched on 2022 September 11. The spacecraft's apparent visual magnitude
initially ranged from about 4 to 8. Observations on November 11 revealed that
the brightness had increased by 4 magnitudes, indicating that the spacecraft
had deployed into a large flat-panel shape. The satellite then faded by several
magnitudes in December before returning to its full luminosity; this was
followed by additional faint periods in 2023 February and March. We discuss the
probable cause of the dimming phenomena and identify a geometrical circumstance
where the satellite is abnormally bright. The luminosity of BW3 can be
represented with a brightness model which is based on the satellite shape and
orientation as well as a reflection function having Lambertian and
pseudo-specular components. Apparent magnitudes are most frequently between 2.0
and 3.0. When BW3 is near zenith the magnitude is about 1.4.
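To illustrate how such a brightness model can be structured (a minimal sketch with hypothetical geometry and coefficients, not the authors' fitted model), the apparent magnitude of a sunlit flat panel can be computed from a Lambertian term plus a pseudo-specular lobe, scaled by the inverse square of the range:

import numpy as np

def panel_magnitude(range_km, sun_angle_deg, obs_angle_deg,
                    area_m2=64.0, c_lambert=0.2, c_spec=0.05, m_sun=-26.74):
    # Illustrative flat-panel model: diffuse (Lambertian) reflection plus a
    # crude pseudo-specular lobe; all coefficients here are assumptions.
    mu_sun = np.cos(np.radians(sun_angle_deg))   # Sun incidence on the panel normal
    mu_obs = np.cos(np.radians(obs_angle_deg))   # observer angle from the panel normal
    if mu_sun <= 0 or mu_obs <= 0:
        return np.inf                            # panel face not lit or not visible
    brdf = (c_lambert * mu_sun * mu_obs
            + c_spec * np.exp(-((sun_angle_deg - obs_angle_deg) / 10.0) ** 2))
    flux_ratio = area_m2 * brdf / (np.pi * (range_km * 1e3) ** 2)
    return m_sun - 2.5 * np.log10(flux_ratio)

print(panel_magnitude(range_km=700.0, sun_angle_deg=40.0, obs_angle_deg=35.0))

A fit of such a model to photometry would adjust the area-reflectance product and the widths and weights of the two components; the brighter near-zenith magnitudes quoted above are consistent with the range being smallest there.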
|
In inertial microfluidics colloidal particles in a Poiseuille flow experience
the Segr\'e-Silberberg lift force, which drives them to specific positions in
the channel cross section. Due to the Saffman effect an external force applied
along the microchannel induces a cross-streamline migration to a new
equilibrium position. We apply optimal control theory to design the time
protocol of the axial control force in order to steer a single particle as
precisely as possible from a channel inlet to an outlet at a chosen target
position. We discuss the influence of particle radius and channel length and
show that optimal steering is cheaper than using a constant control force.
Using a single optimized control-force protocol, we demonstrate that even a
pulse of particles spread along the channel axis can be steered to a target and
that particles of different radii can be separated most efficiently.
|
We prove that the treewidth of an Erd\"{o}s-R\'{e}nyi random graph $\rg{n,
m}$ is, with high probability, greater than $\beta n$ for some constant $\beta
> 0$ if the edge/vertex ratio $\frac{m}{n}$ is greater than 1.073. Our lower
bound $\frac{m}{n} > 1.073$ improves the only previously-known lower bound. We
also study the treewidth of random graphs under two other random models for
large-scale complex networks. In particular, our result on the treewidth of
\rigs strengthens a previous observation on the average-case behavior of the
\textit{gate matrix layout} problem. For scale-free random graphs based on the
Barab\'{a}si-Albert preferential-attachment model, our result shows that if
more than 12 vertices are attached to a new vertex, then the treewidth of the
obtained network is linear in the size of the network with high probability.
|
The exponentially growing number of scientific papers stimulates a discussion
on the interplay between quantity and quality in science. In particular, one
may wonder which publication strategy may offer more chances of success:
publishing lots of papers, producing a few hit papers, or something in between.
Here we tackle this question by studying the scientific portfolios of Nobel
Prize laureates. A comparative analysis of different citation-based indicators
of individual impact suggests that the best path to success may rely on
consistently producing high-quality work. Such a pattern is especially rewarded
by a new metric, the $E$-index, which identifies excellence better than
state-of-the-art measures.
|
We report curling self-propulsion in aqueous emulsions of common mesogenic
compounds. Nematic liquid crystal droplets self-propel in a surfactant solution
with concentrations above the critical micelle concentration while undergoing
micellar solubilization. We analyzed trajectories both in a Hele-Shaw geometry
and in a 3D setup at variable buoyancy. The coupling between the nematic
director field and the convective flow inside the droplet leads to a second
symmetry breaking which gives rise to curling motion in 2D. This is
demonstrated through a reversible transition to non-helical persistent swimming
by heating to the isotropic phase. Furthermore, auto-chemotaxis can
spontaneously break the inversion symmetry, leading to helical trajectories.
|
We report results of the first search to date for continuous gravitational
waves from unstable r-modes of the pulsar J0537-6910. We use data from the
first two observing runs of the Advanced LIGO network. We find no significant
signal candidate and set upper limits on the amplitude of gravitational wave
signals, which are within an order of magnitude of the spin-down values. We
highlight the importance of having timing information at the time of the
gravitational wave observations, i.e. rotation frequency and
frequency-derivative values, and glitch occurrence times, such as those that a
NICER campaign could provide.
|
We consider the problem of realizing tight contact structures on closed
orientable three-manifolds. By applying the theorems of Hofer et al., one may
deduce tightness from dynamical properties of (Reeb) flows transverse to the
contact structure. We detail how two classical constructions, Dehn surgery and
branched covering, may be performed on dynamically-constrained links in such a
way as to preserve a transverse tight contact structure.
|
Single-crystal synchrotron x-ray diffraction measurements in strong magnetic
fields have been performed for magnetoelectric compounds GdMnO3 and TbMnO3. It
has been found that the P || a ferroelectric phase induced by the application
of a magnetic field at low temperatures is characterized by commensurate
lattice modulation along the orthorhombic b axis with q = 1/2 and q = 1/4. The
lattice modulation is ascribed to antiferromagnetic spin alignment with a
modulation vector of (0 1/4 1). The change of the spin structure is directly
correlated with the magnetic-field-induced electric phase transition, because
any commensurate spin modulation with (0 1/4 1) should break glide planes
normal to the a axis of the distorted perovskite with the Pbnm space group.
|
The emergence of an effective field theory out of equilibrium is studied in
the case in which a light field --the system-- interacts with very heavy fields
in a finite temperature bath. We obtain the reduced density matrix for the
light field, whose time evolution is determined by an effective action that
includes the \emph{influence action} from correlations of the heavy degrees of
freedom. The non-equilibrium effective field theory yields a Langevin equation
of motion for the light field in terms of dissipative and noise kernels that
obey a generalized fluctuation dissipation relation. These are completely
determined by the spectral density of the bath which is analyzed in detail for
several cases. At $T=0$ we elucidate the effect of thresholds in the
renormalization aspects and the asymptotic emergence of a local effective field
theory with unitary time evolution. At $T\neq 0$ new "anomalous" thresholds
arise, in particular the \emph{decay} of the environmental heavy fields into
the light field leads to \emph{dissipative} dynamics of the light field. Even
when the heavy bath particles are thermally suppressed, this dissipative
contribution leads to the \emph{thermalization} of the light field, which is
confirmed by a quantum kinetics analysis. We obtain the quantum master equation
and show explicitly that its solution in the field basis is precisely the
influence action that determines the effective non-equilibrium field theory.
The Lindblad form of the quantum master equation features \emph{time dependent
dissipative coefficients}. Their time dependence is crucial to extract
renormalization effects at asymptotically long time. The dynamics from the
quantum master equation is in complete agreement with that of the effective
action, Langevin dynamics and quantum kinetics, thus providing a unified
framework to effective field theory out of equilibrium.
|
The reason why the half-integer quantum Hall effect (QHE) is suppressed in
graphene grown by chemical vapor deposition (CVD) is unclear. We propose that
it might be connected to extended defects in the material and present results
for the quantum Hall effect in graphene with [0001] tilt grain boundaries
connecting opposite sides of Hall bar devices. Such grain boundaries contain
5-7 ring complexes that host defect states that hybridize to form bands with
varying degrees of metallicity, depending on the grain boundary defect density. In a
magnetic field, edge states on opposite sides of the Hall bar can be connected
by the defect states along the grain boundary. This destroys Hall resistance
quantization and leads to non-zero longitudinal resistance. Anderson disorder
can partly recover quantization, with current instead flowing along returning
paths along the grain boundary, depending on the defect density in the grain
boundary and on the disorder strength. Since grain sizes in graphene made by
chemical vapor deposition are usually small, this may help explain why the
quantum Hall effect is usually poorly developed in devices made of this
material.
|
We investigate the effects of finite temperature on ultracold Bose atoms
confined in an optical lattice plus a parabolic potential in the Mott insulator
state. In particular, we analyze the temperature dependence of the density
distribution of atomic pairs in the lattice, by means of exact Monte-Carlo
simulations. We introduce a simple model that quantitatively accounts for the
computed pair density distributions at low enough temperatures. We suggest that
the temperature dependence of the atomic pair statistics may be used to
estimate the system's temperature at energies of the order of the atoms'
interaction energy.
|
Delzant's theorem for symplectic toric manifolds says that there is a
one-to-one correspondence between certain convex polytopes in $\mathbb{R}^n$
and symplectic toric $2n$-manifolds, realized by the image of the moment map. I
review proofs of this theorem and the convexity theorem of
Atiyah-Guillemin-Sternberg on which it relies. Then, I describe Honda's results
on the local structure of near-symplectic 4-manifolds, and inspired by recent
work of Gay-Symington, I describe a generalization of Delzant's theorem to
near-symplectic toric 4-manifolds. One interesting feature of the
generalization is the failure of convexity, which I discuss in detail. The
first three chapters are primarily expository, duplicate material found
elsewhere, and may be skipped by anyone familiar with the material, but are
included for completeness.
|
We discuss an approach to signal recovery in Generalized Linear Models (GLM)
in which the signal estimation problem is reduced to the problem of solving a
stochastic monotone variational inequality (VI). The solution to the stochastic
VI can be found in a computationally efficient way, and in the case when the VI
is strongly monotone we derive finite-time upper bounds on the expected
$\|\cdot\|_2^2$ error converging to 0 at the rate $O(1/K)$ as the number $K$ of
observations grows. Our structural assumptions are essentially weaker than
those necessary to ensure convexity of the optimization problem resulting from
Maximum Likelihood estimation. In hindsight, the approach we promote can be
traced back directly to the ideas behind Rosenblatt's perceptron algorithm.
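As a schematic illustration (a minimal sketch under assumed data and step sizes, not the authors' algorithm), a monotone stochastic VI of this kind can be solved by a projected stochastic-approximation iteration, here for a logistic-link GLM:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic GLM data: observations y_i = sigmoid(<a_i, x_true>) + noise.
n, d = 5000, 10
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
y = sigmoid(A @ x_true) + 0.05 * rng.normal(size=n)

# Stochastic VI operator G(x; a, y) = (sigmoid(<a, x>) - y) a; its expectation
# is monotone because the link function is nondecreasing.
x = np.zeros(d)
R = 10.0                                  # radius of the feasible ball X
for k in range(1, n + 1):
    a_k, y_k = A[k - 1], y[k - 1]
    g = (sigmoid(a_k @ x) - y_k) * a_k    # unbiased sample of the VI operator
    x -= g / np.sqrt(k)                   # illustrative step size gamma_k = 1/sqrt(k)
    nrm = np.linalg.norm(x)
    if nrm > R:                           # Euclidean projection back onto X
        x *= R / nrm

print("estimation error:", np.linalg.norm(x - x_true))

Under strong monotonicity, step sizes of order 1/k give the O(1/K) rate quoted above; the sketch uses 1/sqrt(k) only to keep the illustration robust.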
|
Safe and high-speed navigation is a key enabling capability for real world
deployment of robotic systems. A significant limitation of existing approaches
is the computational bottleneck associated with explicit mapping and the
limited field of view (FOV) of existing sensor technologies. In this paper, we
study algorithmic approaches that allow the robot to predict spaces extending
beyond the sensor horizon for robust planning at high speeds. We accomplish
this using a generative neural network trained from real-world data without
requiring human annotated labels. Further, we extend our existing control
algorithms to support leveraging the predicted spaces to improve collision-free
planning and navigation at high speeds. Our experiments are conducted on a
physical robot based on the MIT race car using an RGBD sensor, where we were
able to demonstrate improved performance at 4 m/s compared to a controller not
operating on predicted regions of the map.
|
Event Causality Identification (ECI) aims at determining the existence of a
causal relation between two events. Although recent prompt learning-based
approaches have shown promising improvements on the ECI task, their performance
is often subject to the delicate design of multiple prompts and to the positive
correlations between the main task and derivative tasks. The in-context learning
paradigm provides explicit guidance for label prediction in the prompt learning
paradigm, alleviating its reliance on complex prompts and derivative tasks.
However, it does not distinguish between positive and negative demonstrations
for analogy learning. Motivated by such considerations, this paper proposes
an In-Context Contrastive Learning (ICCL) model that utilizes contrastive
learning to enhance the effectiveness of both positive and negative
demonstrations. Additionally, we apply contrastive learning to event pairs to
better facilitate event causality identification. Our ICCL is evaluated on
widely used corpora, including EventStoryLine and Causal-TimeBank, and
results show significant performance improvements over the state-of-the-art
algorithms.
|
Anonymous communication systems are subject to selective denial-of-service
(DoS) attacks. Selective DoS attacks lower anonymity as they force paths to be
rebuilt multiple times to ensure delivery, which increases the opportunity for
further attacks. In this paper we present a detection algorithm that filters out
compromised communication channels for one of the most widely used anonymity
networks, Tor. Our detection algorithm uses two levels of probing to filter out
potentially compromised tunnels. We perform probabilistic analysis and
extensive simulation to show the robustness of our detection algorithm. We also
analyze the overhead of our detection algorithm and show that we can achieve a
satisfactory security guarantee for reasonable communication overhead (5% of
the total available Tor bandwidth in the worst case). Real world experiments
reveal that our detection algorithm provides good defense against selective DoS
attack.
|
The recent experimental data of anomalous magnetic moments strongly indicate
the existence of new physics beyond the standard model. An energetic $\mu^+$
beam is a potential option for the expected neutrino factories, future muon
colliders, and $\mu$SR (muon spin rotation, resonance, and relaxation)
technology. We propose a prompt acceleration scheme for the $\mu^+$ beam in
a donut wakefield driven by a shaped Laguerre-Gaussian (LG) laser pulse. The
forward part of the donut wakefield can effectively accelerate and focus
positive particle beams. The LG laser is shaped by a near-critical-density
plasma; the shaped pulse has a shorter rise time and enlarges the
acceleration field. The acceleration field driven by a shaped LG laser pulse is
six times higher than that driven by a normal LG laser pulse. The simulation
results show that the $\mu^+$ bunch can be accelerated from $200\ \mathrm{MeV}$
to $2\ \mathrm{GeV}$ and that the transverse size of the $\mu^+$ bunch is focused
from an initial $\omega_0=5\ \mu\mathrm{m}$ to $\omega=1\ \mu\mathrm{m}$ within several picoseconds.
|
Finite electron gyro-radius influences on the trapping and charge density
distribution of electron holes of limited transverse extent are calculated
analytically and explored by numerical orbit integration in low to moderate
magnetic fields. Parallel trapping is shown to depend upon the gyro-averaged
potential energy and to give rise to gyro-averaged charge deficit. Both types
of average are expressible as convolutions with perpendicular Gaussians of
width equal to the thermal gyro-radius. Orbit-following confirms these
phenomena but also confirms for the first time in self-consistent potential
profiles the importance of gyro-bounce-resonance detrapping and consequent
velocity diffusion on stochastic orbits. The averaging strongly reduces the
trapped electron deficit that can be sustained by any potential profile whose
transverse width is comparable to the gyro-radius $r_g$. It effectively
prevents equilibrium widths smaller than $\sim r_g$ for times longer than a
quarter parallel-bounce-period. Avoiding gyro-bounce resonance detrapping is
even more restrictive, except for very small potential amplitudes, but it takes
multiple bounce-periods to act. Quantitative criteria are given for both types
of orbit loss.
|
We consider the problem of approximately solving constraint satisfaction
problems with arity $k > 2$ ($k$-CSPs) on instances satisfying certain
expansion properties, when viewed as hypergraphs. Random instances of $k$-CSPs,
which are also highly expanding, are well-known to be hard to approximate using
known algorithmic techniques (and are widely believed to be hard to approximate
in polynomial time). However, we show that this is not necessarily the case for
instances where the hypergraph is a high-dimensional expander.
We consider the spectral definition of high-dimensional expansion used by
Dinur and Kaufman [FOCS 2017] to construct certain primitives related to PCPs.
They measure the expansion in terms of a parameter $\gamma$ which is the
analogue of the second singular value for expanding graphs. Extending the
results by Barak, Raghavendra and Steurer [FOCS 2011] for 2-CSPs, we show that
if an instance of MAX k-CSP over alphabet $[q]$ is a high-dimensional expander
with parameter $\gamma$, then it is possible to approximate the maximum
fraction of satisfiable constraints up to an additive error $\epsilon$ using
$q^{O(k)} \cdot (k/\epsilon)^{O(1)}$ levels of the sum-of-squares SDP
hierarchy, provided $\gamma \leq \epsilon^{O(1)} \cdot (1/(kq))^{O(k)}$.
Based on our analysis, we also suggest a notion of threshold-rank for
hypergraphs, which can be used to extend the results for approximating 2-CSPs
on low threshold-rank graphs. We show that if an instance of MAX k-CSP has
threshold rank $r$ for a threshold $\tau = (\epsilon/k)^{O(1)} \cdot
(1/q)^{O(k)}$, then it is possible to approximately solve the instance up to
additive error $\epsilon$, using $r \cdot q^{O(k)} \cdot (k/\epsilon)^{O(1)}$
levels of the sum-of-squares hierarchy. As in the case of graphs,
high-dimensional expanders (with sufficiently small $\gamma$) have threshold
rank 1 according to our definition.
|
We report magnetotransport measurements of two-dimensional holes in open
quantum dots, patterned either as a single-dot or an array of dots, on a GaAs
quantum well. For temperatures $T$ below 500 mK, we observe signatures of
coherent transport, namely, conductance fluctuations and weak antilocalization.
From these effects, the hole dephasing time $\tau_\phi$ is extracted using
random matrix theory. While $\tau_\phi$ shows a $T$-dependence that lies
between $T^{-1}$ and $T^{-2}$, similar to that reported for electrons, its
value is found to be approximately one order of magnitude smaller.
|
The article outlines the recent developments in the theoretical and
computational approaches to the higher-order electroweak effects needed for the
accurate interpretation of MOLLER and Belle II experimental data, and shows how
new-physics particles enter at the one-loop level. By analyzing the effects of
a $Z'$ boson on the polarization asymmetry, we show how this hypothetical
interaction carrier may influence future experimental results.
|
The powerset construction is a standard method for converting a
nondeterministic automaton into a deterministic one recognizing the same
language. In this paper, we lift the powerset construction from automata to the
more general framework of coalgebras with structured state spaces. Coalgebra is
an abstract framework for the uniform study of different kinds of dynamical
systems. An endofunctor F determines both the type of systems (F-coalgebras)
and a notion of behavioural equivalence (~_F) amongst them. Many types of
transition systems and their equivalences can be captured by a functor F. For
example, for deterministic automata the derived equivalence is language
equivalence, while for non-deterministic automata it is ordinary bisimilarity.
We give several examples of applications of our generalized determinization
construction, including partial Mealy machines, (structured) Moore automata,
Rabin probabilistic automata, and, somewhat surprisingly, even pushdown
automata. To further witness the generality of the approach, we show how to
characterize coalgebraically several equivalences which have been objects of
interest in the concurrency community, such as failure or ready semantics.
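For readers who want to see the classical instance being generalized, the standard powerset (subset) construction for a nondeterministic automaton looks as follows; this is only the familiar special case, not the coalgebraic construction itself:

from itertools import chain

def determinize(alphabet, delta, start, accepting):
    # Classical subset construction: DFA states are sets of NFA states.
    # delta maps (state, symbol) to a set of successor states.
    start_set = frozenset([start])
    dfa_delta, dfa_accepting = {}, set()
    todo, seen = [start_set], {start_set}
    while todo:
        S = todo.pop()
        if S & accepting:
            dfa_accepting.add(S)
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen, dfa_delta, start_set, dfa_accepting

# NFA over {a, b} accepting words whose second-to-last letter is 'a'.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): {2}, (1, 'b'): {2}}
states, trans, start, acc = determinize('ab', delta, 0, {2})
print(len(states), 'reachable DFA states')  # 4

In the coalgebraic picture, the determinized state space 2^X arises from lifting the NFA coalgebra along the powerset monad, and language equivalence of the NFA coincides with bisimilarity of the determinized system.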
|
The surface states of a topological insulator are described by an emergent
relativistic massless Dirac equation in 2+1 dimensions. In contrast to
graphene, there is an odd number of Dirac points, and the electron spin is
directly coupled to the momentum. We show that a magnetic impurity opens up a
local gap and suppresses the local density of states. Furthermore, the Dirac
electronic states mediate an RKKY interaction among the magnetic impurities
which is always ferromagnetic whenever the chemical potential lies near the
Dirac point. These effects can be directly measured in STM experiments. We also
study the case of quenched disorder through a renormalization group analysis.
|
We propose a new technique for computing dense scene flow from two handheld
videos with wide camera baselines and different photometric properties due to
different sensors or camera settings like exposure and white balance. Our
technique innovates in two ways over existing methods: (1) it supports
independently moving cameras, and (2) it computes dense scene flow for
wide-baseline scenarios. We achieve this by combining state-of-the-art
wide-baseline correspondence finding with a variational scene flow formulation.
First, we compute dense, wide-baseline correspondences using DAISY descriptors
for matching between cameras and over time. We then detect and replace occluded
pixels in the correspondence fields using a novel edge-preserving Laplacian
correspondence completion technique. We finally refine the computed
correspondence fields in a variational scene flow formulation. We show dense
scene flow results computed from challenging datasets with independently
moving, handheld cameras and varying camera settings.
|
The problem of selecting a handful of truly relevant variables in supervised
machine learning algorithms is challenging because of untestable assumptions
that must hold and the lack of theoretical assurances that selection errors are
under control. We propose a distribution-free feature selection method,
referred to as Data Splitting Selection (DSS), which controls the False
Discovery Rate (FDR) of feature selection while achieving high power. Another
version of DSS with higher power, which "almost" controls the FDR, is also
proposed. No assumptions are made on the distribution of the response or on the
joint distribution of the features. Extensive simulation is performed to
compare the performance of the proposed methods with the existing ones.
|
We study the dynamics of a nonlinear one-dimensional disordered system from a
spectral point of view. The spectral entropy and the Lyapunov exponent are
extracted from the short time dynamics, and shown to give a pertinent
characterization of the different dynamical regimes. The chaotic and
self-trapped regimes are governed by log-normal laws whose origin is traced to
the exponential shape of the eigenstates of the linear problem. These
quantities satisfy scaling laws depending on the initial state and explain the
system behaviour at longer times.
|
We show that the longitudinal polarization of the top quarks produced in the
annihilation of e+ e- or mu+ mu- into tbar t at energies near the threshold is
not affected by the large Coulomb-type corrections, which greatly modify the
total cross section. Thus the longitudinal polarization, although small, may
provide independent information on the mass and the width of the top quark,
largely free of the uncertainty in alpha_s.
|
Non-spherical emulsion droplets can be stabilized by densely packed colloidal
particles adsorbed at their surface. In order to understand the microstructure
of these surface packings, the ordering of hard spheres on ellipsoidal surfaces
is determined through large scale computer simulations. Defects in the packing
are shown generically to occur most often in regions of strong curvature;
however, the relationship between defects and curvature is nontrivial, and the
distribution of defects shows secondary maxima for ellipsoids of sufficiently
high aspect ratio. As with packings on spherical surfaces, additional defects
beyond those required by topology are observed as chains or 'scars'. The
transition point, however, is found to be softened by the anisotropic curvature
which also partially orients the scars. A rich library of symmetric
commensurate packings is identified for low particle numbers. We verify
experimentally that ellipsoidal droplets of varying aspect ratio can be
arrested by surface-adsorbed colloids.
|
We determine both the semigroup and spectral properties of a group of
weighted composition operators on the Little Bloch space. It turns out that
these are strongly continuous groups of invertible isometries on the Bloch
space. We then obtain the norm and spectra of the infinitesimal generator as
well as the resulting resolvents which are given as integral operators. As
consequences, we complete the analysis of the adjoint composition group on the
predual of the nonreflexive Bergman space, and of a group of isometries associated
with a specific automorphism of the upper half plane.
|
We present the results of our study of the X-ray emission from the Ophiuchus
galaxy cluster based on INTEGRAL/IBIS data in the energy range 20-120 keV. Our
goal is the search for a nonthermal emission component from the cluster. Using
the INTEGRAL data over the period of observations 2003-2009, we have
constructed the images of the Ophiuchus galaxy cluster in different energy
bands from 20 to 120 keV with the extraction of spectral information. We show
that in the hard X-ray energy band the source is an extended one with an
angular size of 4.9 +/- 0.1 arcmins. Assuming a fixed intracluster gas
temperature of 8.5 keV, a power-law component of the possible nonthermal X-ray
emission is observed at a 5.5 sigma significance level, the flux from which is
consistent with previous studies. However, in view of the uncertainty in
constraining the thermal emission component in the X-ray spectrum at energies
above 20 keV, we cannot assert that the nonthermal emission of the cluster has
been significantly detected. Based on the fact of a confident detection of the
cluster up to 70 keV, we can draw the conclusion only about the possible
presence of a nonthermal excess at energies above 60 keV.
|
Answering compositional questions requiring multi-step reasoning is
challenging. We introduce an end-to-end differentiable model for interpreting
questions about a knowledge graph (KG), which is inspired by formal approaches
to semantics. Each span of text is represented by a denotation in a KG and a
vector that captures ungrounded aspects of meaning. Learned composition modules
recursively combine constituent spans, culminating in a grounding for the
complete sentence which answers the question. For example, to interpret "not
green", the model represents "green" as a set of KG entities and "not" as a
trainable ungrounded vector---and then uses this vector to parameterize a
composition function that performs a complement operation. For each sentence,
we build a parse chart subsuming all possible parses, allowing the model to
jointly learn both the composition operators and output structure by gradient
descent from end-task supervision. The model learns a variety of challenging
semantic operators, such as quantifiers, disjunctions and composed relations,
and infers latent syntactic structure. It also generalizes well to longer
questions than seen in its training data, in contrast to RNN, its tree-based
variants, and semantic parsing baselines.
|
Thermodynamics and information have intricate inter-relations. The
justification that information is physical rests on linking information and
thermodynamics through Landauer's principle. This modern approach to
information has recently improved our understanding of thermodynamics, in both
the classical and quantum domains. Here we
show thermodynamics as a consequence of information conservation. Our approach
can be applied to the most general situations, where systems and thermal baths
may be quantum, of arbitrary size, and may even possess inter-system
correlations. The approach does not rely on an a priori predetermined
temperature associated with a thermal bath, which is not meaningful for
finite-size cases. Hence, thermal baths and systems are not treated
differently; rather, both are treated on an equal footing. This results in a
"temperature"-independent formulation of thermodynamics. We exploit the fact
that, for a fixed amount of coarse-grained information, measured by the von
Neumann entropy, any system can be transformed to a state that possesses
minimal energy without changing its entropy. This state is known as a
completely passive state, which assumes the Boltzmann-Gibbs canonical form with an
intrinsic temperature. This leads us to introduce the notions of bound and free
energy, which we further use to quantify heat and work respectively. With this
guiding principle of information conservation, we develop universal notions of
equilibrium, heat and work, Landauer's principle and also universal fundamental
laws of thermodynamics. We show that the maximum efficiency of a quantum
engine equipped with finite baths is in general lower than that of an ideal
Carnot engine. We also introduce a resource-theoretic framework for
intrinsic-temperature based thermodynamics, within which we address the problem
of work extraction and state transformations.
|
Python currently is the dominant language in the field of Machine Learning
but is often criticized for being slow to perform certain tasks. In this
report, we use the well-known $N$-queens puzzle as a benchmark to show that,
once compiled using the Numba compiler, Python code becomes competitive with
C++ and Go in terms of execution speed while still allowing for very fast prototyping.
This is true of both sequential and parallel programs. In most cases that arise
in an academic environment, it therefore makes sense to develop in ordinary
Python, identify computational bottlenecks, and use Numba to remove them.
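For concreteness (a minimal sketch of the kind of kernel being benchmarked, not the exact code used in the report), a bitmask backtracking counter for the $N$-queens puzzle can be JIT-compiled with Numba as follows:

from numba import njit

@njit
def count_solutions(n, row=0, cols=0, diag1=0, diag2=0):
    # Count N-queens solutions by recursive backtracking with bitmasks:
    # cols/diag1/diag2 mark attacked columns and diagonals for the current row.
    if row == n:
        return 1
    total = 0
    free = ~(cols | diag1 | diag2) & ((1 << n) - 1)   # columns still available
    while free:
        bit = free & -free                             # lowest available column
        free ^= bit
        total += count_solutions(n, row + 1,
                                 cols | bit,
                                 (diag1 | bit) << 1,
                                 (diag2 | bit) >> 1)
    return total

print(count_solutions(8))   # 92 solutions for the classic 8-queens puzzle

The same function runs unmodified without the decorator in plain Python, which is what makes the before/after timing comparison in such benchmarks straightforward.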
|
We formulate a model of the conditional colour-magnitude distribution (CCMD)
to describe the distribution of galaxy luminosity and colour as a function of
halo mass. It consists of two populations of different colour distributions,
dubbed pseudo-blue and pseudo-red, respectively, with each further separated
into central and satellite galaxies. We define a global parameterization of
these four colour-magnitude distributions and their dependence on halo mass,
and we infer parameter values by simultaneously fitting the space densities and
auto-correlation functions of 79 galaxy samples from the Sloan Digital Sky
Survey defined by fine bins in the colour-magnitude diagram (CMD). The model
deprojects the overall galaxy CMD, revealing its tomograph along the halo mass
direction. The bimodality of the colour distribution is driven by central
galaxies at most luminosities, though at low luminosities it is driven by the
difference between blue centrals and red satellites. For central galaxies, the
two pseudo-colour components are distinct and orthogonal to each other in the
CCMD: at fixed halo mass, pseudo-blue galaxies have a narrow luminosity range
and broad colour range, while pseudo-red galaxies have a narrow colour range
and broad luminosity range. For pseudo-blue centrals, luminosity correlates
tightly with halo mass, while for pseudo-red galaxies colour correlates more
tightly (redder galaxies in more massive haloes). The satellite fraction is
higher for redder and for fainter galaxies, with colour a stronger indicator
than luminosity. We discuss the implications of the results and further
applications of the CCMD model.
|
We present a pulsar candidate identification and confirmation procedure based
on a position-switch mode during the pulsar search observations. This method
enables the simultaneous search and confirmation of a pulsar in a single
observation, by utilizing the different spatial features of a pulsar signal and
of radio frequency interference (RFI). Based on this method, we performed test
pulsar search observations in globular clusters M3, M15, and M92. We discovered
and confirmed a new pulsar, M3F, and detected the known pulsars M3B, M15 A to G
(except C), and M92A.
|
The main goal of the paper is to connect matrix polynomial biorthogonality on
a contour in the plane with a suitable notion of scalar, multi-point Pad\'e
approximation on an arbitrary Riemann surface endowed with a rational map to
the Riemann sphere. To this end we introduce an appropriate notion of (scalar)
multi-point Pad\'e\ approximation on a Riemann surface and corresponding notion
of biorthogonality of sections of the semi-canonical bundle
(half-differentials). Several examples are offered in illustration of the new
notions.
|
Temperature dependencies of the impurity magnetic susceptibility, entropy,
and heat capacity have been obtained by the method of numerical renormalization
group and exact diagonalization for the Kondo model with peaks in the electron
density of states near the Fermi energy (in particular, with logarithmic Van
Hove singularities). It is shown that these quantities can be {\it negative}. A
new effect has been predicted (which, in principle, can be observed
experimentally), namely, the decrease in the magnetic susceptibility and heat
capacity of a nonmagnetic sample upon the addition of magnetic impurities into
it.
|
We use a minimal zero-range model for describing the bound state spectrum of
three-body states consisting of two Cesium and one Lithium atom. Using a broad
Feshbach resonance model for the two-body interactions, we show that recent
experimental data can be described surprisingly well for particular values of
the three-body parameter that governs the short-range behavior of the atomic
potentials and is outside the scope of the zero-range model. Studying the
spectrum as a function of the three-body parameter suggests that the lowest
state seen in experiment could be influenced by finite range corrections. We
also consider the question of Fermi degeneracy and corresponding Pauli blocking
of the Lithium atoms on the Efimov states.
|
Ground-based adaptive optics (AO) in the infrared has made exceptional
advances in approaching space-like image quality at higher collecting area.
Optical-wavelength applications are now also growing in scope. We therefore
provide here a comparison of the pros and cons of observational capabilities
from the ground and from space at optical wavelengths. With an eye towards the
future, we focus on the comparison of a ~30m ground-based telescope with an
8-16m space-based telescope. We review the current state-of-the-art in AO, and
summarize the expected future improvements in image quality, field of view,
contrast, and low-wavelength cut-off. We discuss the exciting advances in
extreme AO for exoplanet studies and explore what the theoretical limitations
in achievable contrast might be. Our analysis shows that extreme AO techniques
face both fundamental and technological hurdles to reach the contrast of 1E-10
necessary to study an Earth-twin at 10 pc. Based on our assessment of the
current state-of-the-art, the future technology developments, and the inherent
difficulty of observing through a turbulent atmosphere, we conclude that there
will continue to be a strong complementarity between observations from the
ground and from space at optical wavelengths in the coming decades. There will
continue to be subjects that can only be studied from space, including imaging
and (medium-resolution) spectroscopy at the deepest magnitudes, and the
exceptional-contrast observations needed to characterize terrestrial exoplanets
and search for biomarkers.
|
In the traditional random-conformational-search model, various hypotheses
with a series of meta-stable intermediate states were often proposed to resolve
the Levinthal paradox. Here we introduce a quantum strategy to formulate
protein folding as a quantum walk on a definite graph, which provides us with a
general framework without making hypotheses. Evaluating it by the mean first
passage time, we find that the folding time via our quantum approach is much
shorter than the one obtained via classical random walks. This idea is expected
to evoke more insights for future studies.
|
At hole concentrations below x=0.4, Ba_(1-x)K_xBiO_3 is non-metallic. At x=0,
pure BaBiO3 is a Peierls insulator. Very dilute holes create bipolaronic point
defects in the Peierls order parameter. Here we find that the Rice-Sneddon
version of Peierls theory predicts that more concentrated holes should form
stacking faults (two-dimensional topological defects, called slices) in the
Peierls order parameter. However, the long-range Coulomb interaction, left out
of the Rice-Sneddon model, destabilizes slices in favor of point bipolarons at
low concentrations, leaving a window near 30% doping where the sliced state is
marginally stable.
|
We propose a new parton theory of the hole-doped cuprates, describing the
evolution from the pseudogap metal with small Fermi surfaces to the
conventional Fermi liquid with a large Fermi surface. We introduce two ancilla
qubits per square lattice site, and employ them to obtain a variational
wavefunction of a fractionalized Fermi liquid for the pseudogap metal state. We
propose a multi-layer Hamiltonian for the cuprates, with the electrons residing
in the 'physical' layer, and the ancilla qubits in two 'hidden' layers: the
hidden layers can be decoupled from the physical layer by a canonical
transformation which leaves the hidden layers in a trivial gapped state. This
Hamiltonian yields an emergent gauge theory which describes not only the
fractionalized Fermi liquid, but also the conventional Fermi liquid, and
possible exotic intermediate phases and critical points. The fractionalized
Fermi liquid has hole pockets with quasiparticle weight which is large only on
"Fermi arcs", and fermionic spinon excitations which carry charges of the
emergent gauge fields.
|
Creating resilient machine learning (ML) systems has become necessary to
ensure production-ready ML systems that acquire user confidence seamlessly. The
quality of the input data and the model highly influence the successful
end-to-end testing in data-sensitive systems. However, the testing approaches
of input data are not as systematic and are few compared to model testing. To
address this gap, this paper presents the Fault Injection for Undesirable
Learning in input Data (FIUL-Data) testing framework that tests the resilience
of ML models to multiple intentionally-triggered data faults. Data mutators
explore vulnerabilities of ML systems against the effects of different fault
injections. The proposed framework is designed around three main ideas: the
mutators are not random; one data mutator is applied at a time; and the
selected ML models are optimized beforehand. This paper evaluates the
FIUL-Data framework using data from analytical chemistry, comprising retention
time measurements of anti-sense oligonucleotide. Empirical evaluation is
carried out in a two-step process in which the responses of selected ML models
to data mutation are analyzed individually and then compared with each other.
The results show that the FIUL-Data framework allows the evaluation of the
resilience of ML models. In most experiments, ML models show higher
resilience with larger training datasets, and gradient boosting performed better
than support vector regression on smaller training sets. Overall, the mean
squared error metric is useful in evaluating the resilience of models due to
its higher sensitivity to data mutation.
|
Although there have been many studies of schizophrenia under the framework
of predictive coding, work focusing on treatment is still very preliminary. A
model-oriented, operationalist, and comprehensive understanding of
schizophrenia would promote the therapy turn of further research. We summarize
predictive coding models of embodiment, co-occurrence of over- and
under-weighting priors, subjective time processing, language production or
comprehension, self-or-other inference, and social interaction. Corresponding
impairments and clinical manifestations of schizophrenia are reviewed under
these models at the same time. Finally, we discuss why and how to inaugurate a
therapy turn of further research under the framework of predictive coding.
|
We study transformations of the dynamical fields - a metric, a flat affine
connection and a scalar field - in scalar-teleparallel gravity theories. The
theories we study belong either to the general teleparallel setting, where no
further condition besides vanishing curvature is imposed on the affine
connection, or the symmetric or metric teleparallel gravity, where one also
imposes vanishing torsion or nonmetricity, respectively. For each of these
three settings, we find a general class of scalar-teleparallel action
functionals which retain their form under the aforementioned field
transformations. This is achieved by generalizing the constraint of vanishing
torsion or nonmetricity to non-vanishing, but algebraically constrained torsion
or nonmetricity. We find a number of invariant quantities which characterize
these theories independently of the choice of field variables, and relate these
invariants to analogues of the conformal frames known from scalar-curvature
gravity. Using these invariants, we are able to identify a number of physically
relevant subclasses of scalar-teleparallel theories. We also generalize our
results to multiple scalar fields, and speculate on further extended theories
with non-vanishing, but algebraically constrained curvature.
|
Harary's conjecture $r(C_3,G)\leq 2q+1$ for every graph $G$ with $q$ edges and
no isolated vertices was proved independently by Sidorenko and by Goddard and
Kleitman. In this paper, instead of $C_3$, we consider $K_{2,k}$ and seek a sharp upper bound
for $r(K_{2,k},G)$ over all graphs $G$ with $q$ edges. More specifically if
$q\geq 2$, we will show that $r(C_4,G)\leq kq+1$ and that equality holds if $G
\cong qK_2$ or $K_3$. Using this we will generalize this result for
$r(K_{2,k},G)$ when $k>2$. We will also show that for every graph $G$ with $q
\geq 2$ edges and with no isolated vertices, $r(C_4, G) \leq 2p+ q - 2$ where
$p=|V(G)|$ and that equality holds if $G \cong K_3$.
|
In this work we discuss observational aspects of three time-dependent
parameterisations of the dark energy equation of state $w(z)$. In order to
determine the dynamics associated with these models, we calculate their
background evolution and perturbations in a scalar field representation. After
performing a complete treatment of linear perturbations, we also show that the
non-linear contribution of the selected $w(z)$ parameterisations to the matter
power spectra is almost the same for all scales, with no significant difference
from the predictions of the standard $\Lambda$CDM model.
|
A common approach to improve software quality is to use programming
guidelines to avoid common kinds of errors. In this paper, we consider the
problem of enforcing guidelines for Featherweight Java (FJ). We formalize
guidelines as sets of finite or infinite execution traces and develop a
region-based type and effect system for FJ that can enforce such guidelines. We
build on the work by Erbatur, Hofmann and Z\u{a}linescu, who presented a type
system for verifying the finite event traces of terminating FJ programs. We
refine this type system, separating region typing from FJ typing, and use ideas
of Hofmann and Chen to extend it to capture also infinite traces produced by
non-terminating programs. Our type and effect system can express properties of
both finite and infinite traces and can compute information about the possible
infinite traces of FJ programs. Specifically, the set of infinite traces of a
method is constructed as the greatest fixed point of the operator which
calculates the possible traces of method bodies. Our type inference algorithm
is realized by working with the finitary abstraction of the system based on
B\"uchi automata.
|
In high density quark matter under a strong external magnetic field, possible
phases are investigated by using the two-flavor Nambu-Jona-Lasinio model with
tensor-type four-point interaction between quarks, as well as the
axial-vector-type four-point interaction. In the tensor-type interaction under
the strong external magnetic field, it is shown that a quark spin polarized
phase is realized in all regions of the quark chemical potential under
consideration within the lowest Landau level approximation. In the
axial-vector-type interaction, it is also shown that the quark spin polarized
phase appears in a wide range of the quark chemical potential. For both
interactions, the quark mass increases in the zero and small chemical potential
regions, which indicates that chiral symmetry breaking is enhanced, namely that
magnetic catalysis occurs.
|
We introduce a class of continuous maps $f$ of a compact topological space
$X$ admitting inducing schemes of hyperbolic type and describe the associated
tower constructions. We then establish a thermodynamic formalism, i.e., we
describe a class of real-valued potential functions $\varphi$ on $X$ such that
$f$ possesses a unique equilibrium measure $\mu_\varphi$, associated to each
$\varphi$, which minimizes the free energy among the measures that are liftable
to the tower. We also describe some ergodic properties of equilibrium measures
including decay of correlations and the central limit theorem. We then study
the liftability problem and show that under some additional assumptions on the
inducing scheme every measure that charges the base of the tower and has
sufficiently large entropy is liftable. Our results extend those obtained in
[PS05, PS08] for inducing schemes of expanding types and apply to certain
multidimensional maps. Applications include obtaining the thermodynamic
formalism for Young's diffeomorphisms, the H\'enon family at the first
bifurcation, and the Katok map. In particular, for the Katok map we obtain the
exponential decay of correlations for equilibrium measures associated to the
geometric potentials with $0\le t<1$.
|
Nuclear norm maximization has shown its power to enhance the transferability
of unsupervised domain adaptation (UDA) models in an empirical scheme. In this
paper, we identify a new property termed equity, which indicates the balance
degree of predicted classes, to demystify the efficacy of nuclear norm
maximization for UDA theoretically. With this in mind, we offer a new
discriminability-and-equity maximization paradigm built on squares loss, such
that predictions are equalized explicitly. To verify its feasibility and
flexibility, two new losses termed Class Weighted Squares Maximization (CWSM)
and Normalized Squares Maximization (NSM), are proposed to maximize both
predictive discriminability and equity, from the class level and the sample
level, respectively. Importantly, we theoretically relate these two novel
losses (i.e., CWSM and NSM) to the equity maximization under mild conditions,
and empirically suggest the importance of the predictive equity in UDA.
Moreover, the equity constraints can be realized very efficiently in both
losses. Experiments on cross-domain image classification over three popular
benchmark datasets show that both CWSM and NSM contribute to outperforming the
corresponding counterparts.
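To illustrate the basic squares-loss ingredient (a generic sketch of a squares-maximization objective; the exact CWSM and NSM formulations in the paper may differ), maximizing the per-sample sum of squared softmax probabilities rewards confident, low-entropy predictions:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def squares_maximization_loss(logits):
    # Generic squares-maximization term (illustrative only): minimizing this
    # value maximizes sum_c p_c^2 for each sample, favoring confident outputs.
    p = softmax(logits)
    return -np.mean(np.sum(p ** 2, axis=1))

confident = np.array([[5.0, 0.0, 0.0]])
uncertain = np.array([[1.0, 1.0, 1.0]])
print(squares_maximization_loss(confident) < squares_maximization_loss(uncertain))  # True

A class-weighted or normalized variant could additionally reweight or rescale this term so that no single class dominates the predictions, which is presumably how the equity constraint enters.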
|
Multitask learning aims at solving a set of related tasks simultaneously, by
exploiting the shared knowledge for improving the performance on individual
tasks. Hence, an important aspect of multitask learning is to understand the
similarities within a set of tasks. Previous works have incorporated this
similarity information explicitly (e.g., weighted loss for each task) or
implicitly (e.g., adversarial loss for feature adaptation), for achieving good
empirical performances. However, the theoretical motivations for adding task
similarity knowledge are often missing or incomplete. In this paper, we give a
different perspective from a theoretical point of view to understand this
practice. We first provide an upper bound on the generalization error of
multitask learning, showing the benefit of explicit and implicit task
similarity knowledge. We systematically derive the bounds based on two distinct
task similarity metrics: H divergence and Wasserstein distance. From these
theoretical results, we revisit the Adversarial Multi-task Neural Network,
proposing a new training algorithm to learn the task relation coefficients and
neural network parameters iteratively. We assess our new algorithm empirically
on several benchmarks, showing not only that we find interesting and robust
task relations, but that the proposed approach outperforms the baselines,
reaffirming the benefits of theoretical insight in algorithm design.
|
In the framework of a one-dimensional model with a tightly localized
self-attractive nonlinearity, we study the formation and transfer (dragging) of
a trapped mode by "nonlinear tweezers", as well as the scattering of coherent
linear wave packets on the stationary localized nonlinearity. The use of the
nonlinear trap for the dragging allows one to pick up and transfer the relevant
structures without grabbing surrounding "garbage". A stability border for the
dragged modes is identified by means of analytical estimates and systematic
simulations. In the framework of the scattering problem, the shares of trapped,
reflected, and transmitted wave fields are found. Quasi-Airy stationary modes
with a divergent norm, that may be dragged by the nonlinear trap moving at a
constant acceleration, are briefly considered too.
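A representative equation of this class (stated here as an assumed generic form, since the abstract does not display the model) is a nonlinear Schr\"odinger equation whose self-attractive nonlinearity is concentrated at a moving point $x=\xi(t)$:
\[
  i\,\frac{\partial\psi}{\partial t} \;=\; -\frac{1}{2}\,\frac{\partial^2\psi}{\partial x^2}
  \;-\; \varepsilon\,\delta\bigl(x-\xi(t)\bigr)\,|\psi|^{2}\psi ,
\]
with $\xi(t)$ constant for the scattering problem, moving at constant velocity for the dragging ("tweezers") problem, and moving at constant acceleration for the quasi-Airy modes.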
|
For a right-angled Artin group $A_\Gamma$, the untwisted outer automorphism
group $U(A_\Gamma)$ is the subgroup of $Out(A_\Gamma)$ generated by all of the
Laurence-Servatius generators except twists (where a {\em twist} is an
automorphism of the form $v\mapsto vw$ with $vw=wv$). We define a space
$\Sigma_\Gamma$ on which $U(A_\Gamma)$ acts properly and prove that
$\Sigma_\Gamma$ is contractible, providing a geometric model for $U(A_\Gamma)$
and its subgroups. We also propose a geometric model for all of $Out(A_\Gamma)$
defined by allowing more general markings and metrics on points of
$\Sigma_\Gamma$.
|
We prove some new semi-finite forms of bilateral basic hypergeometric series.
One of them yields in a direct limit Bailey's celebrated ${}_6\psi_6$ summation
formula, answering a question recently raised by Chen and Fu ({\em Semi-Finite
Forms of Bilateral Basic Hypergeometric Series}, Proc. Amer. Math. Soc., to
appear).
|
This paper presents an efficient numerical technique for solving
multi-dimensional fractional optimal control problems using fractional-order
generalized Bernoulli wavelets. The numerical results obtained by this method
have been compared with the results obtained by the method using orthonormal
Bernoulli wavelets. Using fractional-order generalized Bernoulli wavelets,
product and integral operational matrices have been obtained. By using these
operational matrices, the multi-dimensional fractional optimal control problems
have been reduced into a system of algebraic equations. To confirm the
efficiency of the proposed numerical technique involving fractional-order
generalized Bernoulli wavelets, numerical problems have been solved using
both orthonormal Bernoulli wavelets and fractional-order generalized Bernoulli
wavelets, and approximate cost-function values have been obtained by
approximating the state and control functions. In addition, the convergence rate
and error bound of the proposed numerical method have also been derived.
|
In this paper, we investigate the emergence of a ratio-dependent
predator-prey system with Michaelis-Menten-type functional response and
reaction-diffusion. We derive the conditions for Hopf, Turing and Wave
bifurcation on a spatial domain. Furthermore, we present a theoretical analysis
of evolutionary processes involving the distribution of organisms and the
interaction of spatially distributed populations with local diffusion. The
results of numerical simulations reveal that the typical dynamics of population
density variation is the formation of isolated groups, i.e., stripe-like or
spotted patterns or a coexistence of both. Our study shows that the spatially extended
model exhibits not only more complex dynamic patterns in space, but also chaos
and spiral waves. It may help us better understand the dynamics of an aquatic
community in a real marine environment.
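For concreteness (a commonly used form of such a system, given here as an assumption since the abstract does not display the equations), the ratio-dependent model with Michaelis-Menten functional response and diffusion can be written as
\[
  \frac{\partial u}{\partial t} = D_1\nabla^2 u + r\,u\Bigl(1-\frac{u}{K}\Bigr) - \frac{c\,u\,v}{u+m\,v},
  \qquad
  \frac{\partial v}{\partial t} = D_2\nabla^2 v + v\Bigl(\frac{f\,u}{u+m\,v} - d\Bigr),
\]
where $u$ and $v$ are the prey and predator densities; the functional response depends only on the ratio $u/v$, which is what distinguishes the ratio-dependent formulation from prey-dependent ones.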
|
Variational approaches to data assimilation, and weakly constrained
four-dimensional variational assimilation (WC-4DVar) in particular, are
important in the geosciences but also in other communities (often under
different names). The cost functions and the resulting optimal trajectories may
have a probabilistic interpretation, for instance by linking data assimilation
with Maximum A Posteriori (MAP) estimation. This is possible in particular if the unknown
trajectory is modelled as the solution of a stochastic differential equation
(SDE), as is increasingly the case in weather forecasting and climate
modelling. In this case, the MAP estimator (or "most probable path" of the SDE)
is obtained by minimising the Onsager--Machlup functional. Although this fact
is well known, there seems to be some confusion in the literature, with the
energy (or "least squares") functional sometimes been claimed to yield the most
probable path. The first aim of this paper is to address this confusion and
show that the energy functional does not, in general, provide the most probable
path. The second aim is to discuss the implications in practice. Although the
mentioned results pertain to stochastic models in continuous time, they do have
consequences in practice, where SDEs are approximated by discrete-time schemes.
It turns out that using an approximation to the SDE and calculating its most
probable path does not necessarily yield a good approximation to the most
probable path of the SDE proper. This suggests that even in discrete time, a
version of the Onsager--Machlup functional should be used, rather than the
energy functional, at least if the solution is to be interpreted as a MAP
estimator.
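To make the distinction concrete, consider the simple case of a scalar SDE $dX_t = b(X_t)\,dt + \sigma\,dW_t$ with constant $\sigma$ (a standard textbook form, not necessarily the paper's notation). The energy functional and the Onsager--Machlup functional then differ by a divergence term:
\[
  J_{\mathrm{energy}}[x] = \frac{1}{2\sigma^2}\int_0^T \bigl|\dot{x}(t)-b(x(t))\bigr|^2\,dt ,
  \qquad
  J_{\mathrm{OM}}[x] = \frac{1}{2}\int_0^T \Bigl(\frac{\bigl|\dot{x}(t)-b(x(t))\bigr|^2}{\sigma^2} + b'(x(t))\Bigr)\,dt ,
\]
so minimising the energy functional recovers the most probable path only when the extra term $b'(x)$ (or $\nabla\!\cdot b$ in higher dimensions) is effectively constant along the candidate trajectories.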
|
Non-Markov processes exist widely in thermodynamic processes, yet mimicking
such functions in traditional device architectures usually requires packing
many transistors and memories, with great system complexity.
Two-dimensional (2D) material-based resistive random access memory (RRAM)
devices show potential for next-generation computing systems with much-reduced
complexity. Here, we achieve the non-Markov chain in an individual RRAM device
based on 2D mica with a vertical metal/mica/metal structure. We find that the
internal potassium ions (K+) in 2D mica gradually move along the direction of
the applied electric field, making the initially insulating mica conductive.
The accumulation of K+ is tuned by electrical field, and the 2D-mica RRAM
possesses both unipolar and bipolar memory windows, a high on/off ratio, and
decent stability and repeatability. Importantly, the non-Markov chain algorithm
is established for the first time in a single RRAM, in which the movement of K+
depends on the stimulated voltage as well as on its past states. This work not
only uncovers the inner ionic conductivity of 2D mica, but also opens the door
for such novel RRAM devices with numerous functions and applications.
|
It is widely believed that the large redshifts for distant supernovae are
explained by the vacuum energy dominance, or, in other words, by the
cosmological constant in Einstein's equations, which is responsible for the
anti-gravitation effect. A tacit assumption is that particles move along a
geodesic for the background metric. This is in the same spirit as the consensus
regarding the uniform Galilean motion of a free electron. However, there is a
runaway solution to the Lorentz--Dirac equation governing the behavior of a
radiating electron, in addition to the Galilean solution. Likewise, a runaway
solution to the entire system of equations, including both the gravitational field equations and the matter equations of motion, may provide an alternative explanation for the accelerated expansion of the Universe, without recourse to the hypothetical cosmological constant.
|
The longest common prefix array is a very advantageous data structure that,
combined with the suffix array and the Burrows-Wheeler transform, allows one to
efficiently compute some combinatorial properties of a string useful in several
applications, especially in biological contexts. Nowadays, the input data for
many problems are big collections of strings, for instance the data coming from
"next-generation" DNA sequencing (NGS) technologies. In this paper we present
the first lightweight algorithm (called extLCP) for the simultaneous
computation of the longest common prefix array and the Burrows-Wheeler
transform of a very large collection of strings having any length. The
computation is realized by performing disk data accesses only via sequential
scans, and the total disk space usage never exceeds twice the output size, excluding the disk space required for the input. Moreover, extLCP can also compute the suffix array of the collection, without requiring any further data structure. Finally, we test our algorithm on real
data and compare our results with another tool capable of working in external
memory on large collections of strings.
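To make the three structures named above concrete, here is a tiny in-memory Python illustration of the outputs that extLCP computes in external memory for huge collections: the suffix array, BWT, and LCP array of a collection of strings joined with per-string end-markers. The naive suffix sort is quadratic and is only meant to spell out the definitions; it is not the paper's lightweight algorithm.

```python
# In-memory sketch of the structures extLCP computes externally (definitions only).
def collection_sa_bwt_lcp(strings):
    # Use chr(1), chr(2), ... as distinct end-markers smaller than any letter.
    text = "".join(s + chr(1 + i) for i, s in enumerate(strings))
    n = len(text)
    sa = sorted(range(n), key=lambda i: text[i:])          # naive suffix array
    bwt = "".join(text[i - 1] if i else text[-1] for i in sa)
    # Kasai's algorithm: lcp[r] = lcp(suffix at rank r, suffix at rank r-1).
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp, h = [0] * n, 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            h = max(h - 1, 0)
    return sa, bwt, lcp

print(collection_sa_bwt_lcp(["banana", "ananas"]))
```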
|
Clusters of galaxies evolve and accrete mass, mostly from small galaxy
systems. Our aim is to study the velocity field of the galaxy cluster Abell
780, which is known for the powerful radio source Hydra A at its center and
where a spectacular X-ray tail associated with the galaxy LEDA 87445 has been
discovered. Our analysis is based on the new spectroscopic data for hundreds of
galaxies obtained with the Italian Telescopio Nazionale {\em Galileo} and the
Very Large Telescope. We have constructed a redshift catalog of 623 galaxies
and selected a sample of 126 cluster members. We analyze the internal structure
of the cluster using a number of techniques. We estimate the mean redshift
z=0.0545, the line-of-sight velocity dispersion sigmav about 800 km/s, and the
dynamical mass M200 of about 5.4x10^14 solar masses. The global properties of
Abell 780 are typical of relaxed clusters. On a smaller scale, we can detect
the presence of a galaxy group associated with LEDA 87445 in projected phase
space. The mean velocity and position of the center of the group agree well
with the velocity and position of LEDA 87445. We estimate the following parameters of the collision. The group moves at a higher velocity relative to the main system: it is infalling at a rest-frame velocity of
Vrf=+870 km/s and lies at a projected distance D=1.1 Mpc to the south, slightly
southeast of the cluster center. The mass ratio between the group and the
cluster is about 1:5. We also find evidence of an asymmetry in the velocity
distribution of galaxies in the inner cluster region, which might be related to
a small low-velocity group detected as a substructure at Vrf=-750 km/s. We
conclude that A780, although dynamically relaxed at first sight, contains small
substructures that may have some impact on the energetics of the core region.
|
Aspect-based sentiment analysis (ABSA) delves into understanding sentiments
specific to distinct elements within a user-generated review. It aims to
analyze user-generated reviews to determine a) the target entity being
reviewed, b) the high-level aspect to which it belongs, c) the sentiment words
used to express the opinion, and d) the sentiment expressed toward the targets
and the aspects. While various benchmark datasets have fostered advancements in
ABSA, they often come with domain limitations and data granularity challenges.
Addressing these, we introduce the OATS dataset, which encompasses three fresh
domains and consists of 27,470 sentence-level quadruples and 17,092
review-level tuples. Our initiative seeks to bridge specific observed gaps: the
recurrent focus on familiar domains like restaurants and laptops, limited data
for intricate quadruple extraction tasks, and an occasional oversight of the
synergy between sentence and review-level sentiments. Moreover, to elucidate
OATS's potential and shed light on various ABSA subtasks that OATS can solve,
we conducted experiments, establishing initial baselines. We hope the OATS
dataset augments current resources, paving the way for an encompassing
exploration of ABSA (https://github.com/RiTUAL-UH/OATS-ABSA).
|
Observations of exoplanet atmospheres in high resolution have the potential
to resolve individual planetary absorption lines, despite the issues associated
with ground-based observations. The removal of contaminating stellar and
telluric absorption features is one of the most sensitive steps required to
reveal the planetary spectrum and, while many different detrending methods
exist, it remains difficult to directly compare the performance and efficacy of
these methods. Additionally, though the standard cross-correlation method
enables robust detection of specific atmospheric species, it only probes for
features that are expected a priori. Here we present a novel methodology using
Gaussian process (GP) regression to directly model the components of
high-resolution spectra, which partially addresses these issues. We use two
archival CRIRES/VLT data sets as test cases, observations of the hot Jupiters
HD 189733 b and 51 Pegasi b, recovering injected signals with average line
contrast ratios of $\sim 4.37 \times 10^{-3}$ and $\sim 1.39 \times 10^{-3}$,
and planet radial velocities $\Delta K_\mathrm{p} =1.45 \pm
1.53\,\mathrm{km\,s^{-1}}$ and $\Delta
K_\mathrm{p}=0.12\pm0.12\,\mathrm{km\,s^{-1}}$ from the injection velocities
respectively. In addition, we demonstrate an application of the GP method to
assess the impact of the detrending process on the planetary spectrum, by
implementing injection-recovery tests. We show that standard detrending methods
used in the literature negatively affect the amplitudes of absorption features
in particular, which has the potential to render retrieval analyses inaccurate.
Finally, we discuss possible limiting factors for the non-detections using this
method, likely to be remedied by higher signal-to-noise data.
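As a generic illustration only (not the authors' model or kernels), the sketch below fits a Gaussian process with a smooth kernel to a single simulated spectral order so that the GP captures the slowly varying stellar/telluric envelope while narrow features survive in the residuals. The kernel choice, length-scale bounds, and noise level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
wave = np.linspace(2287.0, 2291.0, 300)[:, None]                 # wavelength grid (placeholder units)
envelope = 1.0 - 0.3 * np.exp(-((wave[:, 0] - 2289.0) / 1.0) ** 2)   # broad stellar/telluric feature
narrow = -0.01 * np.exp(-((wave[:, 0] - 2288.3) / 0.02) ** 2)        # narrow planet-like line
flux = envelope + narrow + 0.003 * rng.standard_normal(wave.shape[0])

# Keep the RBF length scale bounded away from zero so the GP stays smooth.
kernel = 1.0 * RBF(length_scale=0.5, length_scale_bounds=(0.2, 10.0)) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(wave, flux)

residuals = flux - gp.predict(wave)      # narrow features survive this detrending
print("residual rms:", residuals.std())
```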
|
Thermodynamics is a highly successful macroscopic theory widely used across
the natural sciences and for the construction of everyday devices, from car
engines and fridges to power plants and solar cells. With thermodynamics
predating quantum theory, research now aims to uncover the thermodynamic laws
that govern finite size systems which may in addition host quantum effects.
Here we identify information processing tasks, the so-called "projections",
that can only be formulated within the framework of quantum mechanics. We show
that the physical realisation of such projections can come with a non-trivial
thermodynamic work only for quantum states with coherences. This contrasts with
information erasure, first investigated by Landauer, for which a thermodynamic
work cost applies for classical and quantum erasure alike. Implications are
far-reaching, adding a thermodynamic dimension to measurements performed in
quantum thermodynamics experiments, and providing key input for the
construction of a future quantum thermodynamic framework. Repercussions are
discussed for quantum work fluctuation relations and thermodynamic single-shot
approaches.
|
Perplexity (per word) is the most widely used metric for evaluating language
models. Despite this, there has been no dearth of criticism for this metric.
Most of these criticisms center around lack of correlation with extrinsic
metrics like word error rate (WER), dependence upon shared vocabulary for model
comparison and unsuitability for unnormalized language model evaluation. In
this paper, we address the last problem and propose a new discriminative entropy-based intrinsic metric that works both for traditional word-level models and for unnormalized language models such as sentence-level models. We also propose a discriminatively trained sentence-level interpretation of the recurrent neural network language model (RNN) as an example of an unnormalized sentence-level model. We demonstrate that for word-level models, contrastive entropy shows a strong correlation with perplexity. We also observe that when trained at lower distortion levels, the sentence-level RNN considerably outperforms traditional RNNs on this new metric.
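A toy sketch under an explicit assumption: the abstract does not spell out the metric's definition, so here "contrastive entropy" is taken as the average per-word gap in (possibly unnormalized) model score between the original test sentences and randomly distorted copies. The score_fn interface is hypothetical; treat this as an illustration of the idea, not the paper's formula.

```python
import random

def distort(sentence, rate, rng):
    """Replace a fraction `rate` of words with a placeholder token."""
    words = sentence.split()
    return " ".join("<unk>" if rng.random() < rate else w for w in words)

def contrastive_entropy(score_fn, sentences, rate=0.3, seed=0):
    """score_fn(sentence) -> unnormalized log-score (hypothetical interface)."""
    rng = random.Random(seed)
    gap, n_words = 0.0, 0
    for s in sentences:
        gap += score_fn(s) - score_fn(distort(s, rate, rng))
        n_words += len(s.split())
    return gap / n_words   # per-word score gap; larger means more discriminative

# usage: contrastive_entropy(my_model.log_score, test_sentences, rate=0.1)
```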
|
We present a novel clustering approach for moving object trajectories that
are constrained by an underlying road network. The approach builds a similarity
graph based on these trajectories and then uses modularity-optimization hierarchical
graph clustering to regroup trajectories with similar profiles. Our
experimental study shows the superiority of the proposed approach over classic
hierarchical clustering and gives a brief insight into the visualization of the
clustering results.
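A minimal sketch of the pipeline described above (not the paper's implementation): trajectories given as sets of traversed road-segment ids are linked in a similarity graph, which is then clustered by modularity optimization. The Jaccard similarity and the edge threshold are assumed stand-ins for the paper's similarity measure, and networkx's greedy modularity optimization stands in for the hierarchical clustering step.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def jaccard(traj_a, traj_b):
    """Similarity of two trajectories given as sets of road-segment ids."""
    a, b = set(traj_a), set(traj_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_trajectories(trajectories, threshold=0.3):
    g = nx.Graph()
    g.add_nodes_from(range(len(trajectories)))
    for i in range(len(trajectories)):
        for j in range(i + 1, len(trajectories)):
            w = jaccard(trajectories[i], trajectories[j])
            if w >= threshold:           # keep only sufficiently similar pairs
                g.add_edge(i, j, weight=w)
    # Greedy modularity optimization groups trajectories with similar profiles.
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]

if __name__ == "__main__":
    trajs = [[1, 2, 3, 4], [2, 3, 4, 5], [10, 11, 12], [11, 12, 13]]
    print(cluster_trajectories(trajs))
```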
|
The determination of quark angular momentum requires the knowledge of the
generalized parton distribution E in the forward limit. We assume a connection
between this function and the Sivers transverse-momentum distribution, based on
model calculations and theoretical considerations. Using this assumption, we
show that it is possible to fit at the same time nucleon magnetic moments and
semi-inclusive single-spin asymmetries. This imposes additional constraints on
the Sivers function and opens a plausible path toward quantifying quark angular
momentum.
|
Assembly planning is a difficult problem for companies. Many disciplines such
as design, planning, scheduling, and manufacturing execution need to be
carefully engineered and coordinated to create successful product assembly
plans. Recent research in the field of design for assembly has proposed new
methodologies to design product structures in such a way that their assembly is
easier. However, present assembly planning approaches lack the engineering tool
support to capture all the constraints associated with assembly planning in a unified manner. This paper proposes CompositionalPlanning, a string-diagram-based framework for assembly planning. In the proposed framework, string
diagrams and their compositional properties serve as the foundation for an
engineering tool where CAD designs interact with planning and scheduling
algorithms to automatically create high-quality assembly plans. These assembly
plans are then executed in simulation to measure their performance and to
visualize their key build characteristics. We demonstrate the versatility of
this approach in the LEGO assembly domain. We developed two reference LEGO CAD
models that are processed by CompositionalPlanning's algorithmic pipeline. We
compare sequential and parallel assembly plans in a Minecraft simulation and
show that the time-to-build performance can be optimized by our algorithms.
|
Context. Quasi-periodic pulsations (QPPs) are usually detected as spatial displacements of coronal loops
in imaging observations or as periodic shifts of line properties in
spectroscopic observations. They are often applied for remote diagnostics of
magnetic fields and plasma properties on the Sun. Aims. We combine imaging and
spectroscopic measurements of available space missions, and investigate the
properties of non-damping oscillations in flaring loops. Methods. We used the Interface Region Imaging Spectrograph (IRIS) to measure the spectrum over a narrow slit. A double-component Gaussian fitting method was used to extract the line profile of Fe XXI 1354.08 A in the "O I" window. The quasi-periodicity of the loop oscillations was identified in the Fourier and wavelet spectra. Results. A periodicity at about 40 s is detected
in the line properties of Fe XXI, in the time derivative of the GOES 1-8 A flux, and in the Fermi 26-50 keV hard X-ray emission. The Doppler velocity and line width oscillate in phase, while
a phase shift of about Pi/2 is detected between the Doppler velocity and peak
intensity. The amplitudes of the Doppler velocity and line width oscillations are about 2.2 km/s and 1.9 km/s, respectively, while the peak intensity oscillates with an amplitude of about 3.6% of the background emission. Meanwhile, a quasi-period
of about 155 s is identified in the Doppler velocity and peak intensity of Fe
XXI, and AIA 131 A intensity. Conclusions. The oscillations at about 40 s are
not damped significantly during the observation; they might be linked to the global kink modes of the flaring loops. The periodicity at about 155 s is most
likely a signature of recurring downflows after chromospheric evaporation along
flaring loops. The magnetic field strengths of the flaring loops are estimated
to be about 120-170 G using the MHD seismology diagnostics, which are
consistent with the magnetic field modeling results using the flux rope
insertion method.
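A minimal illustration of a double-component Gaussian fit of the kind described above, isolating a broad hot-line component from a narrow cool blend. The wavelength grid, amplitudes, and initial guesses are placeholders, not the instrument's calibration or the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(wl, a1, c1, w1, a2, c2, w2, bg):
    g1 = a1 * np.exp(-0.5 * ((wl - c1) / w1) ** 2)   # broad, hot Fe XXI-like component
    g2 = a2 * np.exp(-0.5 * ((wl - c2) / w2) ** 2)   # narrow, cool blend
    return g1 + g2 + bg

wl = np.linspace(1353.2, 1354.9, 200)
rng = np.random.default_rng(0)
truth = two_gaussians(wl, 40.0, 1354.08, 0.25, 25.0, 1353.6, 0.05, 5.0)
spec = truth + rng.normal(scale=1.0, size=wl.size)

p0 = [30.0, 1354.1, 0.2, 20.0, 1353.6, 0.06, 0.0]        # rough initial guesses
popt, pcov = curve_fit(two_gaussians, wl, spec, p0=p0)
doppler_kms = (popt[1] - 1354.08) / 1354.08 * 2.998e5    # line-of-sight velocity
print(f"fitted centroid: {popt[1]:.3f} A, Doppler shift: {doppler_kms:.1f} km/s")
```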
|
The movement and deformation of mineral grains in rocks control their failure behavior. However, at high resolution, the physical and mechanical behavior of three-dimensional microstructures in rocks under uniaxial compression has not been characterized. Here, in situ X-ray computed tomography (XCT, 4.6 um) is applied to investigate the behavior of the mineral grains of a sandstone: their movement, rotation, and deformation, together with the principal strains obtained from the deformation gradient tensor constructed from the three principal axial vectors of each grain. The results indicate that grains in the fracture zone and in the non-fracture zone behave differently. To further investigate the behavior of grain clusters, material lines are used to obtain the Stokes local rotation, namely the shear strain. The findings are that (1) the shear strain is periodic in the radial direction, and (2) in an average sense, the positive and negative shear strains show local concentration features.
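The sketch below illustrates the grain-scale quantities named above, assuming a per-grain deformation gradient F has already been estimated (for example from the change of a grain's three principal axis vectors between scans): a polar decomposition splits F into a rotation and a stretch, from which principal strains and a rotation angle follow. This is not the paper's image-processing pipeline.

```python
import numpy as np
from scipy.linalg import polar

def grain_kinematics(F):
    R, U = polar(F)                       # F = R @ U: R rotation, U right stretch tensor
    stretches = np.linalg.eigvalsh(U)
    principal_strains = stretches - 1.0   # engineering principal strains
    rotation_angle = np.degrees(
        np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    return R, principal_strains, rotation_angle

if __name__ == "__main__":
    F = np.array([[1.02, 0.01, 0.0],      # placeholder deformation gradient
                  [0.00, 0.98, 0.0],
                  [0.00, 0.00, 1.0]])
    R, strains, angle = grain_kinematics(F)
    print("principal strains:", strains, "rotation (deg):", angle)
```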
|
The helical magnetic structures of cubic chiral systems are well-explained by
the competition among Heisenberg exchange, Dzyaloshinskii-Moriya interaction,
cubic anisotropy, and anisotropic exchange interaction (AEI). Recently, the
role of the latter has been argued theoretically to be crucial for the
low-temperature phase diagram of the cubic chiral magnet Cu$_2$OSeO$_3$, which
features tilted conical and disordered skyrmion states for a specific
orientation of the applied magnetic field ($\mu_0 \vec{\mathrm{H}} \parallel
[001]$). In this study, we exploit transmission resonant x-ray scattering
($t-$REXS) in vector magnetic fields to directly quantify the strength of the
AEI in Cu$_2$OSeO$_3$, and measure its temperature dependence. We find that the
AEI continuously increases below 50\,K, resulting in a conical spiral pitch
variation of $10\%$ in the (001) plane. Our results contribute to establishing
the interaction space that supports tilted cone and low-temperature skyrmion
state formation, facilitating the goals for both a quantitative description and
eventual design of the diverse spiral states existing amongst chiral magnets.
|
We present a coherent stellar and nebular model reproducing the observations
of the Planetary Nebula IC418. We want to test whether a stellar model obtained
by fitting the stellar observations is able to satisfactorily ionize the nebula and reproduce the nebular observations, which is by no means evident. This
allows us to determine all the physical parameters of both the star and the
nebula, including the abundances and the distance. We used all the
observational material available (FUSE, IUE, STIS and optical spectra) to
constrain the stellar atmosphere model performed using the CMFGEN code. The
photoionization model is done with Cloudy_3D, and is based on CTIO, Lick, SPM,
IUE and ISO spectra as well as HST images. More than 140 nebular emission lines
are compared to the observed intensities. We reproduce all the observations for
the star and the nebula. The 3D morphology of the gas distribution is
determined. The effective temperature of the star is 36.7 kK. Its luminosity is 7700 solar luminosities. We describe an original method to determine the distance of the nebula using evolutionary tracks. No clumping factor is needed to reproduce the age-luminosity relation. The distance of 1.25 kpc is in very good agreement with a recent determination using the parallax method. The
chemical composition of both the star and the nebula is determined. Both are
Carbon-rich. The nebula presents evidence of depletion of elements Mg, Si, S,
Cl (0.5 dex lower than solar) and Fe (2.9 dex lower than solar). This is the
first self-consistent stellar and nebular model for a Planetary Nebula that
reproduces all the available observations ranging from IR to UV, showing that
the combined approach for the modeling process leads to more restrictive
constraints and, in principle, more trustworthy results.
|
Let S be a subset of the unit disk, and let F(S) denote the class of completely multiplicative functions f such that f(p) is in S for all primes p. The authors' main concern is which numbers arise as mean values of functions in F(S). More precisely, let Gamma_N(S) = {1/N sum_{n <= N} f(n) : f in F(S)} and Gamma(S) = lim_{N -> infinity} Gamma_N(S). The authors call Gamma(S) the spectrum of the set S, and study its properties.
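The same definitions, restated in display LaTeX for readability:

$$\Gamma_N(S) = \Bigl\{\tfrac{1}{N}\sum_{n\le N} f(n) \;:\; f\in F(S)\Bigr\}, \qquad \Gamma(S) = \lim_{N\to\infty}\Gamma_N(S).$$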
|
Today, reusable components are available in several repositories. They are certainly designed for reuse. However, this reuse is not immediate: it requires several essential conceptual operations, in particular search, integration, adaptation, and composition. In the present work we are interested in the problem of semantic integration of heterogeneous Business Components. This problem is often framed in syntactic terms, while the real issue is semantic. Our contribution is an architecture proposal for Business Component integration and a method for resolving the semantic naming conflicts encountered during the integration of Business Components.
|
Hycean worlds are a proposed subset of sub-Neptune exoplanets with
substantial water inventories, liquid surface oceans and extended
hydrogen-dominated atmospheres that could be favourable for habitability. In
this work, we aim to quantitatively define the inner edge of the Hycean
habitable zone using a 1D radiative-convective model. As a limiting case, we
model a dry hydrogen-helium envelope above a surface ocean. We find that 10 to
20 bars of atmosphere produces enough greenhouse effect to drive a liquid
surface ocean supercritical when forced with current Earth-like instellation.
Introducing water vapour into the atmosphere, we show the runaway greenhouse
instellation limit is greatly reduced due to the presence of superadiabatic
layers where convection is inhibited. This moves the inner edge of the
habitable zone from $\approx$ 1 AU for a G-star to 1.6 AU (3.85 AU) for a
Hycean world with a H$_2$-He inventory of 1 bar (10 bar). For an M-star, the
inner edge is equivalently moved from 0.17 AU to 0.28 AU (0.54 AU). Our results
suggest that most of the current Hycean world observational targets are not
likely to sustain a liquid water ocean. We present an analytical framework for
interpreting our results, finding that the maximum possible outgoing longwave radiation (OLR) scales
approximately inversely with the dry mass inventory of the atmosphere. We
discuss the possible limitations of our 1D modelling and recommend the use of
3D convection-resolving models to explore the robustness of superadiabatic
layers.
|
We discuss a model comprising two coupled nonlinear oscillators (Kerr-like
nonlinear coupler) with one of them pumped by an external coherent excitation.
Applying the method of nonlinear quantum scissors we show that the quantum
evolution of the coupler can be closed within a finite set of n-photon Fock
states. Moreover, we show that the system is able to generate Bell-like states
and, as a consequence, the coupler discussed behaves as a two-qubit system. We
also analyze the effects of dissipation on entanglement of formation
parametrized by concurrence.
|
Current conditional functional dependency (CFD) discovery algorithms require a well-prepared training data set, which makes them difficult to apply to large, low-quality datasets. To handle the volume issue of big data, we develop sampling algorithms to obtain a small representative training set. For the low-quality issue, we then design a fault-tolerant rule discovery algorithm and a conflict resolution algorithm. We also propose a parameter selection strategy for the CFD discovery algorithm to ensure its effectiveness. Experimental results demonstrate that our method can discover effective CFD rules on billion-tuple data within a reasonable time.
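As an illustration of the sampling step only: one simple way to draw a small, bounded training sample from a huge relation in a single pass is reservoir sampling, sketched below. This is a generic stand-in, not the paper's sampling or fault-tolerant discovery algorithms.

```python
import csv
import random

def reservoir_sample(path, k, seed=0):
    """One-pass uniform sample of k rows from a large CSV streamed from disk."""
    rng = random.Random(seed)
    sample = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            if i < k:
                sample.append(row)
            else:
                j = rng.randint(0, i)       # row i survives with probability k/(i+1)
                if j < k:
                    sample[j] = row
    return sample

# usage: training = reservoir_sample("tuples.csv", k=100_000)
```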
|
We study the gravitational-wave background produced by f-mode oscillations of
neutron stars triggered by magnetar giant flares. For the gravitational-wave
energy, we use analytic formulae obtained via general relativistic
magnetohydrodynamic simulations of strongly magnetized neutron stars. Assuming
the magnetar giant flare rate is proportional to the star-formation rate, we
show the gravitational-wave signal is likely undetectable by third-generation
detectors such as the Einstein Telescope and Cosmic Explorer. We calculate the
minimum value of the magnetic field and the magnetar giant flare rate necessary
for such a signal to be detectable, and discuss these in the context of our
current understanding of magnetar flares throughout the Universe.
|
Using the Perron method, we prove the existence of hypersurfaces of
prescribed special Lagrangian curvature with prescribed boundary inside
complete Riemannian manifolds of non-positive curvature.
|
In this letter we show that the two-parton-shower mechanism for $J/\Psi$ production, which has been discussed in Ref. [28], leads to small values of
$\sigma_{\rm eff}$ for the production of a pair of $J/\Psi$. We develop a
simple two channel approach to estimate the values of $\sigma_{\rm eff}$, which
produces values that are in accord with the experimental data.
|
In this paper, we characterize the biderivations of W-algebra $W(2,2)$ and
Virasoro algebra $Vir$ without the skew-symmetry condition. We obtain two classes of
non-inner biderivations. As applications, we also get the forms of linear
commuting maps on W-algebra $W(2,2)$ and Virasoro algebra $Vir$.
|
The polarization of the atmosphere has been a long-standing concern for
ground-based experiments targeting cosmic microwave background (CMB)
polarization. Ice crystals in upper tropospheric clouds scatter thermal
radiation from the ground and produce a horizontally-polarized signal. We
report the detailed analysis of the cloud signal using a ground-based CMB
experiment, POLARBEAR, located in the Atacama Desert in Chile and observing at
150 GHz. We observe horizontally-polarized temporal increases of low-frequency
fluctuations ("polarized bursts," hereafter) of $\lesssim$0.1 K when clouds
appear in a webcam monitoring the telescope and the sky. The hypothesis of no
correlation between polarized bursts and clouds is rejected with $>$24$\sigma$
statistical significance using three years of data. We consider many other
possibilities, including instrumental and environmental effects, and find no explanation other than clouds that fits the data better. We also
discuss the impact of the cloud polarization on future ground-based CMB
polarization experiments.
|
This chapter begins with pure aluminium and a discussion of the form of the
crystal structure and different unit cells that can be used to describe the
crystal structure. Measurements of the face-centred cubic lattice parameter and
thermal expansion coefficient in pure aluminium are reviewed and
parametrisations given that allow the reader to evaluate them across the full
range of temperatures where aluminium is a solid. A new concept called the
vacancy triangle is introduced and demonstrated as an effective means for
determining vacancy concentrations near the melting point of aluminium. The
Debye-Waller factor, quantifying the thermal vibration of aluminium atoms in
pure aluminium, is reviewed and parametrised over the full range of
temperatures where aluminium is a solid. The nature of interatomic bonding and
the history of its characterisation in pure aluminium is reviewed with the
unequivocal conclusion that it is purely tetrahedral in nature. The
crystallography of aluminium alloys is then discussed in terms of all of the
concepts covered for pure aluminium, using prominent alloy examples. The
electron density domain theory of solid-state nucleation and precipitate growth
is introduced and discussed as a new means of rationalising phase
transformations in alloys from a crystallographic point of view.
|
The yellow supergiant content of nearby galaxies can provide a critical test
of stellar evolution theory, bridging the gap between the hot, massive stars
and the cool red supergiants. However, this region of the color-magnitude diagram is dominated by foreground contamination, so membership must somehow be determined. Fortunately, the large negative systemic velocity of M31, coupled with its high rotation rate, provides the means for separating the contaminating
foreground dwarfs from the bona fide yellow supergiants within M31. Using the
MMT, we obtained spectra of about 2900 stars, selected by color and magnitude to be candidate yellow supergiants. Comparing their velocities to that of
M31's rotation curve, we identified 54 certain, and 66 probable yellow
supergiants from among the sea of foreground dwarfs. We find excellent
agreement between the location of yellow supergiants in the H-R diagram and
that predicted by the latest Geneva evolutionary tracks which include rotation.
However, the relative number of yellow supergiants seen as a function of mass
varies from that predicted by the models by a factor of more than 10, in the
sense that more high mass yellow supergiants are predicted than are actually
observed. Comparing the total number of yellow supergiants with masses above
20Mo with the estimated number of unevolved O stars indicates that the duration
of the yellow supergiant phase is about 3000 years. This is consistent with
what the 12Mo and 15Mo evolutionary tracks predict, but disagrees with the
20,000-80,000 year time scales predicted by the models for higher masses.
|
The accuracy of the energy landscape of silicon systems obtained from various
density functional methods, a tight binding scheme and force fields is studied.
Quantum Monte Carlo results serve as quasi exact reference values. In addition
to the well known accuracy of DFT methods for geometric ground states and
metastable configurations we find that DFT methods give a similar accuracy for
transition states and thus a good overall description of the energy landscape.
On the other hand, force fields give a very poor description of the landscape: the landscapes they produce are in most cases too rugged and contain many spurious local minima and saddle points, or ones with the wrong height.
|
Promising searches for new physics beyond the current Standard Model (SM) of
particle physics are feasible through isotope-shift spectroscopy, which is
sensitive to a hypothetical fifth force between the neutrons of the nucleus and
the electrons of the shell. Such an interaction would be mediated by a new
particle which could in principle be associated with dark matter. In so-called
King plots, the mass-scaled frequency shifts of two optical transitions are
plotted against each other for a series of isotopes. Subtle deviations from the
expected linearity could reveal such a fifth force. Here, we study
experimentally and theoretically six transitions in highly charged ions of Ca,
an element with five stable isotopes of zero nuclear spin. Some of the
transitions are suitable for upcoming high-precision coherent laser
spectroscopy and optical clocks. Our results provide a sufficient number of
clock transitions for -- in combination with those of singly charged Ca$^+$ --
application of the generalized King plot method. This will allow future
high-precision measurements to remove higher-order SM-related nonlinearities
and open a new door to yet more sensitive searches for unknown forces and
particles.
|
We present the results of a spectroscopic investigation of 108 nearby field
B-stars. We derive their key stellar parameters, $V \sin i$, $T_{\rm eff}$,
$\log g$, and $\log g_{\rm polar}$, using the same methods that we used in our
previous cluster B-star survey. By comparing the results of the field and the
cluster samples, we find that the main reason for the overall slower rotation
of the field sample is that it contains a larger fraction of older stars than
found in the (mainly young) cluster sample.
|
An isotropic antenna radiates and receives electromagnetic waves with uniform magnitude in 3D space. A multi-frequency quasi-isotropic antenna can serve as a
practically feasible solution to emulate an ideal multi-frequency isotropic
radiator. It is also an essential technology for mobile smart devices for
massive IoT in the upcoming 6G. However, ever since the quasi-isotropic antenna was first proposed and realized more than half a century ago, at most two discrete narrow frequency bands have been achieved, because of the significantly increased structural complexity that multi-frequency isotropic radiation entails. This limitation
impedes numerous related electromagnetic experiments and the advances in
wireless communication. Here, for the first time, a design method for
multi-band (>2) quasi-isotropic antennas is proposed. An exemplified
quasi-isotropic antenna with the desired four frequency bands is also presented
for demonstration. The measured results validate the antenna's excellent performance in both electromagnetic and wireless-communication measurements.
|
We present high angular resolution Submillimeter Array (SMA) and Combined
Array for Research in Millimeter-wave Astronomy (CARMA) observations of two
GLIMPSE Extended Green Objects (EGOs)--massive young stellar object (MYSO)
outflow candidates identified based on their extended 4.5 micron emission in
Spitzer images. The mm observations reveal bipolar molecular outflows, traced
by high-velocity 12CO(2-1) and HCO+(1-0) emission, coincident with the 4.5
micron lobes in both sources. SiO(2-1) emission confirms that the extended 4.5
micron emission traces active outflows. A single dominant outflow is identified
in each EGO, with tentative evidence for multiple flows in one source
(G11.92-0.61). The outflow driving sources are compact millimeter continuum
cores, which exhibit hot-core spectral line emission and are associated with
6.7 GHz Class II methanol masers. G11.92-0.61 is associated with at least three
compact cores: the outflow driving source, and two cores that are largely
devoid of line emission. In contrast, G19.01-0.03 appears as a single MYSO. The
difference in multiplicity, the comparative weakness of its hot core emission,
and the dominance of its extended envelope of molecular gas all suggest that
G19.01-0.03 may be in an earlier evolutionary stage than G11.92-0.61. Modeling
of the G19.01-0.03 spectral energy distribution suggests that a central
(proto)star (M ~10 Msun) has formed in the compact mm core (Mgas ~ 12-16 Msun),
and that accretion is ongoing at a rate of ~10^-3 solar masses per year. Our
observations confirm that these EGOs are young MYSOs driving massive bipolar
molecular outflows, and demonstrate that considerable chemical and evolutionary
diversity are present within the EGO sample.
|
We present self-consistent disk models for T Tauri stars which include a
parameterized treatment of dust settling and grain growth, building on
techniques developed in a series of papers by D'Alessio et al. The models
incorporate depleted distributions of dust in upper disk layers along with
larger-sized particles near the disk midplane, as expected theoretically and as
we suggested earlier is necessary to account for mm-wave emission, SEDs,
scattered light images, and silicate emission features simultaneously. By
comparing the models with recent mid- and near-IR observations, we find that
the dust to gas mass ratio of small grains at the upper layers should be < 10 %
of the standard value. The grains that have disappeared from the upper layers
increase the dust to gas mass ratio of the disk interior; if those grains grow
to maximum sizes of the order of mm during the settling process, then both the
millimeter-wave fluxes and spectral slopes can be consistently explained.
Depletion and growth of grains can also enhance the ionization of upper layers,
enhancing the possibility of the magnetorotational instability for driving disk
accretion.
|
In this paper, achievable rate regions are derived for the two-user power-constrained Gaussian broadcast channel using finite-dimension constellations. Various transmission strategies are studied, namely superposition coding (SC) and superposition modulation (SM), and compared to standard schemes such as time sharing (TS). The maximal achievable rate regions for the SM and SC strategies are obtained by optimizing over both the joint probability distribution and the positions of the constellation symbols. The improvement in achievable rates for each scheme of increasing complexity is evaluated in terms of the SNR savings for a given target rate and/or the percentage gain in achievable rate for one user with respect to a classical scenario.
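The basic building block behind such rate regions is the constellation-constrained mutual information. The sketch below estimates it by Monte Carlo for a single-user real constellation over an AWGN channel, using I(X;Y) = E[log2(p(Y|X) / sum_x' p(x') p(Y|x'))]; the 4-PAM alphabet and uniform input distribution are example choices, and this is not the paper's broadcast-channel optimization.

```python
import numpy as np

def awgn_mutual_information(symbols, probs, snr_db, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    symbols = np.asarray(symbols, dtype=float)
    probs = np.asarray(probs, dtype=float)
    es = np.sum(probs * symbols**2)                  # average symbol energy
    sigma2 = es / 10 ** (snr_db / 10)                # noise variance for target SNR
    x = rng.choice(symbols, size=n_samples, p=probs)
    y = x + rng.normal(scale=np.sqrt(sigma2), size=n_samples)
    # Gaussian likelihoods p(y|x') for every candidate symbol (common factor cancels).
    lik = np.exp(-(y[:, None] - symbols[None, :]) ** 2 / (2 * sigma2))
    num = np.exp(-(y - x) ** 2 / (2 * sigma2))
    den = lik @ probs
    return np.mean(np.log2(num / den))               # bits per channel use

if __name__ == "__main__":
    pam4 = [-3, -1, 1, 3]
    print(awgn_mutual_information(pam4, [0.25] * 4, snr_db=10.0))
```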
|
The efficient exchange of information is an essential aspect of intelligent
collective behavior. Event-triggered control and estimation achieve some
efficiency by replacing continuous data exchange between agents with
intermittent, or event-triggered communication. Typically, model-based
predictions are used at times of no data transmission, and updates are sent
only when the prediction error grows too large. The effectiveness in reducing
communication thus strongly depends on the quality of the prediction model. In
this article, we propose event-triggered learning as a novel concept to reduce
communication even further and to also adapt to changing dynamics. By
monitoring the actual communication rate and comparing it to the one that is
induced by the model, we detect a mismatch between model and reality and
trigger model learning when needed. Specifically, for linear Gaussian dynamics,
we derive different classes of learning triggers solely based on a statistical
analysis of inter-communication times and formally prove their effectiveness
with the aid of concentration inequalities.
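The sketch below conveys the idea of such a learning trigger under stated assumptions: communication events are binary, the current model implies an expected communication rate, and a Hoeffding-style confidence bound decides when the observed rate deviates enough to warrant relearning. The specific test and constants are illustrative, not the paper's exact trigger.

```python
import math

def learning_trigger(comm_flags, expected_rate, delta=0.01):
    """comm_flags: 0/1 list, whether communication occurred at each time step.
    expected_rate: communication probability implied by the current model.
    delta: admissible false-trigger probability."""
    n = len(comm_flags)
    empirical_rate = sum(comm_flags) / n
    # Hoeffding bound for the mean of n bounded (0/1) variables:
    # P(|mean - E| > eps) <= 2 exp(-2 n eps^2)  =>  eps = sqrt(ln(2/delta)/(2n)).
    epsilon = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return abs(empirical_rate - expected_rate) > epsilon

# usage: if learning_trigger(flags_last_window, expected_rate=0.1): relearn_model()
```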
|
How do the level of usage of an article, the timeframe of its usage and its
subject area relate to the number of citations it accrues? This paper aims to
answer this question through an observational study of usage and citation data
collected about the multidisciplinary, open access mega-journal Scientific
Reports. This observational study answers these questions using the following
methods: an overlap analysis of most read and top-cited articles; Spearman
correlation tests between total citation counts over two years and usage over
various timeframes; a comparison of first months of citation for most read and
all articles; a Wilcoxon test on the distribution of total citations of early
cited articles and the distribution of total citations of all other articles.
All analyses were performed using the programming language R. As Scientific
Reports is a multidisciplinary journal covering all natural and clinical
sciences, we also looked at the differences across subjects. We found a
moderate correlation between usage in the first year and citations in the first
two years since publication, and that articles with high usage in the first 6
months are more likely to have their first citation earlier (Wilcoxon=1811500,
p < 0.0001), which is also related to higher citations in the first two years
(Wilcoxon=8071200, p < 0.0001). As this final assertion is inferred based on
the results of the other elements of this paper, it requires further analysis.
Moreover, our choice of a 2 year window for our analysis did not consider the
articles' citation half-life, and our use of Scientific Reports (a journal that
is atypical compared to most academic journals) as the source of the articles
analysed has likely played a role in our findings, and so analysing a longer
timeframe and carrying out similar analysis on a different journal (or group of
journals) may lead to different conclusions.
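The analyses themselves were carried out in R; the following is a hedged Python equivalent of the two statistical steps named above, with placeholder data: a Spearman correlation between first-year usage and two-year citations, and a rank-based comparison of the citation distributions of early-cited versus other articles (the two-sample Wilcoxon rank-sum test, exposed in SciPy as the Mann-Whitney U test).

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)
usage_year1 = rng.poisson(500, size=1000)            # placeholder usage counts
citations_2y = rng.poisson(usage_year1 / 100.0)      # placeholder citation counts
early_cited = rng.random(1000) < 0.3                 # placeholder early-citation flag

rho, p_rho = spearmanr(usage_year1, citations_2y)
u, p_u = mannwhitneyu(citations_2y[early_cited],
                      citations_2y[~early_cited],
                      alternative="greater")
print(f"Spearman rho={rho:.2f} (p={p_rho:.3g}); rank-sum p={p_u:.3g}")
```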
|
We start with comparisons of hierarchies in Biology and relate them to Quantum Field Theories. Thereby we discover many similarities and translate them
into rich mathematical correspondences. The basic connection goes via scale
transformations and Tropical geometry. One of our core observations is that
Cellular Automata can be naturally introduced in many physical models and refer
to a (generalised) mesoscopic scale, as we point out for the fundamental
relation with Gauge Field Theories. To illustrate our framework further, we
apply it to the Witten-Kontsevich model and the Miller-Morita-Mumford classes,
which in our picture arise from a dynamical system.
|
Capture The Flag (CTF) challenges are puzzles related to computer security
scenarios. With the advent of large language models (LLMs), more and more CTF
participants are using LLMs to understand and solve the challenges. However, so
far no work has evaluated the effectiveness of LLMs in solving CTF challenges
with a fully automated workflow. We develop two CTF-solving workflows,
human-in-the-loop (HITL) and fully-automated, to examine the LLMs' ability to
solve a selected set of CTF challenges, prompted with information about the
question. We collect human contestants' results on the same set of questions,
and find that LLMs achieve a higher success rate than an average human
participant. This work provides a comprehensive evaluation of the capability of
LLMs in solving real world CTF challenges, from real competition to fully
automated workflow. Our results provide references for applying LLMs in
cybersecurity education and pave the way for systematic evaluation of offensive
cybersecurity capabilities in LLMs.
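A hedged sketch of what a fully automated loop of this kind can look like (not the paper's workflow): ask an LLM for the next shell command, run it, feed the output back, and stop when something matching an assumed flag format appears. The model name, prompt, and flag regex are placeholders, and the OpenAI Python client is only one possible backend.

```python
import re
import subprocess
from openai import OpenAI

FLAG_RE = re.compile(r"flag\{[^}]+\}")   # assumed flag format
client = OpenAI()                         # requires OPENAI_API_KEY in the environment

def solve(challenge_description, model="gpt-4o", max_steps=10):
    messages = [
        {"role": "system",
         "content": "You are solving a CTF challenge. Reply with exactly one "
                    "shell command per turn, and nothing else."},
        {"role": "user", "content": challenge_description},
    ]
    for _ in range(max_steps):
        reply = client.chat.completions.create(model=model, messages=messages)
        cmd = reply.choices[0].message.content.strip()
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=60)
        output = (result.stdout + result.stderr)[-4000:]
        match = FLAG_RE.search(output)
        if match:
            return match.group(0)          # flag found, stop
        messages.append({"role": "assistant", "content": cmd})
        messages.append({"role": "user", "content": f"Command output:\n{output}"})
    return None
```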
|
It is believed that the presence of anticrossings with exponentially small
gaps between the lowest two energy levels of the system Hamiltonian, can render
adiabatic quantum optimization inefficient. Here, we present a simple adiabatic
quantum algorithm designed to eliminate exponentially small gaps caused by
anticrossings between eigenstates that correspond to the local and global
minima of the problem Hamiltonian. In each iteration of the algorithm,
information is gathered about the local minima that are reached after passing
the anticrossing non-adiabatically. This information is then used to penalize
pathways to the corresponding local minima, by adjusting the initial
Hamiltonian. This is repeated for multiple clusters of local minima as needed.
We generate 64-qubit random instances of the maximum independent set problem,
skewed to be extremely hard, with between 10^5 and 10^6 highly-degenerate local
minima. Using quantum Monte Carlo simulations, it is found that the algorithm
can trivially solve all the instances in ~10 iterations.
|
In gravitational-wave astronomy, non-Gaussian noise, such as scattered-light noise, disturbs stable interferometer operation, limits the interferometer's sensitivity, and reduces the reliability of the analyses. Scattered-light noise dominates the sensitivity in the low-frequency range below a few hundred Hz, which is sensitive to gravitational waves from compact binary coalescences. This non-Gaussian noise prevents reliable
parameter estimation, since several analysis methods are optimized only for
Gaussian noise. Therefore, identifying data contaminated by non-Gaussian noise
is important. In this work, we extended the conventional method to evaluate
non-Gaussian noise, Rayleigh statistic, by using a statistical hypothesis test
to determine a threshold for non-Gaussian noise. First, we estimated the
distribution of the Rayleigh statistic against Gaussian noise, called the
background distribution, and validated that our extension serves as the
hypothetical test. Moreover, we investigated the detection efficiency by
assuming two non-Gaussian noise models. For example, for the model with strong
scattered light noise, the true positive rate was always above 0.7 when the
significance level was 0.05. The results showed that our extension can
contribute to an initial detection of non-Gaussian noise and lead to further
investigation of the origin of the non-Gaussian noise.
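A simplified numerical illustration of the quantities involved (not the paper's code): here the Rayleigh statistic at each frequency is taken as the ratio of the standard deviation to the mean of the segment-wise power spectrum, which is close to 1 for stationary Gaussian noise, and the threshold is a high percentile of a simulated Gaussian background. The segment length and significance level are illustrative choices.

```python
import numpy as np

def rayleigh_statistic(x, seg_len):
    """Per-frequency std/mean of the power spectrum over consecutive segments."""
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)
    power = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    return power.std(axis=0) / power.mean(axis=0)

def gaussian_threshold(n_segs, seg_len, alpha=0.05, n_trials=200, seed=1):
    """Empirical (1 - alpha) quantile of the statistic under Gaussian noise."""
    rng = np.random.default_rng(seed)
    stats = [rayleigh_statistic(rng.standard_normal(n_segs * seg_len), seg_len)
             for _ in range(n_trials)]
    return np.quantile(np.concatenate(stats), 1.0 - alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seg_len, n_segs = 256, 64
    data = rng.standard_normal(seg_len * n_segs)      # placeholder strain data
    thr = gaussian_threshold(n_segs, seg_len)
    flagged = rayleigh_statistic(data, seg_len) > thr
    print(f"threshold={thr:.2f}, flagged frequency bins: {flagged.sum()}")
```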
|
In this paper we consider the classification problem of extensions of
Yang-Mills-type (YMT) theories. For us, a YMT theory differs from the classical
Yang-Mills theories by allowing an arbitrary pairing on the curvature. The
space of YMT theories with a prescribed gauge group $G$ and instanton sector
$P$ is classified, an upper bound to its rank is given and it is compared with
the space of Yang-Mills theories. We present extensions of YMT theories as a
simple and unified approach to many different notions of deformations and
addition of correction terms previously discussed in the literature. A relation
between these extensions and emergence phenomena in the sense of
arXiv:2004.13144 is presented. We consider the space of all extensions of a
fixed YMT theory $S^G$ and we prove that for every additive group action of
$\mathbb{G}$ in $\mathbb{R}$ and every commutative and unital ring $R$, this
space has an induced structure of $R[\mathbb{G}]$-module bundle. We conjecture
that this bundle can be continuously embedded into a trivial bundle. Morphisms
between extensions of a fixed YMT theory are defined in such a way that they
define a category of extensions. It is proved that this category is a
reflective subcategory of a slice category, reflecting some properties of its
limits and colimits.
|
The ADM formalism for two-point-mass systems in $d$ space dimensions is
sketched. It is pointed out that the regularization ambiguities of the 3rd
post-Newtonian ADM Hamiltonian considered directly in $d=3$ space dimensions
can be cured by dimensional continuation (to complex $d$'s), which leads to a
finite and unique Hamiltonian as $d\to3$. Some so far unpublished details of
the dimensional-continuation computation of the 3rd post-Newtonian
two-point-mass ADM Hamiltonian are presented.
|