The gravitational energy shift for photons is extended to all mass-equivalent
energies $E = mc^2$ obeying the quantum condition $E = h\nu$. Using the example
of a relativistic binary system, it is shown that the gravitational energy
shift would imply, in contrast to Newtonian gravity, gravitational attraction
between the full mass-equivalent energies. The corresponding space-time metric
becomes exponential. Good agreement is found with all results of weak-field
tests of General Relativity. Strong-field effects in a binary system can be
easily studied. The long-standing problems of the Pioneer and other flyby
anomalies are also discussed in connection with the violation of total energy
conservation. It is shown that a relatively small energy non-conservation
during the change of orbit type could explain these persistent anomalies.
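For concreteness, a Yilmaz-type exponential line element of the kind such an energy shift suggests is sketched below; the abstract does not spell out the paper's exact metric, so this specific form is an assumption for illustration only.

```latex
ds^2 = e^{-2GM/(rc^2)}\, c^2\, dt^2 - e^{+2GM/(rc^2)}\left(dr^2 + r^2\, d\Omega^2\right)
```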
|
The local interstellar spectrum of cosmic-ray electrons and positrons from
0.8 GeV to 2 TeV is derived by demodulating the spectra measured by balloon and
satellite experiments. It is well represented by a single power law in kinetic
energy with spectral index 3.4 over the whole energy range, supporting the idea
that it is not representative of the galactic average. Instead, the spectrum
must reflect the nature of our local bubble, being mostly sensitive to the last
nearby supernova.
|
Using mirror symmetry, we resolve an old puzzle in the linear sigma model
description of the spacetime Higgs mechanism in a heterotic string
compactification with (2,2) worldsheet supersymmetry. The resolution has a nice
spacetime interpretation via the normalization of physical fields and suggests
that, with a little care, deformations of the linear sigma model can describe
heterotic Higgs branches.
|
Starting from one-point tail bounds, we establish an upper tail large
deviation principle for the directed landscape at the metric level. Metrics of
finite rate are in one-to-one correspondence with measures supported on a set
of countably many paths, and the rate function is given by a certain Kruzhkov
entropy of these measures. As an application of our main result, we prove a
large deviation principle for the directed geodesic.
|
In recent years, there has been a growing emphasis on the intersection of
audio, vision, and text modalities, driving forward advancements in multimodal
research. However, a strong bias in any single modality can lead the model to
neglect the others, compromising its ability to reason effectively across these
diverse modalities and impeding further progress. In this paper, we
meticulously review each question type from the original dataset, selecting
those with pronounced answer biases. To counter these biases, we gather
complementary videos and questions, ensuring that no answer has a markedly
skewed distribution. In particular, for binary questions, we strive to ensure
that both answers are spread almost uniformly within each question category. As
a result, we construct a new dataset, named MUSIC-AVQA v2.0, which is more
challenging and, we believe, can better foster progress on the AVQA task.
Furthermore, we present a novel baseline model that delves deeper into the
audio-visual-text interrelation. This model surpasses all existing benchmarks
on MUSIC-AVQA v2.0, improving accuracy by 2% and setting a new
state-of-the-art performance.
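The balancing criterion described above is easy to audit programmatically; a minimal sketch (the data format is hypothetical, not the released dataset schema) that reports, per question category, the share held by the most frequent answer:

```python
from collections import Counter

def answer_skew(qa_pairs):
    # qa_pairs: iterable of (question_category, answer) tuples.
    # Returns, per category, the fraction taken by the most common answer;
    # values near 0.5 for binary questions indicate the desired balance.
    by_cat = {}
    for cat, ans in qa_pairs:
        by_cat.setdefault(cat, Counter())[ans] += 1
    return {cat: max(c.values()) / sum(c.values()) for cat, c in by_cat.items()}
```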
|
Persuasive social robots employ their social influence to modulate children's
behaviours in child-robot interaction. In this work, we introduce the
Child-Robot Relational Norm Intervention (CRNI) model, leveraging the passive
role of social robots and children's reluctance to inconvenience others to
influence children's behaviours. Unlike traditional persuasive strategies that
employ robots in active roles, CRNI utilizes an indirect approach by generating
a disturbance for the robot in response to improper child behaviours, thereby
motivating behaviour change through the avoidance of norm violations. The
feasibility of CRNI is explored with a focus on improving children's
handwriting posture. To this end, as a preliminary work, we conducted two
participatory design workshops with 12 children and 1 teacher to identify
effective disturbances that can promote posture correction.
|
This paper proposes a push and pull search method in the framework of
differential evolution (PPS-DE) to solve constrained single-objective
optimization problems (CSOPs). More specifically, two sub-populations, the top
and bottom sub-populations, collaborate with each other to search for globally
optimal solutions efficiently. The top sub-population adopts the push and pull
search (PPS) mechanism to deal with constraints, while the bottom
sub-population uses the superiority of feasible solutions (SF) technique. In
the top sub-population, the search process is divided into two stages: the
push stage and the pull stage. An adaptive DE variant with three trial vector
generation strategies is employed in the proposed PPS-DE. In the top
sub-population, all three trial vector generation strategies are used to
generate offspring, as in CoDE. In the bottom sub-population, strategy
adaptation, in which the trial vector generation strategies are periodically
self-adapted by learning from their experience in generating promising
solutions in the top sub-population, is used to choose a suitable trial vector
generation strategy to generate one offspring. Furthermore, a parameter
adaptation strategy from LSHADE44 is employed in both sub-populations to
generate the scale factor $F$ and crossover rate $CR$ for each trial vector
generation strategy. Twenty-eight CSOPs with 10-, 30-, and 50-dimensional
decision variables provided in the CEC2018 competition on real-parameter
single-objective optimization are optimized by the proposed PPS-DE. The
experimental results demonstrate that the proposed PPS-DE achieves the best
performance compared with seven other state-of-the-art algorithms, including
AGA-PPS, LSHADE44, LSHADE44+IDE, UDE, IUDE, $\epsilon$MAg-ES and C$^2$oDE.
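For reference, one of the classical trial-vector generation strategies that adaptive DE variants such as the one above combine is DE/rand/1/bin; the sketch below is a textbook version of that operator, not PPS-DE's exact strategy set.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    # DE/rand/1 mutation plus binomial crossover for target index i.
    n, d = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True  # guarantee at least one mutated component
    return np.where(cross, mutant, pop[i])
```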
|
This paper introduces a new multivariate convolutional sparse coding model
based on tensor algebra, with a general formulation enforcing both element-wise
sparsity and low-rankness of the activation tensors. By using the CP
decomposition, this model achieves a significantly more efficient encoding of
the multivariate signal, particularly in the high-order/high-dimension setting,
resulting in better performance. We prove that our model is closely related to
the Kruskal tensor regression problem, which offers interesting theoretical
guarantees for our setting. Furthermore, we provide an efficient optimization
algorithm based on alternating optimization to solve this model. Finally, we
evaluate our algorithm with a large range of experiments, highlighting its
advantages and limitations.
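A minimal alternating-least-squares CP decomposition for a 3-way tensor is sketched below to make the low-rank model concrete; this is a generic CP-ALS routine under my own conventions, not the paper's optimization algorithm.

```python
import numpy as np

def cp_als(T, rank, n_iter=50):
    # Fit T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r] by alternating least squares.
    rng = np.random.default_rng(0)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))

    def kr(X, Y):  # column-wise Khatri-Rao product
        return np.einsum("ir,jr->ijr", X, Y).reshape(-1, X.shape[1])

    for _ in range(n_iter):
        A = np.linalg.lstsq(kr(B, C), T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(kr(A, C), np.moveaxis(T, 1, 0).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(kr(A, B), np.moveaxis(T, 2, 0).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C
```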
|
A reaction-diffusion-advection model is proposed and investigated to
understand the invasive dynamics of Aedes aegypti mosquitoes. The free boundary
is introduced to model the expanding front of the invasive mosquitoes in a
heterogeneous environment. The threshold $R^D_0$ for the model with Dirichlet
boundary condition is defined and the threshold $R^F_0(t)$ for the free
boundary problem is introduced, and the long-time behavior of positive
solutions to the reaction-diffusion-advection system is discussed. Sufficient
conditions for the mosquitoes to be eradicated or to spread are given. We show
that, if $R^F_0(\infty)\leq 1$, the mosquitoes always vanish, and if
$R^F_0(t_0)\geq 1$ for some $t_0\geq 0$, the mosquitoes must spread, while if
$R^F_0(0)<1<R^F_0(\infty)$, the spreading or vanishing of the mosquitoes
depends on the initial number of mosquitoes, or mosquitoes' invasive ability on
the free boundary.
|
Mobile networks of the future are predicted to be much denser than today's
networks in order to cater to increasing user demands. In this context,
cloud-based radio access networks have garnered significant interest as a
cost-effective solution to the problem of coping with denser networks and
providing higher data rates. However, to the best knowledge of the authors, a
quantitative analysis of the cost of such networks is yet to be undertaken.
This paper develops a theoretical framework that enables computation of the
deployment cost of a network (modeled using various spatial point processes)
to answer the question posed by the paper's title. The framework is then used,
along with a complexity model that enables computing the information-processing
costs of a network, to compare the deployment cost of a cloud-based network
against that of a traditional LTE network, and to analyze why cloud-based
networks are more economical. Using this framework and an exemplary budget,
this paper shows that cloud-based radio access networks require approximately
10 to 15% less capital expenditure per square kilometer than traditional LTE
networks. It also demonstrates that the cost savings depend largely on the
costs of base stations and the mix of backhaul technologies used to connect
base stations with data centers.
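To illustrate the flavor of such a framework, the toy sketch below prices a network whose base stations follow a homogeneous Poisson point process, with each station linked to a data center by straight-line fiber; all names, parameters, and the cost structure are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def deployment_cost_per_km2(area_km2, bs_density, cost_bs, cost_fiber_per_km,
                            dc_xy=(0.0, 0.0)):
    # Scatter a Poisson number of base stations uniformly over a square region
    # and add per-km fiber cost from each station to a single data center.
    n = rng.poisson(bs_density * area_km2)
    side = np.sqrt(area_km2)
    xy = rng.uniform(0.0, side, size=(n, 2))
    fiber_km = np.linalg.norm(xy - np.asarray(dc_xy), axis=1).sum()
    return (n * cost_bs + fiber_km * cost_fiber_per_km) / area_km2
```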
|
We demonstrate control of the absolute phase of an optical lattice with
respect to a single trapped ion. The lattice is generated by off-resonant
free-space laser beams, and we actively stabilize its phase by measuring its
ac-Stark shift on the trapped ion. The ion is localized within the standing
wave to better than 2\% of its period. The locked lattice allows us to apply
displacement operations via resonant optical forces with a controlled direction
in phase space. Moreover, we observe the lattice-induced phase evolution of
spin superposition states in order to analyze the relevant decoherence
mechanisms. Finally, we employ lattice-induced phase shifts to infer the
variation of the ion position over a 157~$\mu$m range along the trap axis with
an accuracy better than 6~nm.
|
We study empirically the time evolution of scientific collaboration networks
in physics and biology. In these networks, two scientists are considered
connected if they have coauthored one or more papers together. We show that the
probability of scientists collaborating increases with the number of other
collaborators they have in common, and that the probability of a particular
scientist acquiring new collaborators increases with the number of his or her
past collaborators. These results provide experimental evidence in favor of
previously conjectured mechanisms for clustering and power-law degree
distributions in networks.
|
In this work, we survey skepticism regarding AI risk and show parallels with
other types of scientific skepticism. We start by classifying different types
of AI Risk skepticism and analyze their root causes. We conclude by suggesting
some intervention approaches, which may be successful in reducing AI risk
skepticism, at least amongst artificial intelligence researchers.
|
Wildfires are increasingly impacting the environment, human health and
safety. Among the top 20 California wildfires, those in 2020-2021 burned more
acres than the last century combined. California's 2018 wildfire season caused
damages of $148.5 billion. Among millions of impacted people, those living with
disabilities (around 15% of the world population) are disproportionately
impacted due to inadequate means of alerts. In this project, a multi-modal
wildfire prediction and personalized early-warning system has been developed
based on an advanced machine learning architecture. Sensor data from the
Environmental Protection Agency and historical wildfire data from 2012 to 2018
were compiled to establish a comprehensive wildfire database, the largest of
its kind. Next, a novel U-Convolutional-LSTM (Long Short-Term Memory) neural
network was designed with a special architecture for extracting key spatial and
temporal features from contiguous environmental parameters indicative of
impending wildfires. Environmental and meteorological factors were incorporated
into the database and classified as leading and trailing indicators, correlated
with the risks of wildfire ignition and propagation, respectively.
Additionally, geological data were used to provide better wildfire risk
assessment. This novel spatio-temporal neural network achieved >97% accuracy,
versus around 76% for traditional convolutional neural networks, successfully
predicting 2018's five most devastating wildfires 5-14 days in advance.
Finally, a personalized early-warning system, tailored to individuals with
sensory disabilities or respiratory exacerbation conditions, was proposed. This
technique would enable fire departments to anticipate and prevent wildfires
before they strike and provide early warnings to at-risk individuals for better
preparation, thereby saving lives and reducing economic damage.
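A U-shaped ConvLSTM can be assembled from standard Keras layers; the sketch below shows the general encoder-skip-decoder pattern with made-up dimensions and filter counts, since the abstract does not specify the exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def u_convlstm(timesteps=10, h=64, w=64, channels=8):
    # Input: a time series of gridded environmental-parameter maps.
    inp = layers.Input(shape=(timesteps, h, w, channels))
    e1 = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=True)(inp)
    p1 = layers.MaxPooling3D(pool_size=(1, 2, 2))(e1)      # pool space, keep time
    e2 = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(p1)
    u1 = layers.UpSampling3D(size=(1, 2, 2))(e2)
    c1 = layers.Concatenate()([u1, e1])                    # U-Net-style skip link
    d1 = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=False)(c1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)    # per-cell fire risk
    return Model(inp, out)

model = u_convlstm()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```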
|
We develop a theory of thermal fluctuations of spin density emerging in a
two-dimensional electron gas. The spin fluctuations probed at spatially
separated spots of the sample are correlated due to Brownian motion of
electrons and spin-orbit coupling. We calculate the spatiotemporal correlation
functions of the spin density for both ballistic and diffusive transport of
electrons and analyze them for different types of spin-orbit interaction
including the isotropic Rashba model and persistent spin helix regime. The
measurement of spatial spin fluctuations provides direct access to the
parameters of spin-orbit coupling and spin transport in conditions close to the
thermal equilibrium.
|
In this paper we consider two special cases of the "cover-by-pairs"
optimization problem that arise when we need to place facilities so that each
customer is served by two facilities that reach it by disjoint shortest paths.
These problems arise in a network traffic monitoring scheme proposed by Breslau
et al. and have potential applications to content distribution. The
"set-disjoint" variant applies to networks that use the OSPF routing protocol,
and the "path-disjoint" variant applies when MPLS routing is enabled, making
better solutions possible at the cost of greater operational expense. Although
we can prove that no polynomial-time algorithm can guarantee good solutions for
either version, we are able to provide heuristics that do very well in practice
on instances with real-world network structure. Fast implementations of the
heuristics, made possible by exploiting mathematical observations about the
relationship between the network instances and the corresponding instances of
the cover-by-pairs problem, allow us to perform an extensive experimental
evaluation of the heuristics and what the solutions they produce tell us about
the effectiveness of the proposed monitoring scheme. For the set-disjoint
variant, we validate our claim of near-optimality via a new lower-bounding
integer programming formulation. Although computing this lower bound requires
solving the NP-hard Hitting Set problem and can underestimate the optimal value
by a linear factor in the worst case, it can be computed quickly by CPLEX, and
it equals the optimal solution value for all the instances in our extensive
testbed.
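The NP-hard core of that lower bound, Hitting Set, has a direct integer-programming formulation; a minimal sketch using the PuLP modeler (my own encoding, not the authors' exact formulation) is:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def min_hitting_set(universe, subsets):
    # Pick a minimum-size set of elements intersecting every given subset.
    prob = LpProblem("hitting_set", LpMinimize)
    x = {e: LpVariable(f"x_{e}", cat="Binary") for e in universe}
    prob += lpSum(x.values())                 # objective: number of chosen elements
    for S in subsets:
        prob += lpSum(x[e] for e in S) >= 1   # each subset must be hit
    prob.solve()
    return [e for e in universe if x[e].value() == 1]
```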
|
The proximal gradient algorithm has been popularly used for convex
optimization. Recently, it has also been extended for nonconvex problems, and
the current state-of-the-art is the nonmonotone accelerated proximal gradient
algorithm. However, it typically requires two exact proximal steps in each
iteration, and can be inefficient when the proximal step is expensive. In this
paper, we propose an efficient proximal gradient algorithm that requires only
one inexact (and thus less expensive) proximal step in each iteration.
Convergence to a critical point of the nonconvex problem is still guaranteed,
with a $O(1/k)$ convergence rate, which is the best rate for nonconvex
problems with first-order methods. Experiments on a number of problems
demonstrate that the proposed algorithm has performance comparable to the
state-of-the-art, but is much faster.
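The overall loop is easy to picture: a gradient step on the smooth part followed by a truncated, and therefore inexact, proximal step on the nonsmooth part. The sketch below is schematic (the interface and the truncation rule are my assumptions; the paper's algorithm also includes acceleration and acceptance tests).

```python
import numpy as np

def inexact_pg(grad_f, prox_g, x0, step, n_iter=200, inner_iter=3):
    # prox_g(y, step, inner_iter): an iterative inner solver for the proximal
    # subproblem, truncated after inner_iter sub-steps => an inexact prox.
    x = x0.copy()
    for _ in range(n_iter):
        y = x - step * grad_f(x)       # gradient step on the smooth term
        x = prox_g(y, step, inner_iter)
    return x
```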
|
Using equivalencies between different models we reduce the model of two
spin-1/2 Heisenberg chains crossed at one point to the model of free fermions.
The spin-spin correlation function is calculated by summing the perturbation
series in the interchain coupling. The result reveals a power law decay with a
nonuniversal exponent.
|
The Friedmann-Robertson-Walker (FRW) cosmology is analyzed with a general
potential $\rm V(\phi)$ in the scalar-field inflation scenario. The Bohmian
approach (a WKB-like formalism) is employed in order to constrain the generic
form of the potential to the one best suited to drive inflation; from this a
family of potentials emerges, from which we select an exponential potential as
the first non-trivial case, and it remains the object of interest of this
work. The solution to the Wheeler-DeWitt (WDW) equation is also obtained for
the selected potential in this scheme. Using Hamilton's approach and the
equations of motion for a scalar field $\rm \phi$ with standard kinetic
energy, we find exact solutions to the complete set of Einstein-Klein-Gordon
(EKG) equations without the need for the slow-roll (SR) approximation. In
order to contrast this model with observational data (Planck 2018 results),
the inflationary observables, the tensor-to-scalar ratio and the scalar
spectral index, are derived in our proper time and then evaluated under the
condition that the number of e-folds corresponds exactly to 50-60 before
inflation ends. The employed method exhibits a remarkable simplicity, with
rather interesting applications in the near future.
|
As a natural variant of domination in graphs, Dankelmann et al. [Domination
with exponential decay, Discrete Math. 309 (2009) 5877-5883] introduced
exponential domination, where vertices are considered to have some dominating
power that decreases exponentially with the distance, and the dominated
vertices have to accumulate a sufficient amount of this power emanating from
the dominating vertices. More precisely, if $S$ is a set of vertices of a graph
$G$, then $S$ is an exponential dominating set of $G$ if $\sum\limits_{v\in
S}\left(\frac{1}{2}\right)^{{\rm dist}_{(G,S)}(u,v)-1}\geq 1$ for every vertex
$u$ in $V(G)\setminus S$, where ${\rm dist}_{(G,S)}(u,v)$ is the distance
between $u\in V(G)\setminus S$ and $v\in S$ in the graph $G-(S\setminus \{
v\})$. The exponential domination number $\gamma_e(G)$ of $G$ is the minimum
order of an exponential dominating set of $G$.
In the present paper we study exponential domination in subcubic graphs. Our
results are as follows: If $G$ is a connected subcubic graph of order $n(G)$,
then $$\frac{n(G)}{6\log_2(n(G)+2)+4}\leq \gamma_e(G)\leq
\frac{1}{3}(n(G)+2).$$ For every $\epsilon>0$, there is some $g$ such that
$\gamma_e(G)\leq \epsilon n(G)$ for every cubic graph $G$ of girth at least
$g$. For every $0<\alpha<\frac{2}{3\ln(2)}$, there are infinitely many cubic
graphs $G$ with $\gamma_e(G)\leq \frac{3n(G)}{\ln(n(G))^{\alpha}}$. If $T$ is a
subcubic tree, then $\gamma_e(T)\geq \frac{1}{6}(n(T)+2).$ For a given subcubic
tree, $\gamma_e(T)$ can be determined in polynomial time. The minimum
exponential dominating set problem is APX-hard for subcubic graphs.
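The defining inequality translates directly into a checker; the sketch below (a brute-force illustration, far from the polynomial-time tree algorithm mentioned above) verifies whether a candidate set S is an exponential dominating set.

```python
import networkx as nx

def is_exponential_dominating(G, S):
    # Check sum over v in S of (1/2)**(dist_{(G,S)}(u,v) - 1) >= 1 for every
    # u outside S, where distances are taken in G - (S \ {v}).
    S = set(S)
    for u in set(G) - S:
        total = 0.0
        for v in S:
            H = G.copy()
            H.remove_nodes_from(S - {v})
            try:
                d = nx.shortest_path_length(H, u, v)
            except nx.NetworkXNoPath:
                continue  # v contributes nothing if unreachable
            total += 0.5 ** (d - 1)
        if total < 1:
            return False
    return True
```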
|
We present scanning optical stroboscopic confocal microscopy and spectroscopy
measurements wherein three degrees of freedom, namely energy, real-space, and
real-time, are resolvable. The edge-state propagation is detected as a temporal
change in the optical response in the downstream edge. We succeeded in
visualizing the excited states of the most fundamental fractional quantum Hall
(FQH) state and the collective excitations near the edge. The results verify
the current understanding of the edge excitation and also point toward further
dynamics outside the edge channel.
|
Urban air mobility (UAM) has the potential to revolutionize our daily
transportation, offering rapid and efficient deliveries of passengers and cargo
between dedicated locations within and around the urban environment. Before the
commercialization and adoption of this emerging transportation mode, however,
aviation safety must be guaranteed, i.e., all the aircraft have to be safely
separated by strategic and tactical deconfliction. Reinforcement learning has
demonstrated effectiveness in the tactical deconfliction of en route commercial
air traffic in simulation. However, its performance is found to be dependent on
the traffic density. In this project, we propose a novel framework that
combines demand capacity balancing (DCB) for strategic conflict management and
reinforcement learning for tactical separation. By using DCB to precondition
traffic to proper density levels, we show that reinforcement learning can
achieve much better performance for tactical safety separation. Our results
also indicate that this DCB preconditioning can allow target levels of safety
to be met that are otherwise impossible. In addition, combining strategic DCB
with reinforcement learning for tactical separation can meet these safety
levels while achieving greater operational efficiency than alternative
solutions.
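The preconditioning idea can be pictured with a toy ground-delay program: hold departures whenever a time bin's demand exceeds capacity, so the tactical (RL) layer never faces unbounded density. The sketch below is deliberately simplistic and hypothetical; the paper's DCB formulation is richer.

```python
from collections import Counter

def dcb_precondition(departures, capacity):
    # departures: requested departure time bins; capacity: max operations/bin.
    load, scheduled = Counter(), []
    for t in sorted(departures):
        while load[t] >= capacity:
            t += 1            # greedy ground delay: push to the next bin
        load[t] += 1
        scheduled.append(t)
    return scheduled

print(dcb_precondition([0, 0, 0, 1, 1, 2], capacity=2))  # [0, 0, 1, 1, 2, 2]
```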
|
The entropy of the states associated with the solutions of the equations of
motion of the bosonic open string with combinations of Neumann and Dirichlet
boundary conditions is given. Also, the entropy of the string in the states
$|A^i\rangle = \alpha^{i}_{-1}|0\rangle$ and $|\phi^a\rangle =
\alpha^{a}_{-1}|0\rangle$ that describe the massless fields on the
world-volume of the Dp-brane is computed.
|
The sub-luminal phase velocity of electromagnetic waves in free space is
generally unobtainable, being closely linked to forbidden faster-than-light
group velocities. The requirement of effective sub-luminal phase velocity in
laser-driven particle acceleration schemes imposes a fundamental limit on the
total acceleration achievable in free space, and necessitates the use of
dielectric structures and waveguides for extending the field-particle
interaction. Here we demonstrate a new travelling-source and free-space
propagation approach to overcoming the sub-luminal propagation limits. The
approach exploits the relative ease of generating ultrafast optical sources
with slow group-velocity propagation, and a group-to-phase front conversion
through non-linear optical interaction near a material-vacuum boundary. The
concept is demonstrated with two terahertz generation processes, non-linear
optical rectification and current-surge rectification. The phase velocity is
tunable, both above and below the vacuum speed of light $c$, and we report
measurements of longitudinally polarized electric fields propagating between
$0.77c$ and $1.75c$. The ability to scale to multi-MV/m field strengths is
demonstrated. Our approach paves the way towards the realization of cheap and
compact particle accelerators with unprecedented femtosecond-scale control of
particles.
|
Topological data analysis is becoming a popular way to study high-dimensional
feature spaces without any contextual clues or assumptions. This paper concerns
itself with one popular topological feature, the number of $d$-dimensional
holes in the dataset, also known as the Betti-$d$ number. The persistence of
the Betti numbers over various scales is encoded into a persistence diagram
(PD), which indicates the birth and death times of these holes as the scale
varies. A common way to compare PDs is by a point-to-point matching, given by
the $n$-Wasserstein metric. However, a big drawback of this approach is the
need to solve the correspondence between points before computing the distance;
for $n$ points, the complexity grows as $\mathcal{O}(n^3)$. Instead, we propose
an entirely new framework built on Riemannian geometry that models PDs as 2D
probability density functions represented in the square-root framework on a
Hilbert sphere. The resulting space is much more intuitive, with closed-form
expressions for common operations. The distance metric is 1) correspondence-free
and 2) independent of the number of points in the dataset. The complexity of
computing the distance between PDs now grows as $\mathcal{O}(K^2)$ for a
$K \times K$ discretization of $[0,1]^2$. This also enables the use of existing
machinery in differential geometry for statistical analysis of PDs, such as
computing means, geodesics, classification, etc. We report results competitive
with the Wasserstein metric, at a much lower computational load, indicating the
favorable properties of the proposed approach.
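On the Hilbert sphere of square-root densities, the distance between two discretized PDs reduces to an arc length; a minimal sketch (assuming the two diagrams have already been rendered as K x K densities on [0,1]^2):

```python
import numpy as np

def sphere_distance(p, q):
    # p, q: K x K nonnegative arrays, each normalized to sum to 1.
    # Geodesic distance between sqrt-densities on the unit Hilbert sphere.
    inner = np.clip(np.sum(np.sqrt(p) * np.sqrt(q)), -1.0, 1.0)
    return np.arccos(inner)   # O(K^2), correspondence-free
```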
|
A real-time PC-based algorithm is developed for a DSSSD detector. The complete
fusion nuclear reaction natYb + 48Ca -> 217Th is used to test this algorithm
with a 48Ca beam. An example of the successful application of an earlier
algorithm for a resistive-strip PIPS detector in the 249Bk + 48Ca nuclear
reaction is presented as well. The case of alpha-alpha correlations is also
briefly considered.
|
This paper is concerned with data-driven optimal control of nonlinear
systems. We present a convex formulation of the optimal control problem (OCP)
with a discounted cost function, considering OCPs with both positive and
negative discount factors. The convex approach relies on lifting the nonlinear
system dynamics into the space of densities using the linear Perron-Frobenius
(P-F) operator. This lifting leads to an infinite-dimensional convex
optimization formulation of the optimal control problem. The data-driven
approximation of the optimization problem relies on the approximation of the
Koopman operator using polynomial basis functions. We write the approximate
finite-dimensional optimization problem as a polynomial optimization, which is
then solved efficiently using a sum-of-squares-based optimization framework.
Simulation results are presented to demonstrate the efficacy of the developed
data-driven optimal control framework.
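One common data-driven route to such a finite-dimensional approximation is EDMD with a monomial dictionary; the sketch below is that generic construction (my own helper names), not necessarily the paper's exact procedure.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    # Monomial dictionary up to total degree; X: (n_samples, n_dims).
    n, d = X.shape
    feats = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            f = np.ones(n)
            for i in idx:
                f = f * X[:, i]
            feats.append(f)
    return np.stack(feats, axis=1)

def edmd(X, Y, degree=3):
    # Least-squares Koopman approximation: Psi(Y) ~ Psi(X) @ K.
    PX, PY = poly_features(X, degree), poly_features(Y, degree)
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K
```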
|
Observations of density profiles of galaxies and clusters constrain the
properties of dark matter. Formation of stable halos by collisional fluids with
very low mass particles appears as the most probable interpretation, while
halos formed by high mass particles, left over from a hot big bang, can
scarcely explain the observed density distributions. Detection methods of dark
matter are discussed.
|
We investigate three-dimensional black hole solutions in the realm of pure
and new massive gravity in 2+1 dimensions, induced on a 2-brane embedded in a
flat four-dimensional spacetime. There is no cosmological constant either on
the brane or in the four-dimensional bulk. Only gravitational fields are
turned on, and we indeed find vacuum solutions that are black holes in 2+1
dimensions even in the absence of any cosmological solution. There is a
crossover scale that controls how far the three- or four-dimensional gravity
manifests on the 2-brane. Our solutions also indicate that local BTZ and SdS_3
solutions can flow to local four-dimensional Schwarzschild-like black holes as
one probes from small to large distances, which is clearly a
higher-dimensional manifestation on the 2-brane. This is similar to the DGP
scenario, where the effects of the extra dimension manifest at large probed
distances along the brane.
|
In this note, we aim to provide generalizations of (i) Knuth's old sum (or
Reed-Dawson identity) and (ii) Riordan's identity using a hypergeometric
series approach.
|
The double layer $\nu=2/3$ fractional quantum Hall system is studied using
the edge state formalism and finite-size diagonalization subject to periodic
boundary conditions. Transitions between three different ground states are
observed as the separation as well as the tunneling between the two layers is
varied. Experimental consequences are discussed.
|
The correlation functions of the multi-arc complex matrix model are shown to
be universal for any finite number of arcs. The universality classes are
characterized by the support of the eigenvalue density and are conjectured to
fall into the same classes as the ones recently found for the hermitian model.
This is explicitly shown to be true for the case of two arcs, apart from the
known result for one arc. The basic tool is the iterative solution of the loop
equation for the complex matrix model with multiple arcs, which provides all
multi-loop correlators up to an arbitrary genus. Explicit results for genus one
are given for any number of arcs. The two-arc solution is investigated in
detail, including the double-scaling limit. In addition universal expressions
for the string susceptibility are given for both the complex and hermitian
model.
|
At and near charge neutrality, monolayer graphene in a perpendicular magnetic
field is a quantum Hall ferromagnet. In addition to the highly symmetric
Coulomb interaction, residual lattice-scale interactions, Zeeman, and
sublattice couplings determine the fate of the ground state. Going beyond the
simplest model with ultra-short-range residual couplings to more generic
couplings, one finds integer phases that show the coexistence of magnetic and
lattice order parameters. Here we show that fractional quantum Hall states in
the vicinity of charge neutrality have even richer phase diagrams, with a
plethora of phases with simultaneous magnetic and lattice symmetry breaking.
|
Consider a qubit-qutrit ($2 \times 3$) composite state space. Let
$C(\frac{1}{2}I_2, \frac{1}{3}I_3)$ be the convex set of all possible states of
the composite system whose marginals are given by $\frac{1}{2}I_2$ and
$\frac{1}{3}I_3$ in the two- and three-dimensional spaces, respectively. We
prove that there exists no pure state in $C(\frac{1}{2}I_2, \frac{1}{3}I_3)$.
Further, we generalize this result to arbitrary $m \times n$ bipartite
systems: we prove that for $m < n$, no pure state exists in the convex set
$C(\rho_A,\rho_B)$ for an arbitrary $\rho_A$ and rank of $\rho_B > m$.
|
Graph neural networks for heterogeneous graph embedding project nodes into a
low-dimensional space by exploring the heterogeneity and semantics of the
heterogeneous graph. However, on the one hand, most existing heterogeneous
graph embedding methods either insufficiently model the local structure under
a specific semantic, or neglect the heterogeneity when aggregating information
from it. On the other hand, representations from multiple semantics are not
comprehensively integrated to obtain versatile node embeddings. To address
these problems, we propose a Heterogeneous Graph Neural Network with
Multi-View Representation Learning (named MV-HetGNN) for heterogeneous graph
embedding, introducing the idea of multi-view representation learning. The
proposed model consists of node feature transformation, view-specific ego
graph encoding, and auto multi-view fusion to thoroughly learn complex
structural and semantic information for generating comprehensive node
representations. Extensive experiments on three real-world heterogeneous graph
datasets show that the proposed MV-HetGNN model consistently outperforms all
the state-of-the-art GNN baselines in various downstream tasks, e.g., node
classification, node clustering, and link prediction.
|
We analyze the influence of the magnetic field in the convexity properties of
the relativistic magnetohydrodynamics system of equations. To this purpose we
use the approach of Lax, based on the analysis of the linearly
degenerate/genuinely non-linear nature of the characteristic fields. Degenerate
and non-degenerate states are discussed separately and the non-relativistic,
unmagnetized limits are properly recovered. The characteristic fields
corresponding to the material and Alfv\'en waves are linearly degenerate and,
then, not affected by the convexity issue. The analysis of the characteristic
fields associated with the magnetosonic waves reveals, however, a dependence of
the convexity condition on the magnetic field. The result is expressed in the
form of a generalized fundamental derivative written as the sum of two terms.
The first one is the generalized fundamental derivative in the case of purely
hydrodynamical (relativistic) flow. The second one contains the effects of the
magnetic field. The analysis of this term shows that it is always positive
leading to the remarkable result that the presence of a magnetic field in the
fluid reduces the domain of thermodynamical states for which the EOS is
non-convex.
|
We present the first results from a pilot study to search for distant radio
galaxies in the southern hemisphere (delta < -32 deg). Within a 360 deg^2
region of
sky, we define a sample of 76 ultra-steep spectrum (USS) radio sources from the
843 MHz Sydney University Molonglo Sky Survey (SUMSS) and 1.4 GHz NRAO VLA Sky
Survey (NVSS) radio surveys with alpha_843^1400 < -1.3 and S_1400 > 15 mJy. We
observed 71 sources without bright optical or near-infrared counterparts at
1.385 GHz with the ATCA, providing ~5" resolution images and sub-arcsec
positional accuracy. To identify their host galaxies, we obtained near-IR
K-band images with IRIS2 at the AAT and SofI at the NTT. We identify 92% of the
USS sources down to K~20.5. The SUMSS-NVSS USS sources have a surface density
more than 4 times higher than USS sources selected at lower frequencies. This
is due to the higher effective selection frequency, and the well-matched
resolutions of both surveys constructed using the same source fitting
algorithm. The scattering of alpha >-1.3 sources into the USS sample due to
spectral index uncertainties can account for only 35% of the observed USS
sources. Since our sample appears to contain a similar fraction of very distant
(z>3) galaxies, selecting USS sources from SUMSS-NVSS should allow us to
identify large numbers of massive galaxies at high redshift.
|
The standard model can be extended to include weakly-interacting light
particles (WILPs): a real or complex singlet scalar, a Majorana or Dirac
neutral fermion, a neutral or hidden-charged vector boson, etc. Imposing the
$Z_2$ symmetry, these particles can be promoted to weakly-interacting massive
particles (WIMPs), candidates for dark matter. Instead, imposing the shift
symmetry on the scalar components gives rise to the axion-like particle, the
dark photon, etc. Utilizing these light degrees of freedom along with the
standard model particles and imposing different symmetries, we construct
complete and independent sets of effective operators up to dimension eight
with the Young tensor technique, consistent with the counting from the Hilbert
series.
|
Multi-task learning for molecular property prediction is becoming
increasingly important in drug discovery. However, in contrast to other
domains, the performance of multi-task learning in drug discovery is still not
satisfying, as the number of labeled data points for each task is too limited,
which calls for additional data to compensate for the scarcity. In this paper,
we study multi-task learning for molecular property prediction in a novel
setting, where a relation graph between tasks is available. We first construct
a dataset (ChEMBL-STRING) including around 400 tasks as well as a task
relation graph. Then, to better utilize this relation graph, we propose a
method called SGNN-EBM to systematically investigate structured task modeling
from two perspectives. (1) In the \emph{latent} space, we model the task
representations by applying a state graph neural network (SGNN) on the
relation graph. (2) In the \emph{output} space, we employ structured
prediction with an energy-based model (EBM), which can be efficiently trained
through the noise-contrastive estimation (NCE) approach. Empirical results
justify the effectiveness of SGNN-EBM. Code is available at
https://github.com/chao1224/SGNN-EBM.
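For context, the NCE objective for an EBM amounts to logistic classification of data against noise samples; a minimal PyTorch sketch (generic NCE with a 1:1 noise ratio, not necessarily the paper's exact training loop):

```python
import torch.nn.functional as F

def nce_loss(energy_fn, x_data, x_noise, log_pn_data, log_pn_noise):
    # The model's unnormalized log-density is -energy_fn(x); log_pn_* are the
    # noise distribution's log-densities at the corresponding samples.
    logit_data = -energy_fn(x_data) - log_pn_data
    logit_noise = -energy_fn(x_noise) - log_pn_noise
    return -(F.logsigmoid(logit_data).mean() + F.logsigmoid(-logit_noise).mean())
```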
|
We study the energy transfer in a classical dipole chain of $N$ interacting
rigid rotating dipoles. The underlying high-dimensional potential energy
landscape is analyzed in particular by determining the equilibrium points and
their stability in the common plane of rotation. Starting from the minimal
energy configuration, the response of the chain to excitation of a single
dipole is investigated. Using both the linearized and the exact Hamiltonian of
the dipole chain, we detect an approximate excitation energy threshold between
a weakly and a strongly nonlinear dynamics. In the weakly nonlinear regime, the
chain approaches in the course of time the expected energy equipartition among
the dipoles. For excitations of higher energy, strongly localized excitations
appear whose trajectories in time are either periodic or irregular, relating to
the well-known discrete or chaotic breathers, respectively. The phenomenon of
spontaneous formation of domains of opposite polarization and phase locking is
found to commonly accompany the time evolution of the chaotic breathers.
Finally, the sensitivity of the dipole chain dynamics to the initial conditions
is studied as a function of the initial excitation energy by computing a fast
chaos indicator. The results of this study confirm the aforementioned
approximate threshold value for the initial excitation energy, below which the
dynamics of the dipole chain is regular and above which it is chaotic.
|
We present structural and dynamical studies of layered vanadium pentoxide
(V2O5). Temperature-dependent X-ray diffraction measurements reveal highly
anisotropic and anomalous thermal expansion from 12 K to 853 K. The results do
not show any evidence of a structural phase transition or decomposition of
$\alpha$-V2O5, contrary to previous transmission electron microscopy (TEM) and
electron energy loss spectroscopy (EELS) experiments. Inelastic neutron
scattering measurements performed up to 673 K corroborate the result of our
X-ray diffraction measurements. The analysis of the experimental data is
carried out using ab-initio lattice dynamics calculations. The important role
of van der Waals dispersion and Hubbard interactions on the structure and
dynamics is revealed through the ab-initio calculations. The calculated
anisotropic thermal expansion behavior agrees well with the
temperature-dependent X-ray diffraction. The mechanism of anisotropic thermal
expansion and anisotropic linear compressibility is discussed in terms of the
calculated anisotropy in Gr\"uneisen parameters and elastic coefficients. The
calculated Gibbs free energy in various phases of V2O5 is used to understand
the high-pressure and high-temperature phase diagram of the compound.
Softening of the elastic constant C66 with pressure suggests a possible shear
mechanism for the $\alpha$ to $\beta$ phase transformation under pressure.
|
We give the complete solution to the local diffeomorphism classification
problem of generic singularities which appear in tangent surfaces, in as wide
a range of situations as possible. We interpret tangent geodesics as tangent
lines whenever a (semi-)Riemannian metric, or, more generally, an affine
connection is given in an ambient space of arbitrary dimension. Then, given an
immersed curve, or, more generally, a directed curve or a frontal curve which
has well-defined tangent directions along the curve, we define the tangent
surface as the ruled surface formed by tangent geodesics to the curve. We
apply the characterization of frontal singularities found by Kokubu, Rossman,
Saji, Umehara, Yamada, and by Fujimori, Saji, Umehara, Yamada, as well as
results found by the first author related to the procedure of openings of
singularities.
|
In this paper we formalize the notions of information elements and
information lattices, first proposed by Shannon. Exploiting this formalization,
we identify a comprehensive parallelism between information lattices and
subgroup lattices. Qualitatively, we demonstrate isomorphisms between
information lattices and subgroup lattices. Quantitatively, we establish a
decisive approximation relation between the entropy structures of information
lattices and the log-index structures of the corresponding subgroup lattices.
This approximation extends the approximation for joint entropies carried out
previously by Chan and Yeung. As a consequence of our approximation result, we
show that any continuous law holds in general for the entropies of information
elements if and only if the same law holds in general for the log-indices of
subgroups. As an application, by constructing subgroup counterexamples we find
surprisingly that common information, unlike joint information, obeys neither
the submodularity nor the supermodularity law. We emphasize that the notion of
information elements is conceptually significant: formalizing it helps to
reveal the deep connection between information theory and group theory. The
parallelism established in this paper admits an appealing group-action
explanation and provides useful insights into the intrinsic structure among
information elements from a group-theoretic perspective.
|
Polar faculae are bright features that can be detected in solar limb
observations, and they are related to magnetic field concentrations. Although
a large number of works have studied them, some questions about their nature,
such as their magnetic properties at different heights, are still open. Thus,
we aim to improve the understanding of solar polar faculae. In that sense, we
infer the vertical stratification of the temperature, gas pressure,
line-of-sight velocity, and magnetic field vector of polar faculae regions. We
performed inversions of the Stokes profiles observed with Hinode/SP after
removing the stray-light contamination produced by the spatial point spread
function of the telescope. Moreover, after solving the azimuth ambiguity, we
transformed the magnetic field vector to local solar coordinates. The results
reveal that polar faculae are constituted by hot plasma with low line-of-sight
velocities and single-polarity magnetic fields in the kilogauss range that are
nearly perpendicular to the solar surface. We also found that the spatial
location of these magnetic fields is slightly shifted with respect to the
continuum observations towards the disc centre. We believe that this is due to
the hot wall effect, which allows the detection of photons that come from
deeper layers located closer to the solar limb.
|
Let X be a separable Banach space which admits a separating polynomial; in
particular, X may be a separable Hilbert space. Let $f:X \rightarrow R$ be
bounded, Lipschitz, and $C^1$ with uniformly continuous derivative. Then for
each $\epsilon>0$, there exists an analytic function $g:X \rightarrow R$ with
$\|g-f\|<\epsilon$ and $\|g'-f'\|<\epsilon$.
|
Solving a linear system $Ax=b$ is a fundamental scientific computing
primitive for which numerous solvers and preconditioners have been developed.
These come with parameters whose optimal values depend on the system being
solved and are often impossible or too expensive to identify; thus in practice
sub-optimal heuristics are used. We consider the common setting in which many
related linear systems need to be solved, e.g. during a single numerical
simulation. In this scenario, can we sequentially choose parameters that attain
a near-optimal overall number of iterations, without extra matrix computations?
We answer in the affirmative for Successive Over-Relaxation (SOR), a standard
solver whose parameter $\omega$ has a strong impact on its runtime. For this
method, we prove that a bandit online learning algorithm, using only the
number of iterations as feedback, can select parameters for a sequence of
instances
such that the overall cost approaches that of the best fixed $\omega$ as the
sequence length increases. Furthermore, when given additional structural
information, we show that a contextual bandit method asymptotically achieves
the performance of the instance-optimal policy, which selects the best $\omega$
for each instance. Our work provides the first learning-theoretic treatment of
high-precision linear system solvers and the first end-to-end guarantees for
data-driven scientific computing, demonstrating theoretically the potential to
speed up numerical methods using well-understood learning algorithms.
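A toy version of the setup makes the feedback loop concrete: run SOR, record the iteration count, and feed it to a bandit over a discretized omega grid. The sketch below uses an epsilon-greedy stand-in for illustration; the paper's algorithm and guarantees are more refined.

```python
import numpy as np

rng = np.random.default_rng(0)

def sor_iterations(A, b, omega, tol=1e-8, max_iter=5000):
    # Number of SOR sweeps to reach a relative-residual tolerance.
    x = np.zeros_like(b)
    for it in range(1, max_iter + 1):
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses already-updated entries
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return it
    return max_iter

def epsilon_greedy_sor(instances, omegas, eps=0.1):
    cost, pulls = np.zeros(len(omegas)), np.zeros(len(omegas))
    for A, b in instances:
        if rng.random() < eps or pulls.min() == 0:
            k = rng.integers(len(omegas))        # explore
        else:
            k = int(np.argmin(cost / pulls))     # exploit cheapest arm so far
        cost[k] += sor_iterations(A, b, omegas[k])
        pulls[k] += 1
    return omegas[int(np.argmin(cost / np.maximum(pulls, 1)))]
```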
|
The interaction of oxygen with noble metals such as silver has been an
important topic of research for many decades. Here, we show the occurrence of
a peak in the density of states (DOS) at the Fermi level ($E_F$) when oxygen
atoms occupy disordered substitutional positions in noble metals such as Ag,
Au, or the Ag-Au alloy. This results in a large enhancement of the DOS at
$E_F$ with respect to Ag or Au metal. Its origin is attributed to an O 2$p$
related, disorder-broadened flat band that straddles almost all the
high-symmetry directions of the Brillouin zone. Our work suggests that if a
large concentration of disordered oxygen can be realized in nanostructures of
noble metals, it may lead to interesting phenomena.
|
We report on the results of INTEGRAL observations of the neutron star low
mass X-ray binary SAX J1810.8-2609 during its latest active phase in August
2007. The current outburst is the first one since 1998, and the derived
luminosity is 1.1-2.6x10^36 erg s^-1 in the 20-100 keV energy range. This low
outburst luminosity and the long-term time-averaged accretion rate of
~5x10^-12 Msolar/yr suggest that SAX J1810.8-2609 is a faint soft X-ray
transient. During the flux increase, the spectra are consistent with a thermal
Comptonization model with a plasma temperature of ~23-30 keV and an optical
depth of ~1.2-1.5, independent of the luminosity of the system. This is a
typical low/hard spectral state, in which the X-ray emission is attributed to
the upscattering of soft seed photons by a hot, optically thin electron
plasma. During the decay, the spectra have a different shape, the high-energy
tail being compatible with a single power law. This confirms similar behavior
observed by BeppoSAX during the previous outburst, with the absence of a
visible cutoff in the hard X-ray spectrum. The INTEGRAL/JEM-X instrument
observed four X-ray bursts in Fall 2007. The first one has the highest peak
flux (~3.5 Crab in 3-25 keV), giving an upper limit to the distance of the
source of about 5.7 kpc for L_Edd ~ 3.8x10^38 erg s^-1. The observed
recurrence time of ~1.2 days and the ratio of the total energy emitted in the
persistent flux to that emitted in the bursts (~73) allow us to conclude that
the burst fuel was composed of mixed hydrogen and helium with X>0.4.
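For reference, the distance limit quoted above presumably follows from the standard Eddington-limited burst argument:

```latex
d \le \sqrt{\frac{L_{\rm Edd}}{4\pi F_{\rm peak}}},
\qquad L_{\rm Edd} \approx 3.8 \times 10^{38}\ \mathrm{erg\,s^{-1}} .
```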
|
Dengue and Zika incidence data and the latest research have raised questions
about how dengue vaccine strategies might be impacted by the emergence of Zika
virus. Existing antibodies to one virus might temporarily protect or promote
infection by the other through antibody-dependent enhancement (ADE). Under
this condition, understanding the dynamics of propagation of these two viruses
is of great importance when implementing vaccines. In this work, we analyze
the effect of vaccination against one strain in a two-strain model that
accounts for cross-immunity and ADE. Using basic and invasion reproductive
numbers, we examine the dynamics of the model and provide conditions ensuring
the stability of the disease-free equilibrium.
cross-immunity, ADE and vaccination rate under which the vaccination could
ensure the global stability of the disease-free equilibrium. The results
indicate scenarios in which vaccination against one strain may improve or
worsen the control of the other, as well as contribute to the eradication or
persistence of one or both viruses in the population.
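A bare-bones version of such a two-strain system with ADE and single-strain vaccination can be written down directly; the compartments, parameters, and the omitted fully-immune class below are my simplifications for illustration, not the paper's model.

```python
from scipy.integrate import solve_ivp

def rhs(t, y, beta1, beta2, gamma, phi, nu):
    # s: susceptible; i1, i2: primary infections; r1, r2: single-strain immune;
    # j1, j2: secondary (ADE-enhanced, factor phi) infections; recovery from a
    # secondary infection exits to an implicit fully-immune class.
    s, i1, i2, r1, r2, j1, j2 = y
    l1 = beta1 * (i1 + phi * j1)   # force of infection, strain 1
    l2 = beta2 * (i2 + phi * j2)
    return [-(l1 + l2) * s - nu * s,
            l1 * s - gamma * i1,
            l2 * s - gamma * i2,
            gamma * i1 - l2 * r1 + nu * s,   # vaccination gives strain-1 immunity
            gamma * i2 - l1 * r2,
            l1 * r2 - gamma * j1,
            l2 * r1 - gamma * j2]

sol = solve_ivp(rhs, (0, 500), [0.99, 0.005, 0.005, 0, 0, 0, 0],
                args=(0.4, 0.4, 0.1, 1.5, 0.01), max_step=1.0)
```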
|
A connected undirected graph is called \emph{geodetic} if for every pair of
vertices there is a unique shortest path connecting them. It has been
conjectured that for finite groups, the only geodetic Cayley graphs which occur
are odd cycles and complete graphs. In this article we present a series of
theoretical results which contribute to a computer search verifying this
conjecture for all groups of size up to 1024. The conjecture is also verified
theoretically for several infinite families of groups including dihedral and
some families of nilpotent groups. Two key results which enable the computer
search to reach as far as it does are: if the center of a group has even order,
then the conjecture holds (this eliminates all 2-groups from our computer
search); if a Cayley graph is geodetic then there are bounds relating the size
of the group, generating set and center (which cuts down the number of
generating sets which must be searched significantly).
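Checking the geodetic property itself is mechanical; the brute-force sketch below (adequate only for small graphs, unlike the structured search described above) asks whether any vertex pair admits a second shortest path.

```python
import networkx as nx
from itertools import combinations

def is_geodetic(G):
    # A connected graph is geodetic iff every vertex pair has a unique
    # shortest path; we simply look for a second one.
    for u, v in combinations(G.nodes, 2):
        paths = nx.all_shortest_paths(G, u, v)
        next(paths)                       # the first shortest path
        if next(paths, None) is not None:
            return False
    return True

print(is_geodetic(nx.cycle_graph(5)))   # odd cycle -> True
print(is_geodetic(nx.cycle_graph(4)))   # even cycle -> False
```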
|
Identification of model parameters in computer simulations is an important
topic in computer experiments. We propose a new method, called the projected
kernel calibration method, to estimate these model parameters. The proposed
method is proven to be asymptotically normal and semi-parametrically
efficient. As a
frequentist method, the proposed method is as efficient as the $L_2$
calibration method proposed by Tuo and Wu [Ann. Statist. 43 (2015) 2331-2352].
On the other hand, the proposed method has a natural Bayesian version, which
the $L_2$ method does not have. This Bayesian version allows users to calculate
the credible region of the calibration parameters without using a large sample
approximation. We also show that the inconsistency problem of the calibration
method proposed by Kennedy and O'Hagan [J. R. Stat. Soc. Ser. B. Stat.
Methodol. 63 (2001) 425-464] can be rectified by a simple modification of the
kernel matrix.
|
The prospects for observing an invisibly decaying Higgs boson in t anti-t H
production at the LHC are discussed. An isolated lepton, a reconstructed
hadronic top-quark decay, two identified b-jets, and large missing transverse
energy are
proposed as the final state signature for event selection. Only the Standard
Model backgrounds are taken into account. It is shown that the t anti-t Z, t
anti-t W, b anti-b Z and b anti-b W backgrounds can individually be suppressed
below the signal expectation. The dominant source of background remains the t
anti-t production. The key for observability will be an experimental selection
which allows further suppression of the contributions from the t anti-t events
with one of the top-quarks decaying into a tau lepton. Depending on the details
of the final analysis, an excess of the signal events above the Standard Model
background of about 10% to 100% can be achieved in the mass range m_H= 100-200
GeV.
|
The growth rate of matter perturbation and the expansion rate of the Universe
can be used to distinguish modified gravity and dark energy models in
explaining cosmic acceleration. We explore here the inclusion of spatial
curvature into the growth factor. We expand previous results using the
approximation $\Omega_{m}^\gamma$ and then suggest a new form,
$f_a=\Omega_m^\gamma+(\gamma-4/7)\Omega_k$, as an approximation for the growth
factor when the curvature $\Omega_k$ is not negligible, and where the growth
index $\gamma$ is usually model dependent. The expression recovers the standard
results for the curved and flat $\Lambda$CDM and Dvali-Gabadadze-Porrati
models. Using the best fit values of $\Omega_{m0}$ and $\Omega_{k0}$ to the
expansion/distance measurements from Type Ia supernovae, baryon acoustic
oscillation, WMAP5, and $H(z)$ data, we fit the growth index parameter to
current growth factor data and obtain $\gamma_{\Lambda}(\Omega_{k} \not= 0) =
0.65^{+0.17}_{-0.15}$ and $\gamma_{DGP}(\Omega_{k} \not= 0) =
0.53^{+0.14}_{-0.12}$. For the $\Lambda$CDM model, the 1-$\sigma$ observational
bounds are found consistent with theoretical value, unlike the case for the
Dvali-Gabadadze-Porrati model. We also find that the current data we used are
not enough to place significant constraints when the three parameters in
$f_a$ are fit simultaneously. Importantly, we find that, in the presence of
curvature,
the analytical expression proposed for $f_a$ provides a better fit to the
growth factor than other forms and should be useful for future high precision
missions and studies.
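The proposed approximation is a one-liner to evaluate, which makes it convenient for the fits described above; numbers in the usage line are illustrative only.

```python
def growth_factor_fa(omega_m, omega_k, gamma):
    # f_a = Omega_m**gamma + (gamma - 4/7) * Omega_k
    return omega_m ** gamma + (gamma - 4.0 / 7.0) * omega_k

print(growth_factor_fa(0.27, 0.0, 0.55))   # flat LCDM-like inputs, illustrative
```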
|
The present paper introduces a jump-diffusion extension of the classical
diffusion default intensity model by means of subordination in the sense of
Bochner. We start from the bi-variate process $(X,D)$ of a diffusion state
variable $X$ driving default intensity and a default indicator process $D$ and
time change it with a L\'{e}vy subordinator ${\mathcal{T}}$. We characterize
the time-changed process
$(X^{\phi}_t,D^{\phi}_t)=(X({\mathcal{T}}_t),D({\mathcal{T}}_t))$ as a
Markovian--It\^{o} semimartingale and show from the Doob--Meyer decomposition
of $D^{\phi}$ that the default time in the time-changed model has a
jump-diffusion or a pure jump intensity. When $X$ is a CIR diffusion with
mean-reverting drift, the default intensity of the subordinate model (SubCIR)
is a jump-diffusion or a pure jump process with mean-reverting jumps in both
directions that stays nonnegative. The SubCIR default intensity model is
analytically tractable by means of explicitly computed eigenfunction expansions
of relevant semigroups, yielding closed-form pricing of credit-sensitive
securities.
|
MnBi2Te4 has recently been predicted and shown to be a magnetic topological
insulator with intrinsic antiferromagnetic order. However, it remains a
challenge to grow stoichiometric MnBi2Te4 films by molecular beam epitaxy (MBE)
and to observe pure antiferromagnetic order by magnetometry. We report on a
detailed study of MnBi2Te4 films grown on Si(111) by MBE with elemental
sources. Films of about 100 nm thickness are analyzed in stoichiometric,
structural, magnetic and magnetotransport properties with high accuracy.
High-quality MnBi2Te4 films with nearly perfect septuple-layer structure are
realized and structural defects typical for epitaxial van-der-Waals layers are
analyzed. The films reveal antiferromagnetic order with a N\'eel temperature of
19 K, a spin-flop transition at a magnetic field of 2.5 T and a resistivity of
1.6 mOhm cm. These values are comparable to that of bulk MnBi2Te4 crystals. Our
results provide an important basis for realizing and identifying single-phase
MnBi2Te4 films with antiferromagnetic order grown by MBE.
|
Differential privacy (DP) is increasingly used to protect the release of
hierarchical, tabular population data, such as census data. A common approach
for implementing DP in this setting is to release noisy responses to a
predefined set of queries. For example, this is the approach of the TopDown
algorithm used by the US Census Bureau. Such methods have an important
shortcoming: they cannot answer queries for which they were not optimized. An
appealing alternative is to generate DP synthetic data, which is drawn from
some generating distribution. Like the TopDown method, synthetic data can also
be optimized to answer specific queries, while also allowing the data user to
later submit arbitrary queries over the synthetic population data. To our
knowledge, there has not been a head-to-head empirical comparison of these
approaches. This study conducts such a comparison between the TopDown algorithm
and private synthetic data generation to determine how accuracy is affected by
query complexity, in-distribution vs. out-of-distribution queries, and privacy
guarantees. Our results show that for in-distribution queries, the TopDown
algorithm achieves significantly better privacy-fidelity tradeoffs than any of
the synthetic data methods we evaluated; for instance, in our experiments,
TopDown achieved at least $20\times$ lower error on counting queries than the
leading synthetic data method at the same privacy budget. Our findings suggest
guidelines for practitioners and the synthetic data research community.
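As a reference point for the query-release side of this comparison, the basic epsilon-DP counting query is just the Laplace mechanism; the sketch below is that textbook baseline, not the TopDown algorithm or any of the evaluated synthetic-data generators.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon, sensitivity=1.0):
    # Release a count with Laplace noise calibrated to sensitivity/epsilon.
    return true_count + rng.laplace(scale=sensitivity / epsilon)

print(laplace_count(1234, epsilon=0.5))
```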
|
The famous F5 algorithm for computing Gr\"obner bases was presented by
Faug\`ere in 2002. The original version of F5 is given in programming codes,
so it is a bit difficult to understand. In this paper, the F5 algorithm is
simplified as F5B in a Buchberger style such that it is easy to understand and
implement. In order to describe F5B, we introduce F5-reduction, which keeps
the signature of labeled polynomials unchanged after reduction. The
equivalence between F5 and F5B is also shown. At last, some versions of the F5
algorithm are illustrated.
|
We construct a model which exhibits resistivity going as a power law in
temperature $T$, as $T^\alpha$ down to the lowest temperature. There is no
residual resistivity because we assume the absence of disorder and momentum
relaxation is due to umklapp scattering. Our model consists of a quantum spin
liquid state with spinon Fermi surface and a hole Fermi surface made out of
doped holes. The key ingredient is a set of singular $2k_F$ modes living on a
ring in momentum space. Depending on parameters, $\alpha$ may be unity (strange
metal) or even smaller. The model may be applicable to a doped organic
compound, which has been found to exhibit linear T resistivity. We conclude
that it is possible to obtain strange metal behavior starting with a model
which is not so strange.
|
This review discusses the physics of the formation of planetary nebulae
around low mass WR stars, or [WR] stars. It especially focuses on the
differences which can be expected due to the different character of the fast
winds from these [WR] stars. Their fast winds are more massive and are highly H
deficient and metal enriched compared to the winds of normal central stars of
planetary nebulae. This is expected to lead to faster expansion velocities for
the nebulae and a longer momentum-driven phase in the evolution of the
wind-driven bubble, leading to more turbulent nebulae. The observational
evidence also shows that the process which produces the [WR] stars is unlikely
to influence the onset of aspherical mass loss, something which can be used as
a test for models for aspherical mass loss from AGB and post-AGB stars. Finally
it is shown that the nebular characteristics rule out a very late He shell
flash as the origin of most [WR] stars.
|
In the conventional approach, fermionic test fields lead to a generic
overspinning of black holes resulting in the formation of naked singularities.
The absorption of the fermionic test fields with arbitrarily low frequencies is
allowed for which the contribution to the angular momentum parameter of the
space-time diverges. Recently we have suggested a more subtle treatment of the
problem considering the fact that only the fraction of the test fields that is
absorbed by the black hole contributes to the space-time parameters. Here, we
re-consider the interaction of massless spin-$1/2$ fields with Kerr and
Kerr-Newman black holes, adopting this new approach. We show that the drastic
divergence problem disappears when one incorporates the absorption
probabilities. Still, there exists a range of parameters for the test fields
that can lead to overspinning. We employ backreaction effects due to the
self-energy of the test fields which fixes the overspinning problem for fields
with relatively large amplitudes, and renders it non-generic for smaller
amplitudes. This non-generic overspinning appears likely to be fixed by
alternative semi-classical and quantum effects.
|
We investigate relationships between online self-disclosure and received
social feedback during the COVID-19 crisis. We crawl a total of 2,399 posts and
29,851 associated comments from the r/COVID19_support subreddit and manually
extract fine-grained personal information categories and types of social
support sought from each post. We develop a BERT-based ensemble classifier to
automatically identify types of support offered in users' comments. We then
analyze the effect of personal information sharing and posts' topical, lexical,
and sentiment markers on the acquisition of support and five interaction
measures (submission scores, the number of comments, the number of unique
commenters, the length and sentiments of comments). Our findings show that: 1)
users were more likely to share their age, education, and location information
when seeking both informational and emotional support, as opposed to pursuing
either one; 2) while personal information sharing was positively correlated
with receiving informational support when requested, it did not correlate with
emotional support; 3) as the degree of self-disclosure increased, informational
support seekers obtained higher submission scores and longer comments, whereas
emotional support seekers' self-disclosure resulted in lower submission scores,
fewer comments, and fewer unique commenters; 4) post characteristics affecting
social feedback differed significantly based on types of support sought by post
authors. These results provide empirical evidence for the varying effects of
self-disclosure on acquiring desired support and user involvement online during
the COVID-19 pandemic. Furthermore, this work can assist support seekers hoping
to enhance and prioritize specific types of social feedback.
|
In this paper we introduce a new approach to determinant functors which
allows us to extend Deligne's determinant functors for exact categories to
Waldhausen categories, (strongly) triangulated categories, and derivators. We
construct universal determinant functors in all cases by original methods which
are interesting even for the known cases. Moreover, we show that the target of
each universal determinant functor computes the corresponding $K$-theory in
dimensions 0 and 1. As applications, we answer open questions by Maltsiniotis
and Neeman on the $K$-theory of (strongly) triangulated categories and a
question of Grothendieck to Knudsen on determinant functors. We also prove
additivity theorems for low-dimensional $K$-theory and obtain generators and
(some) relations for various $K_{1}$-groups.
|
In accordance with current models of the accelerating Universe as a spacetime
with a positive cosmological constant, new results about a cosmological upper
bound for the area of stable marginally outer trapped surfaces are found,
taking into account angular momentum, gravitational waves and matter. Compared to
previous results which take into account only some of the aforementioned
variables, the bound is found to be tighter, giving a concrete limit to the
size of black holes especially relevant in the early Universe.
|
In this paper we study the ways to use a global entangling operator to
efficiently implement circuitry common to a selection of important quantum
algorithms. In particular, we focus on the circuits composed with global Ising
entangling gates and arbitrary addressable single-qubit gates. We show that
under certain circumstances the use of global operations can substantially
improve the entangling gate count.
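As a concrete, hedged illustration, the numpy sketch below builds a global Ising entangler of the form $U = \exp(-i\chi \sum_{i<j} X_i X_j)$, which acts on every qubit pair at once, by brute-force matrix exponentiation; the XX form and the helper names are assumptions for the example, not the paper's gate set.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pair_xx(n, i, j):
    """The operator X_i X_j on n qubits (identity elsewhere)."""
    return reduce(np.kron, [X if k in (i, j) else I2 for k in range(n)])

def global_ising(n, chi):
    """U = exp(-i * chi * sum_{i<j} X_i X_j): all pairs entangled at once."""
    H = sum(pair_xx(n, i, j) for i in range(n) for j in range(i + 1, n))
    return expm(-1j * chi * H)

U = global_ising(3, np.pi / 4)   # 8x8 unitary acting on three qubits
```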
|
We study irreducible ${\rm SL}_2$-representations of twist knots. We first
determine all non-acyclic ${\rm SL}_2(\mathbb{C})$-representations, which turn
out to lie on a line denoted as $x=y$ in $\mathbb{R}^2$. Our main tools are
character variety, Reidemeister torsion, and Chebyshev polynomials. We also
verify a certain common tangent property, which yields a result on the
$L$-functions of universal deformations, that is, the orders of the associated
knot modules. Secondly, we prove that a representation is on the line $x=y$ if
and only if it factors through the $(-3)$-Dehn surgery, and is non-acyclic if
and only if the image of a certain element is of order 3. Finally, we study
absolutely irreducible non-acyclic representations $\overline{\rho}$ over a
finite field with characteristic $p>2$ to concretely determine all non-trivial
$L$-functions $L_{\rho}$ of the universal deformations over a complete discrete
valuation ring (CDVR). We show
among other things that $L_{\rho}$ $\dot{=}$ $k_n(x)^2$ holds for a certain
series $k_n(x)$ of polynomials.
|
We present a series of modifications which improve upon Graph WaveNet's
previously state-of-the-art performance on the METR-LA traffic prediction task.
The goal of this task is to predict the future speed of traffic at each sensor
in a network using the past hour of sensor readings. Graph WaveNet (GWN) is a
spatio-temporal graph neural network which interleaves graph convolution to
aggregate information from nearby sensors and dilated convolutions to aggregate
information from the past. We improve GWN by (1) using better hyperparameters,
(2) adding connections that allow larger gradients to flow back to the early
convolutional layers, and (3) pretraining on an easier short-term traffic
prediction task. These modifications reduce the mean absolute error by 0.06 on
the METR-LA task, nearly equal to GWN's improvement over its predecessor. These
improvements generalize to the PEMS-BAY dataset, with similar relative
magnitude. We also show that ensembling separate models for short- and long-term
predictions further improves performance. Code is available at
https://github.com/sshleifer/Graph-WaveNet .
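A minimal sketch of the horizon-wise ensembling idea follows (the split point, array shapes, and function name are illustrative assumptions; the exact scheme is in the linked repository):

```python
import numpy as np

def ensemble_forecast(short_pred, long_pred, split_horizon):
    """Combine forecasts of shape (num_horizons, num_sensors): use the
    short-term model for the first `split_horizon` steps and the
    long-term model for the rest."""
    out = long_pred.copy()
    out[:split_horizon] = short_pred[:split_horizon]
    return out

# e.g. 12 horizons of 5-minute steps (one hour) over 207 METR-LA sensors
short_pred, long_pred = np.random.rand(12, 207), np.random.rand(12, 207)
combined = ensemble_forecast(short_pred, long_pred, split_horizon=3)
```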
|
In this paper, for a compact Lie group action, we prove the anomaly formula
and the functoriality of the equivariant Bismut-Cheeger eta forms with
perturbation operators when the equivariant family index vanishes. In order to
prove them, we extend the Melrose-Piazza spectral section and its main
properties to the equivariant case and introduce the equivariant version of the
Dai-Zhang higher spectral flow for arbitrary dimensional fibers. Using these
results, we construct a new analytic model of the equivariant differential
K-theory for compact manifolds when the group action has finite stabilizers
only, which modifies the Bunke-Schick model of the differential K-theory. This
model could also be regarded as an analytic model of the differential K-theory
for compact orbifolds. Especially, we answer a question proposed by Bunke and
Schick about the well-definedness of the push-forward map.
|
Pre-trained language models such as BERT have exhibited remarkable
performance in many natural language understanding (NLU) tasks. The tokens
in the models are usually fine-grained in the sense that for languages like
English they are words or sub-words and for languages like Chinese they are
characters. In English, for example, there are multi-word expressions which
form natural lexical units and thus the use of coarse-grained tokenization also
appears to be reasonable. In fact, both fine-grained and coarse-grained
tokenizations have advantages and disadvantages for learning of pre-trained
language models. In this paper, we propose a novel pre-trained language model,
referred to as AMBERT (A Multi-grained BERT), on the basis of both fine-grained
and coarse-grained tokenizations. For English, AMBERT takes both the sequence
of words (fine-grained tokens) and the sequence of phrases (coarse-grained
tokens) as input after tokenization, employs one encoder for processing the
sequence of words and the other encoder for processing the sequence of the
phrases, utilizes shared parameters between the two encoders, and finally
creates a sequence of contextualized representations of the words and a
sequence of contextualized representations of the phrases. Experiments have
been conducted on benchmark datasets for Chinese and English, including CLUE,
GLUE, SQuAD and RACE. The results show that AMBERT outperforms BERT in all
cases, with particularly significant improvements for Chinese. We also develop
a method to improve the inference efficiency of AMBERT, which still performs
better than BERT at the same computational cost.
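A minimal PyTorch sketch of the central AMBERT idea follows: one weight-shared encoder applied to both tokenizations (the layer sizes and class name are illustrative assumptions, not the released model).

```python
import torch
import torch.nn as nn

class MultiGrainedEncoder(nn.Module):
    """One shared Transformer encoder processes both the fine-grained
    (word) and coarse-grained (phrase) token sequences, yielding two
    sequences of contextualized representations."""
    def __init__(self, vocab_size=30000, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # shared weights

    def forward(self, fine_ids, coarse_ids):
        return (self.encoder(self.embed(fine_ids)),
                self.encoder(self.embed(coarse_ids)))

model = MultiGrainedEncoder()
fine = torch.randint(0, 30000, (2, 128))   # word-level token ids
coarse = torch.randint(0, 30000, (2, 64))  # phrase-level token ids
fine_repr, coarse_repr = model(fine, coarse)
```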
|
If the assumption that physical space has a trivial topology is dropped, then
the Universe may be described by a multiply connected Friedmann-Lema\^{\i}tre
model on a sub-horizon scale. Specific candidates for the multiply connected
space manifold have already been suggested. How precisely would a significant
detection of multiple topological images of a single object, or a region on the
cosmic microwave background, (due to photons arriving at the observer by
multiple paths which have crossed the Universe in different directions),
constrain the values of the curvature parameters $\Omega_0$ and $\lambda_0$?
The way that the constraints on $\Omega_0$ and $\lambda_0$ depend on the
redshifts of multiple topological images and on their radial and tangential
separations is presented and calculated. The tangential separations give the
tighter constraints: multiple topological images of known types of
astrophysical objects at redshifts $z \lesssim 3$ would imply values of
$\Omega_0$ and $\lambda_0$ more precise than $\sim 1\%$ and $\sim 10\%$, respectively.
Cosmic microwave background `spots' identified with lower redshift objects by
the Planck or MAP satellites would provide similar precision. This method is
purely geometrical: no dynamical assumptions (such as the virial theorem) are
required, and the constraints are independent of the Hubble constant, $H_0$.
|
We investigate the rigidity of the $\ell^p$ analog of Roe-type algebras. In
particular, we show that if $p\in[1,\infty)\setminus\{2\}$, then an isometric
isomorphism between the $\ell^p$ uniform Roe algebras of two metric spaces with
bounded geometry yields a bijective coarse equivalence between the underlying
metric spaces, while a stable isometric isomorphism yields a coarse
equivalence. We also obtain similar results for other $\ell^p$ Roe-type
algebras. In this paper, we do not assume that the metric spaces have Yu's
property A or finite decomposition complexity.
|
Viral marketing is becoming important due to the popularity of online social
networks (OSNs). Companies may provide incentives (e.g., via free samples of a
product) to a small group of users in an OSN, and these users provide
recommendations to their friends, which eventually increases the overall sales
of a given product. Nevertheless, this also opens a door for "malicious
behaviors": dishonest users may intentionally give misleading recommendations
to their friends so as to distort the normal sales distribution. In this paper,
we propose a detection framework to identify dishonest users in OSNs. In
particular, we present a set of fully distributed and randomized algorithms,
and also quantify the performance of the algorithms by deriving probability of
false positive, probability of false negative, and the distribution of number
of detection rounds. Extensive simulations are also carried out to illustrate
the impact of misleading recommendations and the effectiveness of our detection
algorithms. The methodology we present here will enhance the security level of
viral marketing in OSNs.
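As a toy illustration of how such error probabilities can be estimated, the Monte Carlo sketch below simulates a simple randomized threshold detector over repeated rounds; its parameters and decision rule are assumptions for the example, not the paper's algorithms.

```python
import random

def detect(is_dishonest, rounds=20, p_mislead=0.8, p_noise=0.2, threshold=0.5):
    """Toy detector: each round observes one recommendation; dishonest
    users mislead with prob p_mislead, honest users err with prob p_noise.
    Flag the user if the fraction of bad rounds exceeds the threshold."""
    p = p_mislead if is_dishonest else p_noise
    bad = sum(random.random() < p for _ in range(rounds))
    return bad / rounds > threshold

trials = 10_000
fp = sum(detect(False) for _ in range(trials)) / trials      # false positive
fn = sum(not detect(True) for _ in range(trials)) / trials   # false negative
print(f"P(false positive) ~ {fp:.4f}, P(false negative) ~ {fn:.4f}")
```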
|
The behavior of deep neural networks (DNNs) is hard to understand. This makes
it necessary to explore post hoc explanation methods. We conduct the first
comprehensive evaluation of explanation methods for NLP. To this end, we design
two novel evaluation paradigms that cover two important classes of NLP
problems: small context and large context problems. Both paradigms require no
manual annotation and are therefore broadly applicable. We also introduce
LIMSSE, an explanation method inspired by LIME that is designed for NLP. We
show empirically that LIMSSE, LRP and DeepLIFT are the most effective
explanation methods and recommend them for explaining DNNs in NLP.
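A minimal sketch of the LIME-style recipe that LIMSSE builds on: perturb the input, score each perturbation with the model, and fit a linear surrogate whose weights serve as per-token attributions. LIMSSE itself samples substrings rather than the random masks used below, which are a simplification.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_attributions(tokens, predict_fn, num_samples=500, seed=0):
    """Random-mask LIME: coefficients of a local linear surrogate are
    interpreted as token importances for this one input."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_samples, len(tokens)))  # 1 = keep
    texts = [" ".join(t for t, m in zip(tokens, row) if m) for row in masks]
    scores = np.array([predict_fn(t) for t in texts])
    surrogate = Ridge(alpha=1.0).fit(masks, scores)
    return dict(zip(tokens, surrogate.coef_))

# toy "model" that scores the presence of the word "good"
predict = lambda text: float("good" in text.split())
print(lime_attributions(["the", "movie", "was", "good"], predict))
```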
|
This paper consists of two parts. First, the (undirected) Hamiltonian path
problem is reduced to a signal filtering problem: the number of Hamiltonian
paths becomes the amplitude at zero frequency of (a combination of) sinusoidal
signals f(t) that encode a graph. Then a 'divide and conquer' strategy for
filtering out wide-bandwidth components of a signal is suggested: one filters
out angular frequencies 1/2 to 1, then 1/4 to 1/2, then 1/8 to 1/4, and so on.
An actual implementation of this strategy involves careful local polynomial
extrapolation using numerical differentiation filters. If conjectures
regarding the required number of samples for the specified filter designs and
the time complexity of obtaining the filter coefficients hold, then P=NP.
|
We present a spectral analysis of NuSTAR and NICER observations of the
luminous, persistently accreting neutron star (NS) low-mass X-ray binary Cygnus
X-2. The data were divided into different branches that the source traces out
on the Z-track of the X-ray color-color diagram; namely the horizontal branch,
normal branch, and the vertex between the two. The X-ray continuum spectrum was
modeled in two different ways that produced a comparable quality fit. The
spectra showed clear evidence of a reflection component in the form of a
broadened Fe K line, as well as a lower energy emission feature near 1 keV
likely due to an ionized plasma located far from the innermost accretion disk.
We account for the reflection spectrum with two independent models (relxillns
and rdblur*rfxconv). The inferred inclination is in agreement with earlier
estimates from optical observations of ellipsoidal light curve modeling
(relxillns: $i=67^{\circ}\pm4^{\circ}$, rdblur*rfxconv:
$i=60^{\circ}\pm10^{\circ}$). The inner disk radius remains close to the NS
($R_{\rm in}\leq1.15\ R_{\mathrm{ISCO}}$) regardless of the source position
along the Z-track or how the 1 keV feature is modeled. Given the optically
determined NS mass of $1.71\pm0.21\ M_{\odot}$, this corresponds to a
conservative upper limit of $R_{\rm in}\leq19.5$ km for $M=1.92\ M_{\odot}$ or
$R_{\rm in}\leq15.3$ km for $M=1.5\ M_{\odot}$. We compare these radius
constraints to those obtained from NS gravitational wave merger events and
recent NICER pulsar light curve modeling measurements.
|
The scaling behaviors of anisotropic flows of light charged particles are
studied for 25 MeV/nucleon $^{40}$Ca+$^{40}$Ca collisions at different impact
parameters within the isospin-dependent quantum molecular dynamics model.
Number-of-nucleons scaling of the elliptic flow is found to hold, and the
scaling of the ratios $v_{4}/v_{2}^{2}$ and $v_{3}/(v_{1}v_{2})$ is applicable
for collisions at almost all impact parameters except peripheral collisions.
|
Standard approaches to sequential decision-making exploit an agent's ability
to continually interact with its environment and improve its control policy.
However, due to safety, ethical, and practicality constraints, this type of
trial-and-error experimentation is often infeasible in many real-world domains
such as healthcare and robotics. Instead, control policies in these domains are
typically trained offline from previously logged data or in a growing-batch
manner. In this setting a fixed policy is deployed to the environment and used
to gather an entire batch of new data before being aggregated with past batches
and used to update the policy. This improvement cycle can then be repeated
multiple times. While a limited number of such cycles is feasible in real-world
domains, the quality and diversity of the resulting data are much lower than in
the standard continually-interacting approach. However, data collection in
these domains is often performed in conjunction with human experts, who are
able to label or annotate the collected data. In this paper, we first explore
the trade-offs present in this growing-batch setting, and then investigate how
information provided by a teacher (i.e., demonstrations, expert actions, and
gradient information) can be leveraged at training time to mitigate the sample
complexity and coverage requirements for actor-critic methods. We validate our
contributions on tasks from the DeepMind Control Suite.
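A schematic of the growing-batch cycle with teacher annotation follows; every helper below is an illustrative stand-in, not an actual agent, environment, or DeepMind Control Suite task.

```python
import random

def initial_policy():
    return {"noise": 1.0}                       # stand-in policy parameters

def deploy_and_collect(policy, batch_size=100):
    # a FIXED policy interacts with the environment for a whole batch
    return [random.gauss(0.0, policy["noise"]) for _ in range(batch_size)]

def teacher_annotate(batch):
    # a human expert attaches labels / expert actions to the collected data
    return [(x, x > 0.0) for x in batch]

def update_policy(policy, dataset):
    # stand-in for an offline actor-critic update on ALL data so far
    return {"noise": max(0.1, policy["noise"] * 0.8)}

dataset, policy = [], initial_policy()
for cycle in range(5):                          # only a few cycles are feasible
    batch = teacher_annotate(deploy_and_collect(policy))
    dataset.extend(batch)                       # batches aggregate across cycles
    policy = update_policy(policy, dataset)
```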
|
Vision foundation models are renowned for their generalization ability due to
massive training data. Nevertheless, they demand tremendous training resources,
and the training data is often inaccessible (e.g., for CLIP and DINOv2), posing great
challenges to developing derivatives that could advance research in this field.
In this work, we offer a very simple and general solution, named Proteus, to
distill foundation models into smaller equivalents on ImageNet-1K without
access to the original training data. Specifically, we remove the designs from
conventional knowledge distillation settings that result in dataset bias and
present three levels of training objectives, i.e., token, patch, and feature,
to maximize the efficacy of knowledge transfer. In this manner, Proteus is
trained at ImageNet-level costs with surprising ability, facilitating the
accessibility of training foundation models for the broader research community.
Leveraging DINOv2-g/14 as the teacher, Proteus-L/14 matches the performance of
the Oracle method DINOv2-L/14 (142M training data) across 15 benchmarks and
outperforms other vision foundation models including CLIP-L/14 (400M),
OpenCLIP-L/14 (400M/2B) and SynCLR-L/14 (600M).
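A hedged sketch of the three-level objective follows (the exact projection heads, loss weights, and shapes used by Proteus may differ; here everything is assumed already mapped to a common dimension):

```python
import torch
import torch.nn.functional as F

def three_level_distill_loss(student_cls, teacher_cls,
                             student_patches, teacher_patches,
                             student_feat, teacher_feat):
    """Align class token, patch tokens, and an intermediate feature map
    between student and teacher; the sum is the distillation objective."""
    return (F.mse_loss(student_cls, teacher_cls)            # token level
            + F.mse_loss(student_patches, teacher_patches)  # patch level
            + F.mse_loss(student_feat, teacher_feat))       # feature level

B, N, D = 8, 196, 768   # batch, patches, common embedding dim
loss = three_level_distill_loss(torch.randn(B, D), torch.randn(B, D),
                                torch.randn(B, N, D), torch.randn(B, N, D),
                                torch.randn(B, N, D), torch.randn(B, N, D))
```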
|
Recent multi-dimensional simulations suggest that high-entropy buoyant plumes
help massive stars to explode. Outwardly protruding iron-rich fingers in the
galactic supernova remnant Cassiopeia A are uniquely suggestive of this
picture. Detecting signatures of specific elements synthesized in the
high-entropy nuclear burning regime (i.e., $\alpha$-rich freeze out) would be
among the strongest substantiating evidence. Here we report the discovery of
such elements, stable Ti and Cr, at a confidence level greater than 5$\sigma$
in the shocked high-velocity iron-rich ejecta of Cassiopeia A. We found the
observed Ti/Fe and Cr/Fe mass ratios require $\alpha$-rich freeze out,
providing the first observational demonstration for the existence of
high-entropy ejecta plumes that boosted the shock wave at explosion. The metal
composition of the plumes agrees well with predictions for strongly
neutrino-processed proton-rich ejecta. These results support the operation of
the convective supernova engine via neutrino heating in the supernova that
produced Cassiopeia A.
|
It is known (see e.g. Weibull (1995)) that ESS is not robust against multiple
mutations. In this article, we introduce robustness against multiple mutations
and study some equivalent formulations and consequences.
|
Asymptotic results are derived for the number of random walks in alcoves of
affine Weyl groups (which are certain regions in $n$-dimensional Euclidean
space bounded by hyperplanes), thus solving problems posed by Grabiner [J.
Combin. Theory Ser. A 97 (2002), 285-306]. These results include asymptotic
expressions for the number of vicious walkers on a circle, as well as for the
number of vicious walkers in an interval. The proofs depart from the exact
results of Grabiner [loc. cit.], and require as diverse means as results from
symmetric function theory and the saddle point method, among others.
|
We have numerically investigated whether or not a mean-field theory of spin
textures generates fictitious flux in the doped two-dimensional $t-J$ model.
First we consider the properties of uniform systems and then we extend the
investigation to include models of striped phases where a fictitious flux is
generated in the domain wall providing a possible source for lowering the
kinetic energy of the holes. We have compared the energetics of uniform systems
with stripes directed along the (10)- and (11)-directions of the lattice,
finding that phase-separation generically turns out to be energetically
favorable. In addition to the numerical calculations, we present topological
arguments relating flux and staggered flux to geometric properties of the spin
texture. The calculation is based on a projection of the electron operators of
the $t-J$ model into a spin texture with spinless fermions.
|
For a given regular language of infinite trees, one can ask about the minimal
number of priorities needed to recognize this language with a
non-deterministic, alternating, or weak alternating parity automaton. These
questions are known as, respectively, the non-deterministic, alternating, and
weak Rabin-Mostowski index problems. Whether they can be answered effectively
is a long-standing open problem, solved so far only for languages recognizable
by deterministic automata (the alternating variant trivializes).
We investigate a wider class of regular languages, recognizable by so-called
game automata, which can be seen as the closure of deterministic ones under
complementation and composition. Game automata are known to recognize languages
arbitrarily high in the alternating Rabin-Mostowski index hierarchy; that is,
the alternating index problem does not trivialize any more.
Our main contribution is that all three index problems are decidable for
languages recognizable by game automata. Additionally, we show that it is
decidable whether a given regular language can be recognized by a game
automaton.
|
In this paper we modify a fast heuristic solver for the Linear Sum Assignment
Problem (LSAP) for use on Graphical Processing Units (GPUs). The motivating
scenario is an industrial application for P2P live streaming that is moderated
by a central node which is periodically solving LSAP instances for assigning
peers to one another. The central node needs to handle LSAP instances involving
thousands of peers in as near to real-time as possible. Our findings are
generic enough to be applied in other contexts. Our main result is a parallel
version of a heuristic algorithm called Deep Greedy Switching (DGS) on GPUs
using the CUDA programming language. DGS sacrifices absolute optimality in
favor of low computation time and was designed as an alternative to classical
LSAP solvers such as the Hungarian and auctioning methods. The contribution of
the paper is threefold: First, we present the process of trial and error we
went through, in the hope that our experience will be beneficial to adopters of
GPU programming for similar problems. Second, we show the modifications needed
to parallelize the DGS algorithm. Third, we show the performance gains of our
approach compared to both a sequential CPU-based implementation of DGS and a
parallel GPU-based implementation of the auctioning algorithm.
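A minimal CPU sketch of a greedy-initialization-plus-pairwise-switching heuristic in the spirit of DGS follows (the initialization and sweep schedule are illustrative, not the paper's exact algorithm or its CUDA kernel layout):

```python
import numpy as np

def greedy_switching(cost, sweeps=20):
    """Heuristic LSAP solver: greedy assignment, then profitable 2-swaps.
    Returns col[i] = column assigned to row i (minimizing total cost)."""
    n = cost.shape[0]
    col = np.full(n, -1)
    taken = np.zeros(n, dtype=bool)
    for i in np.argsort(cost.min(axis=1)):       # greedy initialization
        j = int(np.argmin(np.where(taken, np.inf, cost[i])))
        col[i], taken[j] = j, True
    for _ in range(sweeps):                      # switching phase
        improved = False
        for a in range(n):
            for b in range(a + 1, n):
                delta = (cost[a, col[b]] + cost[b, col[a]]
                         - cost[a, col[a]] - cost[b, col[b]])
                if delta < 0:                    # swap lowers the total cost
                    col[a], col[b] = col[b], col[a]
                    improved = True
        if not improved:
            break
    return col

C = np.random.default_rng(0).random((50, 50))
assignment = greedy_switching(C)
print("heuristic cost:", C[np.arange(50), assignment].sum())
```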
|
By globally analyzing all existing measured branching fractions and partial
rates in different four momentum transfer-squared $q^2$ bins of $D\to
Ke^+\nu_e$ decays, we obtain the product of the form factor and magnitude of
CKM matrix element $V_{cs}$ to be $f_+^K(0)|V_{cs}|=0.717\pm0.004$. With this
product, we determine the $D\to K$ semileptonic form factor
$f_+^K(0)=0.737\pm0.004\pm0.000$ in conjunction with the value of $|V_{cs}|$
determined from the SM global fit. Alternately, with the product together with
the input of the form factor $f_+^K(0)$ calculated in lattice QCD recently, we
extract $|V_{cs}|^{D\to Ke^+\nu_e}=0.962\pm0.005\pm0.014$, where the error is
still dominated by the uncertainty of the form factor calculated in lattice
QCD. Combining the $|V_{cs}|^{D_s^+\to\ell^+\nu_\ell}=1.012\pm0.015\pm0.009$
extracted from all existing measurements of $D^+_s\to\ell^+\nu_\ell$ decays and
$|V_{cs}|^{D\to Ke^+\nu_e}=0.962\pm0.005\pm0.014$ together, we find the most
precisely determined $|V_{cs}|$ to be $|V_{cs}|=0.983\pm0.011$, which improves
the accuracy of the PDG'2014 value $|V_{cs}|^{\rm PDG'2014}=0.986\pm0.016$ by
$45\%$.
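The combination is a standard inverse-variance weighted average; the short check below reproduces $0.983\pm0.011$ under the assumption that each measurement's two quoted uncertainties add in quadrature and that the two determinations are independent.

```python
import math

def weighted_average(measurements):
    """Inverse-variance weighted average of (value, error) pairs."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

v_leptonic = (1.012, math.hypot(0.015, 0.009))  # from D_s+ -> l+ nu decays
v_semilep = (0.962, math.hypot(0.005, 0.014))   # from D -> K e+ nu decays
print(weighted_average([v_leptonic, v_semilep]))  # ~ (0.983, 0.011)
```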
|
We use the worldline numerics technique to study a cylindrically symmetric
model of magnetic flux tubes in a dense lattice and the non-local Casimir
forces acting between regions of magnetic flux. Within a superconductor the
magnetic field is constrained within magnetic flux tubes, and if the background
magnetic field is on the order of the quantum critical field strength, $B_k =
\frac{m^2}{e} = 4.4 \times 10^{13}$ Gauss, the magnetic field is likely to vary
rapidly on the scales where QED effects are important. In this paper, we
construct a cylindrically symmetric toy model of a flux tube lattice in which
the non-local influence of QED on neighbouring flux tubes is taken into
account. We compute the effective action densities using the worldline numerics
technique. The numerics predict a greater effective energy density in the
region of the flux tube, but a smaller energy density in the regions between
the flux tubes compared to a locally-constant-field approximation. We also
compute the interaction energy between a flux tube and its neighbours as the
lattice spacing is reduced from infinity. Because our flux tubes exhibit
compact support, this energy is entirely non-local and predicted to be zero in
local approximations such as the derivative expansion. This Casimir-Polder
energy can take positive or negative values depending on the distance between
the flux tubes, and it may cause the flux tubes in neutron stars to form
bunches.
In addition to the above results we also discuss two important subtleties of
determining the statistical uncertainties within the worldline numerics
technique and recommend a form of jackknife analysis.
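On the statistics side, here is a minimal sketch of leave-one-out jackknife error estimation (generic form; the variant recommended in the paper may group worldlines into blocks first):

```python
import numpy as np

def jackknife(samples, statistic=np.mean):
    """Leave-one-out jackknife estimate of a statistic and its uncertainty.
    For worldline numerics, `samples` would hold per-worldline (or per-block)
    contributions to the effective action density."""
    samples = np.asarray(samples)
    n = len(samples)
    loo = np.array([statistic(np.delete(samples, i)) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return statistic(samples), err

est, err = jackknife(np.random.default_rng(1).normal(1.0, 0.5, size=200))
```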
|
We present the Minkowski space solutions of the inhomogeneous Bethe-Salpeter
equation for spinless particles with a ladder kernel. The off-mass shell
scattering amplitude is first obtained.
|
We present new, high resolution, infrared spectra of the T dwarf Gliese 229B
in the J, H, and K bandpasses. We analyze each of these as well as previously
published spectra to determine its metallicity and the abundances of NH3 and CO
in terms of the surface gravity of Gl 229B, which remains poorly constrained.
The metallicity increases with increasing gravity and is below the solar value
unless Gl 229B is a high-gravity brown dwarf with log g(cgs) ~ 5.5. The NH3
abundance is determined from both the H and the K band spectra which probe two
different levels in the atmosphere. We find that the abundance from the K band
data is well below that expected from chemical equilibrium, which we interpret
as strong evidence for dynamical transport of NH3 in the atmosphere. This is
consistent with the previous detection of CO and provides additional
constraints on the dynamics of the atmosphere of this T dwarf.
|
We present a solution to the Conjugacy Problem in the group of
outer-automorphisms of $F_3$, a free group of rank 3. We distinguish according
to several computable invariants, such as irreducibility, subgroups of
polynomial growth, and subgroups carrying the attracting lamination. We
establish, by considerations on train tracks, that the conjugacy problem is
decidable for the outer-automorphisms of $F_3$ that preserve a given rank 2
free factor. Then we establish, by consideration on mapping tori, that it is
decidable for outer-automorphisms of $F_3$ whose maximal polynomial growth
subgroups are cyclic. This covers all the cases left by the state of the art.
|
The spontaneous motion of a camphor particle with a slight modification from
a circle is investigated. The effect of the shape on the motion is examined by
the perturbation method. We introduce a slight $n$-mode modification from a
circle, where the profile is described by $r = R(1 + \epsilon \cos n\theta)$ in
polar coordinates. The results predict that a camphor particle with an $n=3$
mode modification from a circle, i.e., a triangular modification, moves in the
direction of a corner for a small particle, while it moves in the direction of
a side for a large particle. The numerical simulation results reproduce the
theoretical prediction well. The present study will help understand the effect
of the particle shape on spontaneous motion.
|
We present a prime-generating polynomial $(1+2n)(p -2n) + 2$ where $p>2$ is a
lower member of a pair of twin primes less than $41$ and the integer $n$ is
such that $\frac{1-p}{2} < n < p-1$.
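A short sympy check of the statement (note that some $n$ in the stated range make the product negative, so the sketch tests primality up to sign):

```python
from sympy import isprime

twin_lower = [3, 5, 11, 17, 29]            # lower twin primes below 41
for p in twin_lower:
    lo = (1 - p) // 2                       # n ranges over (1-p)/2 < n < p-1
    for n in range(lo + 1, p - 1):
        v = (1 + 2 * n) * (p - 2 * n) + 2
        assert isprime(abs(v)), (p, n, v)   # primality up to sign
print("all values prime (up to sign)")
```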
|
The advent of a next linear $e^\pm e^-$ collider and back-scattered laser
beams will allow the study of a vast array of high energy processes of the
Standard Model through the fusion of real and virtual photons and other gauge
bosons. As examples, I discuss virtual photon scattering $\gamma^* \gamma^* \to
X$ in the region dominated by BFKL hard Pomeron exchange and report the
predicted cross sections at present and future $e^\pm e^-$ colliders. I also
discuss exclusive $\gamma \gamma$ reactions in QCD as a measure of hadron
distribution amplitudes and a new method for measuring the anomalous magnetic
and quadrupole moments of the $W$ and $Z$ gauge bosons to high precision in
polarized electron-photon collisions.
|
We determine all cases when there exists a meromorphic solution of the third
order ODE describing traveling waves solutions of the Kuramoto-Sivashinsky
equation. It turns out that there are no other meromorphic solutions besides
those explicit solutions found by Kuramoto and Kudryashov. The general method
used in this paper, based on Nevanlinna theory, is applicable to finding all
meromorphic solutions of a wide class of non-linear ODEs.
Keywords: Kuramoto-Sivashinsky equation, meromorphic functions, elliptic
functions, Nevanlinna theory.
|
Coherence phenomena appear in two different situations. In the context of
category theory the term `coherence constraints' refers to a set of diagrams
whose commutativity implies the commutativity of a larger class of diagrams. In
the context of algebra coherence constraints are a minimal set of generators for
the second syzygy, that is, a set of equations which generate the full set of
identities among the defining relations of an algebraic theory.
A typical example of the first type is Mac Lane's coherence theorem for
monoidal categories, an example of the second type is the result of Drinfel'd
saying that the pentagon identity for the `associator' of a quasi-Hopf algebra
implies the validity of a set of identities with higher instances of this
associator.
We show that both types of coherence are governed by a homological invariant
of the operad for the underlying algebraic structure. We call this invariant
the (space of) coherence constraints. In many cases these constraints can be
explicitly described, thus giving rise to various coherence results, both
classical and new.
|
What if the paradoxical nature of quantum theory could find its source in
some undecidability analog to that of G\"odel's incompleteness theorem? This
essay aims at arguing for such a G\"odelian hunch via two case studies. Firstly,
using a narrative based on the Newcomb problem, the theological motivational
origin of quantum contextuality is introduced in order to show how this result
might be related to a Liar-like undecidability. A topological generalization of
contextuality by Abramsky et al. in which the logical structure of quantum
contextuality is compared with "Liar cycles" is also presented. Secondly, the
measurement problem is analyzed as emerging from a logical error. A personal
analysis of the related Wigner's friend thought experiment and a recent
paradox by Frauchiger and Renner is presented, by introducing the notion of
"meta-contextuality" as a Liar-like feature underlying the neo-Copenhagen
interpretations of quantum theory. Finally, this quantum G\"odelian hunch opens
a discussion of the paradoxical nature of quantum physics and the emergence of
time itself from self-contradiction.
|
In this article, we present a multi-tool method for the development and the
analysis of a new medium access method. IEEE 802.15.4 / ZigBee technology has
been used as a basis for this new deterministic MAC layer which enables a high
level of QoS. This WPAN can be typically used for wireless sensor networks
which require strong temporal constraints. To validate the proposed protocol,
three complementary and adequate tools are used: Petri Nets for the formal
validation of the algorithm, a dedicated simulator for the temporal aspects,
and some measures on a real prototype based on a couple of ZigBee FREESCALE
components for the hardware characterization of layers #1 and #2.
|
One of the objectives of the pedestrian analysis is to evaluate the effects
of proposed policy on the pedestrian facilities before its implementation. The
implementation of a policy without pedestrian analysis might lead to a very
costly trial and error due to the implementation cost (i.e. user cost,
construction time and cost, etc.). On the other hand, using good analysis
tools, the trial and error of policy could be done in the analysis level. Once
the analysis could prove a good performance, the implementation of the policy
is straightforward. The problem is how to evaluate the impact of the policy
quantitatively toward the behavior of pedestrians before its implementation.
Since the interaction of pedestrians cannot be well addressed at a macroscopic
level of analysis, a microscopic level of analysis is the choice. However, the
analytical solution of the microscopic pedestrian model is very difficult, and
simulation models are a more practical approach. To evaluate the impact of the
policy quantitatively toward the behavior of pedestrians before its
implementation, a microscopic pedestrian simulation model was developed. The
model was based on physical forces, which work upon each pedestrian
dynamically. To demonstrate the numerical analysis of the model, an
experimental policy on pedestrian crossing was performed. The simulation
results showed that the keep-right policy or the lane-like segregation policy
is inclined to be superior to the do-minimum or mixed-lane policy in terms of
average speed, average delay and dissipation time.
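A minimal sketch of a physical-force pedestrian update of the kind described follows (a social-force-style model; the parameter values and exact force terms below are illustrative assumptions):

```python
import numpy as np

def step(pos, vel, goal, dt=0.1, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """One explicit-Euler step: a driving force relaxing each pedestrian
    toward its desired velocity plus exponential repulsion between
    pedestrians. pos, vel, goal are (N, 2) arrays."""
    to_goal = goal - pos
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    force = (v0 * to_goal - vel) / tau                 # driving force
    diff = pos[:, None, :] - pos[None, :, :]           # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + np.eye(len(pos))
    force += (A * np.exp(-dist / B)[:, :, None] * diff
              / dist[:, :, None]).sum(axis=1)          # repulsion from others
    vel = vel + dt * force
    return pos + dt * vel, vel

pos = np.array([[0.0, 0.0], [4.0, 0.2]])
vel = np.zeros((2, 2))
goal = np.array([[5.0, 0.0], [-1.0, 0.2]])             # crossing pedestrians
for _ in range(100):
    pos, vel = step(pos, vel, goal)
```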
|
The correlated Erd\"os-R\'enyi random graph ensemble is a probability law on
pairs of graphs with $n$ vertices, parametrized by their average degree
$\lambda$ and their correlation coefficient $s$. It can be used as a benchmark
for the graph alignment problem, in which the labels of the vertices of one of
the graphs are reshuffled by an unknown permutation; the goal is to infer this
permutation and thus properly match the pairs of vertices in both graphs. A
series of recent works has unveiled the role of Otter's constant $\alpha$ (that
controls the exponential rate of growth of the number of unlabeled rooted trees
as a function of their sizes) in this problem: for $s>\sqrt{\alpha}$ and
$\lambda$ large enough it is possible to recover in a time polynomial in $n$ a
positive fraction of the hidden permutation. The exponent of this polynomial
growth is however quite large and depends on the other parameters, which limits
the range of applications of the algorithm. In this work we present a family of
faster algorithms for this task, show through numerical simulations that their
accuracy is only slightly reduced with respect to the original one, and
conjecture that they undergo, in the large $\lambda$ limit, phase transitions
at modified Otter's thresholds $\sqrt{\widehat{\alpha}}>\sqrt{\alpha}$, with
$\widehat{\alpha}$ related to the enumeration of a restricted family of trees.
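A minimal sketch of one standard way to sample such a correlated pair with a hidden permutation (the parent-graph construction below is commonly used in the alignment literature and is stated here as an assumption):

```python
import numpy as np

def correlated_er(n, lam, s, seed=0):
    """Sample a parent G(n, lam/(n*s)) and keep each parent edge
    independently with probability s in each copy: marginal average degree
    is ~ lam and the edge correlation is ~ s. The second copy is then
    relabeled by a hidden uniform permutation."""
    rng = np.random.default_rng(seed)
    parent = np.triu(rng.random((n, n)) < lam / (n * s), k=1)
    keep1 = parent & (rng.random((n, n)) < s)
    keep2 = parent & (rng.random((n, n)) < s)
    A = keep1 | keep1.T
    B = keep2 | keep2.T
    perm = rng.permutation(n)                 # the hidden permutation
    return A, B[np.ix_(perm, perm)], perm

A, B, perm = correlated_er(n=1000, lam=3.0, s=0.8)
```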
|
Direct and inverse Auger scattering are amongst the primary processes that
mediate the thermalization of hot carriers in semiconductors. These two
processes involve the annihilation or generation of an electron-hole pair by
exchanging energy with a third carrier, which is either accelerated or
decelerated. Inverse Auger scattering is generally suppressed, as the
decelerated carriers must have excess energies higher than the band gap itself.
In graphene, which is gapless, inverse Auger scattering is instead predicted to
be dominant at the earliest time delays. Here, $<8$ femtosecond
extreme-ultraviolet pulses are used to detect this imbalance, tracking both the
number of excited electrons and their kinetic energy with time- and
angle-resolved photoemission spectroscopy. Over a time window of approximately
25 fs after absorption of the pump pulse, we observe an increase in conduction
band carrier density and a simultaneous decrease of the average carrier kinetic
energy, revealing that relaxation is in fact dominated by inverse Auger
scattering. Measurements of carrier scattering at extreme timescales by
photoemission will serve as a guide to ultrafast control of electronic
properties in solids for PetaHertz electronics.
|
The hierarchical quadratic programming (HQP) is commonly applied to consider
strict hierarchies of multi-tasks and robot's physical inequality constraints
during whole-body compliance. However, for the one-step HQP, the solution can
oscillate when it is close to the boundary of the constraints. This is because
abruptly hitting the bounds gives rise to unrealisable jerks and even infeasible
solutions. This paper proposes a mixed control scheme, which blends single-axis
model predictive control (MPC) and proportional-derivative (PD) control for
whole-body compliance to overcome these deficiencies. The MPC predicts the
distances between the bounds and the control target of the critical tasks, and
it provides smooth and feasible solutions by prediction and optimisation in
advance. However, applying MPC will inevitably increase the computation time.
Therefore, to achieve a 500 Hz servo rate, the PD controllers still regulate
other tasks to save computation resources. Also, we use a more efficient null
space projection (NSP) whole-body controller instead of the HQP and distribute
the single-axis MPCs into four CPU cores for parallel computation. Finally, we
validate the desired capabilities of the proposed strategy via simulations and
an experiment on the humanoid robot Walker X.
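For context, here is a minimal sketch of the two-priority null-space projection resolution that such NSP controllers build on (generic textbook form, not Walker X's controller):

```python
import numpy as np

def nsp_two_tasks(J1, dx1, J2, dx2):
    """Execute task 1 exactly and task 2 only within task 1's null space,
    so the secondary task cannot disturb the primary one."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1       # null-space projector of task 1
    dq = J1p @ dx1                            # primary task solution
    dq += np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)  # secondary, projected
    return dq

rng = np.random.default_rng(0)
J1, J2 = rng.random((3, 7)), rng.random((2, 7))  # 7-DoF robot, two tasks
dq = nsp_two_tasks(J1, np.array([0.1, 0.0, 0.0]), J2, np.array([0.05, -0.02]))
```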
|
With increasing applications in areas such as biomedical information
extraction pipelines and social media analytics, Named Entity Recognition (NER)
has become an indispensable tool for knowledge extraction. However, with the
gradual shift in language structure and vocabulary, NERs are plagued by
distribution shifts, making them obsolete or less effective without
re-training. Re-training NERs based on Large Language Models (LLMs) from
scratch over newly acquired data is economically disadvantageous. In contrast,
re-training only with newly acquired data will result in Catastrophic
Forgetting of previously acquired knowledge. Therefore, we propose NERDA-Con, a
pipeline for training NERs with LLM bases by incorporating the concept of
Elastic Weight Consolidation (EWC) into the NERDA NER fine-tuning pipeline. As
we believe our work can be utilized in continual learning and NER pipelines,
we open-source our code and provide the fine-tuning library of the same name,
NERDA-Con, at
https://github.com/SupritiVijay/NERDA-Con and
https://pypi.org/project/NERDA-Con/.
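A minimal PyTorch sketch of the EWC penalty that such a pipeline adds to the fine-tuning loss (variable names and the scaling are illustrative; see the repositories above for the actual implementation):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Elastic Weight Consolidation: a quadratic penalty anchoring each
    parameter to its value after the previous task, weighted by its
    diagonal Fisher information. `fisher` and `old_params` are dicts keyed
    by parameter name, captured after training on the old data."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# inside the training loop on newly acquired data:
# loss = ner_loss + ewc_penalty(model, fisher, old_params)
```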
|