This paper proposes a self-calibrated transit service monitoring framework
that aims to obtain the performance of a transit system using automatically
collected data. We first introduce an event-based transit simulation model,
which allows the detailed simulation of passenger travel behavior in a transit
system, including boarding, alighting, and transfer walking. To estimate
passenger path choices, we assume the path choices can be modeled using a
C-logit model, and propose a simulation-based optimization model to estimate
the path choice parameters based on automated fare collection and automated
vehicle location data. The path choices can be estimated on a daily basis,
which enables the simulation model to adapt to dynamic passenger behavior
changes, and output more accurate network performance indicators for regular
service monitoring such as train load, passenger travel time, and crowding at
platforms. The proposed system eliminates the need for conventional monitoring
equipment such as cameras at platforms and scaling/weighing systems on trains.
The Hong Kong Mass Transit Railway (MTR) system is used as the case study.
Results show that the model can estimate the path choice behavior of
passengers in the system well. The output passenger exit flows are closer to
the actual ones than those of the two benchmark models (shortest path and
uniform path choice).
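To make the path-choice component concrete, here is a minimal Python sketch of a C-logit choice model. The utilities, overlap matrix, and parameter values are illustrative assumptions; the paper's actual specification, estimated parameters, and simulation-based calibration are not reproduced.

```python
import numpy as np

# Hedged sketch of a C-logit path choice model:
# P(i) = exp(V_i - beta * CF_i) / sum_j exp(V_j - beta * CF_j),
# where the commonality factor CF_i penalizes paths that overlap with others.

def c_logit_probabilities(V, overlap, length, beta=1.0, gamma=1.0):
    """V: systematic utilities of candidate paths (e.g., -travel time).
    overlap[i, j]: shared length of paths i and j; length[i]: path length."""
    n = len(V)
    CF = np.array([
        np.log(sum((overlap[i, j] / np.sqrt(length[i] * length[j])) ** gamma
                   for j in range(n)))
        for i in range(n)
    ])
    u = V - beta * CF
    e = np.exp(u - u.max())          # subtract max for numerical stability
    return e / e.sum()

# Toy example: two heavily overlapping paths and one distinct alternative.
length = np.array([10.0, 10.0, 12.0])
overlap = np.array([[10.0, 8.0, 0.0],
                    [8.0, 10.0, 0.0],
                    [0.0, 0.0, 12.0]])
V = -0.1 * length                     # utility decreases with travel time
print(c_logit_probabilities(V, overlap, length))
```

The commonality factor lowers the joint probability of the two overlapping paths relative to a plain logit model, which is the behavior the C-logit form is designed to capture.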
|
Given CW complexes X and Y, let map(X,Y) denote the space of continuous
functions from X to Y with the compact open topology. The space map(X,Y) need
not have the homotopy type of a CW complex. Here the results of an extensive
investigation of various necessary and various sufficient conditions for
map(X,Y) to have the homotopy type of a CW complex are exhibited. The results
extend all previously known results on this topic. Moreover, appropriate
converses are given for the previously known sufficient conditions.
It is shown that this difficult question is related to well known problems in
algebraic topology. For example, the geometric Moore conjecture (asserting that
a simply connected finite complex admits an eventual geometric exponent at any
prime if and only if it is elliptic) can be restated in terms of CW homotopy
type of certain function spaces.
Spaces of maps between CW complexes are a particular case of inverse limits
of systems whose bonds are Hurewicz fibrations between spaces of CW homotopy
type. Related problems concerning CW homotopy type of the limit space of such a
system are also studied. In particular, an almost complete solution to a well
known problem concerning towers of fibrations is presented.
|
For $A\in M^{2\times 2}$ let $S(A)=\sqrt{A^T A}$, i.e. the symmetric part of
the polar decomposition of $A$. We consider the relation between two
quasiregular mappings whose symmetric parts of the gradient are close. Our main
result is the following. Suppose $v,u\in W^{1,2}(B_1(0):\mathbb{R}^2)$ are
$Q$-quasiregular mappings with $\int_{B_1(0)} \det(Du)^{-p} dz\leq C_p$ for
some $p\in (0,1)$ and $\int_{B_1(0)} |Du|^2 dz\leq 1$. There exists a constant
$M>1$ such that if $$ \int_{B_1(0)} |S(Du)-S(Dv)|^2 dz=\epsilon $$ then $$
\int_{B_{\frac{1}{2}}(0)} |Dv-R Du| dz\leq c
C_p^{\frac{1}{p}}\epsilon^{\frac{p^3}{M Q^5\log(10 C_p Q)}}\text{ for some
}R\in SO(2). $$
Taking $u=Id$ we obtain a special case of the quantitative rigidity result of
Friesecke, James and M\"uller. Our main result can be considered as a first
step in a new line of generalization of the F-J-M theorem in which $Id$ is
replaced by a mapping of non-trivial degree.
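For context, the Friesecke-James-M\"uller rigidity estimate whose special case is recovered at $u=Id$ can be stated schematically as follows (two-dimensional form, constant unspecified; this is a reminder of the known theorem, not a statement from the present paper):

```latex
% Geometric rigidity (Friesecke--James--M\"uller), schematic 2D form:
% for a bounded Lipschitz domain $\Omega\subset\mathbb{R}^2$ there exists
% $C(\Omega)$ such that every $v\in W^{1,2}(\Omega;\mathbb{R}^2)$ satisfies
\[
  \min_{R\in SO(2)} \int_{\Omega} |Dv - R|^{2}\, dz
  \;\le\; C(\Omega) \int_{\Omega} \operatorname{dist}^{2}\big(Dv,\, SO(2)\big)\, dz .
\]
```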
|
For a polynomial progression $$(x,\; x+P_1(y),\; \ldots,\; x+P_{t}(y)),$$ we
define four notions of complexity: Host-Kra complexity, Weyl complexity, true
complexity and algebraic complexity. The first two describe the smallest
characteristic factor of the progression, the third one refers to the
smallest-degree Gowers norm controlling the progression, and the fourth one
concerns algebraic relations between terms of the progressions. We conjecture
that these four notions are equivalent, which would give a purely algebraic
criterion for determining the smallest Host-Kra factor or the smallest Gowers
norm controlling a given progression. We prove this conjecture for all
progressions whose terms only satisfy homogeneous algebraic relations and
linear combinations thereof. This family of polynomial progressions includes,
but is not limited to, arithmetic progressions, progressions with linearly
independent polynomials $P_1,\; \ldots,\; P_t$ and progressions whose terms
satisfy no quadratic relations. For progressions that satisfy only linear
relations, such as $$(x,\; x+y^2,\; x+2y^2,\; x+y^3,\; x+2y^3),$$ we derive
several combinatorial and dynamical corollaries: (1) an estimate for the count
of such progressions in subsets of cyclic groups or totally ergodic dynamical
systems; (2) a lower bound for multiple recurrence; and (3) a popular common
difference result in cyclic groups. Lastly, we show that Weyl complexity and
algebraic complexity always agree, which gives a straightforward algebraic
description of Weyl complexity.
|
Proportional choosability is a list coloring analogue of equitable coloring.
Specifically, a $k$-assignment $L$ for a graph $G$ specifies a list $L(v)$ of
$k$ available colors to each $v \in V(G)$. An $L$-coloring assigns a color to
each vertex $v$ from its list $L(v)$. A proportional $L$-coloring of $G$ is a
proper $L$-coloring in which each color $c \in \bigcup_{v \in V(G)} L(v)$ is
used $\lfloor \eta(c)/k \rfloor$ or $\lceil \eta(c)/k \rceil$ times where
$\eta(c)=\left\lvert{\{v \in V(G) : c \in L(v) \}}\right\rvert$. A graph $G$ is
proportionally $k$-choosable if a proportional $L$-coloring of $G$ exists
whenever $L$ is a $k$-assignment for $G$. Motivated by earlier work, we
initiate the study of proportional choosability with a bounded palette by
studying proportional 2-choosability with a bounded palette. In particular,
when $\ell \geq 2$, a graph $G$ is said to be proportionally $(2,
\ell)$-choosable if a proportional $L$-coloring of $G$ exists whenever $L$ is a
$2$-assignment for $G$ satisfying $|\bigcup_{v \in V(G)} L(v)| \leq \ell$. We
observe that a graph is proportionally $(2,2)$-choosable if and only if it is
equitably 2-colorable. As $\ell$ gets larger, the set of proportionally $(2,
\ell)$-choosable graphs gets smaller. We show that whenever $\ell \geq 5$ a
graph is proportionally $(2, \ell)$-choosable if and only if it is
proportionally 2-choosable. We also completely characterize the connected
proportionally $(2, \ell)$-choosable graphs when $\ell = 3,4$.
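As a concrete companion to these definitions, the following minimal Python sketch brute-forces whether a proportional $L$-coloring exists for a toy graph. It illustrates the definitions only; the paper's results are proved structurally, not computationally.

```python
from itertools import product
from math import floor, ceil

def is_proportional_coloring(edges, L, coloring):
    """edges: list of (u, v); L: dict vertex -> list of k colors;
    coloring: dict vertex -> chosen color."""
    k = len(next(iter(L.values())))
    # proper: adjacent vertices receive distinct colors
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False
    # each vertex uses a color from its own list
    if any(coloring[v] not in L[v] for v in L):
        return False
    palette = set(c for lst in L.values() for c in lst)
    for c in palette:
        eta = sum(c in L[v] for v in L)        # eta(c) = #lists containing c
        used = sum(coloring[v] == c for v in L)
        if used not in (floor(eta / k), ceil(eta / k)):
            return False
    return True

def has_proportional_coloring(edges, L):
    verts = list(L)
    return any(is_proportional_coloring(edges, L, dict(zip(verts, combo)))
               for combo in product(*(L[v] for v in verts)))

# K_2 with the 2-assignment L(u) = L(v) = {1, 2}: eta(1) = eta(2) = 2, so each
# color must be used exactly once, which a proper coloring achieves.
print(has_proportional_coloring([("u", "v")], {"u": [1, 2], "v": [1, 2]}))  # True
```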
|
Zero-knowledge succinct non-interactive argument of knowledge (zkSNARK)
allows a party, known as the prover, to convince another party, known as the
verifier, that he knows a private value $v$, without revealing it, such that
$F(u,v)=y$ for some function $F$ and public values $u$ and $y$. There are
various versions of zkSNARKs; among them, Quadratic Arithmetic Program
(QAP)-based zkSNARKs have been widely used in practice, especially in
blockchain technology. This is attributed to two desirable features: a
fixed-size proof and the very light computation load of the verifier. However,
the computation load of the prover in QAP-based zkSNARKs is very heavy, even
though it is designed to be very efficient. This load can be beyond the
prover's computation power to handle, and has to be offloaded to some external
servers. In the existing offloading solutions, either (i) the load of
computation offloaded to each server is a fraction of the prover's primary
computation (e.g., DIZK), but the servers need to be trusted, or (ii) the
servers are not required to be trusted, but the computation complexity imposed
on each one is the same as the prover's primary computation (e.g.,
Trinocchio). In this paper, we present a
scheme, which has the benefits of both solutions. In particular, we propose a
secure multi-party proof generation algorithm where the prover can delegate its
task to $N $ servers, where (i) even if a group of $T \in \mathbb{N}$ servers,
$T\le N$, collude, they cannot gain any information about the secret value $v$,
(ii) the computation complexity of each server is less than $1/(N-T)$ of the
prover's primary computation. The design is such that we do not lose the
efficiency of the prover's algorithm in the process of delegating the tasks to
external servers.
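To illustrate the privacy property claimed for colluding servers, here is a minimal Shamir-style secret-sharing sketch in Python: any $T$ shares reveal nothing about the secret, while $T+1$ shares reconstruct it. This shows only the standard threshold-sharing ingredient over an assumed prime field; it is not the paper's multi-party proof-generation algorithm.

```python
import random

# The secret v is hidden in the constant term of a random degree-T polynomial
# over a prime field, so any T shares are jointly uniform and reveal nothing.

P = 2**61 - 1  # a Mersenne prime, standing in for the zkSNARK field modulus

def share(v, N, T):
    coeffs = [v] + [random.randrange(P) for _ in range(T)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, N + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term v
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(v=123456789, N=5, T=2)
print(reconstruct(shares[:3]))  # any T+1 = 3 shares recover v: 123456789
```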
|
Magnetars, isolated neutron stars with magnetic field strengths typically
$\gtrsim10^{14}$~G, exhibit distinctive months-long outburst epochs during
which strong evolution of soft X-ray pulse profiles, along with nonthermal
magnetospheric emission components, is often observed. Using near-daily NICER
observations of the magnetar SGR 1830-0645 during the first 37 days of a recent
outburst decay, a pulse peak migration in phase is clearly observed,
transforming the pulse shape from an initially triple-peaked to a single-peaked
profile. Such peak merging has not been seen before for a magnetar. Our
high-resolution phase-resolved spectroscopic analysis reveals no significant
evolution of temperature despite the complex initial pulse shape. Yet the
inferred surface hot spots shrink during the peak migration and outburst decay.
We suggest two possible origins for this evolution. For internal heating of the
surface, tectonic motion of the crust may be its underlying cause. The inferred
speed of this crustal motion is $\lesssim100$~m~day$^{-1}$, constraining the
density of the driving region to $\rho\sim10^{10}$~g~cm$^{-3}$, at a depth of
$\sim200$~m. Alternatively, the hot spots could be heated by particle
bombardment from a twisted magnetosphere possessing flux tubes or ropes,
somewhat resembling solar coronal loops, that untwist and dissipate on the
30-40~day timescale. The peak migration may then be due to a combination of
field-line footpoint motion (necessarily driven by crustal motion) and evolving
surface radiation beaming. This novel dataset paints a vivid picture of the
dynamics associated with magnetar outbursts, yet it also highlights the need
for a more generic theoretical picture where magnetosphere and crust are
considered in tandem.
|
We establish that a second countable locally compact groupoid possessing a
continuous Haar system is topologically amenable if and only if it is Borel
amenable. We give some examples and applications.
|
We consider extended slow-fast systems of N interacting diffusions. The
typical behavior of the empirical density is described by a nonlinear
McKean-Vlasov equation depending on $\epsilon$, the scaling parameter
separating the time scale of the slow variable from the time scale of the fast
variable. Its atypical behavior is encapsulated in a large N Large Deviation
Principle (LDP) with a rate functional $I_\epsilon$. We study the
$\Gamma$-convergence of $I_\epsilon$ as $\epsilon \rightarrow 0$ and show that
it converges to the rate functional appearing in the Macroscopic Fluctuation
Theory (MFT) for diffusive systems.
|
Hydrodynamical systems are usually taken to be chaotic systems with fast
relaxation. It is counterintuitive for an "ideal" gas to have a hydrodynamical
description. We find that a hydrodynamical model of the one-dimensional
$|\Phi|^6$ theory shares the same ground state density profile and
density-wave excitation, as well as similar dynamical and statistical
properties, with the Calogero-Sutherland model in the thermodynamic limit when
their interaction strengths match each other. The interaction strength $g_0$
in the $|\Phi|^6$ theory is then the index of fractional statistics. Although
the model is interacting in the Bose-liquid sense, it shows integrability with
periodic coherent evolution. We also discuss how fractional statistics emerges
from the $|\Phi|^6$ theory.
|
In unpublished work, Geelen proved that a matroid is near-regular if and only
if it has no minor isomorphic to: U2,5; U3,5; the Fano plane and its dual; the
non-Fano and its dual; the single-element deletion of AG(2,3), its dual, and
the matroid obtained from it with a Delta-Y operation; and P8. We provide a
proof of this characterization.
|
Visible-infrared person re-identification (VI-ReID) aims to match persons
captured by visible and infrared cameras, allowing person retrieval and
tracking in 24-hour surveillance systems. Previous methods focus on learning
from cross-modality person images in different cameras. However, temporal
information and single-camera samples tend to be neglected. To address this,
in this paper we first contribute a large-scale VI-ReID dataset named
BUPTCampus. Different from most existing VI-ReID datasets, it 1) collects
tracklets instead of images to introduce rich temporal information, 2) contains
pixel-aligned cross-modality sample pairs for better modality-invariant
learning, 3) provides one auxiliary set to help enhance the optimization, in
which each identity only appears in a single camera. Based on our constructed
dataset, we present a two-stream framework as a baseline and apply a Generative
Adversarial Network (GAN) to narrow the gap between the two modalities. To
exploit the advantages introduced by the auxiliary set, we propose a curriculum
learning based strategy to jointly learn from both primary and auxiliary sets.
Moreover, we design a novel temporal k-reciprocal re-ranking method to refine
the ranking list with fine-grained temporal correlation cues. Experimental
results demonstrate the effectiveness of the proposed methods. We also
reproduce 9 state-of-the-art image-based and video-based VI-ReID methods on
BUPTCampus and our methods show substantial superiority to them. The codes and
dataset are available at: https://github.com/dyhBUPT/BUPTCampus.
|
Reversible data hiding has continued to attract significant attention in
recent years. In particular, an increasing number of authors focus on the higher
significant bit (HSB) plane of an image which can yield more redundant space.
On the other hand, the lower significant bit planes are often ignored for
embedding in existing schemes due to their harm to the embedding rate. This
paper proposes an efficient reversible data hiding scheme via a double-peak
two-layer embedding (DTLE) strategy with prediction error expansion. The higher
six-bit planes of the image are assigned as the HSB plane, and double
prediction error peaks are applied in either embedding layer. This makes fuller
use of the redundancy space of images compared with the one error peak
strategy. Moreover, we carry out the median-edge detector pre-processing for
complex images to reduce the size of the auxiliary information. A series of
experimental results show that our DTLE approach achieves up to 83% higher
embedding rate on real-world datasets while guaranteeing better image quality.
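For orientation, the sketch below implements one layer of classic single-peak prediction-error expansion on a 1-D signal, with odd samples predicted from their unmodified even neighbors so that decoding is exactly reversible. The double-peak selection, second layer, six-bit HSB split, and median-edge detector of the DTLE scheme are not reproduced; the predictor and peak choice here are assumptions for illustration.

```python
import numpy as np

def pee_embed_layer(x, bits):
    x = x.astype(int).copy()
    idx = range(1, len(x) - 1, 2)                    # odd, interior positions
    pred = {i: (x[i - 1] + x[i + 1]) // 2 for i in idx}
    errs = np.array([x[i] - pred[i] for i in idx])
    vals, counts = np.unique(errs, return_counts=True)
    peak = int(vals[counts.argmax()])                # single embedding peak
    it = iter(bits)
    for i in idx:
        e = x[i] - pred[i]
        if e == peak:
            x[i] += next(it, 0)                      # peak carries one bit
        elif e > peak:
            x[i] += 1                                # shift for reversibility
    return x, peak

def pee_extract_layer(y, peak):
    y = y.astype(int).copy()
    bits = []
    for i in range(1, len(y) - 1, 2):
        pred = (y[i - 1] + y[i + 1]) // 2            # even samples untouched
        e = y[i] - pred
        if e in (peak, peak + 1):
            bits.append(e - peak)
            y[i] -= e - peak
        elif e > peak + 1:
            y[i] -= 1
    return y, bits

x = np.array([100, 101, 102, 103, 104, 105, 104, 103, 102])
y, peak = pee_embed_layer(x, [1, 0, 1])
x_rec, bits = pee_extract_layer(y, peak)
print(bits, np.array_equal(x_rec, x))                # [1, 0, 1] True
```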
|
In our daily lives, we observe objects sinking, floating, or rising when
immersed in a fluid. The Archimedes principle, which explains an object's
behavior when immersed in a fluid, is important in fluid mechanics; however, it
is a relatively complex concept for middle school students to grasp, as they
often harbor misconceptions. To initiate conceptual change among students
regarding the misconception "heavy objects sink and light objects float," I
created a project during which students build a stable submarine that uses
fluid transfers to move up, down, and forward while carrying a load. Students
must take into account several variables, from the design of the submarine to
the choice of materials. Additionally, students write a report that includes a
user manual, challenges they encountered and how they overcame those
challenges, and a detailed text that links theory to their submarine.
|
We call a continuous map $f : X \to Y$ nowhere constant if it is not constant
on any non-empty open subset of its domain $X$. Clearly, this is equivalent
with the assumption that every fiber $f^{-1}(y)$ of $f$ is nowhere dense in
$X$. We call the continuous map $f : X \to Y$ pseudo-open if for each nowhere
dense $Z \subset Y$ its inverse image $f^{-1}(Z)$ is nowhere dense in $X$.
Clearly, if $Y$ is crowded, i.e. has no isolated points, then $f$ is nowhere
constant.
The aim of this paper is to study the following, admittedly imprecise,
question: How "small" can the nowhere constant, resp. pseudo-open, continuous
images of "large" spaces be? Our main results yield the following two precise answers
to this question, explaining also our title. Both of them involve the cardinal
function $\widehat{c}(X)$, the "hat version" of cellularity, which is defined
as the smallest cardinal $\kappa$ such that there is no $\kappa$-sized disjoint
family of open sets in $X$. Thus, for instance, $\widehat{c}(X) = \omega_1$
means that $X$ is CCC.
THEOREM A. Any crowded Tychonov space $X$ has a crowded Tychonov nowhere
constant continuous image $Y$ of weight $w(Y) \le \widehat{c}(X)$. Moreover, in
this statement $\le$ may be replaced with $<$ iff there are no
$\widehat{c}(X)$-Suslin lines (or trees).
THEOREM B. Any crowded Tychonov space $X$ has a crowded Tychonov pseudo-open
continuous image $Y$ of weight $w(Y) \le 2^{<\widehat{c}(X)}$. If Martin's
axiom holds then there is a CCC crowded Tychonov space $X$ such that for any
crowded Hausdorff pseudo-open continuous image $Y$ of $X$ we have $w(Y) \ge
\mathfrak{c}\,( = 2^{< \omega_1})$.
|
We analyze neutrino oscillation for the general case when the initial
neutrino is not in a pure flavor state. We show that, after such a neutrino
beam propagates for a while, the probability of detecting any pure flavor state
depends even on the CP-violating Majorana phases in the mixing matrix. The
dependence remains even when the energy spectrum of the initial beam is taken into
account. We discuss various implications of this dependence.
|
The next generation of wide-field deep astronomical surveys will deliver
unprecedented amounts of images through the 2020s and beyond. As both the
sensitivity and depth of observations increase, more blended sources will be
detected. This reality can lead to measurement biases that contaminate key
astronomical inferences. We implement new deep learning models available
through Facebook AI Research's Detectron2 repository to perform the
simultaneous tasks of object identification, deblending, and classification on
large multi-band coadds from the Hyper Suprime-Cam (HSC). We use existing
detection/deblending codes and classification methods to train a suite of deep
neural networks, including state-of-the-art transformers. Once trained, we find
that transformers outperform traditional convolutional neural networks and are
more robust to different contrast scalings. Transformers are able to detect and
deblend objects closely matching the ground truth, achieving a median bounding
box Intersection over Union of 0.99. Using high quality class labels from the
Hubble Space Telescope, we find that the best-performing networks can classify
galaxies with near 100\% completeness and purity across the whole test sample
and classify stars above 60\% completeness and 80\% purity out to HSC i-band
magnitudes of 25 mag. This framework can be extended to other upcoming deep
surveys such as the Legacy Survey of Space and Time and those with the Roman
Space Telescope to enable fast source detection and measurement. Our code,
\textsc{DeepDISC} is publicly available at
\url{https://github.com/grantmerz/deepdisc}.
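A minimal sketch of running inference with a Detectron2 predictor, in the spirit of the pipeline described above. The config choice, class count, weight path, and dummy image are illustrative assumptions; the actual training and inference code lives in the linked DeepDISC repository.

```python
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Start from a standard instance-segmentation baseline (stand-in backbone).
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2                # e.g., {star, galaxy}
cfg.MODEL.WEIGHTS = "path/to/trained_weights.pth"  # hypothetical checkpoint
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"

predictor = DefaultPredictor(cfg)
image = (np.random.rand(512, 512, 3) * 255).astype("uint8")  # dummy coadd cutout
outputs = predictor(image)                         # boxes, masks, class scores
print(outputs["instances"].pred_classes, outputs["instances"].pred_boxes)
```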
|
Quantum information processing with geometric features of quantum states may
provide promising noise-resilient schemes for quantum metrology. In this work,
we theoretically explore phase-space geometric Sagnac interferometers with
trapped atomic clocks for rotation sensing, which could be intrinsically robust
to certain decoherence noises and reach high precision. With the wave guide
provided by sweeping ring-traps, we give criteria under which the well-known
Sagnac phase is a pure or unconventional geometric phase with respect to the
phase space. Furthermore, corresponding schemes for geometric Sagnac
interferometers with designed sweeping angular velocity and interrogation time
are presented, and the experimental feasibility is also discussed. Such
geometric Sagnac interferometers are capable of saturating the ultimate
precision limit given by the quantum Cram\'er-Rao bound.
|
We investigate additional properties of protolocalizations, introduced and
studied by F. Borceux, M. M. Clementino, M. Gran, and L. Sousa, and of
protoadditive reflections, introduced and studied by T. Everaert and M. Gran.
Among other things we show that there are no non-trivial (protolocalizations
and) protoadditive reflections of the category of groups, and establish a
connection between protolocalizations and Kurosh--Amitsur radicals of groups
with multiple operators whose semisimple classes form subvarieties.
|
(abridged) The first unidentified very high energy gamma ray source (TeV
J2032+4130) in the Cygnus region has been the subject of intensive search for a
counterpart source at other wavelengths. A deep ($\approx 50$ ksec) exposure of
TeV J2032+4130 with \textit{XMM-Newton} has been obtained. The contribution of
point sources to the observed X-ray emission from TeV J2032+4130 is subtracted
from the data. The point-source subtracted X-ray data are analyzed using blank
sky exposures and regions adjacent to the position of TeV J2032+4130 in the
field of view covered by the XMM-Newton telescopes to search for diffuse X-ray
emission. An extended X-ray emission region with a full width half maximum
(FWHM) size of $\approx 12$ arc min is found. The centroid of the emission is
co-located with the position of TeV J2032+4130. The energy spectrum of the
emission coinciding with the position and extension of TeV J2032+4130 can be
modeled by a power-law model with a photon index
$\Gamma=1.5\pm0.2_\mathrm{stat}\pm0.3_\mathrm{sys}$ and an energy flux
integrated between 2 and 10 keV of $f_{2-10 \mathrm{keV}} \approx 7\cdot
10^{-13}$ ergs/(cm$^2$ s) which is lower than the very high energy gamma-ray
flux observed from TeV J2032+4130. We conclude that the faint extended X-ray
emission discovered in this observation is the X-ray counterpart of TeV
J2032+4130. Formally, it cannot be excluded that the extended emission is due
to an unrelated population of faint, hot ($k_BT\approx 10$ keV) unresolved
point-sources which by chance coincides with the position and extension of TeV
J2032+4130. We discuss our findings in the frame of both hadronic and leptonic
gamma-ray production scenarios.
|
In the context of flavor-universal topcolor-assisted technicolor (TC2)
models, we calculate the lepton flavor violating (LFV) $Z\to l_il_j$ decays. We
find that the extra U(1) gauge boson $Z^{\prime}$ can give significant
contributions to these LFV processes. With reasonable values of the parameters,
the branching ratios of the processes $Z\to \tau\mu$ and $Z\to \tau e$ can
approach the experimental upper limits. The indirect bound on the process $Z\to
\mu e$ can give a severe constraint on the flavor-universal TC2 models.
|
In this article we study the quasilinear wave equation $\Box_{g(u, t, x)} u =
0$ where the metric $g(u, t, x)$ is close to the Schwarzschild metric. Under
suitable assumptions on the metric coefficients, and assuming that the initial
data for $u$ is small enough, we prove global existence of the solution. The
main technical result of the paper is a local energy estimate for the linear
wave equation on metrics with slow decay to the Schwarzschild metric.
|
We present a Multi-Index Quasi-Monte Carlo method for the solution of
elliptic partial differential equations with random coefficients. By combining
the multi-index sampling idea with randomly shifted rank-1 lattice rules, the
algorithm constructs an estimator for the expected value of some functional of
the solution. The efficiency of this new method is illustrated on a
three-dimensional subsurface flow problem with lognormal diffusion coefficient
with underlying Mat\'ern covariance function. This example is particularly
challenging because of the small correlation length considered, and thus the
large number of uncertainties that must be included. We show numerical evidence
that it is possible to achieve a cost inversely proportional to the requested
tolerance on the root-mean-square error, for problems with a smoothly varying
random field.
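The single-level building block of the method, a randomly shifted rank-1 lattice rule, can be sketched as follows. The generating vector, integrand, and point counts are toy assumptions; in practice the vector is constructed for the weighted function space at hand, e.g. by a component-by-component algorithm.

```python
import numpy as np

def shifted_lattice_estimate(f, s, n, z, n_shifts=16, rng=None):
    rng = np.random.default_rng(rng)
    k = np.arange(n)[:, None]
    base = (k * z[None, :s] / n) % 1.0        # rank-1 lattice points in [0,1)^s
    means = []
    for _ in range(n_shifts):
        shift = rng.random(s)
        pts = (base + shift) % 1.0            # random shift -> unbiased estimate
        means.append(f(pts).mean())
    means = np.array(means)
    # shift-averaged estimate and its standard error over the random shifts
    return means.mean(), means.std(ddof=1) / np.sqrt(n_shifts)

z = np.array([1, 182667, 469891, 498753, 110745, 446247, 250185, 118627])
f = lambda x: np.exp(x.sum(axis=1) / x.shape[1])   # smooth toy integrand
est, se = shifted_lattice_estimate(f, s=4, n=2**10, z=z)
print(est, "+/-", se)
```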
|
We extend our investigation of backgrounds to new physics signals, following
CMS's data-driven search for supersymmetry at the LHC. The aim is to use
different sets of cuts in gamma + 3-jet production to predict the irreducible Z
+ 3-jet background (with the Z boson decaying to neutrinos) to searches with
missing transverse energy + 3-jet signal topologies. We compute ratios of Z +
3-jet to gamma + 3-jet production cross sections and kinematic distributions at
next-to-leading order (NLO) in alpha_s. We compare these ratios with those
obtained using a parton shower matched to leading-order matrix elements
(ME+PS). This study extends our previous work [arXiv:1106.1423 [hep-ph]] on the
Z + 2-jet to gamma + 2-jet ratio. We find excellent agreement with the ratio
determined from the earlier NLO results involving two instead of three jets,
and agreement to within 10% between the NLO and ME+PS results for the ratios.
We also examine the possibility of large QCD logarithms in these processes.
Ratios of Z + n-jet to gamma + n-jet cross sections are plausibly less
sensitive to such corrections than the cross sections themselves. Their effect
on estimates of Z + 3-jet to gamma + 3-jet ratios can be assessed
experimentally by measuring the gamma + 3-jet to gamma + 2-jet production ratio
in search regions. We partially address the question of potentially large
electroweak logarithms by computing the real-emission part of the electroweak
corrections to the ratio using ME+PS, and find that it is 1% or less. Our
estimate of the remaining theoretical uncertainties in the Z to gamma ratio is
in agreement with our earlier study.
|
The one-meter telescope-reflector `Saturn' (D=1 m, F = 4 m) was partially
renovated at the Pulkovo observatory at the end of 2014. The telescope was
equipped with the CCD camera S2C, with a 14x14 arcmin field of view and a
scale of 824 mas per pixel. The observations of outer Jovian satellites have
been performed in a test mode since January 2015; the exposure time of 30
seconds allows us to obtain images of stars up to magnitude 19.5 with the
present state of the mirror and the equipment. These objects are interesting
targets because their astrometric observations are required to improve
ephemerides and for dynamical studies. Satellite positions have been
determined on the basis of CCD images obtained within 6 nights. Astrometric
reduction is performed by a linear method using HCRF/UCAC4 and HCRF/URAT1. The
internal accuracy of the satellite positions has been estimated as 20-100 mas.
The absolute values of the residuals O-C do not
exceed 100 mas in most cases. The independent tests have been carried out by
the direct comparison with the results of observations of the Jovian satellite
Himalia performed simultaneously by the Normal astrograph (the largest
difference was 113 mas). This work has been partially supported by RFBR
(12-02-00675-a) and the 22 Program of RAS Praesidium.
|
The paper is devoted to quadratic Poisson structures compatible with the
canonical linear Poisson structures on trivial 1-dimensional central extensions
of semisimple Lie algebras. In particular, we develop the general theory of
such structures and study related families of functions in involution. We also
show that there exists a 10-parametric family of quadratic Poisson structures
on $\mathfrak{gl}(3)^*$ compatible with the canonical linear Poisson structure and
containing the 3-parametric family of quadratic bivectors recently introduced
by Vladimir Sokolov. The involutive family of polynomial functions related to
the corresponding Poisson pencils contains the Hamiltonian of the polynomial
form of the elliptic Calogero--Moser system.
|
We study the statistical performance and applicability of a simple quantum
state discrimination technique for the analysis of data from nuclear quadrupole
resonance experiments on a TNT sample. The target application is remote
detection of anti-personnel landmines. We show that, even for data that allows
the determination of only one time dependent component of the NQR subsystem,
the use of the Bayes optimal detector leads to greatly improved ROC curves with
respect to the popular demodulation technique, especially for spin echo signals
with a low signal-to-noise ratio. The method can easily be extended to
incorporate results from other sensing modalities, as well as informationally
complete measurements that estimate the full density matrix of the NQR
subsystem.
|
In this paper, we propose a novel stochastic binary resetting algorithm for
networks of pulse-coupled oscillators (or, simply, agents) to reach global
synchronization. The algorithm is simple to state: Every agent in a network
oscillates at a common frequency. Upon completing an oscillation, an agent
generates a Bernoulli random variable to decide whether it sends pulses to all
of its out-neighbors or it stays quiet. Upon receiving a pulse, an agent resets
its state by following a binary phase update rule. We show that such an
algorithm can guarantee global synchronization of the agents almost surely as
long as the underlying information flow topology is a rooted directed graph.
The proof of the result relies on the use of a stochastic hybrid dynamical
system approach. Toward the end of the paper, we present numerical
demonstrations for the validity of the result and, also, numerical studies
about the times needed to reach synchronization for various information flow
topologies.
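A toy discrete-time rendering of the algorithm in Python. The abstract does not specify the binary phase update rule, so the "snap to the nearer endpoint" rule below is an assumption made for illustration; the convergence guarantee in the paper comes from a stochastic hybrid systems argument, not from this simulation.

```python
import random

def simulate(out_neighbors, p=0.5, steps=5000, dt=0.01, seed=0):
    rng = random.Random(seed)
    n = len(out_neighbors)
    phase = [rng.random() for _ in range(n)]
    for _ in range(steps):
        fired = []
        for i in range(n):
            phase[i] += dt                    # all agents share one frequency
            if phase[i] >= 1.0:               # agent completes an oscillation
                phase[i] -= 1.0
                if rng.random() < p:          # Bernoulli decision: pulse or stay quiet
                    fired.append(i)
        for i in fired:
            for j in out_neighbors[i]:
                # assumed binary resetting rule: snap the phase to 0 or 1
                phase[j] = 0.0 if phase[j] < 0.5 else 1.0
    return phase

# Rooted information flow topology: a directed cycle 0 -> 1 -> 2 -> 0.
print(simulate({0: [1], 1: [2], 2: [0]}))
```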
|
We analyze a multilayer neural field model of spatial working memory,
focusing on the impact of interlaminar connectivity, spatial heterogeneity, and
velocity inputs. Models of spatial working memory typically employ networks
that generate persistent activity via a combination of local excitation and
lateral inhibition. Our model consists of a multilayer set of equations
that describes connectivity between neurons in the same and different layers
using an integral term. The kernel of this integral term then captures the
impact of different interlaminar connection strengths, spatial heterogeneity,
and velocity input. We begin our analysis by focusing on how interlaminar
connectivity shapes the form and stability of (persistent) bump attractor
solutions to the model. Subsequently, we derive a low-dimensional approximation
that describes how spatial heterogeneity, velocity input, and noise combine to
determine the position of bump solutions. The main impact of spatial
heterogeneity is to break the translation symmetry of the network, so bumps
prefer to reside at one of a finite number of local attractors in the domain.
With the reduced model in hand, we can then approximate the dynamics of the
bump position using a continuous time Markov chain model that describes bump
motion between local attractors. While heterogeneity reduces the effective
diffusion of the bumps, it also disrupts the processing of velocity inputs by
slowing the velocity-induced propagation of bumps. However, we demonstrate that
noise can play a constructive role by promoting bump motion transitions,
restoring a mean bump velocity that is close to the input velocity.
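The reduced description lends itself to a very small simulation: a continuous-time Markov chain for the bump position hopping between local attractors, with a velocity input biasing the hop rates. The rates below are illustrative stand-ins, not the ones derived from the neural field model.

```python
import numpy as np

def simulate_bump_ctmc(t_end=200.0, rate0=0.5, bias=0.3, seed=1):
    rng = np.random.default_rng(seed)
    pos, t = 0, 0.0
    while t < t_end:
        right = rate0 * np.exp(+bias)   # velocity input biases rightward hops
        left = rate0 * np.exp(-bias)
        t += rng.exponential(1.0 / (right + left))   # Gillespie waiting time
        pos += 1 if rng.random() < right / (right + left) else -1
    return pos / t                      # effective bump velocity (sites/time)

print(simulate_bump_ctmc())             # positive drift tracking the input
```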
|
Core excitons in solids have garnered increasing interest, yet their behavior
and decay mechanisms are not fully understood. Here, we use attosecond extreme
ultraviolet (XUV) transient absorption spectroscopy, performed with a broadband
25-45 eV sub-fs XUV pump pulse and a 500-1000 nm sub-5-fs near-infrared (NIR)
supercontinuum probe pulse to monitor the excitation, dynamics, and decay of
core excitons in CaF$_2$ at the Ca$^{2+}$ M$_{2,3}$ edge. The XUV pulses are
used to excite core excitons in CaF$_2$ based around the Ca$^{2+}$ and the
polarization of the medium is subsequently perturbed by the time-delayed NIR
pulses to measure the spectral changes and decays. A number of features are
identified in the transient absorption spectrum, which suggest transfer between
excitonic states, Stark shifts, and the emergence of light-induced states. We
find that various core excitons identified exhibit coherence lifetimes spanning
3-7 fs. Furthermore, an NIR-intensity-dependent analysis finds a negative
correlation between the NIR intensity and the coherence lifetime of various identified excitonic
features, supporting a phonon-mediated mechanism as responsible for the core
exciton decoherence. We present a computational band structure projection
analysis strategy to estimate the orbital structure of the core excitons and
determine which core excitonic transitions should be allowed by selection rules
with the probe beam. This strategy is found to successfully describe the
observed spectroscopic data. The outlined joint spectroscopic and computational
investigation of core excitons is a powerful technique that explains the
complex behavior of core excitons in solid-state materials.
|
Perfect $T$-linear resistivity associated with a universal scattering rate,
$1/\tau =\alpha k_B T/\hbar$ with $\alpha \sim 1$, the so-called Planckian
metal state, has been observed in the normal state of a variety of strongly
correlated superconductors close to a quantum critical point. However, the
microscopic origin of this intriguing phenomenon and its link to quantum
criticality still remain an outstanding open problem. In this work, we observe
the quantum-critical $T/B$-scaling of the Planckian metal state in the
resistivity and heat capacity of heavy-electron superconductor
Ce$_{1-x}$Nd$_x$CoIn$_5$ in magnetic fields near the edge of
antiferromagnetism, driven by critical Kondo hybridization at the critical
doping $x_c \sim 0.03$. We further provide the first microscopic mechanism to
account for the Planckian state in a quantum critical system based on the
critical charge fluctuations near Kondo breakdown transition at $x_c$ within
the quasi-two-dimensional Kondo-Heisenberg lattice model. This mechanism
simultaneously captures the observed universal Planckian scattering rate as
well as the quantum-critical scaling and power-law divergence in thermodynamic
observables near criticality. Our mechanism is generic to Planckian metal
states in a variety of quantum critical superconductors near Kondo destruction.
|
Tanno [6] provided an algebraic characterization for an almost Hermitian
manifold to reduce to a space of constant holomorphic sectional curvature,
which he later extended to the Sasakian manifolds as well. In the present
paper, we generalize the same characterization to generalized
$g.f.f$-manifolds.
|
The superradiant phase transition (SPT) controlled by the interacting
strength between the two-level atom and the photons has been a hot topic in the
Rabi model and the Rabi-dimer model. The latter describes two Rabi cavities
coupled with an inter-cavity hopping parameter. Moreover, by investigating
the correlation-length critical exponent, the SPT in the Rabi-dimer model has
been found to belong to the same universality class as that in the Rabi model.
In this paper, we are concerned with whether the inter-cavity hopping
parameter between two Rabi cavities (i.e., the Rabi-dimer model) will induce
the SPT and, if so, to which universality class the phase transition belongs.
We analytically derive the
phase boundary of the SPT and investigate the ground-state properties of the
system. We uncover that the inter-cavity induced SPT can be apparently
understood from the ground-state energy and the ground-state photon population,
as well as the ground-state expectation value of the squared anti-symmetric
mode. From the scaling analysis of the fidelity susceptibility, we numerically
verify that the SPT driven by the cavity coupling belongs to the same
universality class as the one driven by the atom-cavity interaction. Our work
enriches the
studies on the SPT and its critical behaviors in the Rabi-dimer model.
|
Consensus algorithms facilitate agreement on and resolution of blockchain
functions, such as smart contracts and transactions. Ethereum uses a
Proof-of-Stake (PoS) consensus mechanism, which depends on financial incentives
to ensure that validators perform certain duties and do not act maliciously.
Should a validator attempt to defraud the system, legitimate validators will
identify this and then staked cryptocurrency is `burned' through a process of
slashing.
In this paper, we show that an attacker who has compromised a set of
validators could threaten to perform malicious actions that would result in
slashing and thus, hold those validators to ransom. We use game theory to study
how an attacker can coerce payment from a victim, for example by deploying a
smart contract to provide a root of trust shared between attacker and victim
during the extortion process. Our game theoretic model finds that it is in the
interests of the validators to fully pay the ransom due to a lack of systemic
protections for validators. Financial risk is solely placed on the victim
during such an attack, with no mitigations available to them aside from
capitulation (payment of ransom) in many scenarios. Such attacks could be
disruptive to Ethereum and, likely, to many other PoS networks, if public trust
in the validator system is eroded. We also discuss and evaluate potential
mitigation measures arising from our analysis of the game theoretic model.
|
How large can a family $\mathcal{A} \subset \mathcal{P}[n]$ be if it does not
contain $A,B$ with $|A\setminus B| = 1$? Our aim in this paper is to show that
any such family has size at most
$\frac{2+o(1)}{n}\binom{n}{\lfloor n/2\rfloor}$. This is tight up to a
multiplicative constant of $2$. We also obtain similar results for families
$\mathcal{A} \subset \mathcal{P}[n]$ with $|A\setminus B| \neq k$, showing
that they satisfy $|\mathcal{A}| \leq \frac{C_k}{n^k}\binom{n}{\lfloor
n/2\rfloor}$, where $C_k$ is a constant depending only on $k$.
|
We describe a method to computationally estimate the probability density
function of a univariate random variable by applying the maximum entropy
principle with some local conditions given by Gaussian functions. The
estimation errors and optimal values of parameters are determined. Experimental
results are presented. The method estimates the distribution well if a large
enough sample is used, typically at least 1000 values. Compared to the
classical approach of entropy maximisation, local conditions allow improving
estimation locally. The method is well suited for a heuristic optimisation
approach.
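A minimal sketch of the estimation idea in Python: with local conditions $E[G_i(X)] = g_i$ given by Gaussian functions $G_i$, the maximum entropy density has the exponential-family form $p(x) \propto \exp(\sum_i \lambda_i G_i(x))$, and the multipliers can be found by minimizing the convex dual. Kernel centers, widths, grid, and sample below are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 2000)            # >= 1000 values, as advised

centers = np.linspace(-3, 3, 9)                # Gaussian feature centers
width = 0.8
grid = np.linspace(-6, 6, 801)

def G(x):                                      # Gaussian feature functions
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

g_hat = G(sample).mean(axis=0)                 # empirical constraint values
Ggrid = G(grid)

def dual(lam):                                 # convex dual: log Z - lam . g
    u = Ggrid @ lam
    m = u.max()                                # stabilized log-partition
    logZ = m + np.log(np.trapz(np.exp(u - m), grid))
    return logZ - lam @ g_hat

lam = minimize(dual, np.zeros(len(centers))).x
p = np.exp(Ggrid @ lam)
p /= np.trapz(p, grid)                         # normalized density estimate
print("estimated mean:", np.trapz(grid * p, grid))   # ~0 for N(0,1) data
```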
|
This paper is a sequel to "Logical systems I: Lambda calculi through
discreteness". It provides a general 2-categorical setting for extensional
calculi and shows how intensional and extensional calculi can be related in
logical systems. We define Yoneda triangles as relativisations of internal
adjunctions, and use them to characterise universes that admit a notion of
convolution. We show that such universes induce semantics for lambda calculi.
We prove that a construction analogous to enriched Day convolution works for
categories internal to a locally cartesian closed category with finite
colimits.
|
We classify up to coarse equivalence all countable abelian groups of finite
torsion free rank. The Q-cohomological dimension and the torsion free rank are
the two invariants that give us such classification. We also prove that any
countable abelian group of finite torsion free rank is coarsely equivalent to
Z^n + H where H is a direct sum (possibly infinite) of cyclic groups. A partial
generalization to countable abelian groups of the Gromov rigidity theorem for
abelian groups is shown.
|
Art objects can evoke certain emotions. Color is a fundamental element of
visual art and plays a significant role in how art is perceived. This paper
introduces a novel approach to classifying emotions in art using Fuzzy Sets. We
employ a fuzzy approach because it aligns well with the imprecise and
subjective nature of human judgments. Extensive fuzzy colors (n=120) and a broad emotional
spectrum (n=10) allow for a more human-consistent and context-aware exploration
of emotions inherent in paintings. First, we introduce the fuzzy color
representation model. Then, at the fuzzification stage, we process the Wiki Art
Dataset of paintings tagged with emotions, extracting fuzzy dominant colors
linked to specific emotions. This results in fuzzy color distributions for ten
emotions. Finally, we convert them back to a crisp domain, obtaining a
knowledge base of color-emotion associations in primary colors. Our findings
reveal strong associations between specific emotions and colors; for instance,
gratitude strongly correlates with green, brown, and orange. Other noteworthy
associations include brown with anger, orange with shame, yellow with happiness,
and gray with fear. Using these associations and Jaccard similarity, we can
find the emotions in an arbitrary untagged image. We conducted a 2AFC
experiment involving human subjects to evaluate the proposed method. The
average hit rate of 0.77 indicates a significant correlation between the
method's predictions and human perception. The proposed method is simple to
adapt to art painting retrieval systems. The study contributes to the
theoretical understanding of color-emotion associations in art, offering
valuable insights for various practical applications besides art, like
marketing, design, and psychology.
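The retrieval step described above reduces to a fuzzy-set Jaccard similarity, sum(min)/sum(max) over membership values, between the image's fuzzy color distribution and the per-emotion distributions. The sketch below uses a tiny illustrative table; the actual knowledge base covers 120 fuzzy colors and ten emotions.

```python
def fuzzy_jaccard(a, b):
    colors = set(a) | set(b)
    inter = sum(min(a.get(c, 0.0), b.get(c, 0.0)) for c in colors)
    union = sum(max(a.get(c, 0.0), b.get(c, 0.0)) for c in colors)
    return inter / union if union else 0.0

knowledge_base = {                 # illustrative memberships per emotion
    "gratitude": {"green": 0.8, "brown": 0.6, "orange": 0.5},
    "anger":     {"brown": 0.7, "red": 0.6},
    "happiness": {"yellow": 0.9, "orange": 0.4},
}

image_colors = {"green": 0.7, "orange": 0.4, "brown": 0.3}  # from fuzzification

best = max(knowledge_base,
           key=lambda e: fuzzy_jaccard(knowledge_base[e], image_colors))
print(best)                        # -> "gratitude" for this toy input
```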
|
An inductive inference system for proving validity of formulas in the initial
algebra $T_{\mathcal{E}}$ of an order-sorted equational theory $\mathcal{E}$ is
presented. It has 20 inference rules, but only 9 of them require user
interaction; the remaining 11 can be automated as simplification rules. In this
way, a substantial fraction of the proof effort can be automated. The inference
rules are based on advanced equational reasoning techniques, including:
equationally defined equality predicates, narrowing, constructor variant
unification, variant satisfiability, order-sorted congruence closure,
contextual rewriting, ordered rewriting, and recursive path orderings. All
these techniques work modulo axioms $B$, for $B$ any combination of
associativity and/or commutativity and/or identity axioms. Most of these
inference rules have already been implemented in Maude's NuITP inductive
theorem prover.
|
We question the role of entanglement in masking quantum information contained
in a set of mixed quantum states. We first show that a masker that can mask any
two single-qubit pure states, can mask the entire set of mixed states
comprising of the classical mixtures of those two pure qubit states as well. We
then try to find the part played by entanglement in masking two different sets:
One, a set of mixed states formed by the classical mixtures of two single-qubit
pure commuting states, and another, a set of mixed states obtained by mixing
two single-qubit pure non-commuting states. For both cases, we show that the
masked states remain entangled unless the input state is an equal mixture of
the two pure states. This in turn reveals that entanglement is necessary for
masking an arbitrary set of two single qubit states, regardless of their
mixedness and mutual commutativity.
|
We address the occurrence of narrow planetary rings and some of their
structural properties, in particular when the rings are shepherded. We consider
the problem as Hamiltonian {\it scattering} of a large number of
non-interacting massless point particles in an effective potential. Using the
existence of stable motion in scattering regions in this set up, we describe a
mechanism in phase space for the occurrence of narrow rings and some
consequences in their structure. We illustrate our approach with three
examples. We find eccentric narrow rings displaying sharp edges, variable width
and the appearance of distinct ring components (strands) which are spatially
organized and entangled (braids). We discuss the relevance of our approach for
narrow planetary rings.
|
Addressing the optical properties of a single nanoparticle in the infrared is
particularly challenging, thus alternative methods for characterizing the
conductance spectrum of nanoparticles in this spectral range need to be
developed. Here we describe an efficient method of fabricating single
nanoparticle tunnel junctions on a chip circuit. We apply this method to
narrow-band-gap nanoparticles of HgSe, whose band structure combines the
inverted character of the bulk semimetal with quantum confinement and
self-doping. Upon tuning the gate bias, measurements reveal the presence of
two energy gaps in
the spectrum. The wider gap results from the interband gap, while the narrower
gap results from intraband transitions. The observation of the latter near zero
gate voltage confirms the doped character of the nanoparticle at the single
particle level, which is in full agreement with the ensemble optical and
transport measurements. Finally we probe the phototransport within a single
quantum dot and demonstrate a large photogain mechanism resulting from
photogating.
|
The biadjoint scalar theory has cubic interactions and fields transforming in
the biadjoint representation of ${\rm SU}(N)\times {\rm SU}\big({\tilde
N}\big)$. Amplitudes are "color" decomposed in terms of partial amplitudes
computed using Feynman diagrams which are simultaneously planar with respect to
two orderings. In 2019, a generalization of biadjoint scalar amplitudes based
on generalized Feynman diagrams (GFDs) was introduced. GFDs are collections of
Feynman diagrams derived by incorporating an additional constraint of "local
planarity" into the construction of the arrangements of metric trees in
combinatorics. In this work, we propose a natural generalization of color
orderings which leads to color-dressed amplitudes. A generalized color ordering
(GCO) is defined as a collection of standard color orderings that is induced,
in a precise sense, from an arrangement of projective lines on $\mathbb{RP}^2$.
We present results for $n\leq 9$ generalized color orderings and GFDs,
uncovering new phenomena in each case. We discover generalized decoupling
identities and propose a definition of the "colorless" generalized scalar
amplitude. We also propose a notion of GCOs for arbitrary $\mathbb{RP}^{k-1}$,
discuss some of their properties, and comment on their GFDs. In a companion
paper, we explore the definition of partial amplitudes using CEGM integral
formulas.
|
An ongoing challenge in the study of quantum materials is to reveal and
explain collective quantum effects in spin systems where interactions between
different mode types are important. Here we approach this problem through a
combined experimental and theoretical study of interacting transverse and
longitudinal modes in an easy-plane quantum magnet near a continuous quantum
phase transition. Our inelastic neutron scattering measurements of
Ba$_{2}$FeSi$_{2}$O$_{7}$ reveal the emergence, decay, and renormalization of a
longitudinal mode throughout the Brillouin zone. The decay of the longitudinal
mode is particularly pronounced at the zone center. To account for the
many-body effects of the interacting low-energy modes in anisotropic magnets,
we generalize the standard spin-wave theory. The measured mode decay and
renormalization is reproduced by including all one-loop corrections. The
theoretical framework developed here is broadly applicable to quantum magnets
with more than one type of low energy mode.
|
High-quality factor microwave resonators operating in a magnetic field are a
necessity for some quantum sensing applications and hybrid platforms. Losses in
microwave superconducting resonators can have several origins, including
microscopic defects, usually known as two-level-systems (TLS). Here, we
characterize the magnetic field response of NbTiN resonators patterned on
sapphire and observe clear absorption lines occurring at specific magnetic
fields. We identify the spin systems responsible for these features, including
a yet unreported spin with $g=1.85$ that we attribute to defects in the NbTiN
thin film. We develop mitigation strategies, notably an aluminum etch mask,
that maintain quality factors $Q > 2 \times 10^{5}$ over the range $0$-$0.3$ T.
|
The structure of the gap parameter ($\Delta_{k}$) for the hole-doped cuprates
has been studied. The obtained results indicate that the antinodal part of
$\Delta_{k}$ is very weakly temperature dependent and above the critical
temperature ($T_{C}$), it extends into the anomalous normal state to the
pseudogap temperature. On the other hand, the values of $\Delta_{k}$, which are
close to the nodal part, are strongly temperature dependent. The model has been
tested for the ${\rm YBa_{2}Cu_{3}O_{7-\delta}}$ superconductor. It has been
shown that the theoretical results agree with the experimental data.
|
The number and relative placement of BPMs and steerers with respect to the
quadrupoles in a circular lattice can lead to degeneracy in the context of
inverse modeling of accelerator optics. Further, the measurement uncertainties
introduced by beam position monitors can propagate through the inverse modeling
process in ways that prohibit the successful estimation of model errors. In
this contribution, the influence of BPM and steerer placement on the
conditioning of the inverse problem is studied. An analytical version of the
Jacobian, linking the quadrupole gradient errors along with BPM and steerer
gain errors with the orbit response matrix, is derived. It is demonstrated that
this analytical version of the Jacobian can be used in place of the numerically
obtained Jacobian during the fitting procedure. The approach is first tested
with simulations and the findings are verified by measurement data taken on
SIS18 synchrotron at GSI. The results are crosschecked with the standard
numerical Jacobian approach. The quadrupole errors causing tune discrepancies
observed at SIS18 are identified.
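The fitting step can be pictured as Gauss-Newton iteration with an analytic Jacobian supplied in place of a finite-difference one. The toy model below stands in for the orbit-response-matrix model; the accelerator-specific Jacobian derived in the paper is not reproduced.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, iters=20):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r, J = residual(theta), jacobian(theta)
        theta = theta - np.linalg.lstsq(J, r, rcond=None)[0]  # GN step
    return theta

# Toy "response" model: r_i(theta) = exp(-theta1 * x_i) * theta2 - y_i
x = np.linspace(0, 1, 30)
y = np.exp(-1.3 * x) * 0.7
residual = lambda th: np.exp(-th[0] * x) * th[1] - y
jacobian = lambda th: np.stack(
    [-x * np.exp(-th[0] * x) * th[1],      # d r / d theta1 (analytic)
     np.exp(-th[0] * x)], axis=1)          # d r / d theta2 (analytic)

print(gauss_newton(residual, jacobian, [1.0, 1.0]))   # -> [1.3, 0.7]
```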
|
Integrated quantum photonic circuitry is an emerging topic that requires
efficient coupling of quantum light sources to waveguides and optical
resonators. So far, great effort has been devoted to engineering on-chip
systems from three-dimensional crystals such as diamond or gallium arsenide. In
this study, we demonstrate room temperature coupling of quantum emitters
embedded within a layered hexagonal boron nitride to an on-chip aluminium
nitride waveguide. We achieve a light coupling efficiency of 1.2% for the
device and realise transmission of single photons through the waveguide. Our
results serve
as a foundation for the integration of layered materials with on-chip
components and for the realisation of integrated quantum photonic circuitry.
|
We demonstrate coherent one-color photoassociation of a Bose-Einstein
condensate, which results in Rabi oscillations between atomic and molecular
condensates. We attain atom-molecule Rabi frequencies that are comparable to
decoherence rates by driving photoassociation of atoms in an $^{88}$Sr
condensate to a weakly-bound level of the metastable $^{1}S_{0}+^{3}P_{1}$
molecular potential, which has a long lifetime and large Franck-Condon overlap
integral with the ground scattering state. Transient shifts and broadenings of
the excitation spectrum are clearly seen at short times, and they create an
asymmetric excitation profile that only displays Rabi oscillations for blue
detuning from resonance.
|
Recently, unsupervised learning has made impressive progress on various
tasks. Despite the dominance of discriminative models, increasing attention is
drawn to representations learned by generative models and in particular,
Generative Adversarial Networks (GANs). Previous works on the interpretation of
GANs reveal that GANs encode semantics in feature maps in a linearly separable
form. In this work, we further find that GAN's features can be well clustered
with the linear separability assumption. We propose a novel clustering
algorithm, named KLiSH, which leverages the linear separability to cluster
GAN's features. KLiSH succeeds in extracting fine-grained semantics of GANs
trained on datasets of various objects, e.g., car, portrait, animals, and so
on. With KLiSH, we can sample images from GANs along with their segmentation
masks and synthesize paired image-segmentation datasets. Using the synthesized
datasets, we enable two downstream applications. First, we train semantic
segmentation networks on these datasets and test them on real images, realizing
unsupervised semantic segmentation. Second, we train image-to-image translation
networks on the synthesized datasets, enabling semantic-conditional image
synthesis without human annotations.
|
Integrated task and motion planning (TAMP) has proven to be a valuable
approach to generalizable long-horizon robotic manipulation and navigation
problems. However, the typical TAMP problem formulation assumes full
observability and deterministic action effects. These assumptions limit the
ability of the planner to gather information and make decisions that are
risk-aware. We propose a strategy for TAMP with Uncertainty and Risk Awareness
(TAMPURA) that is capable of efficiently solving long-horizon planning problems
with initial-state and action outcome uncertainty, including problems that
require information gathering and avoiding undesirable and irreversible
outcomes. Our planner reasons under uncertainty at both the abstract task level
and continuous controller level. Given a set of closed-loop goal-conditioned
controllers operating in the primitive action space and a description of their
preconditions and potential capabilities, we learn a high-level abstraction
that can be solved efficiently and then refined to continuous actions for
execution. We demonstrate our approach on several robotics problems where
uncertainty is a crucial factor and show that reasoning under uncertainty in
these problems outperforms previously proposed determinized planning, direct
search, and reinforcement learning strategies. Lastly, we demonstrate our
planner on two real-world robotics problems using recent advancements in
probabilistic perception.
|
In this work, we study direct limits of finite dimensional basic classical
simple Lie superalgebras and obtain the conjugacy classes of Cartan subalgebras
under the group of automorphisms.
|
We present the 2018 DAVIS Challenge on Video Object Segmentation, a public
competition specifically designed for the task of video object segmentation. It
builds upon the DAVIS 2017 dataset, which was presented in the previous edition
of the DAVIS Challenge, and added 100 videos with multiple objects per sequence
to the original DAVIS 2016 dataset. Motivated by the analysis of the results of
the 2017 edition, the main track of the competition will be the same as in
the previous edition (segmentation given the full mask of the objects in the
first frame -- semi-supervised scenario). This edition, however, also adds an
interactive segmentation teaser track, where the participants will interact
with a web service simulating the input of a human that provides scribbles to
iteratively improve the result.
|
This paper reports statistically significant correlations between various
burst parameters, observed in a sample of 156 GRBs belonging to BATSE 4B
catalog with T90 less than 2 s. The number of subpulses in a burst is strongly
correlated not only with the object's duration but also with its fluence and
hardness ratio, suggesting that when the central engine is more powerful,
ejecting matter with typically higher values of Lorentz factor, the bulk energy
is dissipated on longer time scales in the form of larger number of gamma
pulses. We estimate hard-to-soft lag in bursts by taking the difference between
centroids corresponding to time profiles at energies $> 100$ keV and $<100$
keV. The number of short GRBs that show soft-to-hard spectral evolution is
slightly over one quarter of the total, in the sample considered here. Bursts
that exhibit hard-to-soft spectral change appear to form a distinct class, with
strength as well as hardness of individual subpeaks tending to decrease with
peak position. Opposite is true for objects with softer photons arriving
earlier than the harder ones, implying some kind of a rejuvenation of the
central engine (possibly due to enhanced accretion of matter towards the end).
The two classes also show other diverging trends. For instance, objects
belonging to the larger of the two classes display strong correlations between
spectral lag and the fluence, the hardness ratio, and the number of pulses,
respectively, while no such correlations are seen in bursts that evolve from
soft to hard. However, the magnitude of the lag is strongly correlated with
burst duration in both the classes.
|
Recommender Systems (RS) are currently an effective way to solve information
overload. To predict users' next click behavior, an RS needs to collect users'
personal information and behavior to achieve a comprehensive and profound user
preference perception. However, such centrally collected data are
privacy-sensitive, and any leakage may cause severe problems to both users and
service providers. This paper proposes a novel privacy-preserved recommender
system framework (PPRSF) that applies the federated learning paradigm to enable
the recommendation algorithm to be trained and to carry out inference without
centrally collecting users' private data. The PPRSF not only reduces the
privacy leakage risk and satisfies legal and regulatory requirements, but also
allows various recommendation algorithms to be applied.
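As a concrete illustration of the federated paradigm the PPRSF builds on, the following minimal sketch shows the federated-averaging pattern in which each client trains locally and the server aggregates only model parameters; the linear model, toy data, and function names are illustrative assumptions, not the PPRSF implementation.

    import numpy as np

    def local_update(weights, user_data, lr=0.1, epochs=1):
        # Each user trains on-device; raw interaction data never leave the client.
        w = weights.copy()
        X, y = user_data                          # private features / clicks
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)     # gradient of a linear model
            w -= lr * grad
        return w

    def federated_round(weights, clients):
        # The server sees only parameter updates, weighted by local data size.
        updates = [local_update(weights, c) for c in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(10)]
    w = np.zeros(5)
    for _ in range(50):
        w = federated_round(w, clients)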
|
When applying deep learning models in open-world scenarios, active learning
(AL) strategies are crucial for identifying label candidates from a nearly
infinite amount of unlabeled data. In this context, robust out-of-distribution
(OOD) detection mechanisms are essential for handling data outside the target
distribution of the application. However, current works investigate both
problems separately. In this work, we introduce SISOM as the first unified
solution for both AL and OOD detection. By leveraging feature-space distance
metrics, SISOM combines the strengths of the currently independent tasks to
solve both effectively. We conduct extensive experiments showing the problems
arising when migrating between the two tasks. In these evaluations, SISOM
underlined its effectiveness by achieving first place in two of the three
widely used OpenOOD benchmarks and second place in the remaining one. In AL,
SISOM outperforms other methods and delivers top-1 performance in three
benchmarks.
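The abstract does not spell out SISOM's scoring rule; as a rough, hedged illustration of how a single feature-space distance score can serve both tasks (high scores flag OOD candidates as well as informative AL queries), consider a k-nearest-neighbour distance score:

    import numpy as np

    def knn_feature_score(train_feats, query_feats, k=10):
        # Score = mean distance to the k nearest in-distribution features.
        # High scores mark OOD candidates and informative AL queries.
        d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=-1)
        return np.sort(d, axis=1)[:, :k].mean(axis=1)

    rng = np.random.default_rng(1)
    in_dist = rng.normal(0, 1, size=(500, 64))
    queries = np.vstack([rng.normal(0, 1, size=(5, 64)),    # in-distribution
                         rng.normal(4, 1, size=(5, 64))])   # far away (OOD-like)
    print(knn_feature_score(in_dist, queries).round(2))     # last five scores larger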
|
This note sketches the extension of the basic characterisation theorems for
modal logic as the bisimulation-invariant fragment of first-order logic to
modal logic with graded modalities and a matching adaptation of bisimulation.
We focus on showing
expressive completeness of graded multi-modal logic for those first-order
properties of pointed Kripke structures that are preserved under counting
bisimulation equivalence among all or among just all finite pointed Kripke
structures.
|
The ratio of the transverse and longitudinal component of polarization
transfer to protons in quasi-elastic $(\vec{e}, e^{\prime} \vec{p}\,)$
reaction, $P^{\prime}_x/P^{\prime}_z$, is sensitive to the proton's
electromagnetic form factor ratio, $G_E/G_M$. To explore density-dependent
in-medium modifications, a comparison of polarization transfer ratios involving
protons from distinct nuclear shells, each with different local nuclear
densities, has been proposed. In this study, we present such comparisons
between four shells, $1s_{1/2}$, $1p_{3/2}$ in $^{12}\mathrm{C}$ and
$1d_{3/2}$, $2s_{1/2}$ in $^{40}\mathrm{Ca}$. In an effort to account for other
many-body effects that may differ between shells, we use state-of-the-art
relativistic distorted-wave impulse-approximation (RDWIA) calculations and
present the double ratios, $(P^{\prime}_x/P^{\prime}_z)_{\rm
Data}/(P^{\prime}_x/P^{\prime}_z)_{\rm RDWIA}$ as well as the super ratios,
$\left[(P^{\prime}_x/P^{\prime}_z)_{\rm A}/(P^{\prime}_x/P^{\prime}_z)_{\rm
B}\right]_{\rm Data}/\left[(P^{\prime}_x/P^{\prime}_z)_{\rm
A}/(P^{\prime}_x/P^{\prime}_z)_{\rm B}\right]_{\rm RDWIA}$, for chosen shells A
and B, as a function of effective local nuclear densities. We find that double
ratios for individual shells show a dependence on the probed effective nuclear
densities. Studying the ratios, we observed a systematic variation between
pairs of higher- and lower-density shells.
|
Measuring the argon purity is critical for all Ar-based rare event research
experiments. Mass spectrometry is typically used to assess the uranium and
thorium contamination in samples of the materials used to build low-background
detectors; however, this technique has the potential to provide other valuable
information that is currently not exploited. We have shown that, by ICP-MS, it
is possible to identify and quantify common chemical contaminants in argon.
Preliminary tests were done with gas extracted from the MicroBooNE experiment
at FNAL and the ArDM experiment at LSC. In the former case, we found
significant nitrogen contamination, well above the level measured in commercial
argon gas.
In ArDM, we identified and quantified the presence of mercury in the gas used
for its science run. In both cases, the presence of krypton (~ppb) and xenon
(~10s ppb) in argon gas has been established.
|
We present a new method of proving the Diophantine extremality of various
dynamically defined measures, vastly expanding the class of measures known to
be extremal. This generalizes and improves the celebrated theorem of Kleinbock
and Margulis [{\it Invent. Math.} {\bf 138}(3) (1999), 451--494] resolving
Sprind\v zuk's conjecture, as well as its extension by Kleinbock,
Lindenstrauss, and Weiss [On fractal measures and Diophantine approximation.
{\it Selecta Math.} {\bf 10} (2004), 479--523], hereafter abbreviated KLW. As
applications we prove the extremality of all hyperbolic measures of smooth
dynamical systems with sufficiently large Hausdorff dimension, and of the
Patterson--Sullivan measures of all nonplanar geometrically finite groups. The
key technical idea, which has led to a plethora of new applications, is a
significant weakening of KLW's sufficient conditions for extremality. In the
first of this series of papers [{\it Selecta Math.} {\bf 24}(3) (2018),
2165--2206], we introduce and develop a systematic account of two classes of
measures, which we call {\it quasi-decaying} and {\it weakly quasi-decaying}.
We prove that weak quasi-decay implies strong extremality in the matrix
approximation framework, as well as proving the ``inherited exponent of
irrationality'' version of this theorem. In this paper, the second of the
series, we establish sufficient conditions on various classes of conformal
dynamical systems for their measures to be quasi-decaying. In particular, we
prove the above-mentioned result about Patterson--Sullivan measures, and we
show that equilibrium states (including conformal measures) of nonplanar
infinite iterated function systems (including those which do not satisfy the
open set condition) and rational functions are quasi-decaying.
|
Within the Hamiltonian formulation of lattice gauge theories, prepotentials,
belonging to the fundamental representation of the gauge group and defined
locally at each site of the lattice, enable us to construct local loop
operators and loop states. We propose a set of diagrammatic rules for the
action of local gauge-invariant operators on arbitrary loop states. Moreover,
we propose a new set of fusion variables within the prepotential approach
suitable for approaching the weak coupling limit.
|
We present the results of a systematic, first-principles study of the
spectrum and decay constants of mesons for different numbers of color charges
N, via lattice computations. We restrict our attention to states in the
non-zero isospin sector, evaluating the masses associated with the ground-state
and first excitation in the pseudoscalar, vector, scalar, and axial vector
channels. Our results are based on a new set of simulations of four-dimensional
SU(N) Yang-Mills theories with the number of colors ranging from N=2 to N=17;
the spectra and the decay constants are computed in the quenched approximation
(which becomes exact in the 't Hooft limit) using Wilson fermions. After
discussing the extrapolations to the chiral and large-N limits, we present a
comparison of our results to some of the numerical computations and analytical
predictions available in the literature - including, in particular, those from
holographic computations.
|
We characterize the oriented Seifert-fibered three-manifolds which admit
positive, transverse contact structures.
|
Within the Ginzburg-Landau model we study the critical field and temperature
enhancement for crossing superconducting channels formed either along the
sample edges or domain walls in thin-film magnetically coupled superconducting
- ferromagnetic bilayers. The corresponding Cooper pair wave function can be
viewed as a hybridization of two order parameter (OP) modes propagating along
the boundaries and/or domain walls. Different momenta of hybridized OP modes
result in the formation of vortex chains outgoing from the crossing point of
these channels. Near this crossing point the wave functions of the modes merge
giving rise to the increase in the critical temperature for a localized
superconducting state. The origin of this critical temperature enhancement
caused by the wave function squeezing is illustrated for a limiting case of
approaching parallel boundaries and/or domain walls. Using both the variational
method and numerical simulations, we have studied the dependence of the
critical temperature and the OP structure on the applied magnetic field and on
the angle between the crossing channels.
|
An engineer in a product company is expected to design a good solution to a
computing problem (Design skill) and articulate the solution well (Expression
skill). We expect an industry-ready student (final year student or a fresh
campus hire) as well to demonstrate both these skills when working on simple
problems assigned to them.
This paper reports on the results when we tested a cohort of participants
(N=16) for these two skills. We created two participant groups from two
different tiers of college, one from a Tier 1 college (who were taking an
advanced elective course), and another from Tier 2 colleges (who had been hired
for internship in a SaaS product company). We gave them a simple design problem
and evaluated the quality of their design and expression. Design quality was
evaluated along three design principles of Abstraction, Decomposition, and
Precision (adapted from the Software Engineering Book of Knowledge). Expression
quality was evaluated using criteria we developed for our study, based on the
diversity and density of the expressions used in the articulation.
We found the students lacking in design and expression skills. Specifically,
a) they struggled with abstraction as a design principle, b) they did not use
enough modes of expressions to articulate their design, and c) they did not use
enough formal notations (UML, equations, relations, etc.). We also found a
significant difference in performance between the two participant groups.
|
We address the mu-problem in the context of General Gauge Mediation (GGM). We
classify possible models depending on the way the Higgs fields couple to the
supersymmetry breaking hidden-sector. The different types of models have
distinct signatures in the MSSM parameters. We find concrete and surprisingly
simple examples based on messengers in each class. These examples lead to all
the soft masses and a consistent Higgs sector.
|
Minkowski space is the local model of 3-dimensional flat spacetimes. Recent
progress in the description of globally hyperbolic flat spacetimes has shown
strong links between Lorentzian geometry and Teichm{\"u}ller space. We notice
that Lorentzian generalisations of conical singularities are useful for
describing flat spacetimes, creating stronger links with hyperbolic geometry
and compactifying spacetimes. In particular, massive particles and extreme BTZ
singular lines arise naturally. This paper is three-fold. First, we prove
background local properties which will be useful for future work. Second, we
generalise fundamental theorems of the theory of globally hyperbolic flat
spacetimes. Third, we define BTZ-extensions and prove that they preserve
Cauchy-maximality and Cauchy-completeness.
|
The boundary of the region in spacetime containing future-trapped closed
surfaces is considered. In asymptotically flat spacetimes, this boundary does
not need to be the event horizon nor a dynamical/trapping horizon. Some
properties of this boundary and its localization are analyzed, and illustrated
with examples. In particular, fully explicit future-trapped compact surfaces
penetrating into flat portions of a Vaidya spacetime are presented.
|
We show that non-Hermiticity enables topological phases with unidirectional
transport in one-dimensional Floquet chains. The topological signatures of
these phases are non-contractible loops in the spectrum of the Floquet
propagator that are separated by an imaginary gap. Such loops occur exclusively
in non-Hermitian Floquet systems. We define the corresponding topological
invariant as the winding number of the Floquet propagator relative to the
imaginary gap. To relate topology to transport, we introduce the concept of
regularized dynamics of non-Hermitian chains. We establish that, under the
conditions of regularized dynamics, transport is quantized in so far as the
charge transferred over one period equals the topological winding number. We
illustrate these theoretical findings with the example of a Floquet chain that
features a topological phase transition and acts as a charge pump in the
non-trivial topological phase. We finally discuss whether these findings
justify the notion that non-Hermitian Floquet chains support topological
transport.
|
In recent years higher-dimensional black holes have attracted much interest
because of various developments in gravity and high energy physics. But whereas
higher-dimensional charged static (Tangherlini) and uncharged rotating
(Myers-Perry) black holes were found long ago, black hole solutions of
Einstein-Maxwell theory are not yet known in closed form in more than 4
dimensions when both electric charge and rotation are present. Here we
therefore study these solutions and those of Einstein-Maxwell-dilaton theory,
by using numerical and perturbative methods, and by exploiting the existence of
spacetime symmetries. The properties of these black holes reveal new
interesting features, not seen in D=4. For instance, unlike the D=4 Kerr-Newman
solution, they possess a non-constant gyromagnetic factor.
|
Few-shot sequence labeling aims to identify novel classes based on only a few
labeled samples. Existing methods solve the data scarcity problem mainly by
designing token-level or span-level labeling models based on metric learning.
However, these methods are trained at only a single granularity (i.e., either
token level or span level) and inherit the weaknesses of that granularity. In
this paper, we first unify token-level and span-level supervision
and propose a Consistent Dual Adaptive Prototypical (CDAP) network for few-shot
sequence labeling. CDAP contains the token-level and span-level networks,
jointly trained at different granularities. To align the outputs of two
networks, we further propose a consistent loss to enable them to learn from
each other. During the inference phase, we propose a consistent greedy
inference algorithm that first adjusts the predicted probability and then
greedily selects non-overlapping spans with maximum probability. Extensive
experiments show that our model achieves new state-of-the-art results on three
benchmark datasets.
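To make the greedy step concrete, the sketch below keeps the highest-probability candidate spans that do not overlap; CDAP's probability-adjustment step is not detailed in the abstract, so the candidate scores here are assumed to be already calibrated.

    def greedy_span_decode(span_scores):
        # span_scores: list of ((start, end), prob) candidates, end inclusive.
        # Greedily keep the highest-probability spans that do not overlap.
        chosen, occupied = [], set()
        for (start, end), p in sorted(span_scores, key=lambda s: -s[1]):
            positions = set(range(start, end + 1))
            if positions & occupied:          # clashes with an already-kept span
                continue
            chosen.append(((start, end), p))
            occupied |= positions
        return sorted(chosen)

    cands = [((0, 2), 0.9), ((1, 3), 0.8), ((4, 5), 0.7), ((5, 6), 0.75)]
    print(greedy_span_decode(cands))          # [((0, 2), 0.9), ((5, 6), 0.75)]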
|
We have demonstrated and modeled a simple and efficient method to transfer
atoms from a first Magneto-Optical Trap (MOT) to a second one. Two independent
setups, with cesium and rubidium atoms respectively, have shown that a high
power and slightly diverging laser beam optimizes the transfer between the two
traps when its frequency is red-detuned from the atomic transition. This
pushing laser extracts a continuous beam of slow and cold atoms out of the
first MOT and also provides a guiding to the second one through the dipolar
force. In order to optimize the transfer efficiency, the dependence of the
atomic flux on the pushing laser parameters (power, detuning, divergence and
waist) is investigated. The atomic flux is found to be proportional to the
first MOT loading rate. Experimentally, the transfer efficiency reaches 70%,
corresponding to a transfer rate up to 2.7x10^8 atoms/s with a final velocity
of 5.5 m/s. We present a simple analysis of the atomic motion inside the
pushing-guiding laser, in good agreement with the experimental data.
|
Most recent semantic segmentation methods adopt a fully-convolutional network
(FCN) with an encoder-decoder architecture. The encoder progressively reduces
the spatial resolution and learns more abstract/semantic visual concepts with
larger receptive fields. Since context modeling is critical for segmentation,
the latest efforts have been focused on increasing the receptive field, through
either dilated/atrous convolutions or inserting attention modules. However, the
encoder-decoder based FCN architecture remains unchanged. In this paper, we aim
to provide an alternative perspective by treating semantic segmentation as a
sequence-to-sequence prediction task. Specifically, we deploy a pure
transformer (i.e., without convolution and resolution reduction) to encode an
image as a sequence of patches. With the global context modeled in every layer
of the transformer, this encoder can be combined with a simple decoder to
provide a powerful segmentation model, termed SEgmentation TRansformer (SETR).
Extensive experiments show that SETR achieves new state of the art on ADE20K
(50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on
Cityscapes. Particularly, we achieve the first position in the highly
competitive ADE20K test server leaderboard on the day of submission.
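As a minimal sketch of this sequence-to-sequence view (sizes and module choices are illustrative and far smaller than SETR's actual configuration): the image is flattened into patch tokens, encoded by a pure transformer, and a simple head is upsampled back to pixel-level classes.

    import torch
    import torch.nn as nn

    class TinySETR(nn.Module):
        def __init__(self, img=64, patch=8, dim=128, classes=21):
            super().__init__()
            self.patch = patch
            self.tokens = (img // patch) ** 2
            self.embed = nn.Linear(3 * patch * patch, dim)   # patch -> token
            self.pos = nn.Parameter(torch.zeros(1, self.tokens, dim))
            layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Conv2d(dim, classes, kernel_size=1)
            self.up = nn.Upsample(scale_factor=patch, mode='bilinear')

        def forward(self, x):                       # x: (B, 3, H, W)
            B, C, H, W = x.shape
            p = self.patch
            x = x.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
            x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, self.tokens, -1)
            z = self.encoder(self.embed(x) + self.pos)
            h = H // p
            z = z.transpose(1, 2).reshape(B, -1, h, h)
            return self.up(self.head(z))            # (B, classes, H, W)

    out = TinySETR()(torch.randn(2, 3, 64, 64))
    print(out.shape)  # torch.Size([2, 21, 64, 64])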
|
We introduce a new categorical framework for studying derived functors, and
in particular for comparing composites of left and right derived functors. Our
central observation is that model categories are the objects of a double
category whose vertical and horizontal arrows are left and right Quillen
functors, respectively, and that passage to derived functors is functorial at
the level of this double category. The theory of conjunctions and mates in
double categories, which generalizes the theory of adjunctions and mates in
2-categories, then gives us canonical ways to compare composites of left and
right derived functors. We give a number of sample applications, most of which
are improvements of existing proofs in the literature.
|
For a given set of points in a metric space and an integer $k$, we seek to
partition the given points into $k$ clusters. For each computed cluster, one
typically defines one point as the center of the cluster. A natural objective
is to minimize the sum of the cluster center's radii, where we assign the
smallest radius $r$ to each center such that each point in the cluster is at a
distance of at most $r$ from the center. The best-known polynomial time
approximation ratio for this problem is $3.389$. In the setting with outliers,
where we are given an integer $m$ and allow up to $m$ points to remain outside
all clusters, the best-known approximation factor is $12.365$.
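Stated formally (in our notation, not taken verbatim from the paper), the outlier version asks for at most $k$ centers $C \subseteq P$ with radii $r_c \ge 0$ minimizing
\[ \sum_{c \in C} r_c \quad \text{subject to} \quad \bigl|\{\, p \in P : d(p,c) > r_c \text{ for all } c \in C \,\}\bigr| \le m, \]
i.e., all but at most $m$ points must lie within the radius of some chosen center.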
In this paper, we improve both approximation ratios to $3+\epsilon$. Our
algorithms are primal-dual algorithms that use fundamentally new ideas to
compute solutions and to guarantee the claimed approximation ratios. For
example, we replace the classical binary search to find the best value of a
Lagrangian multiplier $\lambda$ by a primal-dual routine in which $\lambda$ is
a variable that is raised. Also, we show that for each connected component due
to almost tight dual constraints, we can find one single cluster that covers
all its points and we bound its cost via a new primal-dual analysis. We remark
that our approximation factor of $3+\epsilon$ is a natural limit for the known
approaches in the literature.
Then, we extend our results to the setting of lower bounds. There are
algorithms known for the case that for each point $i$ there is a lower bound
$L_{i}$, stating that we need to assign at least $L_{i}$ clients to $i$ if $i$
is a cluster center. For this setting, there is a $3.83$-approximation if
outliers are not allowed and a $12.365$-approximation with outliers. We
improve both ratios to $3.5 + \epsilon$ and, at the same time, generalize the
type of allowed lower bounds.
|
We examine the SLOCC classification of the (non-normalized) pure states of
four qubits obtained by F. Verstraete et al. The rigorous proofs of their basic
results are provided and necessary corrections implemented. We use Invariant
Theory to solve the problem of equivalence of pure states under SLOCC
transformations of determinant 1 and qubit permutations. As a byproduct, we
produce a new set of generators for the invariants of the Weyl group of type
F_4. We complete the determination of the tensor ranks of 4-qubit pure states
initiated by J.-L. Brylinski. As a result we obtain a simple algorithm for
computing these ranks. We obtain also a very simple classification of pure
states of rank at most 3.
|
The problem of adaptively setting the timeout interval for retransmitting a
packet has been discussed. A layered view of the algorithms has been presented.
It is shown that a timeout algorithm consists essentially of five layers or
procedures which can be independently chosen and modified. A number of timeout
algorithms proposed in the literature have been decomposed into these five
layers.
One of the key layers not discussed in the literature is that of determining
the sample round trip delay for packets that have been transmitted more than
once. It is shown that this layer has a significant impact on the network
performance.
Under repeated packet loss, most timeout algorithms either diverge or
converge to a wrong value. A number of alternative schemes have been presented.
It is argued that divergence is preferable to false convergence, as it is a
feature that helps reduce network traffic congestion.
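As a minimal sketch of the layered view, the class below makes one concrete, illustrative choice per layer: a sample-selection rule that discards ambiguous samples from retransmitted packets, exponentially weighted smoothing, and a multiplicative timeout formula; the constants are assumptions for illustration, not recommendations from the paper.

    class TimeoutEstimator:
        # Layers (one concrete choice per layer, for illustration):
        #   sample selection -> smoothing -> timeout formula
        def __init__(self, alpha=0.125, factor=2.0, initial_rtt=1.0):
            self.alpha = alpha          # smoothing gain
            self.factor = factor        # safety multiplier
            self.srtt = initial_rtt     # smoothed round-trip time (seconds)

        def on_ack(self, sample_rtt, retransmitted):
            # Sample-selection layer: ambiguous samples from packets sent
            # more than once are discarded.
            if retransmitted:
                return
            # Smoothing layer: exponentially weighted moving average.
            self.srtt += self.alpha * (sample_rtt - self.srtt)

        def timeout(self):
            # Timeout-formula layer: scaled smoothed estimate.
            return self.factor * self.srtt

    est = TimeoutEstimator()
    for rtt, rexmt in [(0.8, False), (1.2, False), (2.5, True), (0.9, False)]:
        est.on_ack(rtt, rexmt)
    print(round(est.timeout(), 3))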
|
Normalizing flows attempt to model an arbitrary probability distribution
through a set of invertible mappings. These transformations are required to
achieve a tractable Jacobian determinant that can be used in high-dimensional
scenarios. The first normalizing flow designs used coupling layer mappings
built upon affine transformations. The significant advantage of such models is
their easy-to-compute inverse. Nevertheless, making use of affine
transformations may limit the expressiveness of such models. Recently,
invertible piecewise polynomial functions as a replacement for affine
transformations have attracted attention. However, these methods require
solving a polynomial equation to calculate their inverse. In this paper, we
explore using linear rational splines as a replacement for affine
transformations used in coupling layers. Besides having a straightforward
inverse, inference and generation have similar cost and architecture in this
method. Moreover, simulation results demonstrate the competitiveness of this
approach's performance compared to existing methods.
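For contrast with the spline-based variants discussed above, here is a minimal sketch of an affine coupling layer, the baseline these methods replace: half the coordinates pass through unchanged and parameterize an invertible elementwise affine map of the other half, so both the inverse and the log-Jacobian come in closed form; the small conditioner network is an illustrative stand-in.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)) * 0.1, np.zeros(8)
    W2, b2 = rng.normal(size=(8, 4)) * 0.1, np.zeros(4)

    def conditioner(x1):
        # Small net mapping the untouched half to (log-scale, shift).
        h = np.tanh(x1 @ W1 + b1)
        out = h @ W2 + b2
        return out[:, :2], out[:, 2:]          # log_s, t

    def forward(x):
        x1, x2 = x[:, :2], x[:, 2:]
        log_s, t = conditioner(x1)
        y2 = x2 * np.exp(log_s) + t            # elementwise affine map
        log_det = log_s.sum(axis=1)            # tractable Jacobian determinant
        return np.concatenate([x1, y2], axis=1), log_det

    def inverse(y):
        y1, y2 = y[:, :2], y[:, 2:]
        log_s, t = conditioner(y1)             # y1 == x1, so params match
        return np.concatenate([y1, (y2 - t) * np.exp(-log_s)], axis=1)

    x = rng.normal(size=(5, 4))
    y, _ = forward(x)
    print(np.allclose(inverse(y), x))          # True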
|
We discuss a simple generic model of ``jet quenching'' in which matter
absorption is defined by one parameter. We show that as the absorption grows,
the azimuthal asymmetry v_2 grows as well, reaching a finite limit with a
simple geometric interpretation. It turns out that this limit is still below
the experimental values for 2 < p_t < 6 GeV, according to preliminary data from
the STAR experiment at RHIC. We thus conclude that ``jet quenching'' models
alone cannot account for the observed phenomenon, and speculate about
alternative scenarios.
|
The contribution of the axial triangle anomalous graph to the parity
non-conservation effect in atoms is evaluated. The final answer corresponds to
the emission of an electric photon by a magnetic dipole. The relative
contribution to the parity non-conservation effect in neutral atoms appears to
be negligible but is substantially larger in the case of multicharged ions.
|
Speaker verification has been widely and successfully adopted in many
mission-critical areas for user identification. The training of speaker
verification requires a large amount of data, therefore users usually need to
adopt third-party data ($e.g.$, data from the Internet or third-party data
company). This raises the question of whether adopting untrusted third-party
data can pose a security threat. In this paper, we demonstrate that it is
possible to inject the hidden backdoor for infecting speaker verification
models by poisoning the training data. Specifically, we design a
clustering-based attack scheme where poisoned samples from different clusters
will contain different triggers ($i.e.$, pre-defined utterances), based on our
understanding of verification tasks. The infected models behave normally on
benign samples, while attacker-specified unenrolled triggers will successfully
pass the verification even if the attacker has no information about the
enrolled speaker. We also demonstrate that existing backdoor attacks cannot be
directly adopted in attacking speaker verification. Our approach not only
provides a new perspective for designing novel attacks, but also serves as a
strong baseline for improving the robustness of verification methods. The code
for reproducing the main results is available at
\url{https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification}.
|
We study optimal perfect distinguishability between a unitary and a general
quantum operation. In the 2-dimensional case we provide a simple sufficient and
necessary condition for sequential perfect distinguishability and an analytical
formula for the optimal query time. We extend the sequential condition to the
general d-dimensional case. Meanwhile, we provide an upper bound and a lower
bound for the optimal sequential query time. In the process a new iterative
method is given, the most notable innovation of which is its independence from
auxiliary systems and entanglement. Following this idea, we further obtain an
upper bound and a lower bound on the (entanglement-assisted) q-maximal
fidelities between a unitary and a quantum operation. Thus, by the recursion in
[1], an upper bound and a lower bound for optimal general perfect
discrimination are achieved. Finally, our lower-bound result can be extended to
the case of two arbitrary quantum operations.
|
The class of permutations that avoid the bivincular pattern (231, {1},{1}) is
known to be enumerated by the Fishburn numbers. In this paper, we call them
Fishburn permutations and study their pattern avoidance. For classical patterns
of size 3, we give a complete enumerative picture for regular and
indecomposable Fishburn permutations. For patterns of size 4, we focus on a
Wilf equivalence class of Fishburn permutations that are enumerated by the
Catalan numbers. In addition, we also discuss a class enumerated by the
binomial transform of the Catalan numbers and give conjectures for other
equivalence classes of pattern-avoiding Fishburn permutations.
|
In this work, we study the optical properties of a class of magnetically
charged rotating black hole spacetimes. The black holes in question are assumed
to be immersed in the quintessence field, and subsequently, the resulting black
hole shadows are expected to be modified by the presence of the dark energy. We
investigate the photon region and the black hole shadow, and in particular,
their dependence on the relevant physical conditions such as the state
parameter of the quintessence, the angular momentum, and the magnitude of the
magnetic charge. It is shown that the photon regions sensitively depend on the
horizon structure and possess intricate features. Moreover, from the viewpoint
of a static observer, we explore a few observables, especially those associated
with the distortion of the observed black hole shadows.
|
We present an infinite family of protocols to distill magic states for
$T$-gates that has a low space overhead and uses an asymptotic number of input
magic states to achieve a given target error that is conjectured to be optimal.
The space overhead, defined as the ratio between the physical qubits to the
number of output magic states, is asymptotically constant, while both the
number of input magic states used per output state and the $T$-gate depth of
the circuit scale linearly in the logarithm of the target error $\delta$ (up to
$\log \log 1/\delta$). Unlike other distillation protocols, this protocol
achieves this performance without concatenation and the input magic states are
injected at various steps in the circuit rather than all at the start of the
circuit. The protocol can be modified to distill magic states for other gates
at the third level of the Clifford hierarchy, with the same asymptotic
performance. The protocol relies on the construction of weakly self-dual CSS
codes with many logical qubits and large distance, allowing us to implement
control-SWAPs on multiple qubits. We call this code the "inner code". The
control-SWAPs are then used to measure properties of the magic state and detect
errors, using another code that we call the "outer code". Alternatively, we use
weakly self-dual CSS codes which implement controlled Hadamards for the inner
code, reducing circuit depth. We present several specific small examples of
this protocol.
|
The widespread use of mobile devices propels the development of emerging video
applications such as 3D (3-Dimensional) stereo video and mobile cloud gaming
via web or App, exerting more pressure on current mobile access networks. To
address this challenge, we adopt the crowdsourcing paradigm to offer some
incentive for guiding the movement of recruited crowdsourcing users and
facilitate the optimization of the movement control decision. In this paper,
based on a practical 4G (4th-Generation) network throughput measurement study,
we formulate the movement control decision as a cost-constrained user
recruitment optimization problem. Considering the intractable complexity of
this problem, we focus first on a single crowdsourcing user case and propose a
pseudo-polynomial time complexity optimal solution. Then, we apply this
solution to solve the more general problem of multiple users and propose a
graph-partition-based algorithm. Extensive experiments show that our solutions
can improve the efficiency of real-time D2D communication for mobile videos.
|
For dimensions $N \geq 4$, we consider the Br\'ezis-Nirenberg variational
problem of finding \[ S(\epsilon V) := \inf_{0\not\equiv u\in H^1_0(\Omega)}
\frac{\int_\Omega |\nabla u|^2 \, dx +\epsilon \int_\Omega V\, |u|^2 \,
dx}{\left(\int_\Omega |u|^q \, dx \right)^{2/q}}, \] where $q=\frac{2N}{N-2}$
is the critical Sobolev exponent and $\Omega \subset \mathbb{R}^N$ is a bounded
open set. We compute the asymptotics of $S(0) - S(\epsilon V)$ to leading order
as $\epsilon \to 0+$. We give a precise description of the blow-up profile of
(almost) minimizing sequences and, in particular, we characterize the
concentration points as being extrema of a quotient involving the Robin
function. This complements the results from our recent paper in the case $N =
3$.
|
Efficient automated print defect mapping is valuable to the printing industry
since such defects directly influence customer-perceived printer quality and
manually mapping them is cost-ineffective. Conventional methods consist of
complicated and hand-crafted feature engineering techniques, usually targeting
only one type of defect. In this paper, we propose the first end-to-end
framework to map print defects at pixel level, adopting an approach based on
semantic segmentation. Our framework uses Convolutional Neural Networks,
specifically DeepLab-v3+, and achieves promising results in the identification
of defects in printed images. We use synthetic training data by simulating two
types of print defects and a print-scan effect with image processing and
computer graphic techniques. Compared with conventional methods, our framework
is versatile, allowing two inference strategies, one being near real-time and
providing coarser results, and the other focusing on offline processing with
more fine-grained detection. Our model is evaluated on a dataset of real
printed images.
|
We investigate the correlations between optical and radio isophotal position
angles for 14302 SDSS galaxies with $r$ magnitudes brighter than 18 and which
have been associated with extended FIRST radio sources. We identify two
separate populations of galaxies using the colour, concentration and their
principal components. Surprisingly strong statistical alignments are found:
late-type galaxies are overwhelmingly biased towards position angle
differences of $0^{\circ}$ and early-type galaxies towards $90^{\circ}$. The
late-type alignment can be easily understood in terms of the standard picture
in which the radio emission is intimately related to areas of recent
star-formation. In early-type galaxies the radio emission is expected to be
driven by accretion on to a nuclear black hole. We argue that the observed
correlation of the radio axis with the minor axis of the large-scale stellar
distribution gives a fundamental insight into the structure of elliptical
galaxies, for example, whether or not the nuclear kinematics are decoupled from
the rest of the galaxy. Our results imply that the galaxies are oblate
spheroids with their radio emission aligned with the minor axis. Remarkably the
strength of the correlation of the radio major axis with the optical minor axis
depends on radio loudness. Those objects with a low ratio of FIRST radio flux
density to total stellar light show a strong minor axis correlation while the
stronger radio sources do not. This may reflect different formation histories
for the different objects and we suggest we may be seeing the different
behaviour of rotationally supported and non-rotationally supported ellipticals.
|
The complex band structure of an isolated polyethylene chain is calculated
within Density Functional Theory (DFT). A plane wave basis and ultrasoft
pseudopotentials are used. The results are compared with those obtained via a
local basis set. We obtain a gap between the highest occupied molecular orbital
(HOMO) and the antibonding unoccupied molecular orbitals of 9.3 eV and a
non-resonant tunneling $\beta$ parameter of 0.9 per monomer, in reasonable
agreement with experiment and with results obtained via local basis.
Polyethylene is a negative electron affinity material and the actual gap should
be the energy of the HOMO with respect to the vacuum level (in DFT
approximation only about 5.14 eV). The Bloch states at imaginary k are mainly
free-electron-like parabolic bands which are missing in the local basis. We
present also the complex bands of the bulk polyethylene in order to estimate
the effects of the chain-chain interactions on the complex band structure. The
relevance of these results for the tunneling conduction of n-alkane chains is
discussed.
|
In many practical applications of AI, an AI model is used as a decision aid
for human users. The AI provides advice that a human (sometimes) incorporates
into their decision-making process. The AI advice is often presented with some
measure of "confidence" that the human can use to calibrate how much they
depend on or trust the advice. In this paper, we present an initial exploration
that suggests that presenting AI models as more confident than they actually
are, even when the original AI is well-calibrated, can improve human-AI
performance
(measured as the accuracy and confidence of the human's final prediction after
seeing the AI advice). We first train a model to predict human incorporation of
AI advice using data from thousands of human-AI interactions. This enables us
to explicitly estimate how to transform the AI's prediction confidence, making
the AI uncalibrated, in order to improve the final human prediction. We
empirically validate our results across four different tasks--dealing with
images, text and tabular data--involving hundreds of human participants. We
further support our findings with simulation analysis. Our findings suggest the
importance of jointly optimizing the human-AI system as opposed to the standard
paradigm of optimizing the AI model alone.
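As a minimal simulation sketch of the core idea (our illustration, not the paper's fitted model): given calibrated AI confidence and a toy model of how a human adopts advice as a function of the displayed confidence, one can search over monotone exaggerations of the displayed confidence for the one that maximizes final accuracy.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    ai_conf = rng.uniform(0.5, 1.0, n)          # calibrated: P(AI is correct)
    ai_correct = rng.random(n) < ai_conf
    human_correct = rng.random(n) < 0.65        # human alone: 65% accurate

    def final_accuracy(gamma):
        shown = ai_conf ** gamma                # gamma < 1 exaggerates confidence
        # Toy reliance model: adoption probability rises with shown confidence.
        adopt = rng.random(n) < (2 * shown - 1)
        return np.where(adopt, ai_correct, human_correct).mean()

    for gamma in [1.0, 0.7, 0.4]:               # 1.0 = honest confidence
        print(gamma, round(final_accuracy(gamma), 3))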
|
We propose the application of laser cooling to a number of transition-metal
atoms, allowing numerous bosonic and fermionic atomic gases to be cooled to
ultra-low temperatures. The non-zero electron orbital angular momentum of these
atoms implies that strongly atom-state-dependent light-atom interactions occur
even for light that is far-detuned from atomic transitions. At the same time,
many transition-metal atoms have small magnetic dipole moments in their
low-energy states, reducing the rate of dipolar-relaxation collisions.
Altogether, these features provide compelling opportunities for future
ultracold-atom research. Focusing on the case of atomic titanium, we identify
the metastable $a ^5F_5$ state as supporting a $J \rightarrow J+1$ optical
transition with properties similar to the D2 transition of alkali atoms, and
suited for laser cooling. The high total angular momentum and electron spin of
this state suppresses leakage out of the nearly closed optical transition
to a branching ratio estimated below $\sim 10^{-5}$. Following the pattern
exemplified by titanium, we identify optical transitions that are suited for
laser cooling of elements in the scandium group (Sc, Y, La), the titanium group
(Ti, Zr), the vanadium group (V, Nb), the manganese group (Mn, Tc), and the
iron group (Fe, Ru).
|
The structural and electronic properties of fluorine (F)-doped BN nanotubes
(BNNTs) are studied using density functional methods. Our results indicate that
F atoms prefer to substitute N atoms, resulting in substantial changes of BN
layers. However, F substitutional doping results in no shallow impurity states.
The adsorption of F atoms on B sites is more stable than that on N sites. BNNTs
with adsorbed F atoms are p-type semiconductors, suggesting the electronic
conduction in F-doped multiwalled BNNTs with large conductivity observed
experimentally might be of p-type due to the adsorbed F atoms, but not n-type
as supposed before.
|
The high tracking overhead, the amount of up-front effort required to select
trace points, and the lack of an effective data analysis model are significant
barriers to the adoption of intra-component tracking for fault diagnosis today.
This paper introduces a novel method for fault diagnosis by
combining adaptive function level dynamic tracking, target fault injection, and
graph convolutional network. In order to implement this method, we introduce
techniques for (i) selecting function level trace points, (ii) constructing
approximate function call tree of program when using adaptive tracking, and
(iii) constructing graph convolutional network with fault injection campaign.
We evaluate our method using a web service benchmark composed of Redis, Nginx,
Httpd, and SQLite. The experimental results show that this method outperforms
log-based, full-tracking, and Gaussian-influence methods in the accuracy of
fault diagnosis, overhead, and performance impact on the diagnosis target.
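As a minimal numpy sketch of the graph-convolution step such a model rests on (everything here, from the toy call graph to the feature sizes and weights, is an illustrative assumption): node features derived from function-level traces are propagated over the normalized adjacency of an approximate function call graph; training labels would come from the fault-injection campaign.

    import numpy as np

    def gcn_layer(A, H, W):
        # One graph-convolution layer: symmetrically normalized adjacency
        # with self-loops, then feature propagation and ReLU.
        A_hat = A + np.eye(A.shape[0])
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

    # Toy call graph of 5 functions; 3 trace-derived features per node
    # (e.g., call count, mean latency, error flag).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 0, 1, 0],
                  [1, 0, 0, 1, 1],
                  [0, 1, 1, 0, 0],
                  [0, 0, 1, 0, 0]], dtype=float)
    rng = np.random.default_rng(0)
    H = rng.normal(size=(5, 3))
    W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 2))
    out = gcn_layer(A, gcn_layer(A, H, W1), W2)   # per-node fault scores
    print(out.shape)                               # (5, 2)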
|
Gradient boosted decision trees (GBDT) are the leading algorithm for many
commercial and academic data applications. We give a deep analysis of this
algorithm, especially the histogram technique, which is a basis for the
regularized distribution with compact support. We present three new
modifications. 1) A shared-memory technique to reduce memory usage: in many
cases, it needs only the data source itself and no extra memory. 2) Implicit
merging for the "merge overflow problem". "Merge overflow" means that merging
some small datasets produces datasets too huge to be handled. By implicit
merging, we need only the original small datasets to train the GBDT model. 3)
An adaptive resize algorithm for histogram bins to improve accuracy.
Experiments on two large Kaggle competitions verified our methods: they use
much less memory than LightGBM and achieve higher accuracy. We have implemented
these algorithms in an open-source package, LiteMORT. The source code is
available at
https://github.com/closest-git/LiteMORT
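As a minimal sketch of the histogram technique itself (the baseline, not LiteMORT's modifications): gradient and hessian statistics are accumulated into a fixed number of feature bins, and the split gain is evaluated per bin boundary rather than per distinct feature value; the gain formula below is the usual second-order form with illustrative constants.

    import numpy as np

    def best_histogram_split(feature, grad, hess, n_bins=16, lam=1.0):
        # Bin feature values, accumulate gradient/hessian sums per bin.
        edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.searchsorted(edges, feature)
        G = np.bincount(bins, weights=grad, minlength=n_bins)
        Hs = np.bincount(bins, weights=hess, minlength=n_bins)
        GL = np.cumsum(G)[:-1]; HL = np.cumsum(Hs)[:-1]    # left prefix sums
        GR = G.sum() - GL;      HR = Hs.sum() - HL         # right remainders
        gain = (GL**2 / (HL + lam) + GR**2 / (HR + lam)
                - G.sum()**2 / (Hs.sum() + lam))
        b = int(np.argmax(gain))
        return b, gain[b]       # best bin boundary and its split gain

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    g = np.where(x > 0.3, -1.0, 1.0) + rng.normal(scale=0.1, size=1000)
    h = np.ones(1000)
    print(best_histogram_split(x, g, h))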
|
This essay considers ways that recent uses of computers in mathematics
challenge contemporary views on the nature of mathematical understanding. It
also puts these challenges in a historical perspective and offers speculation
as to a possible resolution.
|
Analysis of all-sky Planck submillimetre observations and the IRAS 100um data
has led to the detection of a population of Galactic cold clumps. The clumps
can be used to study star formation and dust properties in a wide range of
Galactic environments. Our aim is to measure dust spectral energy distribution
(SED) variations as a function of the spatial scale and the wavelength. We
examine the SEDs at large scales using IRAS, Planck, and Herschel data. At
smaller scales, we compare JCMT/SCUBA-2 850um maps with Herschel data that
are filtered using the SCUBA-2 pipeline. Clumps are extracted using the
Fellwalker method and their spectra are modelled as modified blackbody
functions. According to IRAS and Planck data, most fields have dust colour
temperatures T_C ~ 14-18K and opacity spectral index values of beta=1.5-1.9.
The clumps/cores identified in SCUBA-2 maps have T~ 13K and similar beta
values. There are some indications of the dust emission spectrum becoming
flatter at wavelengths longer than 500um. In fits involving Planck data, the
significance is limited by the uncertainty of the corrections for CO line
contamination. The fits to the SPIRE data give a median beta value slightly
above 1.8. In the joint SPIRE and SCUBA-2 850um fits the value decreases to
beta ~1.6. Most of the observed T-beta anticorrelation can be explained by
noise. The typical submillimetre opacity spectral index beta of cold clumps is
found to be ~1.7. This is above the values of diffuse clouds but lower than in
some previous studies of dense clumps. There is only tentative evidence of
T-beta anticorrelation and beta decreasing at millimetre wavelengths.
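For reference, the modified blackbody model mentioned above takes, in standard notation (our addition, not quoted from the source), the form
\[ S_\nu \propto B_\nu(T_C)\, \nu^{\beta}, \]
where $B_\nu(T_C)$ is the Planck function at the colour temperature $T_C$ and $\beta$ is the opacity spectral index estimated by the fits.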
|
The objective of this paper is to predict (A) whether a sentence in a written
text expresses an emotion, (B) the mode(s) in which it is expressed, (C)
whether it is basic or complex, and (D) its emotional category.
One of our major contributions, through a dataset and a model, is to
integrate the fact that an emotion can be expressed in different modes: from a
direct mode, essentially lexicalized, to a more indirect mode, where emotions
will only be suggested, a mode that NLP approaches generally do not take into
account.
Another originality is that the scope is on written texts, as opposed to usual
work focusing on conversational (often multi-modal) data. In this context,
modes of expression are seen as a factor towards the automatic analysis of
complexity in texts.
Experiments on French texts show acceptable results relative to the human
annotators' agreement, and results that outperform a large language model used
with in-context learning (i.e., no fine-tuning).
|
Motivated by the observation that the Higgs quartic coupling runs to zero at
an intermediate scale, we propose a new framework for models of split
supersymmetry, in which gauginos acquire intermediate scale Dirac masses of
$\sim 10^{8-11}$ GeV. Scalar masses arise from one-loop finite contributions as
well as direct gravity-mediated contributions. Like split supersymmetry, one
Higgs doublet is fine-tuned to be light. The scale at which the Dirac gauginos
are introduced to make the Higgs quartic zero is the same as is necessary for
gauge coupling unification. Thus, gauge coupling unification persists
(nontrivially, due to adjoint multiplets), though with a somewhat higher
unification scale $\gtrsim 10^{17}$ GeV. The $\mu$-term is naturally at the
weak scale, and provides an opportunity for experimental verification. We
present two manifestations of Split Dirac Supersymmetry. In the "Pure Dirac"
model, the lightest Higgsino must decay through R-parity violating couplings,
leading to an array of interesting signals in colliders. In the "Hypercharge
Impure" model, the bino acquires a Majorana mass that is one-loop suppressed
compared with the Dirac gluino and wino. This leads to weak scale Higgsino dark
matter whose overall mass scale, as well as the mass splitting between the
neutral components, is naturally generated from the same UV dynamics. We
outline the challenges to discovering pseudo-Dirac Higgsino dark matter in
collider and dark matter detection experiments.
|