We study a variant of the subgraph isomorphism problem that is of high
interest to the quantum computing community. Our results give an algorithm to
perform pattern matching in quantum circuits for many patterns simultaneously,
independently of the number of patterns. After a pre-computation step in which
the patterns are compiled into a decision tree, the running time is linear in
the size of the input quantum circuit.
More generally, we consider connected port graphs, in which every edge $e$
incident to a vertex $v$ has a label $L_v(e)$ that is unique among the edges at $v$. Jiang and Bunke showed that
the subgraph isomorphism problem $H \subseteq G$ for such graphs can be solved
in time $O(|V(G)| \cdot |V(H)|)$. We show that if in addition the graphs are
directed acyclic, then the subgraph isomorphism problem can be solved for an
unbounded number of patterns simultaneously. We enumerate all $m$ pattern
matches in time $O(P)^{P+3/2} \cdot |V(G)| + O(m)$, where $P$ is the number of
vertices of the largest pattern. In the case of quantum circuits, we can
express the bound obtained in terms of the maximum number of qubits $N$ and
depth $\delta$ of the patterns: $O(N)^{N + 1/2} \cdot \delta \log \delta \cdot
|V(G)| + O(m)$.
|
The study of concurrent persistent programs has seen a surge of activity in
recent years due to the introduction of non-volatile random access memories
(NVRAM), yielding many models and correctness notions that are difficult to
compare. In this paper, we survey existing correctness properties for this
setting, placing them into the same context and comparing them. We present a
hierarchy of these persistence properties based on the generality of the
histories they deem correct, and show how this hierarchy shifts based on
different model assumptions.
|
We have reported nanometer-scale three-dimensional studies of brain networks
of schizophrenia cases and found that their neurites are thin and tortuous
compared to healthy controls. This suggests that connections between distal
neurons are suppressed in microcircuits of schizophrenia cases. In this study,
we applied these biological findings to the design of schizophrenia-mimicking
artificial neural network to simulate the observed connection alteration in the
disorder. Neural networks having a "schizophrenia connection layer" in place of
a fully connected layer were subjected to image classification tasks using the
MNIST and CIFAR-10 datasets. The results revealed that the schizophrenia
connection layer is tolerant to overfitting and outperforms a fully connected
layer. The outperformance was observed only for networks using band matrices as
weight windows, indicating that the shape of the weight matrix is relevant to
the network performance. A schizophrenia convolution layer was also tested
using the VGG configuration, showing that 60% of the kernel weights of the last
three convolution layers can be eliminated without loss of accuracy. The
schizophrenia layers can be used instead of conventional layers without any
change in the network configuration and training procedures; hence, neural
networks can easily take advantage of these layers. The results of this study
suggest that the connection alteration found in schizophrenia is not a burden
to the brain, but has functional roles in brain performance.
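The "schizophrenia connection layer" described above amounts to restricting a dense weight matrix to a diagonal band. The following is a minimal NumPy sketch of that idea; the class name, the row-rescaling rule, and the `half_width` parameter are our own illustrative choices, not the paper's code.

```python
import numpy as np

def band_mask(n_out, n_in, half_width):
    """Binary band mask: entry (i, j) is 1 iff the (rescaled) row index i
    lies within half_width of column j, giving a diagonal band."""
    r = n_in / n_out  # rescale rows so the band spans rectangular matrices
    rows = np.arange(n_out)[:, None] * r
    cols = np.arange(n_in)[None, :]
    return (np.abs(rows - cols) <= half_width).astype(np.float32)

class BandLinear:
    """Dense layer whose weights are confined to a diagonal band, standing in
    for the 'schizophrenia connection layer' described above (sketch only)."""
    def __init__(self, n_in, n_out, half_width, rng):
        self.mask = band_mask(n_out, n_in, half_width)
        # Connections outside the band are eliminated once and for all.
        self.W = rng.standard_normal((n_out, n_in)).astype(np.float32) * self.mask
        self.b = np.zeros(n_out, dtype=np.float32)

    def __call__(self, x):
        return self.W @ x + self.b
```

In training, the mask would be re-applied after each gradient update so that pruned connections stay at zero; the layer is otherwise a drop-in replacement for a fully connected layer, matching the usage described above.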
|
Quantum optimal control problems are typically solved by gradient-based
algorithms such as GRAPE, whose memory requirements grow exponentially with
the number of qubits and linearly with the number of time steps. Employing QOC
for discrete lattices reveals that these memory requirements are a barrier to
simulating large models or
long time spans. We employ a nonstandard differentiable programming approach
that significantly reduces the memory requirements at the cost of a reasonable
amount of recomputation. The approach exploits invertibility properties of the
unitary matrices to reverse the computation during back-propagation. We utilize
QOC software written in the differentiable programming framework JAX that
implements this approach, and demonstrate its effectiveness for lattice gauge
theory.
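The invertibility trick can be illustrated without any QOC machinery: because each propagator is unitary, intermediate states need not be stored during the forward pass; they can be regenerated in reverse via $U^\dagger$ during back-propagation. A hedged NumPy sketch follows (function names and the toy state size are our own, not the software described above).

```python
import numpy as np

def random_unitary(n, rng):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

def evolve(psi0, unitaries):
    """Forward evolution psi_{t+1} = U_t psi_t, storing only the final state."""
    psi = psi0
    for U in unitaries:
        psi = U @ psi
    return psi

def reverse_states(psi_final, unitaries):
    """Recover all intermediate states backwards via psi_t = U_t^dag psi_{t+1},
    trading recomputation for memory instead of storing every state."""
    states = [psi_final]
    for U in reversed(unitaries):
        states.append(U.conj().T @ states[-1])
    return states[::-1]  # states[0] recovers psi0 up to round-off
```

Peak memory for the forward pass is one state vector rather than one per time step, at the cost of one extra matrix-vector product per step in the backward pass.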
|
Understanding information exchange and aggregation on networks is a central
problem in theoretical economics, probability and statistics. We study a
standard model of economic agents on the nodes of a social network graph who
learn a binary "state of the world" S, from initial signals, by repeatedly
observing each other's best guesses.
Asymptotic learning is said to occur on a family of graphs G_n = (V_n, E_n),
with |V_n| tending to infinity, if with probability tending to 1 as n tends to
infinity all agents in G_n eventually estimate S correctly. We identify
sufficient conditions for asymptotic learning and construct examples where
learning does not occur when the conditions do not hold.
|
A Riemannian metric on a compact 4-manifold is said to be Bach-flat if it is
a critical point for the $L^2$-norm of the Weyl curvature. When the Riemannian
4-manifold in question is a Kaehler surface, we provide a rough classification
of solutions, followed by detailed results regarding each case in the
classification. The most mysterious case prominently involves 3-dimensional CR
manifolds.
|
Sparse non-Hermitian random matrices arise in the study of disordered
physical systems with asymmetric local interactions, and have applications
ranging from neural networks to ecosystem dynamics. The spectral
characteristics of these matrices provide crucial information on system
stability and susceptibility; however, their study is greatly complicated by
the twin challenges of a lack of symmetry and a sparse interaction structure.
In this review we provide a concise and systematic introduction to the main
tools and results in this field. We show how the spectra of sparse
non-Hermitian matrices can be computed via an analogy with infinite dimensional
operators obeying certain recursion relations. With reference to three
illustrative examples -- adjacency matrices of regular oriented graphs,
adjacency matrices of oriented Erd\H{o}s-R\'{e}nyi graphs, and adjacency
matrices of weighted oriented Erd\H{o}s-R\'{e}nyi graphs -- we demonstrate the
use of these methods to obtain both analytic and numerical results for the
spectrum, the spectral distribution, the location of outlier eigenvalues, and
the statistical properties of eigenvectors.
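As a concrete instance of the second example above, one can sample the adjacency matrix of an oriented Erdős–Rényi graph and inspect its complex spectrum numerically. A small NumPy sketch (the $O(n^2)$ sampling loop is illustrative, not optimized):

```python
import numpy as np

def oriented_er_adjacency(n, c, rng):
    """Adjacency matrix of an oriented Erdos-Renyi graph with mean degree c:
    each unordered pair {i, j} carries an edge with probability c/n, oriented
    in a uniformly random direction -- a sparse, non-Hermitian matrix."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < c / n:
                if rng.random() < 0.5:
                    A[i, j] = 1.0  # edge i -> j
                else:
                    A[j, i] = 1.0  # edge j -> i
    return A

# Numerical spectrum: complex eigenvalues fill a region of the plane.
eigs = np.linalg.eigvals(oriented_er_adjacency(200, 4, np.random.default_rng(0)))
```

Direct diagonalization like this serves as a numerical check on the analytic results obtained from the recursion-relation methods reviewed above.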
|
We present several new phenomena about almost sure convergence on homogeneous
chaoses that include Gaussian Wiener chaos and homogeneous sums in independent
random variables. Concretely, we establish the fact that almost sure
convergence on a fixed finite sum of chaoses forces the almost sure convergence
of each chaotic component. Our strategy uses "{\it extra randomness}" and a
simple conditioning argument. These ideas are close to the spirit of
\emph{Stein's method of exchangeable pairs}. Some natural questions are left
open in this note.
|
In this paper, we prove a sharp ill-posedness result for the incompressible
non-resistive MHD equations. In any dimension $d\ge 2$, we show the
ill-posedness of the non-resistive MHD equations in
$H^{\frac{d}{2}-1}(\mathbb{R}^d)\times H^{\frac{d}{2}}(\mathbb{R}^d)$, which is
sharp in view of the results of the local well-posedness in
$H^{s-1}(\mathbb{R}^d)\times H^{s}(\mathbb{R}^d)(s>\frac{d}{2})$ established by
Fefferman et al. (Arch. Ration. Mech. Anal., \textbf{223} (2), 677-691, 2017).
Furthermore, we generalize the ill-posedness results from
$H^{\frac{d}{2}-1}(\mathbb{R}^d)\times H^{\frac{d}{2}}(\mathbb{R}^d)$ to Besov
spaces $B^{\frac{d}{p}-1}_{p, q}(\mathbb{R}^d)\times B^{\frac{d}{p}}_{p,
q}(\mathbb{R}^d)$ and $\dot B^{\frac{d}{p}-1}_{p, q}(\mathbb{R}^d)\times \dot
B^{\frac{d}{p}}_{p, q}(\mathbb{R}^d)$ for $1\le p\le\infty, q>1$. Different
from the ill-posedness mechanism of the incompressible Navier-Stokes equations
in $\dot B^{-1}_{\infty, q}$ \cite{B,W}, we construct initial data such that
the paraproduct terms (low-high frequency interaction) of the nonlinear term
make the main contribution to the norm inflation of the magnetic field.
|
We study the effect of coupling a spin bath environment to a system which, at
low energies, can be modeled as a quantum Ising system. A field theoretic
formalism incorporating both thermal and quantum fluctuations is developed to
derive results for the thermodynamic properties and response functions, both
for a toy model and for the LiHoF$_4$ system, in which spin-8 electronic spins
couple to a spin-$7/2$ nuclear spin bath: the phase transition then occurs in a
system of electronuclear degrees of freedom, coupled by long-range dipolar
interactions. The quantum Ising phase transition still exists, and one
hybridized mode of the Ising and bath spins always goes soft at the transition.
|
The superconducting nanowire single-photon detector (SNSPD) has emerged as a
potential candidate in multiple fields requiring sensitive and fast
photodetection. While low-temperature superconducting nanowire detectors are
mature, with commercial solutions available, other material options with higher
transition temperature and faster response are currently being explored.
Towards this goal, we develop a generalized numerical model that incorporates
the thermodynamic properties of the superconducting material and identifies the
minimum resolvable photon count for a given bias and device parameters. A phase
diagram of detection and latching phases with the minimum number of photons as
a function of biasing current and biasing temperature for each material system
is presented. We show using the developed model that while low temperature
superconducting (LTS) nanowires are more sensitive to the incident photon at
different wavelengths, the ultimate limit of a single photon can be achieved
using high temperature superconducting (HTS) material such as
YBa2Cu3O7-{\delta}, albeit at stringent biasing conditions. Conversely,
response times three orders of magnitude faster can be achieved in select HTS
materials, making them appealing for several practical applications.
|
In this paper, we analyze the light variations of KIC 10975348 using
photometric data delivered from $Kepler$ mission. This star is exceptionally
faint ($K_{p}$ = 18.6 mag), compared to most well-studied $\delta$ Scuti stars.
The Fourier analysis of the short cadence data (i.e. Q14, Q15 and Q16, spanning
220 days) reveals the variations are dominated by the strongest mode with
frequency F0 = 10.231899 $\rm{d^{-1}}$, which is compatible with that obtained
from $RATS-Kepler$. The other two independent modes with F1 (= 13.4988
$\rm{d^{-1}}$) and F2 (= 19.0002 $\rm{d^{-1}}$) are newly detected and have
amplitudes two orders of magnitude smaller than F0. For the first time, this
star is identified as a high-amplitude $\delta$ Scuti (HADS) star, with an
amplitude of about 0.7 mag; the low ratio F0/F1 = 0.758 suggests
it might be a metal-rich variable star. The frequency F2 may be a third
overtone mode, suggesting this target might be a new radial triple-mode HADS
star. We perform $O - C$ analysis using 1018 newly determined times of maximum
light and derive an ephemeris formula: $T_{max}$ =
2456170.241912(0)+0.097734(1) $\times$ $E$. The $O - C$ diagram shows that the
pulsation period of KIC 10975348 shows no obvious change, in contrast to the
majority of HADS stars. This may be due to the currently short time span of the
observations. To verify possible period variations, regular observations from
space over a longer time span are needed.
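Given the quoted linear ephemeris, the $O - C$ residual for a measured time of maximum light is straightforward to compute. A minimal sketch using the fitted values above (the helper name is ours):

```python
# Linear ephemeris quoted above: T_max = T0 + P * E  (BJD, days).
T0 = 2456170.241912
P = 0.097734

def o_minus_c(t_obs):
    """Return (O - C, E): the observed time of maximum minus the time the
    linear ephemeris predicts, for the nearest integer cycle count E."""
    E = round((t_obs - T0) / P)
    return t_obs - (T0 + E * P), E
```

Applying this to each of the 1018 times of maximum and plotting the residuals against $E$ yields the $O - C$ diagram discussed above; a flat diagram indicates no detectable period change.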
|
We consider the problem of batch multi-task reinforcement learning with
observed context descriptors, motivated by its application to personalized
medical treatment. In particular, we study two general classes of learning
algorithms: direct policy learning (DPL), an imitation-learning based approach
which learns from expert trajectories, and model-based learning. First, we
derive sample complexity bounds for DPL, and then show that model-based
learning from expert actions can, even with a finite model class, be
impossible. After relaxing the conditions under which the model-based approach
is expected to learn by allowing for greater coverage of state-action space, we
provide sample complexity bounds for model-based learning with finite model
classes, showing that there exist model classes with sample complexity
exponential in their statistical complexity. We then derive a sample complexity
upper bound for model-based learning based on a measure of concentration of the
data distribution. Our results give formal justification for imitation learning
over model-based learning in this setting.
|
Human impedance parameters play an integral role in the dynamics of strength
amplification exoskeletons. Many methods are used to estimate the stiffness of
human muscles, but few are used to improve the performance of strength
amplification controllers for these devices. We propose a compliance shaping
amplification controller incorporating an accurate online human stiffness
estimation from surface electromyography (sEMG) sensors and stretch sensors
connected to the forearm and upper arm of the human. These sensor values along
with exoskeleton position and velocity are used to train a random forest
regression model that accurately predicts a person's stiffness despite varying
movement, relaxation, and muscle co-contraction. Our model's accuracy is
verified using experimental test data and the model is implemented into the
compliance shaping controller. Ultimately we show that the online estimation of
stiffness can improve the bandwidth and amplification of the controller while
remaining robustly stable.
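The regression step can be sketched with scikit-learn on synthetic sensor data. The feature layout and the toy stiffness law below are illustrative assumptions, not the authors' dataset: the four columns stand in for sEMG amplitude, stretch-sensor reading, and exoskeleton position and velocity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: [sEMG, stretch, position, velocity], all normalized.
X = rng.uniform(size=(n, 4))
# Toy ground truth: stiffness rising with muscle activation plus sensor noise.
k_true = 50 + 200 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], k_true[:400])        # train on the first 400 samples
r2 = model.score(X[400:], k_true[400:])  # held-out R^2 score
```

In the control loop described above, `model.predict` would be called online on each new sensor sample and the resulting stiffness estimate fed to the compliance-shaping amplification controller.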
|
We apply the Dirac bracket quantization to open strings attached to branes in
the presence of background antisymmetric field and recover an inherent
noncommutativity in the internal coordinates of the brane.
|
We prove that many spaces of positive scalar curvature metrics have the
homotopy type of infinite loop spaces. Our result in particular applies to the
path component of the round metric inside $\mathcal{R}^+ (S^d)$ if $d \geq 6$.
To achieve that goal, we study the cobordism category of manifolds with
positive scalar curvature. Under suitable connectivity conditions, we can
identify the homotopy fibre of the forgetful map from the psc cobordism
category to the ordinary cobordism category with a delooping of spaces of psc
metrics. This uses a version of Quillen's Theorem B and instances of the
Gromov--Lawson surgery theorem.
We extend some of the surgery arguments by Galatius and the second named
author to the psc setting to pass between different connectivity conditions.
Segal's theory of $\Gamma$-spaces is then used to construct the claimed
infinite loop space structures.
The cobordism category viewpoint also illuminates the action of
diffeomorphism groups on spaces of psc metrics. We show that under mild
hypotheses on the manifold, the action map from the diffeomorphism group to the
homotopy automorphisms of the spaces of psc metrics factors through the
Madsen--Tillmann spectrum. This implies a strong rigidity theorem for the
action map when the manifold has trivial rational Pontrjagin classes.
A delooped version of the Atiyah--Singer index theorem proved by the first
named author is used to moreover show that the secondary index invariant to
real $K$-theory is an infinite loop map. These ideas also give a new proof of
the main result of our previous work with Botvinnik.
|
This paper shows several connections between data structure problems and
cryptography against preprocessing attacks. Our results span data structure
upper bounds, cryptographic applications, and data structure lower bounds, as
summarized next.
First, we apply Fiat--Naor inversion, a technique with cryptographic origins,
to obtain a data structure upper bound. In particular, our technique yields a
suite of algorithms with space $S$ and (online) time $T$ for a preprocessing
version of the $N$-input 3SUM problem where $S^3\cdot T = \widetilde{O}(N^6)$.
This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is
no data structure that solves this problem for $S=N^{2-\delta}$ and $T =
N^{1-\delta}$ for any constant $\delta>0$.
Secondly, we show equivalence between lower bounds for a broad class of
(static) data structure problems and one-way functions in the random oracle
model that resist a very strong form of preprocessing attack. Concretely, given
a random function $F: [N] \to [N]$ (accessed as an oracle) we show how to
compile it into a function $G^F: [N^2] \to [N^2]$ which resists $S$-bit
preprocessing attacks that run in query time $T$ where
$ST=O(N^{2-\varepsilon})$ (assuming a corresponding data structure lower bound
on 3SUM). In contrast, a classical result of Hellman tells us that $F$ itself
can be more easily inverted, say with $N^{2/3}$-bit preprocessing in $N^{2/3}$
time. We also show that much stronger lower bounds follow from the hardness of
kSUM. Our results can be equivalently interpreted as security against
adversaries that are very non-uniform, or have large auxiliary input, or as
security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric
problems which match the best known lower bounds for static data structure
problems.
|
We present an algorithm for the identification of transient noise artifacts
(glitches) in cross-correlation searches for long O(10s) gravitational-wave
transients. The algorithm utilizes the auto-power in each detector as a
discriminator between well-behaved Gaussian noise (possibly including a
gravitational-wave signal) and glitches. We test the algorithm with both Monte
Carlo noise and time-shifted data from the LIGO S5 science run and find that it
is effective at removing a significant fraction of glitches while keeping the
vast majority (99.6%) of the data. Using an accretion disk instability signal
model, we estimate that the algorithm falsely rejects realistic signals at a
rate of less than 10^-5%, and less than 3% even for exceptionally
loud signals. We conclude that the algorithm is a safe and effective method for
cleaning the cross-correlation data used in searches for long
gravitational-wave transients.
|
The persistent $a_\mu \equiv (g-2)/2$ anomaly in the muon sector could be due
to new physics visible in the electron sector through a sub-ppb measurement of
the anomalous magnetic moment of the electron $a_e$. Driven by recent results
on the electron mass (S. Sturm et al., Nature 506 (2014) 467), we reconsider
the sources of uncertainties that limit our knowledge of $a_e$ including
current advances in atom interferometry. We demonstrate that it is possible to
attain the level of precision needed to test $a_\mu$ in the naive scaling
hypothesis on a timescale similar to next generation $g-2$ muon experiments at
Fermilab and J-PARC. To achieve this level of precision, knowledge
of the quotient $h/M$, i.e. the ratio between the Planck constant and the mass
of the atom employed in the interferometer, will play a crucial role. We
identify the most favorable isotopes to achieve an overall relative precision
below $10^{-10}$.
|
Consider the following broadcasting process run on a connected graph
$G=(V,E)$. Suppose that $k \ge 2$ agents start on vertices selected from $V$
uniformly and independently at random. One of the agents has a message that she
wants to communicate to the other agents. All agents perform independent random
walks on $G$, with the message being passed when an agent that knows the
message meets an agent that does not know the message. The broadcasting time
$\xi(G,k)$ is the time it takes to spread the message to all agents.
Our ultimate goal is to gain a better understanding of the broadcasting
process run on real-world networks of roads of large cities that might shed
some light on the behaviour of future autonomous and connected vehicles. Due to
the complexity of road networks, such phenomena have to be studied using
simulation in practical applications. In this paper, we study the process in
the simplest scenario, i.e., the family of complete graphs, as in this case the
problem is analytically tractable. We provide tight bounds for $\xi(K_n,k)$
that hold asymptotically almost surely for the whole range of the parameter
$k$. These theoretical results reveal interesting relationships and, at the
same time, are also helpful to understand and explain the behaviour we observe
in more realistic networks.
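The broadcasting process itself is easy to simulate on $K_n$. A minimal sketch, assuming synchronous steps and message passing on co-location as in the model above (the termination-check structure is our own):

```python
import random

def broadcasting_time(n, k, rng=random):
    """Simulate the broadcasting process on the complete graph K_n:
    k agents perform independent random walks, and the message passes
    whenever an informed agent shares a vertex with an uninformed one."""
    pos = [rng.randrange(n) for _ in range(k)]
    informed = [i == 0 for i in range(k)]  # agent 0 holds the message
    t = 0
    while True:
        # Pass the message at every vertex occupied by an informed agent.
        hot = {pos[i] for i in range(k) if informed[i]}
        informed = [informed[i] or pos[i] in hot for i in range(k)]
        if all(informed):
            return t
        # Synchronous step: on K_n a walk jumps to a uniform other vertex.
        pos = [(p + rng.randrange(1, n)) % n for p in pos]
        t += 1
```

Averaging `broadcasting_time` over many runs for fixed $n$ and varying $k$ gives empirical curves that can be compared against the asymptotic bounds for $\xi(K_n, k)$ derived above.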
|
There is a surging need across the world for protection against gun violence.
We identify three main research challenges in curbing gun violence: temporal
localization of gunshots, gun type prediction, and gun source (shooter)
detection. Our task is gun source detection
and muzzle head detection, where the muzzle head is the round opening of the
firing end of the gun. We would like to locate the muzzle head of the gun in
the video visually, and identify who has fired the shot. In our formulation, we
turn the problem of muzzle head detection into two sub-problems of human object
detection and gun smoke detection. Our assumption is that the muzzle head
typically lies between the gun smoke caused by the shot and the shooter. We
obtain promising results both in localizing the shooter and in detecting the
gun smoke. In our experiments, we successfully detect the muzzle head by
jointly detecting the gun smoke and the shooter.
|
The normal multi-scale transform [4] is a nonlinear multi-scale transform for
representing geometric objects that has recently been investigated [1, 7, 10].
The restrictive role of the exact order of polynomial reproduction $P_e$ of the
approximating subdivision operator $S$ in the analysis of the $S$ normal
multi-scale transform, established in [7, Theorem 2.6], significantly disfavors
the practical use of these transforms whenever $P_e\ll P$. We analyze in detail
the normal multi-scale transform for planar curves based on B-spline
subdivision scheme $S_p$ of degree $p\ge3$ and derive higher smoothness of the
normal re-parameterization than in [7]. We show that further improvements of
the smoothness factor are possible, provided the approximate normals are
cleverly chosen. Following [10], we introduce a more general framework for
those transforms where more than one subdivision operator can be used in the
prediction step, which leads to higher detail decay rates.
|
As technology advances, the use of Machine Learning (ML) in cybersecurity is
becoming increasingly crucial to tackle the growing complexity of cyber
threats. While traditional ML models can enhance cybersecurity, their high
energy and resource demands limit their applications, leading to the emergence
of Tiny Machine Learning (TinyML) as a more suitable solution for
resource-constrained environments. TinyML optimizes ML algorithms for small,
low-power devices, enabling intelligent data processing directly on the edge,
and is widely applied in areas such as smart homes, healthcare, and industrial
automation. This paper provides a comprehensive
review of common challenges of TinyML techniques, such as power consumption,
limited memory, and computational constraints; it also explores potential
solutions to these challenges, such as energy harvesting, computational
optimization techniques, and transfer learning for privacy preservation. On the
other hand, this paper discusses TinyML's applications in advancing
cybersecurity for Electric Vehicle Charging Infrastructures (EVCIs) as a
representative use case. It presents an experimental case study that enhances
cybersecurity in EVCI using TinyML, evaluated against traditional ML in terms
of reduced delay and memory usage, with a slight trade-off in accuracy.
Additionally, the study includes a practical setup using the ESP32
microcontroller in the PlatformIO environment, which provides a hands-on
assessment of TinyML's application in cybersecurity for EVCI.
|
We investigate lateral recoil forces exerted on nanoparticles located near
plasmonic platforms with in-plane nonreciprocal response. To this purpose, we
first develop a comprehensive theoretical framework based on the Lorentz force
within the Rayleigh approximation combined with nonreciprocal Green's functions
and then derive approximate analytical expressions to model lateral recoil
forces, demonstrating their explicit dependence on the dispersion relation of
the system and unveiling the mechanisms that govern them. In particular, a
dominant lateral recoil force component appears due to the momentum imbalance
of nonreciprocal surface plasmons supported by the platform. This force can be
orders of magnitude larger than other recoil force components, acts only along
or against the direction of the external bias, and is quasi-independent of the
direction, polarization, and wavelength of the incident plane wave. Lateral
recoil forces are explored using drift-biased graphene metasurfaces, a platform
that is also proposed to sort nanoparticles as a function of their size.
Nonreciprocal plasmonic systems may enable new avenues to trap, bind, and
manipulate nanoparticles and to alleviate some of the challenges of
conventional optical tweezers.
|
In this study a phenomenological three-dimensional coupled (3DC) mixed-mode
cohesive zone model (CZM) is proposed. This is done by extending an improved
version of the well established exponential CZM of Xu and Needleman (XN) to 3D
contact problems. Coupled traction-separation relationships are individually
presented for normal and transverse directions. The proposed model preserves
all the essential features of the XN model and yet correctly describes
mixed-mode separation and in particular mixed-mode closure conditions.
Moreover, it provides the possibility to explicitly account for all three
components of the gap function, i.e. separations in different directions. The
3DC model has a small set of independent parameters, i.e. interface properties,
similar to the XN model. All the cohesive zone parameters can be determined using
mode-I and mode-II experiments.
|
In the multiple-soliton case, the freedom in the expansion of the solution of
the perturbed KdV equation is exploited so as to transform the equation into a
system of two equations: the (integrable) Normal Form for KdV-type solitons,
which obey the usual infinity of KdV conservation laws, and an auxiliary
equation that describes the contribution of obstacles to asymptotic
integrability, which arise from the second order onwards. The analysis has
been carried through the third order in the expansion. Within that order, the
solution of the auxiliary equation is a conserved quantity.
|
We characterize HKT structures in terms of a nondegenerate complex Poisson
bivector on a hypercomplex manifold, and extend the characterization to the
twistor space. After considering the flat case in detail, we show that the
twistor space of a hyperkaehler manifold admits a holomorphic Poisson
structure. We briefly mention the relation to quaternionic and hypercomplex
deformations on tori and K3 surfaces.
|
We report on strongly enhanced electron multiplication in thin silicon
membranes. The device is configured as a transmission-type membrane for
electron multiplication. A sub-threshold electric field applied on the emission
side of the membrane enhances the number of electrons emitted by two orders of
magnitude. This enhancement stems from field emitted electrons stimulated by
the incident particles, which suggests that stacks of silicon membranes can
form ultra-sensitive electron multipliers.
|
The paper concerns spontaneous asymptotic phase-locking and synchronization
in two-qubit systems undergoing continuous Markovian evolution described by
Lindbladian dynamics with normal Lindblad operators. Using analytic methods,
all phase-locking-enforcing mechanisms within the given framework are obtained
and classified. Detailed structures of their respective attractor spaces are
provided and used to explore their properties from various perspectives. Among
the phase-locking processes, those additionally enforcing identical stationary
parts of both qubits are identified, including as a special case the strictest
conceivable form of synchronization. A prominent basis is presented which reveals
that from a physical point of view two main types of phase-locking mechanisms
exist. The ability to preserve information about the initial state is explored
and an upper bound on the amplitude of oscillations of the resulting
phase-locked dynamics is established. Permutation symmetry of both asymptotic
states and phase-locking mechanisms is discussed. Lastly, the possibility of
entanglement production playing the role of a phase-locking witness is rebutted
by three analytically treatable examples.
|
Recent years have witnessed growing consolidation of web operations. For
example, the majority of web traffic now originates from a few organizations,
and even micro-websites often choose to host on large pre-existing cloud
infrastructures. In response to this, the "Decentralized Web" attempts to
distribute ownership and operation of web services more evenly. This paper
describes the design and implementation of the largest and most widely used
Decentralized Web platform - the InterPlanetary File System (IPFS) - an
open-source, content-addressable peer-to-peer network that provides distributed
data storage and delivery. IPFS has millions of daily content retrievals and
already underpins dozens of third-party applications. This paper evaluates the
performance of IPFS by introducing a set of measurement methodologies that
allow us to uncover the characteristics of peers in the IPFS network. We reveal
presence in more than 2700 Autonomous Systems and 152 countries, the majority
of which operate outside large central cloud providers like Amazon or Azure. We
further evaluate IPFS performance, showing that both publication and retrieval
delays are acceptable for a wide range of use cases. Finally, we share our
datasets, experiences and lessons learned.
|
We study the pair description of heavy tetraquark systems $|QQ\bar Q \bar
Q\rangle$ in the frame of a non-relativistic potential model. By taking the two
heavy quark pairs $(Q\bar Q)$ as colored clusters, the four-quark Schr\"odinger
equation is reduced to a two-pair equation when the internal motion of the
pairs can be neglected. Taking into account all the Casimir scaling potentials
between two quarks and using the lattice-QCD-simulated mixing angle between the
two color-singlet states for the tetraquark system, we extract a detailed
pair potential between the two heavy quark pairs.
|
The kinematic induction equation of MHD is solved numerically in the case of
the normal ``111'' ABC flow using a general staggered mesh method. Careful 3-D
visualizations of the topology of the magnetic field reveal that previous
conclusions about the modes of operation of this type of kinematic dynamo must
be revised. The two known windows of dynamo action at low and high magnetic
Reynolds number correspond to two distinct modes, both relying crucially on
the replenishment of the magnetic field near a discontinuity at the beta-type
stagnation points in the flow. One of these modes displays double magnetic
structures that were previously thought only to obscure the physics of the
dynamo; they turn out, however, to play an important part in the process of
amplifying the magnetic field. Invariant properties of the mode in the second
magnetic Reynolds number window support the case for the normal ABC flow as a
fast dynamo.
|
In recent years there has been a growing interest in the study of the
dynamics of stochastic populations. A key question in population biology is to
understand the conditions under which populations coexist or go extinct.
Theoretical and empirical studies have shown that coexistence can be
facilitated or negated by both biotic interactions and environmental
fluctuations. We study the dynamics of $n$ populations that live in a
stochastic environment and which can interact nonlinearly (through competition
for resources, predator-prey behavior, etc.). Our models are described by
$n$-dimensional Kolmogorov systems with white noise (stochastic differential
equations - SDE). We give sharp conditions under which the populations converge
exponentially fast to their unique stationary distribution as well as
conditions under which some populations go extinct exponentially fast.
The analysis is done by a careful study of the properties of the invariant
measures of the process that are supported on the boundary of the domain. To
our knowledge this is one of the first general results describing the
asymptotic behavior of stochastic Kolmogorov systems in non-compact domains.
We are able to fully describe the properties of many of the SDE that appear
in the literature. In particular, we extend results on two dimensional
Lotka-Volterra models, two dimensional predator-prey models, $n$ dimensional
simple food chains, and two predator and one prey models. We also show how one
can use our methods to classify the dynamics of any two-dimensional stochastic
Kolmogorov system satisfying some mild assumptions.
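Such stochastic Kolmogorov systems can be explored numerically. As a hedged illustration (the parameters and the Euler-Maruyama discretization below are our own choices, not taken from the paper), the following sketch simulates a two-species stochastic Lotka-Volterra competition model of the form described in the abstract:

```python
import numpy as np

def euler_maruyama_lv(x0, r, A, sigma, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama simulation of the Kolmogorov SDE
    dX_i = X_i (r_i + (A X)_i) dt + sigma_i X_i dW_i."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((steps, x.size))
    sqdt = np.sqrt(dt)
    for t in range(steps):
        drift = x * (r + A @ x)
        noise = sigma * x * rng.normal(size=x.size) * sqdt
        x = np.maximum(x + drift * dt + noise, 0.0)  # populations stay non-negative
        traj[t] = x
    return traj

# Two competing species with weak environmental noise; for these
# illustrative parameters both populations persist and fluctuate
# around the deterministic equilibrium -A^{-1} r.
r = np.array([1.0, 0.8])
A = np.array([[-1.0, -0.3],
              [-0.4, -1.0]])  # intraspecific competition dominates
sigma = np.array([0.05, 0.05])
traj = euler_maruyama_lv([0.5, 0.5], r, A, sigma)
```

With this parameter choice the long-run averages settle near the deterministic coexistence equilibrium, consistent with the persistence regime the paper characterizes.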
|
Extremely large-scale multiple-input multiple-output (XL-MIMO) is the
development trend of future wireless communications. However, the extremely
large-scale antenna array could bring inevitable near-field and dual-wideband
effects that seriously reduce the transmission performance. This paper proposes
an algorithmic framework to design the beam combining for the near-field
wideband XL-MIMO uplink transmissions assisted by holographic metasurface
antennas (HMAs). Firstly, we introduce a spherical-wave-based channel model
that simultaneously takes into account both the near-field and dual-wideband
effects. Based on such a model, we then formulate the HMA-based beam combining
problem for the proposed XL-MIMO communications, which is challenging due to
the nonlinear coupling of high dimensional HMA weights and baseband combiners.
We further present a sum-mean-square-error-minimization-based algorithmic
framework. Numerical results showcase that the proposed scheme can effectively
alleviate the sum-rate loss caused by the near-field and dual-wideband effects
in HMA-assisted XL-MIMO systems. Meanwhile, the proposed HMA-based scheme can
achieve a higher sum rate than the conventional phase-shifter-based hybrid
analog/digital one with the same array aperture.
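As context for the baseband side of MSE-minimising combining, here is a minimal sketch of the classical linear MMSE (Wiener) uplink combiner; this is a textbook ingredient only, and does not include the paper's joint optimisation of the coupled HMA weights (the channel below is a generic random placeholder, not the spherical-wave model):

```python
import numpy as np

def mmse_combiner(H, sigma2):
    """Linear MMSE uplink combiner W = (H H^H + sigma2 I)^{-1} H for a
    channel H (antennas x users) and noise power sigma2; each column of W
    is the combining vector for one user."""
    M = H @ H.conj().T + sigma2 * np.eye(H.shape[0])
    return np.linalg.solve(M, H)

rng = np.random.default_rng(1)
H = (rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))) / np.sqrt(2)
W = mmse_combiner(H, sigma2=0.1)
```

In the low-noise limit the MMSE combiner approaches the zero-forcing solution, i.e. W^H H tends to the identity.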
|
Systems of non-autonomous parabolic partial differential equations over a
bounded domain with nonlinear term of Carath\'eodory type are considered.
Appropriate topologies on sets of Lipschitz Carath\'eodory maps are defined in
order to have a continuous dependence of the mild solutions with respect to the
variation of both the nonlinear term and the initial conditions, under
different assumptions on the bound-maps of the nonlinearities.
|
A new definition of analytic adjoint ideal sheaves for quasi-plurisubharmonic
(quasi-psh) functions with only neat analytic singularities is studied and
shown to admit some residue short exact sequences which are obtained by
restricting sections of the newly defined adjoint ideal sheaves to some unions
of $\sigma$-log-canonical ($\sigma$-lc) centres. The newly defined adjoint
ideal sheaves induce naturally some residue $L^2$ norms on the unions of
$\sigma$-lc centres which are invariant under log-resolutions. They can also
describe unions of $\sigma$-lc centres without the need of log-resolutions even
if the quasi-psh functions in question are not in a simple-normal-crossing
configuration. This hints at their potential use in discussing the
$\sigma$-lc centres even when the quasi-psh functions in question have more
general singularities. Furthermore, their relations to the algebraic
adjoint ideal sheaves of Ein--Lazarsfeld as well as to those of Hacon--McKernan
are described in order to illustrate their role as a (potentially finer)
measurement of singularities in the minimal model program. In the course of the
study, a local $L^2$ extension theorem is proven, which shows that holomorphic
sections on any unions of $\sigma$-lc centres can be extended holomorphically
to some neighbourhood of the unions of $\sigma$-lc centres with some $L^2$
estimates. The proof does not rely on the techniques in the
Ohsawa--Takegoshi-type $L^2$ extension theorems.
|
The most significant challenge currently facing photometric surveys for
transiting gas-giant planets is that of confusion with eclipsing binary systems
that mimic the photometric signature. A simple way to reject most forms of
these false positives is high-precision, rapid-cadence monitoring of the
suspected transit at higher angular resolution and in several filters. We are
currently building a system that will perform higher-angular-resolution,
multi-color follow-up observations of candidate systems identified by Sleuth
(our wide-field transit survey instrument at Palomar), and its two twin system
instruments in Tenerife and northern Arizona.
|
Let $K$ be an algebraically closed field of characteristic different from 2,
$g$ a positive integer, $f(x)$ a degree $(2g+1)$ polynomial with coefficients
in $K$ and without multiple roots, $C:y^2=f(x)$ the corresponding genus $g$
hyperelliptic curve over K, and $J$ the jacobian of $C$. We identify $C$ with
the image of its canonical embedding into $J$ (the infinite point of $C$ goes
to the identity element of $J$). It is well known that for each $\mathfrak{b}
\in J(K)$ there are exactly $2^{2g}$ elements $\mathfrak{a} \in J(K)$ such that
$2\mathfrak{a}=\mathfrak{b}$. M. Stoll constructed an algorithm that provides
Mumford representations of all such $\mathfrak{a}$, in terms of the Mumford
representation of $\mathfrak{b}$. The aim of this paper is to give explicit
formulas for Mumford representations of all such $\mathfrak{a}$, when
$\mathfrak{b}\in J(K)$ is given by $P=(a,b) \in C(K)\subset J(K)$ in terms of
coordinates $a,b$. We also prove that if $g>1$ then $C(K)$ does not contain
torsion points with order between $3$ and $2g$.
|
About two-thirds of Physics PhDs establish careers outside of academia and the
national laboratories in areas like Software, Instrumentation, Data Science,
Finance, Healthcare, Journalism, Public Policy and Non-Governmental
Organization. Skills and knowledge developed during HEPA (High Energy Physics
and Astrophysics) research at the undergraduate, graduate or postdoc level
(collectively called early career) have long been sought after in industry.
These skills are complex problem solving abilities, software programming, data
analysis, math, statistics and scientific writing, to name a few. Given that a
vast majority transition to industry jobs, existing paths for such
transitions should be strengthened and new ways of facilitating them identified
and developed. A strong engagement between HEPA and its alumni would be a
pre-requisite for this. It might also lead to creative ways to reverse the
"brain drain" by encouraging alumni to collaborate on HEPA research projects or
possibly come back full time to research. We motivate and discuss below several
actionable recommendations by which HEPA institutions as well as HEPA faculty
mentors can strengthen the ability to identify non-HEPA career opportunities
for students and post-docs and help them more fully develop skills such as
effective networking, resume building, project management, risk assessment,
and budget planning. This will help prepare early-career HEPA
scientists for successfully transitioning from academia to the diverse array of
non-traditional careers available. HEPA alumni can play a pivotal role by
engaging in this process.
|
Monolayer transition metal dichalcogenides (TMDs) offer new opportunities for
realizing quantum dots (QDs) in the ultimate two-dimensional (2D) limit. Given
the rich control possibilities of electron valley pseudospin discovered in the
monolayers, this quantum degree of freedom can be a promising carrier of
information for potential quantum spintronics exploiting single electrons in
TMD QDs. An outstanding issue is to identify the degree of valley
hybridization, due to the QD confinement, which may significantly change the
valley physics in QDs from its form in the 2D bulk. Here we perform a
systematic study of the intervalley coupling by QD confinement potentials on
extended TMD monolayers. We find that the intervalley coupling in such a geometry
is generically weak due to the vanishing amplitude of the electron wavefunction
at the QD boundary, and hence valley hybridization shall be well quenched by
the much stronger spin-valley coupling in monolayer TMDs and the QDs can well
inherit the valley physics of the 2D bulk. We also discover sensitive
dependence of intervalley coupling strength on the central position and the
lateral length scales of the confinement potentials, which may possibly allow
tuning of intervalley coupling by external controls.
|
The aim of this paper is to interpret the Grothendieck construction in the
monoidal world. That is to say, we restrict the equivalence between fibred
categories and pseudo functors to the case of categories having only a single
object. Another way of expressing this is to say that we are given a monoid
homomorphism. Though only a specialisation, we discover many pleasant results
and interpret many things in a new light. We also touch upon the case of finite
groups as an example.
|
Space dimensionality is a crucial variable in the analysis of the structure
and dynamics of natural systems and phenomena. The dimensionality effects of
the blackbody radiation have been the subject of considerable research activity
in recent years. These studies are still somewhat fragmentary, posing
formidable qualitative and quantitative problems for various scientific and
technological areas. In this work we carry out an information-theoretical
analysis of the spectral energy density of a d-dimensional blackbody at
temperature T by means of various entropy-like quantities (disequilibrium,
Shannon entropy, Fisher information) as well as by three (dimensionless)
complexity measures (Cram\'er-Rao, Fisher-Shannon and LMC). All these
frequency-functional quantities are calculated and discussed in terms of
temperature and dimensionality. It is shown that all three measures of
complexity have a universal character in the sense that they depend neither on
temperature nor on the Planck and Boltzmann constants, but only on the
space dimensionality d. Moreover, they decrease as d increases; in
particular, the values 2.28415, 1.90979 and 1.17685 are found for the
Cram\'er-Rao, Fisher-Shannon and LMC measures of complexity of the
3-dimensional blackbody radiation, respectively. In addition, beyond the
frequency at which the spectral density is maximum (which follows the
well-known Wien displacement law), three further characteristic frequencies are
defined in terms of the previous entropy quantities; they are shown to obey
Wien-like laws. The potential usefulness of these distinctive features of the
blackbody spectrum is physically discussed.
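The claimed temperature independence of the complexity measures can be checked numerically. The sketch below (the quadrature grid and cutoff are our own choices, and only the LMC measure is computed) evaluates C = D * exp(S) for the d = 3 spectral density p(nu) proportional to nu^3 / (exp(a*nu) - 1), where a = h/(k_B T) carries the whole temperature dependence:

```python
import numpy as np

def lmc_blackbody(a, d=3, xmax=80.0, n=400_000):
    """LMC complexity C = D * exp(S) of the density p(nu) ~ nu^d / (exp(a*nu) - 1),
    with disequilibrium D and Shannon entropy S computed by a simple
    uniform-grid quadrature truncated at nu = xmax / a."""
    nu = np.linspace(1e-6 / a, xmax / a, n)
    dnu = nu[1] - nu[0]
    p = nu**d / np.expm1(a * nu)
    p /= p.sum() * dnu                 # normalise to a probability density
    D = (p**2).sum() * dnu             # disequilibrium
    S = -(p * np.log(p)).sum() * dnu   # Shannon entropy
    return D * np.exp(S)
```

Evaluating at different values of a returns the same complexity, and for d = 3 the result agrees with the value 1.17685 quoted above.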
|
We prove bounds on the Castelnuovo-Mumford regularity of singular
schemes, as a function of the degrees of the equations defining the scheme, of
its dimension and of the dimension of its singular locus. In the case where
the singularities are isolated, we improve the bound given by Chardin and
Ulrich, and in the general case we establish a bound doubly exponential in the
dimension of the singular locus.
|
Complex phenotypic differences among different acute leukemias cannot be
fully captured by analyzing the expression levels of one single molecule, such
as a miR, at a time, but requires systematic analysis of large sets of miRs.
While a popular approach for analysis of such datasets is principal component
analysis (PCA), this method is not designed to optimally discriminate different
phenotypes. Moreover, PCA and other low-dimensional representation methods
yield linear or non-linear combinations of all measured miRs. Global human miR
expression was measured in AML, B-ALL, and T-ALL cell lines and patient RNA
samples. By systematically applying support vector machines to all measured
miRs taken in dyad and triad groups, we built miR networks using cell line data
and validated our findings with primary patient samples. All the coordinately
transcribed members of the miR-23a cluster (which includes also miR-24 and
miR-27a), known to function as tumor suppressors of acute leukemias, appeared
in the AML, B-ALL and T-ALL centric networks. Subsequent qRT-PCR analysis
showed that the most connected miR in the B-ALL-centric network, miR-708, is
highly and specifically expressed in B-ALLs, suggesting that miR-708 might
serve as a biomarker for B-ALL. This approach is systematic, quantitative,
scalable, and unbiased. Rather than a single signature, our approach yields a
network of signatures reflecting the redundant nature of biological signaling
pathways. The network representation allows for visual analysis of all
signatures by an expert and for future integration of additional information.
Furthermore, each signature involves only small sets of miRs, such as dyads and
triads, which are well suited for in depth validation through laboratory
experiments such as loss- and gain-of-function assays designed to drive changes
in leukemia cell survival, proliferation and differentiation.
|
With the aim of developing a fast yet accurate algorithm for compressive
sensing (CS) reconstruction of natural images, we combine in this paper the
merits of two existing categories of CS methods: the structure insights of
traditional optimization-based methods and the speed of recent network-based
ones. Specifically, we propose a novel structured deep network, dubbed
ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm
(ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast
ISTA into deep network form, we develop an effective strategy to solve the
proximal mapping associated with the sparsity-inducing regularizer using
nonlinear transforms. All the parameters in ISTA-Net (e.g., nonlinear transforms,
shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than
being hand-crafted. Moreover, considering that the residuals of natural images
are more compressible, an enhanced version of ISTA-Net in the residual domain,
dubbed {ISTA-Net}$^+$, is derived to further improve CS reconstruction.
Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform
existing state-of-the-art optimization-based and network-based CS methods by
large margins, while maintaining fast computational speed. Our source code is
available at http://jianzhang.tech/projects/ISTA-Net.
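For reference, the classical ISTA iteration that the network unrolls can be sketched as follows; this is a generic l1/LASSO instance with a hand-picked step size and threshold, not the learned ISTA-Net parameters:

```python
import numpy as np

def ista(Phi, y, lam, alpha, iters):
    """ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1: a gradient step
    followed by soft-thresholding, the proximal operator of the l1 norm."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        r = x - alpha * Phi.T @ (Phi @ x - y)                      # gradient step
        x = np.sign(r) * np.maximum(np.abs(r) - alpha * lam, 0.0)  # prox (shrinkage) step
    return x

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256)) / np.sqrt(64)   # random CS sampling matrix
x_true = np.zeros(256)
x_true[rng.choice(256, size=5, replace=False)] = 1.0  # sparse signal
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.01, alpha=0.05, iters=2000)  # alpha < 1/||Phi^T Phi||
```

With 64 random measurements of a 5-sparse signal, the iteration recovers the signal up to the small bias induced by the l1 penalty.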
|
This paper is concerned with analysis of electromagnetic wave scattering by
an obstacle which is embedded in a two-layered lossy medium separated by an
unbounded rough surface. Given a dipole point source, the direct problem is to
determine the electromagnetic wave field for the given obstacle and unbounded
rough surface; the inverse problem is to reconstruct simultaneously the
obstacle and unbounded rough surface from the electromagnetic field measured on
a plane surface above the obstacle. For the direct problem, a new boundary
integral equation is proposed and its well-posedness is established. The
analysis is based on the exponential decay of the dyadic Green function for
Maxwell's equations in a lossy medium. For the inverse problem, the global
uniqueness is proved and a local stability is discussed. A crucial step in the
proof of the stability is to obtain the existence and characterization of the
domain derivative of the electric field with respect to the shape of the
obstacle and unbounded rough surface.
|
Most optical systems involve a combination of lenses separated by free-space
regions where light acquires the required angle-dependent phase delay for a
certain functionality. Very recently, flat-optics structures have been proposed
to compress these large free-space volumes and miniaturize the overall optical
system. However, these early designs can only replace free-space volumes of
limited length, or operate in a very narrow angular range, or require a
high-index background. These issues raise questions about the applicability of
these devices in practical scenarios. Here, we first derive a fundamental
trade-off between the length of compressed free space and the operating angular
range, which explains some of the limitations of earlier designs, and we then
propose a solution to relax this trade-off using nonlocal metasurface
structures composed of suitably coupled resonant layers. This strategy,
inspired by coupled-resonator-based band-pass filters, allows replacing
free-space volumes of arbitrary length over wide angular ranges, and with very
high transmittance. Finally, we theoretically demonstrate, for the first time,
the potential of combining local and nonlocal metasurfaces to realize compact,
fully solid-state, planar structures for focusing, imaging, and magnification,
in which the focal length of the lens (and hence its magnifying power) does not
dictate the actual distance at which focusing is achieved. Our findings are
expected to extend the reach of the field of metasurfaces and open new
unexplored opportunities.
|
Combined measurements of Higgs boson production cross sections and branching
fractions are presented. The combination is based on the analyses of the Higgs
boson decay modes $H \to \gamma\gamma$, $ZZ^\ast$, $WW^\ast$, $\tau\tau$,
$b\bar{b}$, $\mu\mu$, searches for decays into invisible final states, and on
measurements of off-shell Higgs boson production. Up to $79.8$ fb$^{-1}$ of
proton-proton collision data collected at $\sqrt{s}=$ 13 TeV with the ATLAS
detector are used. Results are presented for the gluon-gluon fusion and
vector-boson fusion processes, and for associated production with vector bosons
or top-quarks. The global signal strength is determined to be $\mu =
1.11^{+0.09}_{-0.08}$. The combined measurement yields an observed (expected)
significance for the vector-boson fusion production process of $6.5\sigma$
($5.3\sigma$). Measurements in kinematic regions defined within the simplified
template cross section framework are also shown. The results are interpreted in
terms of modifiers applied to the Standard Model couplings of the Higgs boson
to other particles, and are used to set exclusion limits on parameters in
two-Higgs-doublet models and in the simplified Minimal Supersymmetric Standard
Model. No significant deviations from Standard Model predictions are observed.
|
The sequential recommendation problem has attracted considerable research
attention in the past few years, leading to the rise of numerous recommendation
models. In this work, we explore how Large Language Models (LLMs), which are
nowadays introducing disruptive effects in many AI-based applications, can be
used to build or improve sequential recommendation approaches. Specifically, we
design three orthogonal approaches and hybrids of those to leverage the power
of LLMs in different ways. In addition, we investigate the potential of each
approach by focusing on its comprising technical aspects and determining an
array of alternative choices for each one. We conduct extensive experiments on
three datasets and explore a large variety of configurations, including
different language models and baseline recommendation models, to obtain a
comprehensive picture of the performance of each approach. Among other
observations, we highlight that initializing state-of-the-art sequential
recommendation models such as BERT4Rec or SASRec with embeddings obtained from
an LLM can lead to substantial performance gains in terms of accuracy.
Furthermore, we find that fine-tuning an LLM for recommendation tasks enables
it to learn not only the tasks, but also concepts of a domain to some extent.
We also show that fine-tuning OpenAI GPT leads to considerably better
performance than fine-tuning Google PaLM 2. Overall, our extensive experiments
indicate a huge potential value of leveraging LLMs in future recommendation
approaches. We publicly share the code and data of our experiments to ensure
reproducibility.
|
An effective hadronic lagrangian consistent with the symmetries of quantum
chromodynamics and intended for applications to finite-density systems is
constructed. The degrees of freedom are (valence) nucleons, pions, and the
low-lying non-Goldstone bosons, which account for the intermediate-range
nucleon-nucleon interactions and conveniently describe the nonvanishing
expectation values of nucleon bilinears. Chiral symmetry is realized
nonlinearly, with a light scalar meson included as a chiral singlet to describe
the mid-range nucleon-nucleon attraction. The low-energy electromagnetic
structure of the nucleon is described within the theory using vector-meson
dominance, so that external form factors are not needed. The effective
lagrangian is expanded in powers of the fields and their derivatives, with the
terms organized using Georgi's ``naive dimensional analysis''. Results are
presented for finite nuclei and nuclear matter at one-baryon-loop order, using
the single-nucleon structure determined within the model. Parameters obtained
from fits to nuclear properties show that naive dimensional analysis is a
useful principle and that a truncation of the effective lagrangian at the first
few powers of the fields and their derivatives is justified.
|
Let $G$ be a graph with $n$ vertices, and let $A(G)$ and $D(G)$ denote
respectively the adjacency matrix and the degree matrix of $G$. Define $$
A_{\alpha}(G)=\alpha D(G)+(1-\alpha)A(G) $$ for any real $\alpha\in [0,1]$. The
collection of eigenvalues of $A_{\alpha}(G)$ together with multiplicities are
called the \emph{$A_{\alpha}$-spectrum} of $G$. A graph $G$ is said to be
\emph{determined by its $A_{\alpha}$-spectrum} if all graphs having the same
$A_{\alpha}$-spectrum as $G$ are isomorphic to $G$. We first prove that some
graphs are determined by its $A_{\alpha}$-spectrum for $0\leq\alpha<1$,
including the complete graph $K_m$, the star $K_{1,n-1}$, the path $P_n$, the
union of cycles and the complement of the union of cycles, the union of $K_2$
and $K_1$ and the complement of the union of $K_2$ and $K_1$, and the
complement of $P_n$. Setting $\alpha=0$ or $\frac{1}{2}$, those graphs are
determined by $A$- or $Q$-spectra. Secondly, when $G$ is regular, we show that
$G$ is determined by its $A_{\alpha}$-spectrum if and only if the join $G\vee
K_m$ is determined by its $A_{\alpha}$-spectrum for $\frac{1}{2}<\alpha<1$.
Furthermore, we also show that the join $K_m\vee P_n$ is determined by its
$A_{\alpha}$-spectrum for $\frac{1}{2}<\alpha<1$. In the end, we pose some
related open problems for future study.
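The definition above is easy to compute directly; the following minimal sketch (our own helper, using numpy) evaluates the $A_\alpha$-spectrum and illustrates it on the complete graph:

```python
import numpy as np

def a_alpha_spectrum(A, alpha):
    """Sorted eigenvalues of A_alpha(G) = alpha*D(G) + (1-alpha)*A(G),
    for a graph given by its symmetric 0/1 adjacency matrix A."""
    D = np.diag(A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(alpha * D + (1 - alpha) * A))

# For the complete graph K_4, D = 3I, so the A-spectrum {-1, -1, -1, 3}
# maps to {4*alpha - 1 (three times), 3} for every alpha in [0, 1].
A_K4 = np.ones((4, 4)) - np.eye(4)
```

Setting alpha = 0 recovers the adjacency spectrum, consistent with the abstract's remark that alpha = 0 and alpha = 1/2 reduce to the $A$- and $Q$-spectra.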
|
The existence of exceptional points (EPs) ${-}$ where both eigenvalues and
eigenvectors coalesce ${-}$ is a key characteristic of non-Hermitian physics. A
newly-discovered class of magnets ${-}$ termed as altermagnets (AMs) ${-}$ are
characterized by a net zero magnetization as well as spin-split bands. In this
study, we propose the emergence of non-Hermitian physics at AM-ferromagnet (FM)
junctions. We discover that such a junction hosts tunable EPs. We demonstrate
that the positions of these emergent EPs can be tuned using an external applied
magnetic field and show that for a critical value of the applied magnetic field
the EPs can annihilate. Notably, the number and position of the EPs crucially
depend on the type of AM and its orientation with respect to the FM. Our work
puts forth a promising platform of exploration of non-Hermitian physics in an
emerging class of magnetic materials.
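As generic background (a standard two-level gain/loss toy model, not the paper's altermagnet-ferromagnet junction Hamiltonian), an exceptional point can be exhibited as follows:

```python
import numpy as np

def eigenvalues(g, gamma):
    """Eigenvalues of the non-Hermitian two-level Hamiltonian
    H = [[i*gamma, g], [g, -i*gamma]]: lambda = +/- sqrt(g^2 - gamma^2).
    At g = gamma both eigenvalues and eigenvectors coalesce: an EP."""
    H = np.array([[1j * gamma, g], [g, -1j * gamma]])
    return np.linalg.eigvals(H)
```

Away from the EP (g > gamma) the spectrum is real and split; at g = gamma the pair merges at zero, the hallmark degeneracy of non-Hermitian physics.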
|
We achieve 3D semantic scene labeling by exploring semantic relation between
each point and its contextual neighbors through edges. Besides an
encoder-decoder branch for predicting point labels, we construct an edge branch
to hierarchically integrate point features and generate edge features. To
incorporate point features in the edge branch, we establish a hierarchical
graph framework, where the graph is initialized from a coarse layer and
gradually enriched along the point decoding process. For each edge in the final
graph, we predict a label to indicate the semantic consistency of the two
connected points to enhance point prediction. At different layers, edge
features are also fed into the corresponding point module to integrate
contextual information for message passing enhancement in local regions. The
two branches interact with each other and cooperate in segmentation. Decent
experimental results on several 3D semantic labeling datasets demonstrate the
effectiveness of our work.
|
The zero-temperature d-wave superconductor phase transition theory for
two-dimensional superconductors (I. Herbut, PRL {\bf 85}, 1532 (2000)) is
generalized to finite temperatures. The Gaussian behavior of the system is
associated with a non-Fermi behavior of the normal state observed in the
resistivity of cuprate superconductors.
|
This volume contains the proceedings of the (first) Graphs as Models (GaM)
2015 workshop, held on 10-11 April 2015 in London, U.K., as a satellite
workshop of ETAPS 2015, the European Joint Conferences on Theory and Practice
of Software. This new workshop combines the strengths of two pre-existing
workshop series: GT-VMT (Graph Transformation and Visual Modelling Techniques)
and GRAPHITE (Graph Inspection and Traversal Engineering).
Graphs are used as models in all areas of computer science: examples are
state space graphs, control flow graphs, syntax graphs, UML-type models of all
kinds, network layouts, social networks, dependency graphs, and so forth. Used
to model a particular phenomenon or process, graphs are then typically analysed
to find out properties of the modelled subject, or transformed to construct
other types of models.
The workshop aimed at attracting and stimulating research on the techniques
for graph analysis, inspection and transformation, on a general level rather
than in any specific domain. In total, we received 15 submissions covering
several different areas. Of these 15 submissions, nine were eventually accepted
and appear in this volume.
|
Given a cusp form $f$ which is supersingular at a fixed prime $p$ away from
the level, and a Coleman family $F$ through one of its $p$-stabilisations, we
construct a $2$-variable meromorphic $p$-adic $L$-function for the symmetric
square of $F$, denoted $L^{\mathrm{imp}}_p(\mathrm{Sym}^2 F)$. We prove that
this new $p$-adic $L$-function interpolates values of complex imprimitive
symmetric square $L$-functions, for the various specialisations of the family
$F$. It is in fact uniquely determined by its interpolation properties. We also
prove that the function $L^{\mathrm{imp}}_p(\mathrm{Sym}^2 F)$ satisfies a
functional equation. We use this $p$-adic $L$-function to prove a $p$-adic
factorisation formula, expressing the geometric $p$-adic $L$-function attached
to the self-convolution of $F$, as the product of
$L^{\mathrm{imp}}_p(\mathrm{Sym}^2 F)$ and a Kubota-Leopoldt $L$-function. This
extends a result of Dasgupta in the ordinary case.
Using Beilinson-Flach classes constructed by Kings, Zerbes and the second
author we construct motivic cohomology classes $b_f$, and prove that, under
some hypotheses, they differ by a scalar factor from the higher cyclotomic
classes constructed by Beilinson. Using this relation, we prove the
interpolation formulae for $L^{\mathrm{imp}}_p(\mathrm{Sym}^2 F)$ and the
factorisation formula.
|
Demand to use gadolinium (Gd) in detectors is increasing in the field of
elementary particle physics, especially neutrino measurements and dark matter
searches. Large amounts of Gd are used in these experiments. Therefore, to
assess the impact of Gd on the environment, it is becoming important to
measure the baseline concentrations of Gd in the environment. The measurement
other elements. In this paper, a method for measuring the concentrations of
rare earth elements including Gd is proposed. In the method, inductively
coupled plasma mass spectrometry is utilized after collecting the dissolved
elements in chelating resin. Results demonstrating the ability to detect
anomalous concentrations of rare earth elements in river water samples in the
Kamioka and Toyama areas are also reported.
|
Internet censorship is a phenomenon of societal importance and attracts
investigation from multiple disciplines. Several research groups, such as
Censored Planet, have deployed large scale Internet measurement platforms to
collect network reachability data. However, existing studies generally rely on
manually designed rules (i.e., using censorship fingerprints) to detect
network-based Internet censorship from the data. While this rule-based approach
yields a high true positive detection rate, it suffers from several challenges:
it requires human expertise, is laborious, and cannot detect any censorship not
captured by the rules. Seeking to overcome these challenges, we design and
evaluate a classification model based on latent feature representation learning
and an image-based classification model to detect network-based Internet
censorship.
To infer latent feature representations from network reachability data, we
propose a sequence-to-sequence autoencoder to capture the structure and the
order of data elements in the data. To estimate the probability of censorship
events from the inferred latent features, we rely on a densely connected
multi-layer neural network model.
Our image-based classification model encodes a network reachability data
record as a gray-scale image and classifies the image as censored or not using
a dense convolutional neural network. We compare and evaluate both approaches
using data sets from Censored Planet via a hold-out evaluation. Both
classification models are capable of detecting network-based Internet
censorship as we were able to identify instances of censorship not detected by
the known fingerprints. Latent feature representations likely encode more
nuances in the data since the latent feature learning approach discovers a
greater quantity, and a more diverse set, of new censorship instances.
|
Stochastic gradient descent (SGD) is a well known method for regression and
classification tasks. However, it is an inherently sequential algorithm: at each
step, the processing of the current example depends on the parameters learned
from the previous examples. Prior approaches to parallelizing linear learners
using SGD, such as HOGWILD! and ALLREDUCE, do not honor these dependencies
across threads and thus can potentially suffer poor convergence rates and/or
poor scalability. This paper proposes SYMSGD, a parallel SGD algorithm that, to
a first-order approximation, retains the sequential semantics of SGD. Each
thread learns a local model in addition to a model combiner, which allows local
models to be combined to produce the same result as what a sequential SGD would
have produced. This paper evaluates SYMSGD's accuracy and performance on 6
datasets on a shared-memory machine: it achieves up to 11x speedup over our
heavily optimized sequential baseline on 16 cores and is, on average, 2.2x
faster than HOGWILD!.
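The combiner idea admits a compact sketch in the special case of linear
least-squares SGD, where each update is affine in the model, so the combination
is exact; SYMSGD itself handles general linear learners with a first-order
approximation and projects the combiner to keep it cheap. All names below are
ours, not taken from the paper's implementation:

```python
import numpy as np

def sgd_segment(X, y, lr):
    """Run SGD on a data segment starting from the zero model, while also
    accumulating the model combiner M = prod(I - lr * x x^T).  Any later
    starting point w0 then maps to the exact sequential result via M @ w0 + w."""
    d = X.shape[1]
    w = np.zeros(d)          # local model, learned as if starting from 0
    M = np.eye(d)            # model combiner
    for x, t in zip(X, y):
        w = w - lr * (w @ x - t) * x
        M = (np.eye(d) - lr * np.outer(x, x)) @ M
    return w, M

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5])
lr = 0.05

# sequential baseline over the full stream
w_seq = np.zeros(3)
for x, t in zip(X, y):
    w_seq = w_seq - lr * (w_seq @ x - t) * x

# two "threads" process disjoint segments, then combine
w1, _ = sgd_segment(X[:20], y[:20], lr)
w2, M2 = sgd_segment(X[20:], y[20:], lr)
w_par = M2 @ w1 + w2
assert np.allclose(w_seq, w_par)   # identical to the sequential result
```

The d-by-d combiner matrix is what makes naive combination expensive for large
d, which is why the actual system approximates it.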
|
We use hydrodynamical/N-body simulations to interpret the newly discovered
Bullet-cluster-like merging cluster, ZwCl 0008.8+5215 (ZwCl 0008 hereafter),
where a dramatic collision is apparent from multi-wavelength observations. We
have been able to find a self-consistent solution for the radio, X-ray, and
lensing phenomena by projecting an off-axis, binary cluster encounter viewed
just after first core passage. A pair of radio relics traces well the leading
and trailing shock fronts that our simulation predicts, providing constraints on the
collision parameters. We can also account for the observed distinctive
comet-like X-ray morphology and the positions of the X-ray peaks relative to
the two lensing mass centroids and the two shock front locations. Relative to
the Bullet cluster, the total mass is about 70% lower, $(1.2\pm0.1) \times
10^{15}$ Msun, with a correspondingly lower infall velocity, $1800\pm300$ km/s,
and an impact parameter of $400\pm100$ kpc. As a result, the gas component of
the infalling cluster is not trailing significantly behind the associated dark
matter as in the case of the Bullet cluster. The degree of agreement we find
between all the observables provides strong evidence that dark matter is
effectively collisionless on large scales calling into question other claims
and theories that advocate modified gravity.
|
Community detection in Social Networks is associated with finding and
grouping the most similar nodes inherent in the network. These similar nodes
are identified by computing tie strength. Stronger ties indicate higher
proximity shared by connected node pairs. This work is motivated by
Granovetter's argument that strong ties lie within densely
connected nodes and the theory that community cores in real-world networks are
densely connected. In this paper, we have introduced a novel method called
\emph{Disjoint Community detection using Cascades (DCC)} which demonstrates the
effectiveness of a new local density based tie strength measure on detecting
communities. Here, tie strength is utilized to decide the paths followed for
propagating information. The idea is to crawl through the tuple information of
cascades towards the community core guided by increasing tie strength.
Considering the cascade generation step, a novel preferential membership method
has been developed to assign community labels to unassigned nodes. The efficacy
of $DCC$ has been analyzed based on quality and accuracy on several real-world
datasets and baseline community detection algorithms.
|
For glucose electrochemical sensors, a comprehensive electronics interface is
designed and implemented in a 0.18 um CMOS process with a 1.5 V supply
voltage. This interface includes a programmable readout amplifier and a bandgap
reference voltage potentiostat circuit. The programmable transimpedance
amplifier (PTIA), the proposed readout circuit, provides a large dynamic range
and low noise. The overall transimpedance gain of the PTIA ranges from 17.3 to 50.5
kohm. For an input current range of 4.2-180 uA, the PTIA response has a linear
output voltage range of 0.55-1.44 V. The output rms noise value is calculated
to be 5.101 Vrms, and the overall power consumption of the design is 2.33 mW.
The THD percentage spans from 7.6 to 10.2 in the current range mentioned above.
All bandgap reference voltage potentiostat measurements are made using the
reference potential of 0.6 V. The working electrode was a glassy carbon
electrode (GCE) loaded with a CuO/Cu0.76Co2.25O4 (copper cobaltite) coating. An
electrochemical glucose sensing setup has been used to measure glucose
concentrations between 1 and 10 mM, and an emulated circuit has been used to
verify the viability of the proposed glucose sensing design. The suggested
glucose sensor architecture has a total size of 0.0684 mm2.
|
Random forests are ensemble methods which grow trees as base learners and
combine their predictions by averaging. Random forests are known for their good
practical performance, particularly in high-dimensional settings. On the
theoretical side, several studies highlight the potentially fruitful connection
between random forests and kernel methods. In this paper, we work out this
connection in full detail. In particular, we show that by slightly modifying
their definition, random forests can be rewritten as kernel methods (called
KeRF for Kernel based on Random Forests) which are more interpretable and
easier to analyze. Explicit expressions of KeRF estimates for some specific
random forest models are given, together with upper bounds on their rate of
consistency. We also show empirically that KeRF estimates compare favourably to
random forest estimates.
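To make the kernel view concrete, here is a toy 1-D sketch of our own (not one
of the paper's specific forest models): each "tree" is a random partition of
[0, 1), the kernel K(x, z) is the fraction of trees that put x and z in the
same cell, and the KeRF-style estimate is the kernel-weighted average of labels:

```python
import bisect
import random

def make_tree(n_cuts=15, lo=0.0, hi=1.0):
    """A toy 'tree' on [lo, hi): random cut points; returns a function
    mapping a point x to its leaf (cell) index."""
    cuts = sorted(random.uniform(lo, hi) for _ in range(n_cuts))
    return lambda x: bisect.bisect(cuts, x)

def kerf_predict(x, X, y, trees):
    """Kernel K(x, z) = fraction of trees placing x and z in the same
    leaf; prediction = kernel-weighted average of the training labels."""
    K = [sum(t(x) == t(z) for t in trees) / len(trees) for z in X]
    s = sum(K)
    return sum(k_i * y_i for k_i, y_i in zip(K, y)) / s

random.seed(0)
trees = [make_tree() for _ in range(200)]
X = [i / 20 for i in range(21)]   # grid on [0, 1]
y = X                             # regression target y = x
pred = kerf_predict(0.5, X, y, trees)
print(pred)  # a kernel-weighted average of labels near 0.5
```

The kernel is explicit here, which is the interpretability gain the abstract
refers to: the prediction is a weighted average with weights one can inspect.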
|
Spatially-smoothed sources are often utilized in the pseudospectral
time-domain (PSTD) method to suppress the associated aliasing errors to levels
as low as possible. In this work, the explicit conditions of the optimal source
patterns for these spanning sources are presented based on the fact that the
aliasing errors are mainly attributed to the high spatial-frequency parts of
the time-stepped source items and subsequently demonstrated to be exactly
corresponding to the normalized rows of Pascal's triangle. The outstanding
performance of these optimal sources is verified by the practical 1-D, 2-D and
3-D PSTD simulations and compared with that of non-optimal sources.
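A minimal sketch of the claimed optimal patterns: row n of Pascal's triangle
normalized to unit sum. These are binomial weights, i.e. n successive
two-point averages, which is consistent with why such a pattern suppresses the
high spatial-frequency content responsible for aliasing:

```python
from math import comb

def optimal_source_weights(n):
    """Row n of Pascal's triangle, normalized to unit sum: candidate
    spatial weights for a source spanning n + 1 grid points."""
    row = [comb(n, k) for k in range(n + 1)]
    total = sum(row)
    return [c / total for c in row]

print(optimal_source_weights(2))  # [0.25, 0.5, 0.25]
print(optimal_source_weights(4))  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```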
|
It has been shown that uniform as well as non-uniform cellular automata (CA)
can be evolved to perform certain computational tasks. Random Boolean networks
are a generalization of two-state cellular automata, where the interconnection
topology and the cells' rules are specified at random. Here we present a novel
analytical approach to find the local rules of random Boolean networks (RBNs)
to solve the global density classification and the synchronization task from
any initial configuration. We quantitatively and qualitatively compare our
results with previously published work on cellular automata and show that
randomly interconnected automata are computationally more efficient in solving
these two global tasks. Our approach also provides convergence and quality
estimates and allows the networks to be randomly rewired during operation,
without affecting the global performance. Finally, we show that RBNs outperform
small-world topologies on the density classification task and that they perform
equally well on the synchronization task. Our novel approach and the results
may have applications in designing robust complex networks and locally
interacting distributed computing systems for solving global tasks.
|
This contribution describes the "spectro-perfectionism" algorithm of Bolton &
Schlegel (2010, PASP, 122, 248) that is being implemented within the Baryon
Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey III
(SDSS-III), in terms of its potential to deliver Poisson-limited sky
subtraction and lossless compression of the input spectrum likelihood
functional given raw CCD data.
|
With the recent introduction of Assistants API, it is expected that
document-based language models will be actively used in various domains,
especially role-playing. However, a key challenge lies in utilizing the
protagonist's persona: the Assistants API's retrieval often fails because the
information extracted differs on each call, and important details such as the
protagonist's backstory or relationships are often omitted. It is hard to
maintain a consistent persona simply by using the persona document as input to
the Assistants API. To address the challenge of achieving
stable persona consistency, we propose CharacterGPT, a novel persona
reconstruction framework to alleviate the shortcomings of the Assistants API.
Our method involves Character Persona Training (CPT), an effective persona
rebuilding process that updates the character persona by extracting each
character's traits from a given summary of the novel as the story progresses.
In our experiments, we ask each character to take
the Big Five Inventory personality test in various settings and analyze the
results. To assess whether it can think outside the box, we let each character
generate short novels. Extensive experiments and human evaluation demonstrate
that CharacterGPT presents new possibilities for role-playing agent research.
Code and results are available at: https://github.com/Jeiyoon/charactergpt
|
The General Data Protection Regulation (GDPR) is a European Union regulation
that will replace the existing Data Protection Directive on 25 May 2018. The
most significant change is a huge increase in the maximum fine that can be
levied for breaches of the regulation. Yet fewer than half of UK companies are
fully aware of GDPR - and a number of those who were preparing for it stopped
doing so when the Brexit vote was announced. A last-minute rush to become
compliant is therefore expected, and numerous companies are starting to offer
advice, checklists and consultancy on how to comply with GDPR. In such an
environment, artificial intelligence technologies ought to be able to assist by
providing best advice; asking all and only the relevant questions; monitoring
activities; and carrying out assessments. The paper considers four areas of
GDPR compliance where rule based technologies and/or machine learning
techniques may be relevant: * Following compliance checklists and codes of
conduct; * Supporting risk assessments; * Complying with the new regulations
regarding technologies that perform automatic profiling; * Complying with the
new regulations concerning recognising and reporting breaches of security. It
concludes that AI technology can support each of these four areas. The
requirements that GDPR (or organisations that need to comply with GDPR) state
for explanation and justification of reasoning imply that rule-based approaches
are likely to be more helpful than machine learning approaches. However, there
may be good business reasons to take a different approach in some
circumstances.
|
The energy spectrum of magnetohydrodynamic turbulence attracts interest due
to its fundamental importance and its relevance for interpreting astrophysical
data. Here we present measurements of the energy spectra from a series of
high-resolution direct numerical simulations of MHD turbulence with a strong
guide field and for increasing Reynolds number. The presented simulations, with
numerical resolutions up to 2048^3 mesh points and statistics accumulated over
30 to 150 eddy turnover times, constitute, to the best of our knowledge, the
largest statistical sample of steady state MHD turbulence to date. We study
both the balanced case, where the energies associated with Alfv\'en modes
propagating in opposite directions along the guide field, E^+ and E^-, are
equal, and the imbalanced case where the energies are different. In the
balanced case, we find that the energy spectrum converges to a power law with
exponent -3/2 as the Reynolds number is increased, consistent with
phenomenological models that include scale-dependent dynamic alignment. For the
imbalanced case, with E^+>E^-, the simulations show that E^- ~ k_{\perp}^{-3/2}
for all Reynolds numbers considered, while E^+ has a slightly steeper spectrum
at small Re. As the Reynolds number increases, E^+ flattens. Since both E^+ and
E^- are pinned at the dissipation scale and anchored at the driving scales, we
postulate that at sufficiently high Re the spectra will become parallel in the
inertial range and scale as E^+ ~ E^- ~ k_{\perp}^{-3/2}. Questions regarding
the universality of the spectrum and the value of the "Kolmogorov constant" are
discussed.
|
Should nature be supersymmetric, then it will be described by Quantum
Supergravity at least in some energy regimes. The currently most advanced
description of Quantum Supergravity and beyond is Superstring Theory/M-Theory
in 10/11 dimensions. String Theory is a top-to-bottom approach to Quantum
Supergravity in that it postulates a new object, the string, from which
classical Supergravity emerges as a low energy limit. On the other hand, one
may try more traditional bottom-to-top routes and apply the techniques of
Quantum Field Theory. Loop Quantum Gravity (LQG) is a manifestly background
independent and non-perturbative approach to the quantisation of classical
General Relativity, however, so far mostly without supersymmetry. The main
obstacle to the extension of the techniques of LQG to the quantisation of
higher dimensional Supergravity is that LQG rests on a specific connection
formulation of General Relativity which exists only in D+1 = 4 dimensions. In
this Letter we introduce a new connection formulation of General Relativity
which exists in all space-time dimensions. We show that all LQG techniques
developed in D+1 = 4 can be transferred to the new variables in all dimensions
and describe how they can be generalised to the new types of fields that appear
in Supergravity theories as compared to standard matter, specifically
Rarita-Schwinger and p-form gauge fields.
|
Transformers are ubiquitous in Natural Language Processing (NLP) tasks, but
they are difficult to deploy on hardware due to their intensive computation.
To enable low-latency inference on resource-constrained hardware platforms, we
propose to design Hardware-Aware Transformers (HAT) with neural architecture
search. We first construct a large design space with $\textit{arbitrary
encoder-decoder attention}$ and $\textit{heterogeneous layers}$. Then we train
a $\textit{SuperTransformer}$ that covers all candidates in the design space,
and efficiently produces many $\textit{SubTransformers}$ with weight sharing.
Finally, we perform an evolutionary search with a hardware latency constraint
to find a specialized $\textit{SubTransformer}$ dedicated to run fast on the
target hardware. Extensive experiments on four machine translation tasks
demonstrate that HAT can discover efficient models for different hardware (CPU,
GPU, IoT device). When running WMT'14 translation task on Raspberry Pi-4, HAT
can achieve $\textbf{3}\times$ speedup, $\textbf{3.7}\times$ smaller size over
baseline Transformer; $\textbf{2.7}\times$ speedup, $\textbf{3.6}\times$
smaller size over Evolved Transformer with $\textbf{12,041}\times$ less search
cost and no performance loss. HAT code is available at
https://github.com/mit-han-lab/hardware-aware-transformers.git
|
This paper proposes a method for measuring semantic similarity between words
as a new tool for text analysis. The similarity is measured on a semantic
network constructed systematically from a subset of the English dictionary,
LDOCE (Longman Dictionary of Contemporary English). Spreading activation on the
network can directly compute the similarity between any two words in the
Longman Defining Vocabulary, and indirectly the similarity of all the other
words in LDOCE. The similarity represents the strength of lexical cohesion or
semantic relation, and also provides valuable information about similarity and
coherence of texts.
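Spreading activation of this kind can be sketched generically on any graph
(the sketch below is ours and uses an arbitrary adjacency matrix, not the
LDOCE-derived network): inject unit activation at one word-node, let it spread
along edges with a decay factor, and read off the activation accumulated at the
other word-node as the similarity score.

```python
import numpy as np

def spread_similarity(adj, i, j, steps=10, decay=0.5):
    """Generic spreading-activation sketch: activate node i, spread the
    activation along row-normalized edges with decay at each step, and
    return the total activation accumulated at node j."""
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic spread
    a = np.zeros(adj.shape[0])
    a[i] = 1.0
    total = np.zeros_like(a)
    for _ in range(steps):
        a = decay * (a @ P)
        total += a
    return total[j]

# a tiny path graph 0 - 1 - 2: node 1 is "closer" to node 0 than node 2 is
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
print(spread_similarity(adj, 0, 1) > spread_similarity(adj, 0, 2))  # True
```

Nearer nodes accumulate more activation, so the score behaves like a graded
measure of lexical proximity.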
|
In this paper, we study the Abreu equation on toric surfaces. In particular,
we prove the existence of the positive extremal metric when relative
$K$-stability is assumed.
|
Adaptive finite elements combined with geometric multigrid solvers are one of
the most efficient numerical methods for problems such as the instationary
Navier-Stokes equations. Yet despite their efficiency, computations remain
expensive and the simulation of, for example, complex flow problems can take
many hours or days. GPUs provide an interesting avenue to speed up the
calculations due to their very large theoretical peak performance. However, the
large degree of parallelism and non-standard API make the use of GPUs in
scientific computing challenging. In this work, we develop a GPU acceleration
for the adaptive finite element library Gascoigne and study its effectiveness
for different systems of partial differential equations. Through the systematic
formulation of all computations as linear algebra operations, we can employ
GPU-accelerated linear algebra libraries, which simplifies the implementation
and ensures the maintainability of the code while achieving very efficient GPU
utilizations. Our results for a transport-diffusion equation, linear
elasticity, and the instationary Navier-Stokes equations show substantial
speedups of up to 20X compared to multi-core CPU implementations.
|
We discuss the integrability properties of the Boussinesq equations in the
language of geometrical quantities defined on an appropriately chosen coset
manifold connected with the $W_{3}$ algebra of Zamolodchikov. We provide a
geometrical interpretation to the commuting conserved quantities, Lax-pair
formulation, zero-curvature representation, Miura maps, etc. in the framework
of nonlinear realization method.
|
We study the entanglement dynamics of quantum automaton (QA) circuits in the
presence of U(1) symmetry. We find that the second R\'enyi entropy grows
diffusively with a logarithmic correction as $\sqrt{t\ln{t}}$, saturating the
bound established by Huang [IOP SciNotes 1, 035205 (2020)]. Thanks to the
special feature of QA circuits, we understand the entanglement dynamics in
terms of a classical bit string model. Specifically, we argue that the
diffusive dynamics stems from the rare slow modes containing extensively long
domains of spin 0s or 1s. Additionally, we investigate the entanglement
dynamics of monitored QA circuits by introducing a composite measurement that
preserves both the U(1) symmetry and properties of QA circuits. We find that as
the measurement rate increases, there is a transition from a volume-law phase,
where the second R\'enyi entropy sustains its diffusive growth (up to a
logarithmic correction), to a critical phase where it grows logarithmically in
time. This interesting phenomenon distinguishes QA circuits from non-automaton
circuits such as U(1)-symmetric Haar random circuits, where a volume-law to an
area-law phase transition exists, and any non-zero rate of projective
measurements in the volume-law phase leads to a ballistic growth of the R\'enyi
entropy.
|
We describe various errors in the mathematical literature, and consider how
some of them might have been avoided, or at least detected at an earlier stage,
using tools such as Maple or Sage. Our examples are drawn from three broad
categories of errors. First, we consider some significant errors made by
highly-regarded mathematicians. In some cases these errors were not detected
until many years after their publication. Second, we consider in some detail an
error that was recently detected by the author. This error in a refereed
journal led to further errors by at least one author who relied on the
(incorrect) result. Finally, we mention some instructive errors that have been
detected in the author's own published papers.
|
We report on simultaneous radio and X-ray observations of the repeating fast
radio burst source FRB 180916.J0158+65 using the Canadian Hydrogen Intensity
Mapping Experiment (CHIME), Effelsberg, and Deep Space Network (DSS-14 and
DSS-63) radio telescopes and the Chandra X-ray Observatory. During 33 ks of
Chandra observations, we detect no radio bursts in overlapping Effelsberg or
Deep Space Network observations and a single radio burst during CHIME/FRB
source transits. We detect no X-ray events in excess of the background during
the Chandra observations. These non-detections imply a 5-$\sigma$ limit of
$<5\times10^{-10}$ erg cm$^{-2}$ for the 0.5--10 keV fluence of prompt emission
at the time of the radio burst and $1.3\times10^{-9}$ erg cm$^{-2}$ at any time
during the Chandra observations at the position of FRB 180916.J0158+65. Given
the host-galaxy redshift of FRB 180916.J0158+65 ($z\sim0.034$), these
correspond to energy limits of $<1.6\times10^{45}$ erg and $<4\times10^{45}$
erg, respectively. We also place a 5-$\sigma$ limit of $<8\times10^{-15}$ erg
s$^{-1}$ cm$^{-2}$ on the 0.5--10\,keV absorbed flux of a persistent source at
the location of FRB 180916.J0158+65. This corresponds to a luminosity limit of
$<2\times10^{40}$ erg s$^{-1}$. Using Fermi/GBM data we search for prompt
gamma-ray emission at the time of radio bursts from FRB 180916.J0158+65 and
find no significant bursts, placing a limit of $4\times10^{-9}$ erg cm$^{-2}$
on the 10--100 keV fluence. We also search Fermi/LAT data for periodic
modulation of the gamma-ray brightness at the 16.35-day period of radio-burst
activity and detect no significant modulation. We compare these deep limits to
the predictions of various fast radio burst models, but conclude that similar
X-ray constraints on a closer fast radio burst source would be needed to
strongly constrain theory.
|
The "Smart City" (SC) concept revolves around the idea of embodying
cutting-edge ICT solutions in the very fabric of future cities, in order to
offer new and better services to citizens while lowering the city management
costs, in monetary, social, and environmental terms. In this framework,
communication technologies are perceived as subservient to the SC services,
providing the means to collect and process the data needed to make the services
function. In this paper, we propose a new vision in which technology and SC
services are designed to take advantage of each other in a symbiotic manner.
According to this new paradigm, which we call "SymbioCity", SC services can
indeed be exploited to improve the performance of the same communication
systems that provide them with data. Suggestive examples of this symbiotic
ecosystem are discussed in the paper. This vision is then substantiated in
a proof-of-concept case study, where we show how the traffic monitoring service
provided by the London Smart City initiative can be used to predict the density
of users in a certain zone and optimize the cellular service in that area.
|
We present 3D core-collapse supernova simulations of massive Pop-III
progenitor stars at the transition to the pulsational pair instability regime.
We simulate two progenitor models with initial masses of
$85\,\mathrm{M}_{\odot}$ and $100\,\mathrm{M}_\odot$ with the LS220, SFHo, and
SFHx equations of state. The $85\,\mathrm{M}_{\odot}$ progenitor experiences a
pair instability pulse coincident with core collapse, whereas the
$100\,\mathrm{M}_{\odot}$ progenitor has already gone through a sequence of
four pulses $1{,}500$ years before collapse, in which it ejected its H and
He envelope. The $85\,\mathrm{M}_{\odot}$ models experience shock revival and
then delayed collapse to a black hole (BH) due to ongoing accretion within
hundreds of milliseconds. The diagnostic energy of the incipient explosion
reaches up to $2.7\times10^{51}\,\mathrm{erg}$ in the SFHx model. Due to the
high binding energy of the metal core, BH collapse by fallback is eventually
unavoidable, but partial mass ejection may be possible. The
$100\,\mathrm{M}_\odot$ models have not achieved shock revival or undergone BH
collapse by the end of the simulation. All models exhibit relatively strong
gravitational-wave emission both in the high-frequency g-mode emission band and
at low frequencies. The SFHx and SFHo models show clear emission from the
standing accretion shock instability. For our models, we estimate maximum
detection distances of up to $\mathord{\sim}46\,\mathrm{kpc}$ with LIGO and
$\mathord{\sim} 850\,\mathrm{kpc}$ with Cosmic Explorer.
|
We investigate learning the eigenfunctions of evolution operators for
time-reversal invariant stochastic processes, a prime example being the
Langevin equation used in molecular dynamics. Many physical or chemical
processes described by this equation involve transitions between metastable
states separated by high potential barriers that can hardly be crossed during a
simulation. To overcome this bottleneck, data are collected via biased
simulations that explore the state space more rapidly. We propose a framework
for learning from biased simulations rooted in the infinitesimal generator of
the process and the associated resolvent operator. We contrast our approach to
more common ones based on the transfer operator, showing that it can provably
learn the spectral properties of the unbiased system from biased data. In
experiments, we highlight the advantages of our method over transfer operator
approaches and recent developments based on generator learning, demonstrating
its effectiveness in estimating eigenfunctions and eigenvalues. Importantly, we
show that even with datasets containing only a few relevant transitions due to
sub-optimal biasing, our approach recovers relevant information about the
transition mechanism.
|
We prove Torelli-type uniqueness theorems for both ALG$^*$ gravitational
instantons and ALG gravitational instantons which are of order $2$. That is,
the periods uniquely characterize these types of gravitational instantons up to
diffeomorphism. We define a period mapping $\mathscr{P}$, which we show is
surjective in the ALG cases, and open in the ALG$^*$ cases. We also construct
some new degenerations of hyperk\"ahler metrics on the K3 surface which exhibit
bubbling of ALG$^*$ gravitational instantons.
|
Let A be a local ring which admits an exact pair x,y of zero divisors as
defined by Henriques and Sega. Assuming that this pair is regular and that
there exists a regular element on the A-module A/(x,y), we explicitly construct
an infinite family of non-isomorphic indecomposable totally reflexive
A-modules. In this setting, our construction provides an answer to a question
raised by Christensen, Piepmeyer, Striuli, and Takahashi. Furthermore, we
compute the module of homomorphisms between any two given modules from the
infinite family mentioned above.
|
For a nonlinear Anosov diffeomorphism of the 2-torus, we present examples of
measures so that the group of $\mu$-preserving diffeomorphisms is, up to
zero-entropy transformations, cyclic. For families of equilibrium states $\mu$,
we strengthen this to show that the group of $\mu$-preserving diffeomorphisms
is virtually cyclic.
|
We study dynamic matching in a spatial setting. Drivers are distributed at
random on some interval. Riders arrive in some (possibly adversarial) order at
randomly drawn points. The platform observes the location of the drivers, and
can match newly arrived riders immediately, or can wait for more riders to
arrive. Unmatched riders incur a waiting cost $c$ per period. The platform can
match riders and drivers, irrevocably. The cost of matching a driver to a rider
is equal to the distance between them. We quantify the value of slightly
increasing supply. We prove that when there are $(1+\epsilon)$ drivers per
rider (for any $\epsilon > 0$), the cost of matching returned by a simple
greedy algorithm which pairs each arriving rider to the closest available
driver is $O(\log^3(n))$, where $n$ is the number of riders. On the other hand,
with equal number of drivers and riders, even the \emph{ex post} optimal
matching does not have a cost less than $\Theta(\sqrt{n})$. Our results shed
light on the important role of (small) excess supply in spatial matching
markets.
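The greedy policy analyzed above is easy to state in code; a minimal 1-D
sketch of our own (positions on a line, ignoring the per-period waiting cost
that the full model also tracks):

```python
import bisect
import random

def greedy_cost(drivers, riders):
    """Match each arriving rider to the closest still-available driver on
    the line, irrevocably; return the total distance cost."""
    free = sorted(drivers)
    cost = 0.0
    for r in riders:
        i = bisect.bisect_left(free, r)
        # the closest free driver flanks the insertion point
        j = min((k for k in (i - 1, i) if 0 <= k < len(free)),
                key=lambda k: abs(free[k] - r))
        cost += abs(free.pop(j) - r)
    return cost

random.seed(0)
n, eps = 2000, 0.1
drivers = [random.random() for _ in range(int((1 + eps) * n))]
riders = [random.random() for _ in range(n)]
print(greedy_cost(drivers, riders))
```

With the $(1+\epsilon)$ excess supply above, the abstract's result says this
total cost grows only polylogarithmically in $n$, versus $\Theta(\sqrt{n})$ in
the balanced case.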
|
Parameterised actions in reinforcement learning are composed of discrete
actions with continuous action-parameters. This provides a framework for
solving complex domains that require combining high-level actions with flexible
control. The recent P-DQN algorithm extends deep Q-networks to learn over such
action spaces. However, it treats all action-parameters as a single joint input
to the Q-network, invalidating its theoretical foundations. We analyse the
issues with this approach and propose a novel method, multi-pass deep
Q-networks, or MP-DQN, to address them. We empirically demonstrate that MP-DQN
significantly outperforms P-DQN and other previous algorithms in terms of data
efficiency and converged policy performance on the Platform, Robot Soccer Goal,
and Half Field Offense domains.
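The multi-pass idea can be sketched independently of any deep-learning
framework: for the i-th of k passes, feed only action i's continuous
parameters (all others zeroed) and read off the i-th Q-value, so that
Q(s, a_i, x_i) cannot depend on other actions' parameters. `q_net` below is a
hypothetical stand-in for the trained network, not the paper's architecture:

```python
import numpy as np

def multipass_q(q_net, state, params):
    """MP-DQN-style evaluation: one forward pass per discrete action,
    with every other action's parameters zeroed out."""
    k = len(params)
    qs = np.empty(k)
    for i in range(k):
        masked = [p if j == i else np.zeros_like(p) for j, p in enumerate(params)]
        x = np.concatenate([state] + masked)
        qs[i] = q_net(x)[i]       # i-th Q-head of the i-th pass
    return qs

# stand-in "network": a fixed random linear map to k = 2 Q-values
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3 + 2 + 2))   # state dim 3, two 2-dim param vectors
q_net = lambda x: W @ x

s = rng.normal(size=3)
p = [rng.normal(size=2), rng.normal(size=2)]
q = multipass_q(q_net, s, p)
# changing action 1's parameters cannot affect Q for action 0
p2 = [p[0], p[1] + 1.0]
assert np.isclose(q[0], multipass_q(q_net, s, p2)[0])
```

The final assertion is exactly the property the joint-input P-DQN formulation
violates.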
|
Deep learning models are known to be vulnerable not only to input-dependent
adversarial attacks but also to input-agnostic or universal adversarial
attacks. Dezfooli et al. \cite{Dezfooli17,Dezfooli17anal} construct universal
adversarial attack on a given model by looking at a large number of training
data points and the geometry of the decision boundary near them. Subsequent
work \cite{Khrulkov18} constructs universal attack by looking only at test
examples and intermediate layers of the given model. In this paper, we propose
a simple universalization technique to take any input-dependent adversarial
attack and construct a universal attack by only looking at very few adversarial
test examples. We do not require details of the given model and have negligible
computational overhead for universalization. We theoretically justify our
universalization technique by a spectral property common to many
input-dependent adversarial perturbations, e.g., gradients, Fast Gradient Sign
Method (FGSM) and DeepFool. Using matrix concentration inequalities and
spectral perturbation bounds, we show that the top singular vector of
input-dependent adversarial directions on a small test sample gives an
effective and simple universal adversarial attack. For VGG16 and VGG19 models
trained on ImageNet, our simple universalization of Gradient, FGSM, and
DeepFool perturbations using a test sample of 64 images gives fooling rates
comparable to state-of-the-art universal attacks \cite{Dezfooli17,Khrulkov18}
for reasonable norms of perturbation. Code available at
https://github.com/ksandeshk/svd-uap .
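The universalization step itself is small: stack the per-example attack
directions (e.g. FGSM sign gradients) as rows and take the top right singular
vector. The toy data below is ours, standing in for real per-example
perturbations; in practice the result is rescaled to the attack's norm budget:

```python
import numpy as np

def universal_direction(perturbations):
    """Stack per-example attack directions row-wise; the top right
    singular vector is the candidate universal attack direction."""
    A = np.stack([p.ravel() for p in perturbations])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[0].reshape(perturbations[0].shape)

# toy check: per-example directions sharing a common component
rng = np.random.default_rng(0)
shared = rng.normal(size=32)
shared /= np.linalg.norm(shared)
perts = [shared + 0.1 * rng.normal(size=32) for _ in range(64)]
v = universal_direction(perts)
print(abs(v @ shared))  # close to 1: the SVD recovers the shared direction
```

This is the spectral property the paper's concentration argument formalizes: a
small sample of perturbations suffices for the top singular vector to align
with the common adversarial direction.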
|
Nuclear electromagnetic currents derived in a chiral-effective-field-theory
framework including explicit nucleons, $\Delta$ isobars, and pions up to
N$^2$LO, {\it i.e.} ignoring loop corrections, are used in a study of neutron
radiative captures on protons and deuterons at thermal energies, and of $A$=2
and 3 nuclei magnetic moments. With the strengths of the $\Delta$-excitation
currents determined to reproduce the $n$-$p$ cross section and isovector
combination of the trinucleon magnetic moments, we find that the cross section
and photon circular polarization parameter, measured respectively in $n$-$d$
and $\vec{n}$-$d$ processes, are significantly underpredicted by theory.
|
Based on the progress of image recognition, video recognition has been
extensively studied recently. However, most of the existing methods are focused
on short-term but not long-term video recognition, called contextual video
recognition. To address contextual video recognition, we use convolutional
recurrent neural networks (ConvRNNs) having a rich spatio-temporal information
processing capability, but ConvRNNs require extensive computation that slows
down training. In this paper, inspired by the normalization and detrending
methods, we propose adaptive detrending (AD) for temporal normalization in
order to accelerate the training of ConvRNNs, especially for convolutional
gated recurrent unit (ConvGRU). AD removes internal covariate shift within a
sequence of each neuron in recurrent neural networks (RNNs) by subtracting a
trend. In the experiments for contextual recognition on ConvGRU, the results
show that (1) ConvGRU clearly outperforms the feed-forward neural networks, (2)
AD consistently offers a significant training acceleration and generalization
improvement, and (3) AD is further improved by collaborating with the existing
normalization methods.
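The core of AD is subtracting an estimated per-neuron trend within each
sequence; the particular trend estimator below (an exponential moving average)
is our own illustrative choice, not necessarily the paper's:

```python
import numpy as np

def adaptive_detrend(seq, beta=0.9):
    """Detrending sketch: track a per-neuron running trend through the
    sequence and subtract it from every time step."""
    trend = np.zeros_like(seq[0])
    out = []
    for x in seq:
        trend = beta * trend + (1 - beta) * x
        out.append(x - trend)
    return out

# a constant offset (covariate shift within the sequence) decays away
seq = [np.array([5.0, -3.0]) for _ in range(60)]
out = adaptive_detrend(seq)
print(np.abs(out[-1]).max())  # near 0: the offset has been removed
```

Removing such within-sequence shifts is what the abstract credits for the
faster ConvGRU training.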
|
The multiplicity of metal-free (Population III) stars may influence their
feedback efficiency within their host dark matter halos, affecting subsequent
metal enrichment and the transition to galaxy formation. Radiative feedback
from massive stars can trigger nearby star formation in dense self-shielded
clouds. To model radiation self-shielding, the H$_2$ column density must be
accurately computed. In this study, we compare two local approximations based
on the density gradient and Jeans length with a direct integration of column
density along rays. After the primary massive star forms, we find that no
secondary stars form with either the direct integration or the density-gradient
approach. The approximate method reduces the computation time by a factor of
2. The Jeans length approximation overestimates the H$_2$ column density by a
factor of 10, leading to five numerically enhanced self-shielded, star-forming
clumps. We conclude that the density gradient approximation is sufficiently
accurate for larger-volume galaxy simulations, with the caveat that the
approximation cannot fully reproduce the results of direct integration.
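For reference, both local approximations estimate the column density as $N_{\rm H_2} \approx n_{\rm H_2} \cdot L$ with a characteristic length $L$ taken either from the density gradient or from the Jeans length. The sketch below uses the standard textbook forms in CGS units; the exact prefactors adopted in the simulations may differ.

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cm^3 g^-1 s^-2 (CGS)

def jeans_length(cs, rho):
    """Jeans length L_J = sqrt(pi * cs^2 / (G * rho)), in cm."""
    return np.sqrt(np.pi * cs**2 / (G * rho))

def column_density_gradient(n_h2, grad_n):
    """Density-gradient approximation: N ~ n * (n / |grad n|)."""
    return n_h2 * (n_h2 / np.abs(grad_n))

def column_density_jeans(n_h2, cs, rho):
    """Jeans-length approximation: N ~ n * L_J."""
    return n_h2 * jeans_length(cs, rho)
```

Because $L_J \propto \rho^{-1/2}$, the Jeans-length estimate shrinks only slowly with increasing density, which is one way such a local proxy can overestimate the true column through a compact clump.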
|
In this work, we present a combinatorial, deterministic single-pass streaming
algorithm for the problem of maximizing a submodular function, not necessarily
monotone, with respect to a cardinality constraint (SMCC). When the function is
monotone, our algorithm reduces to the optimal streaming algorithm
of Badanidiyuru et al. (2014). In general, our algorithm achieves ratio $\alpha
/ (1 + \alpha) - \varepsilon$, for any $\varepsilon > 0$, where $\alpha$ is the
ratio of an offline (deterministic) algorithm for SMCC used for
post-processing. Thus, if exponential computation time is allowed, our
algorithm deterministically achieves nearly the optimal $1/2$ ratio. These
results nearly match those of a recently proposed, randomized streaming
algorithm that achieves the same ratios in expectation. For a deterministic,
single-pass streaming algorithm, ours improves the best known approximation
factor in polynomial time, from the $1/9$ of previous literature to
$\approx 0.2689$.
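A single threshold pass of the kind used by such streaming algorithms can be sketched as follows. This is a simplified, hypothetical illustration with a set-coverage objective; in practice the threshold `tau` is derived from a guess of the optimum value, and several thresholds are run in parallel.

```python
def threshold_stream(stream, k, tau, marginal_gain):
    """One pass over the stream: keep an element if its marginal gain
    meets the threshold tau and the cardinality budget k is not used up."""
    S = []
    for e in stream:
        if len(S) < k and marginal_gain(e, S) >= tau:
            S.append(e)
    return S

# A monotone submodular objective for illustration: set coverage.
sets = {0: {1, 2}, 1: {2, 3}, 2: {1, 2, 3, 4}}

def coverage_gain(e, S):
    """Number of new elements e would cover beyond what S already covers."""
    covered = set().union(*(sets[x] for x in S)) if S else set()
    return len(sets[e] - covered)

selected = threshold_stream([0, 1, 2], k=2, tau=2, marginal_gain=coverage_gain)
```

Here element 1 is skipped (its marginal gain over {0} is only 1), so the pass returns `[0, 2]`; the memory footprint is $O(k)$ per threshold, which is what makes the single-pass setting work.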
|
The rapid advancements in memory systems, CPU technology, and emerging
technologies herald a transformative potential in computing, promising to
revolutionize memory hierarchies. Innovations in DDR memory are delivering
unprecedented bandwidth, while advancements in on-chip wireless technology are
reducing size and increasing speed. The introduction of high-speed wireless
transceivers on chip, placed near high-speed DRAM, is poised to directly
facilitate memory requests. This integration suggests the potential for
eliminating traditional memory hierarchies, offering a new paradigm in
computing efficiency and speed. These developments indicate a near-future where
computing systems are significantly more responsive and powerful, leveraging
direct, high-speed memory access mechanisms.
|
We propose a class of line-transformed cylindrical cloaks which have
easily-realizable constitutive parameters. The scattering properties of such
cloaks have been investigated numerically for both transverse-electric (TE) and
transverse-magnetic (TM) incidences of plane waves. A line-transformed
invisibility cloak with a perfectly electric conducting (PEC) inner boundary is
actually a reshaping of a PEC line to which the cloaked object is crushed. The
numerical results of near-field distributions and far-field scattering
properties have verified the above conclusions. We also investigate the
relationship between the constitutive parameters of a line-transformed cloak
and the length of the corresponding line. The range over which the constitutive
parameters vary is large when the line is short and becomes small when the line
is long. This observation provides an efficient way to realize invisibility
cloaks using artificial metamaterials.
|
In scenarios of strongly coupled electroweak symmetry breaking, heavy
composite particles of different spin and parity may arise and cause observable
effects on signals that appear at loop levels. The recently observed process of
Higgs to $\gamma \gamma$ at the LHC is one such signal. We study the new
constraints that are imposed on composite models from $H\to \gamma\gamma$,
together with the existing constraints from the high precision electroweak
tests. We use an effective chiral Lagrangian to describe the effective theory
that contains the Standard Model spectrum and the extra composites below the
electroweak scale. Considering the effective theory cutoff at $\Lambda = 4\pi v
\sim 3 $ TeV, consistency with the $T$ and $S$ parameters and the newly
observed $H\to \gamma\gamma$ can be found for a rather restricted range of
masses of vector and axial-vector composites from $1.5$ TeV to $1.7$ TeV and
$1.8$ TeV to $1.9$ TeV, respectively, and only provided a non-standard kinetic
mixing between the $W^{3}$ and $B^{0}$ fields is included.
|
In this letter we study both ground state properties and the superfluid
transition temperature of a spin-1/2 Fermi gas across a Feshbach resonance with
a synthetic spin-orbit coupling, using mean-field theory and the exact solution
of the two-body problem. We show that a strong spin-orbit coupling can
significantly
enhance the pairing gap for $1/(k_F a_s)\leq 0$ due to the increased density of
states. Strong spin-orbit coupling also significantly enhances the superfluid
transition temperature when $1/(k_F a_s)\leq 0$, while suppressing it slightly
when $1/(k_F a_s)>0$. The universal interaction energy and pair size at
resonance are also discussed.
|
This thesis provides an introduction to the various category theory ideas
employed in topological quantum field theory. These theories are viewed as
symmetric monoidal functors from topological cobordism categories into the
category of vector spaces. In two dimensions, they are classified by Frobenius
algebras. In three dimensions, and under certain conditions, they are
classified by modular categories. These are special kinds of categories in
which topological notions such as braidings and twists play a prominent role.
There is a powerful graphical calculus available for working in such
categories, which may be regarded as a generalization of the Feynman diagrams
method familiar in physics. This method is introduced and the necessary
algebraic structure is graphically motivated step by step.
A large subclass of two-dimensional topological field theories can be
obtained from a lattice gauge theory construction using triangulations. In
these theories, the gauge group is finite. This construction is reviewed, from
both the original algebraic perspective as well as using the graphical calculus
developed in the earlier chapters.
This finite gauge group toy model can be defined in all dimensions, and has a
claim to being the simplest non-trivial quantum field theory. We take the
opportunity to show explicitly the calculation of the modular category arising
from this model in three dimensions, and compare this algebraic data with the
corresponding data in two dimensions, computed both geometrically and from
triangulations. We use this as an example to introduce the idea of a quantum
field theory as producing a tower of algebraic structures, each dimension
related to the previous by the process of categorification.
|
In the present paper we analyze and discuss some mathematical aspects of the
fluid-static configurations of a self-gravitating perfect gas enclosed in a
spherical solid shell. The mathematical model we consider is based on the
well-known Lane-Emden equation, albeit under boundary conditions that differ
from those usually assumed in the astrophysical literature. The existence of
multiple solutions requires particular attention in devising appropriate
numerical schemes capable of capturing the solution multiplicity as
efficiently and accurately as possible. We then describe some
analytical properties of the model, the two algorithms used to obtain numerical
solutions, and the numerical results for two selected cases.
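For concreteness, the classical Lane-Emden equation $\theta'' + (2/\xi)\theta' + \theta^n = 0$ with $\theta(0)=1$, $\theta'(0)=0$ can be integrated numerically as sketched below. This is a standard RK4 sketch under the usual astrophysical boundary conditions, not the modified boundary conditions analyzed in the paper, and the step size `h` is an assumption.

```python
import numpy as np

def lane_emden(n, xi_max=10.0, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta^n = 0 with
    theta(0)=1, theta'(0)=0, via RK4; stop when theta reaches zero."""
    def rhs(xi, y):
        theta, dtheta = y
        tn = max(theta, 0.0) ** n  # clamp to avoid complex powers
        return np.array([dtheta, -tn - 2.0 * dtheta / xi])

    # start a step away from the singular point xi = 0 using the
    # series expansion theta ~ 1 - xi^2/6, theta' ~ -xi/3
    xi = h
    y = np.array([1.0 - h**2 / 6.0, -h / 3.0])
    xs, thetas = [xi], [y[0]]
    while xi < xi_max and y[0] > 0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, y + h / 2 * k1)
        k3 = rhs(xi + h / 2, y + h / 2 * k2)
        k4 = rhs(xi + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        xi += h
        xs.append(xi)
        thetas.append(y[0])
    return np.array(xs), np.array(thetas)
```

For $n=1$ the analytic solution is $\theta(\xi) = \sin\xi/\xi$ with first zero at $\xi = \pi$, which provides a convenient check on the integrator.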
|
Spectrally-resolved observations of three pure rotational lines of H$_2$,
conducted with the EXES instrument on SOFIA toward the classic bow shock HH7,
reveal systematic velocity shifts between the S(5) line of ortho-H$_2$ and the
two para-H$_2$ lines [S(4) and S(6)] lying immediately above and below it on
the rotational ladder. These shifts, reported here for the first time, imply
that we are witnessing the conversion of para-H$_2$ to ortho-H$_2$ within a
shock wave driven by an outflow from a young stellar object. The observations
are in good agreement with the predictions of models for non-dissociative,
C-type molecular shocks. They provide a clear demonstration of the chemical
changes wrought by interstellar shock waves, in this case the conversion of
para-H$_2$ to ortho-H$_2$ in reactive collisions with atomic hydrogen, and
provide among the most compelling evidence yet obtained for C-type shocks in
which the flow velocity changes continuously.
|
Initially, a number of frequent itemset mining (FIM) algorithms were designed
on Hadoop MapReduce, a distributed big-data processing framework. However, due
to heavy disk I/O, MapReduce has proven inefficient for such
highly iterative algorithms. Therefore, Spark, a more efficient distributed
data processing framework, has been developed with in-memory computation and
resilient distributed dataset (RDD) features to support the iterative
algorithms. On the Spark RDD framework, Apriori and FP-Growth based FIM
algorithms have been designed, but an Eclat-based algorithm has not yet been
explored. In this paper, RDD-Eclat, a parallel Eclat algorithm on the Spark
RDD framework, is proposed along with five variants. The proposed algorithms are
evaluated on various benchmark datasets, and the results show that RDD-Eclat
outperforms Spark-based Apriori by several times. The experiments also
demonstrate the scalability of the proposed algorithms as the number of cores
and the size of the dataset increase.
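The serial core that Eclat-style miners parallelize works on a vertical layout (item → tidset) and computes supports by tidset intersection. The following is a minimal single-machine sketch of that idea, not the proposed RDD-Eclat itself.

```python
from collections import defaultdict

def eclat(transactions, min_support):
    """Frequent itemset mining in vertical format: each item maps to the
    set of transaction ids (tidset); supports come from intersections."""
    tidsets = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets[item].add(tid)
    # keep frequent single items, sorted for a canonical search order
    items = sorted(i for i, t in tidsets.items() if len(t) >= min_support)
    frequent = {}

    def extend(prefix, prefix_tids, candidates):
        for idx, item in enumerate(candidates):
            tids = prefix_tids & tidsets[item]  # support via intersection
            if len(tids) >= min_support:
                itemset = prefix + (item,)
                frequent[itemset] = len(tids)
                extend(itemset, tids, candidates[idx + 1:])

    extend((), set(range(len(transactions))), items)
    return frequent
```

Because each equivalence class (a shared prefix and its tidset) can be mined independently, this recursion maps naturally onto distributed partitions, which is what the RDD-based variants exploit.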
|
The lepton flavor violating decay of the Standard Model-like Higgs (LFVHD) is
discussed in the framework of the radiative neutrino mass model built in
\cite{Kenji}. The branching ratio (BR) of the LFVHD is shown to reach
$10^{-5}$ in the most interesting region of the parameter space shown in
\cite{Kenji}. The dominant contributions come from the singly charged Higgs
mediations, namely the coupling of $h^\pm_2$ with exotic neutrinos.
Furthermore, if doubly charged Higgs is heavy enough to allow the mass of
$h^\pm_2$ around 1 TeV, this BR can reach $10^{-4}$. In addition, we find that
large values of Br$(h\rightarrow\mu\tau)$ lead to very small values of
Br$(h\rightarrow e\tau)$, much smaller than the sensitivities of current
experiments.
|