In federated learning, machine learning and deep learning models are trained
globally on distributed devices. The state-of-the-art privacy-preserving
technique in the context of federated learning is user-level differential
privacy. However, such a mechanism is vulnerable to certain model poisoning
attacks such as Sybil attacks, in which a malicious adversary creates multiple
fake clients or coordinates compromised devices to directly manipulate model
updates. Recent defenses against model poisoning attacks struggle to detect
Sybil attacks when differential privacy is utilized, as it masks clients'
model updates with perturbation. In this work, we implement the first Sybil
attacks on differential-privacy-based federated learning architectures and
show their impact on model convergence. We compromise a random subset of
clients and manipulate the noise level, controlled by the local privacy budget
epsilon, that is applied to the local model updates of these Sybil clients,
such that the global model's convergence slows or the model even diverges. We
apply our attacks to
two recent aggregation defense mechanisms, called Krum and Trimmed Mean. Our
evaluation results on the MNIST and CIFAR-10 datasets show that our attacks
effectively slow down the convergence of the global models. We then propose a
defense that monitors the average loss of all participants in each round to
detect convergence anomalies and counters our Sybil attacks based on the
prediction cost reported by each client. Our empirical study demonstrates
that our defense approach effectively mitigates the impact of our Sybil attacks
on model convergence.
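The attack surface described above rests on a simple fact: under local
differential privacy, the noise a client adds grows as its privacy budget
epsilon shrinks. The sketch below is our own minimal illustration (hypothetical
parameter names; the paper's exact clipping and noise mechanism may differ),
assuming the common Laplace mechanism with noise scale = clip_norm / epsilon:

```python
import numpy as np

def dp_perturb_update(update, clip_norm=1.0, epsilon=1.0, rng=None):
    """Clip a local model update to `clip_norm` and add Laplace noise
    calibrated to the local privacy budget: scale = clip_norm / epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.laplace(0.0, clip_norm / epsilon, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
update = 0.1 * np.ones(10)          # a benign local update
honest = dp_perturb_update(update, epsilon=1.0, rng=rng)
# A Sybil client silently shrinks its epsilon, flooding the aggregate
# with noise while still looking like a legitimate DP participant.
sybil = dp_perturb_update(update, epsilon=0.01, rng=rng)
```

Averaged over many clients, even a handful of such low-epsilon updates inflates
the variance of the aggregated model and slows convergence.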
|
Application of deep learning on histopathological whole slide images (WSIs)
holds promise of improving diagnostic efficiency and reproducibility but is
largely dependent on the ability to write computer code or purchase commercial
solutions. We present a code-free pipeline utilizing free-to-use, open-source
software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep
learning-based segmentation models for computational pathology. We demonstrate
the pipeline on a use case of separating epithelium from stroma in colonic
mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin
(HE)-stained and 111 CD3-immunostained colon biopsy WSIs, was developed
through active learning using the pipeline. On a hold-out test set of 36 HE-
and 21 CD3-stained WSIs, mean intersection over union scores of 96.6% and
95.3%, respectively, were achieved for epithelium segmentation. We demonstrate
pathologist-level segmentation accuracy and clinically acceptable runtime
performance and show that
pathologists without programming experience can create near state-of-the-art
segmentation solutions for histopathological WSIs using only free-to-use
software. The study further demonstrates the strength of open-source
solutions in their ability to create generalizable, open pipelines, from which
trained models
and predictions can seamlessly be exported in open formats and thereby used in
external solutions. All scripts, trained models, a video tutorial, and the full
dataset of 251 WSIs with ~31k epithelium annotations are made openly available
at https://github.com/andreped/NoCodeSeg to accelerate research in the field.
|
We present a study of decimetric radio activity, using the first high time
cadence (0.5 s) images from the ${\textit{Giant Metrewave Radio Telescope}}$
(${\textit{GMRT}}$) at 610 MHz, associated with ${\textit{GOES}}$ C1.4 and M1.0
class solar flares, and a coronal mass ejection (CME) onset that occurred on 20
June 2015. The high spatial resolution images from ${\textit{GMRT}}$ show a
strong radio source during the C1.4 flare, located ~500$''$ away from the
flaring site with no corresponding bright footpoints or coronal features
nearby. In contrast, strong radio sources are found near the flaring
site during the M1.0 flare and around the CME onset time. Weak radio sources,
located near the flaring site, are also found during the maximum of the C1.4
flare activity, which show a temporal association with metric type III bursts
identified by the Solar Broadband Radio Spectrometer at Yunnan Astronomical
Observatory. Based on a multi-wavelength analysis and magnetic potential field
source surface extrapolations, we suggest that the source electrons of
${\textit{GMRT}}$ radio sources and metric type III bursts originated from a
common electron acceleration site. We also show that the strong
${\textit{GMRT}}$ radio source is generated by a coherent emission process and
its apparent location far from the flaring site is possibly due to the
wave-ducting effect.
|
Hitting times for discrete quantum walks on graphs give an average time
before the walk reaches an ending condition. To be analogous to the hitting
time for a classical walk, the quantum hitting time must involve repeated
measurements as well as unitary evolution. We derive an expression for hitting
time using superoperators, and numerically evaluate it for the discrete walk on
the hypercube. The values found are compared to other analogues of hitting time
suggested in earlier work. The dependence of hitting times on the type of
unitary ``coin'' is examined, and we give an example of an initial state and
coin which gives an infinite hitting time for a quantum walk. Such infinite
hitting times require destructive interference, and are not observed
classically. Finally, we look at distortions of the hypercube, and observe that
a loss of symmetry in the hypercube increases the hitting time. Symmetry seems
to play an important role in both dramatic speed-ups and slow-downs of quantum
walks.
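A measured-walk hitting time of this kind can be evaluated numerically with a
few lines of linear algebra. The sketch below is our own toy illustration (not
the paper's superoperator code): it runs a Grover-coined walk on the 3-cube,
projects out the target vertex after every unitary step, and accumulates the
first-detection distribution:

```python
import numpy as np

def measured_hitting_time(n=3, target=None, steps=200):
    """Coined quantum walk on the n-cube with a projective measurement of
    the target vertex after every unitary step.  Returns the truncated
    mean hitting time sum_t t*p_t, the total detection probability
    sum_t p_t, and the surviving (undetected) norm."""
    V, D = 2 ** n, n
    target = V - 1 if target is None else target
    grover = 2.0 / D * np.ones((D, D)) - np.eye(D)   # Grover coin (unitary)
    psi = np.zeros((V, D), dtype=complex)
    psi[0, :] = 1.0 / np.sqrt(D)                     # symmetric start at vertex 0
    t_mean, p_total = 0.0, 0.0
    for t in range(1, steps + 1):
        psi = psi @ grover.T                         # coin flip on direction index
        new = np.zeros_like(psi)
        for d in range(D):                           # shift: flip bit d of the vertex
            new[np.arange(V) ^ (1 << d), d] = psi[:, d]
        psi = new
        p_t = np.sum(np.abs(psi[target]) ** 2)       # first-detection probability
        t_mean += t * p_t
        p_total += p_t
        psi[target, :] = 0.0                         # project out the detected part
    return t_mean, p_total, np.sum(np.abs(psi) ** 2)

t_mean, p_total, leftover = measured_hitting_time()
```

Probability is conserved (p_total + leftover = 1 up to rounding); an initial
state engineered to interfere destructively at the target keeps p_total bounded
away from 1 forever, which is the infinite-hitting-time phenomenon discussed
above.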
|
The effect of the proximity of the magnetism of the Pr-based manganite
(Pr0.6Sr0.4MnO3) on the superconductivity of the Bi-based high-temperature
superconductor (Bi1.75Pb0.25Sr2Ca2Cu3O10+d) was studied using magnetotransport
and magnetization measurements. A decrease in the value of the upper critical
field (HC2(0)) and an increase in the width of
the superconducting transition (Delta TC) of Bi1.75Pb0.25Sr2Ca2Cu3O10+d were
observed in proximity with the manganite. The combined effect of magnetic
exchange interaction arising from the manganite, the leakage of Cooper-pairs
from the superconductor into the manganite, and the diffusion and transport of
spin-polarized electrons from the manganite into the superconductor were found
to modify the superconducting properties of Bi1.75Pb0.25Sr2Ca2Cu3O10+d. The
stacking sequence of the individual layers in these heterostructures was found
to dictate the ground-state properties of the heterostructure. As a
consequence of the proximity effect, a colossal magnetoresistance (CMR) ratio
as high as ~99% is observed in the heterostructure, making these thin-film
heterostructures promising candidates for technological applications.
|
This article describes various misconceptions and misinterpretations
concerning the presentation of the hysteresis loop for ferromagnets occurring
in undergraduate textbooks. These problems came to our attention while
teaching a solid state / condensed matter physics (SSP/CMP) course. A closer
look at the definition of 'coercivity' reveals two distinct notions, referring
to the two forms of the hysteresis loop, B vs. H and M vs. H, which can easily
be confused and, in fact, are confused in several textbooks. The properties of
the M vs. H type hysteresis loop are often ascribed to B vs. H type loops,
giving rise to various
misconceptions. An extensive survey of textbooks at first in the SSP/CMP area
and later extended into the areas of general physics, materials science and
magnetism / electromagnetism has been carried out. Relevant encyclopedias and
physics dictionaries have also been consulted. The survey has revealed
further substantial misconceptions and/or misinterpretations beyond those
originally identified in the SSP/CMP area. The results are presented here to
help clarify the misconceptions and misinterpretations in question. The
physics education aspects arising from the textbook survey are also discussed.
Additionally, an analysis of CMP examination results concerning questions
pertinent to the hysteresis loop is provided.
|
Infinitesimal bialgebras were introduced by Joni and Rota. An infinitesimal
bialgebra is at the same time an algebra and a coalgebra, in such a way that the
comultiplication is a derivation. Twenty years after Joni and Rota, Aguiar
introduced the concept of an infinitesimal (non-unitary) Hopf algebra. In this
paper we study infinitesimal unitary bialgebras and infinitesimal unitary Hopf
algebras, in contrast to Aguiar's approach. Using an infinitesimal version of
the Hochschild 1-cocycle condition, we prove respectively that a class of
decorated planar rooted forests is the free cocycle infinitesimal unitary
bialgebra and free cocycle infinitesimal unitary Hopf algebra on a set. As an
application, we obtain that the space of planar rooted forests is the free
cocycle infinitesimal unitary Hopf algebra on the empty set.
|
We present a dataset of open source software developed mainly by enterprises
rather than volunteers. This can be used to address known generalizability
concerns and also to perform research on open source business software
development. Based on the premise that an enterprise's employees are likely to
contribute to a project developed by their organization using the email account
provided by it, we mine domain names associated with enterprises from open data
sources as well as through white- and blacklisting, and use them through three
heuristics to identify 17,264 enterprise GitHub projects. We provide these as a
dataset detailing their provenance and properties. A manual evaluation of a
dataset sample shows an identification accuracy of 89%. Through an exploratory
data analysis we found that projects are staffed by a plurality of enterprise
insiders, who appear to be pulling more than their weight, and that in a small
percentage of relatively large projects development happens exclusively through
enterprise insiders.
|
The lifetimes of charmed mesons have been measured using 11.1 fb$^{-1}$ of data
collected with the Belle detector at KEKB. Each candidate is fully
reconstructed to identify the flavor of the charmed meson. The lifetimes are
measured to be $\tau(D^0)=(414.5\pm1.7)$ fs, $\tau(D^+)=(1029\pm12)$ fs and
$\tau(D_s^+)=(488.4^{+7.8}_{-7.7})$ fs, where the errors are statistical only.
The mixing parameter $y_{CP}$ is also measured to be
$y_{CP}=(1.16^{+1.67}_{-1.65}\,(\mathrm{stat.}))\%$ through the lifetime difference of
$D^0$ mesons decaying into CP-mixed states and CP eigenstates.
|
A great deal of recent data on event-by-event fluctuation and correlation
measurements has been released by several experiments at the SPS and RHIC.
Recent results on charge fluctuations, balance functions in pseudorapidity, and
average transverse momentum fluctuations will be reviewed. The results will be
compared to various model predictions after examining contributions to each
observable from known physics processes.
|
We present a brief introduction to the statistical properties of systems with
large fluctuations. We point out that for such systems the relevant statistical
quantities are scaling exponents, and the nature of the fluctuations is
completely different from that of systems with small fluctuations. We then
propose two tests to be performed on galaxy count data as a function of
apparent magnitude, which may clarify in a direct way the nature of
large-scale galaxy clustering.
|
We review the geometrical formulation of Quantum Mechanics to identify,
according to Klein's programme, the corresponding group of transformations. For
closed systems, it is the unitary group. For open quantum systems, the
semigroup of Kraus maps contains, as a maximal subgroup, the general linear
group. The same group emerges as the exponentiation of the $C^{*}$--algebra
associated with the quantum system, when thought of as a Lie algebra. Thus,
open quantum systems seem to identify the general linear group as associated
with quantum mechanics and moreover suggest extending the Klein programme to
groupoids as well. The usual unitary group emerges as a maximal compact subgroup of
the general linear group.
|
A simple variogram model with two parameters is presented that includes the
power variogram for the fractional Brownian motion, a modified De Wijsian
model, the generalized Cauchy model and the multiquadrics model. One parameter
controls the smoothness of the process. The other parameter allows for a smooth
parametrization between stationary and intrinsically stationary second order
processes in a Gaussian framework, or between mixing and non-ergodic max-stable
processes when modeling spatial extremes by a Brown-Resnick process.
|
Substellar multiplicity is a key outcome of the formation process. The
biggest challenge for the next decade will be to distinguish between the
formation history, environmental conditions, and dynamical evolution leading to
the least massive brown dwarfs and the most massive planets at the tail ends of
their mass functions. In this white paper, we advocate for a comprehensive
characterization of both the statistical distributions of the population of
ultracool dwarf multiple systems and the fundamental properties of their
individual components as a function of age. A space-based precision astrometry
mission in near-infrared wavelengths would provide the necessary measurements
to identify and characterize age-calibrated populations of multiple systems.
|
A Nijenhuis operator on a manifold $M$ is a $(1,1)$ tensor $\mathcal N$ whose
Nijenhuis-torsion vanishes. A Nijenhuis operator $\mathcal N$ on $M$ determines
a Lie algebroid structure $(TM)_{\mathcal N}$ on the tangent bundle $TM$. In
this sense a Nijenhuis operator can be seen as an infinitesimal object. In this
paper, we identify its "global counterpart". Namely, we show that when the Lie
algebroid $(TM)_{\mathcal N}$ is integrable, then it integrates to a Lie
groupoid equipped with appropriate additional structure responsible for
$\mathcal N$, and vice versa, the Lie algebroid of a Lie groupoid equipped with
such additional structure is of the type $(TM)_{\mathcal N}$ for some Nijenhuis
operator $\mathcal N$. We illustrate our integration result in various
examples.
|
It has frequently been claimed in the literature that the classical physical
predictions of scalar tensor theories of gravity depend on the conformal frame
in which the theory is formulated. We argue that this claim is false, and that
all classical physical predictions are conformal-frame invariants. We also
respond to criticisms by Vollick [gr-qc/0312041], in which this issue arises,
of our recent analysis of the Palatini form of 1/R gravity.
|
We present maps classifying regions of the sky according to their information
gain potential as quantified by the Fisher information. These maps can guide
the optimal retrieval of relevant physical information with targeted
cosmological searches. Specifically, we calculate the response of observed
cosmic structures to perturbative changes in the cosmological model and chart
their respective contributions to the Fisher information. Our physical forward
modeling machinery transcends the limitations of contemporary analyses based on
statistical summaries to yield detailed characterizations of individual 3D
structures. We demonstrate this using galaxy counts data and showcase the
potential of our approach by studying the information gain of the Coma cluster.
We find that regions in the vicinity of the filaments and cluster core, where
mass accretion ensues from gravitational infall, are the most informative about
our physical model of structure formation in the Universe. Hence, collecting
data in those regions would be optimal for testing our model predictions.
The results presented in this work are the first of their kind and elucidate
the inhomogeneous distribution of cosmological information in the Universe.
This study paves a new way forward to perform efficient targeted searches for
the fundamental physics of the Universe, where search strategies are
progressively refined with new cosmological data sets within an active learning
framework.
|
Extensions of the Standard Model often come with additional, possibly
electroweakly charged Higgs states, the prototypical example being the
Two-Higgs-Doublet Model. While collider phenomenology does not exclude the
possibility for some of these new scalar fields to be light, it is relatively
natural to consider masses in the multi-TeV range, in which case the only
remaining light Higgs boson automatically receives SM-like properties. The
appearance of a hierarchy between the new-physics states and the electroweak
scale then leads to sizable electroweak corrections, e.g. in the decays of
the heavy Higgs bosons, which are dominated by effects of infrared type, namely
Sudakov logarithms. Such radiative contributions obviously affect the two-body
decays, but should also be paired with the radiation of electroweak gauge
bosons (or lighter Higgs bosons) for a consistent picture at the one-loop
order. Resummation of the leading terms is also relatively easy to achieve.
We revisit these questions in the specific case of the fermionic decays of heavy
Higgs particles in the Next-to-Minimal Supersymmetric Standard Model, in
particular pointing out the consequences of the three-body final states for the
branching ratios of the heavy scalars.
|
Node similarity scores are a foundation for machine learning in graphs for
clustering, node classification, anomaly detection, and link prediction with
applications in biological systems, information networks, and recommender
systems. Recent works on link prediction use vector space embeddings to
calculate node similarities in undirected networks with good performance.
Still, they have several disadvantages: limited interpretability, need for
hyperparameter tuning, manual model fitting through dimensionality reduction,
and poor performance from symmetric similarities in directed link prediction.
We propose MapSim, an information-theoretic measure to assess node similarities
based on modular compression of network flows. Unlike vector space embeddings,
MapSim represents nodes in a discrete, non-metric space of communities and
yields asymmetric similarities in an unsupervised fashion. We compare MapSim on
a link prediction task to popular embedding-based algorithms across 47 networks
and find that MapSim's average performance across all networks is more than 7%
higher than its closest competitor, outperforming all embedding methods in 11
of the 47 networks. Our method demonstrates the potential of compression-based
approaches in graph representation learning, with promising applications in
other graph learning tasks.
|
We analyze the existence of vector spaces of large dimension inside the set
$\mathcal{C}(L, \K) \setminus \overline{\mathcal{A}}$, where $L$ is a compact
Hausdorff space and $\mathcal{A}$ is a self-adjoint subalgebra of $\mathcal
C(L, \K)$ that vanishes nowhere on $L$ and does not necessarily separate the
points of $L$. The results depend strongly on an equivalence relation that is
defined on the algebra $\mathcal{A}$, denoted by $\sim_{\mathcal{A}}$, and a
cardinal number that depends on $\sim_{\mathcal{A}}$ which we call the order of
$\sim_{\mathcal{A}}$. We then introduce two different cases, when the order of
$\sim_{\mathcal{A}}$ is finite or infinite. In the finite case, we show that
$\mathcal{C}(L, \K) \setminus \overline{\mathcal{A}}$ is $n$-lineable but not
$(n+1)$-lineable with $n$ being the order of $\sim_{\mathcal{A}}$. On the other
hand, when the order of $\sim_{\mathcal{A}}$ is infinite, we obtain general
results assuming, for instance, that the codimension of the closure of
$\mathcal{A}$ is infinite or when $L$ is sequentially compact. To be more
precise, we introduce the notion of the Stone-Weierstrass character of $L$ which
is closely related to the topological weight of $L$ and allows us to describe
the lineability of $\mathcal{C}(L, \K) \setminus \overline{\mathcal{A}}$ in
terms of the Stone-Weierstrass character of subsets of $\sim_{\mathcal A}$. We
also prove, in the classical case, that $(\mathcal{C}(\partial{D}, \C)
\setminus \overline{\mbox{Pol}(\partial{D})}) \cup \{0\}$ (where
$\mbox{Pol}(\partial{D})$ is the set of all complex polynomials in one variable
restricted to the boundary of the unit disk) contains an isometric copy of
$\text{Hol}(\partial{D})$ and is strongly $\mathfrak c$-algebrable, extending
previous results from the literature.
|
Gamma-ray telescopes in space are bombarded by large fluxes of charged
particles, photons and secondary neutrons. These particles and radiation pose a
threat to the nominal operation of satellites and limit the detection
sensitivity of gamma-ray instruments. The background noise generated in
gamma-ray space detectors by impinging particles is always much higher than the
astrophysical signal to be detected. In this chapter, we present the different
types of orbits suitable for gamma-ray missions, discussing their advantages
and disadvantages, as well as the value of experiments flown on stratospheric
balloons. We then review the physical properties of all the
background components in the different orbits and the stratosphere.
|
Measuring the complex permittivity of materials is essential in many
scenarios such as quality checks and component analysis. Generally,
measurement methods for characterizing materials are based on a vector network
analyzer, which is bulky and ill-suited to on-site measurement, especially in
high frequency ranges such as millimeter wave (mmWave). In addition, some
measurement methods require the destruction of samples, making them unsuitable
for non-destructive inspection. In this work, a small distance increment (SDI)
method is proposed to non-destructively measure the complex permittivity of
material. In SDI, the transmitter and receiver are formed as the monostatic
radar, which is facing towards the material under test (MUT). During the
measurement, the distance between radar and MUT changes with small increments
and the signals are recorded at each position. A mathematical model is
formulated to depict the relationship among the complex permittivity, distance
increment, and measured signals. By fitting the model, the complex permittivity
of MUT is estimated. To implement and evaluate the proposed SDI method, a
commercial off-the-shelf mmWave radar is utilized and the measurement system is
developed. The evaluation was then carried out on an acrylic plate. With the
proposed method, the estimated complex permittivity of the acrylic plate shows
good agreement with literature values, demonstrating the efficacy of the SDI
method for characterizing the complex permittivity of materials.
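A toy version of such a model illustrates the fitting idea. The sketch below is
our own simplified forward model, not the paper's: the received signal is a
distance-independent leakage term plus a normal-incidence reflection from a
half-space of permittivity eps, with a unit (calibrated) antenna gain. Moving
the radar in small increments rotates only the phase of the MUT echo, so linear
least squares separates the echo from the leakage, and the reflection
coefficient is then inverted for eps:

```python
import numpy as np

c0 = 3e8
f = 60e9                        # assumed mmWave carrier frequency
k0 = 2 * np.pi * f / c0         # free-space wavenumber

def gamma_from_eps(eps):
    """Normal-incidence reflection coefficient of a dielectric half-space."""
    n = np.sqrt(eps)
    return (1 - n) / (1 + n)

def estimate_eps(d, s):
    """Fit s(d) = leak + Gamma * exp(-2j*k0*d) by linear least squares,
    then invert Gamma for the complex permittivity."""
    M = np.stack([np.ones_like(d, dtype=complex), np.exp(-2j * k0 * d)], axis=1)
    coef, *_ = np.linalg.lstsq(M, s, rcond=None)
    gamma = coef[1]             # unit antenna gain assumed (calibrated)
    n = (1 - gamma) / (1 + gamma)
    return n ** 2

# Synthetic measurement: acrylic-like permittivity, static leakage,
# sixteen positions in 0.5 mm increments.
eps_true = 2.6 - 0.02j
leak = 0.3 + 0.1j
d = 0.10 + 0.0005 * np.arange(16)            # metres
s = leak + gamma_from_eps(eps_true) * np.exp(-2j * k0 * d)
eps_hat = estimate_eps(d, s)
```

With noiseless synthetic data the fit recovers the permittivity exactly; in
practice the residual of the same least-squares problem quantifies model
mismatch and noise.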
|
This work proposes a novel, general and robust method of determining bond
micromoduli for anisotropic linear elastic bond-based peridynamics. The problem
of finding a discrete distribution of bond micromoduli that reproduces an
anisotropic peridynamic stiffness tensor is cast as a least-squares problem.
The proposed numerical method finds a distribution of bond micromoduli that
exactly reproduces a desired anisotropic stiffness tensor, provided Cauchy's
relations are satisfied. Examples of all eight possible elastic material
symmetries, from triclinic to isotropic, are given and
discussed in depth. Parametric studies are conducted to demonstrate that the
numerical method is robust enough to handle a variety of horizon sizes,
neighborhood shapes, influence functions and lattice rotation effects. Finally,
an example problem is presented to demonstrate that the proposed method is
physically sound and that the solution agrees with the analytical solution from
classical elasticity. The proposed method has great potential for modeling of
deformation and fracture in anisotropic materials with bond-based peridynamics.
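The least-squares casting can be sketched in a few lines. Below is our own 2D
toy, not the paper's formulation: each bond xi in the horizon contributes a
fully symmetric fourth-order basis term xi⊗xi⊗xi⊗xi/|xi|^3 to the stiffness
tensor, and the bond micromoduli solve a linear least-squares problem against
the flattened target tensor:

```python
import numpy as np

horizon = 3.01
# Bonds: 2D lattice vectors within the horizon (excluding the zero vector).
bonds = np.array([(i, j) for i in range(-3, 4) for j in range(-3, 4)
                  if 0 < np.hypot(i, j) <= horizon], dtype=float)

def bond_basis(xi):
    """Flattened fourth-order contribution xi_i xi_j xi_k xi_l / |xi|^3."""
    r = np.linalg.norm(xi)
    return np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi).ravel() / r ** 3

# Columns of A are the per-bond contributions to the (flattened) stiffness.
A = np.stack([bond_basis(xi) for xi in bonds], axis=1)

# Manufactured anisotropic target: micromoduli varying with bond angle.
theta = np.arctan2(bonds[:, 1], bonds[:, 0])
c_true = 1.0 + 0.5 * np.cos(2 * theta) ** 2
C_target = A @ c_true

# Recover a (generally non-unique) micromoduli distribution by least squares.
c_fit, *_ = np.linalg.lstsq(A, C_target, rcond=None)
C_fit = A @ c_fit
```

Because the target lies in the column space of A, the least-squares solution
reproduces it exactly, even though the micromoduli themselves need not be
unique.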
|
We consider the problem of jointly estimating a collection of graphical
models for discrete data, corresponding to several categories that share some
common structure. An example for such a setting is voting records of
legislators on different issues, such as defense, energy, and healthcare. We
develop a Markov graphical model to characterize the heterogeneous dependence
structures arising from such data. The model is fitted via a joint estimation
method that preserves the underlying common graph structure, but also allows
for differences between the networks. The method employs a group penalty that
targets the common zero interaction effects across all the networks. We apply
the method to describe the internal networks of the U.S. Senate on several
important issues. Our analysis reveals individual structure for each issue,
distinct from the underlying well-known bipartisan structure common to all
categories which we are able to extract separately. We also establish
consistency of the proposed method both for parameter estimation and model
selection, and evaluate its numerical performance on a number of simulated
examples.
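The group penalty referred to above acts through block soft-thresholding: an
interaction parameter shared across all categories is shrunk jointly and zeroed
out as a whole group once its norm falls below the threshold. A minimal sketch
(the generic group-lasso proximal operator, not the authors' full fitting
procedure):

```python
import numpy as np

def group_soft_threshold(groups, lam):
    """Proximal operator of the group penalty lam * sum_g ||theta_g||_2:
    shrinks each group's norm by lam, zeroing the whole group when its
    norm does not exceed lam."""
    out = []
    for g in groups:
        norm = np.linalg.norm(g)
        out.append(np.zeros_like(g) if norm <= lam else (1 - lam / norm) * g)
    return out

# One shared edge parameter per category: a weak edge is removed in *all*
# networks at once, while a strong edge survives (with shrinkage) in all.
weak = np.array([0.05, -0.03, 0.02])
strong = np.array([1.0, 0.8, 1.2])
thresholded = group_soft_threshold([weak, strong], lam=0.2)
```

This joint zeroing is what preserves the common graph structure while still
allowing the surviving edge weights to differ across categories.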
|
Anatomically guided PET reconstruction using MRI information has been shown
to have the potential to improve PET image quality. However, these improvements
are limited to PET scans with paired MRI information. In this work we employed
a diffusion probabilistic model (DPM) to infer T1-weighted-MRI (deep-MRI)
images from FDG-PET brain images. We then use the DPM-generated T1w-MRI to
guide the PET reconstruction. The model was trained with brain FDG scans, and
tested on datasets containing multiple count levels. Deep-MRI images appeared
somewhat degraded compared to the acquired MRI images. Regarding PET image
quality, volume-of-interest analysis in different brain regions showed that
PET images reconstructed using either the acquired or the deep-MRI images
improved image quality compared to OSEM. The same conclusions were reached
when analysing the decimated datasets. A subjective evaluation performed by
two physicians
confirmed that OSEM scored consistently worse than the MRI-guided PET images
and no significant differences were observed between the MRI-guided PET images.
This proof of concept shows that it is possible to infer DPM-based MRI imagery
to guide the PET reconstruction, enabling the possibility of changing
reconstruction parameters such as the strength of the prior on anatomically
guided PET reconstruction in the absence of MRI.
|
Highly efficient anti-Stokes (AS) photoluminescence (PL) is observed from
halide perovskite quantum dots (QDs) due to their strong electron-phonon
interactions. The AS PL is particularly intriguing as it suggests the potential
for semiconductor optical cooling if the external quantum efficiency approaches
100%. However, the PL quantum efficiency in QDs is primarily dominated by
multiparticle nonradiative Auger recombination processes under intense
photoexcitation, which impose limits on the optical cooling gain. Here, we
investigate the Auger recombination of dot-in-crystal perovskites. We
quantitatively estimate the maximum optical cooling gain and the corresponding
excitation intensity. We further conducted optical cooling experiments and
demonstrate a maximum photo-cooling of approximately 9 K from room temperature.
Additionally, we confirmed that increasing the excitation intensity leads to a
transition from photo-cooling to photo-heating. These observations are
consistent with our time-resolved measurements, offering insights into the
potential and limitations of optical cooling in semiconductor QDs.
|
We introduce a perturbation $h_{\mu\nu}$ onto a background Lifshitz spacetime
and examine some of its consequences. In particular, we consider a radially
localized perturbation and compute the resulting holographic Green's function
to linearized order. At leading order, the Lifshitz Green's function
demonstrates suppression of spectral weight at low frequencies, and this
feature allows bulk perturbations in the IR to be partially hidden from local
boundary probes.
|
Fundamentally new families of carbon single-walled nanotubes are proposed.
These nanotubes, called graphynes, result from the elongation of covalent
interconnections of graphite-based nanotubes by the introduction of yne groups.
Similarly to ordinary nanotubes, arm-chair, zig-zag, and chiral graphyne
nanotubes are possible. Electronic properties, predicted using tight-binding
and ab initio density functional methods, show a rich variety of metallic and
semiconducting behaviors.
|
An architectural framework, based on collaborative filtering using K-nearest
neighbor and cosine similarity, was developed and implemented to fit the
requirements of the company DecorRaid. The aim of the paper is to test
different evaluation techniques within this environment to study the
recommender system's performance. Three perspectives were found relevant for
evaluating a recommender system in this specific environment, namely the
dataset, system, and user perspectives. Together, these perspectives made it
possible to gain a broader view of the recommender system's performance.
Online A/B split testing was conducted to compare the performance of small
adjustments to the RS and to test the relevance of the evaluation techniques.
Key factors are solving the
sparsity and cold-start problems; we suggest exploring a hybrid RS combining
content-based and CF-based techniques.
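The core of such a framework, item-based k-NN with cosine similarity, fits in
a few lines. A minimal sketch on a toy ratings matrix (DecorRaid's actual
pipeline is not public, so names and shapes here are illustrative):

```python
import numpy as np

def cosine_sim(R):
    """Pairwise cosine similarity between item column vectors."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    Rn = R / norms
    return Rn.T @ Rn

def knn_scores(R, user, k=2):
    """Item-based k-NN collaborative filtering: score each unrated item
    for `user` by a similarity-weighted average of the user's ratings on
    the k most similar rated items."""
    S = cosine_sim(R)
    r = R[user]
    rated = np.where(r > 0)[0]
    scores = np.zeros(R.shape[1])
    for i in np.where(r == 0)[0]:
        top = rated[np.argsort(S[i, rated])[::-1][:k]]
        w = S[i, top]
        scores[i] = w @ r[top] / (w.sum() + 1e-12)
    return scores

# users x items rating matrix (0 = unrated)
R = np.array([[5, 4, 0, 1],
              [4, 5, 0, 1],
              [1, 1, 5, 4]], dtype=float)
scores = knn_scores(R, user=0)   # predict user 0's missing rating for item 2
```

A cold-start user has an empty `rated` set and receives no scores at all, which
is exactly the sparsity/cold-start weakness that the hybrid content-based
extension is meant to address.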
|
We study dynamics emergent from a two-dimensional reaction-diffusion process
modelled via a finite lattice dynamical system, as well as an analogous PDE
system, involving spatially nonlocal interactions. These models govern the
evolution of cells in a bioactive porous medium, with evolution of the local
cell density depending on a coupled quasi-static fluid flow problem. We
demonstrate differences emergent from the choice of a discrete lattice or a
continuum for the spatial domain of such a process. We find long-time
oscillations and steady states in cell density in both lattice and continuum
models, but that the continuum model only exhibits solutions with vertical
symmetry, independent of initial data, whereas the finite lattice admits
asymmetric oscillations and steady states arising from symmetry-breaking
bifurcations. We conjecture that it is the structure of the finite lattice
which allows for more complicated asymmetric dynamics. Our analysis suggests
that the origin of both types of oscillations is a nonlocal reaction-diffusion
mechanism mediated by quasi-static fluid flow.
|
We applied the currently most comprehensive version of the
statistical-parallax technique to derive kinematical parameters of the maser
sample with 136 sources. Our kinematic model comprises the overall rotation of
the Galactic disk and the spiral density-wave effects. We take into account the
variation of radial velocity dispersion with Galactocentric distance. The best
description of the velocity field is provided by the model with constant radial
and vertical velocity dispersions, $(\sigma_{U0}, \sigma_{W0}) \approx (9.4
\pm 0.9, 5.9 \pm 0.8)$ km/s. We compute a flat Galactic rotation curve over
the Galactocentric distance interval from 3 to 15 kpc and find the local
circular
rotation velocity to be $V_0 \approx (235-238) \pm 7$ km/s. We also
determine the parameters of the four-armed spiral pattern (pitch angle $i
\approx (-10.4 \pm 0.3)^\circ$ and the phase of the Sun $\chi_0 \approx (125
\pm 10) ^\circ$). The radial and tangential spiral perturbations are about $f_R
\approx (-6.9 \pm 1.4)$ km/s, $f_\Theta \approx (+2.8 \pm 1.0)$ km/s. The
kinematic data yield a solar Galactocentric distance of $R_0 \approx (8.24 \pm
0.12)$ kpc. Based on the rotation-curve parameters and the asymmetric drift,
we infer an exponential disk scale length $H_D \approx (2.7 \pm 0.2)$ kpc
under the assumption of marginal stability of the intermediate-age disk, and
finally we
estimate the minimum local surface disk density, $\Sigma (R_0) > (26 \pm 3) ~
M_\odot pc^{-2}$.
|
We consider the problem of coordination via signaling in network congestion
games to improve social welfare deteriorated by incomplete information about
traffic flow. Traditional studies on signaling, which focus on exogenous
factors of congestion and ignore congestion externalities, fail to discuss the
oscillations of traffic flow. To address this gap, we formulate a problem of
designing a coordination signal on endogenous information about traffic flow
and introduce a self-fulfilling characteristic of a signal that guarantees
an outcome flow consistent with the signal itself without causing the unwanted
oscillation. An instance of the self-fulfilling signal is shown in the case of
a Gaussian signal distribution. In addition, we show simple numerical examples.
The results reveal how a self-fulfilling signal suppresses the oscillation and
simultaneously improves social welfare through improved network efficiency.
|
Sequence generation models for dialogue are known to have several problems:
they tend to produce short, generic sentences that are uninformative and
unengaging. Retrieval models on the other hand can surface interesting
responses, but are restricted to the given retrieval set leading to erroneous
replies that cannot be tuned to the specific context. In this work we develop a
model that combines the two approaches to avoid both their deficiencies: first
retrieve a response and then refine it -- the final sequence generator treating
the retrieval as additional context. We show on the recent ConvAI2 challenge
task that our approach produces responses superior to both standard retrieval
and generation models in human evaluations.
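The retrieve-and-refine scheme can be sketched as follows; this is a toy illustration in which the corpus, the word-overlap retriever, and the echoing "generator" are hypothetical stand-ins for the paper's trained models:

```python
def retrieve(query, corpus):
    """Score stored replies by word overlap with the query (toy retriever)."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    # corpus maps a context utterance to its stored reply
    best_context = max(corpus, key=lambda c: overlap(query, c))
    return corpus[best_context]

def refine(query, retrieved, generator):
    """Condition the generator on the dialogue context plus the retrieved reply."""
    return generator(query + " [RETRIEVED] " + retrieved)

# A stand-in "generator" that just echoes its conditioning; a real model
# would be a sequence generator trained with the retrieval as extra context.
corpus = {"do you like hiking": "I love hiking in the mountains!",
          "what is your job": "I work as a teacher."}
reply = refine("do you enjoy hiking trips",
               retrieve("do you enjoy hiking trips", corpus),
               generator=lambda ctx: ctx.split("[RETRIEVED] ")[-1])
```

The key design point is that the retrieval enters only as additional context, so the generator remains free to rephrase it for the specific dialogue.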
|
Instantons on the Taub-NUT space are related to `bow solutions' via a
generalization of the ADHM-Nahm transform. Both are related to complex
geometry, either via the twistor transform or via the Kobayashi-Hitchin
correspondence. We explore various aspects of this complex geometry, exhibiting
equivalences. For both the instanton and the bow solution we produce two monads
encoding each of them respectively. Identifying these monads we establish the
one-to-one correspondence between the instanton and the bow solution.
|
Within the covariant formulation of Light-Front Dynamics, in a scalar model
with the interaction Hamiltonian $H=-g\psi^{2}(x)\phi(x)$, we calculate
nonperturbatively the renormalized state vector of a scalar "nucleon" in a
truncated Fock space containing the $N$, $N\pi$ and $N\pi\pi$ sectors. The
model gives a simple example of non-perturbative renormalization which is
carried out numerically. Though the mass renormalization $\delta m^2$ diverges
logarithmically with the cutoff $L$, the Fock components of the "physical"
nucleon are stable when $L\to\infty$.
|
The time evolution of an exactly solvable layered feedforward neural network
with three-state neurons and optimizing the mutual information is studied for
arbitrary synaptic noise (temperature). Detailed stationary
temperature-capacity and capacity-activity phase diagrams are obtained. The
model exhibits pattern retrieval, pattern-fluctuation retrieval and spin-glass
phases. It is found that there is an improved performance in the form of both a
larger critical capacity and information content compared with three-state
Ising-type layered network models. Flow diagrams reveal that saddle-point
solutions associated with fluctuation overlaps considerably slow down the flow
of the network states towards the stable fixed points.
|
In this paper, we consider the following two-component elliptic system with
critical growth \begin{equation*}
\begin{cases}
-\Delta u+(V_1(x)+\lambda)u=\mu_1u^{3}+\beta uv^{2}, \ \ x\in \mathbb{R}^4,\\
-\Delta v+(V_2(x)+\lambda)v=\mu_2v^{3}+\beta vu^{2}, \ \ x\in \mathbb{R}^4,
\end{cases} \end{equation*} where $V_j(x) \in L^{2}(\mathbb{R}^4)$ are
nonnegative potentials and the nonlinear coefficients $\beta ,\mu_j$, $j=1,2$,
are positive. Here we also assume $\lambda>0$. By variational methods combined
with degree theory, we prove some results about the existence and multiplicity
of positive solutions under the hypothesis $\beta>\max\{\mu_1,\mu_2\}$. These
results generalize the results for semilinear Schr\"{o}dinger equation on half
space by Cerami and Passaseo (SIAM J. Math. Anal., 28, 867-885, (1997)) to the
above elliptic system, while extending the existence result from Liu and Liu
(Calc. Var. Partial Differential Equations, 59:145, (2020)).
|
Resistive Plate Chambers (RPCs) are gaseous detectors widely used in high
energy physics experiments, operating with a gas mixture primarily containing
Tetrafluoroethane (C$_{2}$H$_{2}$F$_{4}$), commonly known as R-134a, which has
a global warming potential (GWP) of 1430. To comply with European regulations
and explore environmentally friendly alternatives, the RPC EcoGas@GIF++
collaboration, involving ALICE, ATLAS, CMS, LHCb/SHiP, and EP-DT communities,
has undertaken intensive R\&D efforts to explore new gas mixtures for RPC
technology.
A leading alternative under investigation is HFO1234ze, boasting a low GWP of
6 and demonstrating reasonable performance compared to R-134a. Over the past
few years, RPC detectors with slightly different characteristics and
electronics have been studied using HFO and CO$_{2}$-based gas mixtures at the
CERN Gamma Irradiation Facility. An aging test campaign was launched in August
2022, and during the latest test beam in July 2023, all detector systems
underwent evaluation. This contribution will report the results of the aging
studies and the performance evaluations of the detectors with and without
irradiation.
|
In this paper, we focus on developing a driver-in-the-loop fuel-economic
control strategy for multiple connected vehicles. The control strategy is
considered to work in a driver assistance framework where the controller gives
command to a driver to follow while considering the ability of the driver in
following control commands. Our proposed method uses vehicle-to-vehicle (V2V)
communication, exploits traffic lights' Signal Phase and Timing (SPAT)
information, models driver error injection with Markov chain, and employs
scenario tree based stochastic model predictive control to improve vehicle fuel
economy and traffic mobility. The proposed strategy is decentralized in nature
as every vehicle evaluates its own strategy using only local information.
Simulation results show the effect of consideration of driver error injection
when synthesizing fuel economic controllers in a driver assistance fashion.
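The Markov-chain driver-error injection can be sketched as follows; the two-state compliance model, transition probabilities, and error magnitudes are illustrative assumptions, not the paper's calibrated values:

```python
import random

# Hypothetical two-state driver model: 0 = follows the command closely,
# 1 = deviates from it. Transition probabilities are illustrative only.
P = [[0.9, 0.1],    # P[i][j] = probability of moving from state i to state j
     [0.4, 0.6]]
ERROR = [0.0, 0.5]  # additive error (m/s^2) injected in each state

def simulate_driver(commands, seed=0):
    """Apply Markov-chain driver error to a sequence of control commands."""
    rng = random.Random(seed)
    state, applied = 0, []
    for u in commands:
        applied.append(u + ERROR[state])
        state = 0 if rng.random() < P[state][0] else 1
    return applied

accel = simulate_driver([1.0, 1.0, 1.0, 1.0])
```

In a scenario-tree stochastic MPC, each branch of the tree would correspond to one realization of this error chain over the prediction horizon.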
|
The generation and control of nanoscale magnetic fields are of fundamental
interest in material science and a wide range of applications. Nanoscale
magnetic resonance imaging and quantum spintronics, for example, require single-spin
control with high precision and nanoscale spatial resolution using fast
switchable magnetic fields with large gradients. Yet, characterizing those
fields on nanometer length scales at high bandwidth with arbitrary orientation
has not been possible so far. Here we demonstrate single electron and nuclear
spin coherent control using the magnetic field of a hard disc drive write head.
We use single electron spins for measuring fields with high spatial resolution
and single nuclear spins for large-bandwidth measurements. We are able to
derive field profiles from coherent spin Rabi oscillations close to GHz in
fields with gradients of up to 10 mT/nm and measure all components of a static
and dynamic magnetic field independent of its orientation. Our method paves the
way for precision measurement of the magnetic fields of nanoscale write heads
important for future miniaturization of the devices.
|
We present photometric and spectroscopic observations of a peculiar variable
(designated DASCH J075731.1+201735 or J0757) discovered from our DASCH project
using the digitized Harvard College Observatory archival photographic plates.
It brightened by about 1.5 magnitudes in B within a year starting in 1942, and
then slowly faded back to its pre-outburst brightness from 1943 to the 1950s.
The mean brightness level was stable before and after the outburst, and
ellipsoidal variations with a period of $P=119.18\pm0.07$ days are seen,
suggesting that the star is tidally distorted. Radial-velocity measurements
indicate that the orbit is nearly circular ($e=0.02\pm0.01$) with a
spectroscopic period that is the same as the photometric period. The binary
consists of a $1.1\pm0.3 M_\odot$ M0III star, and a $0.6\pm0.2 M_\odot$
companion, very likely a white dwarf (WD). Unlike other symbiotic binaries,
there is no sign of emission lines or a stellar wind in the spectra. With an
outburst timescale of ~10 years and estimated B band peak luminosity M_B~0.7,
J0757 is different from any other known classical or symbiotic novae. The most
probable explanation of the outburst is hydrogen shell-burning on the WD,
although an accretion-powered flare cannot be ruled out.
|
Automated detection of acoustic signals is crucial for effective monitoring
of vocal animals and their habitats across ecologically-relevant spatial and
temporal scales. Recent advances in deep learning have made these approaches
more accessible. However, there are few deep learning approaches that can be
implemented natively in the R programming environment; approaches that run
natively in R may be more accessible for ecologists. The "torch for R"
ecosystem has made the use of transfer learning with convolutional neural
networks accessible for R users. Here, we evaluate a workflow that uses
transfer learning for the automated detection of acoustic signals from passive
acoustic monitoring (PAM) data. Our specific goals include: 1) present a method
for automated detection of gibbon calls from PAM data using the "torch for R"
ecosystem; 2) compare the results of transfer learning for six pretrained CNN
architectures; and 3) investigate how well the different architectures perform
on datasets of the female calls from two different gibbon species: the northern
grey gibbon (Hylobates funereus) and the southern yellow-cheeked crested gibbon
(Nomascus gabriellae). We found that the highest performing architecture
depended on the test dataset. We successfully deployed the top performing model
for each gibbon species to investigate spatial variation in gibbon calling
behavior across two grids of autonomous recording units in Danum Valley
Conservation Area, Malaysia and Keo Seima Wildlife Sanctuary, Cambodia. The
fields of deep learning and automated detection are rapidly evolving, and we
provide the methods and datasets as benchmarks for future work.
|
In this letter, we develop a framework to study the mechanical response of
athermal amorphous solids via a coupling of mesoscale and microscopic models.
Using measurements of coarse grained quantities from simulations of dense
disordered particulate systems, we present a coherent elasto-plastic model
approach for deformation and flow of yield stress materials. For a given set of
parameters, this model allows one to consistently match transient and steady-state
features of driven disordered systems under both applied shear-rate and creep
protocols.
|
Two spins located at the edge of a quantum spin Hall insulator may interact
with each other via indirect spin-exchange interaction mediated by the helical
edge states, namely the RKKY interaction, which can be measured by the magnetic
correlation between the two spins. By means of the newly developed natural
orbitals renormalization group (NORG) method, we investigated the magnetic
correlation between two Kondo impurities interacting with the helical edge
states, based on the Kane-Mele model defined in a finite zigzag graphene
nanoribbon with spin-orbital coupling (SOC). We find that the SOC effect breaks
the symmetry in spatial distribution of the magnetic correlation, leading to
anisotropy in the RKKY interaction. Specifically, the total correlation is
always ferromagnetic (FM) when the two impurities are located at the same
sublattice, while it is always antiferromagnetic (AFM) when they are located
at different sublattices. Meanwhile, the behavior of the in-plane correlation
is consistent
with that of the total correlation. However, the out-of-plane correlation can
be tuned from FM to AFM by manipulating either the Kondo coupling or the
interimpurity distance. Furthermore, the magnetic correlation is tunable by the
SOC; in particular, the out-of-plane correlation can be adjusted from FM to
AFM by increasing the strength of the SOC. Dynamic properties of the system,
represented by the spin-staggered excitation spectrum and the spin-staggered
susceptibility at the two impurity sites, are finally explored. It is shown
that the spin-staggered susceptibility is larger when the two impurities are
located at different sublattices than when they are at the same sublattice, which is
consistent with the behavior of the out-of-plane correlation. On the other
hand, our study further demonstrates that the NORG is an effective numerical
method for studying the quantum impurity systems.
|
The fractional Laplacian operator, $-(-\triangle)^{\frac{\alpha}{2}}$,
appears in a wide class of physical systems, including L\'evy flights and
stochastic interfaces. In this paper, we provide a discretized version of this
operator which is well suited to deal with boundary conditions on a finite
interval. The implementation of boundary conditions is justified by appealing
to two physical models, namely hopping particles and elastic springs. The
eigenvalues and eigenfunctions in a bounded domain are then obtained
numerically for different boundary conditions. Some analytical results
concerning the structure of the eigenvalue spectrum are also obtained.
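A minimal sketch of one common discretization of the fractional Laplacian on a bounded interval, the spectral (matrix-power) method with absorbing boundaries, is shown below; this is not necessarily the discretization derived in the paper:

```python
import numpy as np

def fractional_laplacian_matrix(N, alpha):
    """Discrete (-Laplacian)^(alpha/2) on N interior grid points with
    absorbing (Dirichlet) boundaries, built as the fractional matrix power
    of the standard discrete Laplacian via its eigendecomposition."""
    L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # standard -Laplacian
    w, V = np.linalg.eigh(L)                              # L is symmetric
    return V @ np.diag(w ** (alpha / 2)) @ V.T

A = fractional_laplacian_matrix(50, alpha=1.0)     # Levy-flight-like case
eigvals = np.sort(np.linalg.eigvalsh(A))           # spectrum in a bounded domain
```

For alpha = 2 the construction reduces exactly to the standard discrete Laplacian, which serves as a sanity check.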
|
We use a variational method for generating probability distributions,
specifically the Uniform, Normal, Binomial, and Poisson distributions. To do
so, we use many different architectures for the two-, three- and four-qubit
cases, using the Jensen-Shannon divergence as our
objective function. We use gradient descent with momentum as our optimization
scheme instead of conventionally used gradient descent. To calculate the
gradient, we use the parameter shift rule, whose formulation we modify to take
the probability values as outputs instead of the conventionally taken
expectation values. We see that this method can approximate probability
distributions, and there exists a specific architecture which outperforms other
architectures, and this architecture depends on the number of qubits. The four,
three and two-qubit cases consist of a parameterized layer followed by an
entangling layer; a parameterized layer followed by an entangling layer, which
is followed by a parameterized layer and only parameterized layers,
respectively.
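The optimization loop can be sketched with a classical stand-in for the circuit; the one-parameter "circuit", target distribution, learning rate, and momentum below are illustrative assumptions:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy one-parameter "circuit": a single-qubit rotation whose measurement
# probabilities are [cos^2(theta/2), sin^2(theta/2)].
def probs(theta):
    return [math.cos(theta / 2) ** 2, math.sin(theta / 2) ** 2]

target = [0.25, 0.75]

def loss(theta):
    return js_divergence(probs(theta), target)

def loss_gradient(theta, eps=1e-6):
    """Parameter-shift rule applied to the probability outputs (exact for
    this circuit), chained with numerical partials of the JS objective."""
    p = probs(theta)
    dp = [(a - b) / 2 for a, b in zip(probs(theta + math.pi / 2),
                                      probs(theta - math.pi / 2))]
    grad = 0.0
    for i in range(len(p)):
        bumped = list(p)
        bumped[i] += eps
        grad += (js_divergence(bumped, target) - loss(theta)) / eps * dp[i]
    return grad

theta, velocity = 0.3, 0.0
for _ in range(800):                  # gradient descent with momentum
    velocity = 0.6 * velocity - 0.05 * loss_gradient(theta)
    theta += velocity
```

The momentum term accumulates past gradients, which is what distinguishes this scheme from plain gradient descent.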
|
A quick proof of Bing's theorem indicated by the title is given. The proof
also yields Gumerov's result on covering degrees of solenoids.
|
Camouflage is a common visual phenomenon, which refers to hiding the
foreground objects into the background images, making them briefly invisible to
the human eye. Previous work has typically been implemented by an iterative
optimization process. However, these methods struggle in 1) efficiently
generating camouflage images using foreground and background with arbitrary
structure; 2) camouflaging foreground objects to regions with multiple
appearances (e.g. the junction of the vegetation and the mountains), which
limits their practical application. To address these problems, this paper
proposes a novel Location-free Camouflage Generation Network (LCG-Net) that
fuses high-level features of the foreground and background images and generates
results in a single inference. Specifically, a Position-aligned Structure Fusion
(PSF) module is devised to guide structure feature fusion based on the
point-to-point structure similarity of foreground and background, and introduce
local appearance features point-by-point. To retain the necessary identifiable
features, a new immerse loss is adopted under our pipeline, while a background
patch appearance loss is utilized to ensure that the hidden objects look
continuous and natural at regions with multiple appearances. Experiments show
that our method achieves results as satisfactory as the state of the art in
single-appearance regions, albeit less likely to be completely invisible, and
far exceeds the quality of the state of the art in multi-appearance regions.
Moreover, our method is hundreds of times faster than previous methods.
Benefitting from the unique advantages of our method, we provide some
downstream applications for camouflage generation, which show its potential.
The related code and dataset will be released at
https://github.com/Tale17/LCG-Net.
|
Quasi-convexity in probabilistic mixtures is a common and useful property in
decision analysis. We study a general class of non-monotone mappings, called
the generalized rank-dependent functions, which include the preference models
of expected utilities, dual utilities, and rank-dependent utilities as special
cases, as well as signed Choquet integrals used in risk management. As one of
our main results, quasi-convex (in mixtures) signed Choquet integrals comprise
precisely two classes: those that are convex (in mixtures) and the scaled
quantile-spread mixtures, and this result leads to a full characterization of
quasi-convexity for generalized rank-dependent functions. Seven equivalent
conditions for quasi-convexity in mixtures are obtained for dual utilities and
signed Choquet integrals. We also illustrate a conflict between convexity in
mixtures and convexity in risk pooling among constant-additive mappings.
|
This is a discussion of how we can understand the world-view given to us by
the Everett interpretation of quantum mechanics, and in particular the role
played by the concept of `world'. The view presented is that we are entitled to
use `many-worlds' terminology even if the theory does not specify the worlds in
the formalism; this is defended by means of an extensive analogy with the
concept of an `instant' or moment of time in relativity, with the lack of a
preferred foliation of spacetime being compared with the lack of a preferred
basis in quantum theory. Implications for identity of worlds over time, and for
relativistic quantum mechanics, are discussed.
|
We report on the structural and magnetic properties of a cobalt-implanted ZnO
film grown on a sapphire substrate. X-ray diffraction and transmission electron
microscopy reveal the presence of a (10-10)-oriented hexagonal Co phase in the
Al2O3 sapphire substrate, but not in the ZnO film. Co clusters, with a diameter
of about 5-6 nm, form a Co-rich layer in the substrate close to the
ZnO/Al2O3 interface. Magnetization measurements indicate that there exist two
different magnetic phases in the implanted region. One originates from the Co
clusters in Al2O3, the other one belongs to a homogeneous ferromagnetic phase
with a ferromagnetic Curie temperature far above room temperature and can be
attributed to Co substitution on Zn sites in the ZnO layer. We have observed
magnetic dichroism at the Co L2,3 and O K edges at room temperature as well as
the multiplet structure in x-ray absorption spectra around the Co L3 edge,
supporting the intrinsic nature of the observed ferromagnetism in Co-implanted
ZnO film. The magnetic moment per substituted cobalt is found to be about 2.81
Bohr magnetons, which is very close to the theoretically expected value of 3
Bohr magnetons per Co atom for Co2+ in its high-spin state.
|
A formalism is developed for the rigorous study of solvable fractional
quantum Hall parent Hamiltonians with Landau level mixing. The idea of
organization through "generalized Pauli principles" is expanded to allow for
root level entanglement, giving rise to "entangled Pauli principles". Through
the latter, aspects of the effective field theory description become ingrained
in exact microscopic solutions for a great wealth of phases for which no
similar single Landau level description is known. We discuss in detail braiding
statistics, edge theory, and rigorous zero mode counting for the Jain-221 state
as derived from a microscopic Hamiltonian. The relevant root-level entanglement
is found to feature an AKLT-type MPS structure associated with an emergent
SU(2) symmetry.
|
We explicitly construct, in terms of Gelfand--Tsetlin tableaux, a new family
of simple positive energy representations for the simple affine vertex algebra
V_k(sl_{n+1}) in the minimal nilpotent orbit of sl_{n+1}. These representations
are quotients of induced modules over the affine Kac-Moody algebra of sl_{n+1}
and include in particular all admissible simple highest weight modules and all
simple modules induced from sl_2. Any such simple module in the minimal
nilpotent orbit has bounded weight multiplicities.
|
Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively
large number of training samples for learning complex tasks. Many recent works
on speeding up Deep RL have focused on distributed training and simulation.
While distributed training is often done on the GPU, simulation is not. In this
work, we propose using GPU-accelerated RL simulations as an alternative to CPU
ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising
speed-ups of learning various continuous-control locomotion tasks. With one
GPU and CPU core, we are able to train the Humanoid running task in less than
20 minutes, using 10-1000x fewer CPU cores than previous works. We also
demonstrate the scalability of our simulator to multi-GPU settings to train
more challenging locomotion tasks.
|
We consider the phenomenology of a naturally leptophobic $Z$-prime boson in
the 1 to 10 GeV mass range. The $Z$-prime's couplings to leptons arise only via
a radiatively-generated kinetic mixing with the $Z$ and photon, and hence are
suppressed. We map out the allowed regions of the mass-coupling plane and show
that such a $Z$-prime that couples to quarks with electromagnetic strength is
not excluded by the current data. We then discuss possible signatures at bottom
and charm factories.
|
Assembly planning is the core of automating product assembly, maintenance,
and recycling for modern industrial manufacturing. Despite its importance and
long history of research, planning for mechanical assemblies when given the
final assembled state remains a challenging problem. This is due to the
complexity of dealing with arbitrary 3D shapes and the highly constrained
motion required for real-world assemblies. In this work, we propose a novel
method to efficiently plan physically plausible assembly motion and sequences
for real-world assemblies. Our method leverages the assembly-by-disassembly
principle and physics-based simulation to efficiently explore a reduced search
space. To evaluate the generality of our method, we define a large-scale
dataset consisting of thousands of physically valid industrial assemblies with
a variety of assembly motions required. Our experiments on this new benchmark
demonstrate we achieve a state-of-the-art success rate and the highest
computational efficiency compared to other baseline algorithms. Our method also
generalizes to rotational assemblies (e.g., screws and puzzles) and solves
80-part assemblies within several minutes.
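The assembly-by-disassembly principle can be sketched as follows; the blocking relation and the stack example are hypothetical simplifications of the physics-based feasibility checks used in the paper:

```python
# Toy parts: blocked_by[p] = set of parts that must be removed before p can move.
def plan_assembly(blocked_by):
    """Assembly-by-disassembly: repeatedly remove any currently unblocked
    part, then reverse the removal order to obtain an assembly sequence."""
    remaining = set(blocked_by)
    removal_order = []
    while remaining:
        free = [p for p in remaining
                if not (blocked_by[p] & remaining)]   # nothing blocking it remains
        if not free:
            return None           # no feasible sequence under this model
        part = sorted(free)[0]    # deterministic choice for reproducibility
        removal_order.append(part)
        remaining.remove(part)
    return list(reversed(removal_order))  # assembly = reversed disassembly

# Hypothetical 4-part stack: D rests on C, C on B, B on A.
sequence = plan_assembly({"A": {"B"}, "B": {"C"}, "C": {"D"}, "D": set()})
```

In the paper's setting, the "unblocked" test is replaced by physics-based simulation of candidate removal motions for arbitrary 3D shapes.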
|
We show that a semisimple overconvergent "absolutely unit-root" F-isocrystal
on a geometrically connected smooth variety over a finite field becomes
constant over a finite covering.
|
We present a novel technique to automatically colorize grayscale images that
combines the U-Net model and Fusion Layer features. This approach allows the
model to learn the colorization of images from pre-trained U-Net. Moreover, the
Fusion layer is applied to merge local information results dependent on small
image patches with global priors of an entire image on each class, forming
visually more compelling colorization results. Finally, we validate our
approach with a user study evaluation and compare it against the state of the
art, showing improvements.
|
We suggest a global perspective on dynamic network flow problems that takes
advantage of the similarities to port-Hamiltonian dynamics. Dynamic minimum
cost flow problems are formulated as open-loop optimal control problems for
general port-Hamiltonian systems with possibly state-dependent system matrices.
We prove well-posedness of these systems and characterize optimal controls by
the first-order optimality system, which is the starting point for the
derivation of an adjoint-based gradient descent algorithm. Our theoretical
analysis is complemented by a proof of concept, where we apply the proposed
algorithm to static minimum cost flow problems and dynamic minimum cost flow
problems on a simple directed acyclic graph. We present numerical results to
validate the approach.
|
For their ability to capture non-linearities in the data and to scale to
large training sets, local Support Vector Machines (SVMs) have received special
attention during the past decade. In this paper, we introduce a new
local SVM method, called L$^3$-SVMs, which clusters the input space, carries
out dimensionality reduction by projecting the data on landmarks, and jointly
learns a linear combination of local models. Simple and effective, our
algorithm is also theoretically well-founded. Using the framework of Uniform
Stability, we show that our SVM formulation comes with generalization
guarantees on the true risk. The experiments based on the simplest
configuration of our model (i.e. landmarks randomly selected, linear
projection, linear kernel) show that L$^3$-SVMs is very competitive w.r.t. the
state of the art and opens the door to new exciting lines of research.
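A minimal sketch of the landmark-projection idea follows, with clustering collapsed to a single cluster and a perceptron standing in for the joint linear-SVM training; the data, landmark count, and training loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data in 2D (two well-separated Gaussian blobs).
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

# Step 1 (clustering of the input space) is skipped in this one-cluster sketch.
# Step 2: dimensionality reduction by projecting the data onto random landmarks.
landmarks = X[rng.choice(len(X), 5, replace=False)]
Z = X @ landmarks.T                   # linear-kernel similarities to landmarks

# Step 3: learn a linear model on the landmark features (a perceptron here,
# standing in for the jointly trained local linear SVMs).
w = np.zeros(Z.shape[1])
for _ in range(20):
    for zi, yi in zip(Z, y):
        if yi * (zi @ w) <= 0:
            w += yi * zi

accuracy = np.mean(np.sign(Z @ w) == y)
```

The projection step is what keeps each local model linear and cheap while the landmarks inject the non-linearity.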
|
We explore the potential manifestation of a hexaquark, the H particle, as a
constituent within neutron stars. The H particle, characterized by a quark
composition of $uuddss$, is constructed using the framework of Chromomagnetic
Interaction (CMI). Specifically, we contemplate the flavor-singlet state H with
$J^P=0^+$. Our computations indicate that the three-flavor hexaquark state, the
H particle, possesses a lower mass of $2212.7~\rm{MeV}$ in comparison to the
$d^*(2380)$, implying greater stability than the two-flavor $d^*(2380)$. The
analysis involving the H particle is carried out using the relativistic
mean-field (RMF) model. We investigate the influence of H particle couplings, a
key factor in determining the system stability, and focus on the potential
existence of the H particle within neutron stars. We find that the H particle
could endure as a stable constituent within neutron stars, leading to a
reduction of the maximum mass.
|
We undertake a systematic study of historic market volatility spanning
roughly five preceding decades. We focus specifically on the time series of
realized volatility (RV) of the S&P500 index and its distribution function. As
expected, the largest values of RV coincide with the largest economic upheavals
of the period: Savings and Loan Crisis, Tech Bubble, Financial Crisis and Covid
Pandemic. We address the question of whether these values belong to one of the
three categories: Black Swans (BS), that is they lie on scale-free, power-law
tails of the distribution; Dragon Kings (DK), defined as statistically
significant upward deviations from BS; or Negative Dragon Kings (nDK), defined
as statistically significant downward deviations from BS. In analyzing the
tails of the distribution with RV > 40, we observe the appearance of
"potential" DK which eventually terminate in an abrupt plunge to nDK. This
phenomenon becomes more pronounced with the increase of the number of days over
which the average RV is calculated -- here from daily, n=1, to "monthly," n=21.
We fit the entire distribution with a modified Generalized Beta (mGB)
distribution function, which terminates at a finite value of the variable but
exhibits a long power-law stretch prior to that, as well as Generalized Beta
Prime (GB2) distribution function, which has a power-law tail. We also fit the
tails directly with a straight line on a log-log scale. In order to ascertain
BS, DK or nDK behavior, all fits include their confidence intervals and
p-values are evaluated for the data points to check if they can come from the
respective distributions.
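Realized volatility over an n-day window can be computed as sketched below; the annualization convention and the simulated returns are illustrative assumptions, not the paper's S&P500 data:

```python
import math
import random

def realized_volatility(returns, n=21, periods_per_year=252):
    """Annualized realized volatility (in percent) over rolling n-day windows;
    one common convention, not necessarily the paper's exact definition."""
    out = []
    for i in range(len(returns) - n + 1):
        window = returns[i:i + n]
        out.append(100 * math.sqrt(sum(r * r for r in window) / n
                                   * periods_per_year))
    return out

random.seed(1)
daily = [random.gauss(0, 0.01) for _ in range(500)]  # simulated ~1% daily moves
rv_series = realized_volatility(daily)               # "monthly" window, n = 21
```

Averaging over longer windows (larger n) smooths the series, which is the mechanism behind the paper's comparison of daily versus "monthly" RV tails.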
|
Early-stage firms play a significant role in driving innovation and creating
new products and services, especially for cybersecurity. Therefore, evaluating
their performance is crucial for investors and policymakers. This work presents
a financial evaluation of early-stage firms' performance in 19 cybersecurity
sectors using a private-equity dataset from 2010 to 2022 retrieved from
Crunchbase. We observe firms, their primary and secondary activities, funding
rounds, and pre- and post-money valuations. We compare cybersecurity sectors
regarding the amount raised over funding rounds and post-money valuations while
inferring missing observations. We observe significant investor interest
variations across categories, periods, and locations. In particular, we find
the average capital raised (valuations) to range from USD 7.24 mln (USD 32.39
mln) for spam filtering to USD 45.46 mln (USD 447.22 mln) for the private cloud
sector. Next, we assume a log process for returns computed from post-money
valuations and estimate the expected returns, systematic and specific risks,
and risk-adjusted returns of investments in early-stage firms belonging to
cybersecurity sectors. Again, we observe substantial performance variations
with annualized expected returns ranging from 9.72\% for privacy to 177.27\%
for the blockchain sector. Finally, we show that overall, the cybersecurity
industry performance is on par with previous results found in private equity.
Our results shed light on the performance of cybersecurity investments and,
thus, on investors' expectations about cybersecurity.
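Estimating returns from post-money valuations under a log-return process can be sketched as follows; the valuations and round spacings are hypothetical, and the paper's estimator additionally separates systematic and specific risk:

```python
import math

def annualized_log_return_stats(valuations, years_between_rounds):
    """Expected annualized log return and volatility from a sequence of
    post-money valuations (toy estimator; the inputs are illustrative)."""
    rates = [math.log(v1 / v0) / dt
             for v0, v1, dt in zip(valuations, valuations[1:],
                                   years_between_rounds)]
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return mean, math.sqrt(var)

# Hypothetical firm: valuations (USD mln) at rounds spaced 1.5 and 2 years apart.
mu, sigma = annualized_log_return_stats([10, 25, 60], [1.5, 2.0])
```

With many firms per sector, the same per-round log rates can be pooled to obtain sector-level expected returns and risks.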
|
We construct universal prediction systems in the spirit of Popper's
falsifiability and Kolmogorov complexity and randomness. These prediction
systems do not depend on any statistical assumptions (but under the IID
assumption they dominate, to within the usual accuracy, conformal prediction).
Our constructions give rise to a theory of algorithmic complexity and
randomness of time series containing analogues of several notions and results
of the classical theory of Kolmogorov complexity and randomness.
|
A framework is presented for a statistical theory of neutron star glitches,
motivated by the results emerging from recent Gross-Pitaevskii simulations of
pinned, decelerating quantum condensates. It is shown that the observed glitch
size distributions cannot be reproduced if superfluid vortices unpin
independently via a Poisson process; the central limit theorem yields a narrow
Gaussian for the size distribution, instead of the broad, power-law tail
observed. This conclusion is not altered fundamentally when a range of pinning
potentials is included, which leads to excavation of the potential distribution
of occupied sites, vortex accumulation at strong pinning sites, and hence the
occasional, abnormally large glitch. Knock-on processes are therefore needed to
make the unpinning rate of a vortex conditional on the pinning state of its
near and/or remote neighbours, so that the Gaussian size distributions
resulting generically from the central limit theorem are avoided. At least two
knock-on processes, nearest-neighbour proximity knock-on and remote acoustic
knock-on, are clearly evident in the Gross-Pitaevskii simulation output. It is
shown that scale-invariant (i.e. power-law) vortex avalanches occur when
knock-on is included, provided that two specific relations hold between the
temperature and spin-down torque. This fine tuning is unlikely in an
astronomical setting, leaving the overall problem partly unsolved. A
state-dependent Poisson formalism is presented which will form the basis of
future studies in this area.
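The central-limit argument against independent unpinning can be checked with a toy simulation (illustrative numbers only, not parameters from the Gross-Pitaevskii runs): if each of N pinned vortices unpins independently with probability p per glitch interval, the glitch size is binomial and hence narrowly Gaussian, with no power-law tail.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VORTICES = 10_000   # hypothetical number of pinned vortices
P_UNPIN = 0.01        # per-vortex unpinning probability per interval
N_GLITCHES = 5_000    # number of simulated glitches

# Independent (Poisson-like) unpinning: glitch size = number of vortices
# unpinned in one interval, which is binomial.
sizes = rng.binomial(N_VORTICES, P_UNPIN, size=N_GLITCHES)

mean, std = sizes.mean(), sizes.std()
# Relative width ~ 1/sqrt(N*p): a narrow Gaussian, not a broad power law.
print(f"mean={mean:.1f}, std={std:.1f}, relative width={std / mean:.3f}")
```

The relative width shrinks as the number of vortices grows, which is why knock-on correlations are needed to recover broad, scale-invariant size distributions.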
|
In 3D graphics, view frustum culling (VFC) of a spatial object is commonly
applied to the object's bounding cube. However, the accuracy of such VFC cannot
be guaranteed, even when the cube is rotated in three dimensions. In this paper
we propose a method that performs faster and more precise VFC of an arbitrary
spatial object in the image domain of the cube via an analytic mapping, and we
demonstrate the effectiveness of the method on terrain blocks of a global surface.
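For context, the conventional bounding-volume test that such work improves upon can be sketched as follows (a standard sphere-versus-frustum-planes check, not the analytic image-domain mapping proposed in the paper; the plane layout is a hypothetical axis-aligned example):

```python
import numpy as np

def sphere_in_frustum(center, radius, planes):
    """Conventional VFC test: cull a bounding sphere against frustum planes.

    Each plane is (a, b, c, d) with inward-pointing normal, so a point p is
    inside the half-space when a*p.x + b*p.y + c*p.z + d >= 0.
    """
    for a, b, c, d in planes:
        if np.dot((a, b, c), center) + d < -radius:
            return False  # entirely outside one plane: cull the object
    return True  # the sphere intersects or lies inside the frustum

# Toy "frustum": the unit cube [-1, 1]^3 expressed as six inward planes.
planes = [(1, 0, 0, 1), (-1, 0, 0, 1),
          (0, 1, 0, 1), (0, -1, 0, 1),
          (0, 0, 1, 1), (0, 0, -1, 1)]

print(sphere_in_frustum(np.array([0.0, 0.0, 0.0]), 0.5, planes))  # True (visible)
print(sphere_in_frustum(np.array([5.0, 0.0, 0.0]), 0.5, planes))  # False (culled)
```

Because the bounding volume only approximates the object, this test reports conservative visibility, which is precisely the accuracy limitation the paper addresses.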
|
Spiking Neural Networks (SNNs), being rooted in biology, naturally lend
themselves to hardware implementation. For learning, spike-timing-dependent
plasticity (STDP) may be implemented using energy-efficient waveform
superposition on memristor-based synapses. However, a system-level
implementation faces three challenges. First, a classic dilemma: recognition
requires reading currents from short voltage spikes, and this readout is
disturbed by the large voltage waveforms simultaneously applied to the same
memristor for real-time learning, i.e., the simultaneous read-write dilemma.
Second, the hardware needs to exactly replicate the software implementation so
that algorithms can easily be adapted to hardware. Third, the devices used in
hardware simulations must be realistic. In this paper, we present an approach
that addresses these concerns. First, learning and recognition occur
simultaneously and asynchronously in separate arrays in real time, avoiding
non-biomimetic, clock-based complex signal management. Second, we show that the
hardware emulates the software at every stage by comparing SPICE
(circuit-simulator) implementations with MATLAB (mathematical SNN algorithm)
implementations. As an example, the hardware achieves 97.5 per cent
classification accuracy on the Fisher-Iris dataset, equivalent to software.
Third, STDP is implemented using a synaptic-device model based on an HfO2
memristor. We show that an increasingly realistic memristor model slightly
reduces the hardware performance (to 85 per cent), which highlights the need to
engineer RRAM characteristics specifically for SNNs.
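As a point of reference, the pair-based exponential STDP rule that memristor waveform-superposition schemes typically approximate can be sketched as follows (textbook form; the amplitudes and time constants below are illustrative, not fitted to the HfO2 device in the paper):

```python
import numpy as np

# Pair-based exponential STDP: the weight change depends on the spike-time
# difference dt = t_post - t_pre (in ms).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(dt_ms):
    if dt_ms >= 0:  # pre fires before post: potentiate
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)  # post before pre: depress

print(stdp_dw(5.0))    # small positive weight change
print(stdp_dw(-5.0))   # small negative weight change
```

In the hardware scheme described above, this exponential dependence is realized physically by the superposition of pre- and post-synaptic voltage waveforms across the memristor rather than computed explicitly.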
|
I show that by observing microlensing events both astrometrically and
photometrically, the Space Interferometry Mission (SIM) can measure the mass
function of stellar remnants in the Galactic bulge including white dwarfs,
neutron stars, and black holes. Neutron stars and black holes can be identified
individually, while white dwarfs are detected statistically from the sharp peak
in their mass function near M~ 0.6Msun. This peak is expected to be more than
twice as high as the `background' of main-sequence microlenses. I estimate that
of order 20% of the ~400 bulge microlensing events detected to date are due to
remnants, but show that these are completely unrecognizable from their time
scale distribution (the only observable that `normal' microlensing observations
produce). To resolve the white-dwarf peak, the SIM mass measurements must be
accurate to ~5%, substantially better than is required to measure the mass
function of the more smoothly distributed main sequence. Nevertheless, SIM
could measure the masses of about 20 bulge remnants in 500 hours of observing
time.
|
Artificial neural networks can be trained with relatively low-precision
floating-point and fixed-point arithmetic, using between one and 16 bits.
Previous works have focused on relatively wide-but-shallow, feed-forward
networks. We introduce a quantization scheme that is compatible with training
very deep neural networks. Quantizing the network activations in the middle of
each batch-normalization module can greatly reduce the amount of memory and
computational power needed, with little loss in accuracy.
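A minimal sketch of uniform activation quantization illustrates the idea (generic symmetric fixed-point rounding; the paper's specific placement of the quantizer inside each batch-normalization module is not reproduced here):

```python
import numpy as np

def quantize_activations(x, n_bits=8):
    """Uniform symmetric fake-quantization of an activation tensor.

    Values are scaled to the signed n-bit integer grid, rounded, and scaled
    back, mimicking low-precision storage while staying in floating point.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return x
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantized ("fake-quantized") activations

x = np.random.default_rng(1).normal(size=1000).astype(np.float32)
err = np.abs(quantize_activations(x, 8) - x).max()
print(f"max abs error at 8 bits: {err:.4f}")
```

The rounding error is bounded by half a quantization step, so halving the bit width roughly doubles the worst-case error while halving activation memory, which is the trade-off the abstract describes.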
|
We study the $\beta, N$ critical behaviour of non compact QED with $N$
species of light fermions, using a method we have proposed for unquenched
simulations. We find that there exist two phase-transition lines: one second
order, and the other first order, approaching the $\beta=0$ axis
asymptotically. These two lines have different physical origins, the second
being entirely due to fermion effects. We discuss the effect of the approximation
used, in terms of an expansion of the effective action in powers of $N$, and
conclude that the general features should not be affected by this
approximation.
|
The present article summarizes the work of the papers \cite{1}, which deal with
the quantization of pure gravity, and of gravity coupled to a Maxwell field and
a cosmological constant, in the presence of spherical symmetry. The class of models
presented is intended as an interesting testing ground for the quantization of
full 3+1 gravity. We are working in Ashtekar's self-dual representation.
|
We construct quantum supersymmetric pairs $({\bold U},{\bold U}^\imath)$ of
type AIII and elucidate their fundamental properties. An $\imath$Schur duality
between the $\imath$quantum supergroup ${\bold U}^\imath$ and the Hecke algebra
of type B acting on a tensor space is established, providing a super
generalization of the $\imath$Schur duality of type AIII. Additionally, we
construct a (quasi) $K$-matrix for arbitrary parameters, which facilitates the
realization of the Hecke algebra action on the tensor space.
|
The total mass of clusters of galaxies is a key parameter to study massive
halos. It relates to numerous gravitational and baryonic processes at play in
the framework of large scale structure formation, thus rendering its
determination important but challenging. From a sample of 11 X-ray-bright
clusters selected from the excpres sample, we investigate the optical and X-ray
properties of clusters with respect to their total mass derived from weak
gravitational lensing. From multi-color wide field imaging obtained with
MegaCam at CFHT, we derive the shear profile of each individual cluster of
galaxies. We perform a careful investigation of all systematic sources related
to the weak lensing mass determination. The weak lensing masses are then
compared to the X-ray masses obtained from the analysis of XMM observations and
assuming hydrostatic equilibrium. We find good agreement between the two mass
proxies, although for a few outliers with either perturbed morphology or
poor-quality data we cannot derive robust mass estimates. The weak lensing mass is
also correlated with the optical richness and the total optical luminosity, as
well as with the X-ray luminosity, to provide scaling relations within the
redshift range 0.4<z<0.6. These relations are in good agreement with previous
works at lower redshifts. For the L_X-M relation we combine our sample with two
other cluster and group samples from the literature, thus covering two decades
in mass and X-ray luminosity, with a regular and coherent correlation between
the two physical quantities.
|
Graph Neural Networks (GNNs) have been popularly used for analyzing
non-Euclidean data such as social network data and biological data. Despite
their success, the design of graph neural networks requires a lot of manual
work and domain knowledge. In this paper, we propose a Graph Neural
Architecture Search method (GraphNAS for short) that enables automatic search
of the best graph neural architecture based on reinforcement learning.
Specifically, GraphNAS first uses a recurrent network to generate
variable-length strings that describe the architectures of graph neural
networks, and then trains the recurrent network with reinforcement learning to
maximize the expected accuracy of the generated architectures on a validation
data set. Extensive experimental results on node classification tasks in both
transductive and inductive learning settings demonstrate that GraphNAS can
achieve consistently better performance on the Cora, Citeseer, and Pubmed
citation networks and a protein-protein interaction network. On node classification tasks,
GraphNAS can design a novel network architecture that rivals the best
human-invented architecture in terms of test set accuracy.
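The search loop can be illustrated with a heavily simplified sketch: instead of the paper's recurrent controller, independent learnable logits choose one option per architecture slot, and a mock validation accuracy (a hypothetical stand-in for actually training the sampled GNN) supplies the reward; the REINFORCE update itself is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space: one categorical choice per architecture slot.
space = {"agg": ["mean", "max", "sum"],
         "act": ["relu", "tanh"],
         "dim": [16, 32, 64]}
logits = {k: np.zeros(len(v)) for k, v in space.items()}

def mock_val_accuracy(arch):
    # Hypothetical reward: pretend ("sum", "relu", 64) is best on validation.
    best = {"agg": "sum", "act": "relu", "dim": 64}
    return sum(arch[k] == best[k] for k in best) / 3.0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

LR, baseline = 0.5, 0.0
for step in range(300):
    # Sample an architecture description from the controller distribution.
    idx = {k: rng.choice(len(v), p=softmax(logits[k])) for k, v in space.items()}
    arch = {k: space[k][i] for k, i in idx.items()}
    reward = mock_val_accuracy(arch)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    # REINFORCE: raise log-probability of sampled choices by the advantage.
    for k in space:
        grad = -softmax(logits[k])
        grad[idx[k]] += 1.0
        logits[k] += LR * (reward - baseline) * grad

best_arch = {k: space[k][int(np.argmax(logits[k]))] for k in space}
print(best_arch)
```

In the real method the controller is an RNN emitting variable-length strings and the reward is the validation accuracy of a trained GNN, but the policy-gradient mechanics are as sketched.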
|
The results of a search for new heavy $W^\prime$ bosons decaying to an
electron or muon and a neutrino using proton-proton collision data at a
centre-of-mass energy of $\sqrt{s} = 13$ TeV are presented. The dataset was
collected in 2015 and 2016 by the ATLAS experiment at the Large Hadron Collider
and corresponds to an integrated luminosity of 36.1 fb$^{-1}$. As no excess of
events above the Standard Model prediction is observed, the results are used to
set upper limits on the $W^\prime$ boson cross-section times branching ratio to
an electron or muon and a neutrino as a function of the $W^\prime$ mass.
Assuming a $W^\prime$ boson with the same couplings as the Standard Model $W$
boson, $W^\prime$ masses below 5.1 TeV are excluded at the 95% confidence
level.
|
The paper has been withdrawn by the authors due to an error in the manuscript.
|
We consider a new momentum cut-off scheme for sums over zero-point energies,
containing an arbitrary function f(k) which interpolates smoothly between the
zero-point energies of the modes around the kink and those in flat space. A
term proportional to df(k)/dk modifies the result for the one-loop quantum mass
M^(1) as obtained from naive momentum cut-off regularization, which now agrees
with previous results, both for the nonsusy and susy case. We also introduce a
new regularization scheme for the evaluation of the one-loop correction to the
central charge Z^(1), with a cut-off K for the Dirac delta function in the
canonical commutation relations and a cut-off \Lambda for the loop momentum.
The result for Z^(1) depends only on whether K>\Lambda or K<\Lambda or
K=\Lambda. The last case yields the correct result and saturates the BPS bound,
M^(1)=Z^(1), in agreement with the fact that multiplet shortening does occur in
this N=(1,1) system. We show how to apply mode number regularization by
considering first a kink-antikink system, and also obtain the correct result
with this method. Finally we discuss the relation of these new schemes to
previous approaches based on the Born expansion of phase shifts and
higher-derivative regularization.
|
We discuss thermodynamic properties of open confining strings introduced via
static sources in the vacuum of Yang-Mills theory. We derive new sum rules for
the chromoelectric and chromomagnetic condensates and use them to show that the
presence of the confining string lowers the gluonic pressure in the bulk of the
system. The pressure deficit of the gluon plasma is related to the potential
energy in the system of heavy quarks and anti-quarks in the plasma.
|
As this is for the Bulletin of the A.M.S., it is not only a review of
Alexander Isaev's Spherical Tube Hypersurfaces but also a brief introduction to
CR geometry.
|
In this work, a geometric estimator is proposed for rigid-body attitude under
multi-rate measurements, using discrete-time Lyapunov stability analysis. The
angular velocity measurements are assumed to be sampled at a higher rate than
the attitude measurements. The attitude determination problem from two or more
vector measurements in the body-fixed frame is formulated as Wahba's problem.
When measurements are absent, a discrete-time model of the attitude kinematics
is used to propagate the state estimates. A
discrete-time Lyapunov function is constructed as the sum of a kinetic
energy-like term that is quadratic in the angular velocity estimation error and
an artificial potential energy-like term obtained from Wahba's cost function. A
filtering scheme is obtained by discrete-time stability analysis using a
suitable Lyapunov function. The analysis shows that the filtering scheme is
exponentially stable in the absence of measurement noise and the domain of
convergence is almost global. For a realistic evaluation of the scheme,
numerical experiments are conducted with inputs corrupted by bounded
measurement noise. Simulation results exhibit convergence of the estimated
states to a bounded neighborhood of the actual states.
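Wahba's problem, which supplies the attitude-determination step above, has a standard closed-form solution via the singular value decomposition (Markley's SVD method, shown here as a self-contained sketch rather than the paper's filtering scheme):

```python
import numpy as np

def solve_wahba(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem with the SVD method: find the rotation R
    minimizing sum_i w_i * ||b_i - R a_i||^2, where a_i are reference-frame
    vectors and b_i the corresponding body-frame measurements.
    """
    if weights is None:
        weights = np.ones(len(body_vecs))
    B = sum(w * np.outer(b, a)
            for w, b, a in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    # Enforce det(R) = +1 so the result is a proper rotation.
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

# Check: recover a known rotation from two noiseless vector measurements.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
body = [R_true @ a for a in refs]
R_est = solve_wahba(body, refs)
print(np.allclose(R_est, R_true, atol=1e-8))  # True
```

Two non-collinear vector measurements suffice to determine the attitude uniquely, which is why the estimator above requires "two or more" vectors.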
|
We show that there is a basis of the set of K\"{a}hler differentials of an
irreducible germ of holomorphic plane curve whose non-trivial elements
correspond to dicritical foliations. Indeed, we discuss several concepts of
generation for the semimodule of values of K\"{a}hler differentials of the
curve and provide bases of K\"{a}hler differentials for each of these concepts,
whose geometric properties are described. Moreover, we give an algorithmic
construction of the bases.
|
The lumped-parameter method (electrical analogy) is a quick and easy way to
model the human cardiovascular system. In this paper the lumped method is used
to simulate a complete model. It describes a 36-vessel model and the cardiac
system of the human body in enough detail to capture the hydrodynamic
parameters of the cardiovascular system. The paper also includes models of the
pulmonary circulation, the atria, and the left and right ventricles, with their
equivalent circuits. Exact modeling of the right- and left-ventricular
pressures, together with division of the ascending aorta into 27 segments,
increases the accuracy of our simulation. We show that the aortic pressure
computed from our circuit is close to pressures measured with advanced medical
instruments. We also show that the brachial pressure waveform is so close to
the aortic one that its pressure signal can be used in place of the aortic
pressure. Furthermore, obstructions in the ascending aorta and the brachial
artery, and their effects, are shown in several figures.
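The electrical analogy can be illustrated with the simplest such circuit, a two-element Windkessel (one resistor for peripheral resistance, one capacitor for arterial compliance); this is far smaller than the paper's 36-vessel network, and the parameter values below are illustrative orders of magnitude only:

```python
import numpy as np

R = 1.0   # peripheral resistance, mmHg*s/mL (illustrative)
C = 1.5   # arterial compliance, mL/mmHg (illustrative)

def aortic_flow(t, period=0.8, t_eject=0.3, q_peak=400.0):
    """Half-sine inflow during systolic ejection, zero in diastole (mL/s)."""
    t = t % period
    return q_peak * np.sin(np.pi * t / t_eject) if t < t_eject else 0.0

# Circuit equation: C dP/dt = Q(t) - P/R, integrated with forward Euler.
dt, T = 1e-4, 8.0
ts = np.arange(0.0, T, dt)
P = np.empty_like(ts)
P[0] = 80.0  # initial arterial pressure, mmHg
for i in range(1, len(ts)):
    q = aortic_flow(ts[i - 1])
    P[i] = P[i - 1] + dt * (q - P[i - 1] / R) / C

last_beat = P[ts >= T - 0.8]
print(f"systolic ~{last_beat.max():.0f} mmHg, diastolic ~{last_beat.min():.0f} mmHg")
```

Even this two-element circuit reproduces a physiological-looking pressure pulse with an exponential diastolic decay of time constant RC; the paper's multi-segment network refines this picture with wave propagation along the vessel tree.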
|
In the solar corona, magnetic helicity slowly and continuously accumulates in
response to plasma flows tangential to the photosphere and magnetic flux
emergence through it. Analyzing this transfer of magnetic helicity is key for
identifying its role in the dynamics of active regions (ARs). The
connectivity-based helicity flux density method was recently developed for
studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes
into account the 3D nature of magnetic helicity by explicitly using knowledge
of the magnetic field connectivity, which allows it to faithfully track the
photospheric flux of magnetic helicity. Because the magnetic field is not
measured in the solar corona, modeled 3D solutions obtained from force-free
magnetic field extrapolations must be used to derive the magnetic connectivity.
Different extrapolation methods can lead to markedly different 3D magnetic
field connectivities, thus questioning the reliability of the
connectivity-based approach in observational applications. We address these
concerns by applying this method to the isolated and internally complex AR
11158 with different magnetic field extrapolation models. We show that the
connectivity-based calculations are robust to different extrapolation methods,
in particular with regard to identifying regions of opposite magnetic helicity
flux. We conclude that the connectivity-based approach can be reliably used in
observational analyses and is a promising tool for studying the transfer of
magnetic helicity in ARs and relating it to their flaring activity.
|
In this paper, we investigate thermophysical characteristics of near-Earth
asteroid (341843) 2008 EV5, based on our improved Advanced Thermal Physical
Model (ATPM) by considering the contribution of sunlight-reflection for rough
surface, along with four wavebands observations from Wide-field Infrared Survey
Explorer (WISE) and the radar-derived shape model. Here we derive that 2008 EV5
has a relatively low thermal inertia of $\Gamma=110 _{-12}^{+40}\rm~J m^{-2}
s^{-1/2} K^{-1}$ but a high roughness fraction. The geometric albedo and
effective diameter are then constrained to be $p_v=0.095_{-0.003}^{+0.016}$ and
$D_{\rm eff}=431_{-33}^{+6}\rm~m$, respectively. The low thermal inertia
indicates that 2008 EV5 may have undergone sufficient space weathering over
secular evolution. The high roughness may resemble the appearances of Bennu and
Ryugu recently observed by spacecraft, whose surfaces are widely covered with
boulders. Moreover, we
numerically perform 1000 backward simulations of 2008 EV5's cloned orbits
within 1 Myr to explore its origin, and present a probability of $\sim6.1\%$
that the asteroid originates from the main belt. Finally, we estimate that the
mean grain size of the surface ranges from 0.58 to 1.3 mm, and infer that water
ice is unlikely to be found over most of the surface of 2008 EV5, although it
may exist at high latitudes near the polar regions.
|
To train deep learning models for vision-based action recognition of elders'
daily activities, we need large-scale activity datasets acquired under various
daily living environments and conditions. However, most public datasets used in
human action recognition either differ from or have limited coverage of elders'
activities in many aspects, making it challenging to recognize elders' daily
activities well by only utilizing existing datasets. Recently, such limitations
of available datasets have actively been compensated by generating synthetic
data from realistic simulation environments and using those data to train deep
learning models. In this paper, based on these ideas, we develop ElderSim, an
action simulation platform that can generate synthetic data on elders' daily
activities. For 55 kinds of frequent daily activities of the elders, ElderSim
generates realistic motions of synthetic characters with various adjustable
data-generating options, and provides different output modalities including RGB
videos, two- and three-dimensional skeleton trajectories. We then generate KIST
SynADL, a large-scale synthetic dataset of elders' activities of daily living,
from ElderSim and use these data in addition to real datasets to train three
state-of-the-art human action recognition models. In experiments following
several newly proposed scenarios that assume different real- and
synthetic-dataset configurations for training, we observe a noticeable
performance improvement from augmenting with our synthetic data. We also offer
guidance with insights for the effective utilization of synthetic data to help
recognize elders' daily activities.
|
In this article, we discuss metal-protein interactions in the Ag-lysozyme
complex by spectroscopic measurements. The analysis of the variation in
relative intensities of SERS bands reveals the orientation and the change in
conformation of the protein molecules on the Ag surface with time. The
interaction kinetics of metal-protein complexes has been analyzed over a period
of three hours via both Raman and absorption measurements. Our analysis
indicates that the Ag nanoparticles most likely interact with Trp-123 which is
in close proximity to Phe-34 of the lysozyme molecule.
|
Attaining ultra-reliable communication (URC) in fifth-generation (5G) and
beyond networks requires deriving the channel statistics in the ultra-reliable
region by modeling extreme events. Extreme value theory (EVT) has previously
been adopted in channel modeling to characterize the lower tail of received
powers in URC systems. In this paper, we propose a multivariate EVT
(MEVT)-based methodology for modeling the tail of the joint distribution of
multiple channels by characterizing the multivariate extremes of a
multiple-input multiple-output (MIMO) system. The proposed approach derives the
lower-tail statistics of the received power of each channel by using the
generalized Pareto distribution (GPD). The tail of the joint distribution is
then modeled as a function of the estimated GPD parameters based on two
approaches: a logistic-distribution approach, which uses the logistic
distribution to determine the dependency factors among the Fréchet-transformed
tail sequences and obtain a bivariate extreme value model, and a Poisson point
process approach, which estimates the probability measure function of the
Pickands angular components to model bivariate extreme values. Finally, the
validity of the proposed models is assessed by incorporating the mean
constraint on the probability measure function of the Pickands coordinates.
Based on data collected within the engine compartment of a Fiat Linea, we
demonstrate the superiority of the proposed methodology over conventional
extrapolation-based methods in providing the best fit to the multivariate
extremes.
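The univariate GPD step can be sketched on synthetic data (Gaussian stand-in samples rather than the Fiat Linea measurements, and a simple method-of-moments estimator in place of whatever fitting procedure the paper uses): since the lower tail is of interest, we fit the GPD to exceedances below a low threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for measured received powers (dB).
power_db = rng.normal(loc=-70.0, scale=5.0, size=100_000)

u = np.quantile(power_db, 0.05)      # low threshold (5% quantile)
exc = u - power_db[power_db < u]     # positive exceedances below u

# Method-of-moments GPD estimates for shape (xi) and scale (sigma):
#   xi = (1 - m^2/v) / 2,  sigma = m * (1 + m^2/v) / 2
# where m and v are the mean and variance of the exceedances.
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m**2 / v)
sigma = 0.5 * m * (1.0 + m**2 / v)
print(f"GPD fit below u={u:.1f} dB: xi={xi:.3f}, sigma={sigma:.3f}")
```

For light-tailed (Gaussian) inputs the fitted shape parameter comes out near or slightly below zero; in the multivariate method, per-channel fits like this one supply the marginal parameters before the dependency structure is modeled.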
|
A Hamiltonian graph $G$ of order $n$ is $k$-ordered, $2\leq k \leq n$, if for
every sequence $v_1, v_2, \ldots ,v_k$ of $k$ distinct vertices of $G$, there
exists a Hamiltonian cycle that encounters $v_1, v_2, \ldots , v_k$ in this
order. In this paper, answering a question of Ng and Schultz, we give a sharp
bound for the minimum degree guaranteeing that a graph is a $k$-ordered
Hamiltonian graph under some mild restrictions. More precisely, we show that
there are $\varepsilon, n_0> 0$ such that if $G$ is a graph of order $n\geq
n_0$ with minimum degree at least $\lceil \frac{n}{2} \rceil + \lfloor
\frac{k}{2} \rfloor - 1$ and $2\leq k \leq \varepsilon n$, then $G$ is a $k$-ordered
Hamiltonian graph. It is also shown that this bound is sharp for every $2\leq k
\leq \lfloor \frac{n}{2} \rfloor$.
|
Based on the framework of nonrelativistic Quantum Chromodynamics (NRQCD), we
carry out next-to-leading order (NLO) QCD corrections to the decay of $Z$ boson
into $\chi_c$ and $\chi_b$, respectively. The branching ratio of $Z \to
\chi_{c}(\chi_b)+X$ is about $10^{-5}(10^{-6})$. It is found that, for $Z \to
\chi_c(\chi_b)+X$, the single gluon fragmentation diagrams of $^3S_1^{[8]}$,
which first appear at the NLO level, can provide significant contributions,
leading to a great enhancement on the leading-order results. Consequently the
contributions from the color octet (CO) channels will account for a large
proportion of the total decay widths. Moreover, the introduction of the CO
processes will thoroughly change the color singlet (CS) predictions on the
ratios of $\Gamma_{\chi_{c1}}/\Gamma_{\chi_{c0}}$,
$\Gamma_{\chi_{c2}}/\Gamma_{\chi_{c0}}$,
$\Gamma_{\chi_{b1}}/\Gamma_{\chi_{b0}}$ and
$\Gamma_{\chi_{b2}}/\Gamma_{\chi_{b0}}$, which can be regarded as an
outstanding probe to distinguish the CO and CS mechanism. With regard to the CS
($^3P_J^{[1]}$) channels, the heavy quark pair associated processes serve as
the leading role, however, in the case of $\chi_b$, $Z \to
b\bar{b}[^3P_J^{[1]}]+g+g$ can also contribute significantly. Summing over all
the feeddown contributions from $\chi_{cJ}$ and $\chi_{bJ}$, respectively, we
find $\Gamma(Z \to J/\psi+X)|_{\chi_c-\textrm{feeddown}}=(0.28 - 2.4) \times
10^{-5}$ and $\Gamma(Z \to \Upsilon(1S)+X)|_{\chi_b-\textrm{feeddown}}=(0.15 -
0.49) \times 10^{-6}$.
|
Acoustic holographic lenses, also known as acoustic holograms, can change the
phase of a transmitted wavefront in order to shape and construct complex
ultrasound pressure fields, often for focusing the acoustic energy on a target
region. These lenses have been proposed for transcranial focused ultrasound
(tFUS) to create diffraction-limited focal zones that target specific brain
regions while compensating for skull aberration. Holograms for tFUS are
currently designed using time-reversal approaches in full-wave time-domain
numerical simulations. However, such simulations need time-consuming
computations, which severely limits the adoption of iterative optimization
strategies. Furthermore, in the time-reversal method, the number and
distribution of virtual sources can significantly influence the final sound
field. Because of the computational constraints, predicting these effects and
determining the optimal arrangement is challenging. This study introduces an
efficient method for designing acoustic holograms using a volumetric
holographic technique to generate focused fields inside the skull. The proposed
method combines a modified mixed-domain method for ultrasonic propagation with
a gradient descent iterative optimization algorithm. This approach enables
substantially faster holographic computation than previously reported
techniques. The iterative process uses explicitly defined loss functions to
bias the ultrasound field's optimization parameters to specific desired
characteristics, such as axial resolution, transversal resolution, coverage,
and focal region uniformity, while eliminating the uncertainty associated with
virtual sources in time-reversal techniques. Numerical studies are conducted on
four brain structures: the anterior insula, hippocampus, caudate nucleus, and
amygdala. The findings are further validated in underwater experiments with a
3D-printed skull phantom.
|
Optimal Volt/VAR control (VVC) in distribution networks relies on an
effective coordination between the conventional utility-owned mechanical
devices and the smart residential photovoltaic (PV) inverters. Typically, a
central controller carries out a periodic optimization and sends setpoints to
the local controller of each device. However, instead of tracking centrally
dispatched setpoints, smart PV inverters can cooperate on a much faster
timescale to reach optimality within a PV inverter group. To accommodate such
PV inverter groups in the VVC architecture, this paper proposes a bi-level
optimization framework. The upper-level determines the setpoints of the
mechanical devices to minimize the network active power losses, while the
lower-level represents the coordinated actions that the inverters take for
their own objectives. The interactions between these two levels are captured in
the bi-level optimization, which is solved using the Karush-Kuhn-Tucker (KKT)
conditions. This framework fully exploits the capabilities of the different
types of voltage regulation devices and enables them to cooperatively optimize
their goals. Case studies on typical distribution networks with field-recorded
data demonstrate the effectiveness and advantages of the proposed approach.
|
Event cameras offer promising advantages such as high dynamic range and low
latency, making them well-suited for challenging lighting conditions and
fast-moving scenarios. However, reconstructing 3D scenes from raw event streams
is difficult because event data is sparse and does not carry absolute color
information. To unlock its potential for 3D reconstruction, we propose the
first event-based generalizable 3D reconstruction framework, called EvGGS,
which reconstructs scenes as 3D Gaussians from only event input in a
feedforward manner and can generalize to unseen cases without any retraining.
This framework includes a depth estimation module, an intensity reconstruction
module, and a Gaussian regression module. These submodules are connected in a
cascade, and we train them jointly with a designed joint loss so that they
mutually reinforce one another. To facilitate related studies, we build a novel
event-based 3D dataset with various material objects and calibrated labels of
grayscale images, depth maps, camera poses, and silhouettes. Experiments show
that the jointly trained models significantly outperform those trained
individually. Our approach also outperforms all baselines in reconstruction
quality and in depth/intensity prediction, with satisfactory rendering speed.
|
Active gels made of cytoskeletal proteins are valuable materials with
attractive non-equilibrium properties such as spatial self-organization and
self-propulsion. At least four typical routes to spatial patterning have been
reported to date in different types of cytoskeletal active gels: bending and
buckling instabilities in extensile systems, and global and local contraction
instabilities in contractile gels. Here we report the observation of these four
instabilities in a single type of active gel and we show that they are
controlled by two parameters: the concentrations of ATP and depletion agent. We
demonstrate that as the ATP concentration decreases, the concentration of
passive motors increases until the gel undergoes a gelation transition. At this
point, buckling is selected over bending, while global contraction is favored
over local contraction. Our observations are consistent with a hydrodynamic
model of a viscoelastic active gel where the filaments are crosslinked with a
characteristic time that diverges as the ATP concentration decreases. Our work
thus provides a unified view of spatial instabilities in cytoskeletal active
matter.
|
The aim of this work is to put together two novel concepts from the theory of
integrable billiards: billiard ordered games and confocal billiard books.
Billiard books appeared recently in the work of Fomenko's school, in particular
of V. Vedyushkina. These more complex billiard domains are obtained by gluing
planar sets bounded by arcs of confocal conics along common edges. Such domains
are used in this paper to construct the configuration space for billiard
ordered games. We analyse dynamical and topological properties of the systems
obtained in that way.
|
We introduce a fractional Fokker-Planck equation (FFPE) for Levy flights in
the presence of an external field. The equation is derived within the framework
of the subordination of random processes which leads to Levy flights. It is
shown that the coexistence of anomalous transport and a potential displays a
regular exponential relaxation towards the Boltzmann equilibrium distribution.
The properties of the Levy-flight FFPE derived here are compared with earlier
findings for subdiffusive FFPE. The latter is characterized by a
non-exponential Mittag-Leffler relaxation to the Boltzmann distribution. In
both cases, which describe strange kinetics, the Boltzmann equilibrium is
reached and modifications of the Boltzmann thermodynamics are not required.
|
Clarifying the principles leading to complete decoherence is essential for
understanding information loss in black hole formation. We employ strictly
perturbative quantum master equations to compute the long-term decoherence rate
in infrared-divergent qubit dynamics. In moderately sub-Ohmic dissipation, we
find that this rate exceeds the Hubble constant, meaning that infrared
particles do not completely destroy quantum information at the universe's
current age. Deep sub-Ohmic dissipation leads to infrared-divergent recovery of
quantum coherence.
|
This paper describes our work which is based on discovering context for text
document categorization. The document categorization approach is derived from a
combination of a learning paradigm known as relation extraction and a technique
known as context discovery. We demonstrate the effectiveness of our
categorization approach using the Reuters-21578 dataset and synthetic
real-world data from the sports domain. Our experimental results indicate that the learned
context greatly improves the categorization performance as compared to
traditional categorization approaches.
|
This paper explores the application of Exploratory Data Analytics (EDA) in
the banking and finance domain, focusing on credit card usage and customer
churning. It presents a step-by-step analysis using EDA techniques such as
descriptive statistics, data visualization, and correlation analysis. The study
examines transaction patterns, credit limits, and usage across merchant
categories, providing insights into consumer behavior. It also considers the
influence of demographic factors such as age, gender, and income on usage patterns.
Additionally, the report addresses customer churning, analyzing churn rates and
factors such as demographics, transaction history, and satisfaction levels.
These insights help banking professionals make data-driven decisions, improve
marketing strategies, and enhance customer retention, ultimately contributing
to profitability.
|
In this article we introduce the topological study of codimension-1
foliations which admit contact or symplectic structures on the leaves. A
parametric existence h-principle for foliated contact structures is provided
for any cooriented foliation in a closed oriented 4-fold.
|
We present an efficient approximate message passing solver for the lifted
disjoint paths problem (LDP), a natural but NP-hard model for multiple object
tracking (MOT). Our tracker scales to very large instances that come from long
and crowded MOT sequences. Our approximate solver enables us to process the
MOT15/16/17 benchmarks without sacrificing solution quality and allows for
solving MOT20, which has been out of reach up to now for LDP solvers due to its
size and complexity. On all four of these standard MOT benchmarks we achieve
performance comparable to or better than current state-of-the-art methods,
including a tracker based on an optimal LDP solver.
|