We show that generalized broken fibrations in arbitrary dimensions admit
rank-2 Poisson structures compatible with the fibration structure. After
extending the notion of wrinkled fibration to dimension 6, we prove that these
wrinkled fibrations also admit compatible rank-2 Poisson structures. In the
cases with indefinite singularities we can provide these wrinkled fibrations in
dimension 6 with near-symplectic structures.
|
This paper presents an $O(\log\log \bar{d})$ round massively parallel
algorithm for $1+\epsilon$ approximation of maximum weighted $b$-matchings,
using near-linear memory per machine. Here $\bar{d}$ denotes the average degree
in the graph and $\epsilon$ is an arbitrarily small positive constant. Recall
that $b$-matching is the natural and well-studied generalization of the
matching problem where different vertices are allowed to have multiple (and
differing number of) incident edges in the matching. Concretely, each vertex
$v$ is given a positive integer budget $b_v$ and it can have up to $b_v$
incident edges in the matching. Previously, there were known algorithms with
round complexity $O(\log\log n)$, or $O(\log\log \Delta)$ where $\Delta$
denotes maximum degree, for $1+\epsilon$ approximation of weighted matching and
for maximal matching [Czumaj et al., STOC'18; Ghaffari et al., PODC'18; Assadi
et al., SODA'19; Behnezhad et al., FOCS'19; Gamlath et al., PODC'19], but these
algorithms do not extend to the more general $b$-matching problem.
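As an aside, the budget constraint defining $b$-matching is simple to state in code. The sketch below uses the classic sequential greedy heuristic (a 1/2-approximation when edges are scanned in decreasing weight order) purely to illustrate the problem definition; it is not the paper's $O(\log\log \bar{d})$-round MPC algorithm, and all identifiers are our own:

```python
def greedy_b_matching(edges, b):
    """edges: list of (weight, u, v); b: dict mapping vertex -> budget b_v."""
    load = {v: 0 for v in b}   # matched edges currently incident to each vertex
    matched = []
    # Scan edges in decreasing weight order (classic 1/2-approximation greedy).
    for w, u, v in sorted(edges, reverse=True):
        # The b-matching constraint: vertex v may have at most b[v] matched edges.
        if load[u] < b[u] and load[v] < b[v]:
            matched.append((w, u, v))
            load[u] += 1
            load[v] += 1
    return matched

edges = [(5, 'a', 'b'), (4, 'a', 'c'), (3, 'b', 'c'), (1, 'a', 'd')]
budgets = {'a': 2, 'b': 1, 'c': 1, 'd': 1}
print(greedy_b_matching(edges, budgets))  # vertex 'a' can take two edges
```

With all budgets equal to 1 this reduces to the ordinary matching problem.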
|
The addition formulae for KP $\tau$-functions, when evaluated at lattice
points in the KP flow group orbits in the infinite dimensional
Sato-Segal-Wilson Grassmannian, give infinite parametric families of solutions
to discretizations of the KP hierarchy. The CKP hierarchy may similarly be
viewed as commuting flows on the Lagrangian sub-Grassmannian of maximal
isotropic subspaces with respect to a suitably defined symplectic form.
When the $\tau$-functions are evaluated at a sublattice of points within the KP
orbit, the resulting discretization gives solutions both to the hyperdeterminantal
relations (or Kashaev recurrence) and the hexahedron (or Kenyon-Pemantle)
recurrence.
|
Among the greatest mysteries in cosmology are the flatness problem,
concerning the lack of curvature of the universe, and the homogeneity problem,
questioning why the universe is almost isotropic despite having regions that
are causally disconnected. These problems served as motivation for the theory
of inflation, which suggests a period of exponential expansion in the early
universe, and the inflationary origin of the universe can be traced by B-mode
polarization. In an effort to better understand the potential foreground
systematics, especially the levels of polarized dust emission, we queried the
Heiles catalog to produce a list of starlight polarization data in the
so-called "Southern Hole", which is an approximately $20\times20$ degree region
centered at RA: $00^h12^m00^s$ and DEC: $-59{\deg}18'00''$ that is being
examined by multiple CMB polarization experiments. Because the Galactic magnetic
field tends to dictate the orientation of dust grains, which in turn determines
how starlight is polarized, starlight polarization can be used to trace magnetic
fields. Therefore, to improve our understanding of the properties of this
region, we used this catalog, along with Gaia data as tracers of the
three-dimensional distribution of dust, as a potential indicator of magnetic
field orientation throughout the galaxy in the Southern Hole region. We then
analyzed these data with the hope that magnetic field data can be used to
create a template to aid in subtracting away the contamination of CMB B-mode
searches by polarized dust emission. While the results of the analysis are
promising, we found that the currently available data are severely inadequate
for the purpose of creating a template, thus demonstrating the need for
improved and more uniform coverage of the Southern Hole when it comes to
polarization measurements.
|
Traditionally, federated learning (FL) aims to train a single global model
while collaboratively using multiple clients and a server. Two natural
challenges that FL algorithms face are heterogeneity in data across clients and
collaboration of clients with {\em diverse resources}. In this work, we
introduce a \textit{quantized} and \textit{personalized} FL algorithm QuPeL
that facilitates collective training with heterogeneous clients while
respecting resource diversity. For personalization, we allow clients to learn
\textit{compressed personalized models} with different quantization parameters
depending on their resources. To this end, we first propose an algorithm for
learning quantized models through a relaxed optimization problem, in which the
quantization values are also optimized. When each client participating in
the (federated) learning process has different requirements of the quantized
model (both in value and precision), we formulate a quantized personalization
framework by introducing a penalty term for local client objectives against a
globally trained model to encourage collaboration. We develop an alternating
proximal gradient update for solving this quantized personalization problem,
and we analyze its convergence properties. Numerically, we show that optimizing
over the quantization levels improves performance, and we validate that
QuPeL outperforms both FedAvg and local training of clients in a heterogeneous
setting.
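The penalty-based personalization structure described above can be sketched in a toy form: scalar parameters, quadratic local objectives solved in closed form, and no quantization. This only illustrates how the penalty term couples local models theta_i to the globally trained model w; it is not the paper's alternating proximal-gradient algorithm, and all names are illustrative:

```python
def personalized_round(local_opts, w, lam):
    """One round: client i minimizes f_i(t) + (lam / 2) * (t - w) ** 2, with
    toy quadratic objectives f_i(t) = (t - local_opts[i]) ** 2, which gives
    the closed form theta_i = (2 * t_i + lam * w) / (2 + lam)."""
    thetas = [(2 * t_i + lam * w) / (2 + lam) for t_i in local_opts]
    return thetas, sum(thetas) / len(thetas)  # server averages into new w

w = 0.0
for _ in range(20):
    thetas, w = personalized_round([1.0, 3.0, 8.0], w, lam=1.0)
# Large lam drives every theta_i toward the shared w; small lam lets each
# client stay near its own local optimum.
```

In this toy instance the global model converges to the average of the local optima, while each personalized model sits between its own optimum and the global model, which is the collaboration-versus-personalization tradeoff the penalty term encodes.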
|
Mobile robot navigation in complex and dynamic environments is a challenging
but important problem. Reinforcement learning approaches fail to solve these
tasks efficiently due to reward sparsity, temporal complexity, and the high
dimensionality of the sensorimotor spaces inherent in such problems.
We present a novel approach to train action policies to acquire navigation
skills for wheel-legged robots using deep reinforcement learning. The policy
maps height-map image observations to motor commands to navigate to a target
position while avoiding obstacles. We propose to acquire the multifaceted
navigation skill by learning and exploiting a number of manageable navigation
behaviors. We also introduce a domain randomization technique to improve the
versatility of the training samples. We demonstrate experimentally a
significant improvement in terms of data-efficiency, success rate, robustness
against irrelevant sensory data, and also the quality of the maneuver skills.
|
We study the scaling of the magnetic susceptibility in the square Ising model
based upon the delta-expansion in the high temperature phase. The
susceptibility chi is expressed in terms of the mass M and expanded in powers
of 1/M. The dilation around M=0 given by the delta expansion, together with the
parametric extension of the ratio of derivatives of chi,
chi^{(ell+1)}/chi^{(ell)}, is used as a test function for the estimation of the
critical exponent gamma with no bias from information about the critical
temperature. Estimation is done with the help of the principle of minimum
sensitivity, and detailed analysis reveals that the ell=0,1 cases provide
accurate estimates. The critical exponent of the sub-leading scaling term is
also estimated.
|
We constructed a unitary semigroup $(e^{tA})_{t \geq 0}$ on a Hilbert space
and an orthogonal projection $P$ such that the limit $\lim_{n \to \infty} [
e^{\frac{t}{n}A}P ]^n$ does not exist strongly. A similar example with a
positive contractive semigroup and positive contractive projection on $L_p$ is
also constructed.
|
Atomic force microscopy is based on the tip-sample interaction, which is
determined by the properties of both tip and sample. Unfortunately, particularly
under ambient conditions, the tip as well as the sample are contaminated, and it
is not clear how this contamination may affect data in atomic force microscopy
(AFM) applications. In the present work we propose to use on the one hand AFM
imaging of the cantilever chips and on the other hand multidimensional AFM
spectroscopy techniques to characterize the state of contamination of the tip
sample system. We find that typically AFM cantilevers may be severely
contaminated when taken from typical packaging boxes that have been opened for
a long time. In addition, by acquiring spectroscopy data as a function of
tip-sample voltage and tip-sample distance, we are able to determine the
Hamaker constant of the system, which depends strongly on the contamination
within the tip-sample system. This method allows for in-situ characterization
of the tip-sample system using only AFM techniques.
|
Energy structure of the Peierls gap in orthorhombic TaS$_3$ is examined by
spectral study of photoconduction. The gap edge and energy levels inside the
Peierls gap are observed. The amplitude of the energy levels is found to depend
on both the temperature and the electric field. The electric field of the order
of 10 V/cm affects the energy levels and leads to the redistribution of
intensity between peaks. The small value of the electric field indicates
participation of the collective state in formation of the energy levels inside
the Peierls gap.
|
Indexing of static and dynamic sets is fundamental to a large set of
applications such as information retrieval and caching. Denoting the
characteristic vector of the set by B, we consider the problem of encoding sets
and multisets to support approximate versions of the rank(i) (i.e.,
computing sum_{j <= i} B[j]) and select(i) (i.e., finding min{p | rank(p) >= i})
operations. We study multiple types of approximations (allowing an error in the
query or the result) and present lower bounds and succinct data structures for
several variants of the problem. We also extend our model to sliding windows,
in which we process a stream of elements and compute suffix sums. This is a
generalization of the window summation problem that allows the user to specify
the window size at query time. Here, we provide an algorithm that supports
updates and queries in constant time while requiring just (1+o(1)) factor more
space than the fixed-window summation algorithms.
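For reference, the exact (non-approximate, non-succinct) versions of the two operations defined above follow directly from prefix sums; the paper's contribution is the approximate, space-efficient setting, so this sketch only pins down the semantics of rank and select (0-indexed here):

```python
import itertools

class RankSelect:
    """Exact rank/select over a characteristic bit-vector B (0-indexed)."""

    def __init__(self, B):
        # prefix[i] = sum_{j <= i} B[j]
        self.prefix = list(itertools.accumulate(B))

    def rank(self, i):
        """Number of set bits in B[0..i]."""
        return self.prefix[i]

    def select(self, i):
        """min{p | rank(p) >= i}; binary search is valid because prefix
        sums are non-decreasing."""
        lo, hi = 0, len(self.prefix) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if self.prefix[mid] >= i:
                hi = mid
            else:
                lo = mid + 1
        return lo

rs = RankSelect([0, 1, 1, 0, 1])
print(rs.rank(2), rs.select(3))  # -> 2 4
```

The succinct structures in the paper trade the exactness of these answers for space close to the information-theoretic minimum.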
|
We address Gillis' recent criticism [arXiv:1506.05795] of a series of papers
(by different combinations of the present authors) on formulations of Bell's
theorem. Those papers were intended to address an unfortunate gap in communication
between two broad camps in the quantum foundations community that we identify
as "operationalists" and "realists". Here, we once again urge the readers to
approach the question from an unbiased standpoint, and explain that Gillis'
criticism draws too heavily on the philosophical inclinations of one side of
that debate -- the realist camp. As part of that explanation we discuss
intuition versus proof, look again at Bell's formalizations of locality, and
correct misstatements by Gillis of our views, and those of Bell and Einstein.
|
The classification of gigapixel-sized whole slide images (WSIs), digital
representations of histological slides obtained via a high-resolution scanner,
faces significant challenges associated with the meticulous and time-consuming
nature of fine-grained labeling. While weakly-supervised multiple instance
learning (MIL) has emerged as a promising approach, current MIL methods are
constrained by their limited ability to leverage the wealth of information
embedded within unlabeled WSIs. This limitation often necessitates training MIL
feature aggregators from scratch after the feature extraction process,
hindering efficiency and accuracy. To address this, we propose PreMix, which
extends the general MIL framework by pre-training the MIL aggregator with an
intra-batch slide mixing approach.
Specifically, PreMix incorporates Barlow Twins Slide Mixing during
pre-training, enhancing its ability to handle diverse WSI sizes and maximizing
the utility of unlabeled WSIs. Combined with Mixup and Manifold Mixup during
fine-tuning, PreMix achieves a mean of 4.7% performance improvement over the
baseline MIL framework, the hierarchical image pyramid transformer (HIPT), on
the Camelyon16 dataset. The observed improvement across a range of active
learning acquisition functions and WSI-labeled training budgets highlights the
framework's adaptability to diverse datasets and varying resource constraints.
Ultimately, PreMix paves the way for more efficient and accurate WSI
classification under limited WSI-labeled datasets, encouraging the broader
adoption of unlabeled WSI data in histopathological research. The code is
available at https://anonymous.4open.science/r/PreMix
|
Network traffic modeling is a critical problem for urban applications, mainly
because of their diversity and node density. As wireless sensor networks are
central to the development of smart cities, careful consideration of the traffic
model helps choose appropriate protocols and adapt network parameters to reach
the best performance on the energy-latency tradeoff. In this paper, we
compare the performance of two off-the-shelf medium access control protocols on
two different kinds of traffic models, and then evaluate their application-end
information delay and energy consumption while varying traffic parameters and
network density. From the simulation results, we highlight some limits induced
by network density and occurrence frequency of event-driven applications. When
it comes to real-time urban services, protocol selection should be taken into
account, even dynamically, with special attention to the energy-delay tradeoff.
To this end, we provide several insights on parking sensor networks.
|
(Abridged) WASP-5b is a highly irradiated dense hot Jupiter orbiting a G4V
star every 1.6 days. We observed two secondary eclipses of WASP-5b in the J, H
and K bands simultaneously. Thermal emission of WASP-5b is detected in the J
and K bands. The retrieved planet-to-star flux ratios in the J and K bands are
0.168 +0.050/-0.052% and 0.269+/-0.062%, corresponding to brightness
temperatures of 2996 +212/-261K and 2890 +246/-269K, respectively. No thermal
emission is detected in the H band, with a 3-sigma upper limit of 0.166%,
corresponding to a maximum temperature of 2779K. On the whole, our J, H, K
results can be explained by a roughly isothermal temperature profile of ~2700K
in the deep layers of the planetary dayside atmosphere that are probed at these
wavelengths. Together with Spitzer observations, which probe higher layers that
are found to be at ~1900K, a temperature inversion is ruled out in the range of
pressures probed by the combined data set. While an oxygen-rich model is unable
to explain all the data, a carbon-rich model provides a reasonable fit but
violates energy balance.
|
Understanding neural networks is becoming increasingly important. Over the
last few years different types of visualisation and explanation methods have
been proposed. However, none of them explicitly considered the behaviour in the
presence of noise and distracting elements. In this work, we will show how
noise and distracting dimensions can influence the result of an explanation
model. This gives new theoretical insight to aid selection of the most
appropriate explanation model within the deep-Taylor decomposition framework.
|
Current methods for capturing circulating tumor cells (CTCs) are based on the
overexpression of cytokeratin (CK) or epithelial cell-adhesion molecule (EpCAM)
on cancer cells. However, during the process of metastasis, tumor cells undergo
epithelial to mesenchymal transition (EMT) that can lead to the loss of
CK/EpCAM expression. Therefore, it is vital to develop a capturing technique
independent of CK/EpCAM expression on the cancer cell. To develop this
technique, it is important to identify common secondary oncogenic markers
overexpressed on tumor cells before and after EMT. We analyzed the biomarker
expression levels in tumor cells before and after EMT, and found two common
proteins, human epidermal growth factor receptor 2 (Her2) and epidermal growth
factor receptor (EGFR), whose levels remained unaffected. We therefore synthesized
immunomagnetic iron nanocubes covalently conjugated with antibodies of Her2 or
EGFR to capture cancer cells irrespective of the EMT status. The nanocubes
showed high specificity (6 to 9 fold) in isolating the cancer cells of interest
from a mixture of cells spiked in serum. We characterized the captured cells
for identifying their EMT status. Thus, we believe the results presented here
would help in the development of novel strategies for capturing both primary
and metastatic cancer cells from patients' blood to develop an effective
treatment plan.
|
A novel quantum dynamical model based on the dissipative quantum dynamics of
open quantum systems is presented. It allows the treatment of both
deep-inelastic processes and quantum tunneling (fusion) within a fully quantum
mechanical coupled-channels approach. Model calculations show the transition
from pure state (coherent) to mixed state (decoherent and dissipative) dynamics
during a near-barrier nuclear collision. Energy dissipation, due to
irreversible decay of giant-dipole excitations of the interacting nuclei,
results in hindrance of quantum tunneling.
|
Bidirectional associative memory (BAM) is a kind of artificial neural
network used to memorize and retrieve heterogeneous pattern pairs. Many efforts
have been made to improve BAM from the viewpoint of computer applications,
but few theoretical studies have been done. We investigated the theoretical
characteristics of BAM using a framework of statistical-mechanical analysis. To
investigate the equilibrium state of BAM, we applied self-consistent signal to
noise analysis (SCSNA) and obtained macroscopic parameter equations and the
relative capacity. Moreover, to investigate not only the equilibrium state but
also the retrieval process of reaching the equilibrium state, we applied
statistical neurodynamics to the update rule of BAM and obtained evolution
equations for the macroscopic parameters. These evolution equations are
consistent with the results of SCSNA in the equilibrium state.
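A minimal bipolar BAM with Hebbian storage (Kosko-style) makes the retrieval dynamics analyzed above concrete; the statistical-mechanical analysis concerns the large-network limit of exactly this kind of alternating update rule. The tiny patterns here are illustrative only:

```python
def sign(v):
    # Bipolar threshold; ties resolved to +1.
    return [1 if x >= 0 else -1 for x in v]

def train(pairs, n, m):
    # Hebbian weight matrix W[i][j] = sum_k x_k[i] * y_k[j]
    return [[sum(x[i] * y[j] for x, y in pairs) for j in range(m)]
            for i in range(n)]

def recall(W, x, steps=5):
    # Alternate y = sgn(W^T x) and x = sgn(W y) until (hopefully) a fixed point.
    for _ in range(steps):
        y = sign([sum(W[i][j] * x[i] for i in range(len(x)))
                  for j in range(len(W[0]))])
        x = sign([sum(W[i][j] * y[j] for j in range(len(y)))
                  for i in range(len(x))])
    return x, y

pairs = [([1, -1, 1, -1], [1, 1, -1]), ([-1, -1, 1, 1], [-1, 1, 1])]
W = train(pairs, 4, 3)
noisy = [1, 1, 1, -1]            # first stored x-pattern with one bit flipped
x, y = recall(W, noisy)          # recovers the stored pair (x_1, y_1)
```

The macroscopic order parameters in the text (overlaps with stored patterns, noise variance) describe this update rule when the number of neurons and stored pairs both grow large.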
|
Computational models of collective behavior in birds have allowed us to infer
interaction rules directly from experimental data. Using a generic form of
these rules we explore the collective behavior and emergent dynamics of a
simulated swarm. For a wide range of flock size and interaction extent (the
fixed number of neighbors with which an individual will interact) we find that
the computational collective is inherently stable: individuals are attracted
to one another and will position themselves at a preferred distance from their
fixed neighbors within a rigid lattice. Nonetheless, the irregular overall
shape of the flock, coupled with the need for individuals on the boundary to
move towards their neighbors creates a torque which leads the flock to rotate
and then meander. We argue that this "rolling meander" is a very good proxy for
real collective behavior in animal species and yet arises from a simple
homogeneous and deterministic rule for interaction. Rather than introduce
leaders, which have already been shown, quite straightforwardly, to drive
collective swarms such as this, we introduce a small number of "followers".
Each follower is bound to consider a random fixed individual to be among their
neighbors, irrespective of actual metric distance between them. We find that
the introduction of a small number of such followers causes a phase transition
that quickly leads to instability in the flock structure (as no stable
configuration arises) and the previously rigid crystalline interaction among
neighbors now becomes fluid: the distance between neighbors decreases, the
flock ceases to rotate and meanders less.
|
Active particles with their characteristic feature of self-propulsion are
regarded as the simplest models for motility in living systems. The
accumulation of active particles in low activity regions has led to the general
belief that chemotaxis requires additional features and at least a minimal
ability to process information and to control motion. We show that
self-propelled particles display chemotaxis and move into regions of higher
activity, if the particles perform work on passive objects, or cargo, to which
they are bound. The origin of this cooperative chemotaxis is the exploration of
the activity gradient by the active particle when bound to a load, resulting in
an average excess force on the load in the direction of higher activity. Using
a minimalistic theoretical model, we capture the most relevant features of
these active-passive dimers and in particular we predict the crossover between
anti-chemotactic and chemotactic behaviour. Moreover, we show that merely
connecting active particles into chains is sufficient to obtain the crossover
from anti-chemotaxis to chemotaxis with increasing chain length. Such an active
complex is capable of moving up an activity gradient, such as that provided by a
fuel gradient, and of accumulating where the fuel concentration is at its
maximum. The observed transition is of significance to proto-forms of life,
enabling them to locate a source of nutrients even in the absence of any
supporting sensorimotor apparatus.
|
In this paper, we investigate the uplink transmissions in low-power wide-area
networks (LPWAN) where the users are self-powered by the energy harvested from
the ambient environment. Given their potential to support diverse
Internet-of-Things (IoT) applications, we focus on long range (LoRa) networks
where the LoRa users are using the harvested energy to transmit data to a
gateway via different spreading codes. Precisely, we study the throughput
fairness optimization problem for LoRa users by jointly optimizing the
spreading factor (SF) assignment, energy harvesting (EH) time duration, and the
transmit power of LoRa users. First, through examination of the various
permutations of collisions among users, we derive a general expression of the
packet collision time between LoRa users, which depends on the SFs and EH
duration requirements. Then, after reviewing prior SF allocation work, we
develop two types of algorithms: ones that ensure fair SF assignment, and
purposefully 'unfair' allocation schemes for the LoRa users. Our results
unearth three new findings. Firstly, we demonstrate that, to maximize the
minimum rate, the unfair SF allocation algorithm outperforms the other
approaches. Secondly, considering the derived expression of packet collision
between simultaneous users, we are now able to improve the performance of the
minimum rate of LoRa users and show that it is protected from inter-SF
interference which occurs between users with different SFs. That is, imperfect
SF orthogonality has no impact on minimum rate performance. Finally, we have
observed that co-SF interference is the main limitation in the throughput
performance, and not the energy scarcity.
|
We consider the long-standing problem of predicting the hierarchical
clustering amplitudes $S_p$ in the strongly non-linear regime of gravitational
evolution. N-body results for the non-linear evolution of the bispectrum (the
Fourier transform of the three-point density correlation function) suggest a
physically motivated ansatz that yields the strongly non-linear behavior of the
skewness, $S_3$, starting from leading-order perturbation theory. When
generalized to higher-order ($p>3$) polyspectra or correlation functions, this
ansatz leads to a good description of non-linear amplitudes in the strongly
non-linear regime for both scale-free and cold dark matter models. Furthermore,
these results allow us to provide a general fitting formula for the non-linear
evolution of the bispectrum that interpolates between the weakly and strongly
non-linear regimes, analogous to previous expressions for the power spectrum.
|
P\'olya's random walk theorem states that a random walk on a $d$-dimensional
grid is recurrent for $d=1,2$ and transient for $d\ge3$. We prove a version of
P\'olya's random walk theorem for non-backtracking random walks. Namely, we
prove that a non-backtracking random walk on a $d$-dimensional grid is
recurrent for $d=2$ and transient for $d=1$, $d\ge3$. Along the way, we prove
several useful general facts about non-backtracking random walks on graphs. In
addition, our proof includes an exact enumeration of the number of closed
non-backtracking random walks on an infinite 2-dimensional grid. This
enumeration suggests an interesting combinatorial link between non-backtracking
random walks on grids, and trinomial coefficients.
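The non-backtracking walk itself is easy to simulate, which helps build intuition for the recurrence/transience dichotomy (the result in the text is of course proved analytically, via the enumeration of closed walks). A small illustrative simulation on the 2-dimensional grid, where at each step the walker chooses uniformly among the 3 moves that do not undo the previous step:

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def nb_walk(n, rng):
    """An n-step non-backtracking walk on Z^2 starting at the origin;
    the first step is uniform over all 4 directions."""
    pos, prev = (0, 0), None
    path = [pos]
    for _ in range(n):
        choices = [s for s in STEPS
                   if prev is None or s != (-prev[0], -prev[1])]
        step = rng.choice(choices)
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
        prev = step
    return path

rng = random.Random(0)
# Crude Monte Carlo estimate of the probability of returning to the
# origin within 100 steps (recurrence says this tends to 1 as the
# horizon grows, but convergence is slow).
returns = sum((0, 0) in nb_walk(100, rng)[1:] for _ in range(2000)) / 2000
```

The non-backtracking property is equivalent to the path never satisfying path[k+2] == path[k], which the test below checks directly.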
|
We present a conjecture for the leading $1/N$ anomalous dimension of the
scalar primary operator in $U(N)_k$ Chern-Simons theories coupled to a single
fundamental field, to all orders in the 't Hooft coupling $\lambda=\frac{N}{k}$.
Following this we compute the anomalous dimension of the scalar in a Regular
Bosonic theory perturbatively at two-loop order and demonstrate that it matches
exactly with the result predicted by our conjecture. We also show that our
proposed expression for the anomalous dimension is consistent with all other
existing two-loop perturbative results, which constrain its form at both weak
and strong coupling thanks to the bosonization duality. Furthermore, our
conjecture passes a novel non-trivial all-loop test, which provides strong
evidence for its consistency.
|
The `Weyl symmetric functions' studied here naturally generalize classical
symmetric (polynomial) functions, and `Weyl bialternants,' sometimes also
called Weyl characters, analogize the Schur functions. For this generalization,
the underlying symmetry group is a finite Weyl group. A `splitting poset' for a
Weyl bialternant is an edge-colored ranked poset possessing a certain
structural property and a natural weighting of its elements so that the
weighted sum of poset elements is the given Weyl bialternant. Connected such
posets are of combinatorial interest in part because they are rank symmetric
and rank unimodal and have nice quotient-of-product expressions for their rank
generating functions. Supporting graphs of weight bases for irreducible
semisimple Lie algebra representations provide one large family of examples.
However, many splitting posets can be obtained outside of this Lie theoretic
context. This monograph provides a tutorial on Weyl bialternants / Weyl
symmetric functions and splitting posets that is largely self-contained and
independent of Lie algebra representation theory. New results are also
obtained.
|
The Target Identification by Enzymes (TIE) problem aims to identify the set of
enzymes in a given metabolic network, such that their inhibition eliminates a
given set of target compounds associated with a disease while incurring minimum
damage to the rest of the compounds. This is an NP-complete problem, and thus
optimal solutions using classical computers fail to scale to large metabolic
networks. In this paper, we consider the TIE problem for identifying drug
targets in metabolic networks. We develop the first quantum optimization
solution, called QuTIE (Quantum optimization for Target Identification by
Enzymes), to this NP-complete problem. We do that by developing an equivalent
formulation of the TIE problem in Quadratic Unconstrained Binary Optimization
(QUBO) form, then mapping it to a logical graph, which is then embedded on a
hardware graph on a quantum computer. Our experimental results on 27 metabolic
networks from Escherichia coli, Homo sapiens, and Mus musculus show that QuTIE
yields solutions which are optimal or almost optimal. Our experiments also
demonstrate that QuTIE can successfully identify enzyme targets already
verified in wet-lab experiments for 14 major disease classes.
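For readers unfamiliar with the QUBO form that QuTIE targets: a QUBO instance asks for a binary vector x minimizing x^T Q x. The brute-force evaluator below illustrates the objective form only, on a hypothetical toy matrix; the actual TIE-to-QUBO mapping is the paper's contribution and is not reproduced here:

```python
from itertools import product

def qubo_value(Q, x):
    """Objective x^T Q x for binary x (note x_i * x_i == x_i)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def qubo_brute_force(Q):
    """Exhaustive minimization; only feasible for tiny n, which is exactly
    why quantum annealers are of interest for large instances."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_value(Q, x))

# Toy upper-triangular instance: diagonal entries are linear costs,
# off-diagonal entries are pairwise couplings (penalties).
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = qubo_brute_force(Q)   # selects variables 0 and 2, avoiding couplings
```

On quantum hardware the same objective is embedded as an Ising model on the machine's hardware graph, as the text describes.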
|
We study the numerical reconstruction problem in acousto-electric tomography
(AET) of recovering the conductivity distribution in a bounded domain from
multiple interior power density data. We propose a Kaczmarz-type
Two-Point-Gradient-$\Theta$ (TPG-$\Theta$) method with a general convex penalty
term $\Theta$; the algorithm can be utilized in the AET problem for recovering
sparse and discontinuous conductivity distributions. We establish the
convergence of this iteratively regularized method. Extensive numerical
experiments are
presented to illustrate the feasibility and effectiveness of the proposed
approach.
|
Dark matter (DM) is added to the Froggatt-Nielsen (FN) mechanism, and
conditions for its successful freezeout identified. Requesting the FN scale
$\Lambda_{\text{FN}}$ to be the cutoff of the theory renders freezeout
scenarios surprisingly few. Fermionic DM is typically charged under
$U(1)_{\text{FN}}$, with the dominant annihilation channel being a CP-even
flavon plus a CP-odd flavon. A minimal case is when the DM-flavon coupling
strength is
$\mathcal{O}(1)$, with several implications: (1) the DM mass is
$\mathcal{O}$(100 GeV - 1 TeV), thanks to the WIMP coincidence, (2) requiring
perturbativity of couplings puts both a lower and an upper limit on the flavor scale,
2 TeV $\lesssim \Lambda_{\text{FN}} \lesssim 14~$TeV, on account of its
relation to DM mass and couplings, (3) DM is a "secluded WIMP" effectively
hidden from collider and direct detection searches. Limits on the masses of
dark matter and mediators from kaon mixing measurements constitute the best
constraints, surpassing Xenon1T, Fermi-LAT, and the LHC. Future direct
detection searches, and collider searches for missing energy plus a single
jet/bottom/top, are promising avenues for discovery.
|
Diabetic retinopathy (DR) grading is crucial in determining the adequate
treatment and follow up of patients, but the screening process can be tiresome
and prone to errors. Deep learning approaches have shown promising performance
as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders
the clinical application. We propose DR$\vert$GRADUATE, a novel deep
learning-based DR grading CAD system that supports its decision by providing a
medically interpretable explanation and an estimation of how uncertain that
prediction is, allowing the ophthalmologist to measure how much that decision
should be trusted. We designed DR$\vert$GRADUATE taking into account the
ordinal nature of the DR grading problem. A novel Gaussian-sampling approach
built upon a Multiple Instance Learning framework allows DR$\vert$GRADUATE to
infer an image grade associated with an explanation map and a prediction
uncertainty while being trained only with image-wise labels. DR$\vert$GRADUATE
was trained on the Kaggle training set and evaluated across multiple datasets.
In DR grading, a quadratic-weighted Cohen's kappa (QWK) between 0.71 and 0.84
was achieved in five different datasets. We show that high QWK values occur for
images with low prediction uncertainty, thus indicating that this uncertainty
is a valid measure of the predictions' quality. Further, bad quality images are
generally associated with higher uncertainties, showing that images not
suitable for diagnosis indeed lead to less trustworthy predictions.
Additionally, tests on unfamiliar medical image data types suggest that
DR$\vert$GRADUATE allows outlier detection. The attention maps generally
highlight regions of interest for diagnosis. These results show the great
potential of DR$\vert$GRADUATE as a second-opinion system in DR severity
grading.
|
It was shown in Phys. Rev. B 92, 085409 (2015) that the dynamics of a
pair of electrons in graphene can be mapped onto that of a single particle with
negative effective mass, leading to bound states of positive energy despite the
formally repulsive interaction. However, this conclusion was based on the
analysis of the two-particle problem, neglecting the role of the Dirac sea and
many-body effects. The two dominant such effects at zero temperature are
screening of the Coulomb interaction by the Dirac sea, and reduction of the
available phase space due to Pauli blocking of transitions into the states
below the Fermi level. We show that these effects result in strong
renormalization of the binding energy, but do not destroy the metastable
states. Thus the binding energies are strongly dependent on the chemical
potential owing to the combined effects of screening and Pauli blocking. Hence,
the quasibound resonances can be tuned by electrostatic doping.
|
We introduce an asymmetric distance in the space of learning tasks, and a
framework to compute their complexity. These concepts are foundational for the
practice of transfer learning, whereby a parametric model is pre-trained for a
task, and then fine-tuned for another. The framework we develop is
non-asymptotic, captures the finite nature of the training dataset, and allows
distinguishing learning from memorization. It encompasses, as special cases,
classical notions from Kolmogorov complexity, Shannon, and Fisher Information.
However, unlike some of those frameworks, it can be applied to large-scale
models and real-world datasets. Our framework is the first to measure
complexity in a way that accounts for the effect of the optimization scheme,
which is critical in Deep Learning.
|
In the past decade, there has been a systematic investigation of
symmetry-protected topological (SPT) phases in interacting fermion systems.
Specifically, by utilizing the concept of equivalence classes of finite-depth
fermionic symmetric local unitary (FSLU) transformations and the fluctuating
decorated symmetry domain wall picture, a large class of fixed-point wave
functions have been constructed for fermionic SPT (FSPT) phases. Remarkably,
this construction coincides with the Atiyah-Hirzebruch spectral sequence,
enabling a complete classification of FSPT phases. However, unlike bosonic SPT
phases, the stacking group structure in fermion systems proves to be much more
intricate. The construction of fixed-point wave functions does not explicitly
provide this information. In this paper, we employ FSLU transformations to
investigate the stacking group structure of FSPT phases. Specifically, we
demonstrate how to compute stacking FSPT data from the input FSPT data in each
layer, considering both unitary and anti-unitary symmetry, up to 2+1
dimensions. As concrete examples, we explicitly compute the stacking group
structure for crystalline FSPT phases in all 17 wallpaper groups and the
mixture of wallpaper groups with onsite time-reversal symmetry using the
fermionic crystalline equivalence principle. Importantly, our approach can be
readily extended to higher dimensions, offering a versatile method for
exploring the stacking group structure of FSPT phases.
|
Employee transportation is of major importance for large corporations.
Employees typically commute to their workplaces either by personal vehicle or
by public transit. In this work, we take the role of a third-party company
entrusted with organizing the transportation of employees whose workplace is
located within the grey zone. This zone only permits the entry of
electric/hybrid vehicles and buses. We encourage employees to carpool and
provide bus services for those who do not. The primary objectives of this
work are to minimize the number of vehicles rented by the third party, to
promote carpooling among employees, to increase employee satisfaction, and to
reduce the environmental damage caused by vehicle gasoline consumption. To
solve the model presented in this study, the epsilon-constraint method is
used for small-scale instances, while NSGA-II is introduced as a powerful
meta-heuristic tailored to large-scale scenarios. Computational experiments
confirm that the proposed models can be used effectively by companies to
reduce transportation costs.
|
The energy and angular dependence of double differential cross sections was
measured for p, d, t, He, Li, Be, and B isotopes produced in collisions of 1.2
and 1.9 GeV protons with a Au target. The shapes of the spectra and of the
angular distributions change very little over the beam energy range from 1.2
to 2.5 GeV; however, the absolute value of the cross sections increases for all
ejectiles. A phenomenological model of two emitting, moving sources reproduces
the spectra and angular distributions of intermediate mass fragments very well.
Double differential cross sections for light charged particles (LCP) were
analyzed within the framework of the microscopic intranuclear cascade (INC)
model with coalescence of nucleons and a statistical model for the evaporation
of particles from excited residual nuclei. The energy and angular dependencies
of the data do not agree satisfactorily with the predictions of microscopic
intranuclear cascade calculations for protons, nor with coalescence
calculations for the other LCP.
Phenomenological inclusion of another reaction mechanism - emission of LCP from
a "fireball", i.e., fast and hot moving source - combined with the microscopic
model calculations of INC, coalescence and evaporation of particles leads to
very good description of the data. It was found that nonequilibrium processes
are very important for the production of LCP: they account for 40-80% of the
total cross sections, depending on the emitted particle. Coalescence and
"fireball" emission give comparable contributions to the cross sections, with
the exception of the 3He data, where coalescence clearly dominates. The ratio
of the sum of all nonequilibrium processes to those proceeding through the
stage of statistical equilibrium is almost unchanged over the beam energy
range from 1.2 GeV to 2.5 GeV for all light charged particles.
|
The Grundy number of a graph G is the maximum number k of colors used to
color the vertices of G such that the coloring is proper and every vertex
colored with color i is adjacent to at least one vertex colored with color j,
for each j < i. In this paper we give bounds for the Grundy number of some
graphs and of Cartesian products of graphs. In particular, we determine the
exact value of this parameter for n-dimensional meshes and some n-dimensional
toroidal meshes. Finally, we present an algorithm to generate all graphs with
a given Grundy number.
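The worst-case-greedy characterization of the Grundy number can be checked by brute force on tiny graphs; a minimal illustrative sketch (not from the paper; graph and helper names are hypothetical):

```python
from itertools import permutations

def greedy_colors(adj, order):
    # first-fit coloring: each vertex gets the smallest color
    # not already used by its colored neighbours
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    # Grundy number = worst case of first-fit over all vertex orders
    # (brute force, suitable only for very small graphs)
    return max(greedy_colors(adj, order) for order in permutations(adj))

# example: the path on 4 vertices has Grundy number 3
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For instance, the natural left-to-right order on the path uses only 2 colors, while the order (0, 3, 2, 1) forces a third.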
|
In a groundbreaking work, Duplantier, Miller and Sheffield showed that
subcritical Liouville quantum gravity (LQG) coupled with Schramm-Loewner
evolutions (SLE) can be described by the mating of two continuum random trees.
In this paper, we consider the counterpart of their result for critical LQG and
SLE, i.e., for the case when $\gamma^2=\kappa=16/\kappa=4$. We prove that as
one sends $\kappa \downarrow 4$ in the subcritical setting, the space-filling
SLE$_\kappa$ in a disk degenerates to the CLE$_4$ exploration introduced by
Werner and Wu, along with a collection of i.i.d.\ coin tosses indexed by the
branch points of the exploration. Furthermore, in the
$\kappa=16/\gamma^2\downarrow 4$ limit, the pair of continuum random trees
collapse into a single continuum random tree, and we observe that upon applying
an appropriate affine transform to the encoding Brownian motions before taking
the limit, we get convergence to a pair of independent Brownian motions
$(A,B)$. The Brownian motion $A$ encodes the LQG distance from the CLE loops to
the boundary of the disk, while the Brownian motion $B$ encodes the boundary
lengths of the CLE$_4$ loops. In contrast to the subcritical setting, $(A,B)$
does not determine the CLE-decorated LQG surface.
|
Localization of a target object has conventionally been performed using
multiple terrestrial reference nodes. This paradigm has recently shifted
towards the use of unmanned aerial vehicles (UAVs) for locating target objects.
Since locating a target using multiple UAVs simultaneously is costly and
impractical, achieving this task with a single UAV is desirable. Hence, in this
paper, we propose an RSSI-based localization method that utilizes only a single
UAV. The proposed approach is based on a clustering method combined with the
Singular Value Decomposition (SVD). The performance of the proposed method is
verified by computer simulations and by experimental measurements collected
with a UAV that we have designed. The results show that the proposed method can
achieve a location accuracy as low as 7 m, depending on the number of
iterations.
|
Let $X$ be a finite type simply connected rationally elliptic CW-complex with
Sullivan minimal model $(\Lambda V, d)$ and let $k\geq 2$ be the largest
integer such that $d=\sum_{i\geq k}d_i$ with $d_i(V)\subseteq \Lambda ^iV$. We
show that $cat(X_{\mathbb{Q}}) = depth(\Lambda V, d_k)$ if and only if
$(\Lambda V,d_{k})$ is elliptic. This result is obtained by introducing two new
spectral sequences that generalize the Milnor-Moore spectral sequence and its
$\mathcal{E}xt$-version \cite{Mur94}. As a corollary, we recover a known result
proved - with different methods - by L. Lechuga and A. Murillo in \cite{LM02}
and by G. Lupton in \cite{Lup02}: if $(\Lambda V,d_{k})$ is elliptic, then
$cat(X_{\mathbb{Q}}) = dim(\pi_{odd}(X)\otimes\mathbb{Q}) +
(k-2)dim(\pi_{even}(X)\otimes\mathbb{Q})$. In the case of a field $\mathbb{K}$
of characteristic $p$ (an odd prime), we obtain an algebraic approach to
$e_{\mathbb{K}}(X)$, where $X$ is an $r$-connected ($r\geq 1$) finite
CW-complex such that $p> dim(X)/r$.
|
The analysis and visualization of tensor fields is a very challenging task.
Besides the cases of zeroth- and first-order tensors, most techniques focus on
symmetric second-order tensors. Only a few works concern totally symmetric
tensors of higher order. Work on other tensors of order higher than two is
exceptionally rare. We believe that one major reason for this gap is the lack
of knowledge about suitable tensor decompositions for general higher-order
tensors. We focus here on three dimensions, as most applications are concerned
with three-dimensional space. Much work on symmetric second-order tensors
uses the spectral decomposition. Work on totally symmetric higher-order
tensors frequently uses a decomposition based on spherical harmonics. These
decompositions do not directly apply to general higher-order tensors in three
dimensions. However, another available option is the deviatoric decomposition,
which splits such tensors into deviators. Together with the multipole
representation of deviators, it allows one to describe any tensor in
three dimensions uniquely by a set of directions and non-negative scalars. The
specific appeal of this methodology is its general applicability, opening up a
potentially general route to tensor interpretation. The underlying concepts,
however, are not broadly understood in the engineering community. In this
article, we therefore gather information about this decomposition from a range
of literature sources. The goal is to collect and prepare the material for
further analysis and give other researchers the chance to work in this
direction. This article aims to stimulate the use of this decomposition and
the search for interpretations of this unique algebraic property. A first step
in this direction is given by a detailed analysis of the multipole
representation of symmetric second-order three-dimensional tensors.
|
Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models.
|
We investigate the statistics of stationary points in the sum of squares of
$N$ Gaussian random fields, which we call a "chi-squared" field. The behavior
of such a field at a point is investigated, with particular attention paid to
the formation of topological defects. An integral to compute the number density
of stationary points at a given field amplitude is constructed. We compute
exact expressions for the integral in various limits and provide code to
evaluate it numerically in the general case. We investigate the dependence of
the number density of stationary points on the field amplitude, number of
fields, and power spectrum of the individual Gaussian random fields. This work
parallels the work of Bardeen, Bond, Kaiser and Szalay, who investigated the
statistics of peaks of Gaussian random fields. A number of results for
integrating over matrices are presented in appendices.
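As a rough numerical illustration of the setup (not the paper's number-density integral), one can sample N Gaussian random fields on a grid, form the chi-squared field, and count its stationary points; the field construction below (smoothed white noise, 1D grid) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 2048

# N independent Gaussian random fields on a periodic 1D grid,
# generated by smoothing white noise (an illustrative power spectrum)
k = np.fft.rfftfreq(n)
fields = []
for _ in range(N):
    noise = rng.standard_normal(n)
    f = np.fft.irfft(np.fft.rfft(noise) * np.exp(-(40 * k) ** 2), n)
    fields.append(f / f.std())

chi2 = sum(f ** 2 for f in fields)  # the "chi-squared" field, >= 0 everywhere

# stationary points show up as sign changes of the discrete derivative
d = np.diff(chi2)
n_stationary = int(np.sum(d[:-1] * d[1:] < 0))
```

Binning the field amplitude at these stationary points would give a crude empirical counterpart of the number density studied in the text.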
|
In contrast to 2D models, where the reconnection x-line extent is infinitely
long, we study magnetic reconnection in the opposite limit. The scalings of
the average reconnection rate and outflow speed are modeled as a
function of the x-line extent. An internal x-line asymmetry along the current
direction develops because of the flux transport by electrons beneath the ion
kinetic scale, and it plays an important role in suppressing reconnection in
the short x-line limit; the average reconnection rate drops because of the
limited active region, and the outflow speed reduction is associated with the
reduction of the $J \times B$ force, that is caused by the phase shift between
the J and B profiles, also as a consequence of this flux transport.
|
If k is a commutative field and G is a reductive (connected) algebraic group
over k, we give bounds for the orders of the finite subgroups of G(k); these
bounds depend on the type of G and on the Galois groups of the cyclotomic
extensions of k.
|
The Boron to Carbon (B/C) and sub-Fe/Fe ratios provide an important clue to
Cosmic Ray (CR) propagation within the Galaxy. These ratios estimate the
grammage that the CRs traverse as they propagate from their sources to Earth.
Attempts to explain these ratios within the standard CR propagation models
require ad hoc modifications, and even then these models necessitate
inconsistent grammages to explain both ratios. As an alternative, physically
motivated model, we have proposed that CRs originate preferentially within the
galactic spiral arms. CR propagation from dynamic spiral arms has important
imprints on various secondary to primary ratios, such as the B/C ratio and the
positron fraction. We use our spiral arm diffusion model with the spallation
network extended up to Nickel to calculate the sub-Fe/Fe ratio. We show that
without any additional parameters the spiral arm model consistently explains
both ratios with the same grammage, providing further evidence in favor of this
model.
|
The ability to obtain a sparse representation for a certain class of signals
has many applications in data analysis, image processing, and other research
fields. Among sparse representations, the cosparse analysis model has recently
gained increasing interest. Many signals exhibit a multidimensional structure,
e.g. images or three-dimensional MRI scans. Most data analysis and learning
algorithms use vectorized signals and thereby do not account for this
underlying structure. The drawback of not taking the inherent structure into
account is a dramatic increase in computational cost. We propose an algorithm
for learning a cosparse Analysis Operator that adheres to the preexisting
structure of the data, and thus allows for a very efficient implementation.
This is achieved by enforcing a separable structure on the learned operator.
Our learning algorithm is able to deal with multidimensional data of arbitrary
order. We evaluate our method on volumetric data, using three-dimensional MRI
scans as an example.
|
We discuss the problem of competition between a superconducting (SC) ordered
state and a charge density wave (CDW) state in stripe phases of high $T_c$
superconductors. We consider an effective model for each stripe motivated by
studies of spin-gapped electronic ladder systems. We analyze the problem of
dimensional crossover arising from inter-stripe SC and CDW couplings using
non-Abelian bosonization and renormalization group (RG) arguments to derive an
effective $O(4)$-symmetric nonlinear $\sigma$-model in $D=2+1$ for the case
in which both inter-stripe couplings are of equal magnitude and equally RG
relevant. By studying the effects of various symmetry lowering perturbations,
we determine the structure of the phase diagram and show that, in general, it
has a broad regime in which both orders coexist. The quantum and thermal
critical behavior is discussed in detail, and the phase coexistence region is
found to end at associated $T=0$ as well as $T>0$ tetracritical points. The
possible role of hedgehog topological excitations of the theory is considered
and argued to be RG irrelevant at the spatially anisotropic higher dimensional
low-energy fixed point theory. Our results are also relevant to the case of
competing N\'eel and valence bond solid (VBS) orders in quantum magnets on 2D
isotropic square as well as rectangular lattices interacting via nearest
neighbor Heisenberg exchange interactions.
|
Many deep learning applications, like keyword spotting, require the
incorporation of new concepts (classes) over time, referred to as Class
Incremental Learning (CIL). The major challenge in CIL is catastrophic
forgetting, i.e., preserving as much of the old knowledge as possible while
learning new tasks. Various techniques, such as regularization, knowledge
distillation, and the use of exemplars, have been proposed to resolve this
issue. However, prior works primarily focus on the incremental learning step,
while ignoring the optimization during the base model training. We hypothesize
that a more transferable and generalizable feature representation from the base
model would be beneficial to incremental learning.
In this work, we adopt multitask learning during base model training to
improve the feature generalizability. Specifically, instead of training a
single model with all the base classes, we decompose the base classes into
multiple subsets and regard each of them as a task. These tasks are trained
concurrently and a shared feature extractor is obtained for incremental
learning. We evaluate our approach on two datasets under various
configurations. The results show that our approach enhances the average
incremental learning accuracy by up to 5.5%, which enables more reliable and
accurate keyword spotting over time. Moreover, the proposed approach can be
combined with many existing techniques and provides additional performance
gain.
|
We derive the Einstein field equations and black hole entropy from the first
law of thermodynamics on a holographic time-like screen. Because of the
universality of gravity, the stress tensor on the screen must be independent of
the details of matter fields, so it should be a pure geometric quantity. For
simplicity, we assume that the stress tensor on the screen depends on surface
Ricci curvature and extrinsic curvature linearly. Then we prove that the
surface stress tensor is just the Brown-York stress tensor plus terms which do
not affect the field equations of gravitation and the entropy of the system. By
assuming a generalized "Fine first law of thermodynamics" or the usual
universal first law of thermodynamics on the screen, we can derive the matter
field equations as well.
|
A considerable number of research works have been devoted to the study of
tumor models. Several biophysical factors, such as cell proliferation,
apoptosis, chemotaxis, angiogenesis and necrosis, have been discovered to have
an impact on the complicated biological system of tumors. An indicator of the
aggressiveness of tumor development is the instability of the shape of the
tumor boundary. Complex patterns of tumor morphology have been explored by Lu,
Min-Jhe et al. [Nonlinear simulation of vascular tumor growth with chemotaxis
and the control of necrosis, Journal of Computational Physics 459 (2022):
111153]. In this paper, we continue to carry out a bifurcation analysis on such
a vascular tumor model with a controlled necrotic core and chemotaxis. This
bifurcation analysis, to the parameter of cell proliferation, is built on the
explicit formulas of radially symmetric steady-state solutions. By perturbing
the tumor free boundary and establishing rigorous estimates of the free
boundary system, we prove the existence of the bifurcation branches via the
Crandall-Rabinowitz theorem. The parameter of
chemotaxis is found to influence the monotonicity of the bifurcation point as
the mode $l$ increases both theoretically and numerically.
|
Z Cam dwarf novae are distinguished from other dwarf novae based on the
appearance of so-called 'standstills' in their long-term optical light curves.
It has been suggested previously that WW Cet might be a Z Cam type dwarf nova,
but this classification was subsequently ruled out, based on its long-term
light curve behavior. Forty years of historical data for WW Cet show no
evidence of standstills. WW Ceti is therefore classified as a UG type dwarf
nova in the General Catalog of Variable Stars (GCVS) and the International
Variable Star Index (VSX). Beginning in the 2010 observing season, WW Cet has
been observed to be in a standstill, remaining more or less steady in the 12th
magnitude range. Based on this first-ever historical standstill of WW Ceti, we
conclude that it is indeed a bona fide member of the Z Cam class of dwarf
novae.
|
We explicitly construct random hash functions for privacy amplification
(extractors) that require smaller random seed lengths than the previous
literature, and still allow efficient implementations with complexity $O(n\log
n)$ for input length $n$. The key idea is the concept of dual universal$_2$
hash function introduced recently. We also use a new method for constructing
extractors by concatenating $\delta$-almost dual universal$_2$ hash functions
with other extractors. Besides minimizing seed lengths, we also introduce
methods that allow one to use non-uniform random seeds for extractors. These
methods can be applied to a wide class of extractors, including dual
universal$_2$ hash functions as well as conventional universal$_2$ hash
functions.
|
A common mechanism for intracellular transport is the use of controlled
deformations of the membrane to create spherical or tubular buds. While the
basic physical properties of homogeneous membranes are relatively well-known,
the effects of inhomogeneities within membranes are very much an active field
of study. Membrane domains enriched in certain lipids in particular are
attracting much attention, and in this Letter we investigate the effect of such
domains on the shape and fate of membrane tubes. Recent experiments have
demonstrated that forced lipid phase separation can trigger tube fission, and
we demonstrate how this can be understood purely from the difference in elastic
constants between the domains. Moreover, the proposed model predicts timescales
for fission that agree well with experimental findings.
|
In this paper, we present an information propagation game on a network where
the information originates from a sponsor who is willing to pay a fixed
total budget to the players who propagate the information. Our solution can be
applied to real world situations such as advertising via social networks with
limited budgets. The goal is to design a mechanism to distribute the budget
such that all players in the social network are incentivized to propagate
information to all their neighbours. We propose a family of mechanisms to
achieve the goal, where propagating information to all neighbours is a dominant
strategy for all players. Furthermore, we also consider the cases where the
budget has to be completely shared.
|
The literature on information flow security with respect to transitive
policies has been concentrated largely on the case of policies with two
security domains, High and Low, because of a presumption that more general
policies can be reduced to this two-domain case. The details of the reduction
have not been the subject of careful study, however. Many works in the
literature use a reduction based on a quantification over "Low-down"
partitionings of domains into those below and those not below a given domain in
the information flow order. A few use "High-up" partitionings of domains into
those above and those not above a given domain. Our paper argues that more
general "cut" partitionings are also appropriate, and studies the relationships
between the resulting multi-domain notions of security when the basic notion
for the two-domain case to which we reduce is either Nondeducibility on Inputs
or Generalized Noninterference. The Low-down reduction is shown to be weaker
than the others, and while the High-up reduction is sometimes equivalent to the
cut reduction, both it and the Low-down reduction may have an undesirable
property of non-monotonicity with respect to a natural ordering on policies.
These results suggest that the cut-based partitioning yields a more robust
general approach for reduction to the two-domain case.
|
Existing visual explanation generating agents learn to fluently justify a
class prediction. However, they may mention visual attributes which reflect a
strong class prior, although the evidence may not actually be in the image.
This is particularly concerning as ultimately such agents fail in building
trust with human users. To overcome this limitation, we propose a phrase-critic
model to refine generated candidate explanations augmented with flipped phrases
which we use as negative examples while training. At inference time, our
phrase-critic model takes an image and a candidate explanation as input and
outputs a score indicating how well the candidate explanation is grounded in
the image. Our explainable AI agent is capable of providing counter arguments
for an alternative prediction, i.e. counterfactuals, along with explanations
that justify the correct classification decisions. Our model improves the
textual explanation quality of fine-grained classification decisions on the CUB
dataset by mentioning phrases that are grounded in the image. Moreover, on the
FOIL tasks, our agent detects when there is a mistake in the sentence, grounds
the incorrect phrase and corrects it significantly better than other models.
|
We consider partially observable Markov decision processes (POMDPs) with a
set of target states, where every transition is associated with an integer
cost.
The optimization objective we study asks to minimize the expected total cost
till the target set is reached, while ensuring that the target set is reached
almost-surely (with probability 1). We show that for integer costs
approximating the optimal cost is undecidable. For positive costs, our results
are as follows: (i) we establish matching lower and upper bounds for the
optimal cost and the bound is double exponential; (ii) we show that the problem
of approximating the optimal cost is decidable and present approximation
algorithms developing on the existing algorithms for POMDPs with finite-horizon
objectives. While the worst-case running time of our algorithm is double
exponential, we also present efficient stopping criteria for the algorithm and
show experimentally that it performs well in many examples of interest.
|
Neutron stars (NSs) can capture dark matter (DM) particles because of their
deep gravitational potential and high density. The accumulated DM can affect
the properties of NSs. In this work we use a general relativistic two-fluid
formalism to solve the structure of DM-admixed NSs (DANSs) and the surrounding
spacetime. Specifically, we pay attention to the situation where those DANSs
possess DM halos. Due to the gravitational effect of the DM halo, the pulse
profile of an X-ray pulsar is changed. Our study finds a universal relation
between the peak flux deviation of the pulse profile and $M_{\rm halo}/R_{\rm
BM}$, which is the ratio of the DM halo mass, $M_{\rm halo}$, to the baryonic
matter (BM) core radius, $R_{\rm BM}$. Our results show that, when $M_{\rm
halo}/R_{\rm BM}=0.292$ and the DM particle mass $m_f = 0.3\,$GeV, the maximum
deviation of the profile can be larger than 100$\%$, which has implications
for X-ray pulsar observations.
|
We present the decomposition of QCD partial amplitudes into primitive
amplitudes at one-loop level and tree level for arbitrary numbers of quarks and
gluons. Our method is based on shuffle relations. This method is purely
combinatorial and does not require the inversion of a system of linear
equations.
|
Parity-time ($\cal PT$) symmetric lasers have attracted considerable
attention lately due to their promising applications and intriguing properties,
such as free spectral range doubling and single-mode lasing. In this work we
discuss nonlinear modal interactions in these laser systems under steady state
conditions, and we demonstrate that several gain clamping scenarios can occur
for lasing operation in the $\cal PT$-symmetric and $\cal PT$-broken phases. In
particular, we show that, depending on the system's design and the external
pump profile, its operation in the nonlinear regime falls into two different
categories: in one the system is frozen in the $\cal PT$ phase space as the
applied gain increases, while in the other the system is pulled towards its
exceptional point. These features are first illustrated by a coupled mode
formalism and later verified by employing the Steady-state Ab-initio Laser
Theory (SALT). Our findings shed light on the robustness of single-mode
operation against saturation nonlinearity in $\cal PT$-symmetric lasers.
|
Learned image compression methods have shown superior rate-distortion
performance and remarkable potential compared to traditional compression
methods. Most existing learned approaches use stacked convolution or
window-based self-attention for transform coding, which aggregate spatial
information in a fixed range. In this paper, we focus on extending spatial
aggregation capability and propose a dynamic kernel-based transform coding. The
proposed adaptive aggregation generates kernel offsets to capture valid
information in a content-conditioned range to aid the transform. With the
adaptive aggregation strategy and the sharing weights mechanism, our method can
achieve promising transform capability with acceptable model complexity.
Besides, following recent progress on entropy models, we define a generalized
coarse-to-fine entropy model that considers the coarse global context, the
channel-wise context, and the spatial context. Based on it, we introduce a
dynamic kernel in the hyper-prior to generate a more expressive global context.
Furthermore, we propose an asymmetric spatial-channel entropy model according
to the investigation of the spatial characteristics of the grouped latents. The
asymmetric entropy model aims to reduce statistical redundancy while
maintaining coding efficiency. Experimental results demonstrate that our method
achieves superior rate-distortion performance on three benchmarks compared to
the state-of-the-art learning-based methods.
|
Electron beam induced current (EBIC) is a powerful characterization technique
which offers the high spatial resolution needed to study polycrystalline solar
cells. Current models of EBIC assume that excitations in the $p$-$n$ junction
depletion region result in perfect charge collection efficiency. However, we
find that in CdTe and Si samples prepared by focused ion beam (FIB) milling,
there is a reduced and nonuniform EBIC lineshape for excitations in the
depletion region. Motivated by this, we present a model of the EBIC response
for excitations in the depletion region which includes the effects of surface
recombination from both charge-neutral and charged surfaces. For neutral
surfaces we present a simple analytical formula which describes the numerical
data well, while the charged surface response depends qualitatively on the
location of the surface Fermi level relative to the bulk Fermi level. We find
that the experimental data on FIB-prepared Si solar cells are most consistent with a
charged surface, and discuss the implications for EBIC experiments on
polycrystalline materials.
|
In this paper we present new ALMA observations towards the proto-planet
hosting transitional disc of Herbig Ae/Be star HD 100546. This includes
resolved 1.3 mm continuum, $^{13}$CO and the first detection of C$^{18}$O in
this disc, which displays azimuthal asymmetry in regions spatially coincident
with structures previously identified in HST images related to spiral arms. The
lower limit on the mass of the dust disc is calculated to be
9.6$\times$10$^{-4}$M$_\odot$. A firm lower limit on the total gas mass,
calculated from optically thin, mid-plane-tracing C$^{18}$O (2-1) emission, is
0.018M$_\odot$ assuming ISM abundances. Together these yield a gas-to-dust
ratio in the disc of 19; the ratio will increase if C$^{18}$O is
under-abundant in the disc relative to CO and H$_2$.
Through deprojection and azimuthal averaging of the image plane we detect 1.3
mm continuum emission out to 290+/-10 au, $^{13}$CO to 390+/-10 au and
C$^{18}$O to 300+/-10 au. We measure a radially increasing millimetre spectral index
between wavelengths of 867$\mu$m and 1.3 mm, which shows that grain sizes
increase towards the star, with solid particles growing to cm scales in the
inner disc.
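As a quick arithmetic check of the quoted ratio (using only the numbers stated above; both masses are lower limits, so the ratio is indicative only):

```python
# Gas-to-dust ratio implied by the quoted lower limits
# (values copied from the text; both are lower limits).
M_dust = 9.6e-4  # dust mass in solar masses, from 1.3 mm continuum
M_gas = 0.018    # gas mass in solar masses, from C18O (2-1)

ratio = M_gas / M_dust
print(round(ratio))  # 19, as quoted (cf. the canonical ISM value of ~100)
```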
|
We live in a modern world supported by large, complex networks. Examples
range from financial markets to communication and transportation systems. In
many realistic situations the flow of physical quantities in the network, as
characterized by the loads on nodes, is important. We show that for such
networks where loads can redistribute among the nodes, intentional attacks can
lead to a cascade of overload failures, which can in turn cause the entire or a
substantial part of the network to collapse. This is relevant for real-world
networks that possess a highly heterogeneous distribution of loads, such as the
Internet and power grids. We demonstrate that the heterogeneity of these
networks makes them particularly vulnerable to attacks in that a large-scale
cascade may be triggered by disabling a single key node. This raises obvious
concerns about the security of such systems.
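The cascade mechanism can be illustrated with a toy load-redistribution model (a simplified sketch in the spirit of such overload-cascade models; using degree as a load proxy and sharing a failed node's load equally among surviving neighbours are illustrative assumptions, not the model analysed above):

```python
def cascade(adj, alpha, attacked):
    """Return the set of failed nodes after attacking one node.

    adj: adjacency dict {node: [neighbours]}; alpha: tolerance parameter,
    so each node's capacity is (1 + alpha) times its initial load.
    """
    load = {v: len(adj[v]) for v in adj}           # degree as a load proxy
    cap = {v: (1 + alpha) * load[v] for v in adj}
    failed, queue = set(), [attacked]
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in adj[v] if u not in failed]
        for u in alive:                            # redistribute v's load
            load[u] += load[v] / len(alive)
            if load[u] > cap[u]:                   # overload -> new failure
                queue.append(u)
    return failed

# Star graph: hub 0 connected to leaves 1..6.  Attacking the hub with a
# small tolerance overloads every leaf, collapsing the whole network;
# with a large tolerance only the attacked hub is lost.
star = {0: [1, 2, 3, 4, 5, 6], **{i: [0] for i in range(1, 7)}}
print(len(cascade(star, 0.1, 0)))   # all 7 nodes fail
print(len(cascade(star, 10.0, 0)))  # only the hub fails
```

The heterogeneity effect is visible even in this caricature: the highly loaded hub is precisely the node whose removal triggers a global cascade.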
|
Speech separation, the task of isolating multiple speech sources from a mixed
audio signal, remains challenging in noisy environments. In this paper, we
propose a generative correction method to enhance the output of a
discriminative separator. By leveraging a generative corrector based on a
diffusion model, we refine the separation process for single-channel mixture
speech by removing noises and perceptually unnatural distortions. Furthermore,
we optimize the generative model using a predictive loss to streamline the
diffusion model's reverse process into a single step and to rectify the errors
introduced by this reverse process. Our method achieves state-of-the-art performance
on the in-domain Libri2Mix noisy dataset, and out-of-domain WSJ with a variety
of noises, improving SI-SNR by 22-35% relative to SepFormer, demonstrating
robustness and strong generalization capabilities.
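For reference, SI-SNR (the metric behind the 22-35% figure) has a standard definition that can be sketched as follows (a plain implementation of the usual formula, not code from the paper):

```python
import math

def si_snr(est, ref):
    """Scale-invariant signal-to-noise ratio in dB (standard definition)."""
    dot = sum(e * r for e, r in zip(est, ref))
    ref_energy = sum(r * r for r in ref)
    # project the estimate onto the reference -> scale-invariant target
    target = [dot / ref_energy * r for r in ref]
    noise = [e - t for e, t in zip(est, target)]
    t_energy = sum(t * t for t in target)
    n_energy = sum(n * n for n in noise)
    return 10 * math.log10(t_energy / n_energy)

ref = [0.0, 1.0, -1.0, 0.5]
est = [r + 0.1 for r in ref]              # a noisy estimate
print(round(si_snr(est, ref), 2))
# rescaling the estimate leaves SI-SNR unchanged (scale invariance)
print(round(si_snr([3 * e for e in est], ref), 2))
```

The projection step is what makes the metric insensitive to the overall gain of the separator output, which is why it is preferred over plain SNR for separation benchmarks.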
|
Black phosphorus (P) has emerged as a layered semiconductor with a unique
crystal structure featuring corrugated atomic layers and strong in-plane
anisotropy in its physical properties. Here, we demonstrate that the crystal
orientation and mechanical anisotropy in free-standing black P thin layers can
be precisely determined by spatially resolved multimode nanomechanical
resonances. This offers a new means for resolving important crystal orientation
and anisotropy in black P device platforms in situ beyond conventional optical
and electrical calibration techniques. Furthermore, we show that
electrostatic-gating-induced straining can continuously tune the mechanical
anisotropic effects on multimode resonances in black P electromechanical
devices. Combined with finite element modeling (FEM), we also determine the
Young's moduli of multilayer black P to be 116.1 and 46.5 GPa in the zigzag and
armchair directions, respectively.
|
The two-dimensional one-component plasma is a ubiquitous model for several
vortex systems. For special values of the coupling constant $\beta q^2$ (where
$q$ is the particle charge and $\beta$ the inverse temperature), the model
also corresponds to the eigenvalue distribution of normal matrix models.
Several features of the system are discussed in the limit of large number $N$
of particles for generic values of the coupling constant. We show that the
statistics of a class of radial observables produces a rich phase diagram, and
their asymptotic behaviour in terms of large deviation functions is calculated
explicitly, including next-to-leading terms up to order 1/N. We demonstrate a
split-off phenomenon associated with atypical fluctuations of the edge density
profile. We also show explicitly that a failure of the fluid phase assumption
of the plasma can break a genuine $1/N$-expansion of the free energy. Our
findings are corroborated by numerical comparisons with exact finite-N formulae
valid for $\beta q^2=2$.
|
The widespread presence of hateful language on social media has adverse
effects on societal well-being, making it important to address this issue with
high priority. Hate speech and offensive language exist in both explicit and
implicit forms, with the latter being more challenging to detect. Current
research in this domain encounters several challenges. Firstly, existing
datasets primarily rely on collecting texts that contain explicit offensive
keywords, making it difficult to capture implicitly offensive content that is
devoid of such keywords. Secondly, common methodologies tend to focus solely
on textual analysis, neglecting the valuable insights that community
information can provide. In this paper, we introduce OffensiveLang, a novel
community-based implicit offensive language dataset generated by ChatGPT 3.5
that covers 38 different target groups. Despite the ethical constraints on
generating offensive text with ChatGPT, we present a prompt-based approach
that effectively generates implicit offensive language. To ensure data
quality, we evaluate the dataset with human annotators. Additionally, we
employ a prompt-based zero-shot method with ChatGPT and compare the detection
results of human annotation and ChatGPT annotation. We also use existing
state-of-the-art models to assess how effectively they detect such language.
The dataset is available here:
https://github.com/AmitDasRup123/OffensiveLang
|
A DNA polymerase (DNAP) replicates a template DNA strand. It also exploits
the template as the track for its own motor-like mechanical movement. In the
polymerase mode it elongates the nascent DNA by one nucleotide in each step.
But, whenever it commits an error by misincorporating an incorrect nucleotide,
it can switch to an exonuclease mode. In the latter mode it excises the wrong
nucleotide before switching back to its polymerase mode. We develop a
stochastic kinetic model of DNA replication that mimics an {\it in-vitro}
experiment where a single-stranded DNA, subjected to a mechanical tension $F$,
is converted to a double-stranded DNA by a single DNAP. The $F$-dependence of
the average rate of replication, which depends on the rates of both polymerase
and exonuclease activities of the DNAP, is in good qualitative agreement with
the corresponding experimental results. We introduce nine distinct {\it
conditional dwell times} of a DNAP. Using the methods of first-passage times,
we also derive exact analytical expressions for the probability
distributions of these conditional dwell times. The predicted $F$-dependence of
these distributions is, in principle, accessible to single-molecule
experiments.
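A minimal caricature of such a two-mode kinetic scheme can be simulated with the Gillespie algorithm. The rates below are made up and the scheme is drastically simplified relative to the model above (which distinguishes nine conditional dwell times); this is purely illustrative:

```python
import random

# Toy polymerase/exonuclease kinetics: in "pol" mode the DNAP either
# incorporates a correct nucleotide (rate k_pol) or misincorporates and
# switches to "exo" mode (rate k_err); excision (rate k_exo) returns it
# to "pol" mode.  We measure the dwell time between correct incorporations.
def mean_dwell(k_pol=10.0, k_err=1.0, k_exo=2.0, samples=20000, seed=1):
    rng = random.Random(seed)
    t, last, dwells = 0.0, 0.0, []
    mode = "pol"
    while len(dwells) < samples:
        if mode == "pol":
            ktot = k_pol + k_err
            t += rng.expovariate(ktot)            # exponential waiting time
            if rng.random() < k_pol / ktot:       # correct incorporation
                dwells.append(t - last)
                last = t
            else:                                 # error -> proofreading mode
                mode = "exo"
        else:
            t += rng.expovariate(k_exo)           # excision
            mode = "pol"
    return sum(dwells) / len(dwells)

# Analytically the mean dwell is 0.15 for these rates: 0.1 spent in pol
# mode plus, on average, 0.1 exo excursions of mean duration 0.5.
print(mean_dwell())
```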
|
We study a graph-theoretic model of interface dynamics called $Competitive\,
Erosion$. Each vertex of the graph is occupied by a particle, which can be
either red or blue. New red and blue particles are emitted alternately from
their respective bases and perform random walk. On encountering a particle of
the opposite color they remove it and occupy its position. We consider
competitive erosion on discretizations of `smooth', planar, simply connected
domains. The main result of this article shows that at stationarity, with high
probability, the blue and the red regions are separated by the level curves of
the Green function, with Neumann boundary conditions, which are orthogonal
circular arcs on the disc and hyperbolic geodesics on a general simply
connected domain. This establishes $conformal\,invariance$ of the model.
|
A fuzzy cellular automaton is a dynamical system with continuous state values
that embeds a cellular automaton with discrete state values. We investigate
the fuzzy cellular automaton obtained from the elementary cellular automaton of rule
number 38. Its asymptotic solutions are classified into two types. One is a
solution where stable propagating waves exist, and the other is a static
uniform solution of constant value.
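Rule 38 is true exactly on the neighbourhoods 001, 010 and 101, so its fuzzification (the standard construction: replace AND by product, OR by sum, and NOT x by 1 - x in the disjunctive normal form) can be sketched as:

```python
# Fuzzy ECA rule 38: the DNF of rule 38 with AND -> product, OR -> sum,
# NOT x -> 1 - x, giving a map of [0,1]-valued cell states.
def fuzzy38(x, y, z):
    return ((1 - x) * (1 - y) * z        # neighbourhood 001
            + (1 - x) * y * (1 - z)      # neighbourhood 010
            + x * (1 - y) * z)           # neighbourhood 101

def step(cells):
    """One synchronous update on a periodic ring."""
    n = len(cells)
    return [fuzzy38(cells[i - 1], cells[i], cells[(i + 1) % n])
            for i in range(n)]

# On crisp 0/1 states the fuzzy rule reproduces Boolean rule 38 exactly.
for v in range(8):
    x, y, z = (v >> 2) & 1, (v >> 1) & 1, v & 1
    assert fuzzy38(x, y, z) == (38 >> v) & 1

# Iterating a fuzzy initial condition; the state stays in [0,1] because
# the DNF terms are disjoint pieces of a partition of unity.
state = [0.5, 0.2, 0.8, 0.1, 0.6, 0.3]
for _ in range(200):
    state = step(state)
print(state)
```

Iterating maps like this is how the asymptotic behaviour described above (propagating waves versus a static uniform value) can be explored numerically.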
|
We present XSHOOTER observations, together with previous ALMA, MUSE and $HST$
observations, to study the nature of radio-jet triggered star formation and the
interaction of radio jets with the interstellar medium in the brightest cluster
interaction of radio jets with the interstellar medium in the brightest cluster
galaxy (BCG) in the Abell 1795 cluster. Using $HST$ UV data we determined an
ongoing star formation rate of 9.3 M$_\odot$ yr$^{-1}$. The star formation
follows the global Kennicutt-Schmidt law, however, it has a low efficiency
compared to circumnuclear starbursts in nearby galaxies with an average
depletion time of $\sim$1 Gyr. The star formation and molecular gas are offset
by $\sim1$ kpc indicating that stars have decoupled from the gas. We detected
an arc of high linewidth in ionized gas where electron densities are elevated
by a factor of $\sim$4 suggesting a shock front driven by radio jets or
peculiar motion of the BCG. An analysis of nebular emission line flux ratios
suggests that the gas is predominantly ionized by star formation with a small
contribution from shocks. We also calculated the velocity structure function
(VSF) of the ionized and molecular gases using velocity maps to characterize
turbulent motion in the gas. The ionized gas VSF suggests that the radio jets
are driving supersonic turbulence in the gas. Thus radio jets not only heat the
atmosphere on large scales, potentially quenching star formation on longer
timescales, but can also trigger star formation through positive feedback on
short timescales of a few million years.
|
Image-level weakly supervised semantic segmentation has received increasing
attention due to its low annotation cost. Existing methods mainly rely on Class
Activation Mapping (CAM) to obtain pseudo-labels for training semantic
segmentation models. In this work, we are the first to demonstrate that a
long-tailed distribution in the training data can cause the CAM calculated
through classifier weights to be over-activated for head classes and
under-activated for tail classes, due to the features shared between head and
tail classes. This degrades
pseudo-label quality and further influences final semantic segmentation
performance. To address this issue, we propose a Shared Feature Calibration
(SFC) method for CAM generation. Specifically, we leverage the class prototypes
that carry positive shared features and propose a Multi-Scaled
Distribution-Weighted (MSDW) consistency loss for narrowing the gap between the
CAMs generated through classifier weights and class prototypes during training.
The MSDW loss counterbalances over-activation and under-activation by
calibrating the shared features in head-/tail-class classifier weights.
Experimental results show that our SFC significantly improves CAM boundaries
and achieves new state-of-the-art performances. The project is available at
https://github.com/Barrett-python/SFC.
|
We report the discovery of a secondary pair of radio lobes in the Seyfert
galaxy NGC 2639 with polarization-sensitive observations with the Karl G.
Jansky Very Large Array (VLA). The presence of these lobes, which are aligned
nearly perpendicular to the known set of radio lobes observed in the east-west
direction, has not been reported previously in the literature. The in-band
rotation measure image shows gradients in both the lobes indicative of
organised magnetic field structures on kpc-scales. The magnetic field structure
is aligned with the jet/lobe direction in both the lobes. Based on the settled
optical morphology of the host galaxy, it is likely that a minor merger that
did not disrupt the host galaxy structure is responsible for the observed
features in NGC 2639. This also explains the near-90$^\circ$ change in the jet
direction: the current jet direction results from a new accretion disk formed
by the minor merger, whose orientation was set by the angular momentum of the
inflowing merger gas.
|
We calculate the change in the effective mass and width of a Z boson in the
environment of a quark-gluon plasma under the conditions expected in Pb-Pb
collisions at the LHC. The change in width is predicted to be only about 1 MeV
at a temperature of 1 GeV, compared to the natural width of 2490$\pm$7 MeV. The
mass shift is even smaller. Hence no observable effects are to be expected.
|
A probabilistic algorithm for preparing Bethe eigenstates of the spin-1/2
Heisenberg spin chain on a quantum computer has recently been found. We derive
an exact formula for the success probability of this algorithm in terms of the
Gaudin determinant, and we study its large-length limit. We demonstrate the
feasibility of computing antiferromagnetic ground-state spin-spin correlation
functions for short chains. However, the success probability decreases
exponentially with the chain length, which precludes the computation of these
correlation functions for chains of moderate length. Some conjectures for
estimates of the Gaudin determinant are noted in an appendix.
|
In this note we present a remark on the paper "On the coefficient
inequalities for a class of holomorphic mappings associated with spirallike
mappings in several complex variables" by Y.~Lai and Q.~Xu \cite{LX} published
recently in the journal {\it Results in Mathematics}. We show that one of the
theorems in \cite{LX} concerning the finite-dimensional space $\mathbb{C}^n$ is
a direct consequence of another one, so it does not need an independent proof.
Moreover, we prove that a sharp norm estimate on the Fekete--Szeg\"{o}
functional over spirallike mappings in a general Banach space can be deduced
from a result in \cite{LX}.
|
On November 27, 1800, Thomas Young presented for the second time to the Royal
Society of London his theory of the muscularity of the crystalline lens as the
cause of the accommodation of the eye to different distances. This question had
been the topic of his very first communication to the Royal Society seven
years earlier, from which he had in the meantime been forced to withdraw by a
series of articles claiming either priority over his discovery or a
demonstration that it was erroneous. Seven years later, Young returned to the
topic with a carefully elaborated text that undeniably proved the role of the
crystalline lens in accommodation, offered a new and convenient method for
measuring the amplitude of accommodation, discovered the defect of astigmatism
of the eye, and provided the most precise and complete measurements of the
living eye of its time. For these reasons, and for the tight intellectual
connections between this text and the Theory of Light and Colours that Young
would publish a year later, we thought it important to make a translation and
commentary of this text available to the French audience.
|
We present a kinetic approach to the formation of urban agglomerations which
is based on simple rules of immigration and emigration. In most cases, the
Boltzmann-type kinetic description allows one to obtain, within an asymptotic
procedure, a Fokker--Planck equation with variable coefficients of diffusion
and drift, which describes the evolution in time of some probability density of
the city size. It is shown that, depending on the microscopic rules of
migration, the equilibrium density can follow both a power law for large
values of the size variable, which contains Zipf's law as a particular case,
and a lognormal law for middle and low values of the size variable. In
particular, connections between the value of the Pareto index of the power law
at equilibrium and the propensity of the population to emigrate are outlined. The
theoretical findings are tested with recent data of the populations of Italy
and Switzerland.
|
In this work we study some properties of the three-dimensional $U(N)$ SUSY
Chern-Simons theory coupled to a scalar field in the fundamental
representation in the large $N$ limit. For large $N$ we show that the theory
has two phases: one which is conformally invariant, and another in which the
superconformal symmetry is broken and masses for the matter fields are
generated.
|
The increased availability of computing time, in recent years, allows for
systematic high-throughput studies of material classes, with the purpose of both
screening for materials with remarkable properties and understanding how
structural configuration and material composition affect macroscopic
properties. However, when conducting systematic high-throughput studies, the
individual ab initio calculations' success depends on the quality of the chosen
input quantities. On a large scale, improving input parameters by trial and
error is neither efficient nor systematic. We present a systematic,
high-throughput-compatible, machine-learning-based approach to improving the
input parameters used in a DFT computation or workflow. We demonstrate the
advantages of, and the considerations necessary for, integrating machine
learning into a typical high-throughput workflow through a systematic study of
magnetic multilayers of 3$d$ transition metal layers on FCC noble metal
substrates. For 6660 film systems, we were able to improve the overall success
rate of our high-throughput FLAPW-based structural relaxations from $64.8\%$
to $94.3\%$, while at the same time requiring $17\%$ less computational time
for each successful relaxation.
|
We describe right-hand skew Boolean algebras in terms of a class of
presheaves of sets over Boolean algebras called Boolean sets, and prove a
duality theorem between Boolean sets and etale spaces over Boolean spaces.
|
Feed-forward CNNs trained for image transformation problems rely on loss
functions that measure the similarity between the generated image and a target
image. Most of the common loss functions assume that these images are spatially
aligned and compare pixels at corresponding locations. However, for many tasks,
aligned training pairs of images will not be available. We present an
alternative loss function that does not require alignment, thus providing an
effective and simple solution for a new space of problems. Our loss is based on
both context and semantics -- it compares regions with similar semantic
meaning, while considering the context of the entire image. Hence, for example,
when transferring the style of one face to another, it will translate
eyes-to-eyes and mouth-to-mouth. Our code can be found at
https://www.github.com/roimehrez/contextualLoss
|
We present a Mathematica 7 numerical simulation of the process
$pp\rightarrow\mbox{jet}+E_{T}^{miss}$ in the framework of a modified
Randall-Sundrum brane-world model with one infinite and $n$ compact extra
dimensions. We compare the missing-energy signature with the standard model
background $pp\rightarrow \mbox{jet}+\nu \bar{\nu}$, which was simulated with
CompHEP. We show that models with more than 4 compact extra dimensions can be
probed at a proton-proton center-of-mass energy of 14 TeV. We also find that
testing the brane-world models at 7 TeV at the LHC appears to be hopeless.
|
We present new observations with the Atacama Large Millimeter/sub-millimeter
Array of the 122um and 205um fine-structure line emission of singly-ionised
nitrogen in a strongly lensed starburst galaxy at z=2.6. The 122/205um [NII]
line ratio is sensitive to electron density, n_e, in the ionised interstellar
medium, and we use this to measure n_e ~ 300 cm^-3 averaged across the galaxy.
This is over an order of magnitude higher than the Milky Way average, but
comparable to localised Galactic star-forming regions. Combined with
observations of the atomic carbon (CI(1-0)) and carbon monoxide (CO(4-3)) in
the same system, we reveal the conditions in this intensely star-forming
system. The majority of the molecular interstellar medium has been driven to
high density, and the resultant conflagration of star formation produces a
correspondingly dense ionised phase, presumably co-located with myriad HII
regions that litter the gas-rich disk.
|
Visual private information leakage is an emerging key issue for the fast
growing applications of video understanding like activity recognition. Existing
approaches for mitigating privacy leakage in action recognition require privacy
labels along with the action labels from the video dataset. However, annotating
the frames of video datasets with privacy labels is not feasible. Recent developments
of self-supervised learning (SSL) have unleashed the untapped potential of the
unlabeled data. For the first time, we present a novel training framework which
removes privacy information from input video in a self-supervised manner
without requiring privacy labels. Our training framework consists of three main
components: anonymization function, self-supervised privacy removal branch, and
action recognition branch. We train our framework using a minimax optimization
strategy to minimize the action recognition cost function and maximize the
privacy cost function through a contrastive self-supervised loss. Employing
existing protocols of known-action and privacy attributes, our framework
achieves a competitive action-privacy trade-off to the existing
state-of-the-art supervised methods. In addition, we introduce a new protocol
to evaluate the generalization of the learned anonymization function to novel
action and privacy attributes, and show that our self-supervised framework
outperforms existing supervised methods. Code available at:
https://github.com/DAVEISHAN/SPAct
|
In this paper, we derive global sharp heat kernel estimates for symmetric
alpha-stable processes (or equivalently, for the fractional Laplacian with zero
exterior condition) in two classes of unbounded C^{1,1} open sets in R^d:
half-space-like open sets and exterior open sets. These open sets can be
disconnected. We focus in particular on explicit estimates for p_D(t,x,y) for
all t>0 and x, y \in D. Our approach is based on the idea that for x and y in
D far from the boundary and t sufficiently large, we can compare p_D(t,x,y)
to the heat kernel in a well-understood open set: either a half-space or R^d;
while in the general case we can reduce to the above by pushing x and y
inside, away from the boundary. As a consequence, sharp Green function
estimates are obtained for the Dirichlet fractional Laplacian in these two
types of open sets. Global sharp heat kernel estimates and Green function
estimates are also obtained for censored stable processes (or equivalently, for
regional fractional Laplacian) in exterior open sets.
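For orientation, sharp two-sided estimates of this kind are typically stated in the following schematic form (the precise constants, time ranges and set classes are those specified in the paper, not fixed here):

```latex
% Free kernel of the isotropic alpha-stable process on R^d:
p(t,x,y) \;\asymp\; \min\!\Bigl( t^{-d/\alpha},\ \frac{t}{|x-y|^{d+\alpha}} \Bigr),
% Dirichlet kernel in a C^{1,1} open set D, with boundary decay
% entering through the distance function \delta_D(x)=\mathrm{dist}(x,\partial D):
p_D(t,x,y) \;\asymp\;
  \Bigl(1 \wedge \tfrac{\delta_D(x)^{\alpha/2}}{\sqrt{t}}\Bigr)
  \Bigl(1 \wedge \tfrac{\delta_D(y)^{\alpha/2}}{\sqrt{t}}\Bigr)\,
  p(t,x,y).
```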
|
Measurement combined with feedback that aims to restore a presumed
pre-measurement quantum state will yield this state after a few
measurement-feedback cycles even if the actual state of the system initially
had no resemblance to the presumed state. Here we introduce this mechanism of
{\it self-fulfilling prophecy} and show that it can be used to prepare
finite-dimensional quantum systems in target states or target dynamics. Using
two-level systems as an example we demonstrate that self-fulfilling prophecy
protects the system against noise and tolerates imprecision of feedback up to
the level of the measurement strength. By means of unsharp measurements the
system can be driven deterministically into arbitrary, smooth quantum
trajectories.
|
The goal of text-to-image person retrieval is to retrieve person images from
a large gallery that match the given textual descriptions. The main challenge
of this task lies in the significant differences in information representation
between the visual and textual modalities. The textual modality conveys
abstract and precise information through vocabulary and grammatical structures,
while the visual modality conveys concrete and intuitive information through
images. To fully leverage the expressive power of textual representations, it
is essential to accurately map abstract textual descriptions to specific
images.
To address this issue, we propose a novel framework to Unleash the
Imagination of Text (UIT) in text-to-image person retrieval, aiming to fully
explore the power of words in sentences. Specifically, the framework employs
the pre-trained full CLIP model as a dual encoder for the images and texts,
taking advantage of prior cross-modal alignment knowledge. The Text-guided
Image Restoration auxiliary task is proposed with the aim of implicitly mapping
abstract textual entities to specific image regions, facilitating alignment
between textual and visual embeddings. Additionally, we introduce a cross-modal
triplet loss tailored for handling hard samples, enhancing the model's ability
to distinguish minor differences.
To focus the model on the key components within sentences, we propose a novel
text data augmentation technique. Our proposed methods achieve state-of-the-art
results on three popular benchmark datasets, and the source code will be made
publicly available shortly.
|
There is a one-to-one correspondence between the point set of a group
divisible design (GDD) with $v_1$ groups of $v_2$ points and the edge set of a
complete bipartite graph $K_{v_1,v_2}$. A block of the GDD corresponds to a
subgraph of $K_{v_1,v_2}$. A set of subgraphs of $K_{v_1,v_2}$ is constructed
from a block set of GDDs. If the GDD satisfies the $\lambda_1, \lambda_2$
concurrence condition, then the set of subgraphs also satisfies the spanning
bipartite block design (SBBD) conditions. We also propose a method to construct
SBBD directly from an $(r,\lambda)$-design and a difference matrix over a
group. Suppose the $(r,\lambda)$-design consists of $v_2$ points and $v_1$
blocks. When $v_1 \gg v_2$, we show a method to construct an SBBD with $v_1$
close to $v_2$ by partitioning the block set.
|
In robot-assisted minimally invasive surgery (RMIS), inverse kinematics (IK)
must satisfy a remote center of motion (RCM) constraint to prevent tissue
damage at the incision point. However, most existing IK methods do not
account for the trade-offs between the RCM constraint and other objectives such
as joint limits, task performance and manipulability optimization. This paper
presents a novel method for manipulability maximization in constrained IK of
surgical robots, which optimizes the robot's dexterity while respecting the RCM
constraint and joint limits. Our method uses a hierarchical quadratic
programming (HQP) framework that solves a series of quadratic programs with
different priority levels. We evaluate our method in simulation on a 6D path
tracking task for constrained and unconstrained IK scenarios for redundant
kinematic chains. Our results show that our method enhances the manipulability
index in all cases, with an increase of more than 100% when a large number of
degrees of freedom is available. The average computation time for
solving the IK problems was under 1ms, making it suitable for real-time robot
control. Our method offers a novel and effective solution to the constrained IK
problem in RMIS applications.
|
The paper addresses the question of existence of a locally self-similar
blow-up for the incompressible Euler equations. Several exclusion results are
proved based on the $L^p$-condition for velocity or vorticity and for a range
of scaling exponents. In particular, in $N$ dimensions, if in self-similar
variables $u \in L^p$ and $u \sim \frac{1}{t^{\alpha/(1+\alpha)}}$, then the
blow-up does not occur provided $\alpha > N/2$ or $-1 < \alpha \leq N/p$. This
includes the $L^3$ case natural for the Navier-Stokes equations. For $\alpha =
N/2$ we exclude profiles with asymptotic power bounds of the form
$|y|^{-N-1+\delta} \lesssim |u(y)| \lesssim |y|^{1-\delta}$. Solutions
homogeneous near infinity are eliminated as well, except when the homogeneity
is scaling invariant.
|
In this paper we compare two constructions of weight functions (off-shell
Bethe vectors) for the quantum affine algebra $U_q(\hat{\mathfrak{gl}}_N)$. The
first construction comes from the algebraic nested Bethe ansatz. The second one
is defined in terms of certain projections of products of Drinfeld currents. We
show that two constructions give the same result in tensor products of vector
representations of $U_q(\hat{\mathfrak{gl}}_N)$.
|
The Internet of Things (IoT) is transforming our physical world into a
complex and dynamic system of connected devices on an unprecedented scale.
Connecting everyday physical objects is creating new business models, improving
processes and reducing costs and risks. Recently, blockchain technology has
received a lot of attention from the community as a possible solution to
overcome security issues in IoT. However, traditional blockchains (such as the
ones used in Bitcoin and Ethereum) are suited neither to the
resource-constrained nature of IoT devices nor to the large volume of
information that is expected to be generated in typical IoT deployments. To
overcome these issues, several researchers have presented lightweight
instances of blockchains tailored for IoT, for example proposing novel data
structures based on blocks with decoupled and appendable data. However, these researchers
did not discuss how the consensus algorithm would impact their solutions, i.e.,
the decision of which consensus algorithm would be better suited was left as an
open issue. In this paper, we improved an appendable-block blockchain framework
to support different consensus algorithms through a modular design. We
evaluated the performance of this improved version in different emulated
scenarios and studied the impact of varying the number of devices and
transactions and employing different consensus algorithms. Even so, the
results indicate that the latency to append a
new block is less than 161ms (in the more demanding scenario) and the delay for
processing a new transaction is less than 7ms, suggesting that our improved
version of the appendable-block blockchain is efficient and scalable, and thus
well suited for IoT scenarios.
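The modular-consensus idea can be sketched in a few lines (illustrative Python; the class and method names are hypothetical, not the framework's actual API, and real consensus algorithms are of course far more involved than this validation hook):

```python
import hashlib
import json

class Consensus:
    """Pluggable consensus interface: each algorithm overrides validate()."""
    def validate(self, block):
        raise NotImplementedError

class ProofOfAuthority(Consensus):
    """A lightweight scheme: only whitelisted signers may append blocks."""
    def __init__(self, authorities):
        self.authorities = set(authorities)
    def validate(self, block):
        return block["signer"] in self.authorities

class Chain:
    def __init__(self, consensus):
        self.consensus = consensus       # swappable without touching Chain
        self.blocks = []
    def append(self, data, signer):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "data": data, "signer": signer}
        # canonical serialization -> deterministic hash linking the chain
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if not self.consensus.validate(block):
            raise ValueError("block rejected by consensus")
        self.blocks.append(block)
        return block

chain = Chain(ProofOfAuthority({"gateway-1"}))
chain.append({"sensor": 22.5}, signer="gateway-1")
print(len(chain.blocks))  # 1
```

Because `Chain` only ever calls `consensus.validate`, a different algorithm (e.g. a voting or proof-of-work stub) can be dropped in without modifying the block-handling code, which is the essence of the modular design described above.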
|
Uniqueness of Leray solutions of the 3D Navier-Stokes equations is a
challenging open problem. In this article we will study this problem for the 3D
stationary Navier-Stokes equations and under some additional hypotheses, stated
in terms of Lebesgue and Morrey spaces, we will show that the trivial solution
U = 0 is the unique solution. Results of this type are known as Liouville
theorems.
|
Evaluation of intelligent assistants in large-scale and online settings
remains an open challenge. User behavior-based online evaluation metrics have
demonstrated great effectiveness for monitoring large-scale web search and
recommender systems. Therefore, we consider predicting user engagement status
as the very first and critical step to online evaluation for intelligent
assistants. In this work, we first propose a novel framework for classifying
user engagement status into four categories -- fulfillment, continuation,
reformulation and abandonment. We then demonstrate how to design simple but
indicative metrics based on the framework to quantify user engagement levels.
We also aim to automate user engagement prediction with machine learning
methods. We compare various models and features for predicting engagement
status using four real-world datasets. We conduct detailed analyses of
features and failure cases to discuss the performance of current models as well
as challenges.
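One way to turn the four-category framework above into simple aggregate metrics is to compute the fraction of sessions landing in each state. The sketch below is a minimal assumption of how such metrics could be derived; the paper's actual metric definitions may differ.

```python
from collections import Counter

# The four engagement states from the framework described above.
ENGAGEMENT_STATES = ("fulfillment", "continuation", "reformulation", "abandonment")


def engagement_metrics(session_labels):
    """Turn per-session engagement labels into simple aggregate metrics.

    Returns the fraction of sessions in each state; the fulfillment rate can
    serve as a positive engagement signal and the abandonment rate as a
    negative one.
    """
    counts = Counter(session_labels)
    total = len(session_labels)
    return {state: counts.get(state, 0) / total for state in ENGAGEMENT_STATES}


# Hypothetical session log: one predicted engagement label per user session.
metrics = engagement_metrics(
    ["fulfillment", "abandonment", "fulfillment", "reformulation"]
)
# metrics["fulfillment"] == 0.5, metrics["abandonment"] == 0.25
```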
|
We construct a complete 4d model of fermion masses and mixings in the
Pati-Salam SU(4) x SU(2)_L x SU(2)_R framework governed by an SO(3) gauged
Family Symmetry. The relevant low energy effective Yukawa operators are
constructed so that the SO(3) flavons enter at the simplest possible one-flavon
level, with couplings enforced by an additional U(1) x Z_2 symmetry. The
simplicity of the flavon sector allows the messenger sector to be fully
specified, allowing the ultraviolet completion of the model at the 4d
renormalizable level. The model predicts approximate tri-bimaximal lepton
mixing via the see-saw mechanism with sequential dominance, and vacuum
alignment of flavons, with calculable deviations described by the neutrino sum
rule. We perform a numerical analysis of the emerging charged fermion spectra
and mixings. The 4d model is shown to result from a 5d orbifold GUT model based
on SO(3) x SO(10), where small flavon vacuum expectation values originate from
bulk volume suppression.
|
We propose a mechanism to generate Primordial Black Holes (PBHs) which is
independent of cosmological inflation and occurs slightly below the QCD phase
transition. Our setup relies on the collapse of long-lived string-domain wall
networks and is naturally realized in QCD axion models with domain wall number
$N_{DW}>1$ and Peccei-Quinn symmetry broken after inflation. In our framework,
dark matter is mostly composed of axions in the meV mass range along with a
small fraction, $\Omega_{\text{PBH}} \gtrsim 10^{-6} \Omega_{\text{CDM}} $ of
heavy $M \sim 10^4-10^7 M_\odot$ PBHs. The latter could play a role in
alleviating some of the shortcomings of the $\Lambda$CDM model on sub-galactic
scales. The scenario has distinct signatures in ongoing axion searches as well
as gravitational wave observatories.
|
Face anti-spoofing (FAS) is crucial for securing face recognition systems.
However, existing FAS methods with handcrafted binary or pixel-wise labels have
limitations due to diverse presentation attacks (PAs). In this paper, we
propose an attack type robust face anti-spoofing framework under light flash,
called ATR-FAS. Due to imaging differences caused by various attack types,
traditional FAS methods based on a single binary classification network may
suffer from excessive intra-class distance among spoof faces, which makes the
decision boundary hard to learn. Therefore, we employ multiple networks to
reconstruct multi-frame depth maps as auxiliary supervision, with each network
specializing in one type of attack. A dual gate module (DGM) consisting of a
type gate and a frame-attention gate is introduced, which perform attack type
recognition and multi-frame attention generation, respectively. The outputs of
the DGM are used as weights to mix the results of the multiple expert networks.
This mixture of experts enables ATR-FAS to generate spoof-differentiated depth
maps and to detect spoof faces stably, without being affected by different
types of PAs. Moreover, we design a differential normalization procedure to convert
original flash frames into differential frames. This simple but effective
processing enhances the details in flash frames, aiding in the generation of
depth maps. To verify the effectiveness of our framework, we collected a
large-scale dataset containing 12,660 live and spoof videos with diverse PAs
under dynamic flash from the smartphone screen. Extensive experiments
illustrate that the proposed ATR-FAS significantly outperforms existing
state-of-the-art methods. The code and dataset will be available at
https://github.com/Chaochao-Lin/ATR-FAS.
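The gate-weighted mixture described above can be sketched as follows. The tensor shapes, softmax weighting, and two-stage reduction are illustrative assumptions about how the dual gate outputs might combine per-attack-type expert depth maps, not the exact ATR-FAS architecture.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def mix_expert_depth_maps(expert_outputs, type_logits, frame_logits):
    """Weight per-attack-type expert depth maps by the dual gate outputs.

    expert_outputs: (n_experts, n_frames, H, W) depth maps, one expert
        network per attack type.
    type_logits:    (n_experts,) raw scores from the type gate.
    frame_logits:   (n_frames,) raw scores from the frame-attention gate.
    """
    type_w = softmax(type_logits)    # one weight per expert network
    frame_w = softmax(frame_logits)  # one attention weight per frame
    # Weighted sum over experts, then attention-weighted sum over frames.
    per_frame = np.einsum("e,efhw->fhw", type_w, expert_outputs)
    return np.einsum("f,fhw->hw", frame_w, per_frame)  # fused (H, W) map
```

With uniform gate logits this reduces to plain averaging; a confident type gate instead routes most of the weight to the expert matching the detected attack type.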
|
At the core of nonperturbative theories of quantum gravity lies the
holographic encoding of bulk data in large matrices. At present this mapping is
poorly understood. The plane wave matrix model provides a laboratory for
isolating aspects of this problem in a controlled setting.
At large boosts, configurations of concentric membranes become superselection
sectors, whose exact spectra are known. From the bulk point of view one expects
product states of individual membranes to be contained within the full
spectrum. However, for non-BPS states this inclusion relation is obscured by
Gauss law constraints. Its validity rests on nontrivial relations in
representation theory, which we identify and verify by explicit computation.
|