We study a natural conjecture regarding ferromagnetic ordering of energy
levels in the Heisenberg model which complements the Lieb-Mattis Theorem of
1962 for antiferromagnets: for ferromagnetic Heisenberg models the lowest
energies in each subspace of fixed total spin are strictly ordered according to
the total spin, with the lowest, i.e., the ground state, belonging to the
maximal total spin subspace. Our main result is a proof of this conjecture for
the spin-1/2 Heisenberg XXX and XXZ ferromagnets in one dimension. Our proof
has two main ingredients. The first is an extension of a result of Koma and
Nachtergaele which shows that monotonicity as a function of the total spin
follows from the monotonicity of the ground state energy in each total spin
subspace as a function of the length of the chain. For the second part of the
proof we use the Temperley-Lieb algebra to calculate, in a suitable basis, the
matrix elements of the Hamiltonian restricted to each subspace of the highest
weight vectors with a given total spin. We then show that the positivity
properties of these matrix elements imply the necessary monotonicity in the
volume. Our method also shows that the first excited state of the XXX
ferromagnet on any finite tree has one less than maximal total spin.
|
We demonstrate how to match pre-equilibrium dynamics of a 0+1 dimensional
quark gluon plasma to 2nd-order viscous hydrodynamical evolution. The matching
allows us to specify the initial values of the energy density and shear tensor
at the initial time of hydrodynamical evolution as a function of the lifetime
of the pre-equilibrium period. We compare two models for the pre-equilibrium
quark-gluon plasma, longitudinal free streaming and collisionally-broadened
longitudinal expansion, and present analytic formulas which can be used to fix
the necessary components of the energy-momentum tensor. The resulting dynamical
models can be used to assess the effect of pre-equilibrium dynamics on
quark-gluon plasma observables. Additionally, we investigate the dependence of
entropy production on pre-equilibrium dynamics and discuss the limitations of
the standard definitions of the non-equilibrium entropy.
|
We present a data-driven algorithm to model and predict the socio-emotional
impact of groups on observers. Psychological research finds that highly
entitative, i.e., cohesive and uniform, groups induce threat and unease in
observers. Our algorithm models realistic trajectory-level behaviors to
classify and map the motion-based entitativity of crowds. This mapping is based
on a statistical scheme that dynamically learns pedestrian behavior and
computes the resultant entitativity induced emotion through group motion
characteristics. We also present a novel interactive multi-agent simulation
algorithm to model entitative groups and conduct a VR user study to validate
the socio-emotional predictive power of our algorithm. We further show that
model-generated high-entitativity groups do induce more negative emotions than
low-entitativity groups.
|
Let $\mathcal A$ be a von Neumann algebra and $\mathcal M$ a Banach
$\mathcal A$-module. It is shown that for all homomorphisms $\sigma, \tau$ on
$\mathcal A$, every bounded linear map $f:\mathcal A\to \mathcal M$ with the
property that $f(p^2)=\sigma(p)f(p)+f(p)\tau(p)$ for every projection $p$ in
$\mathcal A$ is a $(\sigma,\tau)$-derivation. It is also shown that a bounded
linear map $f:\mathcal A \to \mathcal M$ which satisfies $f(ab)=
\sigma(a)f(b)+f(a)\tau(b)$ for all $a,b\in \mathcal A$ with $ab=S$ is a
$(\sigma,\tau)$-derivation if $\tau(S)$ is left invertible for the fixed $S$.
|
We study the construction of probability densities for time-of-arrival in
quantum mechanics. Our treatment is based upon the facts that (i) time appears
in quantum theory as an external parameter to the system, and (ii) propositions
about the time-of-arrival appear naturally when one considers histories. The
definition of time-of-arrival probabilities is straightforward in stochastic
processes. The difficulties that arise in quantum theory are due partly to the
fact that the time parameter of Schr\"odinger's equation does not naturally
define a probability density in the continuum limit, and partly to the fact
that the procedure one follows is sensitive to the interpretation of the
reduction procedure. We
consider the issue in Copenhagen quantum mechanics and in history-based schemes
like consistent histories. The benefit of the latter is that it allows a proper
passage to the continuum limit; there are, however, problems related to the
quantum Zeno effect and decoherence. We finally employ the histories-based
description to construct Positive-Operator-Valued-Measures (POVMs) for the
time-of-arrival, which are valid for a general Hamiltonian. These POVMs
typically depend on the resolution of the measurement device; for a free
particle, however, this dependence cancels in the physically relevant regime
and the POVM coincides with that of Kijowski.
|
The magnetic and thermal evolution of neutron stars is a very complex process
with many nonlinear interactions. For a sound understanding of neutron star
physics, these evolutions cannot be considered in isolation. A brief overview is
presented, which describes the main magnetothermal interactions that determine
the fate of both isolated neutron stars and accreting ones. Special attention
is devoted to the interplay of thermal and magnetic evolution at the polar cap
of radio pulsars. There, a strong meridional temperature gradient is maintained
over the lifetime of radio pulsars. It may be strong enough to drive
thermoelectric magnetic field creation, which perpetuates a toroidal magnetic
field around the polar cap rim. Such a local field component may amplify and
curve the poloidal surface field at the cap, forming the strong, small-scale
magnetic field required for the radio emission of pulsars.
|
The study and modeling of driver gaze dynamics is important because whether
and how the driver monitors the driving environment is vital for driver
assistance in manual mode, for take-over requests in highly automated mode, and
for semantic perception of the surroundings in fully autonomous mode. We developed
a machine vision based framework to classify driver's gaze into context rich
zones of interest and model driver's gaze behavior by representing gaze
dynamics over a time period using gaze accumulation, glance duration and glance
frequencies. As a use case, we explore the driver's gaze dynamic patterns
during maneuvers executed in freeway driving, namely, left lane change
maneuver, right lane change maneuver and lane keeping. It is shown that
condensing gaze dynamics into durations and frequencies leads to recurring
patterns based on driver activities. Furthermore, modeling these patterns
shows predictive power in maneuver detection up to a few hundred milliseconds a
priori.
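The glance-based representation described above can be sketched with a small helper: given a per-frame sequence of gaze-zone labels, it computes gaze accumulation, glance durations, and glance frequencies. The function name, zone labels, and frame rate are illustrative assumptions, not the paper's implementation.

```python
from itertools import groupby
from collections import Counter

def gaze_descriptors(zone_sequence, fps=30):
    """Summarize a per-frame gaze-zone sequence into three descriptors:
    gaze accumulation, glance durations, and glance frequencies."""
    n = len(zone_sequence)
    # Gaze accumulation: fraction of frames spent in each zone.
    accumulation = {z: c / n for z, c in Counter(zone_sequence).items()}
    # A "glance" is a maximal run of consecutive frames in one zone.
    glances = [(z, sum(1 for _ in run)) for z, run in groupby(zone_sequence)]
    durations = {}
    frequencies = Counter()
    for zone, length in glances:
        durations.setdefault(zone, []).append(length / fps)  # seconds
        frequencies[zone] += 1
    return accumulation, durations, dict(frequencies)

seq = ["road"] * 60 + ["left_mirror"] * 15 + ["road"] * 30 + ["left_mirror"] * 9
acc, dur, freq = gaze_descriptors(seq)
print(freq["left_mirror"])  # 2 glances to the left mirror
```

Condensed this way, a left lane change would show up as an increased glance frequency and accumulation for the left-mirror zone shortly before the maneuver.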
|
The main objective of this article is to study both dynamic and structural
transitions of the Taylor-Couette flow, using the dynamic transition theory and
geometric theory of incompressible flows developed recently by the authors. In
particular we show that as the Taylor number crosses the critical number, the
system undergoes either a continuous or a jump dynamic transition, dictated by
the sign of a computable, nondimensional parameter $R$. In addition, we show
that the new transition states have the Taylor vortex type of flow structure,
which is structurally stable.
|
We propose a simple model of network co-evolution in a game-dynamical system
of interacting agents that play repeated games with their neighbors, and adapt
their behaviors and network links based on the outcome of those games. The
adaptation is achieved through a simple reinforcement learning scheme. We show
that the collective evolution of such a system can be described by
appropriately defined replicator dynamics equations. In particular, we suggest
an appropriate factorization of the agents' strategies that results in a
coupled system of equations characterizing the evolution of both strategies and
network structure, and illustrate the framework on two simple examples.
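The replicator-dynamics description can be illustrated with a minimal numerical sketch. The payoff matrix below (a Hawk-Dove game) and the Euler discretization are illustrative assumptions, not the paper's coupled strategy-network equations.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics dx_i/dt = x_i (f_i - fbar),
    where f = payoff @ x is the fitness of each strategy and fbar the mean."""
    f = payoff @ x
    fbar = x @ f
    return x + dt * x * (f - fbar)

# Hawk-Dove payoff matrix (illustrative, not from the paper).
V, C = 2.0, 3.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

x = np.array([0.9, 0.1])      # initial frequencies of (hawk, dove)
for _ in range(20000):
    x = replicator_step(x, A)
print(x)                      # approaches the mixed equilibrium, V/C hawks
```

Here the population converges to the interior fixed point at a hawk fraction of $V/C = 2/3$, the standard Hawk-Dove equilibrium.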
|
By using laboratory experimental data, we test the uncertainty of social
strategy transitions in various competing environments of fixed paired
two-person constant-sum $2 \times 2$ games. We first show that the
distributions of social strategy transitions are not erratic but obey the
principle of maximum entropy (MaxEnt). This finding indicates that human
subject social systems and natural systems could have wider common backgrounds.
|
We continue the study undertaken in \cite{DV} of the exceptional Jordan
algebra $J = J_3^8$ as (part of) the finite-dimensional quantum algebra in an
almost classical space-time approach to particle physics. Along with reviewing
known properties of $J$ and of the associated exceptional Lie groups we argue
that the symmetry of the model can be deduced from the Borel-de Siebenthal
theory of maximal connected subgroups of simple compact Lie groups.
|
Massive machine-type communication (MTC) with sporadically transmitted small
packets and low data rates requires new designs at the PHY and MAC layers with
light transmission overhead. Compressive sensing based multiuser detection
(CS-MUD) is designed to detect active users through random access with low
overhead by exploiting sparsity, i.e., the nature of sporadic transmissions in
MTC. However, the high computational complexity of conventional sparse
reconstruction algorithms prohibits the implementation of CS-MUD in real
communication systems. To overcome this drawback, in this paper we propose a
fast deep-learning-based approach for CS-MUD in massive MTC systems. In
particular, a novel block restrictive activation nonlinear unit is proposed to
capture the block-sparse structure in wide-band wireless communication systems
(or multi-antenna systems). Our simulation results show that the proposed
approach outperforms various existing algorithms for CS-MUD and allows for
ten-fold decrease of the computing time.
|
New local well-posedness results for dispersion generalized Benjamin-Ono
equations on the torus are proved. The family of equations under consideration
links the Benjamin-Ono and Korteweg-de Vries equation. For sufficiently high
dispersion global well-posedness in $L^2(\mathbb{T})$ is derived.
|
Mitotic figure detection in histology images is a hard-to-define, yet
clinically significant task, where labels are generated with pathologist
interpretations and where there is no ``gold-standard'' independent
ground-truth. However, it is well established that these interpretation-based
labels are often unreliable, in part due to differences in expertise levels
and human subjectivity. In this paper, our goal is to shed light on the
inherent uncertainty of mitosis labels and characterize the mitotic figure
classification task in a human interpretable manner. We train a probabilistic
diffusion model to synthesize patches of cell nuclei for a given mitosis label
condition. Using this model, we can then generate a sequence of synthetic
images that correspond to the same nucleus transitioning into the mitotic
state. This allows us to identify different image features associated with
mitosis, such as cytoplasm granularity, nuclear density, nuclear irregularity
and high contrast between the nucleus and the cell body. Our approach offers a
new tool for pathologists to interpret and communicate the features driving the
decision to recognize a mitotic figure.
|
Intermolecular van der Waals interactions are central to chemical and
physical phenomena ranging from biomolecule binding to soft-matter phase
transitions. However, there are currently very limited approaches to manipulate
van der Waals interactions. In this work, we demonstrate that strong
light-matter coupling can be used to tune van der Waals interactions, and,
thus, control the thermodynamic properties of many-molecule systems. Our
analyses reveal orientation-dependent single-molecule energies and interaction
energies for van der Waals molecules (for example, H$_{2}$). In particular, we
find intermolecular interactions that depend on the distance between the
molecules $R$ as $R^{-3}$ and $R^{0}$. Moreover, we employ non-perturbative
\textit{ab initio} cavity quantum electrodynamics calculations to develop
machine learning-based interaction potentials for molecules inside optical
cavities. By simulating systems ranging from $12$ H$_2$ to $144$ H$_2$
molecules, we demonstrate that strong light-matter coupling can tune the
structural and thermodynamic properties of molecular fluids. In particular, we
observe varying degrees of orientational order as a consequence of
cavity-modified interactions, and we explain how quantum nuclear effects,
light-matter coupling strengths, number of cavity modes, molecular
anisotropies, and system size all impact the extent of orientational order.
These simulations and analyses demonstrate both local and collective effects
induced by strong light-matter coupling and open new paths for controlling the
properties of molecular clusters.
|
Searches for high-mass resonances in the dijet invariant mass spectrum with
one or two jets identified as $b$-jets are performed using an integrated
luminosity of $3.2$ fb$^{-1}$ of proton--proton collisions with a
centre-of-mass energy of $\sqrt{s}=13$ TeV recorded by the ATLAS detector at
the Large Hadron Collider. No evidence of anomalous phenomena is observed in
the data, which are used to exclude, at 95% credibility level, excited $b^{*}$
quarks with masses from 1.1 TeV to 2.1 TeV and leptophobic $Z'$ bosons with
masses from 1.1 TeV to 1.5 TeV. Contributions of a Gaussian signal shape with
effective cross sections ranging from approximately 0.4 to 0.001 pb are also
excluded in the mass range 1.5-5.0 TeV.
|
Early stopping based on the validation set performance is a popular approach
to find the right balance between under- and overfitting in the context of
supervised learning. However, in reinforcement learning, even for supervised
sub-problems such as world model learning, early stopping is not applicable as
the dataset is continually evolving. As a solution, we propose a new general
method that dynamically adjusts the update-to-data (UTD) ratio during training
based on under- and overfitting detection on a small subset of the continuously
collected experience not used for training. We apply our method to DreamerV2, a
state-of-the-art model-based reinforcement learning algorithm, and evaluate it
on the DeepMind Control Suite and the Atari $100$k benchmark. The results
demonstrate that one can better balance under- and overfitting by adjusting
the UTD ratio with our approach compared to the default setting in DreamerV2
and that it is competitive with an extensive hyperparameter search which is not
feasible for many applications. Our method eliminates the need to set the UTD
hyperparameter by hand and even leads to higher robustness with regard to
other learning-related hyperparameters, further reducing the amount of
necessary tuning.
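A minimal sketch of the idea, assuming a simple threshold rule: raise the UTD ratio while the held-out loss still improves (underfitting), halve it once the held-out loss degrades while the training loss keeps falling (overfitting). The specific rule, bounds, and tolerance below are illustrative, not the paper's exact mechanism.

```python
def adjust_utd(utd, train_loss, val_loss, prev_val_loss,
               utd_min=1, utd_max=16, tol=1e-3):
    """Illustrative controller for the update-to-data (UTD) ratio, driven
    by loss on a small held-out subset of collected experience."""
    if val_loss > prev_val_loss + tol and train_loss < val_loss:
        utd = max(utd_min, utd // 2)   # overfitting: held-out loss worsens
    elif val_loss < prev_val_loss - tol:
        utd = min(utd_max, utd * 2)    # underfitting: still improving
    return utd

utd = adjust_utd(4, train_loss=0.2, val_loss=0.9, prev_val_loss=0.5)
print(utd)  # 2: held-out loss worsened while training loss fell, so halve
```

In a training loop this would be re-evaluated periodically, with the held-out subset drawn from the continuously collected experience and excluded from model updates.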
|
A local impurity usually strongly affects only a few single-particle energy
levels, and thus cannot induce a quantum phase transition (QPT) or any other
macroscopic quantum phenomenon in a many-body system within the Hermitian
regime. However, this may happen for a non-Hermitian impurity. We investigate
the many-body ground-state properties of a one-dimensional tight-binding ring
with an embedded single asymmetrical dimer based on exact solutions. We
introduce the concept of a semi-localized state to describe a new quantum
phase, which is a crossover from an extended to a localized state. Its peculiar
feature is that the decay length is of the order of the system size, rather
than fixed as for a usual localized state. In addition, the spectral statistics
are non-analytic as the asymmetrical hopping strengths vary, resulting in a
sudden change of the ground state. The distinguishing feature of such a QPT is
that the ground-state energy density varies smoothly due to unbroken symmetry.
However, other observables, such as the ground-state center of mass and the
average current, exhibit the behavior of a second-order QPT. This behavior
stems from time-reversal symmetry breaking of a macroscopic number of
single-particle eigenstates.
|
This paper puts forth a new training data-untethered model poisoning (MP)
attack on federated learning (FL). The new MP attack extends an adversarial
variational graph autoencoder (VGAE) to create malicious local models based
solely on the benign local models overheard without any access to the training
data of FL. Such an advancement leads to the VGAE-MP attack, which is not only
efficacious but also hard to detect. The VGAE-MP attack extracts
graph structural correlations among the benign local models and the training
data features, adversarially regenerates the graph structure, and generates
malicious local models using the adversarial graph structure and benign models'
features. Moreover, a new attacking algorithm is presented to train the
malicious local models using VGAE and sub-gradient descent, while enabling an
optimal selection of the benign local models for training the VGAE. Experiments
demonstrate a gradual drop in FL accuracy under the proposed VGAE-MP attack and
the ineffectiveness of existing defense mechanisms in detecting the attack,
posing a severe threat to FL.
|
The extended electrodynamic theory introduced by Aharonov and Bohm (after an
earlier attempt by Ohmura) and recently developed by Van Vlaenderen and Waser,
Hively and Giakos, can be re-written and solved in a simple and effective way
in the standard covariant 4D formalism. This displays more clearly some of its
features. The theory allows a very interesting consistent generalization of the
Maxwell equations. In particular, the generalized field equations are
compatible with sources (classical, or more likely of quantum nature) for which
the continuity/conservation equation $\partial_\mu j^\mu=0$ is not valid
everywhere, or is valid only as an average above a certain scale. And yet,
remarkably, in the end the observable $F^{\mu \nu}$ field is still generated by
a conserved effective source, which we denote as $(j^\nu+i^\nu)$, where $i^\nu$
is a suitable non-local function of $j^\nu$. This implies that any microscopic
violation of the charge continuity condition is "censored" at the macroscopic
level, although it has real consequences, because it generates a non-Maxwellian
component of the field. We consider possible applications of this formalism to
condensed-matter systems with macroscopic quantum tunneling. The extended
electrodynamics can also be coupled to fractional quantum systems.
|
We present the measurement of light neutral mesons, $\pi^{0}$ and $\eta$, in
pp collisions at different center-of-mass energies obtained with the ALICE
experiment at the LHC. The $\pi^{0}$ and $\eta$ mesons are measured via photons
reconstructed by the electromagnetic calorimeters and the central tracking
system. The invariant cross-sections of $\pi^{0}$ and $\eta$ mesons are measured
in a broad $p_{\rm T}$ range at $\sqrt{s} = 0.9, 2.76, 7, 5.02$ and 8 TeV. The
spectra of $\pi^{0}$ and $\eta$ mesons measured in pp collisions at different
collision energies show $x_{\rm T}$-scaling at high $p_{\rm T}$ and violation
of $m_{\rm T}$-scaling at low $p_{\rm T}$. The smaller $x_{\rm T}$-scaling
exponents of our measurements compared to those at RHIC may hint at a reduced
importance of higher-twist processes at the LHC.
|
Sharp Fourier type and cotype of Lebesgue spaces and Schatten classes with
respect to an arbitrary compact semisimple Lie group are investigated. In the
process, a local variant of the Hausdorff-Young inequality on such groups is
given.
|
Clustered data are common in practice. Clustering arises when subjects are
measured repeatedly, or subjects are nested in groups (e.g., households,
schools). It is often of interest to evaluate the correlation between two
variables with clustered data. There are three commonly used Pearson
correlation coefficients (total, between-, and within-cluster), which together
provide an enriched perspective of the correlation. However, these Pearson
correlation coefficients are sensitive to extreme values and skewed
distributions. They also depend on the scale of the data and are not applicable
to ordered categorical data. Current non-parametric measures for clustered data
are only for the total correlation. Here we define population parameters for
the between- and within-cluster Spearman rank correlations. The definitions are
natural extensions of the Pearson between- and within-cluster correlations to
the rank scale. We show that the total Spearman rank correlation approximates a
weighted sum of the between- and within-cluster Spearman rank correlations,
where the weights are functions of rank intraclass correlations of the two
random variables. We also discuss the equivalence between the within-cluster
Spearman rank correlation and the covariate-adjusted partial Spearman rank
correlation. Furthermore, we describe estimation and inference for the three
Spearman rank correlations, conduct simulations to evaluate the performance of
our estimators, and illustrate their use with data from a longitudinal
biomarker study and a clustered randomized trial.
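The decomposition discussed above can be illustrated with naive plug-in estimates: rank all observations, then correlate cluster-mean ranks (between-cluster) and within-cluster rank deviations (within-cluster). The simulated data and these simple estimators are illustrative only, not the paper's proposed estimators or inference procedure.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(0)
clusters = np.repeat(np.arange(20), 5)     # 20 clusters of 5 subjects each
u = rng.normal(size=20)[clusters]          # shared cluster-level effect
x = u + rng.normal(size=100)
y = u + rng.normal(size=100)

# Total Spearman rank correlation: Pearson correlation of overall ranks.
rho_total = spearmanr(x, y)[0]

# Naive plug-in estimates on the rank scale (illustrative):
rx, ry = rankdata(x), rankdata(y)
mx = np.array([rx[clusters == c].mean() for c in range(20)])[clusters]
my = np.array([ry[clusters == c].mean() for c in range(20)])[clusters]
rho_between = np.corrcoef(mx, my)[0, 1]            # cluster-mean ranks
rho_within = np.corrcoef(rx - mx, ry - my)[0, 1]   # within-cluster deviations
print(round(rho_total, 2))
```

Because the cluster effect `u` is shared by `x` and `y`, the between-cluster correlation is driven by it, while the within-cluster correlation reflects only the independent noise.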
|
In recent years, transformer models have achieved great success in natural
language processing (NLP) tasks. Most of the current state-of-the-art NLP
results are achieved by using monolingual transformer models, where the model
is pre-trained using a single language unlabelled text corpus. Then, the model
is fine-tuned to the specific downstream task. However, the cost of
pre-training a new transformer model is high for most languages. In this work,
we propose a cost-effective transfer learning method to adapt a strong source
language model, trained on a large monolingual corpus, to a low-resource
language. Using the XLNet language model, we demonstrate competitive
performance with mBERT and a pre-trained target language model on the
cross-lingual sentiment (CLS) dataset and on a new sentiment analysis dataset
for the low-resource language Tigrinya. With only 10k examples of the given
Tigrinya sentiment analysis dataset, English XLNet achieves a 78.88% F1-score,
outperforming BERT and mBERT by 10% and 7%, respectively. More interestingly,
fine-tuning the (English) XLNet model on the CLS dataset yields promising
results compared to mBERT, even outperforming mBERT on one Japanese-language
dataset.
|
We propose a Kullback-Leibler Divergence (KLD) filter to extract anomalies
within data series generated by a broad class of proximity sensors, along with
the anomaly locations and their relative sizes. The technique applies to
devices commonly used in engineering practice, such as those mounted on mobile
robots for non-destructive inspection of hazardous or other environments that
may not be directly accessible to humans. The raw data generated by this class
of sensors can be challenging to analyze due to the prevalence of noise over
the signal content. The proposed filter is built to detect the difference of
information content between data series collected by the sensor and baseline
data series. It is applicable in a model-based or model-free context. The
performance of the KLD filter is validated in an industrial-norm setup and
benchmarked against a peer industrially-adopted algorithm.
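A minimal sketch of such a filter in the model-free setting, assuming histogram estimates of the data distributions: each window of the sensor series is scored by its KL divergence from the baseline histogram. The window size, bin count, threshold, and simulated data are illustrative choices, not the paper's calibrated values.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def kld_filter(series, baseline, window=50, bins=10, threshold=1.0):
    """Score non-overlapping windows of the sensor series by their KL
    divergence from the baseline histogram; windows above the threshold
    are flagged as anomalies (returned by start index)."""
    lo = min(series.min(), baseline.min())
    hi = max(series.max(), baseline.max())
    q, edges = np.histogram(baseline, bins=bins, range=(lo, hi))
    anomalies = []
    for i in range(0, len(series) - window + 1, window):
        p, _ = np.histogram(series[i:i + window], bins=edges)
        if kld(p.astype(float), q.astype(float)) > threshold:
            anomalies.append(i)
    return anomalies

rng = np.random.default_rng(1)
baseline = rng.normal(0, 1, 2000)        # noise-only reference series
series = rng.normal(0, 1, 500)
series[200:250] += 4.0                   # injected anomaly
print(kld_filter(series, baseline))      # the shifted window is flagged
```

Noise-only windows carry roughly the same information content as the baseline and score near zero, while the shifted window concentrates mass in bins the baseline rarely visits, producing a large divergence.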
|
We show that the large orbital degeneracy inherent in Moir\'e
heterostructures naturally gives rise to a `high-$T_c$' like phase diagram with
a chiral twist, wherein an exotic $\textit{quantum anomalous Hall}$ insulator
phase is flanked by chiral $d+id$ superconducting domes. Specifically, we
analyze repulsively interacting fermions on hexagonal (triangular or honeycomb)
lattices near Van Hove filling, with an ${\rm SU}(N_f)$ flavor degeneracy. This
model is inspired by recent experiments on graphene Moir\'e heterostructures.
At this point, a nested Fermi surface and divergent density of states give rise
to strong ($\ln^2$) instabilities to correlated phases, the competition between
which can be controllably addressed through a combination of weak coupling
parquet renormalization group and Landau-Ginzburg analysis. For $N_f=2$ (i.e.
spin degeneracy only) it is known that chiral $d+id$ superconductivity is the
unambiguously leading weak coupling instability. Here we show that $N_f\geq4$
leads to a richer (but still unambiguous and fully controllable) behavior,
wherein at weak coupling the leading instability is to a fully gapped and
chiral $\textit{Chern insulator}$, characterized by a spontaneous breaking of
time reversal symmetry and a quantized Hall response. Upon doping this phase
gives way to a chiral $d+id$ superconductor. We further consider deforming this
minimal model by introducing an orbital splitting of the Van Hove
singularities, and discuss the resulting RG flow and phase diagram. Our
analysis thus bridges the minimal model and the practical Moir\'e band
structures, thereby providing a transparent picture of how the correlated
phases arise under various circumstances. Meanwhile, a similar analysis on the
square lattice predicts a phase diagram where (for $N_f>2$) a nodal staggered
flux phase with `loop current' order gives way upon doping to a nodal $d$-wave
superconductor.
|
We investigate a Jordan-Brans-Dicke (JBD) scalar field, $\Phi$, with
power-law potential in the presence of a second scalar field, $\phi$, with an
exponential potential, in both the Jordan and the Einstein frames. We present
the relation of our model to the induced gravity model with power-law
potential and discuss the integrability of this kind of model when the
quintessence field $\phi$ is massless and has a small velocity. We prove that
in JBD theory, the de Sitter solution is not a natural attractor but an
intermediate accelerated solution of the form $a(t)\simeq e^{\alpha_1
t^{p_1}}$, as $t\rightarrow \infty$ where $\alpha_1>0$ and $0<p_1<1$, for a
wide range of parameters. Furthermore, in the Einstein frame we get that the
attractor is also an intermediate accelerated solution of the form
$\mathfrak{a}(\mathfrak{t})\simeq e^{\alpha_2 \mathfrak{t}^{p_2}}$ as
$\mathfrak{t}\rightarrow \infty$ where $\alpha_2>0$ and $0<p_2<1$, for the same
conditions on the parameters as in the Jordan frame. In the special case of a
quadratic potential in the Jordan frame, or for a constant potential in the
Einstein frame, these solutions are of saddle type. Finally, we present a
specific elaboration of our extension of the induced gravity model in the
Jordan frame, which corresponds to a linear potential of $\Phi$. The dynamical
system is then reduced to a two dimensional one, and the late-time attractor is
linked with the exact solution found for the induced gravity model. In this
example the intermediate accelerated solution does not exist, and the attractor
solution has an asymptotic de Sitter-like evolution law for the scale factor.
Apart from some fine-tuned examples, such as the linear and quadratic
potentials ${U}(\Phi)$ in the Jordan frame, intermediate accelerated solutions
are generic late-time attractors in a modified JBD theory.
|
We derive the full statistics of the product events in homodyne correlation
measurements, involving a single mode signal, a local oscillator, a linear
optical network, and two linear photodetectors. This is performed for the
regime of high intensities impinging on the detectors. Our description
incorporates earlier proposed homodyne correlation measurement schemes, such as
the homodyne cross-correlation and homodyne intensity-correlation measurements.
This analysis extends the amount of information retrieved from such types of
measurements, since previously attention was paid only to the expectation value
of the correlation statistics. As an example, we consider the correlation
statistics of coherent, Gaussian, and Fock states. Moreover, nonclassical light
is certified on the basis of the variance of the measurement outcome.
|
The increasing demand for automatic high-level image understanding,
particularly in detecting abstract concepts (AC) within images, underscores the
necessity for innovative and more interpretable approaches. These approaches
need to harmonize traditional deep vision methods with the nuanced,
context-dependent knowledge humans employ to interpret images at intricate
semantic levels. In this work, we leverage situated perceptual knowledge of
cultural images to enhance performance and interpretability in AC image
classification. We automatically extract perceptual semantic units from images,
which we then model and integrate into the ARTstract Knowledge Graph (AKG).
This resource captures situated perceptual semantics gleaned from over 14,000
cultural images labeled with ACs. Additionally, we enhance the AKG with
high-level linguistic frames. We compute KG embeddings and experiment with
relative representations and hybrid approaches that fuse these embeddings with
visual transformer embeddings. Finally, for interpretability, we conduct
posthoc qualitative analyses by examining model similarities with training
instances. Our results show that our hybrid KGE-ViT methods outperform existing
techniques in AC image classification. The posthoc interpretability analyses
reveal the visual transformer's proficiency in capturing pixel-level visual
attributes, contrasting with our method's efficacy in representing more
abstract and semantic scene elements. We demonstrate the synergy and
complementarity between the situated perceptual knowledge of KGE embeddings and
the sensory-perceptual understanding of deep visual models for AC image
classification.
This work suggests a strong potential of neuro-symbolic methods for knowledge
integration and robust image representation for use in downstream intricate
visual comprehension tasks. All the materials and code are available online.
|
We introduce bi-fermion fishnet theories, a class of models describing
integrable sectors of four-dimensional gauge theories with non-maximal
supersymmetry. Bi-fermion theories are characterized by a single complex scalar
field and two Weyl fermions interacting only via chiral Yukawa couplings. The
latter generate oriented Feynman diagrams forming hexagonal lattices, whose
fishnet structure signals an underlying integrability that we exploit to
compute anomalous dimensions of BMN-vacuum operators. Furthermore, we
investigate Lunin-Maldacena deformations of $\mathcal{N}=2$ superconformal
field theories with deformation parameter $\gamma$ and prove that bi-fermion
models emerge in the limit of large imaginary $\gamma$ and vanishing 't Hooft
coupling $g$, with $g e^{-i \gamma/2}$ fixed. Finally, we explicitly find
non-trivial conformal fixed points and compute the scaling dimensions of
operators for any $\gamma$ and in the presence of double-trace deformations.
|
We propose a perceptual video quality assessment (PVQA) metric for distorted
videos by analyzing the power spectral density (PSD) of a group of pictures.
This estimation approach relies on changes in the video dynamics, calculated
in the frequency domain, that are primarily caused by distortion. We
obtain a feature map by processing a 3D PSD tensor obtained from a set of
distorted frames. This is a full-reference tempospatial approach that considers
both temporal and spatial PSD characteristics. This makes it broadly suitable
for videos with varying motion patterns and spatial content. Our
technique does not make any assumptions on the coding conditions, streaming
conditions or distortion. This approach is also computationally inexpensive
which makes it feasible for real-time and practical implementations. We
validate our proposed metric by testing it on a variety of distorted sequences
from PVQA databases. The results show that our metric estimates the perceptual
quality at the sequence level accurately. We report the correlation
coefficients with the differential mean opinion scores (DMOS) reported in the
databases. The results show high and competitive correlations compared with
state-of-the-art techniques.
|
We show within a very simple framework that different measures of
fluctuations lead to uncertainty relations resulting in contradictory
conclusions. More specifically we focus on Tsallis and Renyi entropic
uncertainty relations and we get that the minimum uncertainty states of some
uncertainty relations are the maximum uncertainty states of closely related
uncertainty relations, and vice versa.
|
We show that the non-Hermitian Black-Scholes Hamiltonian and its various
generalizations are eta-pseudo-Hermitian. The metric operator eta is explicitly
constructed for this class of Hamiltonians. It is also shown that the effective
Black-Scholes Hamiltonian and its partner form a pseudo-supersymmetric system.
|
We consider a particle confined in a uniformly expanding two-dimensional
square box from the point of view of the de Broglie-Bohm pilot-wave theory.
In particular we study quantum ensembles in which the Born Law is initially
violated (quantum non-equilibrium). We show examples of such ensembles that
start close to quantum equilibrium, as measured by the standard coarse-grained
H-function, but diverge from it with time. We give an explanation of this
result and discuss the possibilities that it opens.
|
We determine Grothendieck groups of periodic derived categories. In
particular, we prove that the Grothendieck group of the $m$-periodic derived
category of finitely generated modules over an Artin algebra is a free
$\mathbb{Z}$-module if $m$ is even but an $\mathbb{F}_2$-vector space if $m$ is
odd. Its rank is equal to the number of isomorphism classes of simple modules
in both cases. As an application, we prove that the number of non-isomorphic
summands of a strict periodic tilting object $T$, which was introduced in [S21]
as a periodic analogue of tilting objects, is independent of the choice of $T$.
|
We analyze the algebra of boundary observables in canonically quantised JT
gravity with or without matter. In the absence of matter, this algebra is
commutative, generated by the ADM Hamiltonian. After coupling to a bulk quantum
field theory, it becomes a highly noncommutative algebra of Type II$_\infty$
with a trivial center. As a result, density matrices and entropies on the
boundary algebra are uniquely defined up to, respectively, a rescaling or
shift. We show that this algebraic definition of entropy agrees with the usual
replica trick definition computed using Euclidean path integrals. Unlike in
previous arguments that focused on $\mathcal{O}(1)$ fluctuations around a black
hole of specified mass, this Type II$_\infty$ algebra describes states at all
temperatures or energies. We also consider the role of spacetime wormholes. One
can try to define operators associated with wormholes that commute with the
boundary algebra, but this fails in an instructive way. In a regulated version
of the theory, wormholes and topology change can be incorporated
perturbatively. The bulk Hilbert space $\mathcal{H}_\mathrm{bulk}$ that
includes baby universe states is then much bigger than the space of states
$\mathcal{H}_\mathrm{bdry}$ accessible to a boundary observer. However, to a
boundary observer, every pure or mixed state on $\mathcal{H}_\mathrm{bulk}$ is
equivalent to some pure state in $\mathcal{H}_\mathrm{bdry}$.
|
In the last few decades, building regression models for non-scalar variables,
including time series, text, image, and video, has attracted increasing
interest from researchers in the data analytics community. In this paper, we
focus on a multivariate time series regression problem. Specifically, we aim to
learn mathematical mappings from multiple chronologically measured numerical
variables within a certain time interval S to multiple numerical variables of
interest over time interval T. Prior approaches, including multivariate
regression models, Seq2Seq models, and functional linear models, suffer
from several limitations. The first two types of models can only handle
regularly observed time series. Besides, the conventional multivariate
regression models tend to be biased and inefficient, as they are incapable of
encoding the temporal dependencies among observations from the same time
series. The sequential learning models explicitly use the same set of
parameters along time, which has negative impacts on accuracy. The
function-on-function linear model in functional data analysis (a branch of
statistics) is insufficient to capture complex correlations among the
considered time series and easily suffers from underfitting. In this paper, we
propose a general functional mapping that embraces the function-on-function
linear model as a special case. We then propose a non-linear
function-on-function model using the fully connected neural network to learn
the mapping from data, which addresses the aforementioned concerns in the
existing approaches. For the proposed model, we describe in detail the
corresponding numerical implementation procedures. The effectiveness of the
proposed model is demonstrated through the application to two real-world
problems.
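A minimal sketch of the proposed idea, under illustrative assumptions: each input series is sampled on a grid over S and each output series on a grid over T, and a fully connected network learns the mapping between the flattened curves. All sizes, the synthetic target, and the single-hidden-layer architecture here are stand-ins, not the paper's implementation; the function-on-function linear model is recovered when the hidden nonlinearity is dropped.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: p=2 input series sampled at m=20 points on S,
# one output series sampled at n=10 points on T, N=200 training examples.
m, n, p, N = 20, 10, 2, 200
X = rng.normal(size=(N, p * m))                      # flattened input curves
true_W = rng.normal(size=(p * m, n)) / np.sqrt(p * m)
Y = np.tanh(X @ true_W)                              # synthetic nonlinear target

# One-hidden-layer fully connected network mapping input curves to output curves.
h = 32
W1 = rng.normal(size=(p * m, h)) * 0.1; b1 = np.zeros(h)
W2 = rng.normal(size=(h, n)) * 0.1;     b2 = np.zeros(n)

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2 + b2

losses = []
lr = 0.05
for _ in range(300):
    A, P = forward(X)
    G = (P - Y) / N                      # gradient of the squared-error loss
    gW2 = A.T @ G; gb2 = G.sum(0)
    GA = (G @ W2.T) * (1 - A ** 2)       # backprop through tanh
    gW1 = X.T @ GA; gb1 = GA.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    losses.append(float(((P - Y) ** 2).mean()))
```

Training drives the mean squared error down over the iterations, illustrating that the non-linear network can fit a mapping that a purely linear function-on-function model would miss.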
|
The problem of balancing conflicting needs is fundamental to intelligence.
Standard reinforcement learning algorithms maximize a scalar reward, which
requires combining different objective-specific rewards into a single number.
Alternatively, different objectives could also be combined at the level of
action value, such that specialist modules responsible for different objectives
submit different action suggestions to a decision process, each based on
rewards that are independent of one another. In this work, we explore the
potential benefits of this alternative strategy. We investigate a biologically
relevant multi-objective problem, the continual homeostasis of a set of
variables, and compare a monolithic deep Q-network to a modular network with a
dedicated Q-learner for each variable. We find that the modular agent: a)
requires minimal exogenously determined exploration; b) has improved sample
efficiency; and c) is more robust to out-of-domain perturbation.
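The modular decision scheme can be sketched in a toy homeostasis task. Everything here is illustrative, not the paper's setup: the paper uses deep Q-networks, replaced below by tabular Q-learners, and the environment (two variables that decay each step, with one replenishing action per variable) is a hypothetical stand-in. Each module learns from its own reward, and the decision process sums the modules' action values.

```python
import numpy as np

rng = np.random.default_rng(1)
LEVELS, ACTIONS, SET = 7, 3, 4          # levels 0..6; actions: feed var0, feed var1, wait

def step(s, a):
    s = list(s)
    if a < 2:
        s[a] = min(LEVELS - 1, s[a] + 2)    # replenish one variable
    s = [max(0, v - 1) for v in s]          # both variables decay each step
    r = [-abs(v - SET) for v in s]          # one reward signal per variable
    return tuple(s), r

# One independent tabular Q-learner per homeostatic variable.
Q = [np.zeros((LEVELS, ACTIONS)) for _ in range(2)]

def choose(s, eps):
    if rng.random() < eps:
        return int(rng.integers(ACTIONS))
    # Decision process: sum the modules' action values over the shared actions.
    return int(np.argmax(Q[0][s[0]] + Q[1][s[1]]))

s = (SET, SET)
for _ in range(20000):
    a = choose(s, eps=0.2)
    s2, r = step(s, a)
    for i in range(2):                      # each module learns from its own reward
        td = r[i] + 0.9 * Q[i][s2[i]].max() - Q[i][s[i], a]
        Q[i][s[i], a] += 0.1 * td
    s = s2

def evaluate(policy, n=1000):
    s, total = (SET, SET), 0.0
    for _ in range(n):
        s, r = step(s, policy(s))
        total += sum(r)
    return total / n

learned = evaluate(lambda s: choose(s, eps=0.0))
random_ = evaluate(lambda s: int(rng.integers(ACTIONS)))
```

The greedy modular agent learns to alternate replenishing actions and keeps both variables near the setpoint, clearly outperforming a random policy.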
|
New discoveries and developments in almost every area of correlated electron
physics were presented at SCES 2016. Here, I provide a personal perspective on
some of these developments, highlighting some new ideas in computational
physics, discussing the "hidden order" challenges of cuprate and heavy electron
superconductors, the mysterious bulk excitations of the topological Kondo
insulator SmB$_{6}$, and new progress in research on quantum spin ice,
iron-based superconductors, and quantum criticality.
|
We demonstrate an exotic double-channeled NT GAAFET (DC NT GAAFET) structure
with an Ion boost, in comparison with NT GAAFET and NW GAAFET structures of the
same footprint. Ion gains of 64.8% and 1.7x are obtained in the DC NT GAAFET
compared with the NT GAAFET and NW GAAFET, respectively. The Ioff of the DC NT
GAAFET degrades by 61.8% relative to that of the NT GAAFET, and SS is almost
comparable in the two device structures, whereas the Ion/Ioff ratio of the DC
NT GAAFET still gains subtly, by 2.4%, over the NT GAAFET thanks to the
substantial Ion enhancement, indicating sustained superior gate electrostatic
controllability in the DC NT GAAFET with respect to the NT GAAFET regardless of
the additional channel incorporated. On the other hand, both the DC NT GAAFET
and NT GAAFET exhibit superior device performance to the NW GAAFET in terms of
high operation speed and better electrostatic controllability, manifested by
suppressed SCEs.
|
Monte Carlo simulations applied to the lattice formulation of quantum
chromodynamics (QCD) enable a study of the theory from first principles, in a
nonperturbative way. After over two decades of developments in the methodology
for this study and with present-day computers in the teraflops range,
lattice-QCD simulations are now able to provide quantitative predictions with
errors of a few percent. This means that these simulations will soon become the
main source of theoretical results for comparison with experiments in physics
of the strong interactions. It is therefore an important moment for the
beginning of Brazilian participation in the field.
|
The spectral index $s$ of particles diffusively accelerated in a relativistic
shock depends on the unknown angular diffusion function $\mathcal{D}$, which
itself depends on the particle distribution function $f$ if acceleration is
efficient. We develop a relaxation code to compute $s$ and $f$ for an arbitrary
functional $\mathcal{D}$ that depends on $f$. A local $\mathcal{D}(f)$
dependence is motivated and shown, when rising (falling) upstream, to soften
(harden) $s$ with respect to the isotropic case, shift the angular distribution
towards upstream (downstream) directions, and strengthen (weaken) the particle
confinement to the shock; an opposite effect on $s$ is found downstream.
However, variations in $s$ remain modest even when $\mathcal{D}$ is a strong
function of $f$, so the standard, isotropic-diffusion results remain
approximately applicable unless $\mathcal{D}$ is both highly anisotropic and
not a local function of $f$. A mild, $\sim 0.1$ softening of $s$, in both 2D
and 3D, when $\mathcal{D}(f)$ rises sufficiently fast, may be indicated by
ab-initio simulations.
|
The processes that determine the establishment of the complex morphology of
neurons during development are still poorly understood. We present experiments
that use live imaging to examine the role of vesicle transport and propose a
lattice-based model that shows symmetry breaking features similar to a neuron
during its polarization. In a otherwise symmetric situation our model predicts
that a difference in neurite length increases the growth potential of the
longer neurite indicating that vesicle transport can be regarded as a major
factor in neurite growth.
|
Currently, existing salient object detection methods based on convolutional
neural networks commonly resort to constructing discriminative networks to
aggregate high-level and low-level features. However, contextual information is
not always fully and effectively utilized, which usually causes either the
absence of useful features or contamination of redundant features. To address
these issues, we propose a novel ladder context correlation complementary
network (LC3Net) in this paper, which is equipped with three crucial
components. At the beginning, we propose a filterable convolution block (FCB)
to assist the automatic collection of information on the diversity of initial
features, and it is simple yet practical. Besides, we propose a dense cross
module (DCM) to facilitate the intimate aggregation of different levels of
features by validly integrating semantic information and detailed information
of both adjacent and non-adjacent layers. Furthermore, we propose a
bidirectional compression decoder (BCD) to help the progressive shrinkage of
multi-scale features from coarse to fine by leveraging multiple pairs of
alternating top-down and bottom-up feature interaction flows. Extensive
experiments demonstrate the superiority of our method against 16
state-of-the-art methods.
|
The use of recommender systems has increased dramatically to assist online
social network users in the decision-making process and selecting appropriate
items. On the other hand, due to many different items, users cannot score a
wide range of them, and usually, there is a scattering problem for the matrix
created for users. To solve the problem, the trust-based recommender systems
are applied to predict the score of the desired item for the user. Various
criteria have been considered to define trust, and the degree of trust between
users is usually calculated based on these criteria. In this regard, it is
impossible to obtain the degree of trust for all users because of their large
number in social networks. To address this, researchers use variants of the
random walk algorithm to randomly visit some users, study their behavior, and
estimate the degree of trust between them. In the present
study, a trust-based recommender system is presented that predicts the score of
items that the target user has not rated, and if the item is not found, it
offers the user the items dependent on that item that are also part of the
user's interests. In a trusted network, by weighting the edges between the
nodes, the degree of trust is determined, and a TrustWalker is developed, which
uses the Biased Random Walk (BRW) algorithm to move between the nodes. The
weight of the edges is effective in the selection of random steps. The
implementation and evaluation of the present research method have been carried
out on three datasets named Epinions, Flixster, and FilmTrust; the results
reveal the high efficiency of the proposed method.
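The biased step at the heart of this approach can be sketched as follows. The graph, edge weights, ratings, and helper names below are all hypothetical, not the paper's datasets or implementation: from each user the walk moves to a trusted neighbor with probability proportional to the edge weight, and a walk terminates when it reaches a user who has rated the target item.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical trust network: trust[u][v] = weight of the directed edge u -> v.
trust = {
    "alice": {"bob": 0.9, "carol": 0.1},
    "bob":   {"alice": 0.5, "dave": 0.5},
    "carol": {"dave": 1.0},
    "dave":  {"alice": 1.0},
}
ratings = {"bob": {"item1": 4.0}, "carol": {"item1": 2.0}, "dave": {"item1": 5.0}}

def biased_step(u):
    # Edge weights bias the choice of the next node (Biased Random Walk).
    nbrs = list(trust[u])
    w = np.array([trust[u][v] for v in nbrs])
    return nbrs[rng.choice(len(nbrs), p=w / w.sum())]

def predict(user, item, walks=2000, max_len=6):
    # Average the ratings found by many short biased walks.
    num = den = 0.0
    for _ in range(walks):
        u = user
        for _ in range(max_len):
            u = biased_step(u)
            if item in ratings.get(u, {}):
                num += ratings[u][item]; den += 1
                break
    return num / den if den else None

p = predict("alice", "item1")
```

Because alice trusts bob (0.9) much more than carol (0.1), the prediction is pulled toward bob's rating of 4.0 rather than carol's 2.0.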
|
Assembling parts into an object is a combinatorial problem that arises in a
variety of contexts in the real world and involves numerous applications in
science and engineering. Previous related work tackles limited cases with
identical unit parts or jigsaw-style parts of textured shapes, which greatly
mitigate combinatorial challenges of the problem. In this work, we introduce
the more challenging problem of shape assembly, which involves textureless
fragments of arbitrary shapes with indistinctive junctions, and then propose a
learning-based approach to solving it. We demonstrate its effectiveness on
shape assembly tasks with various scenarios, including the ones with abnormal
fragments (e.g., missing and distorted), the different number of fragments, and
different rotation discretization.
|
Mass and charge identification of charged products detected with
Silicon-CsI(Tl) telescopes of the Chimera apparatus is presented. An
identification function, based on the Bethe-Bloch formula, is used to fit
empirical correlation between Delta E and E ADC readings, in order to
determine, event by event, the atomic and mass numbers of the detected charged
reaction products prior to energy calibration.
|
We report an experimental study of a binary sand bed under an oscillating
water flow. The formation and evolution of ripples is observed. The appearance
of a granular segregation is shown to strongly depend on the sand bed
preparation. The initial wavelength of the mixture is measured. In the final
steady state, a segregation in volume is observed instead of a segregation at
the surface as reported before. The correlation between this phenomenon and the
fluid flow is emphasised. Finally, different ``exotic'' patterns and their
geophysical implications are presented.
|
The Type-II solar radio burst recorded on 13 June 2010 by the radio
spectrograph of the Hiraiso Solar Observatory was employed to estimate the
magnetic-field strength in the solar corona. The burst was characterized by a
well pronounced band-splitting, which we used to estimate the density jump at
the shock and Alfven Mach number using the Rankine-Hugoniot relations. The
plasma frequency of the Type-II bursts is converted into height [R] in solar
radii using the appropriate density model, then we estimated the shock speed
[Vs], coronal Alfven velocity [Va], and the magnetic-field strength at
different heights. The relative bandwidth of the band-split is found to be in
the range 0.2 -- 0.25, corresponding to the density jump of X = 1.44 -- 1.56,
and the Alfven Mach number of MA = 1.35 -- 1.45. The inferred mean shock speed
was on the order of V ~ 667 km/s. From the dependencies V(R) and MA(R) we found
that the Alfven speed slightly decreases at R ~ 1.3 -- 1.5. The magnetic-field
strength decreases from about 2.7 to 1.7 G over R ~ 1.3 -- 1.5 Rs, depending
on the coronal-density model employed. We find that our results are
in good agreement with the empirical scaling by Dulk and McLean (Solar Phys.
57, 279, 1978) and Gopalswamy et al. (Astrophys. J. 744, 72, 2012). Our result
shows that the Type-II band-splitting method is an important tool for inferring
the coronal magnetic field, especially when combined with independent
measurements from white-light observations.
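The chain of estimates from the band split can be reproduced numerically. A short sketch, assuming (as is standard in band-splitting analyses) that the emission tracks the plasma frequency, so $f \propto \sqrt{n_e}$, and using the low-beta perpendicular-shock Rankine-Hugoniot relation for the Alfven Mach number; these modeling choices are assumptions here, but the outputs reproduce the abstract's ranges X = 1.44 -- 1.56 and MA = 1.35 -- 1.45.

```python
import numpy as np

def density_jump(bdw):
    # Relative bandwidth BDW = (f_U - f_L)/f_L; since f is proportional to
    # sqrt(n_e), the density jump across the shock is X = (1 + BDW)^2.
    return (1.0 + bdw) ** 2

def alfven_mach(X):
    # Perpendicular-shock Rankine-Hugoniot relation in the low-beta limit.
    return np.sqrt(X * (X + 5.0) / (2.0 * (4.0 - X)))

for bdw in (0.20, 0.25):
    X = density_jump(bdw)
    print(f"BDW={bdw:.2f}: X={X:.2f}, MA={alfven_mach(X):.2f}")
```

Running this prints X = 1.44 and 1.56 with MA = 1.35 and 1.45, matching the values quoted above.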
|
A nonperturbative method to compute the mass of the b quark, including the
1/m term in HQET, has been presented in a companion talk. Following this
strategy, we find in the MS bar scheme m_b^{stat}(m_b) = 4.350(64) GeV for the
leading term, and m_b^{(1)}(m_b) = -0.049(29) GeV for the next to leading order
correction. This method involves several steps, including the simulation of the
relativistic theory in a small volume, and of the effective theory in a big
volume. Here we present some numerical details of our calculations.
|
Within framework of the $\mu$ from $\nu$ Supersymmetric Standard Model
($\mu\nu$SSM), exotic singlet right-handed neutrino superfields induce new
sources for lepton-flavor violation. In this work, we investigate some
lepton-flavor violating processes in detail in the $\mu\nu$SSM. The numerical
results indicate that the branching ratios for lepton-flavor violating
processes $\mu\rightarrow e\gamma$, $\tau\rightarrow\mu\gamma$ and
$\mu\rightarrow3e$ can reach $10^{-12}$ when $\tan\beta$ is large enough, which
could be detected in the near future. We also discuss the constraint on the relevant
parameter space of the model from the muon anomalous magnetic dipole moment. In
addition, from the scalars for the $\mu\nu$SSM we strictly separate the
Goldstone bosons, which disappear in the physical gauge.
|
The cold dark matter (DM) paradigm describes the large-scale structure of the
universe remarkably well. However, there exists some tension with the observed
abundances and internal density structures of both field dwarf galaxies and
galactic satellites. Here, we demonstrate that a simple class of DM models may
offer a viable solution to all of these problems simultaneously. Their key
phenomenological properties are velocity-dependent self-interactions mediated
by a light vector messenger and thermal production with much later kinetic
decoupling than in the standard case.
|
In this paper we derive from arguments of string scattering a set of eight
tetrahedron equations, with different index orderings. It is argued that this
system of equations is the proper system that represents integrable structures
in three dimensions generalising the Yang-Baxter equation. Under additional
restrictions this system reduces to the usual tetrahedron equation in the
vertex form. Most known solutions fall under this class, but it is by no means
necessary. Comparison is made with the work on braided monoidal 2-categories
also leading to eight tetrahedron equations.
|
The $N$-body problem with a $1/r^2$ potential has, in addition to translation
and rotational symmetry, an effective scale symmetry which allows its zero
energy flow to be reduced to a geodesic flow on complex projective $N-2$-space,
minus a hyperplane arrangement. When $N=3$ we get a geodesic flow on the
two-sphere minus three points. If, in addition we assume that the three masses
are equal, then it was proved in [1] that the corresponding metric is
hyperbolic: its Gaussian curvature is negative except at two points. Does the
negative curvature property persist for $N=4$, that is, in the equal mass
$1/r^2$ 4-body problem? Here we prove `no' by computing that the corresponding
Riemannian metric in this $N=4$ case has positive sectional curvature at some
two-planes. This `no' answer dashes hopes of naively extending hyperbolicity
from $N=3$ to $N>3$.
|
Threshold and infrared divergences are studied as possible mechanisms of
particle production and compared to the usual decay process in a model quantum
field theory from which generalizations are obtained. A spectral representation
of the propagator of the decaying particle suggests that decay, threshold and
infrared singularities while seemingly different phenomena are qualitatively
related. We implement a non-perturbative dynamical resummation method to study
the time evolution of an initial state. It is manifestly unitary and yields the
asymptotic state and the distribution function of produced particles. Whereas
the survival probability in a decay process falls off as $e^{-\Gamma t}$, in
the threshold and infrared-divergent cases it falls off instead as
$e^{-\sqrt{t/t^*}}$ and $t^{-\Delta}$ respectively, with $\Gamma, \Delta
\propto (\mathrm{coupling})^2$ whereas $1/t^* \propto (\mathrm{coupling})^4$.
Despite the different decay dynamics, the
asymptotic state is qualitatively similar: a kinematically entangled state of
the daughter particles with a distribution function which fulfills the
unitarity condition and is strongly peaked at energy conserving transitions but
broadened by the "lifetime" $1/\Gamma$ or $t^*$ for usual decay and the
threshold singularity respectively, whereas it scales with the anomalous
dimension $\Delta$ for the
infrared singular case. Threshold and infrared instabilities are production
mechanisms just as efficient as particle decay. If one of the particles is in a
dark sector and not observed, the loss of information yields an entanglement
entropy determined by the distribution functions and increases upon unitary
time evolution.
|
The fiducial argument of Fisher (1973) has been described as his biggest
blunder, but the recent review of Hannig et al. (2016) demonstrates the current
and increasing interest in this brilliant idea. This short note analyses an
example introduced by Seidenfeld (1992) where the fiducial distribution is
restricted to a string.
Keywords and phrases: Bayesian and fiducial inference, Restrictions on
parameters, Uncertainty quantification, Epistemic probability, Statistics on a
manifold.
|
In this paper, we investigate cooperative spectrum sensing (CSS) in a
cognitive radio network (CRN) where multiple secondary users (SUs) cooperate in
order to detect a primary user (PU) which possibly occupies multiple bands
simultaneously. Deep cooperative sensing (DCS), which constitutes the first CSS
framework based on a convolutional neural network (CNN), is proposed. In DCS,
instead of the explicit mathematical modeling of CSS which is hard to compute
and also hard to use in practice, the strategy for combining the individual
sensing results of the SUs is learned with a CNN using training sensing
samples. Accordingly, an environment-specific CSS which considers both spectral
and spatial correlation of individual sensing outcomes, is found in an adaptive
manner regardless of whether the individual sensing results are quantized or
not. Through simulation, we show that the performance of CSS can be improved by
the proposed DCS with low complexity even when the number of training samples
is moderate.
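The idea of learning the combining rule from training sensing samples, rather than deriving it analytically, can be illustrated in miniature. The sketch below is not DCS itself: the paper trains a CNN over the SUs' spectral-spatial sensing outputs, while here a single logistic unit over synthetic SU energy reports stands in for it, and all sizes and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_su, n_train, n_test = 4, 2000, 500

def sensing_samples(n):
    y = rng.integers(0, 2, size=n)                 # PU absent (0) / present (1)
    # Each SU reports an energy statistic; an active PU raises its mean.
    X = rng.normal(loc=y[:, None] * 1.0, scale=0.7, size=(n, n_su))
    return X, y

Xtr, ytr = sensing_samples(n_train)
Xte, yte = sensing_samples(n_test)

# Minimal learned fusion rule: logistic regression trained by gradient descent.
w, b = np.zeros(n_su), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
    g = p - ytr                                    # cross-entropy gradient
    w -= 0.1 * Xtr.T @ g / n_train
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(Xte @ w + b))) > 0.5) == yte).mean()
```

Even this minimal learned combiner detects the PU from the SUs' raw reports well above chance, which is the same principle DCS exploits with a CNN at much larger scale.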
|
The periodic discrete Toda equation defined over finite fields is studied. We
obtain the finite graph structures formed by the network of states, where edges
denote possible time evolutions. We simplify the graphs by introducing an
equivalence class of cyclic permutations of the initial values. We prove that
the graphs are bi-directional and that they are composed of several arrays of
complete graphs connected at one of their vertices.
|
We propose two schemes for concentration of hyperentanglement of nonlocal
multipartite states which are simultaneously entangled in the polarization and
spatial modes. One scheme uses an auxiliary single-photon state prepared
according to the parameters of the less-entangled states. The other scheme uses
two less-entangled states with unknown parameters to distill the maximal
hyperentanglement. The procrustean concentration is realized by two parity
check measurements in both the two degrees of freedom. Nondestructive quantum
nondemolition detectors based on cross-Kerr nonlinearity are used to implement
the parity check, which makes the unsuccessful instances reusable in the next
concentration round. The success probabilities in both schemes can be made to
approach unity by iteration. Moreover, in both schemes only one of the N
parties has to perform the parity check measurements. Our schemes are efficient
and useful for quantum information processing involving hyperentanglement.
|
A plasma-based isotopic separation method is proposed. Isotopes of different
masses get separated during plasma expansion. Relying on Gurevich's model of
plasma expansion into a vacuum (A.V. Gurevich, L.V. Pariiskaya and L.P.
Pitaevskii, Sov. Phys. JETP 36, 274 (1973)), the enrichment factor has been
calculated. For t = 15 (t being the normalized time), an increase of the
relative abundance of 30% is expected.
|
A photo-polymerization initiator based on an imidazolium and an oxometalate,
viz., (BMIm)2(DMIm) PW12O40 (where, BMIm = 1-butyl-3-methylimizodium, DMIm =
3,3'-Dimethyl-1,1'-Diimidazolium) is reported. It polymerizes several
industrially important monomers and, being recoverable, can be reused. The Mn
and PDI are controlled, and a reaction pathway is proposed.
|
Time series forecasting is crucial for applications across multiple domains
and various scenarios. Although Transformer models have dramatically shifted
the landscape of forecasting, their effectiveness remains debated. Recent
findings have indicated that simpler linear models might outperform complex
Transformer-based approaches, highlighting the potential for more streamlined
architectures. In this paper, we shift focus from the overall architecture of
the Transformer to the effectiveness of self-attentions for time series
forecasting. To this end, we introduce a new architecture, Cross-Attention-only
Time Series transformer (CATS), that rethinks the traditional Transformer
framework by eliminating self-attention and leveraging cross-attention
mechanisms instead. By establishing future horizon-dependent parameters as
queries and enhanced parameter sharing, our model not only improves long-term
forecasting accuracy but also reduces the number of parameters and memory
usage. Extensive experiments across various datasets demonstrate that our model
achieves superior performance with the lowest mean squared error and uses fewer
parameters compared to existing models.
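A shape-level sketch of the cross-attention-only idea. All dimensions and names below are illustrative, and the real CATS model additionally uses patching, parameter sharing, and training; the point shown is only the structure: horizon-dependent learnable queries attend over key/value projections of the observed series, with no self-attention block anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
L, H, d = 96, 24, 16            # lookback length, forecast horizon, model width
x = rng.normal(size=(L, 1))     # one observed input series (illustrative)

# Keys and values come from the observed series; there is NO self-attention.
Wk, Wv = rng.normal(size=(1, d)), rng.normal(size=(1, d))
K, V = x @ Wk, x @ Wv                            # (L, d) each

# Horizon-dependent learnable parameters act as queries: one per future step.
Q = rng.normal(size=(H, d)) * 0.1

def cross_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[1])       # (H, L)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # softmax over input positions
    return w @ V                                  # (H, d)

Wo = rng.normal(size=(d, 1))
forecast = cross_attention(Q, K, V) @ Wo          # (H, 1): one value per step
```

Because the queries are a fixed set of H learned vectors rather than functions of the input, the attention cost scales with H x L instead of L x L, which is where the parameter and memory savings come from.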
|
We construct linear network codes utilizing algebraic curves over finite
fields and certain associated Riemann-Roch spaces and present methods to obtain
their parameters.
In particular we treat the Hermitian curve and the curves associated with the
Suzuki and Ree groups all having the maximal number of points for curves of
their respective genera.
Linear network coding transmits information in terms of a basis of a vector
space and the information is received as a basis of a possibly altered vector
space. Ralf Koetter and Frank R. Kschischang
%\cite{DBLP:journals/tit/KoetterK08} introduced a metric on the set of vector
spaces and showed that a minimal distance decoder for this metric achieves
correct decoding if the dimension of the intersection of the transmitted and
received vector space is sufficiently large.
The vector spaces in our construction have minimal distance bounded from
below in the above metric making them suitable for linear network coding.
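The metric in question is the subspace distance $d(U,V) = \dim(U+V) - \dim(U \cap V)$, which can be computed from ranks alone since $\dim(U \cap V) = \dim U + \dim V - \dim(U+V)$. A small sketch over GF(2); the example matrices and helper names are illustrative:

```python
import numpy as np

def rank_gf2(M):
    # Rank via Gaussian elimination over GF(2).
    M = (np.array(M, dtype=np.uint8) & 1).copy()
    r, (rows, cols) = 0, M.shape
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]          # row reduction is XOR over GF(2)
        r += 1
    return r

def subspace_distance(U, V):
    # d(U, V) = dim(U + V) - dim(U ∩ V) = 2 dim(U + V) - dim U - dim V.
    return 2 * rank_gf2(np.vstack([U, V])) - rank_gf2(U) - rank_gf2(V)

U = [[1, 0, 0, 0], [0, 1, 0, 0]]      # span{e1, e2} in GF(2)^4
V = [[1, 0, 0, 0], [0, 0, 1, 0]]      # span{e1, e3} in GF(2)^4
```

Here $U$ and $V$ share the one-dimensional intersection span{e1}, so their distance is 2; a minimum-distance decoder declares the transmitted space to be the codeword space closest to the received one in this metric.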
|
When an elastic object is dragged through a viscous fluid tangent to a rigid
boundary, it experiences a lift force perpendicular to its direction of motion.
An analogous lift mechanism occurs when a rigid symmetric object translates
parallel to an elastic interface or a soft substrate. The induced lift force is
attributed to an elastohydrodynamic coupling that arises from the breaking of
the flow reversal symmetry induced by the elastic deformation of the
translating object or the interface. Here we derive explicit analytical
expressions for the quasi-steady state lift force exerted on a rigid spherical
particle translating parallel to a finite-sized membrane exhibiting a
resistance toward both shear and bending. Our analytical approach proceeds
through the application of the Lorentz reciprocal theorem so as to obtain the
solution of the flow problem using a perturbation technique for small
deformations of the membrane. We find that the shear-related contribution to
the normal force leads to an attractive interaction between the particle and
the membrane. This emerging attractive force decreases quadratically with the
system size to eventually vanish in the limit of an infinitely-extended
membrane. In contrast, membrane bending leads to a repulsive interaction whose
effect becomes more pronounced upon increasing the system size, where the lift
force is found to diverge logarithmically for an infinitely-large membrane. The
unphysical divergence of the bending-induced lift force can be rendered finite
by regularizing the solution with a cut-off length beyond which the bending
forces become subdominant to an external body force.
|
Applications of neural networks to condensed matter physics are becoming
popular and beginning to be well accepted. Obtaining and representing the
ground and excited state wave functions are examples of such applications.
Another application is analyzing the wave functions and determining their
quantum phases. Here, we review the recent progress of using the multilayer
convolutional neural network, so-called deep learning, to determine the quantum
phases in random electron systems. After training the neural network by the
supervised learning of wave functions in restricted parameter regions in known
phases, the neural networks can determine the phases of the wave functions in
wide parameter regions in unknown phases; hence, the phase diagrams are
obtained. We demonstrate the validity and generality of this method by drawing
the phase diagrams of two- and higher dimensional Anderson metal-insulator
transitions and quantum percolations as well as disordered topological systems
such as three-dimensional topological insulators and Weyl semimetals. Both
real-space and Fourier space wave functions are analyzed. The advantages and
disadvantages over conventional methods are discussed.
|
Clathrin-mediated endocytosis (CME) is a key pathway for transporting cargo
into cells via membrane vesicles. It plays an integral role in nutrient import,
signal transduction, neurotransmission and cellular entry of pathogens and
drug-carrying nanoparticles. As CME entails substantial local remodeling of the
plasma membrane, the presence of membrane tension offers resistance to bending
and hence, vesicle formation. Experiments show that in such high tension
conditions, actin dynamics is required to carry out CME successfully. In this
study, we build upon these pioneering experimental studies to provide
fundamental mechanistic insights into the roles of two key endocytic proteins,
namely, actin and BAR proteins in driving vesicle formation in high membrane
tension environment. Our study reveals a new actin force induced `snap-through
instability' that triggers a rapid shape transition from a shallow invagination
to a highly invaginated tubular structure. We show that the association of BAR
proteins stabilizes vesicles and induces a milder instability. In addition, we
present a new counterintuitive role of BAR depolymerization in regulating the
shape evolution of vesicles. We show that the dissociation of BAR proteins,
supported by actin-BAR synergy, leads to considerable elongation and squeezing
of vesicles. Going beyond the membrane geometry, we put forth a new
stress-based perspective for the onset of vesicle scission and predict the
shapes and composition of detached vesicles. We present the snap-through
transition and the high in-plane stress as possible explanations for the
intriguing direct transformation of broad and shallow invaginations into
detached vesicles in BAR mutant yeast cells.
|
Let $f$ be a rational map with degree $d\geq 2$ whose Julia set is connected
but not equal to the whole Riemann sphere. It is proved that there exists a
rational map $g$ whose Julia set contains a buried Julia component on which the
dynamics is quasiconformally conjugate to that of $f$ on its Julia set if and
only if $f$ has neither parabolic basins nor Siegel disks. If such $g$
exists, then the degree can be chosen such that $\text{deg}(g)\leq 7d-2$. In
particular, if $f$ is a polynomial, then $g$ can be chosen such that
$\text{deg}(g)\leq 4d+4$. Moreover, some quartic and cubic rational maps whose
Julia sets contain buried Jordan curves are also constructed.
|
During the last few years, there has been plenty of research on reducing
energy consumption in telecommunication infrastructure. However, many of the
proposals remain unimplemented due to the lack of flexibility in legacy
networks. In this paper we demonstrate how the software-defined networking
(SDN) capabilities of current networking equipment can be used to implement
some of these energy-saving algorithms. In particular, we developed an ONOS
application that realizes an energy-aware traffic scheduler for a bundle link made
up of Energy Efficient Ethernet (EEE) links between two SDN switches. We show
how our application is able to dynamically adapt to the traffic characteristics
and save energy by concentrating the traffic on as few ports as possible. This
way, unused ports remain in Low Power Idle (LPI) state most of the time, saving
energy.
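As a toy illustration of the consolidation idea described above (this is not the ONOS application itself; the flow names, demands, and capacities are made up), a greedy scheduler can pack flow demands onto the lowest-numbered ports of the bundle so that the remaining ports stay idle and can rest in LPI:

```python
# Toy sketch: greedily pack flow demands onto as few ports of a bundle link
# as possible, so that unused ports can remain in EEE Low Power Idle (LPI).

def pack_flows(flows, port_capacity, n_ports):
    """Assign each flow (a bandwidth demand) to the lowest-numbered port
    with spare capacity; return (port -> flows, number of active ports)."""
    load = [0.0] * n_ports
    assignment = {i: [] for i in range(n_ports)}
    # Largest demands first (first-fit decreasing) to reduce fragmentation.
    for flow_id, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        for port in range(n_ports):
            if load[port] + demand <= port_capacity:
                load[port] += demand
                assignment[port].append(flow_id)
                break
        else:
            raise ValueError(f"flow {flow_id} does not fit on any port")
    active = sum(1 for l in load if l > 0)
    return assignment, active

flows = {"f1": 0.4, "f2": 0.3, "f3": 0.2}   # demands as fraction of line rate
assignment, active_ports = pack_flows(flows, port_capacity=1.0, n_ports=4)
```

Here all three flows fit on a single port, leaving three ports idle; a real scheduler would additionally re-run the packing as traffic changes.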
|
Motivated by the possible existence of other universes, this paper considers
the evolution of massive stars with different values for the fundamental
constants. We focus on variations in the triple alpha resonance energy and
study its effects on the resulting abundances of $^{12}$C, $^{16}$O, and larger
nuclei. In our universe, the $0^{+}$ energy level of carbon supports a resonant
nuclear reaction that dominates carbon synthesis in stellar cores and accounts
for the observed cosmic abundances. Here we define $\Delta{E}_R$ to be the
change in this resonant energy level, and show how different values affect the
cosmic abundances of the intermediate alpha elements. Using the state-of-the-art
computational package MESA, we carry out stellar evolution calculations
for massive stars in the range $M_\ast$ = $15-40M_\odot$, and for a wide range
of resonance energies. We also include both solar and low metallicity initial
conditions. For negative $\Delta{E}_R$, carbon yields are increased relative
to standard stellar models, and such universes remain viable as long as the
production of carbon nuclei remains energetically favorable, and stars remain
stable, down to $\Delta{E}_R\approx-300$ keV. For positive $\Delta{E}_R$,
carbon yields decrease, but significant abundances can be produced for
resonance energy increments up to $\Delta{E}_R\approx+500$ keV. Oxygen yields
tend to be anti-correlated with those of carbon, and the allowed range in
$\Delta{E}_R$ is somewhat smaller. We also present yields for neon, magnesium,
and silicon. With updated stellar evolution models and a more comprehensive
survey of parameter space, these results indicate that the range of viable
universes is larger than suggested by earlier studies.
|
Magnetic, dielectric, and magnetoelectric properties in a spin-state
transition system are examined, motivated by the recent discovery of a
multiferroic behavior in a cobalt oxide. We construct an effective model
Hamiltonian based on the two-orbital Hubbard model, in which the spin-state
degrees of freedom in magnetic ions couple with ferroelectric-type lattice
distortions. A phase transition occurs from the high-temperature low-spin phase
to the low-temperature high-spin ferroelectric phase, accompanied by an
increase of the spin entropy. The calculated results are consistent with the
experimental pressure-temperature phase diagram. We predict the magnetic-field
induced electric polarization in the low-spin paraelectric phase near the
ferroelectric phase boundary.
|
Panoptic Narrative Detection (PND) and Segmentation (PNS) are two challenging
tasks that involve identifying and locating multiple targets in an image
according to a long narrative description. In this paper, we propose a unified
and effective framework called NICE that can jointly learn these two panoptic
narrative recognition tasks. Existing visual grounding methods use a two-branch
paradigm, but applying this directly to PND and PNS can result in prediction
conflicts due to their intrinsic many-to-many alignment property. To address
this, we introduce two cascading modules based on the barycenter of the mask,
which are Coordinate Guided Aggregation (CGA) and Barycenter Driven
Localization (BDL), responsible for segmentation and detection, respectively.
By linking PNS and PND in series with the barycenter of segmentation as the
anchor, our approach naturally aligns the two tasks and allows them to
complement each other for improved performance. Specifically, CGA provides the
barycenter as a reference for detection, reducing BDL's reliance on a large
number of candidate boxes. BDL leverages its excellent properties to
distinguish different instances, which improves the performance of CGA for
segmentation. Extensive experiments demonstrate that NICE surpasses all
existing methods by a large margin, achieving gains of 4.1% for PND and 2.9%
for PNS over the state of the art. These results validate the effectiveness of our
proposed collaborative learning strategy. The project of this work is made
publicly available at https://github.com/Mr-Neko/NICE.
|
Wider adoption of neural networks in many critical domains such as finance
and healthcare is being hindered by the need to explain their predictions and
to impose additional constraints on them. Monotonicity constraint is one of the
most requested properties in real-world scenarios and is the focus of this
paper. One of the oldest ways to construct a monotonic fully connected neural
network is to constrain signs on its weights. Unfortunately, this construction
does not work with popular non-saturated activation functions as it can only
approximate convex functions. We show this shortcoming can be fixed by
constructing two additional activation functions from a typical unsaturated
monotonic activation function and employing each of them on a subset of the
neurons. Our experiments show that this approach to building monotonic neural
networks has better accuracy when compared to other state-of-the-art methods,
while being the simplest one in the sense of having the least number of
parameters, and not requiring any modifications to the learning procedure or
post-learning steps. Finally, we prove it can approximate any continuous
monotone function on a compact subset of $\mathbb{R}^n$.
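One plausible reading of this construction (a sketch under our own assumptions, not the authors' exact architecture) derives from a base monotonic activation its point-reflected concave counterpart and a bounded, saturated combination of the two; with sign-constrained weights, splitting neurons across the three activations preserves monotonicity while escaping the convexity restriction:

```python
import numpy as np

def rho(x):                # base unsaturated monotonic activation (ReLU)
    return np.maximum(x, 0.0)

def rho_hat(x):            # concave counterpart: point reflection of rho
    return -rho(-x)

def rho_tilde(x):          # bounded, saturated combination of the two
    return np.where(x < 0, rho(x + 1) - rho(1), rho_hat(x - 1) + rho(1))

def monotone_layer(x, W, b, split):
    """One hidden layer of a monotone network: |W| keeps every weight
    non-negative, and neurons are split across the three activations."""
    z = np.abs(W) @ x + b
    k1, k2 = split
    return np.concatenate([rho(z[:k1]), rho_hat(z[k1:k2]), rho_tilde(z[k2:])])
```

Since every weight is non-negative and every activation is non-decreasing, the layer output is non-decreasing in each input coordinate; the concave and saturated variants are what let stacked layers approximate non-convex monotone functions.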
|
Offline optimal planning of trajectories for redundant robots along
prescribed task space paths is usually broken down into two consecutive
processes: first, the task space path is inverted to obtain a joint space path,
then, the latter is parametrized with a time law. If the two processes are
separated, they cannot optimize the same objective function, ultimately
providing sub-optimal results. In this paper, a unified approach is presented
where dynamic programming is the underlying optimization technique. Its
flexibility allows accommodating arbitrary constraints and objective functions,
thus providing a generic framework for optimal planning of real systems. To
demonstrate its applicability to a real-world scenario, the framework is
instantiated for time-optimality on Franka Emika's Panda robot. The well-known
issues associated with the execution of non-smooth trajectories on a real
controller are partially addressed at planning level, through the enforcement
of constraints, and partially through post-processing of the optimal solution.
The experiments show that the proposed framework is able to effectively exploit
kinematic redundancy to optimize the performance index defined at planning
level and generate feasible trajectories that can be executed on real hardware
with satisfactory results.
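A minimal sketch of the underlying optimization technique (dynamic programming over a discretized path, not the paper's full framework; the grids, limits, and the crude acceleration bound below are invented for illustration):

```python
import numpy as np

def dp_time_parametrization(v_limit, v_grid, dv_max, ds):
    """Backward DP over path samples: cost[i, j] is the minimum remaining
    traversal time when passing sample i at speed v_grid[j], subject to a
    per-sample speed limit and a bound on speed change between samples."""
    n = len(v_limit)
    cost = np.full((n, len(v_grid)), np.inf)
    cost[-1, :] = 0.0                           # free final speed
    for i in range(n - 2, -1, -1):
        for j, v in enumerate(v_grid):
            if v > v_limit[i]:
                continue
            for k, w in enumerate(v_grid):
                if abs(w - v) > dv_max or w > v_limit[i + 1]:
                    continue
                dt = 2 * ds / (v + w)           # trapezoidal segment time
                cost[i, j] = min(cost[i, j], dt + cost[i + 1, k])
    return cost

v_grid = np.linspace(0.1, 1.0, 10)   # candidate path speeds (all positive)
v_limit = np.ones(20)                # uniform speed limit along the path
cost = dp_time_parametrization(v_limit, v_grid, dv_max=0.25, ds=0.05)
```

The flexibility claimed in the abstract shows up here as the freedom to swap the transition cost `dt` or the admissibility checks for arbitrary objectives and constraints without changing the DP skeleton.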
|
In an Achlioptas process, starting with a graph that has n vertices and no
edges, in each round $d \geq 1$ edges are drawn uniformly at random, and using
some rule exactly one of them is chosen and added to the evolving graph. For
the class of Achlioptas processes we investigate how much impact the rule has
on one of the most basic properties of a graph: connectivity. Our main results
are twofold. First, we study the prominent class of bounded size rules, which
select the edge to add according to the component sizes of its vertices,
treating all sizes larger than some constant equally. For such rules we provide
a fine analysis that exposes the limiting distribution of the number of rounds
until the graph gets connected, and we give a detailed picture of the dynamics
of the formation of the single component from smaller components. Second, our
results allow us to study the connectivity transition of all Achlioptas
processes, in the sense that we identify a process that accelerates it as much
as possible.
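The process itself is easy to simulate; the following sketch (illustrative only, with an ad hoc choice of rule and parameters) implements a bounded-size rule with d = 2 using union-find:

```python
import random

def find(parent, x):
    """Union-find root with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def rounds_until_connected(n, K=2, seed=0):
    """Rounds until the evolving graph is connected under a bounded-size
    rule: of d = 2 random edges, keep the one whose endpoints lie in the
    smaller components, capping all component sizes at K."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n
    components, rounds = n, 0
    while components > 1:
        rounds += 1
        candidates = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        def score(e):   # bounded-size rule: sizes above K are treated equally
            return min(size[find(parent, e[0])], K) \
                 + min(size[find(parent, e[1])], K)
        u, v = min(candidates, key=score)
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                 # merge the two components
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
            components -= 1
    return rounds

r = rounds_until_connected(200)
```

Each round adds at most one useful edge, so at least n - 1 rounds are needed; the distribution of the excess over this bound is exactly the quantity the abstract's fine analysis describes.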
|
We study the Cauchy problem for the chemotaxis Navier-Stokes equations and
the Keller-Segel-Navier-Stokes system. Local-in-time and global-in-time
solutions satisfying fundamental properties such as mass conservation and
nonnegativity preservation are constructed for low regularity data in $2$ and
higher dimensions under suitable conditions. Our initial data classes involve a
new scale of function spaces, namely $\mathscr{Y}(\mathbb{R}^N)$, which collects
divergences of vector fields with components in the square Campanato space
$\mathscr{L}_{2,N-2}(\mathbb{R}^N)$, $N>2$ (and can be identified with the homogeneous
Besov space $\dot{B}^{-1}_{2,2}(\mathbb{R}^N)$ when $N=2$), and are shown to be
optimal in a certain sense. Moreover, a uniqueness criterion for global solutions
is obtained under certain limiting conditions.
|
Assume $L(\mathbb{R},\mu)$ satisfies ZF + DC + $\Theta>\omega_2$ + "$\mu$ is a
normal fine measure on $\mathcal{P}_{\omega_1}(\mathbb{R})$". The main result of
this paper is a characterization theorem for $L(\mathbb{R},\mu)$ which states
that $L(\mathbb{R},\mu)$ satisfies $\Theta>\omega_2$ if and only if
$L(\mathbb{R},\mu)$ satisfies $\mathrm{AD}^+$. As a result, we obtain the
equiconsistency of the two theories: "ZFC + there are $\omega^2$ Woodin
cardinals" and "ZF + DC + $\mu$ is a normal fine measure on
$\mathcal{P}_{\omega_1}(\mathbb{R})$ + $\Theta>\omega_2$".
|
Deep learning based recommendation systems form the backbone of most
personalized cloud services. Though the computer architecture community has
recently started to take notice of deep recommendation inference, the resulting
solutions have taken wildly different approaches - ranging from near memory
processing to at-scale optimizations. To better design future hardware systems
for deep recommendation inference, we must first systematically examine and
characterize the underlying systems-level impact of design decisions across the
different levels of the execution stack. In this paper, we characterize eight
industry-representative deep recommendation models at three different levels of
the execution stack: algorithms and software, systems platforms, and hardware
microarchitectures. Through this cross-stack characterization, we first show
that system deployment choices (i.e., CPUs or GPUs, batch size granularity) can
yield up to a 15x speedup. To better understand the bottlenecks for further
optimization, we look at both software operator usage breakdown and CPU
frontend and backend microarchitectural inefficiencies. Finally, we model the
correlation between key algorithmic model architecture features and hardware
bottlenecks, revealing the absence of a single dominant algorithmic component
behind each hardware bottleneck.
|
We construct a special class of spacelike surfaces in the Minkowski 4-space
which are one-parameter systems of meridians of the rotational hypersurface
with lightlike axis and call these surfaces meridian surfaces of parabolic
type. They are analogous to the meridian surfaces of elliptic or hyperbolic
type. Using the invariants of these surfaces we give the complete
classification of the meridian surfaces of parabolic type with constant Gauss
curvature or constant mean curvature. We also classify the Chen meridian
surfaces of parabolic type and the meridian surfaces of parabolic type with
parallel normal bundle.
|
As a common step in refining their scientific inquiry, investigators are
often interested in performing some screening of a collection of given
statistical hypotheses. For example, they may wish to determine whether any one
of several patient characteristics is associated with a health outcome of
interest. Existing generic methods for testing a multivariate hypothesis --
such as multiplicity corrections applied to individual hypothesis tests -- can
easily be applied across a variety of problems but can suffer from low power in
some settings. Tailor-made procedures can attain higher power by building
around problem-specific information but typically cannot be easily adapted to
novel settings. In this work, we propose a general framework for testing a
multivariate point null hypothesis in which the test statistic is adaptively
selected to provide increased power. We present theoretical large-sample
guarantees for our test under both fixed and local alternatives. In simulation
studies, we show that tests created using our framework can perform as well as
tailor-made methods when the latter are available, and we illustrate how our
procedure can be used to create tests in two settings in which tailor-made
methods are not currently available.
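One simple instance of adaptive statistic selection (our own toy construction via sample splitting, not the authors' procedure) selects the most promising coordinate on one half of the data and tests the point null on the held-out half, so the selection does not invalidate the test's level:

```python
import numpy as np
from math import erf, sqrt

def adaptive_split_test(X, alpha=0.05):
    """Select the coordinate with the largest t-statistic on the first half
    of the sample, then test H0: mean = 0 for that coordinate on the
    held-out half (two-sided, normal approximation)."""
    n = X.shape[0]
    A, B = X[: n // 2], X[n // 2 :]
    t_A = np.abs(A.mean(axis=0)) / (A.std(axis=0, ddof=1) / np.sqrt(len(A)))
    j = int(np.argmax(t_A))                  # adaptively chosen coordinate
    t_B = B[:, j].mean() / (B[:, j].std(ddof=1) / np.sqrt(len(B)))
    p = 2 * (1 - 0.5 * (1 + erf(abs(t_B) / sqrt(2))))
    return j, p, p < alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
X[:, 3] += 0.5                               # one truly non-null coordinate
j, p, reject = adaptive_split_test(X)
```

Sample splitting is the bluntest way to buy validity after adaptive selection; the framework in the abstract aims at the same goal with better power and formal large-sample guarantees.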
|
The problem of finding the minimizer of a sum of convex functions is central
to the field of optimization. In cases where the functions themselves are not
fully known (other than their individual minimizers and convexity parameters),
it is of interest to understand the region containing the potential minimizers
of the sum based only on those known quantities. Characterizing this region in
the case of multivariate strongly convex functions is far more complicated than
the univariate case. In this paper, we provide both outer and inner
approximations for the region containing the minimizer of the sum of two
strongly convex functions, subject to a constraint on the norm of the gradient
at the minimizer of the sum. In particular, we explicitly characterize the
boundary and interior of both outer and inner approximations. Interestingly,
the boundaries as well as the interiors turn out to be identical and we show
that the boundary of the region containing the potential minimizers is also
identical to that of the outer and inner approximations.
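For the special case of isotropic quadratics (a worked example of ours, far simpler than the general region the paper characterizes), the minimizer of the sum is the convexity-weighted average of the individual minimizers:

```python
import numpy as np

def quadratic_sum_minimizer(x1, m1, x2, m2):
    """Minimizer of f1 + f2 for f_i(x) = (m_i / 2) * ||x - x_i||^2:
    setting the gradient m1*(x - x1) + m2*(x - x2) to zero gives the
    convexity-weighted average of the two minimizers."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    return (m1 * x1 + m2 * x2) / (m1 + m2)

x_star = quadratic_sum_minimizer([0.0, 0.0], 1.0, [4.0, 2.0], 3.0)
```

In this special case the minimizer always lies on the segment joining the two individual minimizers; for general strongly convex functions with the same parameters it can wander over the larger region whose boundary and interior the paper characterizes.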
|
In the framework of an extended bag model the magnetic moments, M1 transition
moments, and decay widths of all ground-state heavy hadrons are calculated. For
the heavy baryons containing three quarks of different flavors the effect of
hyperfine mixing of the states is taken into account. Additional care is
taken to get more accurate theoretical estimates for the mass splittings of
heavy hadrons. The use of such improved values enables one to provide more
accurate predictions for the decay widths. These values of the hyperfine
splittings between baryons may also be useful for further experimental
searches of new heavy hadrons. For instance, we predict
$M(\Xi_{cc}^{*})=3695\pm5$ MeV. The agreement of our results for the M1 decay
rates with available experimental data is good. We also present a wide
comparison of the predictions obtained in our work with the results obtained
using various other approaches.
|
Generic heterotic M-theory compactifications contain five-branes wrapping
non-isolated genus zero or higher genus curves in a Calabi-Yau threefold.
Non-perturbative superpotentials do not depend on moduli of such five-branes. We
show that fluxes and non-perturbative effects can stabilize them in a
non-supersymmetric AdS vacuum. We also show that these five-branes can be
stabilized in a dS vacuum, if we modify the supergravity potential energy by
Fayet-Iliopoulos terms. This allows us to stabilize all heterotic M-theory
moduli in a dS vacuum in the most general compactification scenarios. In
addition, we demonstrate that, by this modification, one can create an
inflationary potential. The inflationary phase is represented by a five-brane
approaching the visible brane. We give a qualitative argument for how extra
states becoming light, when the five-brane comes too close, can terminate
inflation.
Eventually, the five-brane hits the visible brane and disappears through a
small instanton transition. The post-inflationary system of moduli has simpler
stability properties. It can be stabilized in a dS vacuum with a small
cosmological constant.
|
From 16 years of INTEGRAL/SPI $\gamma$-ray observations, we derive bounds on
annihilating light dark matter particles in the halo of the Milky Way up to
masses of about 300 MeV. We test four different spatial templates for the dark
matter halo, including a Navarro-Frenk-White (NFW), Einasto, Burkert, and
isothermal sphere profile, as well as three different models for the underlying
diffuse Inverse Compton emission. We find that the bounds on the s-wave
velocity-averaged annihilation cross sections for both the electron-positron
and the photon-photon final states are the strongest to date from $\gamma$-ray
observations alone in the mass range $\lesssim 6$ MeV. We provide fitting
formulae for the upper limits and discuss their dependences on the halo
profile. The bounds on the two-photon final state supersede the limits
from the Cosmic Microwave Background in the range of 50 keV up to $\sim 3$ MeV,
showing the great potential future MeV missions will have in probing light dark
matter.
|
We show that a Nambu-Goto string has a nontrivial zero length limit which
corresponds to a massless particle with extrinsic curvature. The system has a
set of six first-class constraints, which restrict the phase space variables so
that the spin vanishes. Upon quantization, we obtain six conditions on the
state, which can be represented as a wave function of position coordinates,
$x^\mu$, and velocities, $q^\mu$. We have found a wave function $\psi(x,q)$
that turns out to be a general solution of the corresponding system of six
differential equations, if the dimensionality of spacetime is eight. Though
classically the system is just a point particle with vanishing extrinsic
curvature and spin, the quantized system is not trivial, because it is
consistent in eight, but not in arbitrary, dimensions.
|
This paper presents the derivation of an executable Krivine abstract machine
from a small step interpreter for the simply typed lambda calculus in the
dependently typed programming language Agda.
|
Volume or centrality fluctuations (CF) are one of the main uncertainties in
interpreting the centrality dependence of many experimental observables. The CF
is constrained by centrality selection based on particle multiplicity in a
reference subevent, and contributes to observables measured in another
subevent. Using a Glauber-based independent source model, we study the
influence of CF on several distributions of multiplicity $N$ and eccentricities
$\epsilon_n$: $p(N)$, $p(\epsilon_n)$, $p(\epsilon_n,\epsilon_m)$ and
$p(N,\epsilon_n)$, where the effects of CF are quantified using multi-particle
cumulants of these distributions. In mid-central collisions, a general relation
is established between the multiplicity fluctuation and resulting CF in the
reference subevent. In ultra-central collisions, where distribution of particle
production sources is strongly distorted, we find these cumulants exhibit rich
sign-change patterns, due to observable-dependent non-Gaussianity in the
underlying distributions. The details of the sign-change pattern change with the
size of the collision systems. Simultaneous comparison of these different types
of cumulants between model predictions and experimental data can be used to
constrain the CF and the particle production mechanism in heavy-ion collisions.
Since centrality and the CF are expected to fluctuate in the
longitudinal direction within a single event, we propose to use a
pseudorapidity-separated subevent cumulant method to explore the nature of
intra-event fluctuations of centrality and collective dynamics. The subevent
method can be applied for any bulk observable that is sensitive to centrality,
and has the potential to separate different mechanisms for multiplicity and
flow fluctuations happening at different time scales. The forward detector
upgrades at RHIC and LHC will greatly enhance such studies in the future.
|
This paper is devoted to Hardy inequalities concerning distance functions
from submanifolds of arbitrary codimensions in the Riemannian setting. On a
Riemannian manifold with non-negative curvature, we establish several sharp
weighted Hardy inequalities in the cases when the submanifold is compact as
well as non-compact. In particular, these inequalities remain valid even if the
ambient manifold is compact, in which case we find an optimal space of smooth
functions to study Hardy inequalities. Further examples are also provided. Our
results complement in several aspects those obtained recently in the Euclidean
and Riemannian settings.
|
We report low-temperature transport studies of parallel double quantum dots
formed in GaSb/InAsSb core-shell nanowires. At negative gate voltages, regular
patterns of Coulomb diamonds are observed in the charge stability diagrams,
which we ascribe to single-hole tunneling through a quantum dot in the GaSb
core. As the gate voltage increases, the measured charge stability diagram
indicates the appearance of an additional quantum dot, which we suggest is an
electron quantum dot formed in the InAsSb shell. We find that an electron-hole
interaction induces shifts of transport resonances in the source-drain voltage
from which an average electron-hole interaction strength of 2.9 meV is
extracted. We also carry out magnetotransport measurements of a hole quantum
dot in the GaSb core and extract level-dependent g-factors and a spin-orbit
interaction.
|
Motivated by a question of R.\ Nandakumar, we show that the Euclidean plane
can be dissected into mutually incongruent convex quadrangles of the same area
and the same perimeter. As a byproduct we obtain vertex-to-vertex dissections
of the plane by mutually incongruent triangles of unit area that are
arbitrarily close to the periodic vertex-to-vertex tiling by equilateral
triangles.
|
We observe a narrow enhancement near 2mp in the invariant mass spectrum of
ppbar pairs from radiative J/psi-->gamma ppbar decays. The enhancement can be
fit with either an S- or P-wave Breit-Wigner function. In the case of the S-wave
fit, the peak mass is below the 2mp threshold and the full width is less than
30 MeV. These mass and width values are not consistent with the properties of
any known meson resonance.
|
We show that infinitely differentiable solutions to parabolic and hyperbolic
equations, whose right-hand sides are analytical in time, are also analytical
in time at each fixed point in space. These solutions are given in the form
of the Taylor expansion with respect to time $t$ with coefficients depending on
$x$. The coefficients of the expansion are defined by recursion relations,
which are obtained from the condition of compatibility of order $k=\infty$. The
value of the solution on the boundary is defined by the right-hand side and
initial data, so that it is not prescribed. We show that exact regular and weak
solutions to the initial-boundary value problems for parabolic and hyperbolic
equations can be determined as the sum of a function that satisfies the
boundary conditions and the limit of the infinitely differentiable solutions
for smooth approximations of the data of the corresponding problem with zero
boundary conditions. These solutions are represented in the form of the Taylor
expansion with respect to $t$. The suggested me
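As a hedged illustration of such recursion relations (a standard textbook example of ours, not an equation from this paper), consider the heat equation $u_t = u_{xx} + f(x,t)$ with $u(x,0) = u_0(x)$. Substituting the Taylor expansion $u(x,t) = \sum_{k\ge 0} a_k(x)\, t^k/k!$ and matching powers of $t$ yields:

```latex
% u_t contributes a_{k+1}(x) t^k/k!, u_{xx} contributes a_k''(x) t^k/k!,
% and f contributes \partial_t^k f(x,0) t^k/k!, so
\begin{aligned}
a_0(x)     &= u_0(x), \\
a_{k+1}(x) &= a_k''(x) + \partial_t^k f(x,0), \qquad k \ge 0.
\end{aligned}
```

Each coefficient is thus determined by differentiating its predecessor, which is the generic shape of the recursion relations described in the abstract.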
|
We calculate the primordial black hole (PBH) mass spectrum produced from a
collapse of the primordial density fluctuations in the early Universe using, as
an input, several theoretical models giving the curvature perturbation power
spectra with large (~ 0.01 - 0.1) values at some scale of comoving wave numbers
k. In the calculation we take into account the explicit dependence of
gravitational (Bardeen) potential on time. Using the PBH mass spectra, we
further calculate the neutrino and photon energy spectra in extragalactic space
from evaporation of light PBHs, and the energy density fraction contained in
PBHs today (for heavier PBHs). We obtain the constraints on the model
parameters using available experimental data (including data on neutrino and
photon cosmic backgrounds). We briefly discuss the possibility that the
observed 511 keV line from the Galactic center is produced by annihilation of
positrons evaporated by PBHs.
|
We study a 2D measurement-only random circuit motivated by the Bacon-Shor
error correcting code. We find a rich phase diagram as one varies the relative
probabilities of measuring nearest neighbor Pauli XX and ZZ check operators. In
the Bacon-Shor code, these checks commute with a group of stabilizer and
logical operators, which therefore represent conserved quantities. Described as
a subsystem symmetry, these conservation laws lead to a continuous phase
transition between an X-basis and Z-basis spin glass order. The two phases are
separated by a critical point where the entanglement entropy between two halves
of an $L \times L$ system scales as $L \ln L$, a logarithmic violation of the area law.
We generalize to a model where the check operators break the subsystem
symmetries (and the Bacon-Shor code structure). In tension with established
heuristics, we find that the phase transition is replaced by a smooth
crossover, and the X- and Z-basis spin glass orders spatially coexist.
Additionally, if we approach the line of subsystem symmetries away from the
critical point in the phase diagram, some spin glass order parameters jump
discontinuously.
|
In this paper, charged black holes in general relativity coupled to
Born-Infeld electrodynamics are studied as gravitational lenses. The positions
and magnifications of the relativistic images are obtained using the strong
deflection limit, and the results are compared with those corresponding to a
Reissner-Nordstrom black hole with the same mass and charge. As numerical
examples, the model is applied to the supermassive Galactic center black hole
and to a small size black hole situated in the Galactic halo.
|
Brain representations of curvature may be formed on the basis of either
vision or touch. Experimental and theoretical work by the author and her
colleagues has shown that the processing underlying such representations
directly depends on specific two-dimensional geometric properties of the curved
object, and on the symmetry of curvature. Virtual representations of curves
with mirror symmetry were displayed in 2D on a computer screen to sighted
observers for visual scaling. For tactile (haptic) scaling, the physical
counterparts of these curves were placed in the two hands of sighted observers,
who were blindfolded during the sensing experiment, and of congenitally blind
observers, who never had any visual experience. All results clearly show that
curvature, whether haptically or visually sensed, is statistically linked to
the same curve properties. Sensation is expressed psychophysically as a power
function of any symmetrical curve's aspect ratio, a scale invariant geometric
property of physical objects. The results of the author's work support
biologically motivated models of sensory integration for curvature processing.
They also promote the idea of a universal power law for adaptive brain control
and balancing of motor responses to environmental stimuli across sensory
modalities.
|
Contribution: This paper identifies four critical ethical considerations for
implementing generative AI tools to provide automated feedback to students.
Background: Providing rich feedback to students is essential for supporting
student learning. Recent advances in generative AI, particularly with large
language models (LLMs), provide the opportunity to deliver repeatable, scalable
and instant automatically generated feedback to students, making abundant a
previously scarce and expensive learning resource. Such an approach is feasible
from a technical perspective due to these recent advances in Artificial
Intelligence (AI) and Natural Language Processing (NLP); while the potential
upside is a strong motivator, doing so introduces a range of potential ethical
issues that must be considered as we apply these technologies.
Intended Outcomes: The goal of this work is to enable the use of AI systems
to automate mundane assessment and feedback tasks, without introducing a
"tyranny of the majority", where the needs of minorities in the long tail are
overlooked because they are difficult to automate.
Application Design: This paper applies an extant ethical framework used for
AI and machine learning to the specific challenge of providing automated
feedback to student engineers. The task is considered from both a development
and maintenance perspective, considering how automated feedback tools will
evolve and be used over time.
Findings: This paper identifies four key ethical considerations for the
implementation of automated feedback for students: Participation, Development,
Impact on Learning and Evolution over Time.
|
The movement of the eyes has been the subject of intensive research as a way
to elucidate inner mechanisms of cognitive processes. A cognitive task that is
rather frequent in our daily life is the visual search for hidden objects. Here
we investigate through eye-tracking experiments the statistical properties
associated with the search of target images embedded in a landscape of
distractors. Specifically, our results show that the twofold process of eye
movement, composed of sequences of fixations (small steps) intercalated by
saccades (longer jumps), displays characteristic statistical signatures. While
the saccadic jumps follow a log-normal distribution of distances, which is
typical of multiplicative processes, the lengths of the smaller steps in the
fixation trajectories are consistent with a power-law distribution. Moreover,
the present analysis reveals a clear transition from a directional serial
search to an isotropic random movement as the difficulty level of the search
task is increased.
|
Knowledge of x-ray attenuation is essential for developing and evaluating
x-ray imaging technologies. For instance, techniques to distinguish between
cysts and solid tumours at mammography screening would be highly desirable to
reduce recalls, but the development requires knowledge of the x-ray attenuation
for cysts and tumours. We have previously measured the attenuation of cyst
fluid using photon-counting spectral mammography. Data on x-ray attenuation for
solid breast lesions are available in the literature, but cover a relatively
wide range, likely caused by natural spread between samples, random measurement
errors, and different experimental conditions. In this study, we have adapted
the previously developed spectral method to measure the linear attenuation of
solid breast lesions. A total of 56 malignant and 5 benign lesions were
included in the study. The samples were placed in a holder that allowed for
thickness measurement. Spectral (energy-resolved) images of the samples were
acquired and the image signal was mapped to equivalent thicknesses of two known
reference materials, which can be used to derive the x-ray attenuation as a
function of energy. The spread in equivalent material thicknesses was
relatively large between samples, which is likely to be caused mainly by
natural variation and only to a minor extent by random measurement errors and
sample inhomogeneity. No significant difference in attenuation was found
between benign and malignant solid lesions, or between different types of
malignant lesions. The separation between cyst-fluid and tumour attenuation
was, however, significant, which suggests it may be possible to distinguish
cystic from solid breast lesions, and the results lay the groundwork for a
clinical trial.
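The mapping from equivalent reference-material thicknesses to the sample's linear attenuation can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attenuation coefficient values, material choices, and energies are hypothetical placeholders, and only the additive-attenuation relation from the abstract is assumed.

```python
import numpy as np

# Illustrative (not measured) linear attenuation coefficients (1/cm) for two
# hypothetical reference materials at a few photon energies (keV).
energies = np.array([20.0, 25.0, 30.0])
mu_ref1 = np.array([0.80, 0.50, 0.38])   # e.g. a PMMA-like material
mu_ref2 = np.array([2.20, 1.25, 0.85])   # e.g. an aluminium-like material

def sample_attenuation(t1, t2, t_sample):
    """Derive the sample's linear attenuation mu(E) from its equivalent
    reference-material thicknesses t1, t2 (cm) and its measured physical
    thickness t_sample (cm), assuming attenuations add linearly."""
    return (t1 * mu_ref1 + t2 * mu_ref2) / t_sample

# A sample behaving like 0.9 cm of material 1 plus 0.1 cm of material 2.
mu_sample = sample_attenuation(t1=0.9, t2=0.1, t_sample=1.0)
```

With this decomposition, comparing lesions reduces to comparing their equivalent-thickness pairs, which is how the spread between samples is assessed.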
|
Assessment and reporting of skills is a central feature of many digital
learning platforms. With students often using multiple platforms,
cross-platform assessment has emerged as a new challenge. While technologies
such as Learning Tools Interoperability (LTI) have enabled communication
between platforms, reconciling the different skill taxonomies they employ has
not been solved at scale. In this paper, we introduce and evaluate a
methodology for finding and linking equivalent skills between platforms by
utilizing problem content as well as the platforms' clickstream data. We
propose six models to represent skills as continuous real-valued vectors and
leverage machine translation to map between skill spaces. The methods are
tested on three digital learning platforms: ASSISTments, Khan Academy, and
Cognitive Tutor. Our results demonstrate reasonable accuracy in skill
equivalency prediction from a fine-grained taxonomy to a coarse-grained one,
achieving an average recall@5 of 0.8 between the three platforms. Our skill
translation approach has implications for aiding in the tedious, manual process
of taxonomy to taxonomy mapping work, also called crosswalks, within the
tutoring as well as standardized testing worlds.
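The machine-translation step described above can be sketched with a linear map between skill-vector spaces, fit on a few known equivalent skill pairs and evaluated with recall@k. All embeddings here are synthetic stand-ins; the paper's six models produce the actual skill vectors, and the linear least-squares map is one simple choice of translation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skill embeddings: 10 source-platform skills in an 8-dim space,
# with synthetic target-platform counterparts related by an unknown linear map.
src = rng.normal(size=(10, 8))
W_true = rng.normal(size=(8, 8))
tgt = src @ W_true

# Learn a translation matrix from the anchor pairs via least squares,
# as in word-vector machine-translation mapping.
W, *_ = np.linalg.lstsq(src, tgt, rcond=None)

def translate(vec):
    """Map a source-platform skill vector into the target skill space."""
    return vec @ W

def recall_at_k(query, candidates, true_idx, k=5):
    """1 if the true equivalent skill ranks in the top-k candidates by
    cosine similarity to the translated query, else 0."""
    q = translate(query)
    sims = candidates @ q / (np.linalg.norm(candidates, axis=1)
                             * np.linalg.norm(q))
    return int(true_idx in np.argsort(-sims)[:k])
```

Averaging `recall_at_k` over held-out skill pairs gives the recall@5 metric reported in the abstract.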
|
Text style transfer refers to the task of rephrasing a given text in a
different style. While various methods have been proposed to advance the state
of the art, they often assume the transfer output follows a delta distribution,
and thus their models cannot generate different style transfer results for a
given input text. To address the limitation, we propose a one-to-many text
style transfer framework. In contrast to prior works that learn a one-to-one
mapping that converts an input sentence to one output sentence, our approach
learns a one-to-many mapping that can convert an input sentence to multiple
different output sentences, while preserving the input content. This is
achieved by applying adversarial training with a latent decomposition scheme.
Specifically, we decompose the latent representation of the input sentence to a
style code that captures the language style variation and a content code that
encodes the language style-independent content. We then combine the content
code with the style code for generating a style transfer output. By combining
the same content code with a different style code, we generate a different
style transfer output. Extensive experimental results with comparisons to
several text style transfer approaches on multiple public datasets using a
diverse set of performance metrics validate the effectiveness of the proposed
approach.
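The latent decomposition scheme can be illustrated with a toy sketch: a latent representation is split into a content code and a style code, and recombining one content code with different sampled style codes yields different outputs that share content. The encoder/decoder here are trivial slicing and concatenation for illustration only; in the framework they are learned networks trained adversarially.

```python
import numpy as np

rng = np.random.default_rng(1)
CONTENT_DIM, STYLE_DIM = 6, 2

def encode(latent):
    """Toy encoder: split a latent vector into content and style codes.
    (In the actual framework these come from learned encoders.)"""
    return latent[:CONTENT_DIM], latent[CONTENT_DIM:]

def decode(content, style):
    """Toy decoder: combine a content code and a style code into an output."""
    return np.concatenate([content, style])

latent = rng.normal(size=CONTENT_DIM + STYLE_DIM)
content, _ = encode(latent)

# One-to-many transfer: the same content code combined with two different
# sampled style codes produces two distinct outputs with identical content.
out_a = decode(content, rng.normal(size=STYLE_DIM))
out_b = decode(content, rng.normal(size=STYLE_DIM))
```

The key property is that the content halves of `out_a` and `out_b` coincide while their style halves differ, mirroring how the model generates multiple transfer results for one input.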
|