The rotated stripes are a consequence of the orthorhombic crystal lattice and
the isotropy of Coulomb repulsion between pairs of doped holes, residing at
oxygen lattice sites. With stripe slanting, the doped-hole pairs come closer to
equidistance than without. The slant ratio depends on the orthorhombicity $o$
as $s = \sqrt{o/2}$.
|
In this paper we study envy-free division problems. The classical approach to
such problems, used by David Gale, reduces to considering continuous maps of a
simplex to itself and finding sufficient conditions for such a map to hit the
center of the simplex. The mere continuity of the map is not sufficient to
reach this conclusion; classically, one makes additional assumptions on the
behavior of the map on the boundary of the simplex (as, for example, in the
Knaster--Kuratowski--Mazurkiewicz and Gale theorems).
We follow Erel Segal-Halevi, Fr\'ed\'eric Meunier, and Shira Zerbib, and
replace the boundary condition with another assumption, which has an economic
interpretation: a player may prefer an empty part in the segment partition
problem. We solve the problem positively when $n$, the number of players
dividing the segment, is a prime power, and we provide counterexamples for
every $n$ that is not a prime power. We also provide counterexamples relevant
to a wider class of fair or envy-free division problems when $n$ is odd and
not a prime power.
In this arXiv version, which appears after the official publication, we have
corrected the statement and the proof of Lemma 3.4.
|
To any block idempotent $b$ of a group algebra $kG$ of a finite group $G$
over a field $k$ of characteristic $p>0$, Puig associated a fusion system and
proved that it is saturated if the $k$-algebra $kC_G(P)e$ is split, where
$(P,e)$ is a maximal $kGb$-Brauer pair. We investigate how far the fusion
system is from being saturated in the non-split case, describing it explicitly
as generated by the fusion system of a related block idempotent over a larger
field together with a single automorphism of the defect group.
|
Conventional online social networks (OSNs) are implemented in a centralized
manner. Although centralization is a convenient way to implement OSNs, it has
several well-known drawbacks. Chief among them are the risks it poses to the
security and privacy of the information maintained by the OSN, and the loss of
control over the information contributed by individual members.
These concerns have prompted several attempts to create decentralized OSNs, or
DOSNs. The basic idea underlying these attempts is that each member of a
social network keeps its data under its own control, instead of surrendering
it to a central host, and provides access to it to other members of the OSN
according to its own access-control policy. Unfortunately, all existing DOSN
projects share a serious limitation: they are unable to subject the membership
of a DOSN, and the interaction between its members, to any global policy.
We adopt the decentralization idea underlying DOSNs and complement it with a
means for specifying and enforcing a wide range of policies over the
membership of a social community and over the interaction between its
disparate distributed members, and we do so in a scalable fashion.
|
Spectrum sensing is a crucial component of opportunistic spectrum access
schemes, which aim at improving spectrum utilization by allowing for the reuse
of idle licensed spectrum. Sensing a spectral band before using it ensures
that legitimate users are not disturbed. Since information about these users'
signals is not necessarily available, the sensor should be able to conduct
so-called blind spectrum sensing. Historically, this has not been a feature of
cyclostationarity-based algorithms. Indeed, in many application scenarios the
information required for traditional cyclostationarity detection might not be
available, hindering its practical applicability. In this work we propose two
new cyclostationary spectrum sensing algorithms that make use of the inherent
sparsity of the cyclic autocorrelation to make blind operation possible. Along
with utilizing sparse recovery methods for estimating the cyclic
autocorrelation, we take further advantage of its structure by introducing
joint sparsity as well as general structure dictionaries into the recovery
process. Furthermore, we extend a statistical test for cyclostationarity to
accommodate sparse cyclic spectra. Our numerical results demonstrate that the
new methods achieve a near constant false alarm rate behavior in contrast to
earlier approaches from the literature.
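As a minimal illustration of the sparsity being exploited here (a sketch, not
the authors' algorithm): the DFT of a lag-product sequence samples the cyclic
autocorrelation over cyclic frequencies, and soft-thresholding retains only the
few significant ones. The signal model, lag, and threshold are assumptions for
the example.

    import numpy as np

    def sparse_cyclic_autocorrelation(x, lag, threshold):
        """Estimate the cyclic autocorrelation of x at a fixed lag, then
        soft-threshold it to exploit its sparsity in cyclic frequency."""
        N = len(x) - lag
        # The DFT of the lag-product sequence samples the cyclic
        # autocorrelation on the cyclic-frequency grid k/N.
        prod = x[:N] * np.conj(x[lag:lag + N])
        R = np.fft.fft(prod) / N
        # Soft-thresholding keeps only significant cyclic frequencies.
        mag = np.maximum(np.abs(R), 1e-12)
        return np.maximum(1.0 - threshold / mag, 0.0) * R

    # Toy usage: a BPSK-like cyclostationary signal in noise.
    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=256).repeat(4)  # 4 samples/symbol
    x = symbols + 0.5 * rng.standard_normal(symbols.size)
    R = sparse_cyclic_autocorrelation(x, lag=1, threshold=0.05)
    print("nonzero cyclic frequencies:", np.count_nonzero(R))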
|
We have made extensive observations of 35 distant slow (non-recycled) pulsars
discovered in the ongoing Arecibo PALFA pulsar survey. Timing observations of
these pulsars over several years at Arecibo Observatory and Jodrell Bank
Observatory have yielded high-precision positions and measurements of rotation
properties. Despite being a relatively distant population, these pulsars have
properties that mirror those of the previously known pulsar population. Many of
the sources exhibit timing noise, and one underwent a small glitch. We have
used multifrequency data to measure the interstellar scattering properties of
these pulsars. We find scattering to be higher than predicted along some lines
of sight, particularly in the Cygnus region. Lastly, we present XMM-Newton and
Chandra observations of the youngest and most energetic of the pulsars,
J1856+0245, which has previously been associated with the GeV-TeV pulsar wind
nebula HESS J1857+026.
|
In this paper we are interested in multifractional stable processes, where the
self-similarity index $H$ is a function of time (in other words, $H$ becomes
time-varying) and the stability index $\alpha$ is a constant. Using
$\beta$-negative power variations ($-1/2<\beta<0$), we propose estimators for
the value of the multifractional function $H$ at a fixed time $t_0$ and for
$\alpha$ in two cases: multifractional Brownian motion ($\alpha=2$) and linear
multifractional stable motion ($0<\alpha<2$). We establish the consistency of
our estimators for the underlying processes, together with rates of
convergence.
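To illustrate the estimator family (a hedged sketch, not the paper's exact
statistic): since $E|X_{t+h}-X_t|^{\beta} \propto h^{\beta H(t_0)}$ near $t_0$,
localized $\beta$-power variations at two lags yield $H$. Window size and
$\beta$ below are illustrative choices.

    import numpy as np

    def local_power_variation(X, t0_idx, window, lag, beta):
        """Localized beta-power variation of increments of X around t0_idx."""
        lo = max(t0_idx - window, 0)
        hi = min(t0_idx + window, len(X) - lag)
        inc = np.abs(X[lo + lag:hi + lag] - X[lo:hi])
        return np.mean(inc ** beta)

    def estimate_H(X, t0_idx, window=200, beta=-0.25):
        """Ratio of power variations at lags 1 and 2 gives the local H."""
        v1 = local_power_variation(X, t0_idx, window, lag=1, beta=beta)
        v2 = local_power_variation(X, t0_idx, window, lag=2, beta=beta)
        return np.log(v2 / v1) / (beta * np.log(2.0))

    # Sanity check on ordinary Brownian motion (H = 1/2, alpha = 2).
    rng = np.random.default_rng(1)
    B = np.cumsum(rng.standard_normal(10_000)) / 100.0
    print(estimate_H(B, t0_idx=5_000))  # roughly 0.5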
|
The construction of optimal linear block error-correcting codes is not an easy
problem; for this reason, many studies describe methods for generating good
error-correcting codes in terms of minimum distance. In a previous work, we
presented the multiple impulse method (MIM) to evaluate the minimum distance
of linear codes. In this paper we present an optimization of the MIM by
genetic algorithms, and we find many new optimal double and triple circulant
codes (DCC & TCC) with the highest known parameters, using the MIM as an
evaluator of the minimum distance. Two approaches are used to explore the
space of generators: the first is based on genetic algorithms, while the
second relies on random search.
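Since the MIM evaluator is not reproduced here, the following sketch
substitutes a random-codeword upper-bound estimate of the minimum distance; it
only illustrates a genetic search over first rows of the circulant block of a
double circulant generator [I | A]. All parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    def circulant(row):
        return np.array([np.roll(row, i) for i in range(len(row))], dtype=np.uint8)

    def estimated_min_distance(first_row, trials=500):
        # Stand-in for the MIM evaluator: random nonzero messages give an
        # upper-bound estimate of d_min for the code with generator [I | A].
        n = len(first_row)
        G = np.concatenate([np.eye(n, dtype=np.uint8), circulant(first_row)], axis=1)
        best = G.shape[1]
        for _ in range(trials):
            m = rng.integers(0, 2, size=n, dtype=np.uint8)
            if m.any():
                best = min(best, int((m @ G % 2).sum()))
        return best

    def genetic_search(n=12, pop=20, gens=25, mut=0.08):
        population = rng.integers(0, 2, size=(pop, n), dtype=np.uint8)
        for _ in range(gens):
            fitness = np.array([estimated_min_distance(p) for p in population])
            parents = population[np.argsort(-fitness)[:pop // 2]]  # elitism
            cuts = rng.integers(1, n, size=pop // 2)               # crossover
            children = np.array([np.concatenate([parents[i][:c],
                                                 parents[(i + 1) % len(parents)][c:]])
                                 for i, c in enumerate(cuts)])
            children ^= (rng.random(children.shape) < mut).astype(np.uint8)
            population = np.concatenate([parents, children])
        fitness = np.array([estimated_min_distance(p) for p in population])
        return population[fitness.argmax()], int(fitness.max())

    row, d = genetic_search()
    print("best first row:", row, "estimated d_min:", d)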
|
The distance-dependence of the anisotropic atom-wall interaction is studied.
The central result is the 1/z^6 quadrupolar anisotropy decay in the retarded
Casimir-Polder regime. Analysis of the transition region between non-retarded
van der Waals regime (in 1/z^3) and Casimir-Polder regime shows that the
anisotropy cross-over occurs at very short distances from the surface, on the
order of 0.03 Lambda, where Lambda is the atom characteristic wavelength.
Possible experimental verifications of this distance dependence are discussed.
|
Recently, unsupervised exemplar-based image-to-image translation, conditioned
on a given exemplar without paired data, has accomplished substantial
advancements. In order to transfer the information from an exemplar to an
input image, existing methods often use a normalization technique, e.g.,
adaptive instance normalization, that controls the channel-wise statistics of
an input activation map at a particular layer, such as the mean and the
variance. Meanwhile, style transfer approaches, which address a task similar
in nature to image translation, have demonstrated superior performance by
using higher-order statistics such as the covariance among channels to
represent a style. In detail, they work via whitening (given a zero-mean input
feature, transforming its covariance matrix into the identity), followed by
coloring (changing the covariance matrix of the whitened feature to that of
the style feature). However, applying this approach to image translation is
computationally intensive and error-prone due to its expensive time complexity
and non-trivial backpropagation. In response, this paper proposes an
end-to-end approach tailored for image translation that efficiently
approximates this transformation with our novel regularization methods. We
further extend our approach to a group-wise form for memory and time
efficiency as well as image quality. Extensive qualitative and quantitative
experiments demonstrate that our proposed method is fast, both in training and
inference, and highly effective in reflecting the style of an exemplar.
Finally, our code is available at https://github.com/WonwoongCho/GDWCT.
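For reference, a minimal NumPy sketch of the exact whitening-coloring
transform that the paper approximates (the regularization-based approximation
itself is not reproduced here); shapes and values are illustrative.

    import numpy as np

    def whiten_and_color(content, style, eps=1e-5):
        """Whitening-coloring transform on flattened feature maps.
        content, style: arrays of shape (C, H*W)."""
        c = content - content.mean(axis=1, keepdims=True)
        s = style - style.mean(axis=1, keepdims=True)

        # Whitening: map the content covariance to the identity.
        Uc, ec, _ = np.linalg.svd(c @ c.T / c.shape[1] + eps * np.eye(c.shape[0]))
        whitened = Uc @ np.diag(ec ** -0.5) @ Uc.T @ c

        # Coloring: impose the style covariance on the whitened features.
        Us, es, _ = np.linalg.svd(s @ s.T / s.shape[1] + eps * np.eye(s.shape[0]))
        colored = Us @ np.diag(es ** 0.5) @ Us.T @ whitened

        # Re-add the style mean, as in adaptive normalization schemes.
        return colored + style.mean(axis=1, keepdims=True)

    rng = np.random.default_rng(3)
    out = whiten_and_color(rng.standard_normal((64, 32 * 32)),
                           rng.standard_normal((64, 32 * 32)))
    print(out.shape)  # (64, 1024)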
|
We present a method to calculate the soft function in Soft-Collinear
Effective Theory to NLO for N-jet events, defined with respect to arbitrarily
complicated observables and algorithms, using a subtraction-based method. We
show that at one loop the singularity structure of all observable/algorithm
combinations can be classified as one of two types. Type I jets include jets
defined with inclusive algorithms for which a jet shape is measured. Type II
jets include jets found with exclusive algorithms, as well as jets for which
only the direction and energy are measured. Cross sections that are inclusive
over a certain region of phase space, such as the forward region at a hadron
collider, also fall into the Type II category. We show that for a large class of
measurements the required subtractions are already known analytically,
including traditional jet shape measurements at hadron colliders. We
demonstrate our method by calculating the soft functions for the case of jets
defined in eta-phi space with an out-of-jet pT cut and a rapidity cut on the
jets, as well as for the case of 1-jettiness.
|
We suggest a construction of a large class of topological states using an
array of quantum wires. First, we show how to construct a Chern insulator using
an array of alternating wires that contain electrons and holes, correlated with
an alternating magnetic field. This is supported by semi-classical arguments
and a full quantum mechanical treatment of an analogous tight-binding model. We
then show how electron-electron interactions can stabilize fractional Chern
insulators (Abelian and non-Abelian). In particular, we construct a relatively
stable non-Abelian $\mathbb{Z}_{3}$ parafermion state. Our construction is
generalized to wires with alternating spin-orbit couplings, which give rise to
integer and fractional (Abelian and non-Abelian) topological insulators. The
states we construct are effectively two-dimensional, and are therefore less
sensitive to disorder than one-dimensional systems. The possibility of
experimental realization of our construction is addressed.
|
We argue that for a Higgs boson mass M_H ~125 GeV, as suggested by recent
Higgs searches at the LHC, the inclusion of electroweak radiative corrections
in the relationship between the pole and MS-bar masses of the top quark reduces
the difference to about 1 GeV. This is relevant for the scheme dependence of
electroweak observables, such as the rho parameter, as well as for the
extraction of the top quark mass from experimental data. In fact, the value
currently extracted by reconstructing the invariant mass of the top quark decay
products is expected to be close to the pole mass, while the analysis of the
total cross section of top quark pair production yields a clean determination
of the MS-bar mass.
|
There have been many proposals in the literature for algorithms that segment
human whole-body motion. However, the wide range of use cases, datasets, and
quality measures used for evaluation makes comparing these algorithms
challenging. In this paper, we introduce a framework that puts motion
segmentation algorithms on a unified testing ground and makes it possible to
compare them. The testing ground features both a set of quality measures known
from the literature and a novel approach tailored to the evaluation of motion
segmentation algorithms, termed the Integrated Kernel approach. Datasets of
motion recordings, provided with a ground truth, are included as well. They
are labelled in a new way that hierarchically organises the ground truth, to
cover the different use cases that segmentation algorithms can address. The
framework and datasets are publicly available and are intended as a service to
the community for the comparison and evaluation of existing and new motion
segmentation algorithms.
|
In this paper, we describe all traces for the BCH star-product on the dual of
a Lie algebra. First we show by an elementary argument that the BCH as well as
the Kontsevich star-product are strongly closed if and only if the Lie algebra
is unimodular. In a next step we show that the traces of the BCH star-product
are given by the $\ad$-invariant functionals. Particular examples are the
integration over coadjoint orbits. We show that for a compact Lie group and a
regular orbit one can even achieve that this integration becomes a positive
trace functional. In this case we explicitly describe the corresponding GNS
representation. Finally we discuss how invariant deformations on a group can
be used to induce deformations of spaces on which the group acts.
|
We study the coarsening dynamics of two and three dimensional biaxial nematic
liquid crystals, using Langevin dynamics. Unlike previous work, we use a model
with no a priori relationship among the three elastic constants associated with
director deformations. We find a rich variety of coarsening behavior, including
the simultaneous decay of nearly equal populations of the three classes of
half-integer disclination lines. The behavior we observed can be understood on
the basis of the relative values of the elastic constants and the resulting
decay channels of the defects.
|
Patient monitoring in intensive care units, although assisted by biosensors,
needs continuous supervision of staff. To reduce the burden on staff members,
IT infrastructures are built to record monitoring data and develop clinical
decision support systems. These systems, however, are vulnerable to artifacts
(e.g. muscle movement due to ongoing treatment), which are often
indistinguishable from real and potentially dangerous signals. Video recordings
could facilitate the reliable classification of biosignals using object
detection (OD) methods to find sources of unwanted artifacts. Due to privacy
restrictions, only blurred videos can be stored, which severely impairs the
possibility to detect clinically relevant events such as interventions or
changes in patient status with standard OD methods. Hence, given the reduced
information content of blurred footage, new kinds of approaches are necessary
that exploit every kind of available information and that are at the same time
easily implementable within the IT infrastructure of a normal hospital. In
this paper, we propose a new method for exploiting information in the temporal
succession of video frames. To be efficiently implementable using off-the-shelf
object detectors that comply with given hardware constraints, we repurpose the
image color channels to account for temporal consistency, leading to an
improved detection rate of the object classes. Our method outperforms a
standard YOLOv5 baseline model by +1.7% while also training over ten times
faster on our proprietary dataset. We conclude that this approach has
shown effectiveness in the preliminary experiments and holds potential for more
general video OD in the future.
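A hedged sketch of one way to repurpose the color channels as described (the
paper's exact channel assignment is not specified here): three consecutive
grayscale frames are stacked into the three input channels of an off-the-shelf
detector, so temporal context rides along for free.

    import numpy as np

    def stack_temporal_channels(frames):
        """Encode three consecutive grayscale frames as one 3-channel image
        so a standard object detector sees temporal context.
        frames: sequence of >= 3 grayscale frames, each (H, W)."""
        f0, f1, f2 = frames[-3], frames[-2], frames[-1]
        return np.stack([f0, f1, f2], axis=-1)  # (H, W, 3), newest last

    # Toy usage with a synthetic video buffer.
    video = [np.random.rand(480, 640).astype(np.float32) for _ in range(5)]
    x = stack_temporal_channels(video)
    print(x.shape)  # (480, 640, 3) -- feed to e.g. a YOLO-style detector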
|
We give a uniform asymptotic bound for the number of zeros of complete
Abelian integrals in domains bounded away from infinity and the singularities.
|
In this note we show that the configuration spaces of the kinematic systems
constructed in [4] and [12] give rise to a natural tower of sphere bundles.
Moreover, we prove that to each tower of projective bundles associated with
special multi-flags (cf. [1], [13], [2], [3]) we can associate such a tower of
sphere bundles which is a two-fold covering of the previous one. In
particular, we give a positive answer to a conjecture proposed in [3].
|
The importance of data hiding in the field of Information Technology is widely
accepted. The challenge is to pass information in such a manner that the very
existence of the message is unknown, in order to repel the attention of
potential attackers. Steganography is a technique that has been widely used to
achieve this objective. However, steganography is often found lacking when it
comes to hiding bulk data. Hiding data in a video overcomes this problem
because of the large size of the cover object (a video) compared to an image.
This paper proposes a scheme by which data can be hidden in a video. We focus
on the Triangularization method and make use of the Least Significant Bit
(LSB) technique to hide messages in a video.
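A minimal sketch of the plain LSB embedding step (the Triangularization method
itself is paper-specific and not reproduced): message bits replace the least
significant bits of frame pixels, leaving the frame visually unchanged.

    import numpy as np

    def embed_lsb(frame, message):
        """Hide message bytes in the LSBs of a uint8 video frame.
        Capacity is frame.size // 8 bytes."""
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        flat = frame.reshape(-1).copy()
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(frame.shape)

    def extract_lsb(frame, n_bytes):
        bits = frame.reshape(-1)[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
    stego = embed_lsb(frame, b"secret")
    print(extract_lsb(stego, 6))  # b'secret'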
|
Word embedding parameters often dominate overall model sizes in neural
methods for natural language processing. We reduce deployed model sizes of text
classifiers by learning a hard word clustering in an end-to-end manner. We use
the Gumbel-Softmax distribution to maximize over the latent clustering while
minimizing the task loss. We propose variations that selectively assign
additional parameters to words, which further improves accuracy while still
remaining parameter-efficient.
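A minimal NumPy sketch of the Gumbel-Softmax sampling used to make the hard
cluster assignment differentiable; the cluster count and temperature below are
illustrative, not the paper's settings.

    import numpy as np

    def gumbel_softmax(logits, temperature=0.5, rng=None):
        """Draw a differentiable, near-one-hot sample over clusters."""
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(1e-12, 1.0, size=logits.shape)
        g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
        y = (logits + g) / temperature
        y = np.exp(y - y.max(axis=-1, keepdims=True))
        return y / y.sum(axis=-1, keepdims=True)

    # Each word row holds logits over, say, 16 shared clusters; at low
    # temperature the sample concentrates on a single cluster embedding.
    word_logits = np.random.randn(5, 16)
    assignments = gumbel_softmax(word_logits, temperature=0.1)
    print(assignments.argmax(axis=-1), assignments.max(axis=-1))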
|
Five different texture methods are used to investigate their susceptibility to
subtle noise occurring in lung tumor Computed Tomography (CT) images, caused
by acquisition and reconstruction deficiencies. Noise of Gaussian and Rayleigh
distributions with varying mean and variance was encountered in the analyzed
CT images. Fisher and Bhattacharyya distance measures were used to
differentiate between an original extracted lung tumor region of interest
(ROI) and its filtered and noisy reconstructed versions. By examining the
texture characteristics of the lung tumor areas with five different texture
measures, it was determined that the autocovariance measure was the least
affected, and the gray level co-occurrence matrix the most affected, by noise.
Depending on the selected ROI size, it was concluded that increasing the
number of features extracted from each texture measure increases
susceptibility to noise.
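For concreteness, a sketch of the Fisher distance between a texture feature
computed on the original ROI and on a noisy version; the feature values below
are synthetic stand-ins, not data from the study.

    import numpy as np

    def fisher_distance(a, b):
        """Fisher criterion between two 1-D feature samples: separation of
        the means relative to the within-class scatter."""
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    rng = np.random.default_rng(5)
    original = rng.normal(100.0, 5.0, size=500)        # clean-ROI feature
    noisy = original + rng.normal(0.0, 8.0, size=500)  # after Gaussian noise
    print(fisher_distance(original, noisy))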
|
We use analytic computations to predict the power spectrum as well as the
bispectrum of Cosmic Infrared Background (CIB) anisotropies. Our approach is
based on the halo model and takes into account the mean luminosity-mass
relation. The model is used to forecast the possibility to simultaneously
constrain cosmological, CIB and halo occupation distribution (HOD) parameters
in the presence of foregrounds. For the analysis we use eight frequency
channels between 200 and 900$\;\mathrm{GHz}$, with survey specifications given
by Planck and LiteBird. We explore the sensitivity to the
model parameters up to multipoles of $\ell =1000$ using auto- and
cross-correlations between the different frequency bands. With this setting,
cosmological, HOD and CIB parameters can be constrained to a few percent.
Galactic dust is modeled by a power law and the shot noise contribution as a
frequency dependent amplitude which are marginalized over. We find that dust
residuals in the CIB maps only marginally influence constraints on standard
cosmological parameters. Furthermore, the bispectrum yields tighter constraints
(by a factor four in $1\sigma$ errors) on almost all model parameters while the
degeneracy directions are very similar to the ones of the power spectrum. The
increase in sensitivity is most pronounced for the sum of the neutrino masses.
Due to the similarity of the degeneracies, a combination of both analyses is
not needed for most parameters. This, however, might be due to the simplified
bias description generally adopted in such halo model approaches.
|
We compute the one loop corrected effective Lagrangian for the
neutralino-neutralino-neutral Higgs interactions $\chi^{0}_{\ell}
\chi^{0}_kH^{0}_m$. The analysis completes the previous analyses where similar
corrections were computed for the $\bar{f} f H^{0}_m$ couplings, where $f$
stands for Standard Model quarks and leptons and for the
chargino-chargino-neutral Higgs couplings $\chi^{+}_l \chi^{-}_kH^{0}_m$ within
the minimal supersymmetric standard model (MSSM). The effective one loop
Lagrangian is then applied to the computation of the neutral Higgs decays. The
sizes of the supersymmetric loop corrections of the neutral Higgs decay widths
into $\chi^{0}_{\ell} \chi^{0}_k$ (${\ell}=1,2,3,4$; $k=1,2,3,4$) are
investigated, and the supersymmetric loop correction is found to be in the
range of $\sim 10\%$ in significant regions of the parameter space. By including the
loop corrections of the other decay channels $\bar{b} b$, $\bar{t} t$,
$\bar{\tau} \tau$, $\bar{c} c$, and $\chi^{-}_i \chi^{+}_j$ ($i=1,2$; $j=1,2$),
the corrections to branching ratios for $H^{0}_m\to \chi^{0}_{\ell} \chi^{0}_k$
can reach as high as 50%. The effects of CP phases on the branching ratio are
also investigated. A discussion of the implications of the analysis for
colliders is given.
|
We consider the surface states of a spherical topological insulator
piecewise-terminated by superconductivity or ferromagnetism over various
regions of the spherical surface. Such terminations gap the surface states by
breaking U(1) particle-number symmetry or time-reversal symmetry, respectively.
Interfaces and trijunctions between differently terminated surface regions can
host propagating and bound Majorana modes, and the finite size of the spherical
system makes it easily amenable to numerical analysis via exact diagonalization
of the Bogoliubov-de Gennes Hamiltonian within a truncated Hilbert space.
Creative termination patterning therefore allows one to prototype a variety of
Majorana circuits, calculating energy spectra and plotting eigenfunctions over
the spherical surface. We develop the computational framework for this
approach, establishing a virtual breadboard for Majorana circuitry, and apply
it to circuits of interest, including the Majorana analog of a Mach-Zehnder
interferometer.
|
The radiative leptonic $B\to \gamma\ell\nu_{\ell}$ decay serves as an ideal
platform to determine the $B$-meson inverse moment which is a fundamental
nonperturbative parameter for the $B$ meson. In this paper, we explore precise
QCD contributions to this decay with an energetic photon. We reproduce the
next-to-next-to-leading-logarithmic resummation formula for the decay amplitude
at leading power in $\Lambda_{\rm QCD}/m_b$. Employing operator identities, we
calculate subleading-power contributions from the expansion of the
hard-collinear propagator of the internal up quark and the heavy-quark
expansion of the bottom quark. We update the contributions from the hadronic
structure of the photon to the $\decay$ process with the dispersion technique.
Together with other yet known power corrections, phenomenological applications
including the partial branching fraction and ratio of the branching fractions
of the radiative $B$ decay are investigated.
|
We report on nonlocal transport in superconductor hybrid structures, with
ferromagnetic as well as normal-metal tunnel junctions attached to the
superconductor. In the presence of a strong Zeeman splitting of the density of
states, both charge and spin imbalance are injected into the superconductor.
While previous experiments demonstrated spin injection from ferromagnetic
electrodes, we show that spin imbalance is also created for normal-metal
injector contacts. Using the combination of ferromagnetic and normal-metal
detectors allows us to directly discriminate between charge and spin injection,
and demonstrate a complete separation of charge and spin imbalance. The
relaxation length of the spin imbalance is of the order of several $\mu$m and
is found to increase with a magnetic field, but is independent of temperature.
We further discuss possible relaxation mechanisms for the explanation of the
spin relaxation length.
|
An introduction to the theory of sliding window detection processes, used as
alternatives to optimal Neyman-Pearson based radar detectors, is presented.
Included is an outline of their historical development, together with an
explanation for the resurgence of interest in such detectors for operation in
modern maritime surveillance radar clutter. In particular, recent research has
developed criteria that enable one to construct such detection processes with
the desired constant false alarm rate property for a comprehensive class of
clutter models. The chapter also includes some examples of the performance of
such detectors in a specific interference environment.
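As a concrete example of a sliding window detector with the CFAR property, a
cell-averaging CFAR sketch for square-law (exponentially distributed) clutter;
the window sizes and false-alarm rate are illustrative.

    import numpy as np

    def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-3):
        """Cell-averaging CFAR over a 1-D power series x. The threshold
        factor keeps the false-alarm probability constant regardless of
        the (exponential) clutter power level."""
        N = 2 * n_train
        alpha = N * (pfa ** (-1.0 / N) - 1.0)  # classical CA-CFAR constant
        detections = np.zeros(x.size, dtype=bool)
        k = n_train + n_guard
        for i in range(k, x.size - k):
            leading = x[i - k:i - n_guard]
            lagging = x[i + n_guard + 1:i + k + 1]
            noise = (leading.sum() + lagging.sum()) / N
            detections[i] = x[i] > alpha * noise
        return detections

    rng = np.random.default_rng(6)
    clutter = rng.exponential(1.0, size=1000)
    clutter[500] += 40.0                       # an embedded target
    print(np.flatnonzero(ca_cfar(clutter)))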
|
Given a suitable functor T:C -> D between model categories, we define a long
exact sequence relating the homotopy groups of any X in C with those of TX, and
use this to describe an obstruction theory for lifting an object G in D to C.
Examples include finding spaces with given homology or homotopy groups.
|
Recent promises of generative deep learning have brought interest to its
potential uses in neural engineering. In this paper we first review recently
emerging studies on generating artificial electroencephalography (EEG) signals
with deep neural networks. Subsequently, we present our feasibility
experiments on generating condition-specific multichannel EEG signals using
conditional variational autoencoders. By manipulating real resting-state EEG
epochs, we present an approach to synthetically generate time-series
multichannel signals that show the spectro-temporal EEG patterns expected to
be observed during distinct motor imagery conditions.
|
We introduce the concept of Magic Subspaces for the control of dissipative
N-level quantum systems whose dynamics are governed by the Lindblad equation.
For a given purity, these subspaces can be defined as the set of density
matrices for which the rate of purity change is maximum or minimum. Adding
fictitious control fields to the system so that two density operators with the
same purity can be connected in a very short time, we show that magic
subspaces allow us to derive a purity speed limit which depends only on the
relaxation rates. We emphasize the superiority of this limit with respect to
established bounds and its tightness in the case of a two-level dissipative
quantum system. The link between the speed limit and the corresponding
time-optimal solution is discussed in the framework of this study. Explicit
examples are described for two- and three-level quantum systems.
|
With the rise of Transformers and Large Language Models (LLMs) in Chemistry
and Biology, new avenues for the design and understanding of therapeutics have
opened up to the scientific community. Protein sequences can be modeled as
language and can take advantage of recent advances in LLMs, especially given
the abundance of accessible protein sequence datasets. In this paper, we
developed the GPCR-BERT model for understanding the sequential design of G
Protein-Coupled Receptors (GPCRs). GPCRs are the target of over one-third of
FDA-approved pharmaceuticals. However, there is a lack of comprehensive
understanding of the relationship between amino acid sequence, ligand
selectivity, and conformational motifs (such as NPxxY, CWxP, and E/DRY). By
utilizing the pre-trained protein model (Prot-Bert) and fine-tuning it on
prediction tasks for variations in the motifs, we were able to shed light on
several relationships between residues in the binding pocket and some of the
conserved motifs. To achieve this, we interpreted the attention weights and
hidden states of the model to extract the extent to which individual amino
acids contribute to dictating the type of the masked ones. The fine-tuned
models demonstrated high accuracy in predicting hidden residues within the
motifs. In addition, an analysis of the embeddings was performed over 3D
structures to elucidate the higher-order interactions within the conformations
of the receptors.
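A hedged sketch of the masked-residue prediction setup, assuming the Hugging
Face transformers library and the public Rostlab/prot_bert checkpoint rather
than the paper's fine-tuned GPCR weights; the sequence and masked position are
illustrative, and the checkpoint is downloaded on first run.

    import torch
    from transformers import BertTokenizer, BertForMaskedLM

    # Public ProtBert checkpoint used as a stand-in for the paper's model.
    tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert",
                                              do_lower_case=False)
    model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert").eval()

    # ProtBert expects space-separated residues; mask one motif position
    # (the sequence below is purely illustrative).
    seq = "M N P [MASK] X Y L V A"
    inputs = tokenizer(seq, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Rank candidate residues for the masked position.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    top = logits[0, mask_pos].topk(5).indices.tolist()
    print(tokenizer.convert_ids_to_tokens(top))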
|
From the axiom of the unrestricted repeatability of all experiments, Bondi
and Gold argued that the universe is in a stable, self-perpetuating equilibrium
state. This concept generalizes the usual cosmological principle to the perfect
cosmological principle in which the universe looks the same from any location
at any time. Consequently, I hypothesize that the universe is static and in an
equilibrium state (non-evolving).
New physics is proposed based on the concept that the universe is a pure wave
system. Based on the new physics and assuming a static universe, processes are
derived for the Hubble redshift and the cosmic background radiation field.
Then, following the scientific method, I test deductions of the static
universe hypothesis using precise observational data primarily from the Hubble
Space Telescope. Applying four different global tests of the space-time metric,
I find that the observational data consistently fit the static universe model.
The observational data also show that the average absolute magnitudes and
physical radii of first-rank elliptical galaxies have not changed over the last
5 to 15 billion years.
Because the static universe hypothesis is a logical deduction from the
perfect cosmological principle and the hypothesis is confirmed by the
observational data, I conclude that the universe is static and in an
equilibrium state.
|
The Workflow Satisfiability Problem (WSP) asks whether there exists an
assignment of authorized users to the steps in a workflow specification,
subject to certain constraints on the assignment. The problem is NP-hard even
when restricted to just not-equals constraints. Since the number of steps $k$
is relatively small in practice, Wang and Li (2010) introduced a
parametrisation of WSP by $k$. Wang and Li (2010) showed that, in general, the
WSP is W[1]-hard, i.e., it is unlikely that there exists a fixed-parameter
tractable (FPT) algorithm for solving the WSP. Crampton et al. (2013) and Cohen
et al. (2014) designed FPT algorithms of running time $O^*(2^{k})$ and
$O^*(2^{k\log_2 k})$ for the WSP with so-called regular and user-independent
constraints, respectively. In this note, we show that there are no algorithms
of running time $O^*(2^{ck})$ and $O^*(2^{ck\log_2 k})$ for the two
restrictions of WSP, respectively, with any $c<1$, unless the Strong
Exponential Time Hypothesis fails.
|
Computing the autotopism group of a partial Latin rectangle can be performed
in a variety of ways. This pilot study has two aims: (a) to compare these
methods experimentally, and (b) to identify the design goals one should have in
mind for developing practical software. To this end, we compare six families of
algorithms (two backtracking methods and four graph automorphism methods), with
and without the use of entry invariants, on two test suites. We consider two
entry invariants: one determined by the frequencies of row, column, and symbol
representatives, and one determined by $2 \times 2$ submatrices. We find: (a)
with very few entries, many symmetries often exist, and these should be
identified mathematically rather than computationally, (b) with an intermediate
number of entries, a quick-to-compute entry invariant was effective at reducing
the need for computation, (c) with an almost-full partial Latin rectangle, more
sophisticated entry invariants are needed, and (d) the performance for (full)
Latin squares is significantly poorer than other partial Latin rectangles of
comparable size, obstructed by the existence of Latin squares with large
(possibly transitive) autotopism groups.
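For illustration, the first entry invariant mentioned above can be sketched as
follows: each entry of a partial Latin rectangle, given as (row, column,
symbol) triples, is tagged by the frequencies of its row, column, and symbol,
a cheap label that any autotopism must preserve. The example data is made up.

    from collections import Counter

    def frequency_invariant(entries):
        """Tag each (row, col, sym) entry with how often its row, column,
        and symbol occur among the filled cells; entries can only be mapped
        by an autotopism to entries carrying the same tag."""
        rows = Counter(r for r, _, _ in entries)
        cols = Counter(c for _, c, _ in entries)
        syms = Counter(s for _, _, s in entries)
        return {(r, c, s): (rows[r], cols[c], syms[s]) for r, c, s in entries}

    P = [(0, 0, 1), (0, 1, 2), (1, 0, 2), (2, 2, 1)]
    for entry, tag in frequency_invariant(P).items():
        print(entry, "->", tag)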
|
We consider proton stability in an E_6 inspired extra U(1) model with S_4
flavor symmetry. In this model, a long proton lifetime is realized by the
flavor symmetry. One interesting effect of the flavor symmetry is that the
decay widths of p -> mu^+ X are suppressed by cancellation. This suppression
mechanism is important when the Yukawa coupling constants are hierarchical.
Our model predicts that p -> e^+ K^0 has a larger decay width than
p -> mu^+ K^0.
|
A new parallelized unsplit geometrical Volume of Fluid (VoF) algorithm with
support for arbitrary unstructured meshes and dynamic local Adaptive Mesh
Refinement (AMR), as well as for two and three dimensional computation is
developed. The geometrical VoF algorithm supports arbitrary unstructured meshes
in order to enable computations involving flow domains of arbitrary geometrical
complexity. The implementation of the method is done within the framework of
the OpenFOAM library for Computational Continuum Mechanics (CCM) using the C++
programming language with modern policy based design for high program code
modularity. The development of the geometrical VoF algorithm significantly
extends the method base of the OpenFOAM library by geometrical volumetric flux
computation for two-phase flow simulations.
For the volume fraction advection, a novel unsplit geometrical algorithm is
developed, which inherently sustains volume conservation utilizing unique
Lagrangian discrete trajectories located in the mesh points. This practice
completely eliminates the possibility of an overlap between the flux polyhedra
and hence significantly improves volume conservation. A new, efficient
(quadratically convergent) and accurate iterative flux correction algorithm is
developed, which avoids topological changes of the flux polyhedra. Our
geometrical VoF algorithm is dimension agnostic, providing automatic support
for both 2D and 3D computations, following the established practice in
OpenFOAM. The geometrical algorithm used for the volume fraction transport has
been extended to support dynamic local AMR available in OpenFOAM. Furthermore,
the existing dynamic mesh capability of OpenFOAM has been modified to support
the geometrical mapping algorithm executed as a part of the dynamic local AMR
cycle. The method implementation is fully parallelized using the domain
decomposition approach.
|
A simple connection between the universal $R$ matrix of $U_q(sl(2))$ (for
spins $\frac{1}{2}$ and $J$) and the required form of the co-product action of the
Hilbert space generators of the quantum group symmetry is put forward. This
gives an explicit operator realization of the co-product action on the
covariant operators. It allows us to derive the quantum group covariance of the
fusion and braiding matrices, although it is of a new type: the generators
depend upon worldsheet variables, and obey a new central extension of
$U_q(sl(2))$ realized by (what we call) fixed point commutation relations. This
is explained by showing that the link between the algebra of field
transformations and that of the co-product generators is much weaker than
previously thought. The central charges of our extended $U_q(sl(2))$ algebra,
which includes the Liouville zero-mode momentum in a nontrivial way are related
to Virasoro-descendants of unity. We also show how our approach can be used to
derive the Hopf algebra structure of the extended quantum-group symmetry
$U_q(sl(2))\odot U_{\qhat}(sl(2))$ related to the presence of both of the
screening charges of 2D gravity.
|
This paper gives an arbitrage-free prediction for future prices of an
arbitrary co-terminal set of options with a given maturity, based on the
observed time series of these option prices. The statistical analysis of such a
multi-dimensional time series of option prices corresponding to $n$ strikes
(with $n$ large, e.g. $n\geq 40$) and the same maturity, is a difficult task
due to the fact that option prices at any moment in time satisfy non-linear and
non-explicit no-arbitrage restrictions. Hence any $n$-dimensional time series
model also has to satisfy these implicit restrictions at each time step, a
condition that is impossible to meet since the model innovations can take
arbitrary values. We solve this problem for any $n\in\mathbb{N}$ in the context of
Foreign Exchange (FX) by first encoding the option prices at each time step in
terms of the parameters of the corresponding risk-neutral measure and then
performing the time series analysis in the parameter space. The option price
predictions are obtained from the predicted risk-neutral measure by effectively
integrating it against the corresponding option payoffs. The non-linear
transformation between option prices and the risk-neutral parameters applied
here is \textit{not} arbitrary: it is the standard mapping used by market
makers in the FX option markets (the SABR parameterisation) and is given
explicitly in closed form. Our method is not restricted to the FX asset class
nor does it depend on the type of parameterisation used. Statistical analysis
of FX market data illustrates that our arbitrage-free predictions outperform
the naive random walk forecasts, suggesting a potential for building management
strategies for portfolios of derivative products, akin to the ones widely used
in the underlying equity and futures markets.
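For reference, a sketch of the standard Hagan et al. (2002) lognormal SABR
implied-volatility approximation, i.e., the closed-form market mapping
referred to above; the parameter values in the usage example are illustrative.

    import numpy as np

    def sabr_implied_vol(F, K, T, alpha, beta, rho, nu):
        """Hagan et al. (2002) lognormal SABR implied-vol approximation."""
        common = ((1.0 - beta) ** 2 * alpha ** 2
                  / (24.0 * (F * K) ** (1.0 - beta))
                  + rho * beta * nu * alpha
                  / (4.0 * (F * K) ** ((1.0 - beta) / 2.0))
                  + (2.0 - 3.0 * rho ** 2) * nu ** 2 / 24.0)
        if np.isclose(F, K):
            return alpha / F ** (1.0 - beta) * (1.0 + common * T)
        logFK = np.log(F / K)
        fk_mid = (F * K) ** ((1.0 - beta) / 2.0)
        z = (nu / alpha) * fk_mid * logFK
        xz = np.log((np.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho)
                    / (1.0 - rho))
        pre = alpha / (fk_mid * (1.0
                                 + (1.0 - beta) ** 2 * logFK ** 2 / 24.0
                                 + (1.0 - beta) ** 4 * logFK ** 4 / 1920.0))
        return pre * (z / xz) * (1.0 + common * T)

    # Usage: map a fitted parameter vector back to a vol smile.
    strikes = np.linspace(0.8, 1.2, 9)
    print([round(sabr_implied_vol(1.0, K, 1.0, 0.2, 0.5, -0.3, 0.4), 4)
           for K in strikes])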
|
We propose a scheme for preparation of large-scale entangled $W$ states based
on the fusion mechanism via quantum Zeno dynamics. By sending two atoms
belonging to an $n$-atom $W$ state and an $m$-atom $W$ state, respectively,
into a vacuum cavity (or two separate cavities), we may obtain a ($n+m-2$)-atom
$W$ state via detecting the two-atom state after interaction. The present
scheme is robust against both spontaneous emission of atoms and decay of
cavity, and the feasibility analysis indicates that it can also be realized in
experiment.
|
Transitioning from fossil fuels to renewable energy sources is a critical
global challenge; it demands advances at the levels of materials, devices, and
systems for the efficient harvesting, storage, conversion, and management of
renewable energy. Researchers globally have begun incorporating machine
learning (ML) techniques with the aim of accelerating these advances. ML
technologies leverage statistical trends in data to build models for prediction
of material properties, generation of candidate structures, optimization of
processes, among other uses; as a result, they can be incorporated into
discovery and development pipelines to accelerate progress. Here we review
recent advances in ML-driven energy research, outline current and future
challenges, and describe what is required moving forward to best lever ML
techniques. To start, we give an overview of key ML concepts. We then introduce
a set of key performance indicators to help compare the benefits of different
ML-accelerated workflows for energy research. We discuss and evaluate the
latest advances in applying ML to the development of energy harvesting
(photovoltaics), storage (batteries), conversion (electrocatalysis), and
management (smart grids). Finally, we offer an outlook of potential research
areas in the energy field that stand to further benefit from the application of
ML.
|
We investigate the local properties of Berkovich spaces over Z. Using
Weierstrass theorems, we prove that the local rings of those spaces are
noetherian, regular in the case of affine spaces and excellent. We also show
that the structure sheaf is coherent. Our methods work over other base rings
(valued fields, discrete valuation rings, rings of integers of number fields,
etc.) and provide a unified treatment of complex and p-adic spaces.
|
We study the problem of finding monotone subsequences in an array from the
viewpoint of sublinear algorithms. For fixed $k \in \mathbb{N}$ and
$\varepsilon > 0$, we show that the non-adaptive query complexity of finding a
length-$k$ monotone subsequence of $f \colon [n] \to \mathbb{R}$, assuming that
$f$ is $\varepsilon$-far from free of such subsequences, is $\Theta((\log
n)^{\lfloor \log_2 k \rfloor})$. Prior to our work, the best algorithm for this
problem, due to Newman, Rabinovich, Rajendraprasad, and Sohler (2017), made
$(\log n)^{O(k^2)}$ non-adaptive queries; and the only lower bound known, of
$\Omega(\log n)$ queries for the case $k = 2$, followed from that on testing
monotonicity due to Erg\"un, Kannan, Kumar, Rubinfeld, and Viswanathan (2000)
and Fischer (2004).
|
Human motion in the vicinity of a wireless link causes variations in the link
received signal strength (RSS). Device-free localization (DFL) systems, such as
variance-based radio tomographic imaging (VRTI), use these RSS variations in a
static wireless network to detect, locate and track people in the area of the
network, even through walls. However, intrinsic motion, such as branches moving
in the wind and rotating or vibrating machinery, also causes RSS variations
which degrade the performance of a DFL system. In this paper, we propose and
evaluate two estimators to reduce the impact of the variations caused by
intrinsic motion. One estimator uses subspace decomposition, and the other
estimator uses a least squares formulation. Experimental results show that both
estimators reduce localization root mean squared error by about 40% compared to
VRTI. In addition, the Kalman filter tracking results from both estimators have
97% of errors less than 1.3 m, more than 60% improvement compared to tracking
results from VRTI.
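A minimal sketch of the subspace-decomposition idea (not the paper's exact
estimator): project the link RSS measurements off their dominant SVD modes,
assumed here to capture intrinsic motion, and attribute the residual to human
motion. The rank and the synthetic data are illustrative.

    import numpy as np

    def remove_intrinsic_subspace(R, rank=2):
        """R: (links, time) matrix of RSS variations. Returns the residual
        after removing the rank strongest SVD modes."""
        Rc = R - R.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
        intrinsic = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
        return Rc - intrinsic  # residual attributed to human motion

    rng = np.random.default_rng(7)
    links, T = 30, 200
    sway = np.outer(rng.standard_normal(links),
                    np.sin(np.linspace(0, 20, T)))     # intrinsic motion
    human = np.zeros((links, T))
    human[5, 90:110] = 3.0                             # brief motion, link 5
    cleaned = remove_intrinsic_subspace(
        5 * sway + human + 0.1 * rng.standard_normal((links, T)))
    print(np.unravel_index(np.abs(cleaned).argmax(), cleaned.shape))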
|
In recent years, Community Question Answering (CQA) has emerged as a popular
platform for knowledge curation and archival. An interesting aspect of question
answering is that it combines aspects from natural language processing,
information retrieval, and machine learning. In this paper, we have explored
how the depth of the neural network influences the accuracy of prediction of
deleted questions in question-answering forums. We have used different shallow
and deep models for prediction and analyzed the relationships between number of
hidden layers, accuracy, and computational time. The results suggest that while
deep networks perform better than shallow networks in modeling complex
non-linear functions, increasing the depth may not always produce desired
results. We observe that the performance of a deep neural network suffers
significantly from vanishing gradients when a large number of hidden layers is
present. Steadily increasing the depth of the model increases accuracy
initially, after which the accuracy plateaus, and finally drops. Adding each
layer is also expensive in terms of the time required to train the model. This
research is situated in the domain of neural information retrieval and
contributes towards building a theory on how deep neural networks can be
efficiently and accurately used for predicting question deletion. We predict
deleted questions with more than 90\% accuracy using two to ten hidden layers,
with less accurate results for shallower and deeper architectures.
|
Multicellular organisms exhibit a high degree of structural organization with
specific cell types always occurring in characteristic locations. The
conventional framework for describing the emergence of such consistent spatial
patterns is provided by Wolpert's "French flag" paradigm. According to this
view, intra-cellular genetic regulatory mechanisms use positional information
provided by morphogen concentration gradients to differentially express
distinct fates, resulting in a characteristic pattern of differentiated cells.
However, recent experiments have shown that suppression of inter-cellular
interactions can alter these spatial patterns, suggesting that cell fates are
not exclusively determined by the regulation of gene expression by local
morphogen concentration. Using an explicit model where adjacent cells
communicate by Notch signaling, we provide a mechanistic description of how
contact-mediated interactions allow information from the cellular environment
to be incorporated into cell fate decisions. Viewing cellular differentiation
in terms of trajectories along an epigenetic landscape (as first enunciated by
Waddington), our results suggest that the contours of the landscape are moulded
differently in a cell position-dependent manner, not only by the global signal
provided by the morphogen but also by the local environment via cell-cell
interactions. We show that our results are robust with respect to different
choices of coupling between the inter-cellular signaling apparatus and the
intra-cellular gene regulatory dynamics. Indeed, we show that the broad
features can be observed even in abstract spin models. Our work reconciles
interaction-mediated self-organized pattern formation with boundary-organized
mechanisms involving signals that break symmetry.
|
The conventional wisdom suggests that the transport of conserved quantities in
non-integrable quantum many-body systems at high temperatures is diffusive.
However, we discover a counterexample to this paradigm by uncovering anomalous
hydrodynamics in a spin-1/2 XXZ chain with power-law couplings. This model,
classified as non-integrable due to its Wigner-Dyson level-spacing statistics
in the random matrix theory, exhibits a surprising
superdiffusive-ballistic-superdiffusive transport transition by varying the
power-law exponent of couplings for a fixed anisotropy. Our findings are
verified by multiple observables, including the spin-spin autocorrelator,
mean-square displacement, and spin conductivity. Interestingly, we further
quantify the degree of quantum chaos using the Kullback-Leibler divergence
between the entanglement entropy distributions of the model's eigenstates and a
random state. Remarkably, an observed local maximum in the divergence near the
transition boundary suggests a link between anomalous hydrodynamics and a
suppression of quantum chaos. This work offers a deeper understanding of
emergent anomalous transport phenomena in a wider range of non-integrable
quantum many-body systems.
|
The theoretical description of transport in a wide class of novel materials
is based upon quantum percolation and related random resistor network (RRN)
models. We examine the localization properties of electronic states of diverse
two-dimensional quantum percolation models using exact diagonalization in
combination with kernel polynomial expansion techniques. Employing the local
distribution approach we determine the arithmetically and geometrically
averaged densities of states in order to distinguish extended, current carrying
states from localized ones. To get further insight into the nature of
eigenstates of RRN models we analyze the probability distribution of the local
density of states in the whole parameter and energy range. For a recently
proposed RRN representation of graphene sheets we discuss leakage effects.
|
An efficient perturbational treatment of spin-orbit coupling within the
framework of high-level multi-reference techniques has been implemented in the
most recent version of the COLUMBUS quantum chemistry package, extending the
existing fully variational two-component (2c) multi-reference configuration
interaction singles and doubles (MRCISD) method. The proposed scheme follows
related implementations of quasi-degenerate perturbation theory (QDPT) model
space techniques. Our model space is built either from uncontracted,
large-scale scalar relativistic MRCISD wavefunctions or based on the
scalar-relativistic solutions of the linear-response-theory-based
multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC).
The latter approach allows for a consistent, approximatively size-consistent
and size-extensive treatment of spin-orbit coupling. The approach is described
in detail and compared to a number of related techniques. The inherent accuracy
of the QDPT approach is validated by comparing cuts of the potential energy
surfaces of acrolein and its S, Se, and Te analogues with the corresponding
data obtained from matching fully variational spin-orbit MRCISD calculations.
The conceptual availability of approximate analytic gradients with respect to
geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and
2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular
dynamics simulations.
|
We present full polarization Very Long Baseline Array (VLBA) observations at
5 GHz and 15 GHz of 24 compact active galactic nuclei (AGN). These sources were
observed as part of a pilot project to demonstrate the feasibility of
conducting a large VLBI survey to further our understanding of the physical
properties and temporal evolution of AGN jets. The sample is drawn from the
Cosmic Lens All-Sky Survey (CLASS) where it overlaps with the Sloan Digital Sky
Survey at declinations north of 15 degrees. There are 2100 CLASS sources
brighter than 50 mJy at 8.4 GHz, of which we have chosen 24 for this pilot
study. All 24 sources were detected and imaged at 5 GHz with a typical dynamic
range of 500:1, and 21 of 24 sources were detected and imaged at 15 GHz. Linear
polarization was detected in 8 sources at both 5 and 15 GHz, allowing for the
creation of Faraday rotation measure (RM) images. The core RMs for the sample
were found to have an average absolute value of 390 +/- 100 rad/m^2. We also
present the discovery of a new Compact Symmetric Object, J08553+5751. All data
were processed automatically using pipelines created or adapted for the survey.
|
In aspect-level sentiment classification (ASC), state-of-the-art models encode
either a syntax graph or a relation graph to capture local syntactic
information or global relational information, respectively. Despite the
advantages of syntax and relation graphs, each has its own shortcomings, which
are often neglected and which limit the representation power of the graph
modeling process. To resolve their limitations, we design a novel local-global
interactive graph, which marries their advantages by stitching the two graphs
together via interactive edges. To
model this local-global interactive graph, we propose a novel neural network
termed DigNet, whose core module is the stacked local-global interactive (LGI)
layers performing two processes: intra-graph message passing and cross-graph
message passing. In this way, the local syntactic and global relational
information can be reconciled as a whole in understanding the aspect-level
sentiment. Concretely, we design two variants of local-global interactive
graphs with different kinds of interactive edges and three variants of LGI
layers. We conduct experiments on several public benchmark datasets and the
results show that we outperform previous best scores by 3\%, 2.32\%, and 6.33\%
in terms of Macro-F1 on Lap14, Res14, and Res15 datasets, respectively,
confirming the effectiveness and superiority of the proposed local-global
interactive graph and DigNet.
|
Clustering problems have numerous applications and are becoming more
challenging as the size of the data increases. In this paper, we consider
designing clustering algorithms that can be used in MapReduce, the most popular
programming environment for processing large datasets. We focus on the
practical and popular clustering problems, $k$-center and $k$-median. We
develop fast clustering algorithms with constant factor approximation
guarantees. From a theoretical perspective, we give the first analysis that
shows several clustering algorithms are in $\mathcal{MRC}^0$, a theoretical
MapReduce class introduced by Karloff et al. \cite{KarloffSV10}. Our algorithms
use sampling to decrease the data size, and then run a time-consuming
clustering algorithm, such as local search or Lloyd's algorithm, on the
resulting data set.
Our algorithms have sufficient flexibility to be used in practice since they
run in a constant number of MapReduce rounds. We complement these results by
performing experiments using our algorithms. We compare the empirical
performance of our algorithms to several sequential and parallel algorithms for
the $k$-median problem. The experiments show that our algorithms' solutions are
similar to or better than the other algorithms' solutions. Furthermore, on data
sets that are sufficiently large, our algorithms are faster than the other
parallel algorithms that we tested.
|
State-of-the-art Graph Neural Networks (GNNs) have limited scalability with
respect to the graph and model sizes. On large graphs, increasing the model
depth often means exponential expansion of the scope (i.e., receptive field).
Beyond just a few layers, two fundamental challenges emerge: 1. degraded
expressivity due to oversmoothing, and 2. expensive computation due to
neighborhood explosion. We propose a design principle to decouple the depth and
scope of GNNs -- to generate representation of a target entity (i.e., a node or
an edge), we first extract a localized subgraph as the bounded-size scope, and
then apply a GNN of arbitrary depth on top of the subgraph. A properly
extracted subgraph consists of a small number of critical neighbors, while
excluding irrelevant ones. The GNN, no matter how deep it is, smooths the local
neighborhood into informative representation rather than oversmoothing the
global graph into "white noise". Theoretically, decoupling improves the GNN
expressive power from the perspectives of graph signal processing (GCN),
function approximation (GraphSAGE) and topological learning (GIN). Empirically,
on seven graphs (with up to 110M nodes) and six backbone GNN architectures, our
design achieves significant accuracy improvement with orders of magnitude
reduction in computation and hardware cost.
|
Recommender systems build user profiles using concept analysis of usage
matrices. The concepts are mined as spectra and form Galois connections.
Descent is a general method for spectral decomposition in algebraic geometry
and topology which also leads to generalized Galois connections. Both
recommender systems and descent theory are vast research areas, separated by a
technical gap so large that trying to establish a link would seem foolish. Yet
a formal link emerged, all on its own, bottom-up, against authors' intentions
and better judgment. Familiar problems of data analysis led to a novel solution
in category theory. The present paper arose from a series of earlier efforts to
provide a top-down account of these developments.
|
In this study, we introduce an explicit trading-volume process into the
Almgren-Chriss model, which is a standard model for optimal execution. We
propose a penalization method for deriving a verification theorem for an
adaptive optimization problem. We also discuss the optimality of the
volume-weighted average-price strategy of a risk-neutral trader. Moreover, we
derive a second-order asymptotic expansion of the optimal strategy and verify
its accuracy numerically.
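For background, a sketch of the classical Almgren-Chriss optimal execution
trajectory that this paper extends (the explicit trading-volume process and
the asymptotic expansion are not reproduced here). The continuous-time urgency
parameter and all numeric values are illustrative.

    import numpy as np

    def almgren_chriss_trajectory(X, T, n, sigma, eta, lam):
        """Classical Almgren-Chriss optimal holdings: sell X shares over
        [0, T], trading off impact cost (eta) against risk aversion (lam).
        x(t) = X * sinh(kappa * (T - t)) / sinh(kappa * T)."""
        kappa = np.sqrt(lam * sigma ** 2 / eta)  # urgency parameter
        t = np.linspace(0.0, T, n + 1)
        return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

    x = almgren_chriss_trajectory(X=1e6, T=1.0, n=10,
                                  sigma=0.3, eta=1e-6, lam=1e-6)
    print(np.round(x).astype(int))  # holdings decay from X to 0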
|
A consistent description of the pi0 pi0 invariant mass distribution in the
phi(1020) -> f0(980) gamma -> pi0 pi0 gamma decay and of the pi0 eta
distribution in phi(1020) -> a0(980) gamma -> pi0 eta gamma is suggested. A
search for the consequences of flavor SU(3) symmetry for the scalar mesons can
be based on such an analysis. In order to accurately treat the pseudoscalar
meson dynamics, which is very important for the scalar meson decays, we employ
Resonance Chiral Theory.
|
Leeches are fascinating creatures: they have simple, modular nervous
circuitry, yet exhibit a rich spectrum of behavioural modes. Leeches could be
ideal blueprints for designing flexible soft robots that are modular,
multi-functional, fault-tolerant, easy to control, capable of navigating using
optical, mechanical and chemical sensorial inputs, and that have autonomous
inter-segmental coordination and adaptive decision-making. With future designs
of leech-robots in mind, we study how leeches behave in geometrically
constrained spaces. The core results of the paper deal with leeches exploring
a row of rooms arranged along a narrow corridor. In laboratory experiments we
find that rooms closer to the ends of the corridor are explored by leeches
more often than rooms in the middle of the corridor. Also, in a series of
scoping experiments, we evaluate the leeches' capability to navigate in mazes
towards sources of vibration and chemo-attraction. We believe our results lay
the foundation for future developments of robots mimicking the behaviour of
leeches.
|
Combining data from several case-control genome-wide association (GWA)
studies can yield greater efficiency for detecting associations of disease with
single nucleotide polymorphisms (SNPs) than separate analyses of the component
studies. We compared several procedures to combine GWA study data both in terms
of the power to detect a disease-associated SNP while controlling the
genome-wide significance level, and in terms of the detection probability
($\mathit{DP}$). The $\mathit{DP}$ is the probability that a particular
disease-associated SNP will be among the $T$ most promising SNPs selected on
the basis of low $p$-values. We studied both fixed effects and random effects
models in which associations varied across studies. In settings of practical
relevance, meta-analytic approaches that focus on a single degree of freedom
had higher power and $\mathit{DP}$ than global tests such as summing chi-square
test-statistics across studies, Fisher's combination of $p$-values, and forming
a combined list of the best SNPs from within each study.
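As a rough illustration of the contrast between a single-degree-of-freedom
meta-analysis and a global test, the sketch below combines hypothetical
per-study log-odds ratios for one SNP by inverse-variance weighting, and also
computes Fisher's combination of $p$-values; all numbers are invented and the
paper's DP calculations are not reproduced.

    import numpy as np
    from scipy import stats

    def fixed_effects_meta(beta, se):
        """Inverse-variance weighted fixed-effects combination of per-study
        log-odds ratios: a single-degree-of-freedom test."""
        beta, se = np.asarray(beta), np.asarray(se)
        w = 1.0 / se**2
        beta_hat = np.sum(w * beta) / np.sum(w)
        z = beta_hat * np.sqrt(np.sum(w))
        return beta_hat, 2 * stats.norm.sf(abs(z))   # two-sided p-value

    def fisher_combination(pvalues):
        """Global test: Fisher's combination of per-study p-values."""
        stat = -2 * np.sum(np.log(pvalues))
        return stats.chi2.sf(stat, df=2 * len(pvalues))

    beta = [0.20, 0.15, 0.25]   # hypothetical log-odds ratios, three studies
    se = [0.08, 0.10, 0.09]     # hypothetical standard errors
    print(fixed_effects_meta(beta, se))
    print(fisher_combination([0.010, 0.120, 0.004]))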
|
In the setting of subFinsler Carnot groups, we consider curves that satisfy
the normal equation coming from the Pontryagin Maximum Principle. We show that,
unless it is constant, each such curve leaves every compact set,
quantitatively. Namely, the distance between the points at time 0 and time $t$
grows at least of the order of $t^{1/s}$, where $s$ denotes the step of the
Carnot group. In particular, in subFinsler Carnot groups there are no periodic
normal geodesics.
|
This paper proposes a novel deep subspace clustering approach which uses
convolutional autoencoders to transform input images into new representations
lying on a union of linear subspaces. The first contribution of our work is to
insert multiple fully-connected linear layers between the encoder layers and
their corresponding decoder layers to promote learning more favorable
representations for subspace clustering. These connection layers facilitate the
feature learning procedure by combining low-level and high-level information
for generating multiple sets of self-expressive and informative representations
at different levels of the encoder. Moreover, we introduce a novel loss
minimization problem which leverages an initial clustering of the samples to
effectively fuse the multi-level representations and recover the underlying
subspaces more accurately. The loss function is then minimized through an
iterative scheme which alternately updates the network parameters and
produces new clusterings of the samples. Experiments on four real-world
datasets demonstrate that our approach exhibits superior performance compared
to the state-of-the-art methods on most of the subspace clustering problems.
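A minimal sketch of the self-expressive principle behind such models, assuming
latent codes Z taken from one level of the encoder; the layer name, the squared
penalty and the weight lam are illustrative choices, not the paper's exact
architecture or loss.

    import torch

    class SelfExpressiveLayer(torch.nn.Module):
        """Learns coefficients C with zero diagonal so that each latent code
        is reconstructed from the others: Z ~ C @ Z."""
        def __init__(self, n_samples):
            super().__init__()
            self.C = torch.nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

        def forward(self, Z):
            C = self.C - torch.diag(torch.diag(self.C))  # zero the diagonal
            return C @ Z, C

    def self_expressive_loss(Z, Z_hat, C, lam=1.0):
        # Latent-space reconstruction plus a regularity penalty on C.
        return torch.sum((Z - Z_hat) ** 2) + lam * torch.sum(C ** 2)

    Z = torch.randn(100, 32)               # codes from one encoder level
    layer = SelfExpressiveLayer(n_samples=100)
    Z_hat, C = layer(Z)
    self_expressive_loss(Z, Z_hat, C).backward()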
|
We present multi-epoch VLBA imaging of the 28SiO v=1 & v=2, J=1-0 maser
emission toward the massive YSO Orion Source I. Both SiO transitions were
observed simultaneously with an angular resolution of ~0.5 mas (~0.2 AU for
d=414 pc). Here we explore the global properties and kinematics of the emission
through two 19-epoch animated movies spanning 21 months (2001 March 19 to 2002
December 10). These movies provide the most detailed view to date of the
dynamics and temporal evolution of molecular material within ~20-100 AU of a
massive (~>8M_sun) YSO. The bulk of the SiO masers surrounding Source I lie in
an X-shaped locus; emission in the South/East arms is predominantly blueshifted
and in the North and West is predominantly redshifted. In addition, bridges of
intermediate-velocity emission connect the red and blue sides of the emission
distribution. We have measured proper motions of >1000 maser features and find
a combination of radially outward migrations along the four arms and motions
tangent to the bridges. We interpret the SiO masers as arising from a
wide-angle bipolar wind emanating from a rotating, edge-on disk. The detection
of maser features along extended, curved filaments suggests that magnetic
fields may play a role in launching and/or shaping the wind. Our observations
appear to support a picture in which stars with M ~>8 M_sun form via
disk-mediated accretion. However, we cannot rule out that the Source I disk may
have been formed or altered following a close encounter. (Abridged).
|
The next generation 2 micron sky survey should target nascent galaxies in the
epoch of reionization for spectroscopic followup on large telescopes. A 2.5
metre telescope at a site on the antarctic plateau has advantages for this
purpose and for southern hemisphere infrared surveys generally.
|
The progesterone receptor (PR) mediates progesterone regulation of female
reproductive physiology, as well as gene transcription in non-reproductive
tissues, such as brain, bone, lung and vasculature, in both women and men. An
unusual property of progesterone is its high affinity for the mineralocorticoid
receptor (MR), which regulates electrolyte transport in the kidney in humans
and other terrestrial vertebrates. In humans, rats, alligators and frogs,
progesterone antagonizes activation of the MR by aldosterone, the physiological
mineralocorticoid in terrestrial vertebrates. In contrast, in elephant shark,
ray-finned fishes and chickens, progesterone activates the MR. Interestingly,
cartilaginous fishes and ray-finned fishes do not synthesize aldosterone,
raising the question of which steroid(s) activate the MR in cartilaginous
fishes and ray-finned fishes. The simpler synthesis of progesterone, compared
to cortisol and other corticosteroids, makes progesterone a candidate
physiological activator of the MR in elephant sharks and ray-finned fishes.
Elephant shark and ray-finned fish MRs are expressed in diverse tissues,
including heart, brain and lung, as well as, ovary and testis, two reproductive
tissues that are targets for progesterone, which together suggests a
multi-faceted physiological role for progesterone activation of the MR in
elephant shark and ray-finned fish. The functional consequences of progesterone
as an antagonist of some terrestrial vertebrate MRs and as an agonist of fish
and chicken MRs are not fully understood. Indeed, little is known of
physiological activities of progesterone via any vertebrate MR.
|
We introduce a new information-theoretic complexity measure $IC_\infty$ for
2-party functions which is a lower-bound on communication complexity, and has
the two leading lower-bounds on communication complexity as its natural
relaxations: (external) information complexity ($IC$) and the logarithm of
partition complexity ($\text{prt}$), which have so far appeared conceptually
quite different from each other. $IC_\infty$ is an external information
complexity measure based on R\'enyi mutual information of order infinity. In
the definition of $IC_\infty$, relaxing the order of R\'enyi mutual information
from infinity to 1 yields $IC$, while $\log \text{prt}$ is obtained by
replacing protocol transcripts with what we term "pseudotranscripts," which
omits the interactive nature of a protocol, but only requires that the
probability of any transcript given the inputs $x$ and $y$ to the two parties,
factorizes into two terms which depend on $x$ and $y$ separately. Further
understanding $IC_\infty$ might have consequences for important direct-sum
problems in communication complexity, as it lies between communication
complexity and information complexity.
We also show that applying both the above relaxations simultaneously to
$IC_\infty$ gives a complexity measure that is lower-bounded by the (log of)
relaxed partition complexity, a complexity measure introduced by Kerenidis et
al. (FOCS 2012). We obtain a sharper connection between (external) information
complexity and relaxed partition complexity than Kerenidis et al., using an
arguably more direct proof.
|
The breaking of the $CP$ symmetry in $D^0$ meson decays has been awaited for
a long time. A set of measurements provided by the LHCb, CDF, and Belle
Collaborations led in March 2012 to combined results that were consistent
with no $CP$ violation at a CL of only $0.006\%$, suggesting $CP$ violation at
the $\sim1\%$ level. Such a potentially large value of $CP$ violation in charm
decays triggered widespread interest from the whole particle physics
community in evaluating the implications of such an interesting and unexpected
result. However, a more recent combination of more up-to-date results, in March
2013, has slightly changed the situation, showing that data are consistent with
the $CP$-conserving hypothesis at a $2.1\%$ CL. I briefly review the method used
by the various Collaborations when extracting the quantity $\Delta A_{CP}$ and
the relative results. Finally I discuss the need for additional measurements,
and present the potential of a time-dependent analysis when looking for $CP$
violation in $D^0$ decays and how this can be used to largely improve the
current sensitivity on the mixing phase $\phi_{MIX}$.
|
A Smarandache multi-space is a union of $n$ different spaces equipped with
some different structures for an integer $n\geq 2$, which can be used for both
discrete and connected spaces, particularly for geometries and spacetimes in
theoretical physics. This is the 4th part of multi-spaces considering
applications of multi-spaces to theoretical physics, including the relativity
theory, the M-theory and the cosmology. Multi-space models for $p$-branes and
cosmos are constructed and some questions in cosmology are clarified by
multi-spaces.
|
The increasing complexity of software engineering requires effective methods
and tools to support requirements analysts' activities. While much of a
company's knowledge can be found in text repositories, current content
management systems have limited capabilities for structuring and interpreting
documents. In this context, we propose a tool for transforming text documents
describing users' requirements into a UML model. The presented tool uses Natural
Language Processing (NLP) and semantic rules to generate a UML class diagram.
The main contribution of our tool is to provide assistance to designers,
facilitating the transition from a textual description of user requirements to
their UML diagrams, based on GATE (General Architecture for Text Engineering),
by formulating the necessary rules that generate new semantic annotations.
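For illustration only, a toy rule in the spirit of the NLP-plus-semantic-rules
pipeline; it does not use the actual GATE API, and the pattern and sample
requirements are invented.

    import re

    # Toy stand-in for a semantic rule: "X has a Y" -> class X with attribute Y.
    HAS_RULE = re.compile(r"(?:The|A|An)\s+(\w+)\s+has\s+(?:a|an)\s+(\w+)", re.I)

    def extract_classes(requirements):
        classes = {}
        for sentence in requirements:
            for owner, attr in HAS_RULE.findall(sentence):
                classes.setdefault(owner.capitalize(), []).append(attr)
        return classes

    reqs = ["A customer has a name.",
            "A customer has an address.",
            "An order has a date."]
    print(extract_classes(reqs))
    # {'Customer': ['name', 'address'], 'Order': ['date']}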
|
In this paper we theoretically investigate application of a bistable
Fabry-P\'{e}rot semiconductor laser under optical-injection as all-optical
activation unit for multilayer perceptron optical neural networks. The proposed
device is programmed to provide reconfigurable sigmoid-like activation
functions with adjustable thresholds and saturation points and benchmarked on
machine learning image recognition problems. Due to the reconfigurability of
the activation unit, the accuracy can be increased by up to 2% simply by
adjusting the control parameter of the activation unit to suit the specific
problem. For a simple two-layer perceptron neural network, we achieve inference
accuracies of up to 95% and 85%, for the MNIST and Fashion-MNIST datasets,
respectively.
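A minimal numerical sketch of such a reconfigurable activation, modelling the
optical response as a logistic function with adjustable threshold and
saturation; the parameter values and network shapes are assumptions, not the
device's measured transfer function.

    import numpy as np

    def laser_activation(x, threshold=0.0, saturation=1.0, steepness=4.0):
        """Sigmoid-like transfer function with tunable threshold and
        saturation point, mimicking the programmable optical response."""
        return saturation / (1.0 + np.exp(-steepness * (x - threshold)))

    def two_layer_perceptron(x, W1, b1, W2, b2, **act_params):
        hidden = laser_activation(x @ W1 + b1, **act_params)
        return hidden @ W2 + b2                    # class scores

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 784))                  # e.g. flattened MNIST digits
    W1, b1 = 0.01 * rng.normal(size=(784, 128)), np.zeros(128)
    W2, b2 = 0.01 * rng.normal(size=(128, 10)), np.zeros(10)
    scores = two_layer_perceptron(x, W1, b1, W2, b2, threshold=0.2)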
|
The derivation of the transport coefficients of an electron system in the
presence of a temperature gradient and electric and magnetic fields is
presented. The Nernst conductivity and the transverse thermoelectric power of
the Dirac fermions in graphene under charged impurity scatterings and a weak
magnetic field are calculated on the basis of the self-consistent Born
approximation. The result is compared with the experimental data available so
far.
|
We report $0.14''$ resolution observations of the dust continuum at band 7,
and the CO(3--2) and HCO$^{+}$(4--3) line emissions toward the transitional
disk around Sz 91 with the Atacama Large Millimeter/submillimeter Array (ALMA).
The dust disk appears to be an axisymmetric ring, peaking at a radius of $\sim$95~au
from a Gaussian fit. The Gaussian fit widths of the dust ring are 24.6 and
23.7~au for the major and the minor axes, respectively, indicating that the
dust ring is not geometrically thin. The gas disk extends out to $\sim$320~au
and is also detected in the inner hole of the dust ring. A twin-line pattern is
found in the channel maps of CO, which can be interpreted as the emission from
the front and rear of the flared gas disk. We perform the radiative transfer
calculations using RADMC-3D, to check whether the twin-line pattern can be
reproduced under the assumption that the flared gas disk has a power-law form
for the column density and $T_\mathrm{gas}=T_\mathrm{dust}$. The thermal Monte
Carlo calculation in RADMC-3D shows that the disk temperature has a gradient
along the vertical direction beyond the dust ring, as it blocks the stellar
radiation, and thus the twin-line pattern can be naturally explained by the
flared gas disk in combination with the dust ring. In addition, no significant
depletion of the CO molecules in the cold midplane achieves a reasonable
agreement with the observed twin-line pattern. This result indicates that the
CO emission from the rear surface must be heavily absorbed in the cold
midplane.
|
Common models for two-phase lipid bilayer membranes are based on an energy
that consists of an elastic term for each lipid phase and a line energy at
interfaces. Although such an energy controls only the length of interfaces, the
membrane surface is usually assumed to be at least $C^1$ across phase
boundaries. We consider the spontaneous curvature model for closed rotationally
symmetric two-phase membranes without excluding tangent discontinuities at
interfaces a priori. We introduce a family of energies for smooth surfaces
and phase fields for the lipid phases and derive a sharp interface limit that
coincides with the $\Gamma$-limit on all reasonable membranes and extends the
classical model by assigning a bending energy also to tangent discontinuities.
The theoretical result is illustrated by numerical examples.
|
We introduce conformal coupling of the Standard Model Higgs field to gravity
and discuss the subsequent modification of $R^2$-inflation. The main
observation is a lower temperature of reheating which happens mostly through
scalaron decays into gluons due to the conformal (trace) anomaly. This modifies
all predictions of the original $R^2$-inflation. To the next-to-leading order
in slow roll parameters we calculate amplitudes and indices of scalar and
tensor perturbations produced at inflation. The results are compared to the
next-to-leading order predictions of $R^2$-inflation with minimally coupled
Higgs field and of Higgs-inflation. We discuss additional features in gravity
wave signal that may help to distinguish the proposed variant of
$R^2$-inflation. Remarkably, the features are expected in the region available
for study at future experiments like BBO and DECIGO. Finally, we check that
(meta)stability of electroweak vacuum in the cosmological model is consistent
with recent results of searches for the Higgs boson at LHC.
|
Chebyshev was the first to observe a bias in the distribution of primes in
residue classes. The general phenomenon is that if $a$ is a nonsquare $\bmod\ q$
and $b$ is a square $\bmod\ q$, then there tend to be more primes congruent to
$a \bmod q$ than to $b \bmod q$ in initial intervals of the positive integers;
more succinctly,
there is a tendency for $\pi(x;q,a)$ to exceed $\pi(x;q,b)$. Rubinstein and
Sarnak defined $\delta(q;a,b)$ to be the logarithmic density of the set of
positive real numbers $x$ for which this inequality holds; intuitively,
$\delta(q;a,b)$ is the "probability" that $\pi(x;q,a) > \pi(x;q,b)$ when $x$ is
"chosen randomly". In this paper, we establish an asymptotic series for
$\delta(q;a,b)$ that can be instantiated with an error term smaller than any
negative power of $q$. This asymptotic formula is written in terms of a
variance $V(q;a,b)$ that is originally defined as an infinite sum over all
nontrivial zeros of Dirichlet $L$-functions corresponding to characters $\bmod\ q$;
we show how $V(q;a,b)$ can be evaluated exactly as a finite expression. In
addition to providing the exact rate at which $\delta(q;a,b)$ converges to
$\frac12$ as $q$ grows, these evaluations allow us to compare the various
density values $\delta(q;a,b)$ as $a$ and $b$ vary modulo $q$; by analyzing the
resulting formulas, we can explain and predict which of these densities will be
larger or smaller, based on arithmetic properties of the residue classes $a$
and $b \bmod q$. For example, we show that if $a$ is a prime power and $a'$ is
not, then $\delta(q;a,1) < \delta(q;a',1)$ for all but finitely many moduli $q$
for which both $a$ and $a'$ are nonsquares. Finally, we establish rigorous
numerical bounds for these densities $\delta(q;a,b)$ and report on extensive
calculations of them.
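The bias itself is easy to observe numerically. The sketch below counts primes
in residue classes mod 4, where 3 is a nonsquare and 1 is a square; it
illustrates the phenomenon only, not the paper's asymptotic series.

    from sympy import primerange

    def pi_progressions(x, q):
        """Count primes p <= x in each residue class mod q."""
        counts = {a: 0 for a in range(q)}
        for p in primerange(2, x + 1):
            counts[p % q] += 1
        return counts

    counts = pi_progressions(10**6, 4)
    # Chebyshev's bias: the count for the nonsquare class 3 typically
    # exceeds the count for the square class 1.
    print(counts[3], counts[1])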
|
Automotive-Industry 5.0 will use emerging 6G communications to provide
robust, computationally intelligent, and energy-efficient data sharing among
various onboard sensors, vehicles, and other Intelligent Transportation System
(ITS) entities. Non-Orthogonal Multiple Access (NOMA) and backscatter
communications are two key techniques of 6G communications for enhanced
spectrum and energy efficiency. In this paper, we provide an introduction to
green transportation and also discuss the advantages of using backscatter
communications and NOMA in Automotive Industry 5.0. We also briefly review the
recent work in the area of NOMA empowered backscatter communications. We
discuss different use cases of backscatter communications in NOMA-enabled 6G
vehicular networks. We also propose a multi-cell optimization framework to
maximize the energy efficiency of the backscatter-enabled NOMA vehicular
network. In particular, we jointly optimize the transmit power of the roadside
unit and the reflection coefficient of the backscatter device in each cell,
where several practical constraints are also taken into account. The energy
efficiency problem is formulated as a nonconvex optimization problem, which is
hard to solve directly.
Thus, first, we adopt the Dinkelbach method to transform the objective function
into a subtractive one, then we decouple the problem into two subproblems.
Second, we employ dual theory and KKT conditions to obtain efficient solutions.
Finally, we highlight some open issues and future research opportunities
related to NOMA-enabled backscatter communications in 6G vehicular networks.
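The Dinkelbach step can be sketched on a toy single-cell energy-efficiency
problem; the rate and power models below are placeholders, and a bounded scalar
search stands in for the dual/KKT solutions of the paper's subproblems.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def rate(p, g=2.0, noise=1.0):
        return np.log2(1.0 + g * p / noise)       # achievable rate [bit/s/Hz]

    def total_power(p, p_circuit=0.5):
        return p + p_circuit                      # transmit + circuit power

    def dinkelbach(p_max=10.0, tol=1e-8, max_iter=50):
        lam = 0.0                                 # current EE estimate
        for _ in range(max_iter):
            # Subtractive subproblem: maximize rate(p) - lam * power(p).
            res = minimize_scalar(lambda p: -(rate(p) - lam * total_power(p)),
                                  bounds=(0.0, p_max), method="bounded")
            p = res.x
            if abs(rate(p) - lam * total_power(p)) < tol:
                return p, lam                     # converged: lam is optimal EE
            lam = rate(p) / total_power(p)
        return p, lam

    p_opt, ee_opt = dinkelbach()
    print(f"power {p_opt:.3f}, energy efficiency {ee_opt:.3f}")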
|
Few-shot named entity recognition (NER) targets generalizing to unseen labels
and/or domains with few labeled examples. Existing metric learning methods
compute token-level similarities between query and support sets, but are not
able to fully incorporate label semantics into modeling. To address this issue,
we propose a simple method to largely improve metric learning for NER: 1)
multiple prompt schemas are designed to enhance label semantics; 2) we propose
a novel architecture to effectively combine multiple prompt-based
representations. Empirically, our method achieves new state-of-the-art (SOTA)
results under 16 of the 18 considered settings, substantially outperforming the
previous SOTA by an average of 8.84% and a maximum of 34.51% in relative gains
of micro F1. Our code is available at https://github.com/AChen-qaq/ProML.
|
In this paper, we prove a non-trivial bound for the weighted average
version of shifted convolution sum for $GL(3)\times GL(2)$, i.e. for any
$\epsilon >0$ and $X^{1/4+\delta} \leq H \leq X$ with $\delta >0$,
\[
\frac{1}{H}\sum_{h=1}^\infty \lambda_f(h) V\left(
\frac{h}{H}\right)\sum_{n=1}^\infty \lambda_{\pi}(1,n) \lambda_g (n+h) W\left(
\frac{n}{X} \right)\ll X^{1-\delta+\epsilon}
\]
where $V,W$ are smooth compactly supported functions, $\lambda_f(n),
\lambda_g(n)$ and $\lambda_{\pi}(1,n)$ are the normalized $n$-th Fourier
coefficients of $SL(2,\mathbb{Z})$ Hecke-Maass cusp forms $f,g$ and
$SL(3,\mathbb{Z})$ Hecke-Maass cusp form $\pi$, respectively.
|
We illustrate the use of the invariant spin field for describing permissible
equilibrium spin distributions in high energy spin polarised proton beams.
|
For a group $G$ and a natural number $m$, a subset $A$ of $G$ is called
$m$-thin if, for each finite subset $F$ of $G$, there exists a finite subset
$K$ of $G$ such that $|Fg\cap A|\leqslant m$ for every $g\in G\setminus K$. We
show that each $m$-thin subset of a group $G$ of cardinality $\aleph_n$, $n=
0,1,...$ can be partitioned into $\leqslant m^{n+1}$ 1-thin subsets. On the
other hand, we construct a group $G$ of cardinality $\aleph_\omega$ and point
out a 2-thin subset of $G$ which cannot be finitely partitioned into 1-thin
subsets.
|
A quantum mechanical three-body problem for two identical fermions of mass
$m$ and a distinct particle of mass $m_1$ in the universal limit of zero-range
two-body interaction is studied. For the unambiguous formulation of the problem
in the interval $\mu_r < m/m_1 \le \mu_c$ ($\mu_r \approx 8.619$ and $\mu_c
\approx 13.607$) an additional parameter $b$ determining the wave function near
the triple-collision point is introduced; thus, a one-parameter family of
self-adjoint Hamiltonians is defined. The dependence of the bound-state
energies on $m/m_1$ and $b$ in the sector of angular momentum and parity $L^P =
1^-$ is calculated and analysed with the aid of a simple model.
|
We compute the cohomology with trivial coefficients of Lie algebras
$\mathfrak{m}_0$ and $\mathfrak{m}_2$ of maximal class over the field
$\mathbb{Z}_2$. In the infinite-dimensional case, we show that the cohomology
rings $H^*(\mathfrak{m}_0)$ and $H^*(\mathfrak{m}_2)$ are isomorphic, in
contrast with the case of the ground field of characteristic zero, and we
obtain a complete description of them. In the finite-dimensional case, we find
the first three Betti numbers of $\mathfrak{m}_0(n)$ and $\mathfrak{m}_2(n)$
over $\mathbb{Z}_2$.
|
We study the optimal approximation of the solution of an operator equation
$Au=f$ by linear and nonlinear mappings.
We identify those cases where optimal nonlinear approximation is better than
optimal linear approximation.
|
From the set of error operators and their correction code, we introduce the
so-called complete unitary transformation. It can be used for encoding, while
its inverse can be applied to correct the errors of the encoded qubit. We show
that this unitary protocol can be applied to any code which
satisfies the quantum error correction condition.
|
We developed a non-Hermitian quantum optimization algorithm to find the
ground state of the ferromagnetic Ising model with up to 1024 spins (qubits).
Our approach leads to significant reduction of the annealing time. Analytical
and numerical results demonstrate that the total annealing time is proportional
to ln N, where N is the number of spins. This encouraging result is important
in using classical computers in combination with quantum algorithms for the
fast solutions of NP-complete problems. Additional research is proposed for
extending our dissipative algorithm to more complicated problems.
|
Nodal, excited compactons in the $\mathbb{C}P^N$ models with V-shaped
potentials are analyzed. It is shown that the solutions exist as compact
$Q$-balls and $Q$-shells. The solutions have a discontinuity in the second
derivative associated with the character of the potential, however, their
energy and charge densities are both continuous. The excited $Q$-balls and
$Q$-shells are analyzed as electrically neutral and electrically charged
objects.
|
Quantum-limited Josephson parametric amplifiers play a pivotal role in
advancing the field of circuit quantum electrodynamics by enabling the fast and
high-fidelity measurement of weak microwave signals. Therefore, it is necessary
to develop robust parametric amplifiers with low noise, broad bandwidth, and
reduced design complexity for microwave detection. However, current broadband
parametric amplifiers either have degraded noise performance or rely on complex
designs. Here, we present a device based on the broadband impedance-transformed
Josephson parametric amplifier (IMPA) that integrates a horn-like coplanar
waveguide (CPW) transmission line, which significantly decreases the design and
fabrication complexity, while keeping comparable performance. The device shows
an instantaneous bandwidth of 700(200) MHz for 15(20) dB gain with an average
saturation power of -110 dBm and near quantum-limited added noise. The
operating frequency can be tuned over 1.4 GHz using an external flux bias. We
further demonstrate the negligible back-action from our device on a transmon
qubit. The amplification performance and simplicity of our device promise its
wide adaptation in quantum metrology, quantum communication, and quantum
information processing.
|
In this paper we discuss the connected components of underlying graphs of
halving lines' configurations. We show how to create a configuration whose
underlying graph is the union of two given underlying graphs. We also prove
that every connected component of the underlying graph is itself an underlying
graph.
|
Linked Data is used in various fields as a new way of structuring and
connecting data. Cultural heritage institutions have been using linked data to
improve archival descriptions and facilitate the discovery of information. Most
archival records have digital representations of physical artifacts in the form
of scanned images that are non-machine-readable. Optical Character Recognition
(OCR) recognizes text in images and translates it into machine-encoded text.
This paper evaluates the impact of image processing methods and parameter
tuning in OCR applied to typewritten cultural heritage documents. The approach
uses a multi-objective problem formulation to minimize Levenshtein edit
distance and maximize the number of words correctly identified with a
non-dominated sorting genetic algorithm (NSGA-II) to tune the methods'
parameters. Evaluation results show that parameterization by digital
representation typology benefits the performance of image pre-processing
algorithms in OCR. Furthermore, our findings suggest that employing image
pre-processing algorithms in OCR might be more suitable for typologies where
the text recognition task without pre-processing does not produce good results.
In particular, Adaptive Thresholding, Bilateral Filter, and Opening are the
best-performing algorithms for the theatre plays' covers, letters, and overall
dataset, respectively, and should be applied before OCR to improve its
performance.
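A minimal sketch of such a pre-processing chain ahead of OCR with OpenCV and
Tesseract; the parameter values and the input filename are placeholders for
what NSGA-II would tune per document typology.

    import cv2
    import numpy as np
    import pytesseract

    def preprocess(gray, block_size=31, C=10, d=9, sigma=75, k=3):
        """Bilateral filter -> adaptive threshold -> morphological opening."""
        smoothed = cv2.bilateralFilter(gray, d, sigma, sigma)
        binary = cv2.adaptiveThreshold(smoothed, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, block_size, C)
        kernel = np.ones((k, k), np.uint8)
        return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    gray = cv2.imread("letter_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical scan
    text = pytesseract.image_to_string(preprocess(gray))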
|
Foundation models have had a big impact in recent years and billions of
dollars are being invested in them in the current AI boom. The more popular
ones, such as ChatGPT, are trained on large amounts of Internet data. However,
it is becoming apparent that this data is likely to be exhausted soon, and
technology companies are looking for new sources of data to train the next
generation of foundation models.
Reinforcement learning, RAG, prompt engineering and cognitive modelling are
often used to fine-tune and augment the behaviour of foundation models. These
techniques have been used to replicate people, such as Caryn Marjorie. These
chatbots are not based on people's actual emotional and physiological responses
to their environment, so they are, at best, a surface-level approximation to
the characters they are imitating.
To address these issues, we have developed a recording rig that captures what
the wearer is seeing and hearing as well as their skin conductance (GSR),
facial expression and brain state (14 channel EEG). AI algorithms are used to
process this data into a rich picture of the environment and internal states of
the subject. Foundation models trained on this data could replicate human
behaviour much more accurately than the personality models that have been
developed so far. This type of model has many potential applications, including
recommendation, personal assistance, GAN systems, dating and recruitment.
This paper gives some background to this work and describes the recording rig
and preliminary tests of its functionality. It then suggests how a new type of
foundation model could be created from the data captured by the rig and
outlines some applications. Data gathering and model training are expensive, so
we are currently working on the launch of a start-up that could raise funds for
the next stage of the project.
|
A persistent challenge in practical classification tasks is that labeled
training sets are not always available. In particle physics, this challenge is
surmounted by the use of simulations. These simulations accurately reproduce
most features of data, but cannot be trusted to capture all of the complex
correlations exploitable by modern machine learning methods. Recent work in
weakly supervised learning has shown that simple, low-dimensional classifiers
can be trained using only the impure mixtures present in data. Here, we
demonstrate that complex, high-dimensional classifiers can also be trained on
impure mixtures using weak supervision techniques, with performance comparable
to what could be achieved with pure samples. Using weak supervision will
therefore allow us to avoid relying exclusively on simulations for
high-dimensional classification. This work opens the door to a new regime
whereby complex models are trained directly on data, providing direct access to
probe the underlying physics.
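A minimal sketch of the idea on toy data: a classifier is trained only to
distinguish two impure mixtures with different (unknown) signal fractions,
using mixture membership as the label; the data model and fractions are
invented.

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier

    rng = np.random.default_rng(0)

    def mixed_sample(n, signal_fraction):
        """Toy events: 'signal' and 'background' are shifted Gaussians."""
        n_sig = int(n * signal_fraction)
        sig = rng.normal(loc=1.0, size=(n_sig, 5))
        bkg = rng.normal(loc=0.0, size=(n - n_sig, 5))
        return np.vstack([sig, bkg])

    A = mixed_sample(5000, 0.7)     # mixture with more signal
    B = mixed_sample(5000, 0.3)     # mixture with less signal
    X = np.vstack([A, B])
    y = np.concatenate([np.ones(len(A)), np.zeros(len(B))])

    clf = HistGradientBoostingClassifier().fit(X, y)
    # The mixture classifier's score is monotonically related to the
    # signal/background likelihood ratio, so it also separates the two.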
|
The Metropolis-Hastings method is often used to construct a Markov chain with
a given $\pi$ as its stationary distribution. The method works even if $\pi$ is
known only up to an intractable constant of proportionality. Polynomial time
convergence results for such chains (rapid mixing) are hard to obtain for high
dimensional probability models where the size of the state space potentially
grows exponentially with the model dimension. In a Bayesian context, Yang,
Wainwright, and Jordan (2016) (=YWJ) used the path method to prove rapid mixing
for high dimensional linear models. This paper proposes a modification of the
YWJ approach that simplifies the theoretical argument and improves the rate of
convergence. The new approach is illustrated by an application to an
exponentially weighted aggregation estimator.
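For reference, a minimal random-walk Metropolis sampler: only the unnormalized
log-density is needed, because the intractable constant cancels in the
acceptance ratio. This is illustrative and unrelated to the YWJ path-method
analysis.

    import numpy as np

    def metropolis_hastings(log_pi, x0, n_steps, step=0.5, seed=0):
        """Random-walk Metropolis for an unnormalized target log_pi."""
        rng = np.random.default_rng(seed)
        x, samples = np.asarray(x0, dtype=float), []
        for _ in range(n_steps):
            proposal = x + step * rng.normal(size=x.shape)
            if np.log(rng.uniform()) < log_pi(proposal) - log_pi(x):
                x = proposal                       # accept
            samples.append(x.copy())               # else keep current state
        return np.array(samples)

    # Target known only up to a constant: here a standard Gaussian.
    chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), [0.0], 10_000)
    print(chain.mean(), chain.var())               # ~0 and ~1 after burn-in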
|
We extend to the case of a $d$-dimensional compact connected oriented
Riemannian manifold $\mathcal M$ the theorem of A. Bondarenko, D. Radchenko and
M. Viazovska on the existence of $L$-designs consisting of $N$ nodes, for any
$N\ge C_{\mathcal M} L^d$. For this, we need to prove a version of the
Marcinkiewicz-Zygmund inequality for the gradient of diffusion polynomials.
|
Non-differentiable controllers and rule-based policies are widely used for
controlling real systems such as telecommunication networks and robots.
Specifically, parameters of mobile network base station antennas can be
dynamically configured by these policies to improve users' coverage and quality
of service. Motivated by the antenna tilt control problem, we introduce
Model-Based Residual Policy Learning (MBRPL), a practical reinforcement
learning (RL) method. MBRPL enhances existing policies through a model-based
approach, leading to improved sample efficiency and a decreased number of
interactions with the actual environment when compared to off-the-shelf RL
methods. To the best of our knowledge, this is the first paper that examines a
model-based approach for antenna control. Experimental results reveal that our
method delivers strong initial performance while improving sample efficiency
over previous RL methods, which is one step towards deploying these algorithms
in real networks.
|
Human identity verification has always been an eye-catching goal in digital
security systems. Authentication or identification systems developed using
human characteristics such as face, fingerprint, hand geometry, iris, and
voice are denoted as biometric systems. Among the various characteristics, iris
recognition relies on the idiosyncratic patterns of the human iris to determine
and verify the identity of a person. An image is normally regarded as a
collection of information, and the presence of noise in the input or processed
image degrades its quality. Restoring the original image from noise is
therefore paramount for retaining the maximum amount of information from
corrupted images: noisy images in a biometric identification system cannot
yield an accurate identity, and image data tends to be lost or damaged. Images
are affected by various sorts of noise. This paper mainly focuses on
salt-and-pepper noise, Gaussian noise, uniform noise, and speckle noise.
Different filtering techniques can be adopted for noise reduction to improve
the visual quality as well as the understandability of images. In this paper,
the four types of noise are applied to a set of images, and the noisy images
are filtered with four categories of filter, including the mean, median,
Wiener, and Gaussian filters. A comparative interpretation across the four
filter categories is performed using quality metrics: mean square error (MSE),
peak signal-to-noise ratio (PSNR), average difference (AD), and maximum
difference (MD).
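A minimal sketch of the evaluation loop for one noise/filter pair, computing
MSE and PSNR with NumPy/SciPy; the toy image, noise level and filter size are
arbitrary choices.

    import numpy as np
    from scipy.ndimage import median_filter

    def salt_and_pepper(img, amount, rng):
        noisy = img.copy()
        mask = rng.uniform(size=img.shape)
        noisy[mask < amount / 2] = 0               # pepper
        noisy[mask > 1 - amount / 2] = 255         # salt
        return noisy

    def mse(a, b):
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        return 10 * np.log10(peak**2 / mse(a, b))

    rng = np.random.default_rng(0)
    img = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)  # toy image
    noisy = salt_and_pepper(img, amount=0.05, rng=rng)
    restored = median_filter(noisy, size=3)        # median suits impulse noise
    print(f"MSE {mse(img, restored):.1f}, PSNR {psnr(img, restored):.1f} dB")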
|
This paper addresses a fuel-constrained, autonomous vehicle path planning
problem in the presence of multiple refueling stations. We are given a set of
targets, a set of refueling stations, and a depot where $m$ vehicles are
stationed. The vehicles are allowed to refuel at any refueling station, and the
objective of the problem is to determine a route for each vehicle starting and
terminating at the depot, such that each target is visited by at least one
vehicle, the vehicles never run out of fuel while traversing their routes, and
the total travel cost of all the routes is a minimum. We present four new
mixed-integer linear programming formulations for the problem. These
formulations are compared both analytically and empirically, and a
branch-and-cut algorithm is developed to compute an optimal solution. Extensive
computational results on a large class of test instances that corroborate the
effectiveness of the algorithm are also presented.
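For orientation, fuel feasibility is often encoded with a resource variable
$f_i$ (fuel remaining on arrival at node $i$) alongside binary arc variables
$x_{ij}$; the following is a generic, hedged sketch, not one of the paper's
four formulations:
\[
\min \sum_{(i,j)} c_{ij}\,x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij} \ge 1
\ \text{for every target } i, \qquad x_{ij}\in\{0,1\},
\]
\[
f_j \le f_i - c_{ij}\,x_{ij} + F\,(1-x_{ij}) \ \text{for every arc } (i,j)
\text{ into a target } j, \qquad 0 \le f_i \le F,
\]
with $f_j$ reset to the capacity $F$ whenever $j$ is the depot or a refueling
station, and fuel consumption taken proportional to the travel cost $c_{ij}$.
Flow-conservation and subtour-elimination constraints, which the
branch-and-cut procedure separates on the fly, complete the model.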
|
We develop an approach to characterize excited states of disordered many-body
systems using spatially resolved structures of entanglement. We show that the
behavior of the mutual information (MI) between two parties of a many-body
system can signal a qualitative difference between thermal and localized phases
-- MI is finite in insulators while it approaches zero in the thermodynamic
limit in the ergodic phase. Related quantities, such as the recently introduced
Codification Volume (CV), are shown to be suitable to quantify the correlation
length of the system. These ideas are illustrated using prototypical
non-interacting wavefunctions of localized and extended particles and then
applied to characterize states of strongly excited interacting spin chains. We
especially focus on the evolution of the spatial structure of quantum
information between the high-temperature diffusive and many-body localized
phases believed to
exist in these models. We study MI as a function of disorder strength both
averaged over the eigenstates and in time-evolved product states drawn from
continuously deformed family of initial states realizable experimentally. As
expected, spectral and time-evolved averages coincide inside the ergodic phase
and differ significantly outside. We also highlight dispersion among the
initial states \emph{within} the localized phase -- some of these show
considerable generation and delocalization of quantum information.
|
We investigate the performance of a variant of Axelrod's model for
dissemination of culture - the Adaptive Culture Heuristic (ACH) - on solving an
NP-Complete optimization problem, namely, the classification of binary input
patterns of size $F$ by a Boolean Binary Perceptron. In this heuristic, $N$
agents, characterized by binary strings of length $F$ which represent possible
solutions to the optimization problem, are fixed at the sites of a square
lattice and interact with their nearest neighbors only. The interactions are
such that the agents' strings (or cultures) become more similar to the low-cost
strings of their neighbors resulting in the dissemination of these strings
across the lattice. Eventually the dynamics freezes into a homogeneous
absorbing configuration in which all agents exhibit identical solutions to the
optimization problem. We find through extensive simulations that the
probability of finding the optimal solution is a function of the reduced
variable $F/N^{1/4}$ so that the number of agents must increase with the fourth
power of the problem size, $N \propto F^4$, to guarantee a fixed probability
of success. In this case, we find that the relaxation time to reach an
absorbing configuration scales with $F^6$, which can be interpreted as the
overall computational cost of the ACH to find an optimal set of weights for a
Boolean Binary Perceptron, given a fixed probability of success.
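A minimal simulation sketch of ACH-style dynamics, with Hamming distance to a
hidden target string standing in for the perceptron classification cost; the
paper's exact update rule may differ.

    import numpy as np

    def ach_sweep(lattice, cost, rng):
        """Each agent copies one differing bit from a random nearest
        neighbor whose string has strictly lower cost."""
        L = lattice.shape[0]
        moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
        for i in range(L):
            for j in range(L):
                di, dj = moves[rng.integers(4)]
                ni, nj = (i + di) % L, (j + dj) % L
                agent, neigh = lattice[i, j], lattice[ni, nj]
                diff = np.flatnonzero(agent != neigh)
                if diff.size and cost(neigh) < cost(agent):
                    k = rng.choice(diff)
                    agent[k] = neigh[k]            # in-place update

    rng = np.random.default_rng(0)
    F, L = 16, 10                       # string length, lattice side (N = L*L)
    target = rng.integers(2, size=F)    # stand-in for an optimal weight vector
    cost = lambda s: np.sum(s != target)
    lattice = rng.integers(2, size=(L, L, F))
    for _ in range(500):
        ach_sweep(lattice, cost, rng)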
|
In The Delta Conjecture (arxiv:1509.07058), Haglund, Remmel and Wilson
introduced a four variable $q,t,z,w$ Catalan polynomial, so named because the
specialization of this polynomial at the values $(q,t,z,w) = (1,1,0,0)$ is
equal to the Catalan number $\frac{1}{n+1}\binom{2n}{n}$. We prove the
compositional version of this conjecture (which implies the non-compositional
version) that states that the coefficient of $s_{r,1^{n-r}}$ in the expression
$\Delta_{h_\ell} \nabla C_\alpha$ is equal to a weighted sum over decorated
Dyck paths.
|
Many multimodal recommender systems have been proposed to exploit the rich
side information associated with users or items (e.g., user reviews and item
images) for learning better user and item representations to improve the
recommendation performance. Studies from psychology show that users have
individual differences in the utilization of various modalities for organizing
information. Therefore, for a certain factor of an item (such as appearance or
quality), the features of different modalities are of varying importance to a
user. However, existing methods ignore the fact that different modalities
contribute differently towards a user's preference on various factors of an
item. In light of this, in this paper, we propose a novel Disentangled
Multimodal Representation Learning (DMRL) recommendation model, which can
capture users' attention to different modalities on each factor in user
preference modeling. In particular, we employ a disentangled representation
technique to ensure the features of different factors in each modality are
independent of each other. A multimodal attention mechanism is then designed to
capture users' modality preference for each factor. Based on the estimated
weights obtained by the attention mechanism, we make recommendations by
combining the preference scores of a user's preferences to each factor of the
target item over different modalities. Extensive evaluation on five real-world
datasets demonstrates the superiority of our method compared with existing
methods.
|
We present a catalog of emission-line galaxies selected solely by their
emission-line fluxes using a wide-field integral field spectrograph. This work
is partially motivated as a pilot survey for the upcoming Hobby-Eberly
Telescope Dark Energy Experiment (HETDEX). We describe the observations,
reductions, detections, redshift classifications, line fluxes, and counterpart
information for 397 emission-line galaxies detected over 169 sq.arcmin with a
3500-5800 Ang. bandpass under 5 Ang. full-width-half-maximum (FWHM) spectral
resolution. The survey's best sensitivity for unresolved objects under
photometric conditions is between 4-20 E-17 erg/s/sq.cm depending on the
wavelength, and Ly-alpha luminosities between 3-6 E42 erg/s are detectable.
This survey method complements narrowband and color-selection techniques in the
search for high redshift galaxies with its different selection properties and
large volume probed. The four survey fields within the COSMOS, GOODS-N, MUNICS,
and XMM-LSS areas are rich with existing, complementary data. We find 104
galaxies via their high redshift Ly-alpha emission at 1.9<z<3.8, and the
majority of the remaining objects are low redshift [OII]3727 emitters at
z<0.56. The classification between low and high redshift objects depends on
rest frame equivalent width, as well as other indicators, where available.
Based on matches to X-ray catalogs, the active galactic nuclei (AGN) fraction
amongst the Ly-alpha emitters (LAEs) is 6%. We also analyze the survey's
completeness and contamination properties through simulations. We find five
high-z, highly-significant, resolved objects with full-width-half-maximum sizes
>44 sq.arcsec which appear to be extended Ly-alpha nebulae. We also find three
high-z objects with rest frame Ly-alpha equivalent widths above the level
believed to be achievable with normal star formation, EW(rest)>240 Ang.
|
B. Host and B. Kra (2005) have introduced the characteristic factors for
studying cubic ergodic averages. These factors allow one to resolve, in
particular, multiple recurrence problems introduced by H. Furstenberg (1977).
We show here that the continuity of the projection of the system in its
characteristic factors characterises cubic averages.
|