We present a computational study on the impact of tensile/compressive
uniaxial ($\varepsilon_{xx}$) and biaxial ($\varepsilon_{xx}=\varepsilon_{yy}$)
strain on monolayer MoS$_{2}$ NMOS and PMOS FETs. The material properties of
the channel, such as the band structure, carrier effective mass and the
multi-band Hamiltonian, are evaluated using density functional theory (DFT).
Using these parameters, a self-consistent Poisson-Schr\"{o}dinger solution
under the Non-Equilibrium Green's Function (NEGF) formalism is carried out to
simulate the MOS device characteristics. A uniaxial tensile strain of 1.75\%
is found to provide a minor (6\%) ON current improvement for the NMOSFET,
whereas the same amount of biaxial tensile strain considerably improves the
PMOSFET ON current by a factor of 2-3. Compressive strain, however, degrades
both NMOS and PMOS device performance. It is also observed that the
improvement in the PMOSFET is attained only when the channel material becomes
indirect-gap in nature. We further study the performance degradation in the
quasi-ballistic long channel regime using a projected current method.
|
We review our knowledge of the most basic properties of the AGN obscuring
region - its location, scale, symmetry, and mean covering factor - and discuss
new evidence on the distribution of covering factors in a sample of ~9000
quasars with WISE, UKIDSS, and SDSS photometry. The obscuring regions of AGN
may be in some ways more complex than we thought - multi-scale, not symmetric,
chaotic - and in some ways simpler - with no dependence on luminosity, and a
covering factor distribution that may be determined by the simplest of
considerations - e.g. random misalignments.
|
We present the full panchromatic afterglow light curve data of GW170817,
including new radio data as well as archival optical and X-ray data, between
0.5 and 940 days post-merger. By compiling all archival data, and reprocessing
a subset of it, we have evaluated the impact of differences in data processing
or flux determination methods used by different groups, and attempted to
mitigate these differences to provide a more uniform dataset. Simple power-law
fits to the uniform afterglow light curve indicate a $t^{0.86\pm0.04}$ rise, a
$t^{-1.92\pm0.12}$ decline, and a peak occurring at $155\pm4$ days. The
afterglow is optically thin throughout its evolution, consistent with a single
spectral index ($-0.584\pm0.002$) across all epochs. This gives a precise and
updated estimate of the electron power-law index, $p=2.168\pm0.004$. By
studying the diffuse X-ray emission from the host galaxy, we place a
conservative upper limit on the hot ionized ISM density, $<$0.01 cm$^{-3}$,
consistent with previous afterglow studies. Using the late-time afterglow data
we rule out any long-lived neutron star remnant having magnetic field strength
between 10$^{10.4}$ G and 10$^{16}$ G. Our fits to the afterglow data using an
analytical model that includes VLBI proper motion from Mooley et al. (2018),
and a structured jet model that ignores the proper motion, indicate that the
proper motion measurement needs to be considered while seeking an accurate
estimate of the viewing angle.
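As an illustration of the kind of simple power-law characterization quoted
above, the following minimal sketch fits a smoothly broken power law to a
synthetic light curve with scipy; the functional form, the smoothness
parameter s, and the data are assumptions for demonstration only, not the
fitting procedure actually used on the GW170817 dataset.

import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, f_peak, t_peak, a_rise, a_decay, s=3.0):
    # Smoothly broken power law: ~ t^a_rise well before t_peak, ~ t^a_decay after.
    x = t / t_peak
    return f_peak * 2.0**(1.0 / s) * (x**(-s * a_rise) + x**(-s * a_decay))**(-1.0 / s)

# Hypothetical flux densities (arbitrary units) at times in days post-merger.
t_obs = np.array([10, 30, 60, 100, 155, 220, 350, 600, 900], dtype=float)
truth = broken_power_law(t_obs, 100.0, 155.0, 0.86, -1.92)
rng = np.random.default_rng(0)
f_obs = truth * (1 + 0.05 * rng.standard_normal(t_obs.size))
f_err = 0.05 * truth

popt, _ = curve_fit(broken_power_law, t_obs, f_obs, sigma=f_err,
                    p0=[80.0, 120.0, 1.0, -2.0], absolute_sigma=True)
print("peak flux, peak time, rise index, decay index:", popt)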
|
Electrical transport in semiconductor superlattices is studied within a fully
self-consistent quantum transport model based on nonequilibrium Green
functions, including phonon and impurity scattering. We compute both the drift
velocity-field relation and the momentum distribution function covering the
whole field range from linear response to negative differential conductivity.
The quantum results are compared with the respective results obtained from a
Monte Carlo solution of the Boltzmann equation. Our analysis thus sets the
limits of validity for the semiclassical theory in a nonlinear transport
situation in the presence of inelastic scattering.
|
We consider the coupled chemotaxis Navier-Stokes model with logistic source
terms \[ n_t + u\cdot \nabla n = \Delta n - \chi \nabla \cdot (n \nabla c) +
\kappa n - \mu n^2\] \[ c_t + u\cdot \nabla c = \Delta c - nc\] \[ u_t +
(u\cdot \nabla)u = \Delta u +\nabla P + n\nabla \Phi + f, \quad\qquad \nabla
\cdot u=0 \] in a bounded, smooth domain $\Omega\subset \mathbb{R}^3$ under
homogeneous Neumann boundary conditions for $n$ and $c$ and homogeneous
Dirichlet boundary conditions for $u$ and with given functions $f\in
L^\infty(\Omega\times(0,\infty))$ satisfying certain decay conditions and
$\Phi\in C^{1+\beta}(\bar\Omega)$ for some $\beta\in(0,1)$.
We construct weak solutions and prove that after some waiting time they
become smooth and finally converge to the semi-trivial steady state
$(\frac{\kappa}{\mu},0,0)$.
Keywords: chemotaxis, Navier-Stokes, logistic source, boundedness, large-time
behaviour
|
Fluorescence detection is a commonly used analytical method with the
advantages of fast response, good selectivity and low destructiveness. However,
fluorescence detection, a single-mode detection method, has some limitations,
such as background interference that affects the accuracy of the fluorescence
signal, lack of visualization of the detection results, and low sensitivity for
detecting low-concentration samples. In order to overcome the shortcomings of
fluorescence single-mode detection, we used a dual-mode method combining
fluorescence and colorimetry to detect ascorbic acid (AA).
The dual-mode detection of AA by fluorescence and colorimetry in the probe
system enhances the specificity and accuracy of the detection. This bimodal
detection method addresses the problem of low detection sensitivity in the low
concentration range of the analytes, is linear in both the lower (0-50 {\mu}M)
and higher (50-350 {\mu}M) concentration ranges, and has a low detection limit
(0.034 {\mu}M). This glutathione-based gold cluster assay is characterized by
simplicity, rapidity and accuracy, and provides a new way for the quantitative
analysis of ascorbic acid. In addition, the method was validated by
determining AA in beverages, demonstrating high sensitivity and fast response
time.
|
Quadrotors are one of the popular unmanned aerial vehicles (UAVs) due to
their versatility and simple design. However, the tuning of gains for quadrotor
flight controllers can be laborious, and accurate, stable trajectory control
can be difficult to maintain under exogenous disturbances and uncertain system
parameters. This paper introduces a novel robust and adaptive control
synthesis methodology for a quadrotor robot's attitude and altitude
stabilization. The developed method is based on fuzzy reinforcement learning
and the Strictly Negative Imaginary (SNI) property. The first stage of our
control approach is to transform the nonlinear quadrotor system into an
equivalent Negative-Imaginary (NI) linear model by means of the feedback
linearization (FL) technique. The second phase is to design a control scheme
that adapts the SNI controller gains online via fuzzy Q-learning, inspired by
biological learning. The proposed controller does not require any prior
training. The performance of the designed controller is compared with that of
a fixed-gain SNI controller, a fuzzy-SNI controller, and a conventional PID
controller in a series of numerical simulations. Furthermore, the stability of
the proposed controller and the adaptive laws are proved using the NI theorem.
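As a loose illustration of online gain adaptation by reinforcement learning,
the sketch below runs a plain tabular Q-learning loop that picks a
proportional gain for a toy first-order plant from the discretized tracking
error; the plant, reward, and discretization are invented for the example, and
the fuzzy inference layer and SNI structure of the actual controller are not
modeled here.

import numpy as np

rng = np.random.default_rng(0)

# Toy first-order plant standing in for one attitude/altitude channel.
def step_plant(x, gain, ref=1.0, dt=0.02):
    u = gain * (ref - x)                  # simple proportional action
    x_next = x + dt * (-x + u)
    return x_next, ref - x_next           # next state, tracking error

gains = np.linspace(0.5, 5.0, 10)         # discrete candidate gains (actions)
n_err_bins = 8
Q = np.zeros((n_err_bins, gains.size))    # tabular Q-function
alpha, gamma, eps = 0.1, 0.95, 0.1

def bin_error(e):
    return int(np.clip(abs(e) * n_err_bins, 0, n_err_bins - 1))

x, s = 0.0, bin_error(1.0)
for _ in range(5000):
    a = rng.integers(gains.size) if rng.random() < eps else int(Q[s].argmax())
    x, e = step_plant(x, gains[a])
    r = -abs(e)                           # reward: penalize tracking error
    s_next = bin_error(e)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("preferred gain per error bin:", gains[Q.argmax(axis=1)])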
|
Machine learning and especially deep learning have garnered tremendous
popularity in recent years due to their increased performance over other
methods. The availability of large amounts of data has aided in the progress
of deep learning. Nevertheless, deep learning models are opaque and often seen
as black boxes. Thus, there is an inherent need to make the models
interpretable, especially so in the medical domain. In this work, we propose a
locally interpretable method, which is inspired by one of the recent tools
that has gained a lot of interest, called local interpretable model-agnostic
explanations (LIME). LIME generates single instance level explanations by
artificially generating a dataset around the instance (by randomly sampling
and using perturbations) and then training a local linear interpretable model.
One of the major issues in LIME is the instability in the generated
explanation, which is caused by the randomly generated dataset. Another issue
in these kinds of local interpretable models is the local fidelity. We propose
novel modifications to LIME by employing an autoencoder, which serves as a
better weighting function for the local model. We perform extensive
comparisons with different datasets and show that our proposed method results
in both improved stability, as well as local fidelity.
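A minimal sketch of the idea of replacing LIME's distance-based sample
weighting with weights computed in an autoencoder's latent space is given
below; the tiny autoencoder, the kernel width, the toy data, and the ridge
surrogate are illustrative assumptions, not the exact configuration of the
proposed method.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Toy tabular data standing in for a medical dataset; black_box is a stand-in model.
X = rng.standard_normal((500, 10)).astype(np.float32)
black_box = lambda A: (A[:, 0] * A[:, 1] + np.sin(A[:, 2])).astype(np.float32)

# Small autoencoder trained on the data distribution.
enc = nn.Sequential(nn.Linear(10, 4), nn.Tanh())
dec = nn.Sequential(nn.Linear(4, 10))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
Xt = torch.from_numpy(X)
for _ in range(300):
    opt.zero_grad()
    loss = ((dec(enc(Xt)) - Xt) ** 2).mean()
    loss.backward()
    opt.step()

def explain(x0, n_samples=200, sigma=1.0, noise=0.5):
    # LIME-style local explanation, weighted by autoencoder-latent proximity.
    Z = (x0 + noise * rng.standard_normal((n_samples, x0.size))).astype(np.float32)
    y = black_box(Z)
    with torch.no_grad():
        lat = enc(torch.from_numpy(Z)).numpy()
        lat0 = enc(torch.from_numpy(x0[None, :])).numpy()
    d = np.linalg.norm(lat - lat0, axis=1)
    w = np.exp(-(d ** 2) / sigma ** 2)          # latent-space proximity weights
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_                       # local feature attributions

print(explain(X[0]))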
|
It has long been accepted that the multiple-ion single-file transport model
is appropriate for many kinds of ion channels. However, most of the purely
theoretical works in this field did not capture all of the important features
of the realistic systems. Nowadays, large-scale atomic-level simulations are
more feasible. Discrepancies between theories, simulations and experiments
are becoming obvious, enabling people to carefully examine the missing parts
of the theoretical models and methods. In this work, we attempt to identify
the essential features that such models should possess so that the physical
properties of an ion channel are adequately reflected.
|
A systematic analysis of Higgs-mediated contributions to the Bd and Bs mass
differences is presented in the MSSM with large values of tan(beta). In
particular, supersymmetric corrections to Higgs self-interactions are seen to
modify the correlation between Delta Mq and BR(Bq --> mu+ mu-) for light
Higgses. The present experimental upper bound on BR(Bs --> mu+ mu-) is
nevertheless still sufficient to exclude noticeable Higgs-mediated effects on
the mass differences in most of the parameter space.
|
We present a virtual element method for the Reissner-Mindlin plate bending
problem which uses shear strain and deflection as discrete variables without
the need of any reduction operator. The proposed method is conforming in
$[H^{1}(\Omega)]^2 \times H^2(\Omega)$ and has the advantages of using general
polygonal meshes and yielding a direct approximation of the shear strains. The
rotations are then obtained by a simple postprocess from the shear strain and
deflection. We prove convergence estimates with involved constants that are
uniform in the thickness $t$ of the plate. Finally, we report numerical
experiments which allow us to assess the performance of the method.
|
We prove that directions of closed geodesics in every dilation surface form a
dense subset of the circle. The proof draws on a study of the degenerations of
the Delaunay triangulation of dilation surfaces under the action of
Teichm\"{u}ller flow in the moduli space.
|
A generalization of the quantum van der Waals equation of state for a
multi-component system in the grand canonical ensemble is proposed. The model
includes quantum statistical effects and allows one to specify the parameters
characterizing repulsive and attractive forces for each pair of particle
species. The model can be straightforwardly applied to the description of
asymmetric nuclear matter and also to mixtures of interacting nucleons and
nuclei. Applications of the model to the equation of state of an interacting
hadron resonance gas are discussed.
|
Let M,M' be smooth real hypersurfaces in N-dimensional space and assume that
M is k-nondegenerate at a point p in M. We prove that holomorphic mappings that
extend smoothly to M, sending a neighborhood of p in M diffeomorphically into
M' are completely determined by their 2k-jet at p. As an application of this
result, we also give sufficient conditions on a smooth real hypersurface which
guarantee that the space of infinitesimal CR automorphisms is finite
dimensional.
|
We previously introduced a family of symplectic maps of the torus whose
quantization exhibits scarring on invariant co-isotropic submanifolds. The
purpose of this note is to show that in contrast to other examples, where
failure of Quantum Unique Ergodicity is attributed to high multiplicities in
the spectrum, for these examples the spectrum is (generically) simple.
|
The relation between the Ahlfors map and Szeg\"o kernel S(z, a) is classical.
The Szeg\"o kernel is a solution of a Fredholm integral equation of the second
kind with the Kerzman-Stein kernel. The exact zeros of the Ahlfors map are
unknown except for the annulus region. This paper presents a numerical method
for computing the zeros of the Ahlfors map of any bounded doubly connected
region. The method depends on the values of S(z(t),a), S'(z(t),a) and
\theta'(t), where \theta(t) is the boundary correspondence function of the
Ahlfors map. A formula is derived for computing S'(z(t),a). An integral
equation is constructed for solving \theta'(t). The numerical examples
presented here demonstrate the effectiveness of the proposed method.
|
We present extensions of the Colorful Helly Theorem for $d$-collapsible and
$d$-Leray complexes, providing a common generalization to the matroidal
versions of the theorem due to Kalai and Meshulam, the ``very colorful" Helly
theorem introduced by Arocha, B\'ar\'any, Bracho, Fabila and Montejano, and the
``semi-intersecting" colorful Helly theorem proved by Montejano and Karasev.
As an application, we obtain the following extension of Tverberg's Theorem:
Let $A$ be a finite set of points in $\mathbb{R}^d$ with $|A|>(r-1)(d+1)$.
Then, there exist a partition $A_1,\ldots,A_r$ of $A$ and a subset $B\subset A$
of size $(r-1)(d+1)$, such that $\cap_{i=1}^r \text{conv}( (B\cup\{p\})\cap
A_i)\neq\emptyset$ for all $p\in A\setminus B$. That is, we obtain a partition
of $A$ into $r$ parts that remains a Tverberg partition even after removing all
but one arbitrary point from $A\setminus B$.
|
We experimentally demonstrate an angularly-multiplexed holographic memory
capable of intrinsic generation, storage and retrieval of multiple photons,
based on off-resonant Raman interaction in warm rubidium-87 vapors. The memory
capacity of up to 60 independent atomic spin-wave modes is evidenced by
analyzing angular distributions of coincidences between Stokes and time-delayed
anti-Stokes light, observed down to the level of single spin-wave excitation
during the several-$\mu$s memory lifetime. We also propose how to practically
enhance the generation rates of single and multiple photons by combining our
multimode emissive memory with existing fast optical switches.
|
Minimal input/output selection is investigated in this paper for each
subsystem of a networked system. Some novel sufficient conditions are derived
respectively for the controllability and observability of a networked system,
as well as some necessary conditions. These conditions only depend separately
on parameters of each subsystem and its in/out-degrees. It is proven that in
order to be able to construct a controllable/observable networked system, it is
necessary and sufficient that each subsystem is controllable/observable. In
addition, both sparse and dense subsystem connections are helpful in making the
whole system controllable/observable. An explicit formula is given for the
smallest number of inputs/outputs for each subsystem required to guarantee
controllability/observability of the whole system.
|
We present an analysis method that allows us to estimate the Galactic
formation rate of radio pulsar populations based on their observed properties and
our understanding of survey selection effects. More importantly, this method
allows us to assign a statistical significance to such rate estimates and
calculate the allowed ranges of values at various confidence levels. Here, we
apply the method to the question of the double neutron star (NS-NS) coalescence
rate using the current observed sample, and we calculate the most likely
value for the total Galactic coalescence rate to lie in the range 3-22
Myr^{-1}, for different pulsar population models. The corresponding range of
expected detection rates of NS--NS inspiral are (1-9)x10^{-3} yr^{-1} for the
initial LIGO, and 6-50 yr^{-1} for the advanced LIGO. Based on this newly
developed statistical method, we also calculate the probability distribution
for the expected number of pulsars that could be observed by the Parkes
Multibeam survey, when acceleration searches will alleviate the effects of
Doppler smearing due to orbital motions. We suggest that the Parkes survey will
probably detect 1-2 new binary pulsars like PSRs B1913+16 and/or B1534+12.
|
We study the surface diffusion flow acting on a class of general
(non--axisymmetric) perturbations of cylinders $\mathcal{C}_r$ in
$\mathbb{R}^3$. Using tools from parabolic theory on uniformly regular
manifolds, and
maximal regularity, we establish existence and uniqueness of solutions to
surface diffusion flow starting from (spatially--unbounded) surfaces defined
over $\mathcal{C}_r$ via scalar height functions which are uniformly bounded
away from the central cylindrical axis. Additionally, we show that
$\mathcal{C}_r$ is normally stable with respect to $2 \pi$--axially--periodic
perturbations if the radius $r > 1$, and unstable if $0 < r < 1$. Stability is
also shown to hold in settings with axial Neumann boundary conditions.
|
Let $G$ be a connected, linear, real reductive Lie group with compact centre.
Let $K<G$ be compact. Under a condition on $K$, which holds in particular if
$K$ is maximal compact, we give a geometric expression for the multiplicities
of the $K$-types of any tempered representation (in fact, any standard
representation) $\pi$ of $G$. This expression is in the spirit of Kirillov's
orbit method and the quantisation commutes with reduction principle. It is
based on the geometric realisation of $\pi|_K$ obtained in an earlier paper.
This expression was obtained for the discrete series by Paradan, and for
tempered representations with regular parameters by Duflo and Vergne. We obtain
consequences for the support of the multiplicity function, and a criterion for
multiplicity-free restrictions that applies to general admissible
representations. As examples, we show that admissible representations of
$\mathrm{SU}(p,1)$, $\mathrm{SO}_0(p,1)$ and $\mathrm{SO}_0(2,2)$ restrict
multiplicity-freely to maximal compact subgroups.
|
There does not exist an algorithm that can determine whether or not a group
presented by commutators is a right-angled Artin group.
|
An exact renormalization group equation is written down for the world sheet
theory describing the bosonic open string in general backgrounds. Loop variable
techniques are used to make the equation gauge invariant. This is worked out
explicitly up to level 3. The equation is quadratic in the fields and can be
viewed as a proposal for a string field theory equation. As in the earlier loop
variable approach, the theory has one extra space dimension and mass is
obtained by dimensional reduction. Being based on the sigma model RG, it is
background independent.
It is intriguing that in contrast to BRST string field theory, the gauge
transformations are not modified by the interactions up to the level
calculated. The interactions can be written in terms of gauge invariant field
strengths for the massive higher spin fields and the non zero mass is essential
for this. This is reminiscent of Abelian Born-Infeld action (along with
derivative corrections) for the massless vector field, which is also written in
terms of the field strength.
|
Li and Haldane conjectured and numerically substantiated that the
entanglement spectrum of the reduced density matrix of ground-states of
time-reversal breaking topological phases (fractional quantum Hall states)
contains information about the counting of their edge modes when the
ground-state is cut in two spatially distinct regions and one of the regions is
traced out. We analytically substantiate this conjecture for a series of FQH
states defined as unique zero modes of pseudopotential Hamiltonians by finding
a one to one map between the thermodynamic limit counting of two different
entanglement spectra: the particle entanglement spectrum, whose counting of
eigenvalues for each good quantum number is identical (up to accidental
degeneracies) to the counting of bulk quasiholes, and the orbital entanglement
spectrum (the Li-Haldane spectrum). As the particle entanglement spectrum is
related to bulk quasihole physics and the orbital entanglement spectrum is
related to edge physics, our map can be thought of as a mathematically sound
microscopic description of bulk-edge correspondence in entanglement spectra. By
using a set of clustering operators which have their origin in conformal field
theory (CFT) operator expansions, we show that the counting of the orbital
entanglement spectrum eigenvalues in the thermodynamic limit must be identical
to the counting of quasiholes in the bulk. The latter equals the counting of
edge modes at a hard-wall boundary placed on the sample. Moreover, we show this
to be true even for CFT states which are likely bulk gapless, such as the
Gaffnian wavefunction.
|
We investigate the solution landscape of a reduced Landau--de Gennes model
for nematic liquid crystals on a two-dimensional hexagon at a fixed
temperature, as a function of $\lambda$---the edge length. This is a generic
example for reduced approaches on regular polygons. We apply the high-index
optimization-based shrinking dimer method to systematically construct the
solution landscape consisting of multiple defect solutions and relationships
between them. We report a new stable T state with index-$0$ that has an
interior $-1/2$ defect; new classes of high-index saddle points with multiple
interior defects referred to as H class and TD class; changes in the Morse
index of saddle points with $\lambda^2$ and novel pathways mediated by
high-index saddle points that can control and steer dynamical pathways. The
range of topological degrees, locations and multiplicity of defects offered by
these saddle points can be used to navigate through complex solution landscapes
of nematic liquid crystals and other related soft matter systems.
|
Cone-beam computed tomography (CBCT) is routinely collected during
image-guided radiation therapy (IGRT) to provide updated patient anatomy
information for cancer treatments. However, CBCT images often suffer from
streaking artifacts and noise caused by under-sampled projections and
low-dose exposure, resulting in low clarity and information loss. While recent
deep learning-based CBCT enhancement methods have shown promising results in
suppressing artifacts, they have limited performance on preserving anatomical
details since conventional pixel-to-pixel loss functions are incapable of
describing detailed anatomy. To address this issue, we propose a novel
feature-oriented deep learning framework that translates low-quality CBCT
images into high-quality CT-like imaging via a multi-task customized
feature-to-feature perceptual loss function. The framework comprises two main
components: a multi-task learning feature-selection network (MTFS-Net) for
customizing the perceptual loss function; and a CBCT-to-CT translation network
guided by feature-to-feature perceptual loss, which uses advanced generative
models such as U-Net, GAN and CycleGAN. Our experiments showed that the
proposed framework can generate synthesized CT (sCT) images for the lung that
achieved a high similarity to CT images, with an average SSIM index of 0.9869
and an average PSNR index of 39.9621. The sCT images also achieved visually
pleasing performance with effective artifact suppression, noise reduction, and
preservation of distinctive anatomical details. Our experimental results indicate
that the proposed framework outperforms the state-of-the-art models for
pulmonary CBCT enhancement. This framework holds great promise for generating
high-quality anatomical imaging from CBCT that is suitable for various clinical
applications.
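The sketch below illustrates a generic feature-to-feature perceptual loss
computed between a generated image and a reference, using a frozen
ImageNet-pretrained VGG-16 as a stand-in feature extractor; the actual
framework customizes this loss with its own multi-task MTFS-Net, so the
extractor, layer choice, and weights here are placeholders.

import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    # L1 distance between intermediate feature maps of a frozen extractor.
    def __init__(self, layer_ids=(3, 8, 15), weights=(1.0, 1.0, 1.0)):
        super().__init__()
        feats = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
        for p in feats.parameters():
            p.requires_grad_(False)
        self.feats = feats
        self.layer_ids = set(layer_ids)
        self.weights = dict(zip(layer_ids, weights))

    def forward(self, pred, target):
        # pred/target: (N, 3, H, W) tensors scaled to the extractor's input range.
        loss, x, y = 0.0, pred, target
        for i, layer in enumerate(self.feats):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + self.weights[i] * (x - y).abs().mean()
            if i >= max(self.layer_ids):
                break
        return loss

# Example: compare a synthesized CT-like image against a reference CT slice.
criterion = PerceptualLoss()
sct = torch.rand(1, 3, 256, 256)   # placeholder synthesized image
ct = torch.rand(1, 3, 256, 256)    # placeholder reference image
print(criterion(sct, ct).item())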
|
We present an inertia-augmented relaxed micromorphic model that enriches the
relaxed micromorphic model previously introduced by the authors via a term
$\text{Curl}\dot{P}$ in the kinetic energy density. This enriched model allows
us to obtain a good overall fitting of the dispersion curves while introducing
the new possibility of describing modes with negative group velocity that are
known to trigger negative refraction effects. The inertia-augmented model also
allows for more freedom on the values of the asymptotes corresponding to the
cut-offs. In the previous version of the relaxed micromorphic model, the
asymptote of one curve (pressure or shear) is always bounded by the cut-off of
the following curve of the same type. This constraint does not hold anymore in
the enhanced version of the model. While the obtained curves' fitting is of
good quality overall, a perfect quantitative agreement must still be reached
for very small wavelengths that are close to the size of the unit cell.
|
Toda field theories are important integrable systems. They can be regarded as
constrained WZNW models, and this viewpoint helps to give their explicit
general solutions, especially when a Drinfeld-Sokolov gauge is used. The main
objective of this paper is to carry out this approach of solving the Toda field
theories for the classical Lie algebras. In this process, we discover and prove
some algebraic identities for principal minors of special matrices. The known
elegant solutions of Leznov fit in our scheme in the sense that they are the
general solutions to our conditions discovered in this solving process. To
prove this, we find and prove some differential identities for iterated
integrals. It can be said that altogether our paper gives complete mathematical
proofs for Leznov's solutions.
|
We develop a manifestly conformal approach to describe linearised
(super)conformal higher-spin gauge theories in arbitrary conformally flat
backgrounds in three and four spacetime dimensions. Closed-form expressions in
terms of gauge prepotentials are given for gauge-invariant higher-spin (super)
Cotton and (super) Weyl tensors in three and four dimensions, respectively. The
higher-spin (super) Weyl tensors are shown to be conformal primary
(super)fields in arbitrary conformal (super)gravity backgrounds; however, they
are gauge invariant only if the background (super) Weyl tensor vanishes. The
proposed higher-spin actions are (super) Weyl-invariant on arbitrary curved
backgrounds; however, the appropriate higher-spin gauge invariance holds only
in the conformally flat case. We also describe conformal models for generalised
gauge fields that are used to describe partially massless dynamics in three and
four dimensions. In particular, generalised higher-spin Cotton and Weyl tensors
are introduced.
|
In this paper, we introduce some classes of generalized tracial approximation
${\rm C^*}$-algebras. Consider the class of unital ${\rm C^*}$-algebras which
are tracially $\mathcal{Z}$-absorbing (or have tracial nuclear dimension at
most $n$, or have the property $\rm SP$, or are $m$-almost divisible). We show
that any simple unital ${\rm C^*}$-algebra $A$ in the corresponding class of
generalized tracial approximation ${\rm C^*}$-algebras is tracially
$\mathcal{Z}$-absorbing (respectively, has tracial nuclear dimension at most
$n$, has the property $\rm SP$, is weakly ($n, m$)-almost divisible). As an
application, let $A$ be an infinite-dimensional unital simple ${\rm
C^*}$-algebra, and let $B$ be a centrally large subalgebra of $A$. If $B$ is
tracially $\mathcal{Z}$-absorbing, then $A$ is tracially
$\mathcal{Z}$-absorbing. This result was obtained by Archey, Buck, and Phillips
in \cite{AJN}.
|
Quantum correlations between two neighbor atoms are studied. It is assumed
that one atomic system comprises a single auto-ionizing level and the other
atom does not contain any auto-ionizing level. The excitation of both atoms is
achieved by the interaction with the same mode of the quantized field. It is
shown that the long-time behavior of two atoms exhibits quantum correlations
even when the atoms do not interact directly. This can be shown using the
optical excitation of the neighbor atom. Also a measure of entanglement of two
atoms can be applied after reduction of the continuum to two levels.
|
The paper concerns parameterized equilibria governed by generalized equations
whose multivalued parts are modeled via regular normals to nonconvex conic
constraints. Our main goal is to derive a precise pointwise second-order
formula for calculating the graphical derivative of the solution maps to such
generalized equations that involves Lagrange multipliers of the corresponding
KKT systems and critical cone directions. Then we apply the obtained formula to
characterizing a Lipschitzian stability notion for the solution maps that is
known as isolated calmness.
|
We introduce for the first time a general model of biased-active particles,
where the direction of the active force has a biased angle from the principal
orientation of the anisotropic interaction between particles. We find that a
highly ordered living superlattice consisting of small clusters with dynamic
chirality emerges in a mixture of such biased-active particles and passive
particles. We show that the biased-propulsion-induced instability of
active-active particle pairs and the rotation of active-passive particle pairs are
the very reason for the superlattice formation. In addition, a
biased-angle-dependent optimal active force is most favorable for both the
long-range order and global dynamical chirality of the system. Our results
demonstrate that the proposed biased-active particle model provides a great
opportunity to explore a variety of new and fascinating collective behaviors
beyond those of conventional active particles.
|
This paper develops a test scenario specification procedure using crash
sequence analysis and Bayesian network modeling. Intersection two-vehicle crash
data was obtained from the 2016 to 2018 National Highway Traffic Safety
Administration Crash Report Sampling System database. Vehicles involved in the
crashes are specifically renumbered based on their initial positions and
trajectories. Crash sequences are encoded to include detailed pre-crash events
and concise collision events. Based on sequence patterns, the crashes are
characterized as 55 types. A Bayesian network model is developed to depict the
interrelationships among crash sequence types, crash outcomes, human factors,
and environmental conditions. Scenarios are specified by querying the Bayesian
network conditional probability tables. Distributions of operational design
domain attributes - such as driver behavior, weather, lighting condition,
intersection geometry, traffic control device - are specified based on
conditions of sequence types. Also, the distribution of sequence types is
specified conditional on specific crash outcomes or combinations of
operational design domain attributes.
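To illustrate the kind of conditional-probability-table query described above,
the following minimal sketch builds a toy Bayesian network with pgmpy and
queries the distribution of a crash-sequence type given an operational design
domain condition; the node set, states, and probabilities are invented
placeholders, not values from the CRSS analysis.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: Weather -> SequenceType -> Severity.
model = BayesianNetwork([("Weather", "SequenceType"), ("SequenceType", "Severity")])

cpd_weather = TabularCPD("Weather", 2, [[0.8], [0.2]],
                         state_names={"Weather": ["clear", "adverse"]})
cpd_seq = TabularCPD("SequenceType", 2, [[0.7, 0.4], [0.3, 0.6]],
                     evidence=["Weather"], evidence_card=[2],
                     state_names={"SequenceType": ["turn_across_path", "straight_crossing"],
                                  "Weather": ["clear", "adverse"]})
cpd_sev = TabularCPD("Severity", 2, [[0.9, 0.5], [0.1, 0.5]],
                     evidence=["SequenceType"], evidence_card=[2],
                     state_names={"Severity": ["minor", "severe"],
                                  "SequenceType": ["turn_across_path", "straight_crossing"]})

model.add_cpds(cpd_weather, cpd_seq, cpd_sev)
assert model.check_model()

# Query the sequence-type distribution under an adverse-weather condition.
infer = VariableElimination(model)
print(infer.query(["SequenceType"], evidence={"Weather": "adverse"}))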
|
The earthquake in December 2004 caused free oscillations of the Earth.
When the Earth vibrates in the football mode 0S2, rotation splits the natural
frequency into five closely adjacent individual components. The sum of these
spectral components in the frequency band near 309 {\mu}Hz produces a beat
that gives the overall amplitude envelope a characteristic, regular pattern.
From the measured envelope, the frequency, amplitude, phase and damping of the
generating sinusoids can be reconstructed. Since the method is extremely
sensitive to changes in frequency and phase, these quantities can be determined
precisely. The results depend on the geographical location of the site. Further
results are the half-life of the amplitude decrease and the resonator Q. It is
shown that the interaction of the five individual frequencies can be
interpreted as amplitude modulation, which requires a nonlinear process in the
Earth's interior.
|
In this paper, we study streaming algorithms that minimize the number of
changes made to their internal state (i.e., memory contents). While the design
of streaming algorithms typically focuses on minimizing space and update time,
these metrics fail to capture the asymmetric costs, inherent in modern hardware
and database systems, of reading versus writing to memory. In fact, most
streaming algorithms write to their memory on every update, which is
undesirable when writing is significantly more expensive than reading. This
raises the question of whether streaming algorithms with small space and number
of memory writes are possible.
We first demonstrate that, for the fundamental $F_p$ moment estimation
problem with $p\ge 1$, any streaming algorithm that achieves a constant factor
approximation must make $\Omega(n^{1-1/p})$ internal state changes, regardless
of how much space it uses. Perhaps surprisingly, we show that this lower bound
can be matched by an algorithm that also has near-optimal space complexity.
Specifically, we give a $(1+\varepsilon)$-approximation algorithm for $F_p$
moment estimation that uses a near-optimal
$\widetilde{\mathcal{O}}_\varepsilon(n^{1-1/p})$ number of state changes, while
simultaneously achieving near-optimal space, i.e., for $p\in[1,2]$, our
algorithm uses $\text{poly}\left(\log n,\frac{1}{\varepsilon}\right)$ bits of
space, while for $p>2$, the algorithm uses
$\widetilde{\mathcal{O}}_\varepsilon(n^{1-2/p})$ space. We similarly design
streaming algorithms that are simultaneously near-optimal in both space
complexity and the number of state changes for the heavy-hitters problem,
sparse support recovery, and entropy estimation. Our results demonstrate that
an optimal number of state changes can be achieved without sacrificing space
complexity.
|
The auditory and vestibular systems exhibit remarkable sensitivity of
detection, responding to deflections on the order of Angstroms, even in the
presence of biological noise. Further, these complex systems exhibit high
temporal acuity and frequency selectivity, allowing us to make sense of the
acoustic world around us. As this acoustic environment of interest spans
several orders of magnitude in both amplitude and frequency, these systems rely
heavily on nonlinearities and power-law scaling. The behavior of these sensory
systems has been extensively studied in the context of dynamical systems
theory, with many empirical phenomena described by critical dynamics. Other
phenomena have been explained by systems in the chaotic regime, where weak
perturbations drastically impact the future state of the system. We first
review the conceptual framework behind these two types of detectors, as well as
the detection features that they can capture. We then explore the intersection
of the two types of systems and propose ideal parameter regimes for auditory
and vestibular systems.
|
Blind image deblurring, i.e., deblurring without knowledge of the blur
kernel, is a highly ill-posed problem. The problem can be solved in two parts:
i) estimate a blur kernel from the blurry image, and ii) given estimated blur
kernel, de-convolve blurry input to restore the target image. In this paper, we
propose a graph-based blind image deblurring algorithm by interpreting an image
patch as a signal on a weighted graph. Specifically, we first argue that a
skeleton image---a proxy that retains the strong gradients of the target but
smooths out the details---can be used to accurately estimate the blur kernel
and has a unique bi-modal edge weight distribution. Then, we design a
reweighted graph total variation (RGTV) prior that can efficiently promote a
bi-modal edge weight distribution given a blurry patch. Further, to analyze
RGTV in the graph frequency domain, we introduce a new weight function to
represent RGTV as a graph $l_1$-Laplacian regularizer. This leads to a graph
spectral filtering interpretation of the prior with desirable properties,
including robustness to noise and blur, strong piecewise smooth (PWS) filtering
and sharpness promotion. Minimizing a blind image deblurring objective with
RGTV results in a non-convex non-differentiable optimization problem. We
leverage the new graph spectral interpretation for RGTV to design an efficient
algorithm that solves for the skeleton image and the blur kernel alternately.
Specifically for Gaussian blur, we propose a further speedup strategy for blind
Gaussian deblurring using accelerated graph spectral filtering. Finally, with
the computed blur kernel, recent non-blind image deblurring algorithms can be
applied to restore the target image. Experimental results demonstrate that our
algorithm successfully restores latent sharp images and outperforms
state-of-the-art methods quantitatively and qualitatively.
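As a rough illustration of a reweighted graph total variation prior on an
image patch, the sketch below builds signal-dependent edge weights on a
4-connected pixel grid and evaluates the resulting weighted total variation;
the Gaussian weight kernel and its parameter are assumptions for the example,
and the full alternating deblurring algorithm is not reproduced here.

import numpy as np

def rgtv(patch, sigma=0.5):
    # Reweighted graph TV on a 4-connected grid:
    # sum over edges (i,j) of w_ij(x) * |x_i - x_j|, with signal-dependent weights.
    dh = patch[:, 1:] - patch[:, :-1]          # horizontal edges
    dv = patch[1:, :] - patch[:-1, :]          # vertical edges
    wh = np.exp(-(dh ** 2) / (2 * sigma ** 2))
    wv = np.exp(-(dv ** 2) / (2 * sigma ** 2))
    return float(np.sum(wh * np.abs(dh)) + np.sum(wv * np.abs(dv)))

sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0            # piecewise-smooth step patch
blurry = 0.5 * (sharp + np.roll(sharp, 1, axis=1))        # crude one-pixel blur
print("RGTV of sharp patch :", rgtv(sharp))
print("RGTV of blurry patch:", rgtv(blurry))

The sharp, bi-modal patch receives a smaller prior value than its blurred
counterpart, which is the behavior that makes the prior useful for promoting
skeleton-like (piecewise smooth) estimates.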
|
Prediction performance of a risk scoring system needs to be carefully
assessed before its adoption in clinical practice. Clinical preventive care
often uses risk scores to screen asymptomatic population. The primary clinical
interest is to predict the risk of having an event by a pre-specified future
time $t_0$. Prospective accuracy measures such as positive predictive values
have been recommended for evaluating the predictive performance. However, for
commonly used continuous or ordinal risk score systems, these measures require
a subjective cutoff threshold value that dichotomizes the risk scores. The need
for a cut-off value created barriers for practitioners and researchers. In this
paper, we propose a threshold-free summary index of positive predictive values
that accommodates time-dependent event status. We develop a nonparametric
estimator and provide an inference procedure for comparing this summary measure
between competing risk scores for censored time to event data. We conduct a
simulation study to examine the finite-sample performance of the proposed
estimation and inference procedures. Lastly, we illustrate the use of this
measure on a real data example, comparing two risk score systems for predicting
heart failure in childhood cancer survivors.
|
We show that if two hyperbolic Dirichlet-to-Neumann maps, associated to two
Riemannian metrics on a Riemannian manifold with boundary that coincide near
the boundary, are close, then the lens data of the two metrics is the same. As
a consequence, we prove local uniqueness of the recovery of a conformal factor
(sound speed) under some conditions on the latter.
|
Peripheral nucleon-nucleus collisions at high energies occur mainly through
the interaction with one constituent quark from the incident nucleon.
The central collisions should involve all three constituent quarks and each of
them can interact several times. We calculate the average number of
quark-nucleus interactions for both the cases in good agreement with the
experimental data on $\phi$-meson, $K^{*0}$ and all charged secondaries
productions in $p+Pb$ collisions at LHC energy $\sqrt s = 5$ TeV.
|
Likelihood-based methods of statistical inference provide a useful general
methodology that is appealing, as a straightforward asymptotic theory can be
applied for their implementation. It is important to assess the relationships
between different likelihood-based inferential procedures in terms of accuracy
and adherence to key principles of statistical inference, in particular those
relating to conditioning on relevant ancillary statistics. An analysis is given
of the stability properties of a general class of likelihood-based statistics,
including those derived from forms of adjusted profile likelihood, and
comparisons are made between inferences derived from different statistics. In
particular, we derive a set of sufficient conditions for agreement to
$O_{p}(n^{-1})$, in terms of the sample size $n$, of inferences, specifically
$p$-values, derived from different asymptotically standard normal pivots. Our
analysis includes inference problems concerning a scalar or vector interest
parameter, in the presence of a nuisance parameter.
|
The development of Policy Iteration (PI) has inspired many recent algorithms
for Reinforcement Learning (RL), including several policy gradient methods that
gained both theoretical soundness and empirical success on a variety of tasks.
The theory of PI is rich in the context of centralized learning, but its study
in the federated setting is still in its infancy. This paper
investigates the federated version of Approximate PI (API) and derives its
error bound, taking into account the approximation error introduced by
environment heterogeneity. We theoretically prove that a proper client
selection scheme can reduce this error bound. Based on the theoretical result,
we propose a client selection algorithm to alleviate the additional
approximation error caused by environment heterogeneity. Experiment results
show that the proposed algorithm outperforms other biased and unbiased client
selection methods on the federated mountain car problem and the Mujoco Hopper
problem by effectively selecting clients with a lower level of heterogeneity
from the population distribution.
|
In this work, we prove a generalization of Quillen's Theorem A to
2-categories equipped with a special set of morphisms which we think of as weak
equivalences, providing sufficient conditions for a 2-functor to induce an
equivalence on $(\infty,1)$-localizations. When restricted to 1-categories with
all morphisms marked, our theorem retrieves the classical Theorem A of Quillen.
We additionally state and provide evidence for a new conjecture: the cofinality
conjecture, which describes the relation between a conjectural theory of marked
$(\infty,2)$-colimits and our generalization of Theorem A.
|
This brief review presents the emerging field of mesoscopic physics with cold
atoms, with an emphasis on thermal and 'thermoelectric' transport, i.e. coupled
transport of particle and entropy. We review in particular the comparison
between theoretically predicted and experimentally observed thermoelectric
effects in such systems. We also show how combining well designed transport
properties and evaporative cooling leads to an equivalent of the Peltier effect
with cold atoms, which can be used as a new cooling procedure with improved
cooling power and efficiency compared to the evaporative cooling currently used
in atomic gases. This could lead to a new generation of experiments probing
strong correlation effects of ultracold fermionic atoms at low temperatures.
|
We have found theoretically that the elementary process, p + p to K+ +
Lambda(1405) + p, which occurs in a short impact parameter (around 0.2 fm) and
with a large momentum transfer (Q ~ 1.6 GeV/c), leads to unusually large
self-trapping of Lambda(1405) by the projectile proton, when a Lambda* -p
system exists as a dense bound state (size ~ 1.0 fm) propagating to K^-pp. The
seed, called "Lambda*-p doorway", is expected to play an important role in the
(p, K*) type reactions and heavy-ion collisions to produce various Kbar nuclear
clusters.
|
In this note, we revisit the 4-dimensional theory of massive gravity through
compactification of an extra dimension and geometric symmetry breaking. We
dimensionally reduce the 5-dimensional topological Chern-Simons gauge theory of
(anti) de Sitter group on an interval. We apply non-trivial boundary conditions
at the endpoints to break all of the gauge symmetries. We identify different
components of the gauge connection as invertible vierbein and spin-connection
to interpret it as a gravitational theory. The effective field theory in four
dimensions includes the dRGT potential terms and has a tower of Kaluza-Klein
states without a massless graviton in the spectrum. The UV cutoff of the theory is
the Planck scale of the 5-dimensional gravity $l^{-1}$. If $\zeta$ is the scale
of symmetry breaking and $L$ is the length of the interval, then the masses of
the lightest graviton $m$ and the level $n$ (for $n<Ll^{-1}$) KK gravitons
$m_{\rm KK}^{(n)}$ are determined as $m=(\zeta L^{-1})^{\frac{1}{2}}\ll m_{\rm
KK}^{(n)}=nL^{-1}$. The 4-dimensional Planck mass is $m_{\rm Pl}\sim
(Ll^{-3})^{\frac{1}{2}}$ and we find the hierarchy $\zeta< m<
L^{-1}<l^{-1}<m_{\rm Pl}$.
|
The common spatial pattern analysis (CSP) is a widely used signal processing
technique in brain-computer interface (BCI) systems to increase the
signal-to-noise ratio in electroencephalogram (EEG) recordings. Despite its
popularity, the CSP's performance is often hindered by the nonstationarity and
artifacts in EEG signals. The minmax CSP improves the robustness of the CSP by
using data-driven covariance matrices to accommodate the uncertainties. We show
that by utilizing the optimality conditions, the minmax CSP can be recast as an
eigenvector-dependent nonlinear eigenvalue problem (NEPv). We introduce a
self-consistent field (SCF) iteration with line search that solves the NEPv of
the minmax CSP. Local quadratic convergence of the SCF for solving the NEPv is
illustrated using synthetic datasets. More importantly, experiments with
real-world EEG datasets show the improved motor imagery classification rates
and shorter running time of the proposed SCF-based solver compared to the
existing algorithm for the minmax CSP.
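For readers unfamiliar with NEPv solvers, the following sketch runs a plain
self-consistent field iteration on a small synthetic eigenvector-dependent
eigenvalue problem H(v)v = lambda*v; the problem instance is invented, and the
line-search safeguard used in the actual minmax-CSP solver is omitted.

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2     # symmetric base matrix
alpha = 0.5

def H(v):
    # Eigenvector-dependent matrix: H(v) = A + alpha * diag(v**2).
    return A + alpha * np.diag(v ** 2)

# Plain SCF iteration: repeatedly take the dominant eigenvector of H(v).
v = np.ones(n) / np.sqrt(n)
for it in range(100):
    w, V = np.linalg.eigh(H(v))
    v_new = V[:, -1]                       # eigenvector of the largest eigenvalue
    if v_new @ v < 0:                      # fix the sign ambiguity
        v_new = -v_new
    if np.linalg.norm(v_new - v) < 1e-10:
        v = v_new
        break
    v = v_new

print("iterations:", it + 1,
      "residual:", np.linalg.norm(H(v) @ v - (v @ H(v) @ v) * v))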
|
We explore the analytic structure of three-point functions using contour
deformations. This method allows continuing calculations analytically from the
spacelike to the timelike regime. We first elucidate the case of two-point
functions with explicit explanations how to deform the integration contour and
the cuts in the integrand to obtain the known cut structure of the integral.
This is then applied to one-loop three-point integrals. We explicate individual
conditions of the corresponding Landau analysis in terms of contour
deformations. In particular, the emergence and position of singular points in
the complex integration plane are relevant to determine the physical
thresholds. As an exploratory demonstration of this method's numerical
implementation we apply it to a coupled system of functional equations for the
propagator and the three-point vertex of $\phi^3$ theory. We demonstrate that
under generic circumstances the three-point vertex function displays cuts which
can be determined from modified Landau conditions.
|
The FitzHugh-Nagumo equation provides a simple mathematical model of cardiac
tissue as an excitable medium hosting spiral wave vortices. Here we present
extensive numerical simulations studying long-term dynamics of knotted vortex
string solutions for all torus knots up to crossing number 11. We demonstrate
that FitzHugh-Nagumo evolution preserves the knot topology for all the examples
presented, thereby providing a novel field theory approach to the study of
knots. Furthermore, the evolution yields a well-defined minimal length for each
knot that is comparable to the ropelength of ideal knots. We highlight the role
of the medium boundary in stabilizing the length of the knot and discuss the
implications beyond torus knots. By applying Moffatt's test we are able to show
that there is not a unique attractor within a given knot topology.
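For reference, one commonly used form of the spatially extended
FitzHugh-Nagumo system is \[ \partial_t u = \nabla^2 u +
\frac{1}{\varepsilon}\left(u - \tfrac{1}{3}u^3 - v\right), \qquad \partial_t v
= \varepsilon\,(u + \beta - \gamma v), \] where $u$ is the fast excitation
variable, $v$ the slow recovery variable, and $\varepsilon$, $\beta$, $\gamma$
set the time-scale separation and excitability; the precise parameterization
varies between studies, and the form above is quoted as an illustration rather
than necessarily the one simulated here.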
|
Belief and plausibility are weaker measures of uncertainty than that of
probability. They are motivated by the situations when full probabilistic
information is not available. However, information can also be contradictory.
Therefore, the framework of classical logic is not necessarily the most
adequate. Belnap-Dunn logic was introduced to reason about incomplete and
contradictory information. Klein et al. and Bilkova et al. generalize the
notions of probability measures and belief functions, respectively, to
Belnap-Dunn logic. In this article, we study how to update belief functions
with new
pieces of information. We present a first approach via a frame semantics of
Belnap-Dunn logic.
|
Using a theoretical framework based on the next-to-leading order QCD improved
effective Hamiltonian, we have estimated the branching ratios and asymmetry
parameters for the two body charmless nonleptonic decay modes of $\Lambda_b$
baryon i.e. $\Lambda_b \to p (\pi/\rho),~p(K/K^*)$ and $\Lambda (\pi/\rho)$,
within the framework of generalized factorization. The nonfactorizable
contributions are parametrized in terms of the effective number of colors,
$N_c^{eff}$. So in addition to the naive factorization approach ($N_c^{eff}=3
$), here we have taken two more values for $N_c^{eff}$ i.e., $N_c^{eff}=2 $ and
$\infty $. The baryonic form factors at maximum momentum transfer ($ q_m^2 $)
are evaluated using the nonrelativistic quark model and the extrapolation of
the form factors from $q_m^2$ to the required $q^2$ value is done by assuming
the pole dominance. The obtained branching ratios for $\Lambda_b \to p\pi, ~pK$
processes lie within the present experimental upper limit.
|
We show that every planar graph can be represented by a monotone topological
2-page book embedding where at most 15n/16 (of potentially 3n-6) edges cross
the spine exactly once.
|
As the recently proposed voice cloning system, NAUTILUS, is capable of
cloning unseen voices using untranscribed speech, we investigate the
feasibility of using it to develop a unified cross-lingual TTS/VC system.
Cross-lingual speech generation is the scenario in which speech utterances are
generated with the voices of target speakers in a language not spoken by them
originally. This type of system is not simply cloning the voice of the target
speaker, but essentially creating a new voice that can be considered better
than the original under a specific framing. By using a well-trained English
latent linguistic embedding to create a cross-lingual TTS and VC system for
several German, Finnish, and Mandarin speakers included in the Voice Conversion
Challenge 2020, we show that our method not only creates cross-lingual VC with
high speaker similarity but also can be seamlessly used for cross-lingual TTS
without having to perform any extra steps. However, the subjective evaluations
of perceived naturalness seemed to vary between target speakers, which is one
aspect for future improvement.
|
In a series of papers Amati, Ciafaloni and Veneziano and 't Hooft conjectured
that black holes occur in the collision of two light particles at planckian
energies. In this paper we discuss a possible scenario for such a process by
using the Chandrasekhar-Ferrari-Xanthopoulos duality between the Kerr black
hole solution and colliding plane gravitational waves. We clarify issues
arising in the definition of transition amplitude from a quantum state
containing only usual matter without black holes to a state containing black
holes. Collision of two plane gravitational waves producing a space-time region
which is locally isometric to an interior of black hole solution is considered.
The phase of the transition amplitude from plane waves to white and black hole
is calculated by using the Fabbrichesi, Pettorino, Veneziano and Vilkovisky
approach. An alternative extension beyond the horizon in which the space-time
again splits into two separating gravitational waves is also discussed. Such a
process is interpreted as the scattering of plane gravitational waves through
creation of virtual black and white holes.
|
Theory of convolutional neural networks suggests the property of shift
equivariance, i.e., that a shifted input causes an equally shifted output. In
practice, however, this is not always the case. This poses a great problem for
scene text detection for which a consistent spatial response is crucial,
irrespective of the position of the text in the scene.
Using a simple synthetic experiment, we demonstrate the inherent shift
variance of a state-of-the-art fully convolutional text detector. Furthermore,
using the same experimental setting, we show how small architectural changes
can lead to an improved shift equivariance and less variation of the detector
output. We validate the synthetic results using a real-world training schedule
on the text detection network. To quantify the amount of shift variability, we
propose a metric based on well-established text detection benchmarks.
While the proposed architectural changes are not able to fully recover shift
equivariance, adding smoothing filters can substantially improve shift
consistency on common text datasets. Considering the potentially large impact
of small shifts, we propose to extend the commonly used text detection metrics
by the metric described in this work, in order to be able to quantify the
consistency of text detectors.
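A minimal sketch of the kind of smoothing (anti-aliasing) filter mentioned
above is shown below: a fixed binomial low-pass kernel applied per channel
before strided downsampling, in the spirit of anti-aliased pooling; the kernel
size and placement are assumptions, not the exact architectural change
evaluated in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    # Low-pass filter each channel with a fixed binomial kernel, then downsample.
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        # One identical 3x3 kernel per channel (depthwise convolution).
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).contiguous())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=1, groups=self.channels)

# Example: replace a plain stride-2 downsampling step in a detector backbone.
x = torch.rand(1, 64, 128, 128)
down = BlurPool2d(channels=64)
print(down(x).shape)   # -> torch.Size([1, 64, 64, 64])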
|
Epoxy polymers are used in a wide range of applications. The properties and
performance of epoxy polymers depend upon various factors like the type of
constituents and their proportions used and other process parameters. The
conventional way of developing epoxy polymers is usually labor-intensive and
may not be fully efficient, which has resulted in epoxy polymers having a
limited performance range due to the use of predetermined blend combinations,
compositions and development parameters. Hence, in order to experiment with
more design parameters, robust and easy computational techniques need to be
established. To this end, we developed and analyzed in this study a new machine
learning (ML) based approach to predict the mechanical properties of epoxy
polymers based on their basic structural features. The results from molecular
dynamics (MD) simulations have been used to derive the ML model. The salient
feature of our work is that for the development of epoxy polymers based on
EPON-862, several new hardeners were explored in addition to the conventionally
used ones. The influence of additional parameters like the proportion of curing
agent used and the extent of curing on the mechanical properties of epoxy
polymers were also investigated. This method can be further extended to
design epoxy polymers with desired properties based on knowledge of the
structural characteristics of their constituents. The findings of our study
can thus lead toward the development of efficient design methodologies for
epoxy polymeric systems.
|
In this work, we derive a recently proposed Abelian model to describe the
interaction of correlated monopoles, center vortices, and dual fields in three
dimensional SU(2) Yang-Mills theory. Following recent polymer techniques,
special care is taken to obtain the end-to-end probability for a single
interacting center vortex, which constitutes a key ingredient to represent the
ensemble integration.
|
Cosmic-ray observations provide a powerful probe of dark matter annihilation
in the Galaxy. In this paper we derive constraints on heavy dark matter from
the recent precise AMS-02 antiproton data. We consider all possible
annihilation channels into pairs of standard model particles. Furthermore, we
interpret our results in the context of minimal dark matter, including
higgsino, wino and quintuplet dark matter. We compare the cosmic-ray antiproton
limits to limits from $\gamma$-ray observations of dwarf spheroidal galaxies
and to limits from $\gamma$-ray and $\gamma$-line observations towards the
Galactic center. While the latter limits are highly dependent on the dark
matter density distribution and only exclude a thermal wino for cuspy profiles,
the cosmic-ray limits are more robust, strongly disfavoring the thermal wino
dark matter scenario even for a conservative estimate of systematic
uncertainties.
|
Smart grids are large and complex cyber physical infrastructures that require
real-time monitoring for ensuring the security and reliability of the system.
Monitoring the smart grid involves analyzing continuous data-stream from
various measurement devices deployed throughout the system, which are
topologically distributed and structurally interrelated. In this paper, graph
signal processing (GSP) has been used to represent and analyze the power grid
measurement data. It is shown that GSP can enable various analyses for the
power grid's structured data and dynamics of its interconnected components.
Particularly, the effects of various cyber and physical stresses in the power
grid are evaluated and discussed both in the vertex and the graph-frequency
domains of the signals. Several techniques for detecting and locating cyber and
physical stresses based on GSP techniques have been presented and their
performances have been evaluated and compared. The presented study shows that
GSP can be a promising approach for analyzing the power grid's data.
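To illustrate the graph-frequency view of grid measurements used above, the
sketch below computes a graph Fourier transform of a sensor-measurement vector
from the Laplacian of a toy grid topology and scores anomalies by their
high-graph-frequency energy; the topology, signal, and scoring rule are
invented for the example.

import numpy as np

# Toy 5-bus grid topology given by its adjacency matrix.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
eigvals, U = np.linalg.eigh(L)                 # U columns = graph Fourier basis

def gft(x):
    # Graph Fourier transform of a vertex signal x.
    return U.T @ x

def high_freq_energy(x, k=2):
    # Energy of the k highest graph-frequency components (a simple stress score).
    xhat = gft(x)
    return float(np.sum(xhat[-k:] ** 2))

smooth = np.array([1.00, 1.01, 1.02, 1.01, 1.00])      # typical smooth bus measurements
stressed = smooth.copy(); stressed[2] += 0.3           # localized anomaly at bus 2

print("normal  :", high_freq_energy(smooth))
print("stressed:", high_freq_energy(stressed))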
|
In this paper we study the "standardized candle method" using a sample of 37
nearby (z<0.06) Type II plateau supernovae having BVRI photometry and optical
spectroscopy. An analytic procedure is implemented to fit light curves, color
curves, and velocity curves. We find that the V-I color toward the end of the
plateau can be used to estimate the host-galaxy reddening with a precision of
0.2 mag. The correlation between plateau luminosity and expansion velocity
previously reported in the literature is recovered. Using this relation and
assuming a standard reddening law (Rv = 3.1), we obtain Hubble diagrams in the
BVI bands with dispersions of ~0.4 mag. Allowing Rv to vary and minimizing the
spread in the Hubble diagrams, we obtain a dispersion range of 0.25-0.30 mag,
which implies that these objects can deliver relative distances with precisions
of 12-14%. The resulting best-fit value of Rv is 1.4 +/- 0.1.
|
It is known that every graph with n vertices embeds stochastically into trees
with distortion $O(\log n)$. In this paper, we show that this upper bound is
sharp for a large class of graphs. As this class of graphs contains diamond
graphs, this result extends known examples that obtain this largest possible
stochastic distortion.
|
A promising approach for scalable Gaussian processes (GPs) is the
Karhunen-Lo\`eve (KL) decomposition, in which the GP kernel is represented by a
set of basis functions which are the eigenfunctions of the kernel operator.
Such decomposed kernels have the potential to be very fast, and do not depend
on the selection of a reduced set of inducing points. However, KL decompositions
lead to high dimensionality, and variable selection becomes paramount. This
paper reports a new method of forward variable selection, enabled by the
ordered nature of the basis functions in the KL expansion of the Bayesian
Smoothing Spline ANOVA kernel (BSS-ANOVA), coupled with fast Gibbs sampling in
a fully Bayesian approach. It quickly and effectively limits the number of
terms, yielding a method with competitive accuracies, training and inference
times for tabular datasets of low feature set dimensionality. The inference
speed and accuracy makes the method especially useful for dynamic systems
identification, by modeling the dynamics in the tangent space as a static
problem, then integrating the learned dynamics using a high-order scheme. The
methods are demonstrated on two dynamic datasets: a `Susceptible, Infected,
Recovered' (SIR) toy problem, with the transmissibility used as the forcing
function, and on the experimental `Cascaded Tanks' benchmark dataset.
Comparisons on the static prediction of time derivatives are made with a random
forest (RF), a residual neural network (ResNet), and the Orthogonal Additive
Kernel (OAK) inducing points scalable GP, while for the timeseries prediction
comparisons are made with LSTM and GRU recurrent neural networks (RNNs) along
with the SINDy package.
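To make the idea of ordered forward selection concrete, here is a minimal sketch (an assumed illustration: simple polynomial terms and BIC stand in for the BSS-ANOVA basis and the paper's fully Bayesian, Gibbs-sampled procedure). Candidate terms are visited in a fixed order and kept only while the criterion improves.

```python
import numpy as np

# Minimal sketch of ordered forward variable selection (assumed illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# ordered candidate basis columns (low-order main effects first, then interactions)
candidates = [X[:, 0], X[:, 1], X[:, 2],
              X[:, 0] ** 2, X[:, 1] ** 2, X[:, 2] ** 2,
              X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]]

def bic(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    n, k = design.shape
    return n * np.log(rss / n) + k * np.log(n)

selected = [np.ones(len(y))]                  # start from the intercept only
best = bic(np.column_stack(selected), y)
for term in candidates:                       # one pass through the ordered list
    trial = np.column_stack(selected + [term])
    score = bic(trial, y)
    if score < best:                          # keep the term only if BIC improves
        selected.append(term)
        best = score
print(f"kept {len(selected) - 1} of {len(candidates)} candidate terms, BIC = {best:.1f}")
```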
|
The P-wave charm-strange mesons $D_{s0}(2317)$ and $D_{s1}(2460)$ lie below
the $DK$ and $D^\ast K$ threshold respectively. They are extremely narrow
because their strong decays violate the isospin symmetry. We study the possible
heavy molecular states composed of a pair of excited charm-strange mesons. As a
byproduct, we also present the numerical results for the bottomonium-like
analogue.
|
We use optimal control theory to find the best spraying policy, with the aim
of at least minimizing and possibly eradicating the number of parasites, i.e.,
the prey of the spiders living in an agroecosystem. Two different optimal
control problems are posed and solved,
and their implications discussed.
|
Mispronunciation detection and diagnosis (MDD) is designed to identify
pronunciation errors and provide instructive feedback to guide non-native
language learners, which is a core component in computer-assisted pronunciation
training (CAPT) systems. However, MDD often suffers from the data-sparsity
problem, because collecting non-native data and the associated annotations
is time-consuming and labor-intensive. To address this issue, we explore a
fully end-to-end (E2E) neural model for MDD, which processes learners' speech
directly based on raw waveforms. Compared to conventional hand-crafted acoustic
features, raw waveforms retain more acoustic phenomena and potentially can help
neural networks discover better and more customized representations. To this
end, our MDD model adopts a so-called SincNet module that takes a raw waveform
as input and converts it into a suitable vector representation sequence. SincNet
employs the cardinal sine (sinc) function to implement learnable bandpass
filters, drawing inspiration from the convolutional neural network (CNN). By
comparison to a standard CNN, SincNet has fewer parameters and is more amenable to human
interpretation. Extensive experiments are conducted on the L2-ARCTIC dataset,
which is a publicly-available non-native English speech corpus compiled for
research on CAPT. We find that the sinc filters of SincNet can be adapted
quickly for non-native language learners of different nationalities.
Furthermore, our model achieves mispronunciation detection performance
comparable to that of state-of-the-art E2E MDD models that take standard
handcrafted acoustic features as input. In addition, our model provides
considerable improvements in phone error rate (PER) and diagnosis accuracy.
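The sinc-filter idea can be sketched in a few lines. The module below is a minimal, simplified version of a SincNet-style layer (an assumed illustration, not the authors' implementation): each filter is an ideal band-pass built as the difference of two sinc low-pass responses, and only the cut-off frequencies are learnable; the filter count, kernel size, window choice, and initialization are arbitrary.

```python
import torch
import torch.nn as nn

# Minimal sketch of a SincNet-style learnable band-pass filter bank (assumed).
class SincFilterBank(nn.Module):
    def __init__(self, n_filters=8, kernel_size=251, sample_rate=16000):
        super().__init__()
        low = torch.linspace(30.0, sample_rate / 2 - 200.0, n_filters)
        self.low_hz = nn.Parameter(low)                                # learnable low cut-off (Hz)
        self.band_hz = nn.Parameter(torch.full((n_filters,), 100.0))   # learnable bandwidth (Hz)
        t = (torch.arange(kernel_size) - (kernel_size - 1) / 2) / sample_rate
        self.register_buffer("t", t)
        self.register_buffer("window", torch.hamming_window(kernel_size, periodic=False))

    def forward(self, x):                                              # x: (batch, 1, time)
        f1 = torch.abs(self.low_hz)
        f2 = f1 + torch.abs(self.band_hz)
        t = self.t.unsqueeze(0)
        def low_pass(f):                                               # 2 f sinc(2 f t)
            return 2 * f.unsqueeze(1) * torch.special.sinc(2 * f.unsqueeze(1) * t)
        filters = (low_pass(f2) - low_pass(f1)) * self.window          # band-pass impulse responses
        filters = filters / filters.abs().sum(dim=1, keepdim=True)
        return nn.functional.conv1d(x, filters.unsqueeze(1))

waveform = torch.randn(2, 1, 16000)                                    # one second of fake audio
print(SincFilterBank()(waveform).shape)                                # -> torch.Size([2, 8, 15750])
```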
|
If massive black holes constitute the dark matter in the halo surrounding the
Milky Way, the existence of low mass globular clusters in the halo suggests an
upper limit to their mass, $M_{\rm BH}$. We use a combination of the impulse
approximation and numerical simulations to constrain $M_{\rm BH} \lesssim
10^3 M_\odot$; otherwise several of the halo globular clusters would be heated
to disruption within one half of their lifetime. Taken at face value, this
constraint is three orders of magnitude stronger than the previous limit
provided by disk heating arguments. However, since the initial mass function of
clusters is unknown, we argue that the real constraint is at most an order of
magnitude weaker. Our results rule out cosmological scenarios, such as versions
of the Primordial Baryonic Isocurvature fluctuation model, which invoke the low
Jeans mass at early epochs to create a large population of black holes of mass
$\sim 10^6M_\odot$.
|
We consider four-dimensional $N=2$ supergravity coupled to vector- and
hypermultiplets, where abelian isometries of the quaternionic K\"ahler
hypermultiplet scalar manifold are gauged. Using the recipe given by Meessen
and Ort\'{\i}n in arXiv:1204.0493, we analytically construct a supersymmetric
black hole solution for the case of just one vector multiplet with prepotential
${\cal F}=-i\chi^0\chi^1$, and the universal hypermultiplet. This solution has
a running dilaton, and it interpolates between $\text{AdS}_2\times\text{H}^2$
at the horizon and a hyperscaling-violating type geometry at infinity,
conformal to $\text{AdS}_2\times\text{H}^2$. It carries two magnetic charges
that are completely fixed in terms of the parameters that appear in the Killing
vector used for the gauging. In the second part of the paper, we extend the
work of Bellucci et al. on black hole attractors in gauged supergravity to the
case where also hypermultiplets are present. The attractors are shown to be
governed by an effective potential $V_{\text{eff}}$, which is extremized on the
horizon by all the scalar fields of the theory. Moreover, the entropy is given
by the critical value of $V_{\text{eff}}$. In the limit of vanishing scalar
potential, $V_{\text{eff}}$ reduces (up to a prefactor) to the usual black hole
potential.
|
In this contribution we study the stability boundaries of equilibrium
solutions of second-order oscillator networks with SN-symmetry, looking for
non-degenerate Hopf bifurcations as the time delay between nodes increases. The
remarkably simple stability criterion for synchronous solutions which, in the
case of first-order self-oscillators, states that stability depends only on the
sign of the coupling function derivative, is extended to a generic coupling
function for second-order oscillators. As an application example, the stability
boundaries for an N-node Phase-Locked Loop network are analysed.
|
We investigate generalized quadratic forms with values in the set of rational
integers over quadratic fields. We characterize the real quadratic fields which
admit a positive definite binary generalized form of this type representing
every positive integer. We also show that there are only finitely many such
fields where a ternary generalized form with these properties exists.
|
Transposable data represent interactions between two sets of entities and are
typically represented as a matrix containing the known interaction values.
Additional side information may consist of feature vectors specific to entities
corresponding to the rows and/or columns of such a matrix. Further information
may also be available in the form of interactions or hierarchies among entities
along the same mode (axis). We propose a novel approach for modeling
transposable data with missing interactions given additional side information.
The interactions are modeled as noisy observations from a latent noise free
matrix generated from a matrix-variate Gaussian process. The construction of
row and column covariances using side information provides a flexible mechanism
for specifying a priori knowledge of the row and column correlations in the
data. Further, the use of such a prior combined with the side information
enables predictions for new rows and columns not observed in the training data.
In this work, we combine the matrix-variate Gaussian process model with low
rank constraints. The constrained Gaussian process approach is applied to the
prediction of hidden associations between genes and diseases using a small set
of observed associations as well as prior covariances induced by gene-gene
interaction networks and disease ontologies. The proposed approach is also
applied to recommender systems data which involves predicting the item ratings
of users using known associations as well as prior covariances induced by
social networks. We present experimental results that highlight the performance
of the constrained matrix-variate Gaussian process compared to state-of-the-art
approaches in each domain.
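A minimal sketch of the underlying construction (an assumed illustration, not the paper's low-rank model): row and column covariances built from side-information features are combined into a Kronecker-structured covariance over the vectorized matrix, and missing entries are predicted by Gaussian conditioning on the observed ones; the features, kernel, and noise level are placeholders.

```python
import numpy as np

# Minimal sketch of Kronecker-structured matrix-variate GP prediction (assumed).
rng = np.random.default_rng(1)
m, n = 6, 5
row_feat = rng.normal(size=(m, 3))            # e.g. per-gene feature vectors
col_feat = rng.normal(size=(n, 2))            # e.g. per-disease feature vectors

def rbf(F, ls=1.0):
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

K = np.kron(rbf(row_feat), rbf(col_feat))     # covariance of vec(M), row-major order
truth = rng.multivariate_normal(np.zeros(m * n), K)

obs = rng.choice(m * n, size=15, replace=False)          # observed interactions
mis = np.setdiff1d(np.arange(m * n), obs)                # entries to predict
K_oo = K[np.ix_(obs, obs)] + 1e-2 * np.eye(len(obs))     # add observation noise
K_mo = K[np.ix_(mis, obs)]
pred = K_mo @ np.linalg.solve(K_oo, truth[obs])          # posterior mean
rmse = float(np.sqrt(np.mean((pred - truth[mis]) ** 2)))
print("RMSE on held-out entries:", round(rmse, 3))
```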
|
Experiments and numerical simulations of turbulent $^4$He and $^3$He-B have
established that, at hydrodynamic length scales larger than the average
distance between quantum vortices, the energy spectrum obeys the same 5/3
Kolmogorov law which is observed in the homogeneous isotropic turbulence of
ordinary fluids. The importance of the 5/3 law is that it points to the
existence of a Richardson energy cascade from large eddies to small eddies.
However, there is also evidence of quantum turbulent regimes without Kolmogorov
scaling. This raises the important questions of why, in such regimes, the
Kolmogorov spectrum fails to form, what is the physical nature of turbulence
without energy cascade, and whether hydrodynamical models can account for the
unusual behaviour of turbulent superfluid helium. In this work we describe
simple physical mechanisms which prevent the formation of Kolmogorov scaling in
the thermal counterflow, and analyze the conditions necessary for the emergence
of a quasiclassical regime in quantum turbulence generated by the injection of
vortex rings at low temperatures. Our models justify the hydrodynamical
description of quantum turbulence and shed light on an unexpected regime of
vortex dynamics.
|
We derive and analyze a symmetric interior penalty discontinuous Galerkin
scheme for the approximation of the second-order form of the radiative transfer
equation in slab geometry. Using appropriate trace lemmas, the analysis can be
carried out as for more standard elliptic problems. Supporting examples show
the accuracy and stability of the method also numerically, for different
polynomial degrees. For discretization, we employ quad-tree grids, which allow
for local refinement in phase space, and we show by example that adaptive
methods can efficiently approximate discontinuous solutions. We investigate the
behavior of hierarchical error estimators and error estimators based on local
averaging.
|
In a cosmological context, dust has always been poorly understood. That is
also true for the statistics of GRBs, so we started a program to understand
its role both in relation to GRBs and as a function of z. This paper presents a
composite model in this direction. The model considers a rather generic
distribution of dust in a spiral galaxy and considers the effect of changing
some of the parameters characterizing the dust grains, size in particular. We
first simulated 500 GRBs distributed as the host galaxy mass distribution,
using the Milky Way as a model. If we consider dust with the same properties as
that observed in the Milky Way, we find that, due to absorption, we miss about
10% of the afterglows, assuming we observe the event within about 1 hour or even
within 100 s. In our second set of simulations we placed GRBs randomly inside
giant molecular clouds, considering different kinds of dust inside and outside
the host cloud and the effect of dust sublimation caused by the GRB inside the
clouds. In this case absorption is mainly due to the host cloud and the
physical properties of dust play a strong role. Computations from this model
agree with the hypothesis of host galaxies with extinction curve similar to
that of the Small Magellanic Cloud, whereas the host cloud could be also
characterized by dust with larger grains. To confirm our findings we need a set
of homogeneous infrared observations. The use of upcoming dedicated infrared
telescopes, such as REM, will provide a wealth of new afterglow observations.
|
With the success of pre-trained visual-language (VL) models such as CLIP in
visual representation tasks, transferring pre-trained models to downstream
tasks has become a crucial paradigm. Recently, the prompt tuning paradigm,
which draws inspiration from natural language processing (NLP), has made
significant progress in the VL field. However, preceding methods mainly focus on
constructing prompt templates for text and visual inputs, neglecting the gap in
class label representations between the VL models and downstream tasks. To
address this challenge, we introduce an innovative label alignment method named
\textbf{LAMM}, which can dynamically adjust the category embeddings of
downstream datasets through end-to-end training. Moreover, to achieve a more
appropriate label distribution, we propose a hierarchical loss, encompassing
the alignment of the parameter space, feature space, and logits space. We
conduct experiments on 11 downstream vision datasets and demonstrate that our
method significantly improves the performance of existing multi-modal prompt
learning models in few-shot scenarios, exhibiting an average accuracy
improvement of 2.31% compared to the state-of-the-art methods on 16 shots.
Moreover, our methodology outperforms other prompt tuning methods in continual
learning. Importantly, our method is synergistic
with existing prompt tuning methods and can boost the performance on top of
them. Our code and dataset will be publicly available at
https://github.com/gaojingsheng/LAMM.
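To illustrate the core idea of trainable label representations (a minimal sketch under assumptions, not the released LAMM code), the snippet below treats the class-label embeddings as free parameters, scores frozen image features against them with cosine similarity, and updates only the label embeddings end-to-end; the dimensions, temperature, and random features are placeholders, and in practice the embeddings would be initialised from the text encoder's embeddings of the class names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of trainable class-label embeddings (assumed illustration).
num_classes, dim, batch = 10, 64, 32
label_emb = nn.Parameter(torch.randn(num_classes, dim))   # trainable label embeddings
optimizer = torch.optim.Adam([label_emb], lr=1e-2)

image_feat = torch.randn(batch, dim)                       # stand-in for frozen encoder output
targets = torch.randint(0, num_classes, (batch,))

for step in range(200):
    logits = F.normalize(image_feat, dim=-1) @ F.normalize(label_emb, dim=-1).T
    loss = F.cross_entropy(logits / 0.07, targets)          # CLIP-style temperature
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```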
|
We describe in detail two-parameter nonstandard quantum deformation of D=4
Lorentz algebra $\mathfrak{o}(3,1)$, linked with Jordanian deformation of
$\mathfrak{sl} (2;\mathbb{C})$. Using twist quantization technique we obtain
the explicit formulae for the deformed coproducts and antipodes. Further
extending the considered deformation to the D=4 Poincar\'{e} algebra we obtain
a new Hopf-algebraic deformation of four-dimensional relativistic symmetries
with dimensionless deformation parameter. Finally, we interpret
$\mathfrak{o}(3,1)$ as the D=3 de-Sitter algebra and calculate the contraction
limit $R\to\infty$ ($R$ -- de-Sitter radius) providing explicit Hopf algebra
structure for the quantum deformation of the D=3 Poincar\'{e} algebra (with
masslike deformation parameters), which is the two-parameter light-cone
$\kappa$-deformation of the D=3 Poincar\'{e} symmetry.
|
This work presents HeadArtist for 3D head generation from text descriptions.
With a landmark-guided ControlNet serving as the generative prior, we come up
with an efficient pipeline that optimizes a parameterized 3D head model under
the supervision of the prior distillation itself. We call such a process self
score distillation (SSD). In detail, given a sampled camera pose, we first
render an image and its corresponding landmarks from the head model, and add
some particular level of noise onto the image. The noisy image, landmarks, and
text condition are then fed into the frozen ControlNet twice for noise
prediction. Two different classifier-free guidance (CFG) weights are applied
during these two predictions, and the prediction difference offers a direction
on how the rendered image can better match the text of interest. Experimental
results suggest that our approach delivers high-quality 3D head sculptures with
adequate geometry and photorealistic appearance, significantly outperforming
state-of-the-art methods. We also show that the same pipeline well supports
editing the generated heads, including both geometry deformation and appearance
change.
|
Based on four-wave mixing, a three-mode nonlinear system is proposed. The
single photon blockade is discussed through analytical analysis and numerical
calculation. The analytical analysis shows that the conventional photon
blockade and unconventional photon blockade can be realized at the same time,
and the analytical conditions of the two kinds of blockade are the same. The
numerical results show that the system not only has the maximum average photon
number in the blockade region, but also can have strong photon anti-bunching in
the region with small nonlinear coupling coefficient, which greatly reduces the
experimental difficulty of the system. This optical system, which can realize
the compound photon blockade effect, is helpful for realizing a high-purity
single-photon source.
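For readers who want to see how such blockade is quantified numerically, the following QuTiP sketch computes the steady-state equal-time correlation g2(0) for a single driven Kerr mode, the standard anti-bunching diagnostic; this is an assumed, simplified single-mode illustration, not the three-mode four-wave-mixing system of the abstract, and the parameter values are arbitrary.

```python
import numpy as np
from qutip import destroy, steadystate, expect

# Minimal sketch (assumed illustration): g2(0) << 1 signals photon blockade.
N = 15                        # Fock-space truncation
a = destroy(N)
U, kappa, F = 3.0, 1.0, 0.1   # Kerr nonlinearity, decay rate, drive (arbitrary units)

H = U * a.dag() * a.dag() * a * a + F * (a + a.dag())   # driven Kerr mode on resonance
rho = steadystate(H, [np.sqrt(kappa) * a])

n = expect(a.dag() * a, rho)
g2 = expect(a.dag() * a.dag() * a * a, rho) / n ** 2
print(f"mean photon number {n:.4f}, g2(0) = {g2:.3f}")   # g2(0) << 1 -> anti-bunching
```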
|
In this paper we establish some applications of the Scherer-Hol's theorem for
polynomial matrices. Firstly, we give a representation for polynomial matrices
positive definite on subsets of compact polyhedra. Then we establish a
Putinar-Vasilescu Positivstellensatz for homogeneous and non-homogeneous
polynomial matrices. Next we propose a matrix version of the
P\'olya-Putinar-Vasilescu Positivstellensatz. Finally, we approximate positive
semi-definite polynomial matrices using sums of squares.
|
We study resonant energy transfer in a one-dimensional chain of two to five
atoms by analyzing time-dependent probabilities as a function of their
interatomic distances. The dynamics of the system are first investigated by
including the nearest-neighbour interactions and then accounting for all
next-neighbour interactions. We find that including only nearest-neighbour
interactions in the Hamiltonian of a three-atom chain yields periodic
energy-transfer dynamics, whereas this behavior becomes aperiodic once
all-neighbour interactions are included. For equidistant chains of four and
five atoms the peaks are always irregular, but regular peaks are recovered when
the inner atoms are placed closer together than the atoms at both ends. In this
arrangement, the energy transfer swings between the atoms at both ends, with a
very low probability of finding the excitation on the atom at the center. This
phenomenon resembles a quantum version of Newton's cradle. We also determine
the maximum distance up to which energy could be transferred within the typical
lifetimes of the Rydberg states.
|
In this evolving era of machine learning security, membership inference
attacks have emerged as a potent threat to the confidentiality of sensitive
data. In this attack, adversaries aim to determine whether a particular point
was used during the training of a target model. This paper proposes a new
method to gauge a data point's membership in a model's training set. Instead of
correlating loss with membership, as is traditionally done, we have leveraged
the fact that training examples generally exhibit higher confidence values when
classified into their actual class. During training, the model is essentially
being 'fit' to the training data and might face particular difficulties in
generalization to unseen data. This asymmetry leads to the model achieving
higher confidence on the training data as it exploits the specific patterns and
noise present in the training data. Our proposed approach leverages the
confidence values generated by the machine learning model. These confidence
values provide a probabilistic measure of the model's certainty in its
predictions and can further be used to infer the membership of a given data
point. Additionally, we also introduce another variant of our method that
allows us to carry out this attack without knowing the ground truth (true class)
of a given data point, thus offering an edge over existing label-dependent
attack methods.
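A minimal sketch of a confidence-thresholding attack of this kind (an assumed illustration, not the paper's exact procedure): an over-fitted classifier assigns systematically higher confidence to its own training points, so thresholding the confidence of the predicted class separates members from non-members without any true label. The model, data, and threshold below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch of confidence-based membership inference (assumed illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)   # noisy labels
X_tr, X_out, y_tr, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def confidence(samples):
    # probability assigned to the *predicted* class (no true label required)
    return model.predict_proba(samples).max(axis=1)

threshold = 0.8   # in a real attack this would be calibrated, e.g. on shadow models
print("mean confidence   train: %.3f  held-out: %.3f"
      % (confidence(X_tr).mean(), confidence(X_out).mean()))
print("flagged as member train: %.2f  held-out: %.2f"
      % ((confidence(X_tr) > threshold).mean(), (confidence(X_out) > threshold).mean()))
```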
|
We measure the redshift-dependent luminosity function and the comoving radial
density of galaxies in the Sloan Digital Sky Survey Data Release 1 (SDSS DR1).
Both measurements indicate that the apparent number density of bright galaxies
increases by a factor ~3 as redshift increases from z=0 to z=0.3. This result
is robust to the assumed cosmology, to the details of the K-correction and to
direction on the sky. These observations are most naturally explained by
significant evolution in the luminosity and/or number density of galaxies at
redshifts z < 0.3. Such evolution is also consistent with the steep
number-magnitude counts seen in the APM Galaxy Survey, without the need to
invoke a local underdensity in the galaxy distribution or
magnitude scale errors.
|
The trace of a family of sets $\mathcal{A}$ on a set $X$ is
$\mathcal{A}|_X=\{A\cap X:A\in \mathcal{A}\}$. If $\mathcal{A}$ is a family of
$k$-sets from an $n$-set such that for any $r$-subset $X$ the trace
$\mathcal{A}|_X$ does not contain a maximal chain, then how large can
$\mathcal{A}$ be? Patk\'os conjectured that, for $n$ sufficiently large, the
size of $\mathcal{A}$ is at most $\binom{n-k+r-1}{r-1}$. Our aim in this paper
is to prove this conjecture.
|
We propose a fluctuation analysis to quantify spatial correlations in complex
networks. The approach considers the sequences of degrees along shortest paths
in the networks and quantifies the fluctuations in analogy to time series. In
this work, the Barabasi-Albert (BA) model, the Cayley tree at the percolation
transition, a fractal network model, and examples of real-world networks are
studied. While the fluctuation functions for the BA model show exponential
decay, in the case of the Cayley tree and the fractal network model the
fluctuation functions display a power-law behavior. The fractal network model
comprises long-range anti-correlations. The results suggest that the
fluctuation exponent provides complementary information to the fractal
dimension.
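The construction can be illustrated with a short script (an assumed sketch, not the paper's estimator): degrees are read off along shortest paths of a Barabasi-Albert graph, and the fluctuations of the summed degree profile are recorded as a function of path length, in analogy to fluctuation analysis of time series; the graph size and sampling scheme are placeholders.

```python
import numpy as np
import networkx as nx

# Minimal sketch of degree fluctuations along shortest paths (assumed illustration).
G = nx.barabasi_albert_graph(1000, 2, seed=0)
deg = dict(G.degree())
rng = np.random.default_rng(0)

sums_by_len = {}
for u, v in rng.integers(0, G.number_of_nodes(), size=(3000, 2)):
    path = nx.shortest_path(G, int(u), int(v))        # sequence of nodes on the path
    total = sum(deg[node] for node in path)           # summed degree profile
    sums_by_len.setdefault(len(path), []).append(total)

for length in sorted(sums_by_len):
    values = sums_by_len[length]
    if len(values) >= 50:                             # keep well-sampled lengths only
        print(f"path length {length:2d}: fluctuation (std) {np.std(values):7.2f}")
```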
|
In this work, long-term spatiotemporal changes in rainfall are analysed and
evaluated using whole-year data from Rajasthan, India, at the meteorological
divisional level. In order to determine how the rainfall pattern has changed
over the past 10 years, we examined the data from each of the thirteen tehsils
in the Jaipur district. For the years 2012 through 2021, daily rainfall
information is available from the Indian Meteorological Department (IMD) in
Jaipur. We primarily compare data broken down by tehsil in the Jaipur district
of Rajasthan, India.
|
We calculate the equation of state of a gas of strings at high density in a
large toroidal universe, and use it to determine the cosmological evolution of
background metric and dilaton fields in the entire large radius Hagedorn
regime, (ln S)^{1/d} << R << S^{1/d} (with S the total entropy). The pressure
in this regime is not vanishing but of O(1), while the equation of state is
proportional to volume, which makes our solutions significantly different from
previously published approximate solutions. For example, we are able to
calculate the duration of the high-density "Hagedorn" phase, which increases
exponentially with increasing entropy, S. We go on to discuss the difficulties
of the scenario, quantifying the problems of establishing thermal equilibrium
and producing a large but not too weakly-coupled universe.
|
The Electric Vehicle (EV) Industry has seen extraordinary growth in the last
few years. This is primarily due to an ever increasing awareness of the
detrimental environmental effects of fossil fuel powered vehicles and
availability of inexpensive Lithium-ion batteries (LIBs). In order to safely
deploy these LIBs in Electric Vehicles, certain battery states need to be
constantly monitored to ensure safe and healthy operation. The use of Machine
Learning to estimate battery states such as State-of-Charge and State-of-Health
has become an extremely active area of research. However, limited availability
of open-source diverse datasets has stifled the growth of this field, and is a
problem largely ignored in literature. In this work, we propose a novel method
of time-series battery data augmentation using deep neural networks. We
introduce and analyze the method of using two neural networks working together
to alternatively produce synthetic charging and discharging battery profiles.
One model produces battery charging profiles, and another produces battery
discharging profiles. The proposed approach is evaluated using a few public
battery datasets to illustrate its effectiveness, and our results show the
efficacy of this approach to solve the challenges of limited battery data. We
also test this approach on dynamic Electric Vehicle drive cycles.
|
We introduce a machine-learning approach to predict the complex non-Markovian
dynamics of supercooled liquids from static averaged quantities. Compared to
techniques based on particle propensity, our method is built upon a theoretical
framework that uses as input and output system-averaged quantities, thus being
easier to apply in an experimental context where particle resolved information
is not available. In this work, we train a deep neural network to predict the
self-intermediate scattering function of binary mixtures using their static
structure factor as input. While its performance is excellent for the
temperature range of the training data, the model also retains some
transferability in making decent predictions at temperatures lower than the
ones it was trained for, or when we use it for similar systems. We also develop
an evolutionary strategy that is able to construct a realistic memory function
underlying the observed non-Markovian dynamics. This method lets us conclude
that the memory function of supercooled liquids can be effectively
parameterized as the sum of two stretched exponentials, which physically
corresponds to two dominant relaxation modes.
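The proposed parameterization is easy to reproduce in code. The sketch below (an assumed illustration on synthetic data, not the paper's evolutionary fit) writes the memory function as a sum of two stretched exponentials and recovers the parameters with a standard least-squares fit; the parameter values and noise level are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a sum of two stretched exponentials (assumed illustration).
def two_stretched_exp(t, a1, tau1, b1, a2, tau2, b2):
    return a1 * np.exp(-(t / tau1) ** b1) + a2 * np.exp(-(t / tau2) ** b2)

t = np.logspace(-2, 3, 200)
truth = (1.2, 0.1, 0.9, 0.8, 50.0, 0.6)               # fast and slow relaxation modes
rng = np.random.default_rng(0)
data = two_stretched_exp(t, *truth) * (1 + 0.02 * rng.normal(size=t.size))

p0 = (1.0, 0.05, 1.0, 1.0, 20.0, 0.5)                 # rough initial guess
popt, _ = curve_fit(two_stretched_exp, t, data, p0=p0, bounds=(1e-3, np.inf))
print("fitted (a1, tau1, beta1, a2, tau2, beta2):", np.round(popt, 3))
```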
|
We performed a high energy resolution ARPES investigation of over-doped
Ba0.1K0.9Fe2As2 with T_c= 9 K. The Fermi surface topology of this material is
similar to that of KFe2As2 and differs from that of slightly less doped
Ba0.3K0.7Fe2As2, implying that a Lifshitz transition occurred between x=0.7 and
x=0.9. Except for a vertical node found at the tip of the emerging
off-M-centered Fermi surface pocket lobes, the superconducting gap structure is
similar to that of Ba0.3K0.7Fe2As2, suggesting that the pairing interaction is
not driven by the Fermi surface topology.
|
We present a detailed abundance analysis of high-quality HARPS, UVES and UES
spectra of 95 solar analogs, 33 with and 62 without detected planets. These
spectra have S/N > 350. We investigate the possibility that the possible
presence of terrestrial planets could affect the volatile-to-refractory
abundance ratios. We do not see clear differences between stars with and
without planets, either among the seven solar twins alone or when considering
the whole sample of 95 solar analogs in the metallicity range -0.3< [Fe/H] <
0.5 . We demonstrate that after removing the Galactic chemical evolution
effects the possible differences between stars with and without planets in
these samples practically disappear and the volatile-to-refractory abundance
ratios are very similar to solar values. We investigate the abundance ratios of
volatile and refractory elements versus the condensation temperature of this
sample of solar analogs, in particular paying special attention to those
stars harbouring super-Earth-like planets.
|
We propose an audio-visual spatial-temporal deep neural network with: (1) a
visual block containing a pretrained 2D-CNN followed by a temporal
convolutional network (TCN); (2) an aural block containing several parallel
TCNs; and (3) a leader-follower attentive fusion block combining the
audio-visual information. The TCN with large history coverage enables our model
to exploit spatial-temporal information within a much larger window length
(i.e., 300) than that from the baseline and state-of-the-art methods (i.e., 36
or 48). The fusion block emphasizes the visual modality while exploiting the
noisy aural modality using the inter-modality attention mechanism. To make full
use of the data and alleviate over-fitting, cross-validation is carried out on
the training and validation set. The concordance correlation coefficient (CCC)
centering is used to merge the results from each fold. On the test (validation)
set of the Aff-Wild2 database, the achieved CCC is 0.463 (0.469) for valence
and 0.492 (0.649) for arousal, which significantly outperforms the baseline
method with the corresponding CCC of 0.200 (0.210) and 0.190 (0.230) for
valence and arousal, respectively. The code is available at
https://github.com/sucv/ABAW2.
|
A log generic hypersurface in $\mathbb{P}^n$ with respect to a birational
modification of $\mathbb{P}^n$ is by definition the image of a generic element
of a high power of an ample linear series on the modification. A log
very-generic hypersurface is defined similarly but restricting to line bundles
satisfying a non-resonance condition. Fixing a log resolution of a product
$f=f_1\ldots f_p$ of polynomials, we show that the monodromy conjecture,
relating the motivic zeta function with the complex monodromy, holds for the
tuple $(f_1,\ldots,f_p,g)$ and for the product $fg$, if $g$ is log generic. We
also show that the stronger version of the monodromy conjecture, relating the
motivic zeta function with the Bernstein-Sato ideal, holds for the tuple
$(f_1,\ldots,f_p,g)$ and for the product $fg$, if $g$ is log very-generic.
|
We discuss the Bosonic sector of a class of supersymmetric non-Lorentzian
five-dimensional gauge field theories with an $SU(1,3)$ conformal symmetry.
These actions have a Lagrange multiplier which imposes a novel
$\Omega$-deformed anti-self-dual gauge field constraint. Using a generalised 't
Hooft ansatz we find that the constraint equation linearizes, allowing us to
construct a wide class of explicit solutions. These include finite action
configurations that describe worldlines of anti-instantons which can be created
and annihilated. We also describe the dynamics on the constraint surface.
|
Niobium-based Superconducting Radio Frequency (SRF) cavity performance is
sensitive to localized defects that give rise to quenches at high accelerating
gradients. In order to identify these material defects on bulk Nb surfaces at
their operating frequency and temperature, it is important to develop a new
kind of wide bandwidth microwave microscopy with localized and strong RF
magnetic fields. By taking advantage of write head technology widely used in
the magnetic recording industry, one can obtain ~200 mT RF magnetic fields,
which is on the order of the thermodynamic critical field of Nb, on submicron
length scales on the surface of the superconductor. We have successfully
induced the nonlinear Meissner effect via this magnetic write head probe on a
variety of superconductors. This design should have a high spatial resolution
and is a promising candidate to find localized defects on bulk Nb surfaces and
thin film coatings of interest for accelerator applications.
|
The spectral problem for O(D) symmetric polynomial potentials allows for a
partial algebraic solution after analytical continuation to negative even
dimensions D. This fact is closely related to the disappearance of the
factorial growth of large orders of the perturbation theory at negative even D.
As a consequence, certain quantities constructed from the perturbative
coefficients exhibit fast inverse factorial convergence to the asymptotic
values in the limit of large orders. This quantum mechanical construction can
be generalized to the case of quantum field theory.
|
With rapid advances in neuroimaging techniques, the research on brain
disorder identification has become an emerging area in the data mining
community. Brain disorder data poses many unique challenges for data mining
research. For example, the raw data generated by neuroimaging experiments is in
tensor representations, with typical characteristics of high dimensionality,
structural complexity and nonlinear separability. Furthermore, brain
connectivity networks can be constructed from the tensor data, embedding subtle
interactions between brain regions. Other clinical measures are usually
available reflecting the disease status from different perspectives. It is
expected that integrating complementary information in the tensor data and the
brain network data, and incorporating other clinical parameters will be
potentially transformative for investigating disease mechanisms and for
informing therapeutic interventions. Many research efforts have been devoted to
this area. They have achieved great success in various applications, such as
tensor-based modeling, subgraph pattern mining, and multi-view feature analysis. In
this paper, we review some recent data mining methods that are used for
analyzing brain disorders.
|
We report a fluctuation-driven state of matter that develops near an
accidental degeneracy point of two symmetry-distinct primary phases. Due to
symmetry mixing, this bound-state order exhibits unique signatures,
incompatible with either parent phase. Within a field-theoretical formalism, we
derive the generic phase diagram for a system with bound-state order, study its
response to strain, and evaluate analytic expressions for a specific model. Our
results support the $(d + ig)$-superconducting state as a candidate for
$\mathrm{Sr}_{2}\mathrm{Ru}\mathrm{O}_{4}$: Most noticeably, the derived
strain-dependence is in excellent agreement with recent experiments [Hicks
\textit{et al.} Science (2014) and Grinenko \textit{et al.} arXiv (2020)]. The
evolution above a non-vanishing strain from a joint onset of superconductivity
and time-reversal symmetry-breaking to two split phase transitions provides a
testable prediction for this scenario.
|
We study the conjecture claiming that, over a flexible field, isotropic Chow
groups coincide with numerical Chow groups (with ${\Bbb{F}}_p$-coefficients).
This conjecture is essential for understanding the structure of the isotropic
motivic category and that of the tensor triangulated spectrum of Voevodsky
category of motives. We prove the conjecture for a new range of cases. In
particular, we show that, for a given variety $X$, it holds for sufficiently
large primes $p$. We also prove the $p$-adic analogue. This permits us to
interpret integral numerically trivial classes in $CH(X)$ as
$p^{\infty}$-anisotropic ones.
|
In this article we investigate a system of geometric evolution equations
describing a curvature driven motion of a family of 3D curves in the normal and
binormal directions. Evolving curves may be subject to mutual interactions
of either local or nonlocal character, in which the entire curve may influence
the evolution of other curves. Such evolution and interaction can be found in
applications. We explore the direct Lagrangian approach for treating the
geometric flow of such interacting curves. Using the abstract theory of
nonlinear analytic semi-flows, we are able to prove local existence, uniqueness
and continuation of classical H\"older smooth solutions to the governing system
of nonlinear parabolic equations. Using the finite volume method, we construct
an efficient numerical scheme solving the governing system of nonlinear
parabolic equations. Additionally, a nontrivial tangential velocity is
considered allowing for redistribution of discretization nodes. We also present
several computational studies of the flow combining the normal and binormal
velocity and considering nonlocal interactions.
|