For functions $p(z) = 1 + \sum_{n=1}^\infty p_n z^n$ holomorphic in the unit
disk, satisfying $ {\rm Re}\, p(z) > 0$, we generalize two inequalities proved
by Livingston in 1969 and 1985, and simplify their proofs. One of our results
states that $|p_n - w\,p_k p_{n-k}|\leq 2\max\{1, |1-2w|\}$ for $w\in\mathbb{C}$.
Another result involves certain determinants whose entries are the coefficients
$p_n$. Both results are sharp. As applications we provide a simple proof of a
theorem of J.E. Brown and various inequalities for the coefficients of
holomorphic self-maps of the unit disk.
|
This paper is concerned with a certain aspect of the spectral theory of
unitary operators in a Hilbert space and its aim is to give an explicit
construction of continuous functions of unitary operators. Starting from a
given unitary operator we give a family of sequences of trigonometric
polynomials converging weakly to the complex measures which allow us to define
functions of the operator.
|
The marginal likelihood or evidence in Bayesian statistics contains an
intrinsic penalty for larger model sizes and is a fundamental quantity in
Bayesian model comparison. Over the past two decades, there has been steadily
increasing activity to understand the nature of this penalty in singular
statistical models, building on pioneering work by Sumio Watanabe. Unlike
regular models where the Bayesian information criterion (BIC) encapsulates a
first-order expansion of the logarithm of the marginal likelihood, parameter
counting gets trickier in singular models where a quantity called the real log
canonical threshold (RLCT) summarizes the effective model dimensionality. In
this article, we offer a probabilistic treatment to recover non-asymptotic
versions of established evidence bounds as well as prove a new result based on
the Gibbs variational inequality. In particular, we show that mean-field
variational inference correctly recovers the RLCT for any singular model in its
canonical or normal form. We additionally exhibit sharpness of our bound by
analyzing the dynamics of a general purpose coordinate ascent algorithm (CAVI)
popularly employed in variational inference.
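For context, a minimal sketch of the asymptotic expansion that underlies this discussion (Watanabe's expansion of the log evidence for a singular model; the symbols $\lambda$ for the RLCT and $m$ for its multiplicity are standard notation supplied here for illustration, not taken from this abstract):
$$\log Z_n = \sum_{i=1}^{n} \log p(x_i \mid \theta_0) - \lambda \log n + (m-1)\log\log n + O_p(1),$$
so that in a regular model, where $\lambda = d/2$ and $m = 1$, the penalty reduces to the familiar BIC term $\frac{d}{2}\log n$.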
|
We study the effect of nanoscale precipitates on lattice thermal conduction
in thermoelectric PbTe using a combination of ab-initio phonon calculations and
molecular dynamics. We take into account the effects of mass difference and
change in force constants, and find an enhanced influence of the latter with
increased precipitate concentration. As a consequence, our inclusion of the
change in force constants in the calculation affords a smaller predicted
optimal nano-precipitate size that minimizes the thermal conductivity. These
results suggest that the phonon scattering by nanoprecipitates in
thermoelectric composites could be stronger than previously thought.
|
Dispersionless flat bands are proposed to be a fundamental ingredient to
achieve the various sought after quantum states of matter including
high-temperature superconductivity [1-4] and the fractional quantum Hall effect [5-6].
Materials with such peculiar electronic states, however, are very rare and
often exhibit very complex band structures. Here, we report on the emergence of
a flat band with a possible insulating ground state in the sub-monolayer VSe2 /
Bi2Se3 heterostructure by means of angle-resolved photoemission spectroscopy
and scanning tunneling microscopy. The flat band is dispersionless along the
$k_\parallel$ and $k_z$ momenta, filling the entire Brillouin zone, and it exhibits a
complex circular dichroism signal reversing the sign at several points of the
Brillouin zone. These properties, together with the presence of a Moir\'e
pattern in VSe2, suggest that the flat band is not a trivial disorder or
confinement effect and could even be topologically non-trivial. Another
intriguing finding is that the flat band does not modify the Dirac cone of
Bi2Se3 around the Dirac point. Furthermore, we found that the flat band and the
Dirac surface states of Bi2Se3 have opposite energy shifts with electron
doping. This opens a novel way of controlling the spin texture of photocurrents
as well as the transport properties of the heterostructure. These features make
this flat band remarkably distinguishable from previous findings and our
methodology can be applied to other systems opening a promising pathway to
realize strongly correlated quantum effects in topological materials.
|
Understanding flow and transport of bacteria in porous media is crucial to
technologies such as bioremediation, biomineralization or enhanced oil
recovery. While physicochemical bacteria filtration is well-documented, recent
studies showed that bacterial motility plays a key role in the transport
process. Flow and transport experiments performed in microfluidic chips
containing randomly placed obstacles confirmed that the distributions of
non-motile particles stay compact, whereas for the motile strains, the
distributions are characterized by both significant retention as well as fast
downstream motion. For motile bacteria, the detailed microscopic study of
individual bacteria trajectories reveals two salient features: (i) the
emergence of an active retention process triggered by motility, (ii)
enhancement of dispersion due to the exchange between fast flow channels and
low flow regions in the vicinity of the solid grains. We propose a physical
model based on a continuous time random walk approach. This approach accounts
for bacteria dispersion via variable pore-scale flow velocities through a
Markov model for equidistant particle speeds. Motility of bacteria is modeled
by a two-rate trapping process that accounts for the motion towards and active
trapping at the obstacles. This approach captures the forward tails observed
for the distribution of bacteria displacements, and quantifies an enhanced
hydrodynamic dispersion effect that originates in the interaction between flow
at the pore-scale and bacterial motility. The model reproduces the experimental
observations, and predicts bacteria dispersion and transport at the macroscale.
|
We present a new class of interacting Markov chain Monte Carlo algorithms for
solving numerically discrete-time measure-valued equations. The associated
stochastic processes belong to the class of self-interacting Markov chains. In
contrast to traditional Markov chains, their time evolutions depend on the
occupation measure of their past values. This general methodology allows us to
provide a natural way to sample from a sequence of target probability measures
of increasing complexity. We develop an original theoretical analysis to
analyze the behavior of these iterative algorithms which relies on
measure-valued processes and semigroup techniques. We establish a variety of
convergence results including exponential estimates and a uniform convergence
theorem with respect to the number of target distributions. We also illustrate
these algorithms in the context of Feynman-Kac distribution flows.
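As a purely illustrative sketch (not the authors' algorithm), the Python snippet below shows the defining feature of a self-interacting Markov chain: the proposal occasionally resamples from the occupation measure of the chain's own past. The target density, mixing weight, and step size are hypothetical, and the proposal-asymmetry correction is omitted for brevity.

import numpy as np

def self_interacting_chain(log_target, n_steps, eps=0.1, step=0.5, x0=0.0, seed=0):
    """Toy self-interacting Metropolis chain: with probability eps the proposal
    is drawn from the chain's past values (its occupation measure), otherwise a
    Gaussian random-walk move is used."""
    rng = np.random.default_rng(seed)
    x, history = x0, [x0]
    for _ in range(n_steps):
        if rng.random() < eps and len(history) > 1:
            prop = history[rng.integers(len(history))]  # resample from the past
        else:
            prop = x + step * rng.normal()              # random-walk proposal
        # simplified Metropolis accept/reject (proposal correction omitted)
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        history.append(x)
    return np.array(history)

# Example: sample a standard normal target
samples = self_interacting_chain(lambda z: -0.5 * z**2, n_steps=5000)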
|
We investigate the effects of fibre morphologies, such as a single fibre and a
granular chain with torus beads, on liquid film evolution using experiments and
axisymmetric numerical simulations with a one-fluid formulation. We introduce
a non-dimensional parameter, the 'Bead Ratio' ($BR$), that is, the ratio of the
bead diameter to the film height. When both the $BR$ and the bead's distance
from the nozzle exceed critical values, a selection mechanism leads to the
development of 'dominating' waves, ranging from regularly spaced droplets to
coarsening or droplet merging. Two mechanisms, both influenced by the $BR$ for
a single bead, contribute to droplet merging: the formation of the downstream
healing length, which precipitates the initial stage, and its oscillating behaviour resulting
in droplet merging. When the bead position is away from the healing length far
from the nozzle, the transient simulations capture behavior similar to the
finite amplitude perturbations at the inlet. However, when the bead is within
the healing length, the film evolution has only a coarsening effect on the
droplet spacing. When the bead spacing on a granular chain is less than the
droplet spacing, the Rayleigh-Plateau regime is significantly altered.
|
It is of importance to investigate the significance of a subset of covariates
$W$ for the response $Y$ given covariates $Z$ in regression modeling. To this
end, we propose a significance test for the partial mean independence problem
based on machine learning methods and data splitting. The test statistic
converges to the standard chi-squared distribution under the null hypothesis
while it converges to a normal distribution under the fixed alternative
hypothesis. Power enhancement and algorithm stability are also discussed. If
the null hypothesis is rejected, we propose a partial Generalized Measure of
Correlation (pGMC) to measure the partial mean dependence of $Y$ given $W$
after controlling for the nonlinear effect of $Z$. We present the appealing
theoretical properties of the pGMC and establish the asymptotic normality of
its estimator with the optimal root-$N$ convergence rate. Furthermore, the
valid confidence interval for the pGMC is also derived. As an important special
case when there are no conditional covariates $Z$, we introduce a new test of
overall significance of covariates for the response in a model-free setting.
Numerical studies and real data analysis are also conducted to compare with
existing approaches and to demonstrate the validity and flexibility of our
proposed procedures.
|
We consider the prepotential of Dijkgraaf and Vafa (DV) as one more (and in
fact, singular) example of the Seiberg-Witten (SW) prepotentials and discuss
its properties from this perspective. Most attention is devoted to the issue of
complete system of moduli, which should include not only the sizes of the cuts
(in matrix model interpretation), but also their positions, i.e. the number of
moduli should be almost doubled, as compared to the DV consideration. We
introduce the notion of a regularized DV system (not necessarily related to
matrix model) and discuss the WDVV equations. These definitely hold before
regularization is lifted, but an adequate limiting procedure, preserving all
ingredients of the SW theory, remains to be found.
|
We investigate the use of message-passing algorithms for the problem of
finding the max-weight independent set (MWIS) in a graph. First, we study the
performance of the classical loopy max-product belief propagation. We show that
each fixed point estimate of max-product can be mapped in a natural way to an
extreme point of the LP polytope associated with the MWIS problem. However,
this extreme point may not be the one that maximizes the value of node weights;
the particular extreme point at final convergence depends on the initialization
of max-product. We then show that if max-product is started from the natural
initialization of uninformative messages, it always solves the correct LP -- if
it converges. This result is obtained via a direct analysis of the iterative
algorithm, and cannot be obtained by looking only at fixed points.
The tightness of the LP relaxation is thus necessary for max-product
optimality, but it is not sufficient. Motivated by this observation, we show
that a simple modification of max-product becomes gradient descent on (a
convexified version of) the dual of the LP, and converges to the dual optimum.
We also develop a message-passing algorithm that recovers the primal MWIS
solution from the output of the descent algorithm. We show that the MWIS
estimate obtained using these two algorithms in conjunction is correct when the
graph is bipartite and the MWIS is unique.
Finally, we show that any problem of MAP estimation for probability
distributions over finite domains can be reduced to an MWIS problem. We believe
this reduction will yield new insights and algorithms for MAP estimation.
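For concreteness, a minimal Python sketch of loopy max-product for MWIS in its standard form (messages $\gamma_{i\to j} = \max\{0,\; w_i - \sum_{k \in N(i)\setminus j} \gamma_{k\to i}\}$, node $i$ selected when $w_i$ exceeds the sum of its incoming messages); the graph, weights, and iteration count are illustrative and convergence is not checked.

import numpy as np

def max_product_mwis(adj, w, n_iters=50):
    """Loopy max-product for max-weight independent set.
    adj: dict node -> set of neighbours; w: dict node -> weight.
    Messages start at zero (the 'uninformative' initialization)."""
    msg = {(i, j): 0.0 for i in adj for j in adj[i]}
    for _ in range(n_iters):
        new = {}
        for i in adj:
            for j in adj[i]:
                # gamma_{i->j} = max(0, w_i - sum of incoming messages except from j)
                incoming = sum(msg[(k, i)] for k in adj[i] if k != j)
                new[(i, j)] = max(0.0, w[i] - incoming)
        msg = new
    # Node estimate: include i if its weight exceeds its total incoming messages
    return {i for i in adj if w[i] > sum(msg[(k, i)] for k in adj[i])}

# Toy example: a path graph 0-1-2 with weights favouring the endpoints
adj = {0: {1}, 1: {0, 2}, 2: {1}}
w = {0: 2.0, 1: 1.0, 2: 2.0}
print(max_product_mwis(adj, w))  # expected: {0, 2}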
|
We describe how to formulate Khovanov's functor-valued invariant of tangles
in the language of bordered Heegaard Floer homology. We then give an alternate
construction of Lawrence Roberts' Type D and Type A structures in Khovanov
homology, and his algebra $\mathcal{B}\Gamma_n$, in terms of Khovanov's theory
of modules over the ring $H^n$. We reprove invariance and pairing properties of
Roberts' bordered modules in this language. Along the way, we obtain an
explicit generators-and-relations description of $H^n$ which may be of
independent interest.
|
The problem of estimating a high-dimensional sparse vector
$\boldsymbol{\theta} \in \mathbb{R}^n$ from an observation in i.i.d. Gaussian
noise is considered. The performance is measured using squared-error loss. An
empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior,
is analyzed and compared with the well-known soft-thresholding estimator. We
obtain concentration inequalities for Stein's unbiased risk estimate and
the loss function of both estimators. The results show that for large $n$, both
the risk estimate and the loss function concentrate on deterministic values
close to the true risk.
Depending on the underlying $\boldsymbol{\theta}$, either the proposed
empirical Bayes (eBayes) estimator or soft-thresholding may have smaller loss.
We consider a hybrid estimator that attempts to pick the better of the
soft-thresholding estimator and the eBayes estimator by comparing their risk
estimates. It is shown that: i) the loss of the hybrid estimator concentrates
on the minimum of the losses of the two competing estimators, and ii) the risk
of the hybrid estimator is within order $\frac{1}{\sqrt{n}}$ of the minimum of
the two risks. Simulation results are provided to support the theoretical
results. Finally, we use the eBayes and hybrid estimators as denoisers in the
approximate message passing (AMP) algorithm for compressed sensing, and show
that their performance is superior to the soft-thresholding denoiser in a wide
range of settings.
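An illustrative sketch (with simplified stand-ins, not the paper's exact construction) of the two denoisers and a hybrid that picks whichever has the smaller SURE value; the divergence in SURE is approximated by a finite difference, which is valid here because both denoisers act coordinatewise.

import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def ebayes_bg(y, eps, tau2, sigma2=1.0):
    """Posterior-mean denoiser for a Bernoulli-Gaussian prior:
    theta_i = 0 w.p. 1-eps, theta_i ~ N(0, tau2) w.p. eps."""
    s2 = tau2 + sigma2
    num = eps * np.exp(-0.5 * y**2 / s2) / np.sqrt(s2)
    den = num + (1 - eps) * np.exp(-0.5 * y**2 / sigma2) / np.sqrt(sigma2)
    return (num / den) * (tau2 / s2) * y

def sure(y, denoiser, sigma2=1.0, h=1e-5):
    """Stein's unbiased risk estimate: ||f(y)-y||^2 + 2*sigma2*div f(y) - n*sigma2.
    The divergence uses a finite-difference approximation (coordinatewise denoisers)."""
    est = denoiser(y)
    div = np.sum((denoiser(y + h) - est) / h)
    return np.sum((est - y) ** 2) + 2 * sigma2 * div - y.size * sigma2

def hybrid(y, t=1.5, eps=0.05, tau2=4.0, sigma2=1.0):
    """Apply whichever denoiser has the smaller estimated risk."""
    cands = [lambda z: soft_threshold(z, t), lambda z: ebayes_bg(z, eps, tau2, sigma2)]
    return min(cands, key=lambda f: sure(y, f, sigma2))(y)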
|
User intent classification is an important task in information retrieval. In
this work, we introduce a revised taxonomy of user intent. We take the widely
used differentiation between navigational, transactional and informational
queries as a starting point, and identify three different sub-classes for the
informational queries: instrumental, factual and abstain. The resulting
classification of user queries is more fine-grained, reaches a high level of
consistency between annotators, and can serve as the basis for an effective
automatic classification process. The newly introduced categories help
distinguish between types of queries that a retrieval system could act upon,
for example by prioritizing different types of results in the ranking. We have
used a weak supervision approach based on Snorkel to annotate the ORCAS dataset
according to our new user intent taxonomy, utilising established heuristics and
keywords to construct rules for the prediction of the intent category. We then
present a series of experiments with a variety of machine learning models,
using the labels from the weak supervision stage as training data, but find
that the results produced by Snorkel are not outperformed by these competing
approaches and can be considered state-of-the-art. The advantage of a
rule-based approach like Snorkel's is its efficient deployment in an actual
system, where intent classification would be executed for every query issued.
The resource released with this paper is the ORCAS-I dataset: a labelled
version of the ORCAS click-based dataset of Web queries, which provides 18
million connections to 10 million distinct queries.
|
In this paper, we propose adding enzymes to the propagation environment of a
diffusive molecular communication system as a strategy for mitigating
intersymbol interference. The enzymes form reaction intermediates with
information molecules and then degrade them so that they have a smaller chance
of interfering with future transmissions. We present the reaction-diffusion
dynamics of this proposed system and derive a lower bound expression for the
expected number of molecules observed at the receiver. We justify a
particle-based simulation framework, and present simulation results that show
both the accuracy of our expression and the potential for enzymes to improve
communication performance.
|
We present the first fully two-dimensional attenuation imaging technique
developed for pulse-echo ultrasound systems. Unlike state-of-the-art
techniques, which use line-by-line acquisitions, our method uses steered
emissions to constrain attenuation values at each location with multiple
crossing wave paths, essential to resolve the spatial variations of this tissue
property. At every location, we compute normalized cross-correlations between
the beamformed images that are obtained from emissions at different steering
angles. We demonstrate that their log-amplitudes provide the changes between
attenuation-induced amplitude losses undergone by the different incident waves.
This allows us to formulate a linear tomographic problem, which we efficiently
solve via a Tikhonov-regularized least-squares approach. The performance of our
tomography technique is first validated in numerical examples and then
experimentally demonstrated in custom-made tissue-mimicking phantoms with
inclusions of varying size, echogenicity, and attenuation. We show that this
technique is particularly good at resolving lateral variations in tissue
attenuation and remains accurate in media with varying echogenicity. Based on a
similar principle, this method can be easily combined with computed ultrasound
tomography in echo mode (CUTE) for speed-of-sound imaging, paving the way
towards a multi-modal ultrasound tomography framework characterizing multiple
acoustic tissue properties simultaneously.
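A minimal sketch of the regularized inversion step described here: given a linear forward operator and measured log-amplitude data, a zeroth-order Tikhonov solution is obtained by stacking the regularizer onto the least-squares system. The matrix, data, and regularization strength below are placeholders, not the paper's actual operators.

import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 via an augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Toy usage: recover a 2-pixel attenuation map from 3 crossing-path measurements
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.5, 0.2, 0.75])
print(tikhonov_solve(A, b, lam=1e-2))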
|
The combination of the traditional convolutional network (i.e., an
auto-encoder) and the graph convolutional network has attracted much attention
in clustering, in which the auto-encoder extracts the node attribute feature
and the graph convolutional network captures the topological graph feature.
However, the existing works (i) lack a flexible combination mechanism to
adaptively fuse those two kinds of features for learning the discriminative
representation and (ii) overlook the multi-scale information embedded at
different layers for subsequent cluster assignment, leading to inferior
clustering results. To this end, we propose a novel deep clustering method
named Attention-driven Graph Clustering Network (AGCN). Specifically, AGCN
exploits a heterogeneity-wise fusion module to dynamically fuse the node
attribute feature and the topological graph feature. Moreover, AGCN develops a
scale-wise fusion module to adaptively aggregate the multi-scale features
embedded at different layers. Based on a unified optimization framework, AGCN
can jointly perform feature learning and cluster assignment in an unsupervised
fashion. Compared with the existing deep clustering methods, our method is more
flexible and effective since it comprehensively considers the numerous and
discriminative information embedded in the network and directly produces the
clustering results. Extensive quantitative and qualitative results on commonly
used benchmark datasets validate that our AGCN consistently outperforms
state-of-the-art methods.
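As a hedged illustration of the attention-driven fusion idea (not the paper's exact module), the PyTorch snippet below learns per-sample softmax weights over the two feature sources and forms their weighted combination; the dimensions and the scoring layer are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuse an autoencoder feature z_ae and a graph feature z_gcn with
    learned per-sample attention weights (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 2)  # one score per feature source

    def forward(self, z_ae, z_gcn):
        a = F.softmax(self.score(torch.cat([z_ae, z_gcn], dim=1)), dim=1)
        return a[:, :1] * z_ae + a[:, 1:] * z_gcn

fuse = AttentionFusion(dim=16)
z = fuse(torch.randn(8, 16), torch.randn(8, 16))  # fused representation, shape (8, 16)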
|
Sterile neutrinos with keV masses can constitute all or part of the
cosmological dark matter. The electroweak-singlet fermions, which are usually
introduced to explain the masses of active neutrinos, need not be heavier than
the electroweak scale; if one of them has a keV-scale mass, it can be the
dark-matter particle, and it can also explain the observed pulsar kicks. The
relic sterile neutrinos could be produced by several different mechanisms. If
they originate primarily from the Higgs decays at temperatures of the order of
100 GeV, the resulting dark matter is much ``colder'' than the warm dark matter
produced in neutrino oscillations. The signature of this form of dark matter is
the spectral line from the two-body decay, which can be detected by the X-ray
telescopes. The same X-rays can have other observable manifestations, in
particular, through their effects on the formation of the first stars.
|
We introduce a method to perform imaginary time evolution in a controllable
quantum system using measurements and conditional unitary operations. By
performing a sequence of weak measurements based on the desired Hamiltonian
constructed by a Suzuki-Trotter decomposition, an evolution approximating
imaginary time evolution can be realized. The randomness due to measurement is
corrected using conditional unitary operations, making the evolution
deterministic. Both the measurements required for the algorithm and the
conditional unitary operations can be constructed efficiently. We show that the
algorithm converges only below a specified energy threshold and the complexity
is estimated for some specific problem instances.
|
Partial Quantum Nearest Neighbor Probability Density Functions (PQNNPDF's)
are formulated for the purpose of determining the behavior of quantum mixed
systems in equilibrium in a manner analogous to that provided for classical
multi-component systems. Developments in partial quantum m-tuplet distribution
functions, a generalization of the partial quantum radial distribution
function, along with their relationship to PQNNPDF's, are briefly elucidated.
The calculation of statistical thermodynamic properties of quantum mixtures is
presented for arbitrary material systems. Application to the limiting case of
dilute, weakly correlated quantum gas mixtures has been outlined and the second
virial coefficient is derived. The case of dilute strongly degenerate mixtures
is also addressed, providing an expression for the PQNNPDF applicable in this
thermodynamic regime.
|
We consider charge quantization in a small superconducting grain that is
contacted by a normal-metal electrode and is controlled by a capacitively
coupled gate. At zero temperature and zero conductance $G$ between the grain
and the electrode, the charge $Q$ as a function of the gate voltage $V_g$
changes in steps. The step height is $e$ if $\Delta<E_c$, where $\Delta$ and
$E_c$ are, respectively, the superconducting gap and the charging energy of the
grain. Quantum charge fluctuations at finite conductance remove the
discontinuity in the dependence of $Q$ on $V_g$ and lead to a finite step width
$\propto G^2\Delta$. The resulting shape of the Coulomb blockade staircase is
of a novel type. The grain charge is a continuous function of $V_g$ while the
differential capacitance, $dQ/dV_g$, has discontinuities at certain values of
the gate voltage. We determine analytically the shape of the Coulomb blockade
staircase also at non-zero temperatures.
|
We consider the problem of estimating a $d$-dimensional $s$-sparse discrete
distribution from its samples observed under a $b$-bit communication
constraint. The best-known previous result on $\ell_2$ estimation error for
this problem is $O\left( \frac{s\log\left( {d}/{s}\right)}{n2^b}\right)$.
Surprisingly, we show that when sample size $n$ exceeds a minimum threshold
$n^*(s, d, b)$, we can achieve an $\ell_2$ estimation error of $O\left(
\frac{s}{n2^b}\right)$. This implies that when $n>n^*(s, d, b)$ the convergence
rate does not depend on the ambient dimension $d$ and is the same as knowing
the support of the distribution beforehand.
We next ask the question: ``what is the minimum $n^*(s, d, b)$ that allows
dimension-free convergence?''. To upper bound $n^*(s, d, b)$, we develop novel
localization schemes to accurately and efficiently localize the unknown
support. For the non-interactive setting, we show that $n^*(s, d, b) = O\left(
\min \left( {d^2\log^2 d}/{2^b}, {s^4\log^2 d}/{2^b}\right) \right)$. Moreover,
we connect the problem with non-adaptive group testing and obtain a
polynomial-time estimation scheme when $n = \tilde{\Omega}\left({s^4\log^4
d}/{2^b}\right)$. This group testing based scheme is adaptive to the sparsity
parameter $s$, and hence can be applied without knowing it. For the interactive
setting, we propose a novel tree-based estimation scheme and show that the
minimum sample-size needed to achieve dimension-free convergence can be further
reduced to $n^*(s, d, b) = \tilde{O}\left( {s^2\log^2 d}/{2^b} \right)$.
|
The paper is concerned with a node-based, gradient-driven, continuous adjoint
two-phase flow procedure to optimize the shapes of free-floating vessels and
discusses three topics. First, we aim to convey that elements of a
Cahn-Hilliard formulation should augment the frequently employed
Volume-of-Fluid two-phase flow model to maintain dual consistency. It is seen
that such consistency serves as the basis for a robust primal/adjoint coupling
in practical applications at huge Reynolds and Froude numbers. The second topic
covers different adjoint coupling strategies. A central aspect of the
application is the floating position, particularly the trim and the sinkage,
that interact with a variation of hydrodynamic loads induced by the shape
updates. Other topics addressed refer to the required level of density coupling
and a more straightforward -- yet non-frozen -- adjoint treatment of
turbulence. The third part discusses the computation of a descent direction
within a node-based environment. We will illustrate means to deform both the
volume mesh and the hull shape simultaneously and at the same time obey
technical constraints on the vessel's displacement and its extensions. The
Hilbert-space approach provides smooth shape updates using the established
coding infrastructure of a computational fluid dynamics algorithm and provides
access to managing additional technical constraints. Verification and
validation follow from a submerged 2D cylinder case. The application includes a
full-scale offshore supply vessel at Re=3E+08 and Fn=0.37. Results illustrate
that the fully parallel procedure can automatically reduce the drag of an
already pre-optimized shape by 9-13% within approximately O(10,000-30,000) CPUh
depending on the considered couplings and floatation aspects.
|
Understanding the mechanism of protein secondary structure formation is an
essential part of protein-folding puzzle. Here we describe a simple model for
the formation of the $\beta$-hairpin, motivated by the fact that folding of a
$\beta$-hairpin captures much of the basic physics of protein folding. We argue
that the coupling of ``primary'' backbone stiffness and ``secondary'' contact
formation (similar to the coupling between the ``secondary'' and ``tertiary''
structure in globular proteins), caused for example by side-chain packing
regularities, is responsible for producing an all-or-none 2-state
$\beta$-hairpin formation. We also develop a recursive relation to compute the
phase diagram and single exponential folding/unfolding rate arising via a
dominant transition state.
|
The mixed valence Cr compound NaCr$_2$O$_4$, synthesized using a
high-pressure technique, offers a unique playground for investigating
unconventional physical properties in condensed matter. In the present study,
muon spin rotation/relaxation ($\mu^+$SR) and high-resolution neutron powder
diffraction (NPD) measurements were carried out to clarify the true magnetic
ground state of this interesting compound. Our detailed study brings new
insight, allowing us to confirm the existence of a commensurate
antiferromagnetic order (C-AFM) and to extract its ordered Cr moment $\mu^{\rm
C}_{\rm Cr}=(4.30\pm0.01)\mu_B$. Such a value of the ordered moment is in fact
compatible with the existence of high-spin Cr sites. Further, the value of the
canting angle of the Cr spin axial vector is refined as $\theta_{\rm
c}=(8.8\pm0.5)^{\circ}$. Employing high-quality samples in combination with
time-of-flight NPD, a novel magnetic supercell was also revealed. Such a
supercell displays an incommensurate (IC)-AFM propagation vector
(0~0~${\textstyle \frac{1}{2}-}\delta$), having an ordered moment $\mu^{\rm
IC}_{\rm Cr}=(2.20\pm0.03)\mu_B$. It is suggested that the C-AFM and IC-AFM
modulations are due to itinerant and localized contributions to the magnetic
moment, respectively. Finally, the direct measurement of the magnetic order
parameter provided a value of the critical exponent $\beta = 0.245 \approx
\frac{1}{4}$, suggesting an unconventional critical behavior for the magnetic
phase transition in NaCr$_2$O$_4$.
|
In this paper, we tackle the significant challenge of simultaneous
stabilization in control systems engineering, where the aim is to employ a
single controller to ensure stability across multiple systems. We delve into
both scalar and multivariable scenarios. For the scalar case, we present the
necessary and sufficient conditions for a single controller to stabilize
multiple plants and reformulate these conditions to interpolation constraints,
which expand Ghosh's results by allowing derivative constraints. Furthermore,
we implement a methodology based on a Riccati-type matrix equation, called the
Covariance Extension Equation. This approach enables us to parameterize all
potential solutions using a monic Schur polynomial. Consequently, we extend our
result to the multivariable scenario and derive the necessary and sufficient
conditions for a group of $m\times m$ plants to be simultaneously stabilizable,
which can also be solved by our analytic interpolation method. Finally, we
construct four numerical examples, showcasing the application of our method
across various scenarios encountered in control systems engineering and
highlighting its ability to stabilize diverse systems efficiently and reliably.
|
Dihadron correlations at intermediate $p_T$ revealed novel structures on the
away side of high-$p_T$ trigger particles at RHIC. The away-side correlations in
central Au+Au collisions are significantly broader than in pp and d+Au
collisions and, in a restricted kinematic range, double-peaked away from
$\Delta\phi=\pi$. Three-particle correlations indicate conical emission of the
away-side correlated hadrons at angles independent of associated particle $p_T$,
consistent with formation of Mach-cone shock waves. In this talk we further
investigate the conical emission phenomenon exploiting dihadron correlations as
a function of the trigger particle azimuth from the reaction plane. Such
correlations are sensitive to the collision geometry. We study these
geometrical effects and discuss how they might be used to further our
understanding of the medium created in heavy-ion collisions.
|
Extended Chebyshev spaces that also comprise the constants represent large
families of functions that can be used in real-life modeling or engineering
applications that also involve important (e.g. transcendental) integral or
rational curves and surfaces. Concerning computer aided geometric design, the
unique normalized B-bases of such vector spaces ensure optimal shape preserving
properties, important evaluation or subdivision algorithms and useful shape
parameters. Therefore, we propose global explicit formulas for the entries of
those transformation matrices that map these normalized B-bases to the
traditional (or ordinary) bases of the underlying vector spaces. Then, we also
describe general and ready to use control point configurations for the exact
representation of those traditional integral parametric curves and (hybrid)
surfaces that are specified by coordinate functions given as (products of
separable) linear combinations of ordinary basis functions. The obtained
results are also extended to the control point and weight based exact
description of the rational counterpart of these integral parametric curves and
surfaces. The universal applicability of our methods is presented through
polynomial, trigonometric, hyperbolic or mixed extended Chebyshev vector
spaces.
|
Large Language Models (LLMs) have revolutionized the field of natural
language processing, but they fall short in comprehending biological sequences
such as proteins. To address this challenge, we propose InstructProtein, an
innovative LLM that possesses bidirectional generation capabilities in both
human and protein languages: (i) taking a protein sequence as input to predict
its textual function description and (ii) using natural language to prompt
protein sequence generation. To achieve this, we first pre-train an LLM on both
protein and natural language corpora, enabling it to comprehend individual
languages. Then supervised instruction tuning is employed to facilitate the
alignment of these two distinct languages. Herein, we introduce a knowledge
graph-based instruction generation framework to construct a high-quality
instruction dataset, addressing annotation imbalance and instruction deficits
in existing protein-text corpora. In particular, the instructions inherit the
structural relations between proteins and function annotations in knowledge
graphs, which empowers our model to engage in the causal modeling of protein
functions, akin to the chain-of-thought processes in natural languages.
Extensive experiments on bidirectional protein-text generation tasks show that
InstructProtein outperforms state-of-the-art LLMs by large margins. Moreover,
InstructProtein serves as a pioneering step towards text-based protein function
prediction and sequence design, effectively bridging the gap between protein
and human language understanding.
|
We use STEREO imagery to study the morphology of a shock driven by a fast
coronal mass ejection (CME) launched from the Sun on 2011 March 7. The source
region of the CME is located just to the east of a coronal hole. The CME ejecta
is deflected away from the hole, in contrast with the shock, which readily
expands into the fast outflow from the coronal hole. The result is a CME with
ejecta not well centered within the shock surrounding it. The shock shape
inferred from the imaging is compared with in situ data at 1 AU, where the
shock is observed near Earth by the Wind spacecraft, and at STEREO-A. Shock
normals computed from the in situ data are consistent with the shock morphology
inferred from imaging.
|
Withdrawn by the author in favour of math.GT/0511602
|
The transport of organelles and vesicles in living cells can be well
described by a kinetic tug-of-war model advanced by M\"uller, Klumpp and
Lipowsky, in which the cargo is attached to two motor species, kinesin and
dynein, and the direction of motion is determined by the number of motors that
bind to the track. In recent work [Phys. Rev. E 79, 061918 (2009)], this model
was studied by mean-field theory, and it was found that the tug-of-war model
usually has one, two, or three distinct stable stationary points. However, the
results there were mostly obtained by numerical calculations, since it is hard
to carry out a detailed theoretical study of a two-dimensional nonlinear system.
In this paper, we carry out a further detailed analysis of this model and try
to derive more of its properties theoretically. First, the tug-of-war model is
simplified to a one-dimensional equation. We then claim that the stationary
points of the tug-of-war model correspond to the roots of the simplified
equation, and the stable stationary points correspond to the roots with
positive derivative. Bifurcations occur at parameter values for which the
simplified one-dimensional equation has a root with zero derivative.
Using the simplified equation, not only can more properties of the tug-of-war
model be obtained analytically, but the related numerical calculations also
become more accurate and more efficient. This simplification will be helpful to future
studies of the tug-of-war model.
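A minimal numerical sketch of the root-finding step implied here: locate the roots of a one-dimensional function on a grid by sign changes plus bracketed root finding, then classify them by the sign of the derivative (following the convention above that roots with positive derivative correspond to stable stationary points). The function below is a placeholder, not the actual reduced tug-of-war equation.

import numpy as np
from scipy.optimize import brentq

def classified_roots(f, a, b, n_grid=2000, h=1e-6):
    """Find roots of f on [a, b] via sign changes + Brent's method and label
    each root stable/unstable by the sign of f' (positive derivative = stable,
    per the simplified tug-of-war criterion)."""
    xs = np.linspace(a, b, n_grid)
    fs = np.array([f(x) for x in xs])
    roots = []
    for x0, x1, f0, f1 in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
        if f0 * f1 < 0:
            r = brentq(f, x0, x1)
            slope = (f(r + h) - f(r - h)) / (2 * h)
            roots.append((r, "stable" if slope > 0 else "unstable"))
    return roots

# Placeholder one-dimensional equation with three roots
print(classified_roots(lambda x: x * (x - 0.5) * (x + 0.5), -1.0, 1.0))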
|
The objective of this article is to give an effective algebraic
characterization of normal crossing hypersurfaces in complex manifolds. It is
shown that a hypersurface has normal crossings if and only if it is a free
divisor, has a radical Jacobian ideal and a smooth normalization. Using K.
Saito's theory of free divisors, a characterization in terms of logarithmic
differential forms and vector fields is also found, and finally another one in
terms of the logarithmic residue, using recent results of M. Granger and M.
Schulze.
|
While significant progress has been made on understanding hand-object
interactions in computer vision, it is still very challenging for robots to
perform complex dexterous manipulation. In this paper, we propose a new
platform and pipeline DexMV (Dexterous Manipulation from Videos) for imitation
learning. We design a platform with: (i) a simulation system for complex
dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer
vision system to record large-scale demonstrations of a human hand conducting
the same tasks. In our novel pipeline, we extract 3D hand and object poses from
videos, and propose a novel demonstration translation method to convert human
motion to robot demonstrations. We then apply and benchmark multiple imitation
learning algorithms with the demonstrations. We show that the demonstrations
can indeed improve robot learning by a large margin and solve the complex tasks
which reinforcement learning alone cannot solve. More details can be found in
the project page: https://yzqin.github.io/dexmv
|
Multigrid is one of the most efficient methods for solving large-scale linear
systems that arise from discretized partial differential equations. As a
foundation for multigrid analysis, two-grid theory plays an important role in
motivating and analyzing multigrid algorithms. For symmetric positive definite
problems, the convergence theory of two-grid methods with exact solution of the
Galerkin coarse-grid system is mature, and the convergence factor of exact
two-grid methods can be characterized by an identity. Compared with the exact
case, the convergence theory of inexact two-grid methods (i.e., the coarse-grid
system is solved approximately) is of more practical significance, while it is
still less developed in the literature (one reason is that the error
propagation matrix of inexact coarse-grid correction is not a projection). In
this paper, we develop a theoretical framework for the convergence analysis of
inexact two-grid methods. More specifically, we present two-sided bounds for
the energy norm of the error propagation matrix of inexact two-grid methods,
from which one can readily obtain the identity for exact two-grid convergence.
As an application, we establish a unified convergence theory for multigrid
methods, which allows the coarsest-grid system to be solved approximately.
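To make the setting concrete, here is a minimal sketch of an (in)exact two-grid cycle for a 1D Poisson model problem: damped-Jacobi pre-/post-smoothing and a Galerkin coarse-grid correction, with the coarse system solved either exactly or by a few inner Jacobi sweeps. The grid size, smoother, and transfer operators are standard textbook choices, not taken from the paper.

import numpy as np

def poisson_1d(n):
    """1D Poisson matrix (Dirichlet BCs) on n interior points."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def transfers(n):
    """Linear interpolation P and full-weighting restriction R = 0.5 * P^T."""
    nc = (n - 1) // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    return 0.5 * P.T, P

def two_grid(A, b, x, nu=3, omega=2/3, exact_coarse=True, inner_iters=5):
    """One two-grid cycle: damped-Jacobi smoothing + Galerkin coarse correction."""
    D = np.diag(A)
    smooth = lambda v: v + omega * (b - A @ v) / D
    for _ in range(nu):
        x = smooth(x)
    R, P = transfers(A.shape[0])
    Ac = R @ A @ P                      # Galerkin coarse-grid operator
    rc = R @ (b - A @ x)
    if exact_coarse:
        ec = np.linalg.solve(Ac, rc)    # exact coarse solve
    else:
        ec = np.zeros_like(rc)          # inexact: a few Jacobi sweeps on Ac
        Dc = np.diag(Ac)
        for _ in range(inner_iters):
            ec = ec + omega * (rc - Ac @ ec) / Dc
    x = x + P @ ec
    for _ in range(nu):
        x = smooth(x)
    return x

n = 63
A, b = poisson_1d(n), np.ones(n)
x = np.zeros(n)
for _ in range(10):
    # set exact_coarse=False to experiment with an approximate coarse solve
    x = two_grid(A, b, x, exact_coarse=True)
print(np.linalg.norm(b - A @ x))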
|
This chapter argues that the general philosophy of science should learn
metaphilosophical lessons from the case of metaphysical underdetermination, as
it occurs in non-relativistic quantum mechanics. Section 2 presents the
traditional discussion of metaphysical underdetermination regarding the
individuality and non-individuality of quantum particles. Section 3 discusses
three reactions to it found in the literature: eliminativism about
individuality; conservatism about individuality; eliminativism about objects.
Section 4 wraps it all up with metametaphysical considerations regarding the
epistemology of metaphysics of science.
|
For any stratified pseudomanifold $X$ and any suitable action of the unit
circle $S^1$ on $X$ preserving the strata and the local topological structure,
the orbit space $B=X/S^1$ is again a stratified pseudomanifold and the orbit
map $\pi\colon X\to B$ is a stratified morphism. For each perversity in $X$ this
arrow induces a long exact sequence relating the intersection cohomologies of
$X$ and $B$; we call it the Gysin sequence of $X$ induced by the action.
The third term of the Gysin sequence is called the Gysin term; its cohomology
depends on basic global and local data. Global data concerns the intersection
cohomology of $B$ and the Euler class induced by the action; while local data
is related to the Euler class of the links of the fixed strata. The cohomology
of the Gysin term is calculated through a residual constructible sheaf in $B$
with support in the fixed points set.
In the last modification of this paper, we improved the definition of the
Euler class in intersection cohomology, which is made by recursion and implies
the definition of an Euler perversity. We also verify that the Euler class is
functorial for a suitable family of equivariant morphisms (i.e., not every
equivariant morphism preserves the Euler class).
|
We propose a novel deep-learning-based system for vessel segmentation.
Existing methods using CNNs have mostly relied on local appearances learned on
the regular image grid, without considering the graphical structure of vessel
shape. To address this, we incorporate a graph convolutional network into a
unified CNN architecture, where the final segmentation is inferred by combining
the different types of features. The proposed method can be applied to expand
any type of CNN-based vessel segmentation method to enhance the performance.
Experiments show that the proposed method outperforms the current
state-of-the-art methods on two retinal image datasets as well as a coronary
artery X-ray angiography dataset.
|
The site of Zn production remains an elusive and challenging problem in
astrophysics. A large enhancement of the [Zn/Fe] ratios of very metal-poor
stars in the Galactic halo suggests the death of short-lived massive stars,
i.e., core-collapse supernovae (CCSNe), as one major site for Zn production.
Previous studies have claimed that some specific CCSNe can produce Zn in
sufficient quantities. However, it remains unclear which models can withstand
the critical test of observations. Using a Zn abundance feature similar to that
of r-process elements in faint satellite galaxies, we find evidence that Zn
production took place through much rarer events than canonical CCSNe. This
finding can be unified with the implied decrease in the rate of Zn production
with an increasing metallicity for Galactic halo stars, which narrows down the
major site of Zn production in the early galaxy to magneto-rotational SNe
(MR-SNe). On the other hand, in the later phase of galactic evolution, we
predict that the major Zn-production site switched from MR-SNe to thermonuclear
SNe (SNe Ia). According to this scenario, an accumulation of the contributions
from two types of SNe eventually led to the solar isotope composition of Zn,
which mainly owes $^{66,68}$Zn to MR-SNe and $^{64}$Zn to SNe Ia triggered by
He-detonation. The requirement of Zn production in SNe Ia sheds new light on
the ongoing debate over the scenario for SN Ia progenitors, suggesting that a
He-detonation model might be one major channel for SNe Ia.
|
In strong-coupling superconductors with a short electron mean free path the
self-energy effects in the superconducting order parameter play a major role in
the phonon manifestation of the point-contact spectra at above-gap energies. We
compare the expressions for the nonlinear conductivity of tunnel, ballistic,
and diffusive point-contacts and show that these expressions are similar and
correspond to the measurements of the phonon structure in the point-contact
spectra for the $\pi$-band of MgB$_{2}$.
|
Pulsar glitches provide a unique way to study neutron star microphysics
because short post-glitch dynamics are directly linked to strong frictional
processes on small scales. To illustrate this connection between macroscopic
observables and microphysics, we review calculations of vortex interactions
focusing on Kelvin wave excitations and determine the corresponding mutual
friction strength for realistic microscopic parameters in the inner crust.
These density-dependent crustal coupling profiles are combined with a
simplified treatment of the core coupling and implemented in a three-component
neutron star model to construct a predictive framework for glitch rises. As a
result of the density-dependent dynamics, we find the superfluid to transfer
angular momentum to different parts of the crust and the core on different
timescales. This can cause the spin frequency change to become non-monotonic in
time, allowing for a maximum value much larger than the measured glitch size,
as well as a delay in the recovery. The exact shape of the calculated glitch
rise is strongly dependent on the relative strength between the crust and core
mutual friction, providing the means to probe not only the crustal superfluid
but also the deeper neutron star interior. To demonstrate the potential of this
approach, we compare our predictive model with the first pulse-to-pulse
observations recorded during the December 2016 glitch of the Vela pulsar. Our
analysis suggests that the glitch rise behavior is relatively insensitive to
the crustal mutual friction strength as long as $\mathcal{B} \gtrsim 10^{-3}$,
while being strongly dependent on the core coupling strength, which we find to
be in the range $3 \times 10^{-5} \lesssim \mathcal{B}_{\rm core} \lesssim
10^{-4}$.
|
We present new imaging and spectral analysis of the recently discovered
extended X-ray emission around the high-magnetic-field rotating radio transient
RRAT J1819-1458. We used two Chandra observations, taken on 2008 May 31 and
2011 May 28. The diffuse X-ray emission was detected with a significance of
$\sim 19\sigma$ in the image obtained by combining the two observations. Long-term
spectral variability has not been observed. Possible scenarios for the origin
of this diffuse X-ray emission, further detailed in Camero-Arranz et al.
(2012), are here discussed.
|
Here we present an original study of the effect of hydration on Terahertz
absorption signatures in biomolecules (lactose monohydrate and biotin) and
bioparticles (Bacillus thuringiensis and Bacillus cereus spores). We observe a
"red-shift" in center frequency with increasing hydration in all samples,
consistent with Lorentzian-oscillator behavior. However, the effect of hydration
on linewidth is ambiguous, sometimes increasing the linewidth (consistent with
Lorentzian behavior) and sometimes decreasing it.
|
A scheme for suppressing the correlated noise in signals transmitted over
bosonic Gaussian memory channels is proposed. This is a compromise solution
rather than removing the noise completely. The scheme is based on linear
optical elements: two $N$-port splitters and $N$ phase flips. The
proposed scheme has the advantage that the correlated noise of the memory
channels is greatly suppressed, and the input signal states can be protected
excellently when transmitting over the noise channels. We examine the
suppressing efficiency of the scheme for the correlated noise, both from
quantum information of the states directly transmitted through the noise
channel and also from entanglement teleportation. The phase flips are essential
for suppressing the correlated noise: they transform the role of the memory
factor from completely negative to positive in quantum information
communication. Increasing the number of beam splitters can also improve the
suppressing efficiency of the scheme.
|
An alternative approximation scheme has been used in solving the Schroedinger
equation for the exponential-cosine-screened Coulomb potential. The bound state
energies for various eigenstates and the corresponding wave functions are
obtained analytically up to the second perturbation term.
|
Two-dimensional electronic spectroscopy and transient grating measurements
were performed, for the first time, on nitrogen-vacancy centers in diamond.
These measurements reveal energy transfer and vibrational pathways with
consequences for spin coherence.
|
Initializing classical data in a quantum device is an essential step in many
quantum algorithms. As a consequence of measurement and noisy operations, some
algorithms need to reinitialize the prepared state several times during its
execution. In this work, we propose a quantum state preparation algorithm
called CVO-QRAM with computational cost $O(kM)$, where $M$ is the number of nonzero
probability amplitudes and $k$ is the maximum number of bits with value 1 in
the patterns to be stored. The proposed algorithm can be an alternative to
create sparse states in future NISQ devices.
|
In this paper we propose and study a technique to reduce the number of
parameters and computation time in fully-connected layers of neural networks
using Kronecker products, at a mild cost in prediction quality. The
technique proceeds by replacing Fully-Connected layers with so-called Kronecker
Fully-Connected layers, where the weight matrices of the FC layers are
approximated by linear combinations of multiple Kronecker products of smaller
matrices. In particular, given a model trained on SVHN dataset, we are able to
construct a new KFC model with 73\% reduction in total number of parameters,
while the error only rises mildly. In contrast, using low-rank method can only
achieve 35\% reduction in total number of parameters given similar quality
degradation allowance. If we only compare the KFC layer with its counterpart
fully-connected layer, the reduction in the number of parameters exceeds 99\%.
The amount of computation is also reduced as we replace matrix product of the
large matrices in FC layers with matrix products of a few smaller matrices in
KFC layers. Further experiments on MNIST, SVHN and some Chinese Character
recognition models also demonstrate effectiveness of our technique.
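As a hedged illustration of the core idea, the snippet below approximates a fully-connected weight matrix by a sum of Kronecker products using the standard rearrangement-plus-SVD construction (nearest Kronecker product, Van Loan and Pitsianis) and applies the result without ever forming the large matrix. The shapes and the number of Kronecker terms are placeholders; the training procedure of the KFC layers is not reproduced.

import numpy as np

def kron_approx(W, m1, n1, m2, n2, rank=1):
    """Approximate W (m1*m2 x n1*n2) by sum_r kron(A_r, B_r), with A_r: m1 x n1
    and B_r: m2 x n2, via an SVD of the rearranged matrix."""
    # Rearrange W so each row holds one (i1, j1) block of size m2 x n2
    blocks = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(blocks, full_matrices=False)
    terms = []
    for r in range(rank):
        A = np.sqrt(s[r]) * U[:, r].reshape(m1, n1)
        B = np.sqrt(s[r]) * Vt[r].reshape(m2, n2)
        terms.append((A, B))
    return terms

def kfc_matvec(terms, x, n1, n2):
    """Apply sum_r kron(A_r, B_r) to x without forming the big matrix,
    using kron(A, B) @ x == (A @ X @ B.T).ravel() with X = x.reshape(n1, n2)."""
    X = x.reshape(n1, n2)
    return sum((A @ X @ B.T).ravel() for A, B in terms)

# Sanity check against the explicit Kronecker product
W = np.random.randn(6 * 4, 5 * 3)
terms = kron_approx(W, 6, 5, 4, 3, rank=2)
x = np.random.randn(5 * 3)
approx_full = sum(np.kron(A, B) for A, B in terms)
assert np.allclose(kfc_matvec(terms, x, 5, 3), approx_full @ x)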
|
This work introduces TrimTuner, the first system for optimizing machine
learning jobs in the cloud to exploit sub-sampling techniques to reduce the
cost of the optimization process while taking into account user-specified
constraints. TrimTuner jointly optimizes the cloud and application-specific
parameters and, unlike state of the art works for cloud optimization, eschews
the need to train the model with the full training set every time a new
configuration is sampled. Indeed, by leveraging sub-sampling techniques and
data-sets that are up to 60x smaller than the original one, we show that
TrimTuner can reduce the cost of the optimization process by up to 50x.
Further, TrimTuner speeds up the recommendation process by 65x with respect to
state of the art techniques for hyper-parameter optimization that use
sub-sampling techniques. The reasons for this improvement are twofold: i) a
novel domain specific heuristic that reduces the number of configurations for
which the acquisition function has to be evaluated; ii) the adoption of an
ensemble of decision trees that enables boosting the speed of the
recommendation process by one additional order of magnitude.
|
The increasing demand for target tracking, environmental surveys,
surveillance and mapping requires multi-axis gimbal systems with high tracking
and stabilization performance. In this paper, first, a computed torque model is
generated to estimate the complex disturbances acting on the system. Then, two
different control strategies based on active disturbance rejection control
(ADRC) and computed torque model are implemented on a two-axis gimbal system.
The purpose is to improve the robustness, environmental adaptability and
tracking accuracy of the system and reduce the tuning effort of ADRC by
integrating a neural network (NN) based disturbance compensator (NN assisted
ADRC). In the second control strategy, NN is replaced with a computed torque
model (CTM assisted ADRC), whose inputs come from plant outputs. The simulation
results show that NN- and CTM-assisted ADRC structures can decrease mean
tracking errors by up to 85.4% and 40.8%, respectively.
|
Recent exact analytical results developed for the random number generators
with taps are reported. These results are applicable to a wide class of
algorithms, including random walks, cluster algorithms, and Ising models. Practical
considerations on the improvement of the quality of random numbers are
discussed as well.
|
The shear and the bulk viscosities of the hadron gas at low temperatures are
studied in the model with constant elastic cross sections being relativistic
generalization of the hard-sphere model. One effective radius $r=0.4$ fm is
chosen for all elastic collisions. Only elastic collisions are considered, which
are supposed to be dominant at temperatures $T\leq 120$-$140$ MeV. The
calculations are done in the framework of the Boltzmann equation with the
Boltzmann statistics distribution functions and the ideal gas equation of
state. The applicability of these approximations is discussed. It is found that
the bulk viscosity of the hadron gas is much larger than the bulk viscosity of
the pion gas while the shear viscosity is found to be less sensitive to the
mass spectrum of hadrons. The constant cross sections and the Boltzmann
statistics approximation allow one not only to conduct precise numerical
calculations of transport coefficients in the hadron gas but also to obtain
some relatively simple relativistic analytical closed-form expressions. Namely,
the correct single-component first-order shear viscosity coefficient is found.
The single-component first-order nonequilibrium distribution function, some
analytical results for the binary mixture and expressions for mean collision
rates, mean free paths and times are presented. Comparison with some previous
calculations for the hadron gas and the pion gas is done too. This paper is the
first step towards calculations with inelastic processes included.
|
Starting from the Kirchhoff-Huygens representation and Duhamel's principle of
time-domain wave equations, we propose novel butterfly-compressed Hadamard
integrators for self-adjoint wave equations in both time and frequency domain
in an inhomogeneous medium. First, we incorporate the leading term of
Hadamard's ansatz into the Kirchhoff-Huygens representation to develop a
short-time valid propagator. Second, using the Fourier transform in time, we
derive the corresponding Eulerian short-time propagator in frequency domain; on
top of this propagator, we further develop a time-frequency-time (TFT) method
for the Cauchy problem of time-domain wave equations. Third, we further propose
the time-frequency-time-frequency (TFTF) method for the corresponding
point-source Helmholtz equation, which provides Green's functions of the
Helmholtz equation for all angular frequencies within a given frequency band.
Fourth, to implement TFT and TFTF methods efficiently, we introduce butterfly
algorithms to compress oscillatory integral kernels at different frequencies.
As a result, the proposed methods can construct wave field beyond caustics
implicitly and advance spatially overturning waves in time naturally with
quasi-optimal computational complexity and memory usage. Furthermore, once
constructed, the Hadamard integrators can be employed to solve both time-domain
wave equations with various initial conditions and frequency-domain wave
equations with different point sources. Numerical examples for two-dimensional
wave equations illustrate the accuracy and efficiency of the proposed methods.
|
We prove that in space-times a velocity field that is shear-, vorticity- and
acceleration-free, if any, is unique up to reflection, with these exceptions:
generalized Robertson-Walker space-times whose space sub-manifold is warped,
and twisted space-times (the scale function is space-time dependent) whose
space sub-manifold is doubly twisted. In space-time dimension n = 4, the Ricci
and the Weyl tensors are specified, and the Einstein equations yield a mixture
of two perfect fluids.
|
Fine-tuning on task-specific question-answer pairs is a predominant method
for enhancing the performance of instruction-tuned large language models (LLMs)
on downstream tasks. However, in certain specialized domains, such as
healthcare or harmless content generation, it is nearly impossible to obtain a
large volume of high-quality data that matches the downstream distribution. To
improve the performance of LLMs in data-scarce domains with domain-mismatched
data, we re-evaluated the Transformer architecture and discovered that not all
parameter updates during fine-tuning contribute positively to downstream
performance. Our analysis reveals that within the self-attention and
feed-forward networks, only the fine-tuned attention parameters are
particularly beneficial when the training set's distribution does not fully
align with the test set. Based on this insight, we propose an effective
inference-time intervention method: Training All parameters but Inferring with
only Attention (\trainallInfAttn). We empirically validate \trainallInfAttn
using two general instruction-tuning datasets and evaluate it on seven
downstream tasks involving math, reasoning, and knowledge understanding across
LLMs of different parameter sizes and fine-tuning techniques. Our comprehensive
experiments demonstrate that \trainallInfAttn achieves superior improvements
compared to both the fully fine-tuned model and the base model in most
scenarios, with significant performance gains. The high tolerance of
\trainallInfAttn to data mismatches makes it resistant to jailbreaking tuning
and enhances specialized tasks using general data.
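To make the inference-time intervention concrete, the following is a minimal, hypothetical sketch (not the authors' released implementation): after fine-tuning all parameters, the model used at inference keeps only the fine-tuned self-attention weights and restores every other parameter to the base model. The attention sub-module names are placeholders that depend on the architecture.

```python
# Hypothetical sketch of "train all, infer with attention only": merge the
# fine-tuned attention weights into an otherwise base-model state dict.
import torch

def merge_attention_only(base_state, tuned_state,
                         attn_keys=("q_proj", "k_proj", "v_proj", "o_proj")):
    """Return a state dict with tuned attention weights and base everything else."""
    merged = {}
    for name, base_param in base_state.items():
        use_tuned = name in tuned_state and any(k in name for k in attn_keys)
        merged[name] = tuned_state[name].clone() if use_tuned else base_param.clone()
    return merged

# Usage (model loading is illustrative):
# base = AutoModelForCausalLM.from_pretrained("base-model")
# tuned = AutoModelForCausalLM.from_pretrained("fully-fine-tuned-model")
# base.load_state_dict(merge_attention_only(base.state_dict(), tuned.state_dict()))
```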
|
In a recent letter (Phys.Rev.Lett. 82, 1526 (1999)), van den Brom and van
Ruitenbeek found a pronounced suppression of the shot noise in atom-size gold
contacts with conductances near integer multiples of $G_0=2e^2/h$, revealing
unambiguously the quantized nature of the electronic transport. However, the ad
hoc model they introduced to describe the contribution of partially-open
conductance channels to the shot noise is unable to fit either the maxima or
minima of their shot noise data. Here we point out that a model of
quantum-confined electrons with disorder quantitatively reproduces their
measurements.
|
The virtualization and softwarization of modern computer networks introduce interesting new opportunities for a more flexible placement of network
functions and middleboxes (firewalls, proxies, traffic optimizers, virtual
switches, etc.). This paper studies approximation algorithms for the
incremental deployment of a minimum number of middleboxes at optimal locations,
such that capacity constraints at the middleboxes and length constraints on the
communication routes are respected. Our main contribution is a new, purely
combinatorial and rigorous proof for the submodularity of the function
maximizing the number of communication requests that can be served by a given
set of middleboxes. Our proof allows us to devise a deterministic approximation
algorithm which uses an augmenting path approach to compute the submodular
function. This algorithm does not require any changes to the locations of existing middleboxes or the preemption of previously served communication pairs when additional middleboxes are deployed; previously accepted communication pairs can simply be handed over to another middlebox. It is hence particularly attractive for incremental deployments. We prove that the achieved polynomial-time approximation bound is optimal, unless P = NP. This paper also
initiates the study of a weighted problem variant, in which entire groups of
nodes need to communicate via a middlebox (e.g., a multiplexer or a shared
object), possibly at different rates. We present an LP relaxation and
randomized rounding algorithm for this problem, leveraging an interesting
connection to scheduling.
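As a rough illustration of how the submodularity result can be exploited, the sketch below runs the standard greedy heuristic for monotone submodular maximization; the coverage function (which the paper evaluates with an augmenting-path approach) is abstracted behind a callback, and the names and budget semantics are assumptions for illustration only.

```python
# Greedy incremental placement under an assumed monotone submodular coverage
# function served(S) = number of communication requests served by middlebox set S.
def greedy_placement(candidate_locations, budget, served):
    """candidate_locations: set of locations; served: callable on sets of locations."""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0
        for loc in candidate_locations - chosen:
            gain = served(chosen | {loc}) - served(chosen)   # marginal coverage gain
            if gain > best_gain:
                best, best_gain = loc, gain
        if best is None:        # no remaining location serves additional requests
            break
        chosen.add(best)
    return chosen
```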
|
We consider a single non-holonomic Dubins-like robot traveling with a
constant longitudinal speed in an a priori unknown and unsteady planar
environment. The robot should detect, locate, and track the boundary of a
dynamic environmental scalar field. The field is measured by an on-board sensor
in a point-wise fashion at the robot's location. The focus is on unsteady
boundaries that evolve over time in an arbitrary fashion, including
deformations, i.e., changes of shapes and sizes. We present a sliding mode
control method for localizing and tracking such boundaries: the robot is
steered to the boundary and circulates in its close proximity afterwards. The
proposed control algorithm does not require estimation of the spatial gradient
of the field and is non-demanding with respect to both computation and motion.
The paper offers the proofs of technical facts required for rigorous
justification of non-local convergence of the proposed control law, as well as
theoretical illustrations of its performance in specific scenarios.
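For intuition only, here is a toy sliding-mode-style switching law of the kind used for boundary tracking; it is not the paper's control algorithm, and the switching surface, gain, and saturation value are illustrative assumptions. The command uses only the point-wise field measurement and its rate of change, with no spatial gradient estimation.

```python
# Toy bang-bang turning command driven by a switching variable built from the
# on-board field measurement and its time derivative (assumed available from
# finite differencing of successive samples).
import numpy as np

def heading_rate(field_value, field_rate, level_set=0.0, c=1.0, u_max=1.0):
    s = field_rate + c * (field_value - level_set)   # switching variable
    return -u_max * np.sign(s)                        # saturated turning rate
```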
|
We present first quantitative results of the surface magnetic field
measurements in selected M-dwarfs based on detailed spectra synthesis conducted
simultaneously in atomic and molecular lines of the FeH Wing-Ford
$F^4\,\Delta-X^4\,\Delta$ transitions. A modified version of the Molecular
Zeeman Library (MZL) was used to compute Land\'e g-factors for FeH lines in
different Hund's cases. Magnetic spectra synthesis was performed with the
Synmast code. We show that implementing different Hund's cases for FeH states depending on their quantum numbers allows us to achieve a good fit to
the majority of lines in a sunspot spectrum in an automatic regime. Strong
magnetic fields are confirmed via the modelling of atomic and FeH lines for
three M-dwarfs YZ~CMi, EV~Lac, and AD~Leo, but their mean intensities are found
to be systematically lower than previously reported. A much weaker field
($1.7-2$~kG against $2.7$~kG) is required to fit FeH lines in the spectra of
GJ~1224. Our method allows us to measure average magnetic fields in very
low-mass stars from polarized radiative transfer. The obtained results indicate
that the fields reported in earlier works were probably overestimated by about
$15-30$\%. Higher quality observations are needed for more definite results.
|
We show that it is now possible to image optically thick Ly$\alpha$ clouds in fluorescent Ly$\alpha$ emission with a relatively long ($\sim 20$ hr) integration on a large ($\sim 10$ m) telescope. For a broad range of column densities ($N\gtrsim 10^{18.5}\,{\rm cm}^{-2}$), the flux of Ly$\alpha$ photons from recombination cascades is equal to $\sim 0.6$ times the flux of ionizing photons, independent of the geometry of the cloud. Additional Ly$\alpha$ photons are produced by collisional excitations when these are the cloud's primary cooling mechanism. For typical physical conditions expected in optically thick clouds, these mechanisms together lead to a Ly$\alpha$ emission flux that is $\sim (2/3)\langle\nu\rangle/\nu_0$ times the flux of ionizing photons, where $\langle\nu\rangle$ is the mean frequency of ionizing background photons and $\nu_0$ is the Lyman limit frequency. Hence, measurement of the surface brightness from an optically thick cloud (known to exist, e.g., from a quasar absorption line) gives a direct measure of the energy in the ionizing radiation background. Moreover, in the same long-slit spectrum one could hope to detect emission from $\sim 200$ other Ly$\alpha$ systems. Such detections would allow one to make a two-dimensional map of the Ly$\alpha$ forest. By taking a series of such spectra, one could map the forest in three dimensions, revealing structure in the high-redshift universe.
|
The aim of this paper is to study a system of three equations for ionized gas
dynamics at high temperature, in one spatial dimension. In addition to the mass
density, pressure and particle velocity, a further quantity is needed, namely,
the degree of ionization. The system is supplemented by the first and second laws of thermodynamics and by an equation of state; all of them involve the degree of ionization. Finally, under the assumption of thermal equilibrium, the
system is closed by requiring Saha's ionization equation. The geometric
properties of the system are rather complicated: in particular, we prove the
loss of convexity (genuine nonlinearity) for both forward and backward
characteristic fields, and hence the loss of concavity of the physical entropy.
This takes place in a small bounded region, which we are able to characterize
by numerical estimates on the state functions. The structure of shock waves is
also studied by a detailed analysis of the Hugoniot locus, which will be used
in a forthcoming paper to study the shock tube problem.
|
Diffeomorphic matching (only one of several names for this technique) is a
technique for non-rigid registration of curves and surfaces in which the curve
or surface is embedded in the flow of a time-series of vector fields. One seeks
the flow between two topologically-equivalent curves or surfaces which
minimises some metric defined on the vector fields, \emph{i.e.} the flow
closest to the identity in some sense.
In this paper, we describe a new particle-mesh discretisation for the
evolution of the geodesic flow and the embedded shape. Particle-mesh algorithms
are very natural for this problem because Lagrangian particles (particles
moving with the flow) can represent the movement of the shape whereas the
vector field is Eulerian and hence best represented on a static mesh. We
explain the derivation of the method, and prove conservation properties: the
discrete method has a set of conserved momenta corresponding to the
particle-relabelling symmetry which converge to conserved quantities in the
continuous problem. We also introduce a new discretisation for the geometric
current matching condition of (Vaillant and Glaunes, 2005). We illustrate the
method and the derived properties with numerical examples.
|
We investigate to what degree the steady laminar flow in typical micro- and
mini-channels with offset strip fin arrays can be described as developed on a
macro-scale level, in the presence of channel entrance and side-wall effects. To this end, the extent of the developed and quasi-developed flow regions in such
channels is determined through large-scale numerical flow simulations. It is
observed that the onset point of developed flow increases linearly with the
Reynolds number and channel width, but remains small relative to the total
channel length. Further, we find that the local macro-scale pressure gradient
and closure force for the (double) volume-averaged Navier-Stokes equations are
adequately modeled by a developed friction factor correlation, as typical
discrepancies are below 15% in both the developed and developing flow region.
We show that these findings can be attributed to the eigenvalues and mode
amplitudes which characterize the quasi-developed flow in the entrance region
of the channel. Finally, we discuss the influence of the channel side walls on
the flow periodicity, the mass flow rate, as well as the macro-scale velocity
profile, which we capture by a displacement factor and slip length coefficient.
Our findings are supported by extensive numerical data for fin height-to-length
ratios up to 1, fin pitch-to-length ratios up to 0.5, and channel aspect ratios
between 1/5 and 1/17, covering Reynolds numbers from 28 to 1224.
|
We investigate the possibility of detecting in redshift surveys a
hemispherical power asymmetry similar to that first reported in CMB
observations. We assume the hemispherical asymmetry arises from a linear
gradient in comoving coordinates in the perturbation amplitude. We predict the
resulting clustering of galaxy or galaxy cluster tracers using an excursion set
approach; doing so accounts for the variation of both the underlying clustering
and the tracer bias. Based on the predicted variation of the clustering of
tracers, we perform a Fisher matrix forecast of the galaxy clustering amplitude
and calculate the statistical significance for ideal surveys and planned
surveys. The results indicate that the DESI galaxy survey would be able to
detect this signal with higher than $3\sigma$ significance if the asymmetry
does exist. We also investigate the amplitude and scale dependence of the above
result. The DESI galaxy survey can probe dipole amplitudes higher than 0.04, which corresponds to a $\pm4\%$ difference in the temperature fluctuation along and opposite the dipole direction, at least at the $2\sigma$ level.
Additionally, we investigate a modulation of the power spectrum that exhibits
asymmetry only for large scales. This modulation is potentially detectable. For
Milky Way galaxy mass tracers, the scale-dependent modulation yields a larger
change in the large scale power spectrum than does a scale-independent
modulation, because the former does not alter the bias.
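The following is a schematic sketch of the kind of Fisher forecast described above: the error on the dipole amplitude is estimated from the response of the tracer power spectrum to that amplitude, summed over modes. The power-spectrum model and mode counts are placeholders, not the survey-specific inputs used in the paper.

```python
# Schematic single-parameter Fisher forecast for a dipole amplitude A, using a
# Gaussian mode-counting approximation; P_model(k, A) and N_modes are assumed inputs.
import numpy as np

def forecast_sigma_A(k_bins, N_modes, P_model, A_fid=0.0, dA=1e-3):
    F = 0.0
    for k, Nk in zip(k_bins, N_modes):
        P0 = P_model(k, A_fid)
        dPdA = (P_model(k, A_fid + dA) - P_model(k, A_fid - dA)) / (2 * dA)
        F += 0.5 * Nk * (dPdA / P0) ** 2       # Fisher information from this k bin
    return 1.0 / np.sqrt(F)                    # forecast 1-sigma error on A
```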
|
Rotating black holes exhibit a remarkable set of hidden symmetries near their
horizon. These hidden symmetries have been shown to determine phenomena such as absorption and scattering, superradiance, and more recently tidal deformations, also
known as Love numbers. They have also led to a proposal for a dual thermal CFT
with left and right movers recovering the entropy of the black hole.
In this work we provide a constructive explanation of these hidden symmetries
via analytic continuation to Klein signature. We first show that the
near-horizon region of extremal black holes is a Kleinian static solution with
mass $M$ and NUT charge $N$. We then analyze the self-dual solution, namely a
Kerr black hole with a NUT charge $N=\pm M$. Remarkably, the self-dual solution
is self-similar to its near-horizon region and hence approximate symmetries
become exact: in particular, the original two isometries of Kerr are promoted
to seven exact symmetries embedded in a conformal algebra. We analyze its full
conformal group in Kleinian twistor space, where a breaking $SO(4,2) \to
SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ occurs due to the insertion of a
preferred time direction for the black hole. Finally, we show that the spectrum
of the self-dual black hole is integrable and that the eigenvalue problem can
be mapped exactly to the Hydrogen atom where the wavefunction is solved in
terms of elementary polynomials. Perturbing to astrophysical black holes with
$N=0$, we obtain a hyperfine splitting structure.
|
This paper examines the controllability for quantum control systems with
SU(1,1) dynamical symmetry, namely, the ability to use some electromagnetic
field to redirect the quantum system toward a desired evolution. The problem is
formalized as the control of a right invariant bilinear system evolving on the
Lie group SU(1,1) of two dimensional special pseudo-unitary matrices. It is
proved that the elliptic condition of the total Hamiltonian is both sufficient
and necessary for the controllability. Conditions are also given for small time
local controllability and strong controllability. The results obtained are also
valid for the control systems on the Lie groups SO(2,1) and SL(2,R).
|
Visual tracking is challenging due to image variations caused by various
factors, such as object deformation, scale change, illumination change and
occlusion. Given the superior tracking performance of the human visual system (HVS), a well-designed biologically inspired model is expected to improve
computer visual tracking. This is however a difficult task due to the
incomplete understanding of neurons' working mechanisms in the HVS. This paper aims to address this challenge by analyzing the visual cognitive mechanism of the ventral stream in the visual cortex: the proposed model simulates shallow neurons (S1
units and C1 units) to extract low-level biologically inspired features for the
target appearance and imitates an advanced learning mechanism (S2 units and C2
units) to combine generative and discriminative models for target location. In
addition, fast Gabor approximation (FGA) and fast Fourier transform (FFT) are
adopted for real-time learning and detection in this framework. Extensive
experiments on large-scale benchmark datasets show that the proposed
biologically inspired tracker performs favorably against state-of-the-art
methods in terms of efficiency, accuracy, and robustness. The acceleration technique in particular ensures that the proposed biologically inspired tracker (BIT) maintains a speed of approximately 45 frames per second.
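As a small illustration of the FFT acceleration idea, the sketch below computes a dense correlation response of an appearance template over a search window in the Fourier domain; this is a generic building block consistent with the description above, not the tracker's actual code.

```python
# Dense template correlation via FFT: the response map is obtained from the
# product of the window's spectrum with the conjugate spectrum of the template.
import numpy as np

def fft_correlate(search_window, template):
    H, W = search_window.shape
    F_win = np.fft.fft2(search_window)
    F_tpl = np.fft.fft2(template, s=(H, W))          # zero-pad template to window size
    response = np.real(np.fft.ifft2(F_win * np.conj(F_tpl)))
    return response                                   # argmax gives candidate location
```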
|
We show that there are $n!$ matchings on $2n$ points without, so called, left
(neighbor) nestings. We also define a set of naturally labeled $(2+2)$-free
posets, and show that there are $n!$ such posets on $n$ elements. Our work was
inspired by Bousquet-M\'elou, Claesson, Dukes and Kitaev [J. Combin. Theory
Ser. A. 117 (2010) 884--909]. They gave bijections between four classes of
combinatorial objects: matchings with no neighbor nestings (due to Stoimenow),
unlabeled $(2+2)$-free posets, permutations avoiding a specific pattern, and so
called ascent sequences. We believe that certain statistics on our matchings
and posets could generalize the work of Bousquet-M\'elou et al.\ and we make a
conjecture to that effect. We also identify natural subsets of matchings and
posets that are equinumerous to the class of unlabeled $(2+2)$-free posets.
We give bijections that show the equivalence of (neighbor) restrictions on
nesting arcs with (neighbor) restrictions on crossing arcs. These bijections
are thought to be of independent interest. One of the bijections maps via
certain upper-triangular integer matrices that have recently been studied by
Dukes and Parviainen [Electron. J. Combin. 17 (2010) \#R53].
|
Starting with the Dirac equation in the extreme Kerr metric we derive an
integral representation for the propagator of solutions of the Cauchy problem
with initial data in the class of smooth compactly supported functions.
|
G359.87+0.18 is an enigmatic object located 15' from Sgr A*. It has been
variously classified as an extragalactic source, Galactic jet source, and young
supernova remnant. We present new observations of G359.87+0.18 between 0.33 and
15 GHz and use these to argue that this source is a Fanaroff-Riley II radio
galaxy. We are able to place a crude limit on its redshift of z > 0.1. The
source has a spectral index \alpha < -1 (S \propto \nu^\alpha), suggestive of a
radio galaxy with a redshift z >~ 2.
The scattering diameters of Sgr A* and several nearby OH masers (~ 1" at 1
GHz) indicate that a region of enhanced scattering is along the line of sight
to the Galactic center. If the region covers the Galactic center uniformly, the
implied diameter for a background source is at least 600" at 0.33 GHz, in
contrast with the observed 20" diameter of G359.87+0.18. Using the scattering
diameter of a nearby OH maser OH 359.762+0.120 and the widths of two, nearby,
non-thermal threads, G0.08+0.15 and G359.79+0.17, we show that a uniform
scattering region should cover G359.87+0.18. We therefore conclude that the
Galactic center scattering region is inhomogeneous on a scale of 5' (~ 10 pc at
a distance of 8.5 kpc). This scale is comparable to the size scale of molecular
clouds in the Galactic center. The close agreement between these two length scales is an indication that the scattering region is linked intimately to the
Galactic center molecular clouds.
|
In light of the well-known fact that the $n$th divided difference of any
polynomial of degree $m$ must be zero while $m<n$, the present paper proves the
$(\alpha,\beta)$-inversion formula conjectured by Hsu and Ma [J. Math. Res.
$\&$ Exposition 25(4) (2005) 624].
As applications of $(\alpha,\beta)$-inversion, we not only recover some known
matrix inversions due to Gasper, Schlosser, and Warnaar, but also find three new matrix inversions related to elliptic divisibility sequences and theta
functions.
|
Research into developing dual modality probes enabled for magnetic resonance
imaging (MRI) and positron emission tomography (PET) has been on the rise
recently due to the potential to combine the high resolution of MRI and the
high sensitivity of PET. Current synthesis techniques for developing multimodal probes are largely hindered by prolonged reaction times during radioisotope incorporation, leading to a weakening of the radioactivity. Along
with a time-efficient synthesis, the resulting products must fit within a
critical size range (between 20 and 100 nm) to increase blood retention time. In
this work, we describe a novel, rapid, microwave-based synthesis technique to
grow dextran-coated iron oxide nanoparticles doped with copper (DIO/Cu).
Traditional methods for coprecipitation of dextran-coated iron oxide
nanoparticles require refluxing for 2 hours and result in approximately 50 nm
diameter particles. We demonstrate that microwave synthesis can produce 50 nm
nanoparticles with 5 minutes of heating. We discuss the various parameters used
in the microwave synthesis protocol to vary the size distribution of DIO/Cu,
and demonstrate the successful incorporation of 64Cu into these particles with
the aim of future use for dual-mode MR/PET imaging.
|
We seek a practical method for establishing dense correspondences between two
images with similar content, but possibly different 3D scenes. One of the
challenges in designing such a system is the local scale differences of objects
appearing in the two images. Previous methods often considered only small
subsets of image pixels; matching only pixels for which stable scales may be
reliably estimated. More recently, others have considered dense
correspondences, but with substantial costs associated with generating, storing
and matching scale invariant descriptors. Our work here is motivated by the
observation that pixels in the image have contexts -- the pixels around them --
which may be exploited in order to estimate local scales reliably and
repeatably. Specifically, we make the following contributions. (i) We show that
scales estimated in sparse interest points may be propagated to neighboring
pixels where this information cannot be reliably determined. Doing so allows
scale invariant descriptors to be extracted anywhere in the image, not just in
detected interest points. (ii) We present three different means for propagating
this information: using only the scales at detected interest points, using the
underlying image information to guide the propagation of this information
across each image, separately, and using both images simultaneously. Finally,
(iii), we provide extensive results, both qualitative and quantitative,
demonstrating that accurate dense correspondences can be obtained even between
very different images, with little computational costs beyond those required by
existing methods.
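To illustrate contribution (i), here is a minimal sketch of propagating scales from sparse interest points to all pixels; nearest-neighbour interpolation on the image grid stands in for the paper's more elaborate propagation schemes, so the specific scheme used here is an assumption.

```python
# Spread per-keypoint scale estimates to every pixel so that scale-normalized
# descriptors can then be extracted densely.
import numpy as np
from scipy.interpolate import griddata

def propagate_scales(keypoint_xy, keypoint_scales, image_shape):
    """keypoint_xy: (n, 2) array of (x, y) positions; keypoint_scales: (n,) scales."""
    H, W = image_shape
    grid_y, grid_x = np.mgrid[0:H, 0:W]
    dense_scales = griddata(keypoint_xy, keypoint_scales,
                            (grid_x, grid_y), method="nearest")
    return dense_scales        # one scale estimate per pixel
```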
|
We prove the representability theorem in derived analytic geometry. The
theorem asserts that an analytic moduli functor is a derived analytic stack if
and only if it is compatible with Postnikov towers, has a global analytic
cotangent complex, and its truncation is an analytic stack. Our result applies
to both derived complex analytic geometry and derived non-archimedean analytic
geometry (rigid analytic geometry). The representability theorem is of both
philosophical and practical importance in derived geometry. The conditions of
representability are natural expectations for a moduli functor. So the theorem
confirms that the notion of derived analytic space is natural and sufficiently
general. On the other hand, the conditions are easy to verify in practice. So
the theorem enables us to enhance various classical moduli spaces with derived structures, thus providing plenty of down-to-earth examples of derived analytic
spaces. For the purpose of proof, we study analytification, square-zero
extensions, analytic modules and cotangent complexes in the context of derived
analytic geometry. We will explore applications of the representability theorem
in our subsequent works. In particular, we will establish the existence of
derived mapping stacks via the representability theorem.
|
We obtain the exact asymptotic result for the disorder-averaged probability
distribution function for a random walk in a biased Sinai model and show that
it is characterized by a creeping behavior of the displacement moments with
time, <x^n> ~ t^{\mu n} where \mu is dimensionless mean drift. We employ a
method originated in quantum diffusion which is based on the exact mapping of
the problem to an imaginary-time Schr\"{odinger} equation. For nonzero drift
such an equation has an isolated lowest eigenvalue separated by a gap from
quasi-continuous excited states, and the eigenstate corresponding to the former
governs the long-time asymptotic behavior.
|
We define and compute higher rank analogs of Pandharipande-Thomas stable pair
invariants in primitive classes for K3 surfaces. Higher rank stable pair
invariants for Calabi-Yau threefolds have been defined by Sheshmani
\cite{shesh1,shesh2} using moduli of pairs of the form $\mathcal{O}^n\hookrightarrow \mathcal{F}$ for $\mathcal{F}$ purely one-dimensional and computed via wall-crossing techniques. These invariants may be thought of as virtually counting embedded curves decorated with an $(n-1)$-dimensional linear system. We treat invariants counting pairs $\mathcal{O}^n\hookrightarrow \mathcal{E}$ on a K3 surface for $\mathcal{E}$ an arbitrary stable sheaf of a fixed numerical type ("coherent systems" in the language of \cite{KY}) whose first Chern class is primitive, and fully compute them geometrically. The ordinary stable pair theory of K3 surfaces is treated by \cite{MPT}; there they prove
the KKV conjecture in primitive classes by showing the resulting partition
functions are governed by quasimodular forms. We prove a "higher" KKV
conjecture by showing that our higher rank partition functions are modular
forms.
|
In this paper, we study boundary actions of CAT(0) spaces from a point of
view of topological dynamics and $C^*$-algebras. First, we investigate the
actions of right-angled Coxeter groups and right-angled Artin groups with
finite defining graphs on the visual boundaries and the Nevo-Sageev boundaries
of their natural assigned CAT(0) cube complexes. In particular, we establish
(strongly) pure infiniteness results for reduced crossed product $C^*$-algebras
of these actions through investigating the corresponding CAT(0) cube complexes
and establishing necessary dynamical properties such as minimality, topological
freeness and pure infiniteness of the actions. In addition, we study actions of
fundamental groups of graphs of groups on the visual boundaries of their
Bass-Serre trees. We show that the existence of repeatable paths essentially
implies that the action is $2$-filling, from which, we also obtain a large
class of unital Kirchberg algebras. Furthermore, our result also provides a new
method in identifying $C^*$-simple generalized Baumslag-Solitar groups. The
examples of groups obtained from our method have $n$-paradoxical towers in the
sense of \cite{G-G-K-N}. This class particularly contains non-degenerate free
products, Baumslag-Solitar groups and fundamental groups of $n$-circles or
wedge sums of $n$-circles.
|
This paper reports on the temperature evolution of local elastic interactions
between ferromagnetic CoFe films and ferroelectric BaTiO3 substrates.
Polarization microscopy measurements indicate that growth-induced stripe
domains in the CoFe films are preserved and strengthened during cooling and
heating through the structural phase transitions of BaTiO3. Moreover, rotation
of the magnetic easy axes at the tetragonal-to-orthorhombic transition (T =
278 K) and at T = 380 K simultaneously switches the local magnetization of both
uniaxial domains by 90 degrees. Irreversible changes in the ferromagnetic
domain pattern are induced when the room-temperature ferroelectric domain
structure is altered after temperature cycling.
|
We give a geometric construction of the multivariable Conway potential
function for colored links. In the case of a single color, it is Kauffman's
definition of the Conway polynomial in terms of a Seifert matrix.
|
We present a unified treatment of the Aharonov--Bohm (AB) effect for
two-dimensional multiband electronic systems possessing isotropic band
structures. We propose an integral representation of the AB scattering state of
an electron scattered by an infinitely thin solenoid. Moreover, we derive the
asymptotic form of the AB scattering state and obtain the differential cross
section from that. We found a remarkable result, namely that this cross section
is {\it the same for all isotropic systems} and agrees with that obtained first
by Aharonov and Bohm for spinless free particle systems. To demonstrate the
generality of our theory, we consider several specific multiband systems
relevant to condensed matter physics.
|
Understanding the long-term impact that changes in a city's transportation
infrastructure have on its spatial interactions remains a challenge. The
difficulty arises from the fact that the real impact may not be revealed in
static or aggregated mobility measures, as these are remarkably robust to
perturbations. More generally, the lack of longitudinal, cross-sectional data
demonstrating the evolution of spatial interactions at a meaningful urban scale
also hinders us from evaluating the sensitivity of movement indicators,
limiting our capacity to understand the evolution of urban mobility in depth.
Using very large mobility records distributed over three years we quantify the
impact of the completion of a metro line extension: the circle line (CCL) in
Singapore. We find that the commonly used movement indicators are almost
identical before and after the project was completed. However, in comparing the
temporal community structure across years, we do observe significant
differences in the spatial reorganization of the affected geographical areas.
The completion of CCL enables travelers to re-identify their desired
destinations collectively with lower transport cost, making the community
structure more consistent. These changes in locality are dynamic, and
characterized over short time-scales, offering us a different approach to
identify and analyze the long-term impact of new infrastructures on cities and
their evolution dynamics.
|
In this paper we develop a Markov Chain Monte Carlo code to study the dark
matter properties in interpreting the recent observations of cosmic ray
electron/positron excesses. We assume that the dark matter particles couple
dominantly to leptons and consider two cases separately with annihilating or
decaying into lepton pairs. The constraint on the central density profile from
diffuse $\gamma$-rays around the Galactic center is also included in the Markov
Chain Monte Carlo code self-consistently. In the numerical study, 7 parameters
are introduced. We fit two data sets independently and find that for the Data
set I (PAMELA+ATIC), dark matter with $m_{\chi}\approx0.7$ TeV for annihilation
(or 1.4 TeV for decay) and pure $e^+e^-$ channel is favored, while for the Data
set II (PAMELA+Fermi-LAT+H.E.S.S.) $m_{\chi}\approx 2$ TeV for annihilation (or
4 TeV for decay) and the combination of $\mu^+\mu^-$ and $\tau^+\tau^-$ final
states can best fit the data. The H.E.S.S. observation of the Galactic center
$\gamma$-rays puts a strong constraint on the central density profile of the
dark matter halo for the annihilation dark matter scenario. In this case the
NFW profile, which is regarded as the typical prediction of the cold dark
matter scenario, is excluded with a high significance ($>5\sigma$). For the
decaying dark matter scenario, the constraint is much weaker.
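For readers unfamiliar with the machinery, the following is a minimal Metropolis-Hastings sketch of the kind of sampler such an analysis relies on; the log-posterior built from the electron/positron and gamma-ray data is abstracted as a callback, and the proposal scheme is an assumption rather than the code used in the paper.

```python
# Minimal random-walk Metropolis sampler over the model parameters theta.
import numpy as np

def metropolis(log_posterior, theta0, step_sizes, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step_sizes * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:   # Metropolis accept/reject
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)
```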
|
The study of the kinematics of galaxies within clusters or groups has the
limitation that only one of the three velocity components and only two of the
three spatial components of a galaxy position in six-dimensional phase space
can normally be measured. However, if multiple topological images of a cluster
exist, then the radial positions and sky plane mean velocities of galaxies in
the cluster may also be measurable from photometry of the two cluster images.
The vector arithmetic and principles of the analysis are presented. These are
demonstrated by assuming the suggested topological identification of the
clusters RX J1347.5-1145 and CL 09104+4109 to be correct and deducing the
sky-plane relative velocity component along the axis common to both images of
this would-be single cluster.
Three out of four of the inferred transverse velocities are consistent with
those expected in a rich cluster. A control sample of random `common' sky-plane
axes, independent of the topological hypothesis, implies that this is not
surprising. This shows that while galaxy kinematics are deducible from
knowledge of cosmological topology, it is not easy to use them to refute a
specific candidate manifold.
|
The classical limit of wave quantum mechanics is analyzed. It is shown that
the general requirements of continuity and finiteness to the solution
$\psi(x)=Ae^{i\phi(x)}+ Be^{-i\phi(x)}$, where $\phi(x)=\frac 1\hbar W(x)$ and
$W(x)$ is the reduced classical action of the physical system, result in the
asymptote of the exact solution and general quantization condition for $W(x)$,
which yields the exact eigenvalues of the system.
|
Here we apply the Generalized Second Law of Thermodynamics (GSL) to black
holes accreting and emitting in the present Universe and derive upper limits on
the variation in the gravitational constant G. The limits depend on how the gravitational mass M varies with G. Parameterizing M as varying as G^n, if n > -1/2
(including n = 0), the GSL applied to the full range of black holes
theoretically allowed in the present Universe does not constrain an increase in
G but any decrease must be less than about |(1/G) dG/dt| = 10^-52 per second.
If n < -1/2, the GSL does not constrain a decrease in G but any increase must
be less than about |(1/G) dG/dt| = 10^-52 per second. At earlier redshifts,
these constraints weaken as z^3. If n = -1/2, the GSL does not constrain a
decrease but any increase must be less than about |(1/G) dG/dt| = (1/t). If the
mass range is restricted to those black holes which have been astronomically
observed, the present constraints on n > -1/2 and n < -1/2 are only weakened by
a factor of about 10^8 with the tightest constraints coming from stellar mass
black holes and the n = -1/2 bound does not change. The stellar mass black hole
limits should constrain the variation of G in Standard Model physics and all
extension models which approximate classical physics on astronomical scales.
|
Quasiperiodic Jacobi operators arise as mathematical models of quasicrystals
and in more general studies of structures exhibiting aperiodic order. The
spectra of these self-adjoint operators can be quite exotic, such as Cantor
sets, and their fine properties yield insight into associated dynamical
systems. Quasiperiodic operators can be approximated by periodic ones, the
spectra of which can be computed via two finite dimensional eigenvalue
problems. Since long periods are necessary to get detailed approximations, both
computational efficiency and numerical accuracy become a concern. We describe a
simple method for numerically computing the spectrum of a period-$K$ Jacobi
operator in $O(K^2)$ operations, and use it to investigate the spectra of
Schr\"odinger operators with Fibonacci, period doubling, and Thue-Morse
potentials.
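As a point of comparison (not the $O(K^2)$ algorithm of the paper), one can approximate the spectrum of a period-$K$ Jacobi operator by sampling the Bloch phase and solving a $K\times K$ Hermitian eigenvalue problem for each sample; the band edges correspond to the periodic and antiperiodic cases.

```python
# Brute-force Bloch sampling for a period-K Jacobi operator with diagonal a and
# off-diagonal b (both length K); the union of eigenvalues over the Bloch phase
# theta fills out the spectral bands.
import numpy as np

def bloch_spectrum(a, b, num_theta=200):
    K = len(a)
    energies = []
    for theta in np.linspace(0.0, np.pi, num_theta):
        J = np.diag(a).astype(complex)
        for k in range(K - 1):
            J[k, k + 1] = b[k]
            J[k + 1, k] = b[k]
        J[0, K - 1] += b[K - 1] * np.exp(-1j * theta)   # coupling across the period
        J[K - 1, 0] += b[K - 1] * np.exp(1j * theta)
        energies.append(np.linalg.eigvalsh(J))
    return np.sort(np.concatenate(energies))

# Example: an 8-site approximant of a two-valued (Fibonacci-like) potential.
# a = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, 1.0]); b = np.ones(8)
# spectrum = bloch_spectrum(a, b)
```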
|
We propose and demonstrate a photon-efficient optical classifier to overcome
the Rayleigh limit in spatial resolution. It utilizes mode-selective sum
frequency generation and single-pixel photon detection to resolve closely
spaced incoherent sources based on photon counting statistics. Super-resolving
and photon efficient, this technique can find applications in microscopy, light
detection and ranging (LiDAR), and astrophysics.
|
Let $M(\mathit{odd})\subset Z/2[[x]]$ be the space of odd mod~2 modular forms
of level $\Gamma_{0}(3)$. It is known that the formal Hecke operators
$T_{p}:Z/2[[x]]\rightarrow Z/2[[x]]$, $p$ an odd prime other than $3$,
stabilize $M(\mathit{odd})$ and act locally nilpotently on it. So
$M(\mathit{odd})$ is an $\mathcal{O} = Z/2[[t_{5},t_{7}, t_{11},
t_{13}]]$-module with $t_{p}$ acting by $T_{p}$, $p\in \{5,7,11,13\}$. We show:
(1) Each $T_{p}:M(\mathit{odd})\rightarrow M(\mathit{odd})$, $p\ne 3$, is
multiplication by some $u$ in the maximal ideal, $m$, of $\mathcal{O}$.
(2) The kernel, $I$, of the action of $\mathcal{O}$ on $M(\mathit{odd})$ is
$(A^{2},AC,BC)$ where $A,B,C$ have leading forms $t_{5}+t_{7}+t_{13},\,
t_{7},\, t_{11}$.
We prove analogous results in level $\Gamma_{0}(5)$. Now $\mathcal{O}$ is
$Z/2[[t_{3},t_{7},t_{11},t_{13}]]$, and the leading forms of $A,B,C$ are
$t_{3}+t_{7}+t_{11},\, t_{7},\, t_{13}$.
Let $\mathit{HE}$, "the shallow mod~2 Hecke algebra (of level $\Gamma_{0}(3)$
or $\Gamma_{0}(5)$)" be $\mathcal{O}/I$. (1) and (2) above show that
$\mathit{HE}$ is a 1 variable power series ring over the 1-dimensional local
ring $Z/2[[A,B,C]]/(A^{2},AC,BC)$. For another approach to all these results,
based on deformation theory, see Deo and Medvedovsky, "Explicit old components
of mod-2 Hecke algebras with trivial $\bar{\rho}$."
|
Image Stacks as Parametric Surfaces (ISPS) is a powerful model that was
originally proposed for image registration. Being closely related to mutual
information (MI) - the most classic similarity measure for image registration,
ISPS works well across different categories of registration problems. The
Signals as Parametric Curves (SPC) model is derived from ISPS extended to
1-dimensional signals. Blind Source Separation (BSS) is a classic problem in
signal processing, where Independent Component Analysis (ICA) based approaches
are popular and effective. Since MI plays an important role in ICA, based on
the close relationship with MI, we apply SPC model to BSS in this paper, and
propose a group of geometrical objective functions that are simple yet
powerful, and serve as replacements of original MI-based objective functions.
Motivated by the geometrical objective functions, we also propose a
second-order-statistics approach, FT-PCA. Both geometrical objective functions
and FT-PCA consider signals as functions instead of stochastic processes, make
use of derivative information of signals, and do not rely on the independence
assumption. In this paper, we discuss the reasonability of the assumptions of
geometrical objective functions and FT-PCA, and show their effectiveness in synthetic experiments, comparing them with previous classic approaches.
|
The simplest toroidally compactified string theories exhibit a duality
between large and small radii: compactification on a circle, for example, is
invariant under R goes to 1/R. Compactification on more general Lorentzian
lattices (i.e. toroidal compactification in the presence of background metric,
antisymmetric tensor, and gauge fields) yields theories for which large-small
invariance is not so simple. Here an equivalence is demonstrated between large
and small geometries for all toroidal compactifications. By repeatedly
transforming the momentum mode corresponding to the smallest winding length to
another mode on the lattice, it is possible to increase the volume to exceed a
finite lower bound.
|
We calculate the nonequilibrium dynamic evolution of a one-dimensional system
of two-component fermionic atoms after a strong local quench by using a
time-dependent spin-density-functional theory. The interaction quench is also
considered to see its influence on the spin-charge separation. It is shown that
the charge velocity is larger than the spin velocity for the system of on-site
repulsive interaction (Luttinger liquid), and vice versa for the system of
on-site attractive interaction (Luther-Emery liquid). We find that both the
interaction quench and polarization suppress the spin-charge separation.
|
Using a phenomenological Hamiltonian, we investigate the quasiparticle
lifetimes and dispersions in the three low energy bands, gamma, beta, and alpha
of Sr2RuO4. Couplings in the Hamiltonian are fixed so as to produce the mass
renormalization as measured in magneto-oscillation experiments. We thus find
reasonable agreement in all bands between our computed lifetimes and those
measured in ARPES experiments by Kidd et al. [1] and Ingle et al. [2]. In
comparing computed to measured quasiparticle dispersions, we however find good
agreement in the alpha-band alone.
|
The connotation of transaction costs has never been definitively determined,
and the independence of the concept has never been rigorously demonstrated.
This paper delves into the thought systems of several prominent economists in
the development of transaction cost economics, starting from first-hand
materials. By combining multiple works of the authors, it reconstructs the true
meanings and identifies endogeneity issues and logical inconsistencies. The
conclusion of this paper is bold. Previous research has been largely filled
with misinterpretations and misunderstandings, as people have focused solely on
the wording of transaction cost definitions, neglecting the nature of
transaction costs. The intention of transaction cost theory has been
unwittingly assimilated into the objects it intends to criticize. After
delineating the framework of "transaction costs-property rights-competition",
this paper reconstructs the concept of transaction costs and the history of
transaction cost concepts, providing a direct response to this theoretical
puzzle that has plagued the academic community for nearly a century.
|
In this paper, we present a simple lattice-theoretic characterization for
affine buildings of type A. We introduce a class of modular lattices, called
uniform modular lattices, and show that uniform modular lattices and affine
buildings of type A constitute the same object. This is an affine counterpart
of the well-known equivalence between projective geometries ($\simeq$
complemented modular lattices) and spherical buildings of type A.
|
A new method is presented for obtaining the general conformally flat
radiation metric by using the differential operators of Machado Ramos and
Vickers (a generalisation of the GHP operators) which are invariant under null
rotations and spin and boosts. The solution is found by constructing involutive
tables of these derivatives applied to the quantities which arise in the
Karlhede classification of metrics.
|
Pragmas for loop transformations, such as unrolling, are implemented in most
mainstream compilers. They are used by application programmers because of their
ease of use compared to directly modifying the source code of the relevant
loops. We propose additional pragmas for common loop transformations that go
far beyond the transformations today's compilers provide and should make most
source rewriting for the sake of loop optimization unnecessary. To encourage
compilers to implement these pragmas, and to avoid a diversity of incompatible
syntaxes, we would like to spark a discussion about an inclusion to the OpenMP
standard.
|
By building on a recently introduced genetic-inspired attribute-based
conceptual framework for safety risk analysis, we propose a novel methodology
to compute univariate and bivariate construction safety risk at a
situational level. Our fully data-driven approach provides construction
practitioners and academicians with an easy and automated way of extracting
valuable empirical insights from databases of unstructured textual injury
reports. By applying our methodology on an attribute and outcome dataset
directly obtained from 814 injury reports, we show that the frequency-magnitude
distribution of construction safety risk is very similar to that of natural
phenomena such as precipitation or earthquakes. Motivated by this observation,
and drawing on state-of-the-art techniques in hydroclimatology and insurance,
we introduce univariate and bivariate nonparametric stochastic safety risk
generators, based on Kernel Density Estimators and Copulas. These generators
enable the user to produce large numbers of synthetic safety risk values
faithful to the original data, allowing safety-related decision-making under
uncertainty to be grounded on extensive empirical evidence. Just like the
accurate modeling and simulation of natural phenomena such as wind or
streamflow is indispensable to successful structure dimensioning or water
reservoir management, we posit that improving construction safety calls for the
accurate modeling, simulation, and assessment of safety risk. The underlying
assumption is that like natural phenomena, construction safety may benefit from
being studied in an empirical and quantitative way rather than qualitatively, which is the current industry standard. Finally, a side but interesting finding
is that attributes related to high energy levels and to human error emerge as
strong risk shapers on the dataset we used to illustrate our methodology.
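As an illustration of the univariate generator idea, the sketch below fits a kernel density estimate to observed risk values and resamples synthetic values from it; the data here are placeholders, and the bandwidth choice and positivity filter are assumptions rather than the paper's exact settings.

```python
# Univariate nonparametric stochastic risk generator: fit a KDE to observed
# safety-risk values and draw synthetic values from the fitted density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
observed_risk = rng.lognormal(mean=0.0, sigma=1.0, size=814)   # placeholder data

kde = gaussian_kde(observed_risk)                 # kernel density estimate
synthetic_risk = kde.resample(10_000).ravel()
synthetic_risk = synthetic_risk[synthetic_risk > 0]   # keep physically valid values

print(np.percentile(synthetic_risk, [50, 90, 99]))    # inspect the generated tail
```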
|
Rigid origami, with applications ranging from nano-robots to unfolding solar
sails in space, describes when a material is folded along straight crease line
segments while keeping the regions between the creases planar. Prior work has
found explicit equations for the folding angles of a flat-foldable degree-4
origami vertex and some cases of degree-6 vertices. We extend this work to
generalized symmetries of the degree-6 vertex where all sector angles equal
$60^\circ$. We enumerate the different viable rigid folding modes of these
degree-6 crease patterns and then use $2^{nd}$-order Taylor expansions and
prior rigid folding techniques to find algebraic folding angle relationships
between the creases. This allows us to explicitly compute the configuration
space of these degree-6 vertices, and in the process we uncover new
explanations for the effectiveness of Weierstrass substitutions in modeling
rigid origami. These results expand the toolbox of rigid origami mechanisms
that engineers and materials scientists may use in origami-inspired designs.
|
The practical application of sodium-ion hybrid capacitors is limited by their
low energy densities resulting from the kinetics mismatch between cathodes and
anodes, and the fire safety related to the flammable electrolyte-separator
system. Hence, we report a rational design of metal-organic frameworks (MOFs,
UiO-66) modified PVDF-HFP separator. High tensile strength and dimensional
thermal stability of the separator reduce the risk of electrode short circuit
caused by separator deformation. The MCC test demonstrates a reduction of 75%
in peak heat release rate (pHRR), indicating an enhanced fire-resistant
property of the separator. This is due to the transformation of UiO-66 into
ZrO2 accompanied by the consumption of oxygen and the formation of the barrier
char that suppresses further heat release. Quasi-solid-state electrolyte
prepared based on this separator presents an enhanced ionic conductivity of
2.44 mS*cm-1 and a Na-ion transference number of 0.55, which are related to the high porosity (>70%) and electrolyte uptake (~320%) of the separator.
Moreover, the open metal sites of UiO-66 can capture PF6- and consequently
liberate the Na+ for faster migration, thus reducing the kinetics mismatch
between cathodes and anodes. Such multifunctional separator enables the
quasi-solid-state Na-ion hybrid capacitor to achieve high energy density (182
Wh*kg-1 @31 W*kg-1) and power density (5280 W*kg-1 @22 Wh*kg-1), as well as
excellent cyclic stability (10000 cycles @1000 mA*g-1).
Keywords: Quasi-solid-state; PVDF-HFP; Metal-organic frameworks; Dimensional
thermal stability; Fire safety; Selective charge transfer
|
A search is reported for excited $\tau$-leptons and leptoquarks in events
with two hadronically decaying $\tau$-leptons and two or more jets. The search
uses proton-proton (pp) collision data at $\sqrt{s} = 13$ TeV recorded by the
ATLAS experiment during the Run 2 of the Large Hadron Collider in 2015-2018.
The total integrated luminosity is 139 fb$^{-1}$. The excited $\tau$-lepton is
assumed to be produced and to decay via a four-fermion contact interaction into
an ordinary $\tau$-lepton and a quark-antiquark pair. The leptoquarks are
assumed to be produced in pairs via the strong interaction, and each leptoquark
is assumed to couple to a charm or lighter quark and a $\tau$-lepton. No excess
over the background prediction is observed. Excited $\tau$-leptons with masses
below 2.8 TeV are excluded at 95% CL in scenarios with the contact interaction
scale $\Lambda$ set to 10 TeV. At the extreme limit of model validity where
$\Lambda$ is set equal to the excited $\tau$-lepton mass, excited
$\tau$-leptons with masses below 4.6 TeV are excluded. Leptoquarks with masses
below 1.3 TeV are excluded at 95% CL if their branching ratio to a charm quark
and a $\tau$-lepton equals 1. The analysis does not exploit flavour-tagging in
the signal region.
|