We present deep polarimetric observations at 1420 MHz of the European Large
Area ISO Survey North 1 region (ELAIS N1) as part of the Dominion Radio
Astrophysical Observatory Planck Deep Fields project. By combining closely
spaced aperture synthesis fields, we image a region of 7.43 square degrees to a
maximum sensitivity in Stokes Q and U of 78 microJy/beam, and detect 786
compact sources in Stokes I. Of these, 83 exhibit polarized emission. We find
that the differential source counts (log N - log p) for polarized sources are
nearly constant down to p > 500 microJy, and that these faint polarized radio
sources are more highly polarized than the strong source population. The median
fractional polarization is (4.8 +/- 0.7)% for polarized sources with Stokes I
flux density between 1 and 30 mJy, approximately three times larger than that of
sources with I > 100 mJy. The majority of the polarized sources have been
identified with galaxies in the Spitzer Wide Area Infrared Extragalactic Survey
(SWIRE) image of ELAIS N1. Most of the galaxies occupy regions in the IRAC
5.8/3.6 micron vs. 8.0/4.5 micron color-color diagram associated with dusty
AGNs, or with ellipticals with an aging stellar population. A few host galaxies
have colors that suggest significant PAH emission in the near-infrared. A
small fraction, 12%, of the polarized sources are not detected in the SWIRE
data. None of the polarized sources in our sample appears to be associated with
an actively star-forming galaxy.
|
We show storage of the circular polarisation of an optical field,
transferring it to the spin-state of an individual electron confined in a
single semiconductor quantum dot. The state is subsequently readout through the
electronically-triggered emission of a single photon. The emitted photon shares
the same polarisation as the initial pulse but has a different energy, making
the transfer of quantum information between different physical systems
possible. With an applied magnetic field of 2 Tesla, spin memory is preserved
for at least 1000 times longer than the exciton's radiative lifetime.
|
Negotiation is a complex activity involving strategic reasoning, persuasion,
and psychology. An average person is often far from an expert in negotiation.
Our goal is to assist humans in becoming better negotiators through a
machine-in-the-loop approach that combines a machine's advantage in data-driven
decision-making with a human's language generation ability. We consider a
bargaining scenario where a seller and a buyer negotiate the price of an item
for sale through a text-based dialog. Our negotiation coach monitors messages
between them and recommends tactics in real time to the seller to get a better
deal (e.g., "reject the proposal and propose a price", "talk about your
personal experience with the product"). The best strategy and tactics largely
depend on the context (e.g., the current price, the buyer's attitude).
Therefore, we first identify a set of negotiation tactics, then learn to
predict the best strategy and tactics in a given dialog context from a set of
human-human bargaining dialogs. Evaluation on human-human dialogs shows that
our coach increases the profits of the seller by almost 60%.
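As a rough illustration of the tactic-prediction step described above, the sketch below trains a plain text classifier that maps a dialog context to a recommended tactic. The toy data, the TF-IDF features, and the logistic-regression model are all illustrative assumptions, not the system used in the paper.

```python
# Hypothetical sketch of tactic prediction from dialog context (not the
# authors' actual coach): TF-IDF features + logistic regression as a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: recent dialog context -> recommended tactic.
contexts = [
    "buyer: 300 is way too high, can you do 200?",
    "buyer: has the product been reliable for you?",
]
tactics = [
    "reject the proposal and propose a price",
    "talk about your personal experience with the product",
]

coach = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
coach.fit(contexts, tactics)

# At run time the coach monitors new messages and recommends a tactic.
print(coach.predict(["buyer: I can only pay 150 for this"])[0])
```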
|
Spin-glass and chiral-glass orderings in three-dimensional Heisenberg spin
glasses are studied both by equilibrium and off-equilibrium Monte Carlo
simulations. The fully isotropic model is found to exhibit a finite-temperature
chiral-glass transition without the conventional spin-glass order. Although
chirality is an Ising-like quantity by symmetry, the universality class of the
chiral-glass transition appears to be different from that of the standard Ising
spin glass. In the off-equilibrium simulation, while the spin autocorrelation
exhibits only an interrupted aging, the chirality autocorrelation continues to
exhibit a pronounced aging effect reminiscent of the one observed in the
mean-field model. The effects of random magnetic anisotropy are also studied by the
off-equilibrium simulation, in which asymptotic mixing of the spin and the
chirality is observed.
|
Using an extensive sample of nearby galaxies (the Nearby Galaxies Catalog, by
Tully), we investigate the environment of the galaxies hosting low-luminosity
AGNs (Seyferts and LINERs). We define the local galaxy density, adopting a new
correction for the incompleteness of the galaxy sample at large distances. We
consider both a complete sample of bright and nearby AGNs, identified from the
nuclear spectra obtained in available wide optical spectroscopic surveys, and a
complete sample of nearby Seyferts. Basically, we compare the local galaxy
density distributions of the AGNs with those of non-AGN samples, chosen in
order to match the magnitude and morphological type distributions of the AGN
samples. We find, only for the early-type spirals more luminous than $\sim
M^*$, that both LINERs and Seyferts tend to reside in denser environments on
all the scales tested, from tenths of Mpc to a few Mpc; moreover Seyferts show
an enhanced small-scale density segregation with respect to LINERs. This gives
support to the idea that AGNs can be stimulated by interactions. On larger
scales, tens of Mpc, we find that the AGNs hosted in luminous early-type
spirals show a tendency to stay near the center of the Local Supercluster.
Finally we discuss the interpretations of our findings and their consequences
for some possible scenarios of AGN formation and evolution and for the problem
of how AGNs trace the large-scale structures.
|
In the present paper, a one-dimensional two-component atomic Fermi gas is
considered in the long-wavelength limit as a Luttinger liquid. The mechanisms leading to
instability of the non-Fermi-liquid state of a Luttinger liquid with two-level
impurities are proposed. Since exchange scattering in 1D systems is two-channel
scattering in a certain range of parameters, several types of non-Fermi-liquid
excitations with different quantum numbers exist in the vicinity of the Fermi
level. These excitations include, first, charge density fluctuations in the
Luttinger liquid and, second, many-particle excitations due to two-channel
exchange interaction, which are associated with band-type as well as impurity
fermion states. It is shown that mutual scattering of many-particle excitations
of various types leads to the emergence of an additional Fermi-liquid
singularity in the vicinity of the Fermi level. The conditions under which the
Fermi-liquid state with a new energy scale (which is much smaller than the
Kondo temperature) is the ground state of the system are formulated.
|
Let $G$ be a finite group of Lie type. In studying the cross-characteristic
representation theory of $G$, the (specialized) Hecke algebra
$H=\End_G(\ind_B^G1_B)$ has played an important role. In particular, when
$G=GL_n(\mathbb F_q)$ is a finite general linear group, this approach led to
the Dipper-James theory of $q$-Schur algebras $A$. These algebras can be
constructed over $\sZ:=\mathbb Z[t,t^{-1}]$ as the $q$-analog (with $q=t^2$) of
an endomorphism algebra larger than $H$, involving parabolic subgroups. The
algebra $A$ is quasi-hereditary over $\sZ$. An analogous algebra, still denoted
$A$, can always be constructed in other types. However, these algebras have so
far been less useful than in the $GL_n$ case, in part because they are not
generally quasi-hereditary.
Several years ago, reformulating a 1998 conjecture, the authors proposed (for
all types) the existence of a $\sZ$-algebra $A^+$ having a stratified derived
module category, with strata constructed via Kazhdan-Lusztig cell theory. The
algebra $A$ is recovered as $A=eA^+e$ for an idempotent $e\in A^+$. A main goal
of this monograph is to prove this conjecture completely. The proof involves
several new homological techniques using exact categories. Following the proof,
we show that $A^+$ does become quasi-hereditary after the inversion of the bad
primes. Some first applications of the result -- e.g., to decomposition
matrices -- are presented, together with several open problems.
|
Ultra-high resolution image segmentation has attracted increasing interest in
recent years due to its real-world applications. In this paper, we innovate the
widely used high-resolution image segmentation pipeline, in which an ultra-high
resolution image is partitioned into regular patches for local segmentation and
then the local results are merged into a high-resolution semantic mask. In
particular, we introduce a novel locality-aware context fusion based
segmentation model to process local patches, where the relevance between a local
patch and its various contexts is jointly and complementarily utilized to
handle the semantic regions with large variations. Additionally, we present the
alternating local enhancement module that restricts the negative impact of
redundant information introduced from the contexts, and thus is endowed with
the ability of fixing the locality-aware features to produce refined results.
Furthermore, in comprehensive experiments, we demonstrate that our model
outperforms other state-of-the-art methods in public benchmarks. Our released
codes are available at: https://github.com/liqiokkk/FCtL.
|
We analyze an extremely deep 450-$\mu$m image
($1\sigma=0.56$\,mJy\,beam$^{-1}$) of a $\simeq 300$\,arcmin$^{2}$ area in the
CANDELS/COSMOS field as part of the SCUBA-2 Ultra Deep Imaging EAO Survey
(STUDIES). We select a robust (signal-to-noise ratio $\geqslant 4$) and
flux-limited ($\geqslant 4$\,mJy) sample of 164 sub-millimeter galaxies (SMGs)
at 450-$\mu$m that have $K$-band counterparts in the COSMOS2015 catalog
identified from radio or mid-infrared imaging. Utilizing this SMG sample and
the 4705 $K$-band-selected non-SMGs that reside within the noise level
$\leqslant 1$\,mJy\,beam$^{-1}$ region of the 450-$\mu$m image as a training
set, we develop a machine-learning classifier using $K$-band magnitude and
color-color pairs based on the thirteen-band photometry available in this
field. We apply the trained machine-learning classifier to the wider COSMOS
field (1.6\,deg$^{2}$) using the same COSMOS2015 catalog and identify a sample
of 6182 450-$\mu$m SMG candidates with similar colors. The number density,
radio and/or mid-infrared detection rates, redshift and stellar mass
distributions, and the stacked 450-$\mu$m fluxes of these SMG candidates, from
the S2COSMOS observations of the wide field, agree with the measurements made
in the much smaller CANDELS field, supporting the effectiveness of the
classifier. Using this 450-$\mu$m SMG candidate sample, we measure the
two-point autocorrelation functions from $z=3$ down to $z=0.5$. We find that
the 450-$\mu$m SMG candidates reside in halos with masses of $\simeq
(2.0\pm0.5) \times10^{13}\,h^{-1}\,\rm M_{\odot}$ across this redshift range.
We do not find evidence of downsizing that has been suggested by other recent
observational studies.
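The classification step described above can be sketched as follows; the synthetic features (K-band magnitude plus two colours), the class balance, and the random-forest choice are illustrative assumptions rather than the paper's exact training setup.

```python
# Illustrative sketch of training a classifier to separate SMGs from non-SMGs
# using K-band magnitude and colours; the synthetic data and random-forest
# choice are assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training set: K magnitude plus two colours per galaxy.
n_smg, n_non = 164, 4705
X = np.vstack([
    rng.normal([22.0, 1.2, 0.8], 0.5, size=(n_smg, 3)),   # SMG-like colours
    rng.normal([21.0, 0.4, 0.2], 0.5, size=(n_non, 3)),   # non-SMG galaxies
])
y = np.concatenate([np.ones(n_smg), np.zeros(n_non)])

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# The fitted classifier would then be applied to the wider-field catalogue
# to select SMG candidates with similar colours.
clf.fit(X, y)
```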
|
Recent work on the logical structure of non-locality has constructed
scenarios where observations of multi-partite systems cannot be adequately
described by compositions of non-signaling subsystems. In this paper we apply
these frameworks to economics. First we construct an empirical model of choice,
where choices are understood as observable outcomes in a certain sense. An
analysis of contextuality within this framework allows us to characterize which
scenarios allow for the possible construction of an adequate global choice
rule. In essence, we mathematically characterize when it makes sense to
consider the choices of a group as composed of individual choices. We then map
out the logical space of some relevant empirical principles, relating
properties of these contextual choice scenarios to no-signalling theories and
to the weak axiom of revealed preference.
|
Electron scattering on a thin layer where the potential depends
self-consistently on the wave function has been studied. When the amplitude of
the incident wave exceeds a certain threshold, a soliton-shaped brightening
(darkening) appears on the layer causing diffraction of the wave. Thus the
spontaneously formed transverse pattern can be viewed as a self-induced
nonlinear quantum screen. Attractive or repulsive nonlinearities result in
different phase shifts of the wave function on the screen, which give rise to
quite different diffraction patterns. Among others, the nonlinearity can cause
self-focusing of the incident wave into a ``beam'', splitting into two ``beams'',
single or double traces with suppressed reflection or transmission, etc.
|
The processes occurring in climatic change evolution and their variations
play a major role in environmental engineering. Different techniques are used
to model the relationship between temperatures, dew point and relative
humidity. Gene expression programming (GEP) is capable of modelling complex
realities with great accuracy and, compared to other learning algorithms, allows
knowledge to be extracted from the evolved models. This research aims to use GEP
for modelling the dew point. Generally, model accuracy is the only objective used
by the selection mechanism of GEP, which tends to evolve large models with low
training error. To avoid this situation, Genetic Programming practitioners prefer
the use of multiple objectives, such as accuracy and model size. A multi-objective
problem yields a set of solutions satisfying the objectives given by the decision
maker. Multi-objective GEP is therefore used to evolve simple models. Algorithms
widely used for multi-objective optimization, such as NSGA-II and SPEA2, are
tested on different test cases. The results indicate that SPEA2 outperforms
NSGA-II in terms of execution time, number of solutions obtained, and convergence
rate. Thus, compared to models obtained by standard GEP, the multi-objective
algorithms find better solutions with respect to the dual objectives of fitness
and equation size. These simple models can be used to predict the dew point.
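The dual-objective selection underlying NSGA-II and SPEA2 can be illustrated with the Pareto-dominance criterion both algorithms share; the sketch below is a generic non-dominated filter over hypothetical (error, size) pairs, not either algorithm itself.

```python
# Generic sketch of extracting the Pareto front for the two objectives above
# (training error and model size); this is only the dominance criterion that
# NSGA-II and SPEA2 are built on, not either algorithm.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(models):
    """models: list of (error, size) pairs; returns the non-dominated subset."""
    return [m for m in models
            if not any(dominates(other, m) for other in models if other != m)]

# Hypothetical candidate dew-point models: (training error, expression size).
candidates = [(0.9, 3), (0.5, 12), (0.45, 30), (0.6, 6), (0.7, 15)]
print(pareto_front(candidates))   # -> [(0.9, 3), (0.5, 12), (0.45, 30), (0.6, 6)]
```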
|
A well known result on pseudodifferential operators states that the
noncommutative residue (Wodzicki residue) of a pseudodifferential projection
vanishes. This statement is non-local and implies the regularity of the eta
invariant at zero of Dirac type operators. We prove that in a filtered algebra
the value of a projection under any residual trace depends only on the
principal part of the projection. This general, purely algebraic statement
applied to the algebra of projective pseudodifferential operators implies that
the noncommutative residue factors to a map from the twisted K-theory of the
co-sphere bundle. We use arguments from twisted K-theory to show that this map
vanishes, thus showing that the noncommutative residue of a projective
pseudodifferential projection vanishes. This also gives a very short proof in
the classical setting.
|
We give a short proof of the Gauss-Bonnet theorem for a real oriented
Riemannian vector bundle $E$ of even rank over a closed compact orientable
manifold $M$. This theorem reduces to the classical Gauss-Bonnet-Chern theorem
in the special case when $M$ is a Riemannian manifold and $E$ is the tangent
bundle of $M$ endowed with the Levi-Civita connection. The proof is based on an
explicit geometric construction of the Thom class for 2-plane bundles.
|
To synthesize peptides alongside the RNAs making the so-called RNA world,
some genetic coding involving RNA had to develop. Herein, it is proposed that
the first real-coding setup was a direct one, made up of continuous
poly-tRNA-like molecules, with each tRNA-like moiety carrying, beyond and near
its 5 prime or 3 prime end, a trinucleotide site for specific amino acid
binding: the sequence and continuity of the tRNA moieties of a particular
poly-tRNA would ensure the sequence and continuity of the amino acids of the
corresponding peptide or small protein. In parallel with these particular
entities, and enhancing their peptide-forming function, a proto-ribosome and
primitive amino acid-activation system would develop. At some stage, one
critical innovation would be the appearance of RNA fragments that could tighten
several adjacent tRNA moieties together on a particular poly-tRNA molecule, by
pairing with the second trinucleotide sequence (identical to the first one
carrying the specific amino acid-binding site) situated at, or close to, the
middle of each tRNA moiety (i.e., the present anticodon site). These RNA
fragments, acting as authentic co-ribozymes in the peptide-synthesizing
apparatus, would constitute the ancestors of the present mRNAs. Later, on these
mRNA-like guiding fragments, free tRNA forms would be additionally used, first
keeping their amino acid-binding sites, then losing them in favor of a specific
amino acid attachment at a CCA arm at their 3 prime end. Finally, these latter
mechanisms would progressively prevail, leading to the modern and universal
indirect genetic coding system. Experimental and theoretical arguments are
presented and discussed in favor of such a scenario for the origin and
evolution of genetic coding.
|
Measurements of the $^{17}$O nuclear magnetic resonance (NMR) quadrupolar
spectrum of apical oxygen in HgBa$_{2}$CuO$_{4+\delta}$ were performed over a
range of magnetic fields from 6.4 to 30\,T in the superconducting state. Oxygen
isotope exchanged single crystals were investigated with doping corresponding
to superconducting transition temperatures from 74\,K underdoped, to 78\,K
overdoped. The apical oxygen site was chosen since its NMR spectrum has narrow
quadrupolar satellites that are well separated from any other resonance.
Non-vortex contributions to the spectra can be deconvolved in the time domain
to determine the local magnetic field distribution from the vortices. Numerical
analysis using Brandt's Ginzburg-Landau theory was used to find structural
parameters of the vortex lattice, penetration depth, and coherence length as a
function of magnetic field in the vortex solid phase. From this analysis we
report a vortex structural transition near 15\,T from an oblique lattice with
an opening angle of $73^{\circ}$ at low magnetic fields to a triangular lattice
with $60^{\circ}$ stabilized at high field. The temperature for onset of vortex
dynamics has been identified with vortex lattice melting. This is independent
of the magnetic field at sufficiently high magnetic fields, similar to that
reported for YBa$_2$Cu$_3$O$_7$ and Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$,
and is correlated with the mass anisotropy of the material. This behavior is
accounted for theoretically only in the limit of very high anisotropy.
|
Diffusion Purification, purifying noised images with diffusion models, has
been widely used for enhancing certified robustness via randomized smoothing.
However, existing frameworks often grapple with the balance between efficiency
and effectiveness. While the Denoising Diffusion Probabilistic Model (DDPM)
offers an efficient single-step purification, it falls short in ensuring
purified images reside on the data manifold. Conversely, the Stochastic
Diffusion Model effectively places purified images on the data manifold but
demands solving cumbersome stochastic differential equations, while its
derivative, the Probability Flow Ordinary Differential Equation (PF-ODE),
though solving simpler ordinary differential equations, still requires multiple
computational steps. In this work, we demonstrated that an ideal purification
pipeline should, for effectiveness, generate purified images on the data manifold
that are as semantically aligned to the original images as possible and, for
efficiency, do so in a single step. Therefore, we introduced Consistency
Purification, a purifier that is Pareto-superior in efficiency and effectiveness
compared to previous work. Consistency Purification employs the consistency
model, a one-step generative model distilled from the PF-ODE, and thus can
generate on-manifold purified images with a single network evaluation. However,
the consistency model is not designed for purification, so it does not inherently
ensure semantic alignment between purified and original images. To resolve this
issue, we
further refine it through Consistency Fine-tuning with LPIPS loss, which
enables better-aligned semantic meaning while keeping the purified images on the
data manifold. Our comprehensive experiments demonstrate that our Consistency
Purification framework achieves state-of-the-art certified robustness and
efficiency compared to baseline methods.
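A schematic sketch of the pipeline described above is given below; the tiny consistency network and the pixel-wise stand-in for the LPIPS loss are placeholders, not the authors' implementation.

```python
# Schematic sketch of one-step purification plus Consistency Fine-tuning. The
# network and the perceptual loss below are toy placeholders standing in for a
# distilled consistency model and an LPIPS-style loss.
import torch
import torch.nn as nn

class TinyConsistencyModel(nn.Module):
    """Toy stand-in for a distilled consistency model (not the real one)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(channels + 1, 16, 3, padding=1),
                                 nn.ReLU(),
                                 nn.Conv2d(16, channels, 3, padding=1))

    def forward(self, x, sigma):
        # Broadcast the noise level as an extra input channel.
        s = sigma.view(-1, 1, 1, 1).expand(-1, 1, *x.shape[2:])
        return self.net(torch.cat([x, s], dim=1))

def perceptual_loss(a, b):
    """Placeholder for an LPIPS-style perceptual loss (pixel-wise L1 here)."""
    return (a - b).abs().mean(dim=(1, 2, 3))

def consistency_purify(model, x, sigma):
    """One-step purification: add Gaussian noise, map back in one network call."""
    noised = x + sigma * torch.randn_like(x)
    sigmas = torch.full((x.shape[0],), sigma, device=x.device)
    return model(noised, sigmas)

# One schematic Consistency Fine-tuning step on a hypothetical image batch.
model = TinyConsistencyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(4, 3, 32, 32) * 2 - 1            # images scaled to [-1, 1]
loss = perceptual_loss(consistency_purify(model, x, 0.25), x).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```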
|
Using the recently discovered Clifford statistics we propose a simple model
for the grand canonical ensemble of the carriers in the theory of fractional
quantum Hall effect. The model leads to a temperature limit associated with the
permutational degrees of freedom of such an ensemble.
We also relate Schur's theory of projective representations of the
permutation groups to physics, and remark on possible extensions of the second
quantization procedure.
|
The use of expert visualization or conventional clinical indices can lack
accuracy for borderline classifications. Advanced statistical approaches based on
eigen-decomposition have been mostly concerned with shape and motion indices. In
this paper, we present a new approach to identify cardiovascular diseases (CVDs)
from cine-MRI by
estimating large pools of radiomic features (statistical, shape and textural
features) encoding relevant changes in anatomical and image characteristics due
to CVDs. The calculated cine-MRI radiomic features are assessed using
sequential forward feature selection to identify the most relevant ones for
given CVD classes (e.g. myocardial infarction, cardiomyopathy, abnormal right
ventricle). Finally, advanced machine learning is applied to suitably integrate
the selected radiomics for final multi-feature classification based on Support
Vector Machines (SVMs). The proposed technique was trained and cross-validated
using 100 cine-MRI cases corresponding to five different cardiac classes from
the ACDC MICCAI 2017 challenge
\footnote{https://www.creatis.insa-lyon.fr/Challenge/acdc/index.html}. All
cases were correctly classified in this preliminary study, indicating potential
of using large-scale radiomics for MRI-based diagnosis of CVDs.
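The feature-selection and classification stage described above can be sketched with off-the-shelf tools; the synthetic radiomic features and labels below are placeholders for the real ACDC data, and the specific SVM settings are assumptions.

```python
# Illustrative sketch of sequential forward feature selection followed by an
# SVM; synthetic features/labels stand in for the real cine-MRI radiomics.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # 100 cases x 50 hypothetical radiomics
y = rng.integers(0, 5, size=100)        # 5 cardiac classes (e.g. ACDC labels)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)

print("selected features:", np.flatnonzero(selector.get_support()))
print("cross-validated accuracy:", cross_val_score(svm, X_sel, y, cv=5).mean())
```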
|
We give upper bounds on the size of the gap between the constant term and the
next non-zero Fourier coefficient of an entire modular form of given weight for
\Gamma_0(2). Numerical evidence indicates that a sharper bound holds for the
weights h \equiv 2. We derive upper bounds for the minimum positive integer
represented by level two even positive-definite quadratic forms. Our data
suggest that, for certain meromorphic modular forms and p=2,3, the p-order of
the constant term is related to the base-p expansion of the order of the pole
at infinity, and they suggest a connection between divisibility properties of
the Ramanujan tau function and those of the Fourier coefficients of 1/j.
|
The 146Sm/144Sm ratio in the early solar system has been constrained by Nd/Sm
isotope ratios in meteoritic material. Predictions of 146Sm and 144Sm
production in the gamma-process in massive stars are at odds with these
constraints, and this is partly due to deficiencies in the prediction of the
reaction rates involved. The production ratio depends almost exclusively on the
(gamma,n)/(gamma,alpha) branching at 148Gd. A measurement of
144Sm(alpha,gamma)148Gd at low energy revealed considerable discrepancies
between cross section predictions and the data. Although this reaction cross
section mainly depends on the optical alpha+nucleus potential, no global
optical potential has yet been found which can consistently describe the
results of this and similar alpha-induced reactions at the low energies
encountered in astrophysical environments. The atypically large deviation in
144Sm(alpha,gamma) and the unusual energy dependence can be explained, however,
by low-energy Coulomb excitation which is competing with compound nucleus
formation at very low energies. Considering this additional reaction channel,
the cross sections can be described with the usual optical potential
variations, compatible with findings for (n,alpha) reactions in this mass
range. Low-energy (alpha,gamma) and (alpha,n) data on other nuclei can also be
consistently explained in this way. Since Coulomb excitation does not affect
alpha-emission, the 148Gd(gamma,alpha) rate is much higher than previously
assumed. This leads to small 146Sm/144Sm stellar production ratios, in even
more pronounced conflict with the meteorite data.
|
We investigated the origin of the high reverse leakage current in light
emitting diodes (LEDs) based on (In,Ga)N/GaN nanowire (NW) ensembles grown by
molecular beam epitaxy on Si substrates. To this end, capacitance deep level
transient spectroscopy (DLTS) and temperature-dependent current-voltage (I-V)
measurements were performed on a fully processed NW-LED. The DLTS measurements
reveal the presence of two distinct electron traps with high concentrations in
the depletion region of the p-i-n junction. These band gap states are located
at energies of $570\pm20$ and $840\pm30$ meV below the conduction band minimum.
The physical origin of these deep level states is discussed. The
temperature-dependent I-V characteristics, acquired between 83 and 403 K, show
that different conduction mechanisms cause the observed leakage current. On the
basis of all these results, we developed a quantitative physical model for
charge transport in the reverse bias regime. By taking into account the mutual
interaction of variable range hopping and electron emission from Coulombic trap
states, with the latter being described by phonon-assisted tunnelling and the
Poole-Frenkel effect, we can model the experimental I-V curves in the entire
range of temperatures with a consistent set of parameters. Our model should be
applicable to planar GaN-based LEDs as well. Furthermore, possible approaches
to decrease the leakage current in NW-LEDs are proposed.
|
The 4-year light curves of 156,717 stars observed with NASA's Kepler mission
are analyzed using the AutoRegressive Planet Search (ARPS) methodology
described by Caceres et al. (2019). The three stages of processing are: maximum
likelihood ARIMA modeling of the light curves to reduce stellar brightness
variations; constructing the Transit Comb Filter periodogram to identify
transit-like periodic dips in the ARIMA residuals; Random Forest classification
trained on Kepler Team confirmed planets using several dozen features from the
analysis. Orbital periods between 0.2 and 100 days are examined. The result is
a recovery of 76% of confirmed planets, 97% when period and transit depth
constraints are added. The classifier is then applied to the full Kepler
dataset; 1,004 previously noticed and 97 new stars have light curve criteria
consistent with the confirmed planets, after subjective vetting removes clear
False Alarms and False Positive cases. The 97 Kepler ARPS Candidate Transits
mostly have periods $P<10$ days; many are UltraShort Period hot planets with
radii $<1$% of the host star. Extensive tabular and graphical output from the
ARPS time series analysis is provided to assist in other research relating to
the Kepler sample.
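The first two ARPS stages can be sketched on synthetic data as follows; the ARIMA order, the simple phase-folding dip statistic (a stand-in for the Transit Comb Filter periodogram), and the injected signal are illustrative assumptions, not the pipeline's actual settings.

```python
# Schematic sketch of ARIMA whitening of a light curve followed by a search
# for periodic transit-like dips in the residuals, on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n, cadence = 2000, 0.02                              # 2000 points, 0.02-day cadence
t = np.arange(n) * cadence
flux = 1 + 3e-4 * np.sin(2 * np.pi * t / 3.3) + 3e-4 * rng.normal(size=n)
flux[(t % 2.5) < 0.08] -= 2e-3                       # injected transits, P = 2.5 d

# Stage 1: ARIMA model of the stellar variability; keep the residuals.
resid = ARIMA(flux, order=(2, 0, 1)).fit().resid

# Stage 2: crude dip periodogram over the residuals (stand-in for the TCF).
def dip_statistic(period, n_bins=20):
    phase_bin = ((t % period) / period * n_bins).astype(int)
    means = np.array([resid[phase_bin == b].mean() for b in range(n_bins)])
    return np.median(means) - means.min()            # depth of the deepest bin

periods = np.linspace(1.5, 4.0, 251)
best = periods[np.argmax([dip_statistic(p) for p in periods])]
print(f"strongest dip at trial period {best:.2f} d (transits injected at 2.50 d)")
```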
|
In this paper, we first prove $\Delta r=2H$, where $\Delta$ is the
Laplacian operator, $r=\left( r_{1},r_{2},r_{3}\right)$ is the position vector
field, and $H$ is the mean curvature vector field of a surface $\mathcal{S}$ in
the 3-dimensional Heisenberg group $H_{3}$. Second, we classify the
ruled surfaces, generated by straight geodesic lines, that are of finite type in $H_{3}$.
The straight geodesic lines belong to $\ker \omega ,$ where $\omega $ is the
Darboux form.
|
Modern autonomous vehicle systems use complex perception and control
components. These components can rapidly change during development of such
systems, requiring constant re-testing. Unfortunately, high-fidelity
simulations of these complex systems for evaluating vehicle safety are costly.
The complexity also hinders the creation of less computationally intensive
surrogate models.
We present GAS, the first approach for creating surrogate models of complete
(perception, control, and dynamics) autonomous vehicle systems containing
complex perception and/or control components. GAS's two-stage approach first
replaces complex perception components with a perception model. Then, GAS
constructs a polynomial surrogate model of the complete vehicle system using
Generalized Polynomial Chaos (GPC). We demonstrate the use of these surrogate
models in two applications. First, we estimate the probability that the vehicle
will enter an unsafe state over time. Second, we perform global sensitivity
analysis of the vehicle system with respect to its state in a previous time
step. GAS's approach also allows for reuse of the perception model when vehicle
control and dynamics characteristics are altered during vehicle development,
saving significant time.
We consider five scenarios concerning crop management vehicles that must not
crash into adjacent crops, self-driving cars that must stay within their lane,
and unmanned aircraft that must avoid collision. Each of the systems in these
scenarios contains a complex perception or control component. Using GAS, we
generate surrogate models for these systems, and evaluate the generated models
in the applications described above. GAS's surrogate models provide an average
speedup of $3.7\times$ for safe state probability estimation (minimum
$2.1\times$) and $1.4\times$ for sensitivity analysis (minimum $1.3\times$),
while still maintaining high accuracy.
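The surrogate-construction idea can be illustrated in one uncertain dimension with a least-squares polynomial chaos fit; the toy model function and the Hermite basis and degree below are assumptions, and GAS itself handles the full multivariate vehicle state.

```python
# Toy sketch of building a polynomial chaos surrogate by least-squares
# regression in a single uncertain dimension (not GAS's full construction).
import numpy as np
from numpy.polynomial import hermite_e as H      # probabilists' Hermite basis

def expensive_model(x):
    """Stand-in for one step of the simulated vehicle system."""
    return np.sin(1.5 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)                    # samples of the uncertain input
y = expensive_model(xi)

deg = 6
Phi = H.hermevander(xi, deg)                     # design matrix He_0(xi)..He_deg(xi)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# The surrogate is now a cheap polynomial that can replace the full simulation.
x_test = rng.standard_normal(5)
print(np.c_[expensive_model(x_test), H.hermeval(x_test, coef)])
```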
|
We investigate the capability of LISA to measure the sky position of
equal-mass, nonspinning black hole binaries, combining for the first time the
entire inspiral-merger-ringdown signal, the effect of the LISA orbits, and the
complete three-channel LISA response. We consider an ensemble of systems near
the peak of LISA's sensitivity band, with a total rest mass of 2\times10^6
M_\odot, a redshift of z = 1, and randomly chosen orientations and sky
positions. We find median sky localization errors of \sim3
arcminutes. This is comparable to the field of view of powerful electromagnetic
telescopes, such as the James Webb Space Telescope, that could be used to
search for electromagnetic signals associated with merging massive black holes.
We investigate the way in which parameter errors decrease with measurement
time, focusing specifically on the additional information provided during the
merger-ringdown segment of the signal. We find that this information improves
all parameter estimates directly, rather than through diminishing correlations
with any subset of well-determined parameters. Although we have employed the
baseline LISA design for this study, many of our conclusions regarding the
information provided by mergers will be applicable to alternative mission
designs as well.
|
The upgraded IGISOL facility with JYFLTRAP, at the accelerator laboratory of
the University of Jyv\"askyl\"a, has been supplied with a new cyclotron which
will provide protons of the order of 100 {\mu}A with up to 30 MeV energy, or
deuterons with half the energy and intensity. This makes it an ideal place for
measurements of neutron-induced fission products from various actinides, in
view of proposed future nuclear fuel cycles. The groups at Uppsala University
and University of Jyv\"askyl\"a are working on the design of a neutron
converter that will be used as neutron source in fission yield studies. The
design is based on simulations with Monte Carlo codes and a benchmark
measurement that was recently performed at The Svedberg Laboratory in Uppsala.
In order to obtain a competitive count rate the fission targets will be placed
very close to the neutron converter. The goal is to have a flexible design that
will enable the use of neutron fields with different energy distributions. In
the present paper, some considerations for the design of the neutron converter
will be discussed, together with different scenarios regarding which fission
targets and neutron energies to focus on.
|
We study long-term radio/X-ray correlations in Cyg X-1. We find the
persistent existence of a compact radio jet in its soft state. This represents
a new phenomenon in black-hole binaries, in addition to compact jets in the
hard state and episodic ejections of ballistic blobs in the intermediate state.
While the radio emission in the hard state is strongly correlated with both the
soft and hard X-rays, the radio flux in the soft state is not directly
correlated with the flux of the dominant disk blackbody in soft X-rays, but
instead it is lagged by about a hundred days. We interpret the lag as occurring
in the process of advection of the magnetic flux from the donor through the
accretion disk. On the other hand, the soft-state radio flux is very tightly
correlated with the hard X-ray, 15--50 keV, flux without a measurable lag and
at the same rms. This implies that the X-ray emitting disk corona and the
soft-state jet are powered by the same process, probably magnetically.
|
In this paper we continue investigations begun in our previous works,
where we proved that the phase diagram of the Toda system on special linear groups
can be identified with the Bruhat order on the symmetric group when all the
eigenvalues of the Lax matrix are distinct, or with the Bruhat order on
permutations of a multiset if there are multiple eigenvalues. We show that
the coincidence of the phase portrait of the Toda system and the Hasse diagram of
the Bruhat order holds in the case of arbitrary simple Lie groups of rank $2$: to
this end we need only to check this property for the two remaining groups of
second rank, $Sp(4,\mathbb R)$ and the real form of $G_2$.
|
We study models with generalized inhomogeneous equation-of-state fluids, in
the context of singular inflation, focusing on the so-called Type IV singular
evolution. In the simplest case, this cosmological fluid is described by an
equation of state with constant $w$, and therefore a direct modification of
this constant $w$ fluid is achieved by using a generalized form of an equation
of state. We investigate from which models with generalized phenomenological
equation of state, a Type IV singular inflation can be generated and what the
phenomenological implications of this singularity would be. We support our
results with illustrative examples and we also study the impact of the Type IV
singularities on the slow-roll parameters and on the observational inflationary
indices, showing the consistency with Planck mission results. The unification
of singular inflation with singular dark energy era for specific generalized
fluids is also proposed.
|
Viewing Kan complexes as $\infty$-groupoids implies that pointed and
connected Kan complexes are to be viewed as $\infty$-groups. A fundamental
question is then: to what extent can one "do group theory" with these objects?
In this paper we develop a notion of a finite $\infty$-group: an $\infty$-group
with finitely many non-trivial homotopy groups which are all finite. We prove a
homotopical analog of the Sylow theorems for finite $\infty$-groups. We derive
two corollaries: the first is a homotopical analog of Burnside's fixed
point lemma for $p$-groups and the second is a "group-theoretic"
characterization of (finite) nilpotent spaces.
|
We investigate the structural stability and magnetic properties of cubic,
tetragonal and hexagonal phases of Mn3Z (Z=Ga, Sn and Ge) Heusler compounds
using first-principles density-functional theory. We propose that the cubic
phase plays an important role as an intermediate state in the phase transition
from the hexagonal to the tetragonal phases. Consequently, Mn3Ga and Mn3Ge
behave differently from Mn3Sn, because the relative energies of the cubic and
hexagonal phases are different. This result agrees with experimental
observations from these three compounds. The weak ferromagnetism of the
hexagonal phase and the perpendicular magnetocrystalline anisotropy of the
tetragonal phase obtained in our calculations are also consistent with
experiment.
|
We present the results from our two-loop calculations of masses,
decay-constants, vacuum-expectation-values and the $K_{\ell4}$ form-factors in
three-flavour Chiral Perturbation Theory (CHPT). We use this to fit the $L_i^r$
to two-loops and discuss the ensuing predictions for $\pi\pi$-threshold
parameters.
|
A multi-species generalization of the asymmetric simple exclusion process
(ASEP) is studied in ordered sequential and sub-lattice parallel updating
schemes. In this model particles hop with their own specific probabilities to
their rightmost empty site and fast particles overtake slow ones with a
definite probability. Using Matrix Product Ansatz (MPA), we obtain the relevant
algebra, and study the uncorrelated stationary state of the model both for an
open system and on a ring. A complete comparison between the physical results
in these updates and those of random sequential introduced in [20,21] is made.
|
Traditional recommendation algorithms develop techniques that can help people
to choose desirable items. However, in many real-world applications, along with
a set of recommendations, it is also essential to quantify each
recommendation's (un)certainty. The conformal recommender system uses the
experience of a user to output a set of recommendations, each associated with a
precise confidence value. Given a significance level $\varepsilon$, it provides
a bound $\varepsilon$ on the probability of making a wrong recommendation. The
conformal framework uses a key concept called \emph{nonconformity measure} that
measures the strangeness of an item concerning other items. One of the
significant design challenges of any conformal recommendation framework is
integrating nonconformity measures with the recommendation algorithm. This
paper introduces an inductive variant of a conformal recommender system. We
propose and analyze different nonconformity measures in the inductive setting.
We also provide theoretical proofs of the error bound and the time complexity.
Extensive empirical analysis on ten benchmark datasets demonstrates that the
inductive variant substantially improves the performance in computation time
while preserving the accuracy.
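A minimal sketch of the inductive conformal step is given below; the "distance from liked items" nonconformity measure and the latent item vectors are illustrative assumptions, not necessarily one of the measures analyzed in the paper.

```python
# Minimal sketch of inductive conformal recommendation: calibrate a
# nonconformity score on held-out relevant items, then attach a p-value to
# each candidate recommendation.
import numpy as np

def nonconformity(item_vec, liked_item_vecs):
    """Strangeness of an item w.r.t. items the user is known to like."""
    return float(np.min(np.linalg.norm(liked_item_vecs - item_vec, axis=1)))

def p_value(candidate_score, calibration_scores):
    """Fraction of calibration scores at least as large as the candidate's."""
    s = np.asarray(calibration_scores)
    return (np.sum(s >= candidate_score) + 1) / (len(s) + 1)

rng = np.random.default_rng(0)
liked = rng.normal(size=(20, 8))        # latent vectors of items the user liked
calibration = rng.normal(size=(50, 8))  # held-out items the user also liked
calib_scores = [nonconformity(v, liked) for v in calibration]

# Keep only candidates whose p-value exceeds the significance level epsilon,
# which bounds the probability of a wrong recommendation by epsilon.
epsilon = 0.1
candidates = rng.normal(size=(5, 8))
for i, v in enumerate(candidates):
    p = p_value(nonconformity(v, liked), calib_scores)
    print(f"candidate {i}: p = {p:.2f}, recommend = {p > epsilon}")
```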
|
In this dissertation, we prove a number of results regarding the conformal
method of finding solutions to the Einstein constraint equations. These results
include necessary and sufficient conditions for the Lichnerowicz equation to
have solutions, global supersolutions which guarantee solutions to the
conformal constraint equations for near-constant-mean-curvature (near-CMC) data
as well as for far-from-CMC data, a proof of the limit equation criterion in
the near-CMC case, as well as a model problem on the relationship between the
asymptotic constants of solutions and the ADM mass. We also prove a
characterization of the Yamabe classes on asymptotically Euclidean manifolds
and resolve the (conformally) prescribed scalar curvature problem on
asymptotically Euclidean manifolds for the case of nonpositive scalar
curvatures.
Many, though not all, of the results in this dissertation have been
previously published in [Dilts13b], [DIMM14], [DL14], [DM15], and [DGI15]. This
article is the author's Ph.D. dissertation, except for a few minor changes.
|
We point out an incorrect solution of the Dirac equation in the Coulomb
field given in a published paper. By introducing a transformation of the function,
that paper transformed the original radial first-order Dirac-Coulomb equation
into two second-order Dirac-Coulomb equations. However, each of the second-order
differential equations has a different set of energy eigenvalues. The original
paper combined the two differential equations into a single form and then gave the
corresponding energy eigenvalues. This mathematical procedure is not correct.
For the same quantum system, introducing a transformation of the function yields
two different sets of energy eigenvalues; this result violates the uniqueness of
the solution. It actually shows that the given second-order differential equations
have no solution. On the other hand, the given formal solutions of the
second-order Dirac-Coulomb equations violate the conditions for determining the
solution. Consequently, the solutions given by the author are pseudo-solutions,
and the corresponding set of energy eigenvalues is also a pseudo-eigenvalue set.
|
Context. The origin of the large star-to-star variation of the [Eu/Fe] ratios
observed in the extremely metal-poor (at [Fe/H]$\leq-3$) stars of the Galactic
halo is still a matter of debate.\\ Aims. In this paper, we explore this
problem by putting our stochastic chemical evolution model in the hierarchical
clustering framework, with the aim of explaining the observed spread in the
halo.\\ Methods. We compute the chemical enrichment of Eu occurring in the
building blocks that have possibly formed the Galactic halo. In this framework,
the enrichment from neutron star mergers can be influenced by the dynamics of
the binary systems in the gravitational potential of the original host galaxy.
In the least massive systems, the neutron stars can merge outside the host
galaxy and so only a small fraction of newly produced Eu can be retained by the
parent galaxy itself.\\ Results. In the framework of this new scenario, the
accreted merging neutron stars are able to explain the presence of stars with
sub-solar [Eu/Fe] ratios at [Fe/H]$\leq-3$, but only if we assume a delay time
distribution for merging of the neutron stars $\propto t^{-1.5}$. We confirm
the correlation between the dispersion of [Eu/Fe] at a given metallicity and
the fraction of massive stars which give origin to neutron star mergers. The
mixed scenario, where both neutron star mergers and magneto-rotational
supernovae do produce Eu, can explain the observed spread in the Eu abundance
also for a delay time distribution for mergers going either as $\propto t^{-1}$
or $\propto t^{-1.5}$.
|
Several specific Franklin squares and magic squares are decomposed into their
quotient and remainder squares. The results support the conjecture that
Franklin used the Eulerian composition method to construct many of his squares.
This method also can be used to construct new Franklin squares as illustrated
herein.
|
We provide a complete characterization of theories of tracial von Neumann
algebras that admit quantifier elimination. We also show that the theory of a
separable tracial von Neumann algebra $\mathcal{N}$ is never model complete if
its direct integral decomposition contains $\mathrm{II}_1$ factors
$\mathcal{M}$ such that $M_2(\mathcal{M})$ embeds into an ultrapower of
$\mathcal{M}$. The proof in the case of $\mathrm{II}_1$ factors uses an
explicit construction based on random matrices and quantum expanders.
|
Mirror symmetry of a wave system imposes corresponding even or odd parity on
its eigenmodes. For a discrete system, eigenmode parity on a specific subset of
sites may also originate from so-called latent symmetry. This symmetry is
hidden, but can be revealed in an effective model upon reduction of the
original system onto the latently symmetric sites. Here we show how latent
symmetries can be leveraged for continuous wave setups in the form of acoustic
networks. These are systematically designed to have point-wise amplitude parity
between selected waveguide junctions for all low frequency eigenmodes. We
further develop a modular principle: latently symmetric networks can be
interconnected to feature multiple latently symmetric junction pairs, allowing
the design of arbitrarily large latently symmetric networks. By connecting such
networks to a mirror symmetric subsystem, we design asymmetric setups featuring
eigenmodes with domain-wise parity. Bridging the gap between discrete and
continuous models, our work takes a pivotal step towards exploiting hidden
geometrical symmetries in realistic wave setups.
|
The upper bound on the ratio of the proton structure functions $F_L/F_2$
tested in the recent paper "The New $F_L$ Measurement from HERA and the Dipole
Model", contrary to what is said therein, does not provide a model-independent
"rigorous" experimental test of the color-dipole picture. The validity of the
theoretical upper bound depends on an ad hoc assumption on the dipole cross
section. -- The analysis in the paper "The New $F_L$ Measurement from HERA and
the Dipole Model" can be reinterpreted as an additional confirmation of the
absolute model-independent prediction from the color-dipole picture of $F_L =
0.27 F_2$.
|
This paper presents an exclusive classification of the largest crashes in Dow
Jones Industrial Average (DJIA), SP500 and NASDAQ in the past century. Crashes
are objectively defined as the top-rank filtered drawdowns (loss from the last
local maximum to the next local minimum disregarding noise fluctuations), where
the size of the filter is determined by the historical volatility of the index.
It is shown that {\it all} crashes can be linked to either an external shock,
{\it e.g.}, outbreak of war, {\it or} a log-periodic power law (LPPL) bubble
with an empirically well-defined complex value of the exponent. Conversely,
with one sole exception {\it all} previously identified LPPL bubbles are
followed by a top-rank drawdown. As a consequence, the analysis presented
suggests a one-to-one correspondence between market crashes, defined as top-rank
filtered drawdowns, on one hand and surprising news and LPPL bubbles on the
other. We attribute this correspondence to the Efficient Market Hypothesis being
effective on two quite different time scales, depending on whether the market
instability the crash represents is internally or externally generated.
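The crash definition used above can be sketched as follows; the volatility-based filter rule and the synthetic price series are simplified stand-ins for the paper's drawdown-filtering procedure and the actual index data.

```python
# Toy sketch of extracting filtered drawdowns (peak-to-trough losses,
# disregarding counter-moves below a volatility-based threshold).
import numpy as np

def filtered_drawdowns(prices, n_sigma=3.0):
    """Peak-to-trough log losses, ignoring fluctuations below a noise filter
    whose size is set by the historical volatility of the series."""
    log_p = np.log(prices)
    eps = n_sigma * np.diff(log_p).std()
    drawdowns, peak, trough = [], log_p[0], log_p[0]
    for x in log_p[1:]:
        if peak - trough > eps and x > trough + eps:
            drawdowns.append(peak - trough)   # a rebound > eps closes the drawdown
            peak = trough = x
        elif x > peak:
            peak = trough = x                 # new high reached within the filter
        elif x < trough:
            trough = x                        # the drawdown deepens
    return sorted(drawdowns, reverse=True)    # top-rank drawdowns first

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(5e-4, 0.01, size=2500)))
prices[1500:1510] *= np.exp(-np.linspace(0.025, 0.25, 10))   # injected crash
print(filtered_drawdowns(prices)[:3])          # the injected crash should rank first
```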
|
Prior gradient-based attribution-map methods rely on handcrafted propagation
rules for the non-linear/activation layers during the backward pass, so as to
produce gradients of the input and then the attribution map. Despite the
promising results achieved, such methods are sensitive to the non-informative
high-frequency components and lack adaptability for various models and samples.
In this paper, we propose a dedicated method to generate attribution maps that
allow us to learn the propagation rules automatically, overcoming the flaws of
the handcrafted ones. Specifically, we introduce a learnable plugin module,
which enables adaptive propagation rules for each pixel, to the non-linear
layers during the backward pass for mask generation. The masked input image is
then fed into the model again to obtain new output that can be used as a
guidance when combined with the original one. The introduced learnable module
can be trained under any auto-grad framework with higher-order differential
support. As demonstrated on five datasets and six network architectures, the
proposed method yields state-of-the-art results and gives cleaner and more
visually plausible attribution maps.
|
In this paper, we analyze the two geometrical passages in Plato's Meno, (81c
-- 85c) and (86e4 -- 87b2), from the points of view of a geometer in Plato's
time and today. We give, in our opinion, a complete explanation of the
difficult second geometrical passage. Our explanation solves an ingenious
geometry puzzle that has baffled readers of Plato's Meno for over 2,400 years.
|
Complexity analysis has become a common task in supervisory control. However,
many results of interest are spread across different topics. The aim of this
paper is to bring several interesting results from complexity theory and to
illustrate their relevance to supervisory control by proving new nontrivial
results concerning nonblockingness in modular supervisory control of discrete
event systems modeled by finite automata.
|
We report detailed shape measurements of the tips of three-dimensional
ammonium chloride dendrites grown from supersaturated aqueous solution. For
growth at small supersaturation, we compare two different models: parabolic
with a fourth-order correction, and power law. Neither is ideal, but the
fourth-order fit appears to provide the most robust description of both the tip
shape and position for this material. For that fit, the magnitude of the
fourth-order coefficient is about half of the theoretically expected value.
|
A filament channel (FC), a plasma volume where the magnetic field is primarily
aligned with the polarity inversion line, is believed to be the pre-eruptive
configuration of coronal mass ejections. Nevertheless, evidence for how the FC
is formed is still elusive. In this paper, we present a detailed study on the
build-up of an FC to understand its formation mechanism. The New Vacuum Solar
Telescope of Yunnan Observatories and the Optical and Near-Infrared Solar Eruption
Tracer of Nanjing University, as well as the AIA and HMI on board the Solar
Dynamics Observatory, are used to study the growth process of the FC.
Furthermore, we reconstruct the non-linear force-free field (NLFFF) of the
active region using the regularized Biot-Savart laws (RBSL) and the
magnetofrictional method to reveal the three-dimensional (3D) magnetic field
properties of the FC. We find that partial filament materials are quickly
transferred to longer magnetic field lines formed by small-scale magnetic
reconnection, as evidenced by dot-like H{\alpha}/EUV brightenings and
subsequent bidirectional outflow jets, as well as untwisting motions. The
H{\alpha}/EUV bursts appear repeatedly at the same location and are closely
associated with flux cancellation, which occurs between two small-scale
opposite polarities and is driven by shearing and converging motions. The 3D
NLFFF model reveals that the reconnection takes place in a hyperbolic flux tube
that is located above the flux cancellation site and below the FC. The FC is
gradually built up toward a twisted flux rope via a series of small-scale
reconnection events that occur intermittently prior to the eruption.
|
Galactic rotation curves are often considered the first robust evidence for
the existence of dark matter. However, even in the presence of a dark matter
halo, other galactic-scale observations, such as the Baryonic Tully-Fisher
Relation and the Radial Acceleration Relation, remain challenging to explain.
This has motivated long-distance, infrared modifications to gravity as an
alternative to the dark matter hypothesis as well as various DM theories with
similar phenomenology. In general, the standard lore has been that any model
that reduces to the phenomenology of MOdified Newtonian Dynamics (MOND) on
galactic scales explains essentially all galaxy-scale observables. We present a
framework to test precisely this statement using local Milky Way observables,
including the vertical acceleration field, the rotation curve, the baryonic
surface density, and the stellar disk profile. We focus on models that predict
scalar amplifications of gravity, i.e., models that increase the magnitude but
do not change the direction of the gravitational acceleration. We find that
models of this type are disfavored relative to a simple dark matter halo model
because the Milky Way data requires a substantial amplification of the radial
acceleration with little amplification of the vertical acceleration. We
conclude that models which result in a MOND-like force struggle to
simultaneously explain both the rotational velocity and vertical motion of
nearby stars in the Milky Way.
|
For each nonempty integer partition $\pi$, we define the maximal excludant of
$\pi$ to be the largest nonnegative integer smaller than the largest part of
$\pi$ that is not a part of $\pi$. Let $\sigma\!\operatorname{maex}(n)$ be the
sum of maximal excludants over all partitions of $n$. We show that the
generating function of $\sigma\!\operatorname{maex}(n)$ is closely related to a
mock theta function studied by Andrews \textit{et al.} and Cohen. Further, we
show that, as $n\to \infty$, $\sigma\!\operatorname{maex}(n)$ is asymptotic to
the sum of largest parts of all partitions of $n$. Finally, the expectation of
the difference of the largest part and the maximal excludant over all
partitions of $n$ is shown to converge to $1$ as $n\to \infty$.
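The definitions above can be checked directly by brute force for small n; the sketch below enumerates partitions, computes the maximal excludant, and compares sigma_maex(n) with the sum of largest parts.

```python
# Brute-force check of the definitions above for small n.
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples (small n only)."""
    if n == 0:
        yield ()
        return
    if max_part is None or max_part > n:
        max_part = n
    for k in range(max_part, 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def maex(pi):
    """Largest nonnegative integer below the largest part that is not a part."""
    parts = set(pi)
    return max(m for m in range(max(pi)) if m not in parts)

def sigma_maex(n):
    return sum(maex(pi) for pi in partitions(n))

def sum_largest_parts(n):
    return sum(max(pi) for pi in partitions(n))

# The average of (largest part - maex) over all partitions should drift toward
# 1 as n grows, in line with the asymptotic statements above.
for n in range(2, 16):
    num_partitions = sum(1 for _ in partitions(n))
    diff = sum_largest_parts(n) - sigma_maex(n)
    print(n, sigma_maex(n), sum_largest_parts(n), round(diff / num_partitions, 3))
```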
|
Text relevance, or text matching of query and product, is an essential
technique for an e-commerce search system to ensure that the displayed
products match the intent of the query. Many studies focus on improving the
performance of the relevance model in the search system. Recently, pre-trained
language models like BERT have achieved promising performance on the text
relevance task. While these models perform well on the offline test dataset,
there are still obstacles to deploying the pre-trained language model in the
online system due to its high latency. The two-tower model is extensively
employed in industrial scenarios, owing to its ability to harmonize performance
with computational efficiency. Regrettably, such models present an opaque
``black box'' nature, which prevents developers from making special
optimizations. In this paper, we propose the deep Bag-of-Words (DeepBoW) model, an
efficient and interpretable relevance architecture for Chinese e-commerce. Our
approach encodes the query and the product into sparse BoW
representations, each a set of word-weight pairs, where the weight represents the
importance or relevance score between the corresponding word and the raw
text. The relevance score is measured by accumulating the weights of the words
matched between the sparse BoW representations of the query and the product.
Compared to the popular dense distributed representation, which usually suffers
from its black-box nature, the main advantage of the proposed representation
model is that it is highly explainable and interventionable, which is a
significant advantage for the deployment and operation of online search engines.
Moreover, the online efficiency of the proposed model is even better than the
most efficient inner product form of dense representation ...
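The scoring rule described above can be illustrated with a toy sparse BoW example; the word weights and the product-of-weights accumulation below are illustrative assumptions, since in DeepBoW the weights are learned and the exact aggregation may differ.

```python
# Toy illustration of sparse BoW relevance scoring: relevance accumulates over
# the words shared by the query and the product representations.
def bow_relevance(query_bow, product_bow):
    """Sum of query_weight * product_weight over the matched words."""
    return sum(w * product_bow[word]
               for word, w in query_bow.items() if word in product_bow)

query_bow = {"wireless": 0.9, "mouse": 0.8, "ergonomic": 0.3}
product_a = {"wireless": 0.7, "mouse": 0.9, "usb": 0.2}
product_b = {"keyboard": 0.8, "mechanical": 0.6, "usb": 0.3}

print(bow_relevance(query_bow, product_a))   # matched words: wireless, mouse
print(bow_relevance(query_bow, product_b))   # no overlap -> 0
```

Because each contribution is tied to a specific word, the per-word terms can be inspected or adjusted directly, which is the explainability and interventionability the abstract emphasizes.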
|
A nonzero rational number is called a cube sum if it is of form $a^3+b^3$
with $a,b\in \mathbb{Q}^\times$. In this paper, we prove that for any odd
integer $k\geq 1$, there exist infinitely many cube-free odd integers $n$ with
exactly $k$ distinct prime factors such that $2n$ is a cube sum (resp. not a
cube sum). We also give a general construction of Heegner points and obtain an
explicit Gross-Zagier formula which is used to prove the Birch and
Swinnerton-Dyer conjecture for certain elliptic curve related to the cube sum
problem.
|
We investigate the fluctuations of thermodynamic state-variables in
compressible aerodynamic wall-turbulence, using results of direct numerical
simulation (DNS) of compressible turbulent plane channel flow. The basic
transport equations governing the behaviour of thermodynamic variables
(density, pressure, temperature and entropy) are reviewed and used to derive
the exact transport equations for the variances and fluxes (transport by the
fluctuating velocity field) of the thermodynamic fluctuations. The scaling with
Reynolds and Mach number of compressible turbulent plane channel flow is
discussed. Correlation coefficients and higher-order statistics of the
thermodynamic fluctuations are examined. Detailed budgets of the
transport equations for the variances and fluxes of the thermodynamic variables
from a well-resolved DNS are analysed. Implications of these results both to
the understanding of the thermodynamic interactions in compressible
wall-turbulence and to possible improvements in statistical modelling are
assessed. Finally, the required extension of existing DNS data to fully
characterise this canonical flow is discussed.
|
We report that object 282P/(323137) 2003 BM80 is undergoing a sustained activity
outburst, lasting over 15 months thus far. These findings stem in part from our
NASA Partner Citizen Science project Active Asteroids
(http://activeasteroids.net), which we introduce here. We acquired new
observations of 282P via our observing campaign (Vatican Advanced Technology
Telescope, Lowell Discovery Telescope, and the Gemini South telescope),
confirming 282P was active on UT 2022 June 7, some 15 months after 2021 March
images showed activity in the 2021/2022 epoch. We classify 282P as a member of
the Quasi-Hilda Objects, a group of dynamically unstable objects found in an
orbital region similar to, but distinct in their dynamical characteristics from,
the Hilda asteroids (objects in 3:2 resonance with Jupiter). Our dynamical
simulations show 282P has undergone at least five close encounters with Jupiter
and one with Saturn over the last 180 years. 282P was most likely a Centaur or
Jupiter Family Comet (JFC) 250 years ago. In 350 years, following some 15
strong Jovian interactions, 282P will most likely migrate to become a JFC or,
less likely, a main-belt asteroid. These migrations highlight a dynamical
pathway connecting Centaurs and JFCs with Quasi-Hildas and, potentially, active
asteroids. Synthesizing these results with our thermodynamical modeling and new
activity observations, we find volatile sublimation is the primary activity
mechanism. Observations of a quiescent 282P, which we anticipate will be
possible in 2023, will help confirm our hypothesis by measuring a rotation
period and ascertaining spectral type.
|
We train a bilingual Arabic-Hebrew language model using a transliterated
version of Arabic texts in Hebrew, to ensure both languages are represented in
the same script. Given the morphological and structural similarities and the
extensive number of cognates shared between Arabic and Hebrew, we assess the
performance of a language model that employs a unified script for both
languages, on machine translation which requires cross-lingual knowledge. The
results are promising: our model outperforms a contrasting model which keeps
the Arabic texts in the Arabic script, demonstrating the efficacy of the
transliteration step. Despite being trained on a dataset approximately 60%
smaller than that of other existing language models, our model appears to
deliver comparable performance in machine translation across both translation
directions.
|
Skin cancer, a deadly form of cancer, exhibits a 23\% survival rate in the
USA when diagnosed late. Early detection can significantly increase the
survival rate, and facilitate timely treatment. Accurate biomedical image
classification is vital in medical analysis, aiding clinicians in disease
diagnosis and treatment. Deep learning (DL) techniques, such as convolutional
neural networks and transformers, have revolutionized clinical decision-making
automation. However, computational cost and hardware constraints limit the
implementation of state-of-the-art DL architectures. In this work, we explore a
new type of neural network that does not need backpropagation (BP), namely the
Forward-Forward Algorithm (FFA), for skin lesion classification. While FFA is
claimed to use very low-power analog hardware, BP still tends to be superior in
terms of classification accuracy. In addition, our experimental results suggest
that the combination of FFA and BP can be a better alternative to achieve a
more accurate prediction.
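For readers unfamiliar with the Forward-Forward Algorithm, the sketch below illustrates its core idea: each layer is trained locally to yield high "goodness" (here, the mean squared activation) for positive samples and low goodness for negative samples, with no backward pass through the whole network. This is a generic illustration, not the authors' implementation; the layer sizes, threshold, and optimizer are assumptions.

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One Forward-Forward layer trained with a local goodness objective."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction (not the magnitude) is passed on.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = mean squared activation; push it above the threshold for
        # positive data and below it for negative data. Gradients stay local.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = torch.log1p(torch.exp(torch.cat(
            [self.threshold - g_pos, g_neg - self.threshold]))).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        # Detach so the next layer trains on fixed inputs.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```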
|
We analyze whether a multidimensional parity check (MDPC) or a Reed-Solomon
(RS) code in combination with an auxiliary channel can improve the throughput
and extend the THz transmission distance. While channel quality is addressed by
various coding approaches, and an effective THz system configuration is enabled
by other approaches with additional channels, their combination is new with the
potential for significant improvements in quality of the data transmission. Our
specific solution is designed to correct data bits at the physical layer by
using a low complexity erasure code (MDPC or RS), whereby original and parity
data are transferred over two separate and parallel THz channels, including one
main channel and one additional channel. The results are analyzed theoretically
to show that our new solution can improve throughput, support higher modulation
levels, and transfer data over longer distances with THz communications.
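To illustrate the low-complexity erasure-coding idea, the sketch below shows a two-dimensional parity check (a simple instance of an MDPC code) in which the data block would travel on the main THz channel and the row/column parities on the auxiliary channel. The framing, block size, and single-error correction shown here are illustrative assumptions, not the paper's system design.

```python
import numpy as np

def mdpc_encode(bits, rows, cols):
    """2-D parity-check encoding: return the data block plus row and column parities."""
    block = np.asarray(bits, dtype=np.uint8).reshape(rows, cols)
    row_parity = block.sum(axis=1) % 2   # parities could go on the auxiliary channel
    col_parity = block.sum(axis=0) % 2
    return block, row_parity, col_parity

def mdpc_correct_single_error(block, row_parity, col_parity):
    """Locate and flip a single corrupted bit using mismatched row/column parities."""
    r = np.flatnonzero(block.sum(axis=1) % 2 != row_parity)
    c = np.flatnonzero(block.sum(axis=0) % 2 != col_parity)
    if r.size == 1 and c.size == 1:
        block[r[0], c[0]] ^= 1
    return block

data = np.random.randint(0, 2, 12)
block, rp, cp = mdpc_encode(data, 3, 4)
received = block.copy(); received[1, 2] ^= 1   # corrupt one bit on the main channel
print(np.array_equal(mdpc_correct_single_error(received, rp, cp), block))
```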
|
This paper presents the opto-mechanical integration and alignment, functional
and optical performance verification of the NIR arm of Son Of X-Shooter (SOXS)
instrument. SOXS will be a single object spectroscopic facility for the ESO-NTT
3.6-m telescope, made of two high-efficiency spectrograph arms, able to cover
the spectral range 350-2050 nm with a mean resolving power R~4500. In
particular, the NIR arm is a cryogenic echelle cross-dispersed spectrograph
spanning the 780-2050 nm range. We describe the integration and alignment
method performed to assemble the different opto-mechanical elements and their
installation on the NIR vacuum vessel, which mostly relies on mechanical
characterization. The tests done to assess the image quality, linear dispersion,
and order traces in laboratory conditions are summarized. The full optical
performance verification, namely echellogram format, image quality and
resulting spectral resolving power in the whole NIR arm (optical path and
science detector) is detailed. Such verification is one of the most relevant
prerequisites for the subsequent full instrument assembly and provisional
acceptance in Europe milestone, foreseen in 2024.
|
Frequency combs have revolutionized the field of optical spectroscopy,
enabling researchers to probe molecular systems with a multitude of accurate
and precise optical frequencies. While there have been tremendous strides in
direct frequency comb spectroscopy, these approaches have been unable to record
high resolution spectra on the nanosecond timescale characteristic of many
physicochemical processes. Here we demonstrate a new approach to optical
frequency comb generation in which a pair of electro-optic combs is produced in
the near-infrared and subsequently transferred with high mutual coherence and
efficiency into the mid-infrared within a single optical parametric oscillator.
The high power, mutual coherence, and agile repetition rates of these combs as
well as the large mid-infrared absorption of many molecular species enable
fully resolved spectral transitions to be recorded in timescales as short as 20
ns. We have applied this approach to study the rapid dynamics occurring within
a supersonic pulsed jet; however, we note that this method is widely applicable
to fields such as chemical and quantum physics, atmospheric chemistry,
combustion science, and biology.
|
In previous work, we have combined computable structure theory and
algorithmic learning theory to study which families of algebraic structures are
learnable in the limit (up to isomorphism). In this paper, we measure the
computational power that is needed to learn finite families of structures. In
particular, we prove that, if a family of structures is both finite and
learnable, then any oracle which computes the Halting set is able to achieve
such learning. On the other hand, we construct a pair of structures that is
learnable but that no computable learner can learn.
|
We continue the works of Gurevich-Shelah and Lifsches-Shelah by showing that
it is consistent with ZFC that the first-order theory of random graphs is not
interpretable in the monadic theory of all chains. It is provable from ZFC that
the theory of random graphs is not interpretable in the monadic second order
theory of short chains (hence, in the monadic theory of the real line).
|
We theoretically investigate spin transfer between a system of
quasiequilibrated Bose-Einstein condensed magnons in an insulator and a
conductor in direct contact with it. While charge transfer is prohibited across
the interface, spin transport arises from the exchange coupling between
insulator and conductor spins. In the normal insulator phase, spin transport is
governed
solely by the presence of thermal and spin-diffusive gradients; the presence of
Bose-Einstein condensation (BEC), meanwhile, gives rise to a
temperature-independent condensate spin current. Depending on the thermodynamic
bias of the system, spin may flow in either direction across the interface,
engendering the possibility of a dynamical phase transition of magnons. We
discuss experimental feasibility of observing a BEC steady state (fomented by a
spin Seebeck effect), which is contrasted to the more familiar spin-transfer
induced classical instabilities.
|
Strong Stark splitting, which is nearly independent of the R-ion
replacement, has been observed through a photoluminescence investigation of the
electronic ferroelectric Er1-xYbxFe2O4 (x=0, 0.8, 0.9 and 0.95). Initially,
multiple radiative decay channels were investigated, especially the
visible transition 4F9/2-->4I15/2, for which a quenching effect was
observed. A series of small non-Raman peaks was observed superimposed on
a broadband photoluminescence spectrum, which we tentatively attribute to Stark
splitting. The splitting of the 4F9/2 and 4I15/2 levels is
found to be 54 meV and 66 meV, respectively. This unusually large Stark splitting
in the visible range indicates the existence of a strong local field originating
from the W-layer in the charge-frustrated ErFe2O4.
|
We present a novel generative 3D modeling system, coined CraftsMan, which can
generate high-fidelity 3D geometries with highly varied shapes, regular mesh
topologies, and detailed surfaces, and, notably, allows for refining the
geometry in an interactive manner. Despite the significant advancements in 3D
generation, existing methods still struggle with lengthy optimization
processes, irregular mesh topologies, noisy surfaces, and difficulties in
accommodating user edits, consequently impeding their widespread adoption and
implementation in 3D modeling software. Our work is inspired by the craftsman,
who usually roughs out the holistic figure of the work first and elaborates the
surface details subsequently. Specifically, we employ a 3D native diffusion
model, which operates on latent space learned from latent set-based 3D
representations, to generate coarse geometries with regular mesh topology in
seconds. In particular, this process takes as input a text prompt or a
reference image and leverages a powerful multi-view (MV) diffusion model to
generate multiple views of the coarse geometry, which are fed into our
MV-conditioned 3D diffusion model for generating the 3D geometry, significantly
improving robustness and generalizability. Following that, a normal-based
geometry refiner is used to significantly enhance the surface details. This
refinement can be performed automatically, or interactively with user-supplied
edits. Extensive experiments demonstrate that our method achieves high efficacy
in producing superior-quality 3D assets compared to existing methods. HomePage:
https://craftsman3d.github.io/, Code: https://github.com/wyysf-98/CraftsMan
|
The H-T phase diagram of CuGeO$_3$ has been determined, for different values
of the hydrostatic pressure, utilizing optical absorption spectroscopy on the
Cu$^{2+}$ d-d transitions. It is shown that the intensity of the related zero
phonon line transition is very sensitive to the local environment of Cu$^{2+}$,
allowing for precise determination of all phase transitions. It is found that
the phase diagrams at various pressures do not scale according to the
Cross-Fischer theory. An alternative scaling is proposed.
|
Haptic interfaces have tapped into the sense of touch to assist multimodal music
learning. We have recently seen various improvements of interface design on
tactile feedback and force guidance aiming to make instrument learning more
effective. However, most interfaces are still quite static; they cannot yet
sense the learning progress and adjust the tutoring strategy accordingly. To
solve this problem, we contribute an adaptive haptic interface based on the
latest design of haptic flute. We first adopted a clutch mechanism to enable
the interface to turn on and off the haptic control flexibly in real time. The
interactive tutor is then able to follow human performances and apply the
"teacher force" only when the software instructs so. Finally, we incorporated
the adaptive interface with a step-by-step dynamic learning strategy.
Experimental results showed that dynamic learning dramatically outperforms
static learning, which boosts the learning rate by 45.3% and shrinks the
forgetting chance by 86%.
|
Among critical infrastructures, power grids and communication infrastructure
are identified as uniquely critical since they enable the operation of all
other sectors. Due to their vital role, the research community has undertaken
extensive efforts to understand the complex dynamics and resilience
characteristics of these infrastructures, albeit independently. However, power
and communication infrastructures are also interconnected, and the nature of
the Internet's dependence on power grids is poorly understood.
In this paper, we take the first step toward characterizing the role of power
grids in Internet resilience by analyzing the overlap of global power and
Internet infrastructures. We investigate the impact of power grid failures on
Internet availability and find that nearly $65\%$ of the public Internet
infrastructure components are concentrated in a few ($< 10$) power grid failure
zones. More importantly, power grid dependencies severely limit the number of
disjoint availability zones of cloud providers. When dependency on grids
serving data center locations is taken into account, the number of isolated AWS
Availability Zones reduces from 87 to 19. Building upon our findings, we
develop NetWattZap, an Internet resilience analysis tool that generates power
grid dependency-aware deployment suggestions for Internet infrastructure and
application components, which can also take into account a wide variety of user
requirements.
|
Pd(100) ultrathin films show ferromagnetism induced by the confinement of
electrons in the film, i.e., the quantum-well mechanism. In this study, we
investigate the effect of the change in the interface structure between a Pd
film and SrTiO3 substrate on quantum-well induced ferromagnetism using the
structural phase transition of SrTiO3. During repeated measurement of
temperature-dependent magnetization of the Pd/SrTiO3 system, cracks were
induced in the Pd overlayer near the interface region by the structural phase
transition of SrTiO3, thereby changing the film-thickness dependence of the
magnetic moment. This is explained by the concept that as the magnetic moment
in Pd(100) changed, so too did the thickness of the quantum-well. In addition,
we observed that the ferromagnetism in the Pd(100) disappeared with the
accumulation of cracks due to the repetition of the temperature cycle through
the phase-transition temperature. This suggests that lowering the crystallinity
of the interface structure by producing a large number of cracks has a negative
effect on quantum-well induced ferromagnetism.
|
We present results of a Monte Carlo study of the equilibrium dynamics of the
one dimensional long-range Ising spin glass model. By tuning a parameter
$\sigma$, this model interpolates between the mean field
Sherrington-Kirkpatrick model and a proxy of the finite dimensional
Edwards-Anderson model. Activated scaling fits for the behavior of the
relaxation time $\tau$ as a function of the number of spins $N$ (namely
$\ln(\tau)\propto N^{\psi}$) give values of $\psi$ that are not stable against
the inclusion of subleading corrections. Critical scaling ($\tau\propto N^{\rho}$)
gives more stable fits, at least in the non-mean-field region. We also present
results on the scaling of the time decay of the critical remanent magnetization
of the Sherrington-Kirkpatrick model, a case where the simulation can be done
with quite large systems and that shows the difficulties in obtaining precise
values for dynamical exponents in spin glass models.
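To make the two fitting forms concrete, here is a minimal sketch of how such fits could be set up. The data are synthetic, generated from an activated-scaling law purely for illustration; the paper's fitting procedure, including subleading corrections, is more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
N = np.array([64, 128, 256, 512, 1024], dtype=float)
# Synthetic relaxation times standing in for the Monte Carlo results.
tau = np.exp(0.5 + 0.8 * N**0.35 + rng.normal(0, 0.05, N.size))

# Activated scaling fit: ln(tau) = a + c * N**psi
act = lambda n, a, c, psi: a + c * n**psi
p_act, _ = curve_fit(act, N, np.log(tau), p0=[0.0, 1.0, 0.3])

# Critical scaling fit: tau ~ N**rho (a straight line in log-log coordinates)
rho, logb = np.polyfit(np.log(N), np.log(tau), 1)
print("activated psi =", p_act[2], "  critical rho =", rho)
```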
|
The adoption of advanced deep learning (DL) architecture in stuttering
detection (SD) tasks is challenging due to the limited size of the available
datasets. To this end, this work introduces the application of speech
embeddings extracted with pre-trained deep models trained on massive audio
datasets for different tasks. In particular, we explore audio representations
obtained using emphasized channel attention, propagation, and
aggregation-time-delay neural network (ECAPA-TDNN) and Wav2Vec2.0 model trained
on the VoxCeleb and LibriSpeech datasets, respectively. After extracting the
embeddings, we benchmark several traditional classifiers, such as k-nearest
neighbors, Gaussian naive Bayes, and a neural network, on the
stuttering detection tasks. In comparison to the standard SD system trained
only on the limited SEP-28k dataset, we obtain a relative improvement of 16.74%
in overall accuracy over the baseline. Finally, we show that
combining the two embeddings and concatenating multiple layers of Wav2Vec2.0 can
further improve SD performance by up to 1% and 2.64%, respectively.
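A minimal sketch of the embedding-plus-classifier pipeline is given below, using the publicly available Wav2Vec2.0 model from the transformers library and a k-nearest-neighbor classifier from scikit-learn. The checkpoint name, mean pooling over the last hidden layer, and the placeholder audio/labels are illustrative assumptions, not necessarily the choices made in the paper.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.neighbors import KNeighborsClassifier

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

def embed(waveform_16k):
    """Mean-pool the last hidden state of Wav2Vec2.0 into a fixed-size embedding."""
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# In practice the clips and labels would come from a stuttering corpus such as SEP-28k.
waves = [np.random.randn(16000).astype(np.float32) for _ in range(8)]   # placeholder audio
labels = [0, 1, 0, 1, 0, 1, 0, 1]                                       # placeholder classes

X = np.stack([embed(w) for w in waves])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(X[:2]))
```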
|
It is stated by C. Simon, quant-ph/0410032, that the definition of
"classicality" used in quant-ph/0310116 is "much narrower than Bell's concept
of local hidden variables" and that, in the separable quantum case, the
validity of the perfect correlation form of the original Bell inequality is
necessarily linked with "the assumption of perfect correlations if the same
(quantum) observable is measured on both sides". Here, I prove that these and
other statements in quant-ph/0410032 are misleading.
|
It is argued that D=10 type II strings and M-theory in D=11 have D-5 branes
and 9-branes that are not standard p-branes coupled to anti-symmetric tensors.
The global charges in a D-dimensional theory of gravity consist of a momentum
$P_M$ and a dual D-5 form charge $K_{M_1...M_{D-5}}$, which is related to the
NUT charge. On dimensional reduction, P gives the electric charge and K the
magnetic charge of the graviphoton. The charge K is constructed and shown to
occur in the superalgebra and BPS bounds in $D\ge 5$, and leads to a NUT-charge
modification of the BPS bound in D=4. $K$ is carried by Kaluza-Klein monopoles,
which can be regarded as D-5 branes. Supersymmetry and U-duality imply that the
type IIB theory has (p,q) 9-branes. Orientifolding with 32 (0,1) 9-branes gives
the type I string, while modding out by a related discrete symmetry with 32
(1,0) 9-branes gives the SO(32) heterotic string. Symmetry enhancement, the
effective world-volume theories and the possibility of a twelve dimensional
origin are discussed.
|
We present an extensive study of the quasi-periodic perturbations in the time
profiles of the line-of-sight (LOS) magnetic field in 10x10 sub-areas of a
solar plage region (corresponding to a facula on the photosphere). The
perturbations are found to be associated with enhancement of He I 10830 A
absorption in a moss region, which is connected to loops with million-degree
plasma. FFT analysis of the perturbations gives a spectrum similar to
that of the Doppler velocity: a number of discrete periods around 5 minutes. The
amplitudes of the magnetic perturbations are found to be proportional to the
magnetic field strength over these sub-areas. In addition, the magnetic
perturbations lag by a quarter cycle in phase behind the p-mode
Doppler velocity. We show that these relationships can be well explained with an
MHD solution for magneto-acoustic oscillations in a high-$\beta$ plasma.
Observational analysis also shows that, for the two regions with the stronger
and weaker magnetic field, the perturbations are always anti-phased. All
findings show that the magnetic perturbations are actually magneto-acoustic
oscillations on the solar surface, the photosphere, powered by p-mode
oscillations. The findings may provide a new diagnostic tool for exploring the
relationship between magneto-acoustic oscillations and the heating of solar
upper atmosphere, as well as their role in helioseismology.
|
A Darboux transformation is constructed for the superfields of the super
sine-Gordon equation and the superfields of the associated linear problem. The
Darboux transformation is shown to be related to the super B\"{a}cklund
transformation and is further used to obtain $N$ super soliton solutions.
|
This paper gives a PDE for the multi-time joint probability of the Airy process,
which generalizes Adler and van Moerbeke's result on the 2-time case. As an
intermediate step, the PDE for the multi-time joint probability of the Dyson
Brownian motion is also given.
|
Light scattering in a periodic dielectric cylinder array is studied. We
analytically calculate the diffusive-ballistic transport crossover and find
weak localization superimposed on it.
Possible experimental observations are analyzed.
|
Normal estimation for 3D point clouds is a fundamental task in 3D geometry
processing. The state-of-the-art methods rely on priors of fitting local
surfaces learned from normal supervision. However, normal supervision in
benchmarks comes from synthetic shapes and is usually not available from real
scans, thereby limiting the learned priors of these methods. In addition,
normal orientation consistency across shapes remains difficult to achieve
without a separate post-processing procedure. To resolve these issues, we
propose a novel method for estimating oriented normals directly from point
clouds without using ground truth normals as supervision. We achieve this by
introducing a new paradigm for learning neural gradient functions, which
encourages the neural network to fit the input point clouds and yield unit-norm
gradients at the points. Specifically, we introduce loss functions to
facilitate query points to iteratively reach the moving targets and aggregate
onto the approximated surface, thereby learning a global surface representation
of the data. Meanwhile, we incorporate gradients into the surface approximation
to measure the minimum signed deviation of queries, resulting in a consistent
gradient field associated with the surface. These techniques lead to our deep
unsupervised oriented normal estimator that is robust to noise, outliers and
density variations. Our excellent results on widely used benchmarks demonstrate
that our method can learn more accurate normals for both unoriented and
oriented normal estimation tasks than the latest methods. The source code and
pre-trained model are publicly available at https://github.com/LeoQLi/NeuralGF.
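The following toy sketch illustrates the general idea of learning a neural field whose input-gradient provides oriented normals: the field is pulled toward zero at the points while its gradient is pushed toward unit norm, and normals are read off as normalized gradients. It is a simplified illustration, not the NeuralGF implementation; the network size and loss terms are assumptions.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Small MLP f(x): R^3 -> R whose gradient serves as an oriented normal field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def normals(field, points):
    """Oriented normals as normalized gradients of the field at the input points."""
    points = points.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(field(points).sum(), points, create_graph=True)
    return grad / (grad.norm(dim=1, keepdim=True) + 1e-8)

pts = torch.rand(1024, 3)            # placeholder point cloud
field = ImplicitField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(10):
    pts_g = pts.clone().requires_grad_(True)
    f = field(pts_g)
    (g,) = torch.autograd.grad(f.sum(), pts_g, create_graph=True)
    # Pull field values to zero at the points; push gradients toward unit norm.
    loss = f.abs().mean() + ((g.norm(dim=1) - 1) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(normals(field, pts[:5]))
```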
|
Active galactic nucleus (AGN) jets are believed to be important in solving
the cooling flow problem in the intracluster medium (ICM), while the detailed
mechanism is still under debate. Here we present a systematic study of the energy
coupling efficiency $\eta_{\rm cp}$, the fraction of AGN jet energy transferred
to the ICM. We first estimate the values of $\eta_{\rm cp}$ analytically in two
extreme cases, which are further confirmed and extended with a parameter study
of spherical outbursts in a uniform medium using hydrodynamic simulations. We
find that $\eta_{\rm cp}$ increases from $\sim 0.4$ for a weak isobaric
injection to $\gtrsim 0.8$ for a powerful point injection. For any given
outburst energy, we find two characteristic outburst powers that separate these
two extreme cases. We then investigate the energy coupling efficiency of AGN
jet outbursts in a realistic ICM with hydrodynamic simulations, finding that
jet outbursts are intrinsically different from spherical outbursts. For both
powerful and weak jet outbursts, $\eta_{\rm cp}$ is typically around $0.7-0.9$,
partly due to the non-spherical nature of jet outbursts, which produce
backflows emanating from the hotspots, significantly enhancing the ejecta-ICM
interaction. While for powerful outbursts a dominant fraction of the energy
transferred from the jet to the ICM is dissipated by shocks, shock dissipation
only accounts for $\lesssim 30\%$ of the injected jet energy for weak
outbursts. While both powerful and weak outbursts could efficiently heat
cooling flows, powerful thermal-energy-dominated jets are most effective in
delaying the onset of the central cooling catastrophe.
|
Recurrent connections in the visual cortex are thought to aid object
recognition when part of the stimulus is occluded. Here we investigate if and
how recurrent connections in artificial neural networks similarly aid object
recognition. We systematically test and compare architectures comprised of
bottom-up (B), lateral (L) and top-down (T) connections. Performance is
evaluated on a novel stereoscopic occluded object recognition dataset. The task
consists of recognizing one target digit occluded by multiple occluder digits
in a pseudo-3D environment. We find that recurrent models perform significantly
better than their feedforward counterparts, which were matched in parametric
complexity. Furthermore, we analyze how the network's representation of the
stimuli evolves over time due to recurrent connections. We show that the
recurrent connections tend to move the network's representation of an occluded
digit towards its un-occluded version. Our results suggest that both the brain
and artificial neural networks can exploit recurrent connectivity to aid
occluded object recognition.
|
On June 25th, 2018, Huang et al. published a computational method, SAVER, in
Nature Methods for imputing dropout gene expression levels in single-cell RNA
sequencing (scRNA-seq) data. Huang et al. performed a set of comprehensive
benchmarking analyses, including comparison with the data from RNA fluorescence
in situ hybridization, to demonstrate that SAVER outperformed two existing
scRNA-seq imputation methods, scImpute and MAGIC. However, their computational
analyses were based on semi-synthetic data that the authors had generated
following the Poisson-Gamma model used in the SAVER method. We have therefore
re-examined Huang et al.'s study. We find that the semi-synthetic data have
very different properties from those of real scRNA-seq data and that the cell
clusters used for benchmarking are inconsistent with the cell types labeled by
biologists. We show that a reanalysis based on real scRNA-seq data and grounded
on biological knowledge of cell types leads to different results and
conclusions from those of Huang et al.
|
This chapter presents an overview of Interactive Machine Learning (IML)
techniques applied to the analysis and design of musical gestures. We go
through the main challenges and needs related to capturing, analysing, and
applying IML techniques to human bodily gestures with the purpose of performing
with sound synthesis systems. We discuss how different algorithms may be used
to accomplish different tasks, including interacting with complex synthesis
techniques and exploring interaction possibilities by means of Reinforcement
Learning (RL) in an interaction paradigm we developed called Assisted
Interactive Machine Learning (AIML). We conclude the chapter with a description
of how some of these techniques were employed by the authors for the
development of four musical pieces, thus outlining the implications that IML
has for musical practice.
|
We performed inelastic neutron scattering (INS) experiments to measure spin
dynamics on a polycrystalline sample of a spin tube candidate CsCrF$_{4}$. The
compound exhibits a successive phase transition from a paramagnetic phase
through an intermediate temperature (IT) phase of a 120$^{\circ}$ structure to
a low temperature (LT) phase of another 120$^{\circ}$ structure. A detailed
comparison between the observed and calculated neutron spectra in the LT phase
reveals that the spin Hamiltonian is that of antiferromagnetic spin tubes
including perturbative terms of intertube interaction, Dzyaloshinskii-Moriya
interaction, and single ion anisotropy. A phase diagram for the ground state is
classically calculated. A set of parameters in the spin Hamiltonian obtained
from the INS spectra measured in the LT phase is quite close to a boundary of
the phase of the 120$^{\circ}$ structure of the IT phase. The INS spectra
measured in the IT phase are, surprisingly, the same as those in the LT phase at
the level of powder-averaged spectra, even though the magnetic structures in the
IT and LT phases are different. Identical dynamical structures compatible with
two different static structures are observed. The absence of any difference in
the observed spectra indicates no change of the spin Hamiltonian with
temperature, suggesting that the origin of the successive phase transition is an
order-by-disorder mechanism.
|
Spin-rotation coupling, or Mashhoon effect, is a phenomenon associated with
rotating observers. We show that the effect exists and plays a fundamental role
in the determination of the anomalous magnetic moment of the muon.
|
In this paper, the authors propose to estimate the density of a targeted
population with a weighted kernel density estimator (wKDE) based on a weighted
sample. Bandwidth selection for wKDE is discussed. Three mean integrated
squared error based bandwidth estimators are introduced and their performance
is illustrated via Monte Carlo simulation. The least-squares cross-validation
method and the adaptive weight kernel density estimator are also studied. The
authors also consider the boundary problem for interval bounded data and apply
the new method to a real data set subject to informative censoring.
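For concreteness, here is a minimal sketch of a weighted kernel density estimator of the kind discussed above, with a Gaussian kernel and a fixed bandwidth h; the bandwidth selectors studied in the paper are not reproduced, and the sample below is synthetic.

```python
import numpy as np

def wkde(x, data, weights, h):
    """Weighted kernel density estimate with a Gaussian kernel and bandwidth h.

    f_hat(x) = sum_i w_i * K_h(x - X_i), with the weights normalized to sum to one.
    """
    x = np.atleast_1d(x)[:, None]         # evaluation points, shape (m, 1)
    data = np.asarray(data)[None, :]      # sample, shape (1, n)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    kern = np.exp(-0.5 * ((x - data) / h) ** 2) / (np.sqrt(2 * np.pi) * h)
    return kern @ w

# Example with a synthetic weighted sample standing in for the targeted population.
rng = np.random.default_rng(1)
sample = rng.normal(size=200)
weights = rng.uniform(0.5, 1.5, size=200)
print(wkde([-1.0, 0.0, 1.0], sample, weights, h=0.4))
```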
|
Over suitable monoidal model categories, we construct a Dwyer-Kan model
category structure on the category of algebras over an augmented operadic
collection. As examples, we obtain Dwyer-Kan model category structures on the
categories of enriched wheeled props, wheeled properads, and wheeled operads,
among others. This result extends known model category structures on the
categories of operads, properads, and props enriched in simplicial sets and
other monoidal model categories. We also show that our Dwyer-Kan model category
structure is well behaved with respect to simultaneous changes of the
underlying monoidal model category and the augmented operadic collection.
|
Economic issues related to the information processing techniques are very
important. The development of such technologies is a major asset for developing
countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and
Thailand. The MotAMot project aims to computerize an under-resourced language:
Khmer, spoken mainly in Cambodia. The main goal of the project is the
development of a multilingual lexical system targeted for Khmer. The
macrostructure is a pivot one with each word sense of each language linked to a
pivot axie. The microstructure comes from a simplification of the explanatory
and combinatory dictionary. The lexical system has been initialized with data
coming mainly from the conversion of the French-Khmer bilingual dictionary of
Denis Richer from Word to XML format. The French part was completed with
pronunciation and parts-of-speech coming from the FeM French-English-Malay
dictionary. The Khmer headwords noted in IPA in the Richer dictionary were
converted to Khmer writing with OpenFST, a finite state transducer tool. The
resulting resource is available online for lookup, editing, download and remote
programming via a REST API on a Jibiki platform.
|
We study a supersymmetry breaking deformation of the M-theory background
found in arXiv:hep-th/0012011. The supersymmetric solution is a warped product
of R^{2,1} and the 8-dimensional Stenzel space, which is a higher dimensional
generalization of the deformed conifold. At the bottom of the warped throat
there is a 4-sphere threaded by \tilde{M} units of 4-form flux. The dual
(2+1)-dimensional theory has a discrete spectrum of bound states. We add p
anti-M2 branes at a point on the 4-sphere, and show that they blow up into an
M5-brane wrapping a 3-sphere at a fixed azimuthal angle on the 4-sphere. This
supersymmetry breaking state turns out to be metastable for p / \tilde{M} <
0.054. We find a smooth O(3)-symmetric Euclidean bounce solution in the
M5-brane world volume theory that describes the decay of the false vacuum.
Calculation of the Euclidean action shows that the metastable state is
extremely long-lived. We also describe the corresponding metastable states and
their decay in the type IIA background obtained by reduction along one of the
spatial directions of R^{2,1}.
|
In this paper we study a semilinear wave equation with nonlinear,
time-dependent damping in one space dimension. For this problem, we prove a
well-posedness result in $W^{1,\infty}$ in the space-time domain $(0,1)\times
[0,+\infty)$. Then we address the problem of the time-asymptotic stability of
the zero solution and show that, under appropriate conditions, the solution
decays to zero at an exponential rate in the space $W^{1,\infty}$. The proofs
are based on the analysis of the corresponding semilinear system for the first
order derivatives, for which we show a contractive property of the invariant
domain.
|
We consider possible designs and experimental realizations in synthesized
rather than naturally occurring biochemical systems of a selection of basic
bio-inspired information processing steps. These include feed-forward loops,
which have been identified as the most common information processing motifs in
many natural pathways in cellular functioning, and memory-involving processes,
specifically, associative memory. Such systems should not be designed to
literally mimic nature. Rather, we can be guided by nature's mechanisms for
experimenting with new information/signal processing steps which are based on
coupled biochemical reactions, but are vastly simpler than natural processes,
and which will provide tools for the long-term goal of understanding and
harnessing nature's information processing paradigm. Our biochemical processes
of choice are enzymatic cascades because of their compatibility with
physiological processes in vivo and with electronics (e.g., electrodes) in
vitro allowing for networking and interfacing of enzyme-catalyzed processes
with other chemical and biochemical reactions. In addition to designing and
realizing feed-forward loops and other processes, one has to develop approaches
to probe their response to external control of the time-dependence of the
input(s), by measuring the resulting time-dependence of the output. The goal
will be to demonstrate the expected features, for example, the delayed response
and stabilizing effect of the feed-forward loops.
|
Space-efficient algorithms play a central role in dealing with large amounts
of data. In such settings, one would like to analyse the large data using a
small amount of "working space". One of the key steps in many algorithms for
analysing large data is to maintain a random sample (or a small number of
samples) from the data points. We consider two space-restricted settings --
(i) the streaming model, where data arrives over time and one can use only a
small amount of storage, and (ii) the query model, where we can structure the
data in low space and answer sampling queries. We prove the following
results in the above two settings:
- In the streaming setting, we would like to maintain a random sample from
the elements seen so far. We prove that one can maintain a random sample using
$O(\log n)$ random bits and $O(\log n)$ space, where $n$ is the number of
elements seen so far. We can extend this to the case when elements have weights
as well.
- In the query model, there are $n$ elements with weights $w_1, ..., w_n$
(which are $w$-bit integers) and one would like to sample a random element with
probability proportional to its weight. Bringmann and Larsen (STOC 2013) showed
how to sample such an element using $nw +1 $ space (whereas, the information
theoretic lower bound is $n w$). We consider the approximate sampling problem,
where we are given an error parameter $\varepsilon$, and the sampling
probability of an element can be off by an $\varepsilon$ factor. We give
matching upper and lower bounds for this problem.
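As a concrete illustration of the streaming setting above, the sketch below implements the classical single-element reservoir rule and a weighted variant. It only illustrates the sampling invariant being maintained; it is not the O(log n)-bits construction or the query-model data structure analyzed in the paper.

```python
import random

def reservoir_stream(stream):
    """Keep one uniformly random element from a stream, storing O(1) elements."""
    sample, n = None, 0
    for x in stream:
        n += 1
        # Replace the current sample with probability 1/n; by induction every
        # element seen so far is retained with probability exactly 1/n.
        if random.randrange(n) == 0:
            sample = x
    return sample

def weighted_reservoir_stream(stream_with_weights):
    """Keep one element with probability proportional to its weight."""
    sample, total = None, 0.0
    for x, w in stream_with_weights:
        total += w
        if random.random() < w / total:
            sample = x
    return sample

print(reservoir_stream(range(100)))
print(weighted_reservoir_stream([("a", 1.0), ("b", 3.0), ("c", 6.0)]))
```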
|
Many cells contain non-centrosomal arrays of microtubules (MT), but the
assembly, organisation and function of these arrays are poorly understood. We
present the first theoretical model for the non-centrosomal MT cytoskeleton in
$Drosophila$ oocytes, in which $bicoid$ and $oskar$ mRNAs become localised to
establish the anterior-posterior body axis. Constrained by experimental
measurements, the model shows that a simple gradient of cortical MT nucleation
is sufficient to reproduce the observed MT distribution, cytoplasmic flow
patterns and localisation of $oskar$ and naive $bicoid$ mRNAs. Our simulations
exclude a major role for cytoplasmic flows in localisation and reveal an
organisation of the MT cytoskeleton that is more ordered than previously
thought. Furthermore, modulating cortical MT nucleation induces a bifurcation
in cytoskeletal organisation that accounts for the phenotypes of polarity
mutants. Thus, our three-dimensional model explains many features of the MT
network and highlights the importance of differential cortical MT nucleation
for axis formation.
|
We study the mutual information estimation for mixed-pair random variables.
One random variable is discrete and the other one is continuous. We develop a
kernel method to estimate the mutual information between the two random
variables. The estimates enjoy a central limit theorem under some regularity
conditions on the distributions. The theoretical results are demonstrated by a
simulation study.
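To make the setting concrete, the sketch below shows one simple plug-in way to estimate the mutual information of a mixed pair: kernel density estimates of the continuous variable, conditioned on each discrete value, are compared to the marginal density at the sample points. The exact estimator and conditions in the paper differ; this is only an illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mixed_mi(x_discrete, y_continuous):
    """Plug-in estimate of I(X;Y) for discrete X and continuous Y via Gaussian KDEs."""
    x = np.asarray(x_discrete)
    y = np.asarray(y_continuous, dtype=float)
    f_y = gaussian_kde(y)(y)                     # marginal density at each sample point
    log_ratio = np.empty(len(y))
    for val in np.unique(x):
        idx = (x == val)
        f_cond = gaussian_kde(y[idx])(y[idx])    # conditional density f(y | X = val)
        log_ratio[idx] = np.log(f_cond / f_y[idx])
    return log_ratio.mean()

# Toy example: Y depends on the discrete label X, so the estimate should be positive.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=400)
y = rng.normal(loc=2.0 * x, scale=1.0)
print(mixed_mi(x, y))
```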
|
In this paper, we report on an implementation in the free software Mathemagix
of lacunary factorization algorithms, distributed as a library called
Lacunaryx. These algorithms take as input a polynomial in sparse
representation, that is as a list of nonzero monomials, and an integer $d$, and
compute its irreducible degree-$\le d$ factors. The complexity of these
algorithms is polynomial in the sparse size of the input polynomial and $d$.
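For illustration only, the snippet below computes the degree-$\le d$ irreducible factors of a sparse polynomial in a naive way with sympy, by factoring the whole polynomial and filtering by degree. This reference computation scales with the dense degree and is emphatically not the lacunary algorithm implemented in Lacunaryx, whose cost is polynomial in the sparse size and $d$; the example polynomial is made up.

```python
from sympy import symbols, factor_list, degree

x = symbols("x")

def low_degree_factors(poly, d):
    """Naive reference: all irreducible factors of degree <= d over Q."""
    _, factors = factor_list(poly)
    return [f for f, mult in factors if degree(f, x) <= d]

# A sparse, moderately high-degree polynomial with a low-degree factor x + 1.
p = (x + 1) * (x**57 + 3 * x**4 + 7)
print(low_degree_factors(p.expand(), d=2))
```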
|
Bibliometrics offers a particular representation of science. Through
bibliometric methods a bibliometrician will always highlight particular
elements of publications, and through these elements operationalize particular
representations of science, while obscuring other possible representations from
view. Understanding bibliometrics as representation implies that a bibliometric
analysis is always performative: a bibliometric analysis brings a particular
representation of science into being that potentially influences the science
system itself. In this review we analyze the ways the humanities have been
represented throughout the history of bibliometrics, often in comparison to
other scientific domains or to a general notion of the sciences. Our review
discusses bibliometric scholarship between 1965 and 2016 that studies the
humanities empirically. We distinguish between two periods of bibliometric
scholarship. The first period, between 1965 and 1989, is characterized by a
sociological theoretical framework, the development and use of the Price index,
and small samples of journal publications as data sources. The second period,
from the mid-1980s up until the present day, is characterized by a new
hinterland, that of science policy and research evaluation, in which
bibliometric methods become embedded.
|
Trypsin and chymotrypsin are both serine proteases with high sequence and
structural similarities, but with different substrate specificity. Previous
experiments have demonstrated the critical role of the two loops outside the
binding pocket in controlling the specificity of the two enzymes. To understand
the mechanism of such a control of specificity by distant loops, we have used
the Gaussian Network Model to study the dynamic properties of trypsin and
chymotrypsin and the roles played by the two loops. A clustering method was
introduced to analyze the correlated motions of residues. We have found that
trypsin and chymotrypsin have distinct dynamic signatures in the two loop
regions which are in turn highly correlated with motions of certain residues in
the binding pockets. Interestingly, replacing the two loops of trypsin with
those of chymotrypsin changes the motion style of trypsin to chymotrypsin-like,
whereas the same experimental replacement was shown necessary to make trypsin
have chymotrypsin's enzyme specificity and activity. These results suggest that
the cooperative motions of the two loops and the substrate-binding sites
contribute to the activity and substrate specificity of trypsin and
chymotrypsin.
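For readers unfamiliar with the Gaussian Network Model, the sketch below shows how residue-residue cross-correlations are typically obtained from a contact-map-based Kirchhoff matrix via its pseudo-inverse. The clustering of correlated motions introduced in the paper is not reproduced here, and the cutoff distance and toy coordinates are assumptions.

```python
import numpy as np

def gnm_correlations(coords, cutoff=7.0):
    """Normalized residue-residue cross-correlations from the Gaussian Network Model.

    coords: (N, 3) array of C-alpha coordinates. The Kirchhoff matrix is built from
    a distance cutoff, and correlations are proportional to its pseudo-inverse.
    """
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(d <= cutoff).astype(float)
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))
    ginv = np.linalg.pinv(gamma)            # pseudo-inverse drops the zero mode
    return ginv / np.sqrt(np.outer(np.diag(ginv), np.diag(ginv)))

# Toy chain of pseudo-residues in place of real trypsin/chymotrypsin coordinates.
rng = np.random.default_rng(3)
chain = np.cumsum(rng.normal(scale=3.8 / np.sqrt(3), size=(50, 3)), axis=0)
print(gnm_correlations(chain)[:3, :3])
```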
|
In this note we obtain a unique continuation result for the differential
inequality $|\bar{\partial}u|\leq|Vu|$, where
$\bar{\partial}=(i\partial_y+\partial_x)/2$ denotes the Cauchy-Riemann operator
and $V(x,y)$ is a function in $L^2(\mathbb{R}^2)$.
|
The analysis of the experimental data of Crystal Barrel Collaboration on the
p anti-p annihilation in flight with the production of mesons in the final
state resulted in a discovery of a large number of mesons over the region
1900-2400 MeV, thus allowing us to systematize quark-antiquark states in the
(n,M^2) and (J,M^2) planes, where n and J are radial quantum number and spin of
the meson with the mass M. The data point to meson trajectories in these planes
being approximately linear, with a universal slope. Based on these data and the
results of the recent K-matrix analysis, a nonet classification is performed. In
the scalar-isoscalar sector, the broad resonance state f0(1200-1600) is
superfluous for the q anti-q classification, i.e. it is an exotic state. The
ratios of coupling constants for the transitions f0-> pi pi, K anti-K, eta eta,
eta eta' point to the gluonium nature of the broad state f0(1200-1600). The
problem of the location of the lightest pseudoscalar glueball is also
discussed.
|
We give a conjectural but full and explicit description of the (K-theoretic)
equivariant vertex for Pandharipande--Thomas stable pairs on toric Calabi--Yau
4-folds, by identifying torus-fixed loci as certain quiver Grassmannians and
prescribing a canonical half of the tangent-obstruction theory. For any number
of non-trivial legs, the DT/PT vertex correspondence can then be verified by
computer in low degrees.
|
The limacon-shaped semiconductor microcavity is a ray-chaotic cavity
sustaining low-loss modes with mostly unidirectional emission patterns.
Investigating these modes systematically, we show that the modes correspond to
the ray description collectively, rather than individually. In addition, we present
experimental data on multimode lasing emission patterns that show high
unidirectionality and closely agree with the ray description. The origin of
this agreement is well explained by the collective correspondence mechanism.
|
We study a class of dark matter models in which the dark matter is a
baryon-like composite particle of a confining gauge group and also a
pseudo-Nambu-Goldstone boson associated with the breaking of an enhanced chiral
symmetry group. The approximate symmetry decouples the dark matter mass from
the confinement scale of the new gauge group, leading to correct thermal relic
abundances for dark matter masses far below the unitarity bound, avoiding the
typical conclusion of thermally produced composite dark matter. We explore the
available parameter space in a minimal example model based on an SU(2) gauge
group, and discuss prospects for experimental detection.
|