Persistent homology, an algebraic method for discerning structure in abstract
data, relies on the construction of a sequence of nested topological spaces
known as a filtration. Two-parameter persistent homology allows the analysis of
data simultaneously filtered by two parameters, but requires a bifiltration --
a sequence of topological spaces simultaneously indexed by two parameters. To
apply two-parameter persistence to digital images, we first must consider
bifiltrations constructed from digital images, which have scarcely been
studied. We introduce the value-offset bifiltration for grayscale digital image
data. We present efficient algorithms for computing this bifiltration with
respect to the taxicab distance and for approximating it with respect to the
Euclidean distance. We analyze the runtime complexity of our algorithms,
demonstrate the results on sample images, and contrast the bifiltrations
obtained from real images with those obtained from random noise.
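A minimal sketch of one plausible reading of such a construction (the function names and the thresholding rule are our own illustration, not the paper's definition): a pixel belongs to the filtration step $(v, r)$ if it lies within taxicab distance $r$ of a pixel of value at most $v$, computed here with a two-pass L1 distance transform.

```python
def taxicab_dt(mask):
    """Exact L1 (taxicab) distance transform on a 2D boolean grid:
    distance from each cell to the nearest True cell, computed with
    the classic two-pass chamfer sweep."""
    n, m = len(mask), len(mask[0])
    INF = n * m + 1
    d = [[0 if mask[i][j] else INF for j in range(m)] for i in range(n)]
    for i in range(n):                      # forward sweep
        for j in range(m):
            if i > 0: d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0: d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in reversed(range(n)):            # backward sweep
        for j in reversed(range(m)):
            if i < n - 1: d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < m - 1: d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

def value_offset_cell(img, v, r):
    """Hypothetical value-offset filtration step (v, r): pixels within
    taxicab distance r of a pixel with value <= v. The resulting sets
    are nested in both v and r, as a bifiltration requires."""
    d = taxicab_dt([[p <= v for p in row] for row in img])
    return [[dij <= r for dij in row] for row in d]
```

Nestedness in both parameters (enlarging either $v$ or $r$ only adds pixels) is exactly the bifiltration property.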
|
We introduce the notion of integrality of Grothendieck categories as a
simultaneous generalization of the primeness of noncommutative noetherian rings
and the integrality of locally noetherian schemes. Two different spaces
associated to a Grothendieck category yield respective definitions of
integrality, and we prove the equivalence of these definitions using a
Grothendieck-categorical version of Gabriel's correspondence, which originally
related indecomposable injective modules and prime two-sided ideals for
noetherian rings. The generalization of prime two-sided ideals is also used to
classify locally closed localizing subcategories. As an application of the main
results, we develop a theory of singular objects in a Grothendieck category and
deduce Goldie's theorem on the existence of the quotient ring as its
consequence.
|
In this paper we study the behaviour of the continuous spectrum of the
Laplacian on a complete Riemannian manifold of bounded curvature under
perturbations of the metric. The perturbations that we consider are such that
their covariant derivatives up to some order decay at some rate in the geodesic
distance from a fixed point. In particular, we impose no conditions on the
injectivity radius. One of the main results is a set of conditions on the rate
of decay, depending on geometric properties of the underlying manifold, that
guarantee the existence and completeness of the wave operators.
|
There are indications from the study of quasar absorption spectra that the
fine structure constant $\alpha$ may have been measurably smaller for redshifts
$z>2.$ Analyses of other data ($^{149}$Sm fission rate for the Oklo natural
reactor, variation of $^{187}$Re $\beta$-decay rate in meteorite studies,
atomic clock measurements) which probe variations of $\alpha$ in the more
recent past imply much smaller deviations from its present value. In this work
we tie the variation of $\alpha$ to the evolution of the quintessence field
proposed by Albrecht and Skordis, and show that agreement with all these data,
as well as consistency with WMAP observations, can be achieved for a range of
parameters. Some definite predictions follow for upcoming space missions
searching for violations of the equivalence principle.
|
Monitoring chemical reactions in solutions at the scale of individual
entities is challenging: single particle detection requires small confocal
volumes which are hardly compatible with Brownian motion, particularly when
long integration times are necessary. Here, we propose a real-time (10 Hz)
holography-based nm-precision 3D tracking of single moving nanoparticles. Using
this localization, the confocal collection volume is dynamically adjusted to
follow the moving nanoparticle and allow continuous spectroscopic monitoring.
This concept is applied to the study of galvanic exchange in freely-moving
colloidal silver nanoparticles with gold ions generated in situ. While the
Brownian trajectory reveals particle size, spectral shifts dynamically reveal
composition changes and transformation kinetics at the single object level,
pointing at different transformation kinetics for free and tethered particles.
|
This paper considers a point process model with a monotonically decreasing or
increasing ROCOF and the underlying distributions from the location-scale
family, known as the geometric process (Lam, 1988). In terms of repairable
system reliability analysis, the process is capable of modeling various
restoration types including "better-than-new", i.e., the one not covered by the
popular G-Renewal model (Kijima & Sumita, 1986). The distinctive property of
the process is that the times between successive events are obtained from the
underlying distributions, with the scale parameter of each monotonically
decreasing or increasing. The paper discusses properties and maximum likelihood
estimation of the model for the case of the Exponential and Weibull underlying
distributions.
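As an illustration (not the paper's estimation procedure), a geometric process with an Exponential underlying distribution can be simulated directly: the k-th inter-event time has scale parameter $\theta/a^{k-1}$, so the times stochastically shrink for $a > 1$ and grow for $a < 1$.

```python
import random

def simulate_gp(n, a, theta, seed=None):
    """Simulate n inter-event times of a geometric process (Lam, 1988)
    with an Exponential underlying distribution: the k-th time X_k is
    exponential with scale theta / a**(k-1), i.e. rate a**(k-1)/theta."""
    rng = random.Random(seed)
    return [rng.expovariate(a ** k / theta) for k in range(n)]

# For a = 2 the expected inter-event times halve at every step:
# E[X_1] = theta, E[X_2] = theta/2, E[X_3] = theta/4, ...
```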
|
The ability to live in coherent superpositions is a signature trait of
quantum systems and constitutes an irreplaceable resource for quantum-enhanced
technologies. However, decoherence effects usually destroy quantum
superpositions. It has been recently predicted that, in a composite quantum
system exposed to dephasing noise, quantum coherence in a transversal reference
basis can stay protected for indefinite time. This can occur for a class of
quantum states independently of the measure used to quantify coherence, and
requires no control on the system during the dynamics. Here, such an invariant
coherence phenomenon is observed experimentally in two different setups based
on nuclear magnetic resonance at room temperature, realising an effective
quantum simulator of two- and four-qubit spin systems. Our study further
reveals a novel interplay between coherence and various forms of correlations,
and highlights the natural resilience of quantum effects in complex systems.
|
We develop NF set theory using intuitionistic logic; we call this theory INF.
We develop the theories of finite sets and their power sets, finite cardinals
and their ordering, cardinal exponentiation, addition, and multiplication. We
follow Rosser and Specker with appropriate constructive modifications,
especially replacing "arbitrary subset" by "separable subset" in the
definitions of exponentiation and order. It is not known whether INF proves
that the set of finite cardinals is infinite, so the whole development must
allow for the possibility that there is a maximum integer; arithmetical
computations might "overflow" as in a computer or odometer, and theorems about
them must be carefully stated to allow for this possibility. The work presented
here is intended as a substrate for further investigations of INF.
|
In this manuscript we provide necessary and sufficient conditions for the
$\textnormal{weak}(1,p)$ boundedness, $1< p<\infty,$ of discrete Fourier
multipliers (Fourier multipliers on $\mathbb{Z}^n$). Our main goal is to apply
the results obtained to discrete fractional integral operators. Discrete
versions of the Calder\'on-Vaillancourt Theorem and the Gohberg Lemma are also
proved.
|
We present a concise review of where we stand in particle physics today.
First we discuss QCD, then the electroweak sector and finally the motivations
and the avenues for new physics beyond the Standard Model.
|
We derive correlations between X-ray temperature, luminosity, and gas mass
for a sample of 22 distant, z>0.4, galaxy clusters observed with Chandra. We
detect evolution in all three correlations between z>0.4 and the present epoch.
In particular, in the Omega=0.3, Lambda=0.7 cosmology, the luminosity
corresponding to a fixed temperature scales approximately as (1+z)**(1.5+-0.3);
the gas mass for a fixed luminosity scales as (1+z)**(-1.8+-0.4); and the gas
mass for a fixed temperature scales as (1+z)**(-0.5+-0.4) (all uncertainties
are 90% confidence). We briefly discuss the implication of these results for
cluster evolution models.
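As a worked example of these scalings (illustrative arithmetic only, using the best-fit central values quoted above), at $z=0.5$ the evolution factors relative to the present epoch are

$$\left.\frac{L_X(0.5)}{L_X(0)}\right|_{T}=(1.5)^{1.5}\approx 1.8,\qquad \left.\frac{M_{\rm gas}(0.5)}{M_{\rm gas}(0)}\right|_{L}=(1.5)^{-1.8}\approx 0.48.$$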
|
It was recently shown that the tunneling wavefunction proposal is consistent
with loop quantum geometry corrections, including both holonomy and inverse
scale factor corrections, in the gravitational part of a spatially closed
isotropic model with a positive cosmological constant. However, in the presence
of an inflationary potential the initial singularity is kinetic dominated and
the effective minisuperspace potential again diverges at zero scale factor.
Since the wavefunction in loop quantum cosmology cannot increase towards zero
scale factor, the tunneling wavefunction seems incompatible. We show that
consistently including inverse scale factor modifications in the scalar field
Hamiltonian changes the effective potential into a barrier potential, allowing
the tunneling proposal. We also discuss a potential quantum instability of the
cyclic universe resulting from tunneling.
|
The ultra high-energy (UHE) diffuse gamma-ray background holds important
information on the propagation of cosmic rays in the Galaxy. However, its
measurements suffer from a contamination from unresolved sources whose
importance remains unclear. In this Letter, we propose a novel data-driven
estimate of the contribution of unresolved leptonic sources based on the
information present in the ATNF and the LHAASO catalogs. We find that in the
inner Galaxy at most $\sim60\%$ of the diffuse flux measured by LHAASO at
$10\,\rm{TeV}$ may originate from unresolved leptonic sources, and this
fraction drops with energy to less than $20\%$ at $100\,\rm{TeV}$. In the outer
Galaxy, the contribution of unresolved leptonic sources is always subdominant.
It is less than $\sim 20\%$ at $10\,\rm{TeV}$ and less than $\sim 8\%$ above
$\sim25\,\rm{TeV}$. We conclude that the UHE diffuse background should be
dominated by photons of hadronic origin above a few tens of $\rm{TeV}$.
|
Several recently proposed methods aim to learn conceptual space
representations from large text collections. These learned representations
associate each object from a given domain of interest with a point in a
high-dimensional Euclidean space, but they do not model the concepts from this
domain, and can thus not directly be used for categorization and related
cognitive tasks. A natural solution is to represent concepts as Gaussians,
learned from the representations of their instances, but this can only be
reliably done if sufficiently many instances are given, which is often not the
case. In this paper, we introduce a Bayesian model which addresses this problem
by constructing informative priors from background knowledge about how the
concepts of interest are interrelated with each other. We show that this leads
to substantially better predictions in a knowledge base completion task.
|
In this talk we present a model to demonstrate how a time-periodic potential
can be used to manipulate the quantum metastability of a system. We study the
metastability of a particle trapped in a well with a time-periodically
oscillating barrier in the Floquet formalism. It is shown that the oscillating
barrier causes the system to decay faster in general. However, avoided
crossings of metastable states can occur with the less stable states crossing
over to the more stable ones. If there exists a bound state in the static
well, then it is possible to stabilize a metastable state by adiabatically
increasing the oscillating frequency of the barrier so that the unstable state
eventually crosses over to the stable bound state. It is also found that increasing the
amplitude of the oscillating field may change a direct crossing of states into
an avoided one. Hence, one can manipulate the stability of different states in
a quantum potential by a combination of adiabatic changes of the frequency and
the amplitude of the oscillating barrier.
|
The fine structure of the 0.7 MeV resonance in the 230Th neutron-induced
cross section is investigated within the hybrid model. A very good agreement
with experimental data is obtained. It is suggested that the fine structure of
the cross section quantifies the changes of the intrinsic states of the nucleus
during the disintegration process.
|
We study the non-equilibrium phase transition between survival and extinction
of spatially extended biological populations using an agent-based model. We
especially focus on the effects of global temporal fluctuations of the
environmental conditions, i.e., temporal disorder. Using large-scale
Monte-Carlo simulations of up to $3\times 10^7$ organisms and $10^5$
generations, we find the extinction transition in time-independent environments
to be in the well-known directed percolation universality class. In contrast,
temporal disorder leads to a highly unusual extinction transition characterized
by logarithmically slow population decay and enormous fluctuations even for
large populations. The simulations provide strong evidence for this transition
to be of exotic infinite-noise type, as recently predicted by a renormalization
group theory. The transition is accompanied by temporal Griffiths phases
featuring a power-law dependence of the life time on the population size.
|
We investigate galactic rotation curves in $f(T)$ gravity, where $T$
represents a torsional quantity. Our study centers on the particular Lagrangian
$f(T)=T+\alpha{T^n}$, where $|n|\neq 1$ and $\alpha$ is a small unknown
constant. To do this we treat galactic rotation curves as being composed from
two distinct features of galaxies, namely the disk and the bulge. This process
is carried out for several values of the index $n$. The resulting curve is then
compared with Milky Way profile data to constrain the value of the index $n$
while fitting for the parameter $\alpha$. These values are then further tested
on three other galaxies with different morphologies. On the galactic scale we
find that $f(T)$ gravity departs from standard Newtonian theory in an important
way. For a small range of values of $n$ we find good agreement with data
without the need for exotic matter components to be introduced.
|
UV-to-visual spectra of eight young star clusters in the merger remnant and
protoelliptical galaxy NGC 7252, obtained with the Blanco 4-m telescope on
Cerro Tololo, are presented. These clusters lie at projected distances of 3-15
kpc from the center and move with a velocity dispersion of 140+/-35 km/s in the
line of sight. Seven of the clusters show strong Balmer absorption lines in
their spectra [EW(H-beta)= 6-13 Angstrom], while the eighth lies in a giant HII
region and shows no detectable absorption features.
Based on comparisons with model-cluster spectra by Bruzual & Charlot (1996)
and Bressan, Chiosi, & Tantalo (1996), six of the absorption-line clusters have
ages in the range of 400-600 Myr, indicating that they formed early on during
the recent merger. These clusters are globular clusters as judged by their
small effective radii and ages corresponding to ~100 core crossing times. The
one emission-line object is <10 Myr old and may be a nascent globular cluster
or an OB association.
The mean metallicities measured for three clusters are solar to within
+/-0.15 dex, suggesting that the merger of two likely Sc galaxies in NGC 7252
formed a globular-cluster system with a bimodal metallicity distribution. Since
NGC 7252 itself shows the characteristics of a 0.5-1 Gyr old protoelliptical,
its second-generation solar-metallicity globulars provide direct evidence that
giant ellipticals with bimodal globular-cluster systems can form through major
mergers of gas-rich disk galaxies.
|
The melting-like transitions of Na8 and Na20 are investigated by ab initio
constant energy molecular dynamics simulations, using a variant of the
Car-Parrinello method which employs an explicit electronic kinetic energy
functional of the density, thus avoiding the use of one-particle orbitals.
Several melting indicators are evaluated in order to determine the nature of
the various transitions, and compared with other simulations. Both Na8 and Na20
melt over a wide temperature range. For Na8, a transition is observed to begin
at approx. 110 K, between a rigid phase and a phase involving isomerizations
between the different permutational isomers of the ground state structure. The
``liquid'' phase is completely established at approx. 220 K. For Na20, two
transitions are observed: the first, at approx. 110 K, is associated with
isomerization transitions between those permutational isomers of the ground
state structure which are obtained by interchanging the positions of the
surface-like atoms; the second, at approx. 160 K, involves a structural
transition from the ground state isomer to a new set of isomers with the
surface molten. The cluster is completely ``liquid'' at approx. 220 K.
|
For an undirected, simple, finite, connected graph $G$, we denote by $V(G)$
and $E(G)$ the sets of its vertices and edges, respectively. A function
$\varphi:E(G)\rightarrow\{1,2,\ldots,t\}$ is called a proper edge $t$-coloring
of a graph $G$ if adjacent edges are colored differently and each of $t$ colors
is used. An arbitrary nonempty subset of consecutive integers is called an
interval. If $\varphi$ is a proper edge $t$-coloring of a graph $G$ and $x\in
V(G)$, then $S_G(x,\varphi)$ denotes the set of colors of edges of $G$ which
are incident with $x$. A proper edge $t$-coloring $\varphi$ of a graph $G$ is
called a cyclically-interval $t$-coloring if for any $x\in V(G)$ at least one
of the following two conditions holds: a) $S_G(x,\varphi)$ is an interval, b)
$\{1,2,\ldots,t\}\setminus S_G(x,\varphi)$ is an interval. For any $t\in
\mathbb{N}$, let $\mathfrak{M}_t$ be the set of graphs for which there exists a
cyclically-interval $t$-coloring, and let
$$\mathfrak{M}\equiv\bigcup_{t\geq1}\mathfrak{M}_t.$$ For an arbitrary tree
$G$, it is proved that $G\in\mathfrak{M}$ and all possible values of $t$ are
found for which $G\in\mathfrak{M}_t.$
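A direct transcription of the vertex condition (the function names are ours) makes the definition concrete:

```python
def is_interval(colors):
    """True iff `colors` is a nonempty set of consecutive integers."""
    return len(colors) > 0 and max(colors) - min(colors) + 1 == len(colors)

def vertex_condition(spectrum, t):
    """The cyclically-interval condition at a vertex x: either the
    spectrum S_G(x, phi) is an interval, or its complement in
    {1, ..., t} is an interval."""
    s = set(spectrum)
    complement = set(range(1, t + 1)) - s
    return is_interval(s) or is_interval(complement)
```

For example, with $t=5$ the spectrum $\{1,5\}$ satisfies the condition via its complement $\{2,3,4\}$, while $\{1,3,5\}$ satisfies neither clause.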
|
The conditions for sequences $\{f_{k}\}_{k=1}^{\infty}$ and
$\{g_{k}\}_{k=1}^{\infty}$ being Bessel sequences, frames or Riesz bases, can
be expressed in terms of the so-called cross-Gram matrix. In this paper we
investigate the cross-Gram operator $G$ associated to the sequence $\{\langle
f_{k}, g_{j}\rangle\}_{j, k=1}^{\infty}$, and we determine necessary and
sufficient conditions for the boundedness, invertibility, compactness and
positivity of this operator in terms of the associated sequences. We show that
invertibility of $G$ is not possible when the associated sequences are frames
but not Riesz bases, or when at most one of them is a Riesz basis. As a special
case, we prove that $G$ is a positive operator when $\{g_{k}\}_{k=1}^{\infty}$
is the canonical dual of $\{f_{k}\}_{k=1}^{\infty}$.
|
The current status of optical potentials employed in the prediction of
thermonuclear reaction rates for astrophysics in the Hauser-Feshbach formalism
is discussed. Special emphasis is put on $\alpha$+nucleus potentials. A novel
approach for the prediction of $\alpha$+nucleus potentials is proposed. Further
experimental efforts are motivated.
|
Dark-field illumination is shown to make planar chiral nanoparticle
arrangements exhibit circular dichroism in extinction analogous to true chiral
scatterers. Circular dichroism is experimentally observed at the maximum
scattering of single oligomers consisting of rotationally symmetric
arrangements of gold nanorods, in strong agreement with numerical simulation. A
dipole model
is developed to show that this effect is caused by a difference in the
geometric projection of a nanorod onto the handed orientation of electric
fields created by a circularly polarized dark-field that is normally incident
on a glass substrate. Owing to this geometric origin, the wavelength of the
peak chiral response is also experimentally shown to shift depending on the
separation between nanoparticles. All presented oligomers have physical
dimensions less than the operating wavelength, and the applicable extension to
closely packed planar arrays of oligomers is demonstrated to amplify the
magnitude of circular dichroism. The realization of strong chirality in these
oligomers demonstrates a new path to engineer optical chirality from planar
devices using dark-field illumination.
|
We review and discuss the original Kaluza-Klein theory in the framework of
modern embedding theories of the spacetime, such as the recent induced matter
approach. We show that in spite of their seeming similarity they constitute
rather distinct proposals as far as their geometrical structure is concerned.
|
In this paper, we propose a multi-objective camera ISP framework that
utilizes Deep Reinforcement Learning (DRL) and a camera ISP toolbox consisting
of network-based and conventional ISP tools. The proposed DRL-based camera ISP
framework iteratively selects a proper tool from the toolbox and applies it to
the image to maximize a given vision task-specific reward function. For this
purpose, we implement a total of 51 ISP tools, including exposure correction,
color-and-tone correction, white balance, sharpening, and denoising. We also
propose an efficient DRL network architecture that can extract the various
aspects of an image and establish a robust mapping between images and a large
number of actions. Our proposed DRL-based ISP framework
effectively improves the image quality according to each vision task such as
RAW-to-RGB image restoration, 2D object detection, and monocular depth
estimation.
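The select-and-apply loop can be sketched as follows. This is a toy: the three "tools" and the scalar image statistic are stand-ins of our own invention, and a greedy one-step rule replaces the trained DRL agent; the paper's 51 tools operate on real images.

```python
# Hypothetical stand-ins for ISP tools, acting on a scalar "exposure" statistic.
TOOLS = {
    "exposure_up":   lambda x: x + 0.1,
    "exposure_down": lambda x: x - 0.1,
    "identity":      lambda x: x,
}

def run_isp_episode(stat, reward_fn, steps=5):
    """At each iteration, pick the tool that maximizes the task-specific
    reward of the resulting image statistic, then apply it."""
    trace = []
    for _ in range(steps):
        name, tool = max(TOOLS.items(), key=lambda kv: reward_fn(kv[1](stat)))
        stat = tool(stat)
        trace.append(name)
    return stat, trace

# Reward: proximity of the statistic to a target mid-exposure of 0.5.
final, trace = run_isp_episode(0.2, lambda x: -abs(x - 0.5))
```

Starting from 0.2, the loop applies `exposure_up` three times and then idles at the target, which mirrors how a learned policy would terminate once the reward plateaus.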
|
Stationary expansion shocks have been recently identified as a new type of
solution to hyperbolic conservation laws regularized by non-local dispersive
terms that naturally arise in shallow-water theory. These expansion shocks were
studied in (El, Hoefer, Shearer 2016) for the Benjamin-Bona-Mahony equation
using matched asymptotic expansions. In this paper, we extend the analysis of
(El, Hoefer, Shearer 2016) to the regularized Boussinesq system by using
Riemann invariants of the underlying dispersionless shallow water equations.
The extension for a system is non-trivial, requiring a combination of small
amplitude, long-wave expansions with high order matched asymptotics. The
constructed asymptotic solution is shown to be in excellent agreement with
accurate numerical simulations of the Boussinesq system for a range of
appropriately smoothed Riemann data.
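For reference, the Riemann invariants of the dispersionless shallow water system $h_t+(hu)_x=0$, $u_t+uu_x+gh_x=0$, on which this construction relies, take the standard form

$$r_\pm = u \pm 2\sqrt{gh}, \qquad \partial_t r_\pm + \left(u \pm \sqrt{gh}\right)\partial_x r_\pm = 0.$$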
|
In Section 1 we review various equivalent definitions of a vertex algebra V.
The main novelty here is the definition in terms of an indefinite integral of
the lambda-bracket. In Section 2 we construct, in the most general framework,
the Zhu algebra Zhu_G V, an associative algebra which "controls" G-twisted
representations of the vertex algebra V with a given Hamiltonian operator H. An
important special case of this construction is the H-twisted Zhu algebra Zhu_H
V. In Section 3 we review the theory of non-linear Lie conformal algebras
(respectively non-linear Lie algebras). Their universal enveloping vertex
algebras (resp. universal enveloping algebras) form an important class of
freely generated vertex algebras (resp. PBW generated associative algebras). We
also introduce the H-twisted Zhu non-linear Lie algebra Zhu_H R of a non-linear
Lie conformal algebra R and we show that its universal enveloping algebra is
isomorphic to the H-twisted Zhu algebra of the universal enveloping vertex
algebra of R. After a discussion of the necessary cohomological material in
Section 4, we review in Section 5 the construction and basic properties of
affine and finite W-algebras, obtained by the method of quantum Hamiltonian
reduction. Those are some of the most intensively studied examples of freely
generated vertex algebras and PBW generated associative algebras. Applying the
machinery developed in Sections 3 and 4, we then show that the H-twisted Zhu
algebra of an affine W-algebra is isomorphic to the finite W-algebra, attached
to the same data. In Section 6 we define the Zhu algebra of a Poisson vertex
algebra, and we discuss quasiclassical limits.
|
Can a gas behave like a crystal? Supersolidity is an intriguing and
challenging state of matter which combines key features of superfluids and
crystals. Predicted a long time ago, its experimental realization has been
recently achieved in Bose-Einstein condensed (BEC) atomic gases inside optical
resonators, spin-orbit coupled BEC's and atomic gases interacting with long
range dipolar forces. The activity on dipolar gases has been particularly
vibrant in the last few years. This perspective article summarizes the main
experimental and theoretical achievements concerning supersolidity in the field
of dipolar gases, like the observation of the density modulations caused by the
spontaneous breaking of translational invariance, the effects of coherence and
the occurrence of novel Goldstone modes. A series of important issues for
future experimental and theoretical research is outlined, including, among
others, the possible realization of quantized vortices inside these novel
crystal structures, the role of dimensionality, the characterisation of the
crystal properties, and the nature of the phase transitions.
  At the end a brief overview is provided of some other (mainly cold atomic)
platforms where supersolidity has been observed or is expected to emerge.
|
The superposition of atomic vibrations and flexoelectronic effect gives rise
to a cross correlation between free charge carriers and temporal magnetic
moment of phonons in conducting heterostructures under an applied strain
gradient. The resulting dynamical coupling is expected to give rise to
quasiparticle excitations, called magnetoelectronic electromagnons, that carry
electronic charge and a temporal magnetic moment. Here, we report
experimental evidence of magnetoelectronic electromagnons in freestanding,
degenerately doped p-Si based heterostructure thin film samples. These
quasiparticle excitations give rise to long-distance (>100um) spin transport;
demonstrated using spatially modulated transverse magneto-thermoelectric and
non-local resistance measurements. The magnetoelectronic electromagnons are
non-reciprocal and give rise to a large magnetochiral anisotropy (0.352 A^-1 T^-1)
that diminishes at lower temperatures. The superposition of non-reciprocal
magnetoelectronic electromagnons gives rise to longitudinal and transverse
modulations in charge carrier density, spin density and magnetic moment;
demonstrated using the Hall effect and edge dependent magnetoresistance
measurements; this can also be called an inhomogeneous magnetoelectronic
multiferroic effect. These quasiparticle excitations are analogous to photons,
with time dependent polarization and temporal magnetic moment replacing the
electric and magnetic fields, respectively; they are most likely topological,
since they manifest a topological Nernst effect. Hence, the magnetoelectronic
electromagnon can potentially give rise to quantum interference and
entanglement effects in conducting solid state systems at room temperature, in
addition to efficient spin transport.
|
Neutron diffraction is used to re-investigate the magnetic phase diagram of
the noncentrosymmetric tetragonal antiferromagnet Ba2CuGe2O7. A novel
incommensurate double-k magnetic phase is detected near the
commensurate-incommensurate phase transition. This phase is stable only for
magnetic field closely aligned with the 4-fold symmetry axis. The results
emphasize the inadequacy of existing theoretical models for this unique
material, and point to additional terms in the Hamiltonian or lattice effects.
|
Denoising is the process of removing noise from sound signals while improving
the quality and adequacy of the sound signals. Denoising sound has many
applications in speech processing, sound events classification, and machine
failure detection systems. This paper describes a method for creating an
autoencoder to map noisy machine sounds to clean sounds for denoising purposes.
There are several types of noise in sounds, for example, environmental noise
and generated frequency-dependent noise from signal processing methods. Noise
generated by environmental activities is environmental noise. In the factory,
environmental noise can be created by vehicles, drilling, people working or
talking in the survey area, wind, and flowing water. Those noises appear as
spikes in the sound record. In the scope of this paper, we demonstrate the
removal of generated noise with Gaussian distribution and the environmental
noise with a specific example of the water sink faucet noise from the induction
motor sounds. The proposed method was trained and verified on 49 normal
function sounds and 197 horizontal misalignment fault sounds from the Machinery
Fault Database (MAFAULDA). The mean square error (MSE) was used as the
assessment criteria to evaluate the similarity between denoised sounds using
the proposed autoencoder and the original sounds in the test set. The MSE is
at most 0.14 when denoising both types of noise on 15 testing sounds of the
normal function category, and at most 0.15 when denoising 60 testing sounds of
the horizontal misalignment fault category. The
low MSE shows that both the generated Gaussian noise and the environmental
noise were almost removed from the original sounds with the proposed trained
autoencoder.
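The MSE assessment criterion is straightforward to reproduce. The sketch below uses a simple moving-average filter as a stand-in for the trained autoencoder; the autoencoder itself, the MAFAULDA sounds, and the 0.14/0.15 thresholds are not reproduced here.

```python
import random

def mse(a, b):
    """Mean square error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def moving_average(signal, k=5):
    """Toy stand-in for the trained autoencoder: a k-point moving
    average that attenuates additive Gaussian noise."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

rng = random.Random(0)
clean = [1.0] * 200                          # trivial "machine sound"
noisy = [s + rng.gauss(0, 0.3) for s in clean]
denoised = moving_average(noisy)
```

Comparing `mse(denoised, clean)` against `mse(noisy, clean)` is exactly the kind of before/after check used to judge the denoiser.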
|
Explainable deep learning models that can be applied to practical real-world
scenarios and, in turn, can consistently, rapidly and accurately identify
specific and minute traits in applicable fields of the biological sciences are
scarce. Here we consider one such real-world example, viz., accurate
identification, classification and quantification of biotic and abiotic
stresses in crop research and production. Up until now, this has been done
predominantly by manual visual inspection, which requires specialized training.
However, such techniques are hindered by subjectivity resulting from
inter- and intra-rater cognitive variability. Here, we demonstrate the ability
of a machine learning framework to identify and classify a diverse set of
foliar stresses in the soybean plant with remarkable accuracy. We also present
an explanation mechanism using gradient-weighted class activation mapping that
isolates the visual symptoms used by the model to make predictions. This
unsupervised identification of unique visual symptoms for each stress provides
a quantitative measure of stress severity, allowing for identification,
classification and quantification in one framework. The learnt model appears
to be agnostic to species and makes good predictions for other (non-soybean)
species, demonstrating a capacity for transfer learning.
|
A method is described intended for the distributed calibration of a probe
microscope scanner, consisting in a search for a net of local calibration
coefficients (LCCs) in the process of automatic measurement of a standard
surface, whereby each point of the movement space of the scanner can be defined
by a unique set of scale factors. Feature-oriented scanning (FOS) methodology
is used to implement the distributed calibration, which makes it possible to
exclude in situ the negative influence of thermal drift, creep and hysteresis
on the obtained results. The sensitivity of LCCs to errors in determination of
position coordinates of surface features forming the local calibration
structure (LCS) is eliminated by performing multiple repeated measurements
followed by building regression surfaces. There are no restrictions in
principle on the number of repeated LCS measurements. Possessing the
calibration database enables correcting in one procedure all the spatial
distortions caused by nonlinearity, nonorthogonality and spurious crosstalk
couplings of the microscope scanner piezomanipulators. To provide high
precision of spatial measurements in the nanometer range, the calibration is
carried out using natural standards, namely crystal lattice constants. The
method allows for automatic
characterization of crystal surfaces. The method may be used with any scanning
probe instrument.
|
We have used UVES on VLT-UT2 to take spectra of 15 individual red giant stars
in the centers of four nearby dwarf spheroidal galaxies: Sculptor, Fornax,
Carina and Leo I. We measure the abundance variations of numerous elements in
these low mass stars with a range of ages (1-15Gyr old). This means that we can
effectively measure the chemical evolution of these galaxies WITH TIME. Our
results show a significant spread in metallicity with age, but an overall trend
consistent with what might be expected from a closed (or perhaps leaky) box
chemical evolution scenario over the last 10-15Gyr. We notice that each of
these galaxies shows broadly similar abundance patterns for all elements
measured. This suggests a fairly uniform progression of chemical evolution with
time, despite quite a large range of star formation histories. It seems likely
that these galaxies had similar initial conditions, and evolve in a similar
manner with star formation occurring at a uniformly low rate, even if at
different times. With our accurate measurements we find evidence for small
variations in abundances which are correlated to variations in star formation
histories. The alpha-elements suggest that dSph chemical evolution has not been
affected by very high mass stars (>15-20 Msun). The abundance patterns we
measure for stars in dwarf spheroidal galaxies are significantly different from
those typically observed in the disk, bulge and inner-halo of our Galaxy. This
suggests that it is NOT possible to construct a significant fraction of our
Galaxy from STARS formed in these dwarf spheroidal galaxies which subsequently
merged into our own. Any merger scenario involving dSphs has to occur in the
very early Universe, whilst they are still gas rich, so that the majority of
the mass transferred is gas rather than stars.
|
We study theoretically the formation of long-wavelength instability patterns
observed during the spreading of nematic droplets on liquid substrates. The role of
surface-like elastic terms such as saddle-splay and anchoring in nematic films
of submicron thickness is (re)examined by extending our previous work
[Manyuhina et al EPL, 92, 16005 (2010)] to hybrid aligned nematics. We identify
the upper threshold for the formation of stripes and compare our results with
experimental observations. We find that the wavelength and the amplitude of the
in-plane director undulations can be related to the small but finite azimuthal
anchoring. Within a simplified model we analyse the possibility of a non-planar
base state below the Barbero-Barberi critical thickness.
|
We study the Schr\"odinger-Poisson (SP) method in the context of cosmological
large-scale structure formation in an expanding background. In the limit $\hbar
\to 0$, the SP technique can be viewed as an effective method to sample the
phase space distribution of cold dark matter that remains valid on non-linear
scales. We present results for the 2D and 3D matter correlation function and
power spectrum at length scales corresponding to the baryon acoustic
oscillation (BAO) peak. We discuss systematic effects of the SP method applied
to cold dark matter and explore how they depend on the simulation parameters.
In particular, we identify a combination of simulation parameters that controls
the scale-independent loss of power observed at low redshifts, and discuss the
scale relevant to this effect.
|
The Ly-$\alpha$ forest 1D flux power spectrum is a powerful probe of several
cosmological parameters. Assuming a $\Lambda$CDM cosmology including massive
neutrinos, we find that the latest SDSS DR14 BOSS and eBOSS Ly-$\alpha$ forest
data is in very good agreement with current weak lensing constraints on
$(\Omega_m, \sigma_8)$ and has the same small level of tension with Planck. We
did not identify a systematic effect in the data analysis that could explain
this small tension, but we show that it can be reduced in extended cosmological
models where the spectral index is not the same on the very different times and
scales probed by CMB and Ly-$\alpha$ data. A particular case is that of a
$\Lambda$CDM model including a running of the spectral index on top of massive
neutrinos. With combined Ly-$\alpha$ and Planck data, we find a slight
(3$\sigma$) preference for negative running, $\alpha_s= -0.010 \pm 0.004$ (68%
CL). Neutrino mass bounds are found to be robust against different assumptions.
In the $\Lambda$CDM model with running, we find $\sum m_\nu <0.11$ eV at the
95% confidence level for combined Ly-$\alpha$ and Planck (temperature and
polarisation) data, or $\sum m_\nu < 0.09$ eV when adding CMB lensing and BAO
data. We further provide strong and nearly model-independent bounds on the mass
of thermal warm dark matter. For a conservative configuration consisting of
SDSS data restricted to $z<4.5$ combined with XQ-100 Ly-$\alpha$ data, we find
$m_X > 5.3\;\mathrm{keV}$ (95% CL).
|
Aims: G15.4+0.1 is a faint supernova remnant (SNR) that has recently been
associated with the gamma-ray source HESS J1818-154. We investigate a hadronic
scenario for the production of the gamma-ray emission. Methods: Molecular 13CO
(J=1-0) taken from the Galactic Ring Survey (GRS) and neutral hydrogen (HI)
data from the Southern Galactic Plane Survey (SGPS) have been used in
combination with new 1420 MHz radio continuum observations carried out with the
Giant Metrewave Radio Telescope (GMRT). Results: From the new observations and
analysis of archival data we provide, for the first time, a reliable estimate
of the distance to the SNR G15.4+0.1 and discover molecular clouds located
at the same distance. On the basis of HI absorption features, we estimate the
distance to G15.4+0.1 at 4.8+/-1.0 kpc. The 13CO observations clearly show a
molecular cloud about 5 arcmin in size with two bright clumps, labeled A and B,
with clump A positionally associated with the location of HESS J1818-154 and
clump B coincident with the brightest northern border of the radio SNR shell.
The HI absorption and the 13CO emission studies indicate a possible interaction
between the molecular material and the remnant. We estimate the masses and
densities of the molecular gas as (1.2+/-0.5)X10^3 M_sun and (1.5+/-0.4)X10^3
cm^-3 for clump A and (3.0+/-0.7)X10^3 M_sun and (1.1+/-0.3)X10^3 cm^-3 for
clump B. Calculations show that the average density of the molecular clump A is
sufficient to produce the detected gamma-ray flux, thus favoring a hadronic
origin for the high-energy emission.
|
Algorithmic recommendations and decisions have become ubiquitous in today's
society. Many of these and other data-driven policies, especially in the realm
of public policy, are based on known, deterministic rules to ensure their
transparency and interpretability. For example, algorithmic pre-trial risk
assessments, which serve as our motivating application, provide relatively
simple, deterministic classification scores and recommendations to help judges
make release decisions. How can we use the data based on existing deterministic
policies to learn new and better policies? Unfortunately, prior methods for
policy learning are not applicable because they require existing policies to be
stochastic rather than deterministic. We develop a robust optimization approach
that partially identifies the expected utility of a policy, and then finds an
optimal policy by minimizing the worst-case regret. The resulting policy is
conservative but has a statistical safety guarantee, allowing the policy-maker
to limit the probability of producing a worse outcome than the existing policy.
We extend this approach to common and important settings where humans make
decisions with the aid of algorithmic recommendations. Lastly, we apply the
proposed methodology to a unique field experiment on pre-trial risk assessment
instruments. We derive new classification and recommendation rules that retain
the transparency and interpretability of the existing instrument while
potentially leading to better overall outcomes at a lower cost.
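The minimax-regret idea described above can be illustrated with a toy computation. The sketch below is an illustration only, not the paper's estimator: it assumes each candidate policy's expected utility is only partially identified as an interval [lo, hi] (the two-policy intervals are made up), and it selects the policy whose worst-case regret against the rival candidates is smallest.

```python
import numpy as np

def minimax_regret_policy(lo, hi):
    """Pick the policy minimizing worst-case regret when each policy's
    expected utility is only known to lie in the interval [lo[a], hi[a]]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    n = len(lo)
    worst_case = np.empty(n)
    for a in range(n):
        # adversary puts every rival at its upper bound and policy a at its lower bound
        rival_best = max(hi[b] for b in range(n) if b != a)
        worst_case[a] = max(rival_best - lo[a], 0.0)  # regret is never negative
    return int(np.argmin(worst_case)), worst_case

# two hypothetical release policies with partially identified utilities
best, wc = minimax_regret_policy(lo=[0.50, 0.10], hi=[0.60, 0.90])
```

Here the first policy is chosen: its worst-case regret is 0.9 - 0.5 = 0.4, versus 0.6 - 0.1 = 0.5 for the second. The conservatism noted in the abstract shows up as the adversarial choice of utilities inside the identified set.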
|
Stacking two-dimensional layered materials such as graphene and transitional
metal dichalcogenides with nonzero interlayer twist angles has recently become
attractive because of the emergence of novel physical properties. Stacking of
one-dimensional nanomaterials offers the lateral stacking offset as an
additional parameter for modulating the resulting material properties. Here, we
report that the edge states of twisted bilayer zigzag graphene nanoribbons
(TBZGNRs) can be tuned with both the twist angle and the stacking offset.
Strong edge state variations in the stacking region are first revealed by
density functional theory (DFT) calculations. We construct and characterize
twisted bilayer zigzag graphene nanoribbon (TBZGNR) systems on a Au(111)
surface using scanning tunneling microscopy. A detailed analysis of three
prototypical orthogonal TBZGNR junctions exhibiting different stacking offsets
by means of scanning tunneling spectroscopy reveals emergent near-zero-energy
states. From a comparison with DFT calculations, we conclude that the emergent
edge states originate from the formation of flat bands whose energy and spin
degeneracy are highly tunable with the stacking offset. Our work highlights
fundamental differences between 2D and 1D twistronics and spurs further
investigation of twisted one-dimensional systems.
|
Given a metric space $(F \cup C, d)$, we consider star covers of $C$ with
balanced loads. A star is a pair $(f, C_f)$ where $f \in F$ and $C_f \subseteq
C$, and the load of a star is $\sum_{c \in C_f} d(f, c)$. In the minimum load
$k$-star cover problem $(\mathrm{MLkSC})$, one tries to cover the set of
clients $C$ using $k$ stars so as to minimize the maximum load of a star, and
in the minimum size star cover problem $(\mathrm{MSSC})$ one aims to find the minimum number
of stars of load at most $T$ needed to cover $C$, where $T$ is a given
parameter.
We obtain new bicriteria approximations for the two problems using novel
rounding algorithms for their standard LP relaxations. For $\mathrm{MLkSC}$, we
find a star cover with $(1+\varepsilon)k$ stars and
$O(1/\varepsilon^2)\mathrm{OPT}_{\mathrm{MLk}}$ load where
$\mathrm{OPT}_{\mathrm{MLk}}$ is the optimum load. For $\mathrm{MSSC}$, we find
a star cover with $O(1/\varepsilon^2) \mathrm{OPT}_{\mathrm{MS}}$ stars of load
at most $(2 + \varepsilon) T$ where $\mathrm{OPT}_{\mathrm{MS}}$ is the optimal
number of stars for the problem. Previously, non-trivial bicriteria
approximations were known only when $F = C$.
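The objective can be spelled out directly from the definitions above. The following sketch uses hypothetical Euclidean points in place of the abstract metric and a naive nearest-facility assignment (not the paper's LP-rounding algorithms) to compute the load of each star and the MLkSC objective, i.e. the maximum load:

```python
import numpy as np
from collections import defaultdict

def star_loads(F, C, assign):
    """Load of star (f, C_f) is sum_{c in C_f} d(f, c); here d is Euclidean."""
    loads = defaultdict(float)
    for c, f in enumerate(assign):
        loads[f] += float(np.linalg.norm(F[f] - C[c]))
    return dict(loads)

def nearest_assignment(F, C):
    # naive heuristic: each client joins its nearest facility
    return [int(np.argmin([np.linalg.norm(f - c) for f in F])) for c in C]

F = np.array([[0.0, 0.0], [10.0, 0.0]])             # two facilities
C = np.array([[1.0, 0.0], [2.0, 0.0], [9.0, 0.0]])  # three clients
assign = nearest_assignment(F, C)                   # [0, 0, 1]
loads = star_loads(F, C, assign)                    # {0: 3.0, 1: 1.0}
max_load = max(loads.values())                      # MLkSC objective for this cover
```

Note that minimizing each client's own distance, as this heuristic does, need not balance loads, which is exactly why the bicriteria rounding of the LP relaxation is needed.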
|
Neural network algorithms simulated on standard computing platforms typically
make use of high resolution weights, with floating-point notation. However, for
dedicated hardware implementations of such algorithms, fixed-point synaptic
weights with low resolution are preferable. The basic approach of reducing the
resolution of the weights in these algorithms by standard rounding methods
incurs drastic losses in performance. To reduce the resolution further, in the
extreme case even to binary weights, more advanced techniques are necessary. To
this end, we propose two methods for mapping neural network algorithms with
high resolution weights to corresponding algorithms that work with low
resolution weights and demonstrate that their performance is substantially
better than standard rounding. We further use these methods to investigate the
performance of three common neural network algorithms under fixed memory size
of the weight matrix with different weight resolutions. We show that dedicated
hardware systems, whose technology dictates very low weight resolutions (be
they electronic or biological) could in principle implement the algorithms we
study.
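As a point of reference, the "standard rounding" that the text says performs poorly can be sketched as round-to-nearest onto a uniform low-resolution grid; stochastic rounding, shown alongside it, is one well-known alternative, used here purely for illustration (the two mapping methods proposed in the paper are not reproduced here):

```python
import numpy as np

def quantize_nearest(w, n_levels, w_max=1.0):
    """Round weights to the nearest of n_levels uniform levels in [-w_max, w_max]."""
    step = 2.0 * w_max / (n_levels - 1)
    return np.clip(np.round(np.asarray(w) / step) * step, -w_max, w_max)

def quantize_stochastic(w, n_levels, w_max=1.0, rng=None):
    """Round each weight up or down at random so the quantized value is
    unbiased in expectation (illustrative alternative to naive rounding)."""
    rng = np.random.default_rng(0) if rng is None else rng
    step = 2.0 * w_max / (n_levels - 1)
    w = np.clip(np.asarray(w, float), -w_max, w_max)
    lower = np.floor(w / step) * step
    p_up = (w - lower) / step            # probability of rounding up
    return lower + step * (rng.random(w.shape) < p_up)

w = np.array([0.26, -0.9, 0.74])
q = quantize_nearest(w, n_levels=5)      # grid {-1, -0.5, 0, 0.5, 1}
```

Averaging many stochastic roundings recovers the original weight in expectation, which is one reason randomized schemes preserve more information than deterministic rounding at very low resolutions such as binary weights.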
|
Biomolecular condensates play a central role in the spatial organization of
living matter. Their formation is now well understood as a form of
liquid-liquid phase separation that occurs very far from equilibrium. For
instance, they can be modeled as active droplets, where the combination of
molecular interactions and chemical reactions results in microphase separation.
However, so far, models of chemically active droplets are spatially continuous
and deterministic. Therefore, the relationship between the microscopic
parameters of the models and some crucial properties of active droplets (such
as their polydispersity, their shape anisotropy, or their typical lifetime) is
yet to be established. In this work, we address this question computationally,
using Brownian dynamics simulations of chemically active droplets: the building
blocks are represented explicitly as particles that interact with attractive or
repulsive interactions, depending on whether they are in a droplet-forming
state or not. Thanks to this microscopic and stochastic view of the problem, we
reveal how driving the system away from equilibrium in a controlled way
determines the fluctuations and dynamics of active emulsions.
|
We review how (dimensionally regulated) scattering amplitudes in N=4
super-Yang-Mills theory provide a useful testing ground for perturbative QCD
calculations relevant to collider physics, as well as another avenue for
investigating the AdS/CFT correspondence. We describe the iterative relation
for two-loop scattering amplitudes in N=4 super-Yang-Mills theory found in C.
Anastasiou et al., Phys. Rev. Lett. 91:251602 (2003), and discuss recent
progress toward extending it to three loops.
|
A UAV control system is a large and complex system, and designing and testing
one is costly in both time and money. This paper considers the simulated
identification of a nonlinear system's dynamics using an artificial neural
network approach. The experiment develops a neural network model of the plant
that we want to control; in the control design stage, the experiment uses the
neural network plant model to design (or train) the controller. We use Matlab
to train the network and simulate the behavior. This paper provides a
mathematical overview of the MRC technique and of the neural network
architecture used to simulate nonlinear identification of UAV systems. MRC
provides a direct and effective method to control a complex system without an
equation-driven model. The NN approach provides a good framework for
implementing MRC by identifying complicated models and training a controller
for them.
|
This paper studies a diffusion control problem motivated by challenges faced
by public health agencies who run clinics to serve the public. A key challenge
for these agencies is to motivate individuals to participate in the services
provided. They must manage the flow of (voluntary) participants so that the
clinic capacity is highly utilized, but not overwhelmed. The organization can
deploy costly promotion activities to increase the inflow of participants.
Ideally, the system manager would like to have enough participants waiting in a
queue to serve as many individuals as possible and efficiently use clinic
capacity. However, if too many participants sign up, resulting in a long wait,
participants may become irritated and hesitate to participate again in the
future. We develop a diffusion model of managing participant inflow mechanisms.
Each mechanism corresponds to choosing a particular drift rate parameter for
the diffusion model. The system manager seeks to balance three different costs
optimally: i) a linear holding cost that captures congestion concerns; ii) an
idleness penalty corresponding to wasted clinic capacity and the negative
impact on public health; and iii) the costs of promotion activities. We show that a
nested-threshold policy for deployment of participant inflow mechanisms is
optimal under the long-run average cost criterion. In this policy, the system
manager progressively deploys mechanisms in increasing order of cost, as the
number of participants in the queue decreases. We derive explicit formulas for
the queue length thresholds that trigger each promotion activity, providing the
system manager with guidance on when to use each mechanism.
|
Field transformation rules of the standard fermionic T-duality require
fermionic isometries to anticommute, which leads to complexification of the
Killing spinors and results in complex valued dual backgrounds. We generalize
the field transformations to the setting with non-anticommuting fermionic
isometries and show that the resulting backgrounds are solutions of double
field theory. Explicit examples of non-abelian fermionic T-dualities that
produce real backgrounds are given. Some of our examples can be bosonic
T-dualized into usual supergravity solutions, while the others are genuinely
non-geometric. Comparison with alternative treatment based on sigma models on
supercosets shows consistency.
|
Two-dimensional kinematics of the central region of M 83 (NGC 5236) were
obtained through three-dimensional NIR spectroscopy with Gemini South
telescope. The spatial region covered by the integral field unit (~5" x 13" or
~90 x 240 pc), was centered approximately at the center of the bulge isophotes
and oriented SE-NW. The Pa_beta emission at half arcsecond resolution clearly
reveals spider-like diagrams around three centers, indicating the presence of
extended masses, which we describe in terms of Satoh distributions. One of the
mass concentrations is identified as the optical nucleus (ON), another as the
center of the bulge isophotes, similar to the CO kinematical center (KC), and
the third as a condensation hidden at optical wavelengths (HN), coincident with
the largest lobe in 10 micron emission. We run numerical simulations that take
into account ON, KC and HN and four more clusters, representing the star
forming arc at the SW of the optical nucleus. We show that ON, KC and HN suffer
strong evaporation and merge in 10-50 Myr. The star-forming arc is scattered in
less than one orbital period, also falling into the center. Simulations also
show that tidal stripping boosts the external shell of the condensations to
their escape velocity. This fact might lead to an overestimation of the mass of
the condensations in kinematical observations with spatial resolution smaller
than the condensations' apparent sizes. Additionally, the two ILR resonances
embracing the chain of HII regions, claimed by different authors, might not
exist, due to the similarity of the masses of the different components and the
fast dynamical evolution of the central 300 pc of M83.
|
In a recent study [Phys. Rev. E \textbf{94}, 022103 (2016)] it has been shown
that, for a fluid film subject to critical adsorption, the resulting critical
Casimir force (CCF) may significantly depend on the thermodynamic ensemble.
Here, we extend that study by considering fluid films within the so-called
ordinary surface universality class. We focus on mean-field theory, within
which the order parameter (OP) profile satisfies Dirichlet boundary conditions and produces a
nontrivial CCF in the presence of external bulk fields or, respectively, a
nonzero total order parameter within the film. Our analytical results are
supported by Monte Carlo simulations of the three-dimensional Ising model. We
show that, in the canonical ensemble, i.e., when fixing the so-called total
mass within the film, the CCF is typically repulsive instead of attractive as
in the grand canonical ensemble. Based on the Landau-Ginzburg free energy, we
furthermore obtain analytic expressions for the order parameter profiles and
analyze the relation between the total mass in the film and the external bulk
field.
|
In this work, we study the relation of cosmic environment and morphology with
the star-formation (SF) and the stellar population of galaxies. Most
importantly, we examine if this relation differs for systems with active and
non-active supermassive black holes. For that purpose, we use 551 X-ray
detected active galactic nuclei (AGN) and 16,917 non-AGN galaxies in the
COSMOS-Legacy survey, for which the surface-density field measurements are
available. The sources lie at redshift of $\rm 0.3<z<1.2$, probe X-ray
luminosities of $\rm 42<log\,[L_{X,2-10keV}(erg\,s^{-1})]<44$ and have stellar
masses, $\rm 10.5<log\,[M_*(M_\odot)]<11.5$. Our results show that isolated
(field) AGN have lower SFR compared to non-AGN galaxies, at all L$_X$ spanned by our sample.
However, in denser environments (filaments, clusters), moderate L$_X$ AGN ($\rm
log\,[L_{X,2-10keV}(erg\,s^{-1})]>43$) and non-AGN galaxies have similar SFR.
We also examine the stellar populations and the morphology of the sources in
different cosmic fields. For the same morphological type, non-AGN galaxies tend
to have older stellar populations and are less likely to have undergone a
recent burst in denser environments compared to their field counterparts. The
differences in the stellar populations with the density field are, mainly,
driven by quiescent systems. Moreover, low L$_X$ AGN present negligible
variations of their stellar populations, in all cosmic environments, whereas
moderate L$_X$ AGN have, on average, younger stellar populations and are more
likely to have undergone a recent burst, in high density fields. Finally, in
the case of non-AGN galaxies, the fraction of bulge-dominated (BD) systems
increases with the density field, while BD AGN are scarce in denser
environments. Our results are consistent with a scenario in which a common
mechanism, such as mergers, triggers both the SF and the AGN activity.
|
We investigate the fundamental properties of quantum Borcherds-Bozec algebras
and their representations. Among others, we prove that the quantum
Borcherds-Bozec algebras have a triangular decomposition and the category of
integrable representations is semi-simple.
|
Recent experiments have realized an all-optical photon transistor using a
cold atomic gas. This approach relies on electromagnetically induced
transparency (EIT) in conjunction with the strong interaction among atoms
excited to high-lying Rydberg states. The transistor is gated via a so-called
Rydberg spinwave, in which a single Rydberg excitation is coherently shared by
the whole ensemble. In its absence the incoming photon passes through the
atomic ensemble by virtue of EIT while in its presence the photon is scattered
rendering the atomic gas opaque. An important current challenge is to preserve
the coherence of the Rydberg spinwave during the operation of the transistor,
which would enable for example its coherent optical read-out and its further
processing in quantum circuits. With a combined field theoretical and quantum
jump approach and by employing a simple model description we investigate
systematically and comprehensively how the coherence of the Rydberg spinwave is
affected by photon scattering. With large-scale numerical calculations we show
how coherence becomes increasingly protected with growing interatomic
interaction strength. For the strongly interacting limit we derive analytical
expressions for the spinwave fidelity as a function of the optical depth and
bandwidth of the incoming photon.
|
We address the challenging issue of how CP violation is realized in higher
dimensional gauge theories without higher dimensional elementary scalar fields.
In such theories interactions are basically governed by a gauge principle and
therefore to get CP violating phases is a non-trivial task. It is demonstrated
that CP violation is achieved as the result of compactification of extra
dimensions, which is incompatible with the 4-dimensional CP transformation. As
a simple example we adopt a 6-dimensional U(1) model compactified on a
2-dimensional orbifold $T^{2}/Z_{4}$. We argue that the 4-dimensional CP
transformation is related to the complex structure of the extra space and show
how the $Z_{4}$ orbifolding leads to CP violation. We confirm by explicit
calculation of the interaction vertices that CP violating phases remain even
after the re-phasing of relevant fields. For completeness, we derive a
re-phasing invariant CP violating quantity, following a similar argument in the
Kobayashi-Maskawa model which led to the Jarlskog parameter. As an example of a
CP violating observable we briefly comment on the electric dipole moment of the
electron.
|
We study the ramifications of increased commitment power for information
provision in an oligopolistic market with search frictions. Although prices are
posted and, therefore, guide search, if firms cannot commit to information
provision policies, there is no active search at equilibrium so consumers visit
(and purchase from) at most one firm. If firms can guide search by both their
prices and information policies, there exists a unique symmetric equilibrium
exhibiting price dispersion and active search. Nevertheless, when the market is
thin, consumers prefer the former case, which features intense price
competition. Firms always prefer the latter.
|
High resolution gravity plus smoothed particle hydrodynamics simulations are
used to study the formation of galaxies within the context of hierarchical
structure formation. The simulations have sufficient dynamic range to resolve
from ten kpc scale galactic disks up to many Mpc scale filaments. Over this
range of scales, we find that hierarchical structure development proceeds
through a series of increasingly larger filamentary collapses. The well
resolved simulated galaxies contain hundreds to thousands of particles and have
varied morphologies covering the entire expected range from disks to tidally
distorted objects. The epoch of galaxy formation occurs early, about redshift
2.5 for 10^12 M_sun galaxies. Hierarchical formation naturally produces
correlations among the mass, age, morphology, and local density of galaxies
which match the trends in the observed morphology--density relation. We also
describe a method of spiral galaxy formation in which galactic disks form
through the discrete accretion of gas clouds which transport most of the
angular momentum to the inner regions. Such a process is characteristic of the
somewhat chaotic nature of hierarchical structure formation where simple
analytical ideas of spherical collapse appear incongruous.
|
This study investigated the electronic structure of SrTi$_{1-x}$V$_x$O$_3$
(STVO) thin films, which are solid solutions of strongly correlated transparent
conductive oxide (TCO) SrVO$_3$ and oxide semiconductor SrTiO$_3$, using in
situ photoemission spectroscopy. STVO is one of the most promising candidates
for correlated-metal TCO because it has the capability of optimizing the
performance of transparent electrodes by varying ${x}$. Systematic and
significant spectral changes were found near the Fermi level (${E_{\rm F}}$) as
a function of ${x}$, while the overall electronic structure of STVO is in good
agreement with the prediction of band structure calculations. As ${x}$
decreases from 1.0, spectral weight transfer occurs from the coherent band near
${E_{\rm F}}$ to the incoherent states (lower Hubbard band) around 1.0-1.5 eV.
Simultaneously, a pseudogap is formed at ${E_{\rm F}}$, indicating a
significant reduction in quasiparticle spectral weight within close vicinity of
${E_{\rm F}}$. This pseudogap seems to evolve into an energy gap at ${x}$ =
0.4, suggesting the occurrence of a composition-driven metal-insulator
transition. From angle-resolved photoemission spectroscopic results, the
carrier concentration ${n}$ changes proportionally as a function of ${x}$ in
the metallic range of ${x}$ = 0.6-1.0. In contrast, the mass enhancement
factor, which is proportional to the effective mass (${m^*}$), does not change
significantly with varying ${x}$. These results suggest that the key factor of
${n/m^*}$ in optimizing the performance of correlated-metal TCO is tuned by
${x}$, highlighting the potential of STVO to achieve the desired TCO
performance in the metallic region.
|
Ji\v{r}\'i Matou\v{s}ek (1963-2015) had many breakthrough contributions in
mathematics and algorithm design. His milestone results are not only profound
but also elegant. By going beyond the original objects --- such as Euclidean
spaces or linear programs --- Jirka found the essence of the challenging
mathematical/algorithmic problems as well as beautiful solutions that were
natural to him, but were surprising discoveries to the field.
In this short exploration article, I will first share with readers my initial
encounter with Jirka and discuss one of his fundamental geometric results from
the early 1990s. In the age of social and information networks, I will then
turn the discussion from geometric structures to network structures, attempting
to take a humble step towards the holy grail of network science, that is, to
understand the network essence that underlies the observed
sparse-and-multifaceted network data. I will discuss a simple result which
summarizes some basic algebraic properties of personalized PageRank matrices.
Unlike the traditional transitive closure of binary relations, the personalized
PageRank matrices take "accumulated Markovian closure" of network data. Some of
these algebraic properties are known in various contexts. But I hope featuring
them together in a broader context will help to illustrate the desirable
properties of this Markovian completion of networks, and motivate systematic
developments of a network theory for understanding vast and ubiquitous
multifaceted network data.
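One of the basic algebraic properties alluded to above can be checked in a few lines: for a row-stochastic random-walk matrix P and teleportation probability alpha, the personalized PageRank matrix M = alpha (I - (1-alpha) P)^{-1} = alpha sum_k (1-alpha)^k P^k is nonnegative and row-stochastic. A minimal numerical sketch (the small example graph is made up):

```python
import numpy as np

alpha = 0.15
# adjacency matrix of a small undirected triangle graph (illustrative only)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)     # row-stochastic random-walk matrix

# personalized PageRank matrix: row i is the PPR vector personalized at node i
M = alpha * np.linalg.inv(np.eye(3) - (1.0 - alpha) * P)
```

Row-stochasticity follows term by term from the geometric series: every P^k has unit row sums, and the weights alpha (1-alpha)^k sum to one; each row is thus itself a probability distribution, the "accumulated Markovian closure" of the seed node.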
|
We performed a systematic study of the temperature- and field-dependence of
magnetization and resistivity of Gd2PdSi3, which is a centrosymmetric skyrmion
crystal. While the magnetization behavior is consistent with the reported phase
diagram based on susceptibility, we show that a phase diagram can also be
constructed based on the anomalous magnetoresistance with one-to-one
correspondence among all the features. In addition, the crossover boundary into
the field-induced ferromagnetic state is also identified. Our results suggest
that the ferromagnetic spin fluctuations above the N\'eel temperature play a
key role in the high sensitivity of the resistivity anomalies to magnetic
field, pointing to the rich interplay of different magnetic correlations at
zero and finite wave vectors underlying the skyrmion lattice in this frustrated
itinerant magnet.
|
A first-principles study of the structural and electronic properties of carbon
impurities in CuIn$_{1-x}$Ga$_x$Se$_{2}$ is presented. Carbon is present in
organic molecules in the precursor solutions used in nonvacuum growth methods,
making more efficient use of material, time and energy than traditional vacuum
methods. The formation energies of several carbon impurities are calculated
using the hybrid HSE06 functional. C$_{\mathrm{Cu}}$ acts as a shallow donor,
C$_{\mathrm{In}}$ and interstitial C yield deep donor levels in CuInSe$_{2}$,
while in CuGaSe$_{2}$ C$_{\mathrm{Ga}}$ and interstitial C act as deep
amphoteric defects. So, if present, these defects reduce the majority carrier
(hole) concentration by compensating the acceptor levels and become trap states
for the photogenerated minority carriers (electrons). However, the formation
energies of the calculated carbon impurities are high, even under C-rich growth
conditions. Therefore, these impurities are not likely to form and will
probably be expelled to the intergranular region and out of the absorber layer.
|
The fast pace at which new online services emerge leads to a rapid surge in
the volume of network traffic. A recent approach that the research community
has proposed to tackle this issue is in-network computing, which means that
network devices perform more computations than before. As a result, processing
demands become more varied, creating the need for flexible packet-processing
architectures. State-of-the-art approaches provide a high degree of flexibility
at the expense of performance for complex applications, or they ensure high
performance but only for specific use cases. In order to address these
limitations, we propose FlexCross. This flexible packet-processing design can
process network traffic with diverse processing requirements at over 100 Gbit/s
on FPGAs. Our design contains a crosspoint-queued crossbar that enables the
execution of complex applications by forwarding incoming packets to the
required processing engines in the specified sequence. The crossbar consists of
distributed logic blocks that route incoming packets to the specified targets
and resolve contentions for shared resources, as well as memory blocks for
packet buffering. We implemented a prototype of FlexCross in Verilog and
evaluated it via cycle-accurate register-transfer level simulations. We also
conducted test runs with real-world network traffic on an FPGA. The evaluation
results demonstrate that FlexCross outperforms state-of-the-art flexible
packet-processing designs for different traffic loads and scenarios. The
synthesis results show that our prototype consumes roughly 21% of the resources
on a Virtex XCU55 UltraScale+ FPGA.
|
We present long-slit spectroscopy and spectro-astrometry of HeI 1.083 micron
emission in the T Tauri star, DG Tau. We identify three components in the HeI
feature: (1) a blueshifted emission component at v ~ -200 km s^-1, (2) a bright
emission component at zero velocity with a FWZI of ~500 km s^-1, and (3) a
blueshifted absorption feature at velocities between -250 and -500 km s^-1. The
position and velocity of the blueshifted HeI emission coincide with a
high-velocity component (HVC) of the [FeII] 1.257 micron emission, which arises
from a jet within an arcsecond of the star. The presence of such a high
excitation line (excitation energy ~ 20 eV) within the jet supports the
scenario of shock heating. The bright HeI component does not show any spatial
extension, and it is likely to arise from magnetospheric accretion columns.
The blueshifted absorption shows greater velocities than those in H-alpha,
suggesting that these absorption features arise from the accelerating wind
close to the star.
|
Following our reformulation of sheaf-theoretic Virasoro constraints with
applications to curves and surfaces joint with Lim-Moreira, I describe in the
present work the quiver analog. After phrasing a universal approach to Virasoro
constraints for moduli of quiver representations, I prove them for any finite
quiver with relations and frozen vertices, but without cycles. I use partial
flag varieties which are special cases of moduli spaces of framed
representations as a guiding example throughout. These results are applied to
give an independent proof of Virasoro constraints for all Gieseker semistable
sheaves on $\mathbb{P}^2$ and $\mathbb{P}^1 \times \mathbb{P}^1$ by using
derived equivalences to quivers with relations. Combined with an existing
universality argument for Virasoro constraints on Hilbert schemes of points on
surfaces, this leads to a proof of the rank 1 case for any surface $S$ that is
independent of the previous results in Gromov-Witten theory.
|
We propose a novel method to improve deep learning model performance on
highly imbalanced tasks. The proposed method is based on CycleGAN and is used
to obtain a balanced dataset. We show that data augmentation with a GAN helps
to improve the accuracy of a pneumonia binary classification task even when
the generative network was trained on the same training dataset.
|
Creating temperature gradients in magnetic nanostructures has resulted in a
new research direction, i.e., the combination of magneto- and thermoelectric
effects. Here, we demonstrate the observation of one important effect of this
class: the magneto-Seebeck effect. It is observed when a magnetic configuration
changes the charge based Seebeck coefficient. In particular, the Seebeck
coefficient changes during the transition from a parallel to an antiparallel
magnetic configuration in a tunnel junction. In that respect, it is the analog
to the tunneling magnetoresistance. The Seebeck coefficients in the parallel and
antiparallel configurations are of the order of the voltages known from the
charge-Seebeck effect. The size and sign of the effect can be controlled by the
composition of the electrodes' atomic layers adjacent to the barrier and the
temperature. Experimentally, we realized an 8.8% magneto-Seebeck effect, which
results from a voltage change of about -8.7 {\mu}V/K from the antiparallel to
the parallel configuration, close to the predicted value of -12.1 {\mu}V/K.
|
We introduce Nevanlinna classes associated to non-radial weights in the unit
disc in the complex plane, and we obtain Blaschke-type theorems relative to
these classes by using methods from several complex variables. This gives
alternative proofs of, and improves, some results of Boritchev, Golinski and
Kupin that are useful, in particular, for the study of eigenvalues of
non-self-adjoint Schr\"odinger operators.
|
We study the existence and uniqueness of solutions to stochastic differential
equations with Volterra processes driven by L\'evy noise. For this purpose, we
study in detail smoothness properties of these processes. Special attention is
given to two kinds of Volterra-Gaussian processes that generalize the compact
interval representation of fractional Brownian motion and to stochastic
equations with such processes.
|
In this paper we use a formal discrete-to-continuum procedure to derive a
continuum variational model for two chains of atoms with slightly
incommensurate lattices. The chains represent a cross-section of a
three-dimensional system consisting of a graphene sheet suspended over a
substrate. The continuum model recovers both qualitatively and quantitatively
the behavior observed in the corresponding discrete model. The numerical
solutions for both models demonstrate the presence of large commensurate
regions separated by localized incommensurate domain walls.
|
Epistemic emotions, such as curiosity and interest, drive the inquiry
process. This study proposes a novel formulation of these epistemic emotions
using two types of information gain generated by the principle of free energy
minimization: Kullback-Leibler divergence (KLD) from
Bayesian posterior to prior, which represents free energy reduction in
recognition, and Bayesian surprise (BS), which represents the expected
information gain by Bayesian prior update. By applying a Gaussian generative
model with an additional uniform likelihood, we found that KLD and BS form an
upward-convex function of surprise (minimized free energy and prediction
error), similar to Berlyne's arousal potential functions, or the Wundt curve.
We consider that the alternate maximization of BS and KLD generates an ideal
inquiry cycle to approach the optimal arousal level with fluctuations in
surprise, and that curiosity and interest act as drives that facilitate this
cyclic process. We exhaustively analyzed the effects of prediction uncertainty (prior
variance) and observation uncertainty (likelihood variance) on the peaks of the
information gain function as optimal surprises. The results show that greater
prediction uncertainty, meaning an open-minded attitude, and less observational
uncertainty, meaning precise observation with attention, are expected to
provide greater information gains through a greater range of exploration. The
proposed mathematical framework unifies the free energy principle of the brain
and the arousal potential theory to explain the Wundt curve as an information
gain function and suggests an ideal inquiry process driven by epistemic
emotions.
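The two information gains above can be made concrete with a minimal sketch of a conjugate Gaussian update and the resulting KLD from posterior to prior. This is not the paper's full generative model (which additionally includes a uniform likelihood component); the prior, likelihood variance, and observation below are hypothetical values chosen for illustration:

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL(N(mu_q, var_q) || N(mu_p, var_p)) in nats
    return 0.5 * (math.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def bayes_update(mu0, var0, x, var_lik):
    # Conjugate update of a Gaussian prior N(mu0, var0) given one observation x
    # drawn with Gaussian likelihood variance var_lik
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_lik)
    mu_post = var_post * (mu0 / var0 + x / var_lik)
    return mu_post, var_post

mu0, var0 = 0.0, 4.0   # prior: large variance = open-minded prediction (hypothetical)
var_lik = 1.0          # likelihood variance: small = precise observation (hypothetical)
x = 2.0                # observed datum; surprise grows with |x - mu0|
mu1, var1 = bayes_update(mu0, var0, x, var_lik)
kld = gaussian_kl(mu1, var1, mu0, var0)  # free-energy reduction in recognition
```

Sweeping `x` over a range of surprises with such a model traces out the upward-convex information-gain curves discussed above.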
|
Deep learning, a multi-layered neural network approach inspired by the brain,
has revolutionized machine learning. One of its key enablers has been
backpropagation, an algorithm that computes the gradient of a loss function
with respect to the weights in the neural network model, in combination with
its use in gradient descent. However, the implementation of deep learning in
digital computers is intrinsically wasteful, with energy consumption becoming
prohibitively high for many applications. This has stimulated the development
of specialized hardware, ranging from neuromorphic CMOS integrated circuits and
integrated photonic tensor cores to unconventional, material-based computing
systems. The learning process in these material systems, taking place, e.g., by
artificial evolution or surrogate neural network modelling, is still a
complicated and time-consuming process. Here, we demonstrate an efficient and
accurate homodyne gradient extraction method for performing gradient descent on
the loss function directly in the material system. We demonstrate the method in
our recently developed dopant network processing units, where we readily
realize all Boolean gates. This shows that gradient descent can in principle be
fully implemented in materio using simple electronics, opening up the way to
autonomously learning material systems.
|
Computations of higher-order QCD corrections for processes with exclusive
final states require a subtraction method for real-radiation contributions. We
present the first-ever generalisation of a subtraction method for third-order
(N3LO) QCD corrections. The Projection-to-Born method is used to combine
inclusive N3LO coefficient functions with an exclusive second-order (NNLO)
calculation for a final state with an extra jet. The input requirements,
advantages, and potential applications of the method are discussed, and
validations at lower orders are performed. As a test case, we compute the N3LO
corrections to kinematical distributions and production rates for single-jet
production in deep inelastic scattering in the laboratory frame, and compare
them with data from the ZEUS experiment at HERA. The corrections are small in
the central rapidity region, where they stabilize the predictions to sub
per-cent level. The corrections increase substantially towards forward rapidity
where large logarithmic effects are expected, thereby yielding an improved
description of the data in this region.
|
We explore the use of a sufficient statistic based on the identified members
that are obtained for samples that are selected under the $M_0$
capture-recapture closed population model (Schwarz and Seber, 1999). A
Rao-Blackwellized version of the estimator based on a sufficient statistic is
then presented. We explore the efficiency of the improved estimator via a
simulation study. The R code for the simulation is provided in the appendix.
|
We investigate a superconducting state irradiated by a laser beam with spin
and orbital angular momentum. It is shown that superconducting vortices are
created by the laser beam due to heating effect and transfer of angular
momentum of light. Possible experiments to verify our prediction are also
discussed.
|
Recently, a novel framework to handle stochastic processes has emerged from a
series of studies in biology, showing situations beyond 'It\^o versus
Stratonovich'. Its internal consistency can be demonstrated via the zero mass
limit of a generalized Klein-Kramers equation. Moreover, the connection to
other integrations becomes evident: the obtained Fokker-Planck equation defines
a new type of stochastic calculus that in general differs from the
{\alpha}-type interpretation. A unique advantage of this new approach is a
natural correspondence between stochastic and deterministic dynamics, which is
useful or may even be essential in practice. The core of the framework is a
transformation from the usual Langevin equation to a form that contains a
potential function with two additional dynamical matrices, which reveals an
underlying symplectic structure. The framework has a direct physical meaning
and a straightforward experimental realization. A recent experiment has offered
a first empirical validation of this new stochastic integration.
|
This paper summarizes the modeling, statistics, simulation, and computing
needs of direct dark matter detection experiments in the next decade.
|
XENON100 and the LHC are two of the most promising machines to test the
physics beyond the Standard Model. In the meantime, indirect hints push us to
believe that the dark matter and Higgs boson could be the two next fundamental
particles to be discovered. Whereas ATLAS and CMS have just released their new
limits on the Higgs searches, XENON100 obtained very recently strong
constraints on DM-proton elastic scattering. In this work, we show that when we
combine WMAP and the most recent results of XENON100, the invisible width of
the Higgs to scalar dark matter is negligible ($\lesssim 10\%$), except in a
small region with very light dark matter ($\lesssim 10$ GeV) not yet excluded
by XENON100, or around 60 GeV, where the ratio can reach 50% to 60%. The new
results released by the Higgs searches of ATLAS and CMS set very strong limits
on the elastic scattering cross section, even restricting it to the region
$8 \times 10^{-46}~\mathrm{cm^2} \lesssim \sigma_{S-p}^{SI} \lesssim 2 \times
10^{-45}~\mathrm{cm^{2}}$ under the hypothesis $135~\mathrm{GeV} \lesssim M_H
\lesssim 155~\mathrm{GeV}$.
|
We report on the outcome of an audit of Twitter's Home Timeline ranking
system. The goal of the audit was to determine if authors from some racial
groups experience systematically higher impression counts for their Tweets than
others. A central obstacle for any such audit is that Twitter does not
ordinarily collect or associate racial information with its users, thus
prohibiting an analysis at the level of individual authors. Working around this
obstacle, we take US counties as our unit of analysis. We associate each user
in the United States on the Twitter platform to a county based on available
location data. The US Census Bureau provides information about the racial
decomposition of the population in each county. The question we investigate
then is if the racial decomposition of a county is associated with the
visibility of Tweets originating from within the county. Focusing on two racial
groups, the Black or African American population and the White population as
defined by the US Census Bureau, we evaluate two statistical measures of bias.
Our investigation represents the first large-scale algorithmic audit into
racial bias on the Twitter platform. Additionally, it illustrates the
challenges of measuring racial bias in online platforms without having such
information on the users.
|
We evaluate the uncertainty quality in neural networks using anomaly
detection. We extract uncertainty measures (e.g. entropy) from the predictions
of candidate models, use those measures as features for an anomaly detector,
and gauge how well the detector differentiates known from unknown classes. We
assign higher uncertainty quality to candidate models that lead to better
detectors. We also propose a novel method for sampling a variational
approximation of a Bayesian neural network, called One-Sample Bayesian
Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We
compare the following candidate neural network models: Maximum Likelihood,
Bayesian Dropout, OSBA, and, for MNIST, the standard variational
approximation. We show that Bayesian Dropout and OSBA provide better
uncertainty information than Maximum Likelihood, and are essentially equivalent
to the standard variational approximation, but much faster.
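One of the uncertainty measures mentioned above, predictive entropy, can be sketched as follows; the softmax probability vectors are hypothetical examples, not model outputs from the paper:

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy (in nats) of each predictive distribution; probs: (N, n_classes)
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(p * np.log(p), axis=1)

# Hypothetical softmax outputs: a confident and an uncertain prediction
confident = np.array([[0.98, 0.01, 0.01]])
uncertain = np.array([[0.4, 0.3, 0.3]])
assert predictive_entropy(confident)[0] < predictive_entropy(uncertain)[0]
```

Such per-example entropies would then serve as features for the anomaly detector that separates known from unknown classes.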
|
The distribution of recurrence times or return intervals between extreme
events is important to characterize and understand the behavior of physical
systems and phenomena in many disciplines. It is well known that many physical
processes in nature and society display long range correlations. Hence, in the
last few years, considerable research effort has been directed towards studying
the distribution of return intervals for long range correlated time series.
Based on numerical simulations, it was shown that the return interval
distributions are of stretched exponential type. In this paper, we obtain an
analytical expression for the distribution of return intervals in long range
correlated time series, which holds when the average return intervals are
large. We show that the distribution is actually a product of a power law and a
stretched exponential. We also discuss the regimes of validity and perform
detailed studies on how the return interval distribution depends on the
threshold used to define extreme events.
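As a minimal sketch of how return intervals are extracted from a time series given a threshold, consider the following; an uncorrelated Gaussian surrogate is used here for simplicity, whereas the paper concerns long-range correlated series, and the threshold and series length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal(100_000)  # uncorrelated surrogate (the paper treats long-range correlated data)
q = 2.0                                # hypothetical threshold defining an "extreme" event

# Return intervals: gaps between successive exceedances of the threshold
events = np.flatnonzero(series > q)
intervals = np.diff(events)

# For uncorrelated data the interval distribution is geometric with mean ~ 1 / P(x > q)
mean_interval = intervals.mean()
```

Replacing the surrogate with a long-range correlated series (e.g. generated by Fourier filtering) is what produces the stretched-exponential-type distributions discussed above.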
|
We present the fourth edition of the Sloan Digital Sky Survey (SDSS) Quasar
Catalog. The catalog contains 77,429 objects; this is an increase of over
30,000 entries since the previous edition. The catalog consists of the objects
in the SDSS Fifth Data Release that have luminosities larger than M_i = -22.0
(in a cosmology with H_0 = 70 km/s/Mpc, Omega_M = 0.3, and Omega_Lambda = 0.7),
have at least one emission line with FWHM larger than 1000 km/s, or have
interesting/complex absorption features, are fainter than i=15.0, and have
highly reliable redshifts. The area covered by the catalog is 5740 sq. deg. The
quasar redshifts range from 0.08 to 5.41, with a median value of 1.48; the
catalog includes 891 quasars at redshifts greater than four, of which 36 are at
redshifts greater than five. Approximately half of the catalog quasars have i <
19; nearly all have i < 21. For each object the catalog presents positions
accurate to better than 0.2 arcsec. rms per coordinate, five-band (ugriz)
CCD-based photometry with typical accuracy of 0.03 mag, and information on the
morphology and selection method. The catalog also contains basic radio,
near-infrared, and X-ray emission properties of the quasars, when available,
from other large-area surveys. The calibrated digital spectra cover the
wavelength region 3800--9200A at a spectral resolution of ~2000. The spectra
can be retrieved from the public database using the information provided in the
catalog. The average SDSS colors of quasars as a function of redshift, derived
from the catalog entries, are presented in tabular form. Approximately 96% of
the objects in the catalog were discovered by the SDSS.
|
The study of the cross-sectional ranks of observed curves at each time point
and of the temporal evolution of these ranks can yield valuable insights into
the time dynamics of functional data. This approach is of interest in various
application areas. For
the analysis of the dynamics of ranks, estimation of the cross-sectional ranks
of functional data is a first step. Several statistics of interest for ranked
functional data are proposed. To quantify the evolution of ranks over time, a
model for rank derivatives is introduced, where rank dynamics are decomposed
into two components. One component corresponds to population changes and the
other to individual changes that both affect the rank trajectories of
individuals. The joint asymptotic normality for suitable estimates of these two
components is established. The proposed approaches are illustrated with
simulations and three longitudinal data sets: Growth curves obtained from the
Z\"urich Longitudinal Growth Study, monthly house price data in the US from
1996 to 2015 and Major League Baseball offensive data for the 2017 season.
|
We present a simple example of a toughening mechanism in the homogenization of
composites with soft inclusions, produced by crack deflection at the
microscopic level. We show that the mechanism is connected to the
irreversibility of the crack process; because of this, it cannot be detected
through $\Gamma$-convergence, the standard homogenization tool.
|
We consider the minimum cut problem in undirected, weighted graphs. We give a
simple algorithm to find a minimum cut that $2$-respects (cuts two edges of) a
spanning tree $T$ of a graph $G$. This procedure can be used in place of the
complicated subroutine given in Karger's near-linear time minimum cut algorithm
(J. ACM, 2000). We give a self-contained version of Karger's algorithm with the
new procedure, which is easy to state and relatively simple to implement. It
produces a minimum cut on an $m$-edge, $n$-vertex graph in $O(m \log^3 n)$ time
with high probability, matching the complexity of Karger's approach.
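To make the notion of a cut that 2-respects a spanning tree concrete, here is a brute-force sketch on a hypothetical toy graph: it enumerates all bipartitions and keeps those cutting exactly two tree edges. This exponential enumeration is only illustrative; the paper's procedure avoids it entirely:

```python
from itertools import combinations

# Hypothetical toy graph: undirected weighted edges (u, v, w) on vertices 0..3
edges = [(0, 1, 3), (1, 2, 1), (2, 3, 4), (3, 0, 2), (0, 2, 2)]
n = 4
tree = [(0, 1), (1, 2), (2, 3)]  # a spanning tree T of the graph

def cut_weight(S):
    # Total weight of graph edges with exactly one endpoint in S
    return sum(w for u, v, w in edges if (u in S) != (v in S))

def tree_crossings(S):
    # Number of tree edges cut by the bipartition (S, V \ S)
    return sum(1 for u, v in tree if (u in S) != (v in S))

# Enumerate all proper vertex subsets; keep cuts that 2-respect T
best = min(
    cut_weight(S)
    for r in range(1, n)
    for S in (set(c) for c in combinations(range(n), r))
    if tree_crossings(S) == 2
)
```

On this toy instance the minimum 2-respecting cut is the singleton {1}, cutting tree edges (0,1) and (1,2) with total weight 4.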
|
Galaxy groups are the least massive systems where the bulk of baryons begin
to be accounted for. Not simply the scaled-down versions of rich clusters
following self-similar relations, galaxy groups are ideal systems to study
baryon physics, which is important for both cluster cosmology and galaxy
formation. We review the recent observational results on the hot gas in galaxy
groups. The first part of the paper is on the scaling relations, including
X-ray luminosity, entropy, gas fraction, baryon fraction and metal abundance.
Compared to clusters, groups have a lower fraction of hot gas around the center
(e.g., r < r_2500), but may have a comparable gas fraction at large radii
(e.g., r_2500 < r < r_500). Better constraints on the group gas and baryon
fractions require sample studies with different selection functions and deep
observations at r > r_500 regions. The hot gas in groups is also iron poor at
large radii (0.3 r_500 - 0.7 r_500). The iron content of the hot gas within the
central regions (r < 0.3 r_500) correlates with the group mass, in contrast to
the trend of the stellar mass fraction. It remains to be seen where the missing
iron in low-mass groups is. In the second part, we discuss several aspects of
X-ray cool cores in galaxy groups, including their difference from cluster cool
cores, radio AGN heating in groups and the cold gas in group cool cores.
Because of the vulnerability of the group cool cores to radio AGN heating and
the weak heat conduction in groups, group cool cores are important systems to
test the AGN feedback models and the multiphase cool core models. At the end of
the paper, some outstanding questions are listed.
|
Tor and I2P are well-known anonymity networks used by many individuals to
protect their online privacy and anonymity. Tor's centralized directory
services facilitate the understanding of the Tor network, as well as the
measurement and visualization of its structure through the Tor Metrics project.
In contrast, I2P does not rely on centralized directory servers, and thus
obtaining a complete view of the network is challenging. In this work, we
conduct an empirical study of the I2P network, in which we measure properties
including population, churn rate, router type, and the geographic distribution
of I2P peers. We find that there are currently around 32K active I2P peers in
the network on a daily basis. Of these peers, 14K are located behind NAT or
firewalls.
Using the collected network data, we examine the blocking resistance of I2P
against a censor that wants to prevent access to I2P using address-based
blocking techniques. Despite the decentralized characteristics of I2P, we
discover that a censor can block more than 95% of peer IP addresses known by a
stable I2P client by operating only 10 routers in the network. This amounts to
severe network impairment: a blocking rate of more than 70% is enough to cause
significant latency in web browsing activities, while blocking more than 90% of
peer IP addresses can make the network unusable. Finally, we discuss the
security consequences of the network being blocked, and directions for
potential approaches to make I2P more resistant to blocking.
|
Toroidal backgrounds for bosonic strings are used to understand target space
duality as a symmetry of string field theory and to study explicitly issues in
background independence. Our starting point is the notion that the string field
coordinates $X(\sigma)$ and the momenta $P(\sigma)$ are background independent
objects whose field algebra is always the same; backgrounds correspond to
inequivalent representations of this algebra. We propose classical string field
solutions relating any two toroidal backgrounds and discuss the space where
these solutions are defined.
String field theories formulated around dual backgrounds are shown to be
related by a homogeneous field redefinition, and are therefore equivalent, if
and only if their string field coupling constants are identical. Using this
discrete equivalence of backgrounds and the classical solutions we find
discrete symmetry transformations of the string field leaving the string action
invariant. These symmetries, which are spontaneously broken for generic
backgrounds, are shown to generate the full group of duality symmetries, and in
general are seen to arise from the string field gauge group.
|
We investigate the problem of optimal transport in the so-called Beckmann
form, i.e. given two Radon measures on a compact set, we seek an optimal flow
field which is a vector valued Radon measure on the same set that describes a
flow between these two measures and minimizes a certain linear cost function.
We consider $L^\alpha$ regularization of the problem, which guarantees
uniqueness and forces the solution to be an integrable function rather than a
Radon measure. This regularization naturally gives rise to a semi-smooth Newton
scheme that can be used to solve the problem numerically. Besides motivating
and developing the numerical scheme, we also include approximation results for
vanishing regularization in the continuous setting.
|
Semi-inclusive deep inelastic scattering of polarized leptons off hadrons
enables one to measure the antisymmetric part of the hadron tensor. For
unpolarized hadrons this piece is odd under time reversal. In deep inelastic
scattering it shows up as a $\langle \sin \phi \rangle$ asymmetry for the
produced hadrons. This asymmetry can be expressed as the product of a
twist-three "hadron $\rightarrow$ quark" distribution function and a time
reversal odd twist-two "quark $\rightarrow$ hadron" fragmentation function.
This fragmentation function can only be measured for nonzero transverse momenta
of the produced hadron.
|
The full non-linear evolution of the tidal instability is studied numerically
in an ellipsoidal fluid domain relevant for planetary cores applications. Our
numerical model, based on a finite element method, is first validated by
reproducing some known analytical results. This model is then used to address
open questions that were up to now inaccessible using theoretical and
experimental approaches. Growth rates and mode selection of the instability are
systematically studied as a function of the aspect ratio of the ellipsoid and
as a function of the inclination of the rotation axis compared to the
deformation plane. We also quantify the saturation amplitude of the flow driven
by the instability and calculate the viscous dissipation that it causes. This
tidal dissipation can be of major importance for some geophysical situations
and we thus derive general scaling laws which are applied to typical planetary
cores.
|
We determine the spin susceptibility $\chi$ in the weak interaction regime of
a tunable, high quality, two-dimensional electron system in a GaAs/AlGaAs
heterostructure. The band structure effects, modifying mass and g-factor, are
carefully taken into accounts since they become appreciable for the large
electron densities of the weak interaction regime. When properly normalized,
$\chi$ decreases monotonically from 3 to 1.1 with increasing density over our
experimental range from 0.1 to $4\times10^{11} cm^{-2}$. In the high density
limit, $\chi$ tends correctly towards $\chi\to 1$ and compare well with recent
theory.
|
Automatic literature review generation is one of the most challenging tasks
in natural language processing. Although large language models have tackled
literature review generation, the absence of large-scale datasets has been a
stumbling block to progress. We release SciReviewGen, consisting of over
10,000 literature reviews and 690,000 papers cited in the reviews. Based on the
dataset, we evaluate recent transformer-based summarization models on the
literature review generation task, including Fusion-in-Decoder extended for
literature review generation. Human evaluation results show that some
machine-generated summaries are comparable to human-written reviews, while
revealing the challenges of automatic literature review generation such as
hallucinations and a lack of detailed information. Our dataset and code are
available at https://github.com/tetsu9923/SciReviewGen.
|
We find two new hook length formulas for binary trees. The particularity of
our formulas is that the hook length $h_v$ appears as an exponent.
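For reference, the hook length $h_v$ of a vertex $v$ in a binary tree is the number of vertices in the subtree rooted at $v$. A minimal sketch that computes all hook lengths (the nested-tuple tree encoding is our own convention, not the paper's):

```python
def hook_lengths(tree, acc=None):
    # tree is None (empty) or a pair (left, right) of subtrees;
    # returns (subtree size, list of hook lengths in post-order),
    # where the hook length of a vertex is the size of its rooted subtree
    if acc is None:
        acc = []
    if tree is None:
        return 0, acc
    left, right = tree
    n_left, _ = hook_lengths(left, acc)
    n_right, _ = hook_lengths(right, acc)
    h = 1 + n_left + n_right
    acc.append(h)
    return h, acc

t = ((None, None), (None, None))  # root with two leaf children
size, hooks = hook_lengths(t)     # hooks (post-order): [1, 1, 3]
```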
|
We propose a simple criterion of compactness in the space of fuzzy numbers
over a finite-dimensional space, and we apply it to a class of fuzzy integral
equations under optimal conditions.
|
Object-centric representations are a promising path toward more systematic
generalization by providing flexible abstractions upon which compositional
world models can be built. Recent work on simple 2D and 3D datasets has shown
that models with object-centric inductive biases can learn to segment and
represent meaningful objects from the statistical structure of the data alone
without the need for any supervision. However, such fully-unsupervised methods
still fail to scale to diverse realistic data, despite the use of increasingly
complex inductive biases such as priors for the size of objects or the 3D
geometry of the scene. In this paper, we instead take a weakly-supervised
approach and focus on how 1) using the temporal dynamics of video data in the
form of optical flow and 2) conditioning the model on simple object location
cues can be used to enable segmenting and tracking objects in significantly
more realistic synthetic data. We introduce a sequential extension to Slot
Attention which we train to predict optical flow for realistic looking
synthetic scenes and show that conditioning the initial state of this model on
a small set of hints, such as center of mass of objects in the first frame, is
sufficient to significantly improve instance segmentation. These benefits
generalize beyond the training distribution to novel objects, novel
backgrounds, and to longer video sequences. We also find that such
initial-state-conditioning can be used during inference as a flexible interface
to query the model for specific objects or parts of objects, which could pave
the way for a range of weakly-supervised approaches and allow more effective
interaction with trained models.
|
The large-scale structure of the ionization field during the epoch of
reionization (EoR) can be modeled by the excursion set theory. While the growth
of ionized regions during the early stage are described by the "bubble model",
the shrinking process of neutral regions after the percolation of the ionized
region calls for an "island model". An excursion set based analytical model and
a semi-numerical code (islandFAST) have been developed. The ionizing background
and the bubbles inside the islands are also included in the treatment. With two
kinds of absorbers of ionizing photons, i.e. the large-scale under-dense
neutral islands and the small-scale over-dense clumps, the ionizing background
is self-consistently evolved in the model.
|
The phase diffusion of the order parameter of trapped Bose-Einstein
condensates at temperatures large compared to the mean trap frequency is
determined, which gives the fundamental limit of the line-width of an atom
laser. In addition a prediction of the correlation time of the number
fluctuations in the condensate is made and related to the phase diffusion via
the fluctuation-dissipation relation.
|
The Eddington-inspired-Born-Infeld (EiBI) theory has recently been
resurrected. Such a theory is characterized by being equivalent to Einstein
theory in vacuum but differing from it in the presence of matter. One of the
virtues of the theory is to avoid the Big Bang singularity for a radiation
filled universe. In this paper, we analyze singularity avoidance in this kind
of model. More precisely, we study the behavior of a homogeneous and
isotropic universe filled with phantom energy in addition to the dark and
baryonic matter. Unlike the Big Bang singularity, which can be avoided in this
kind of model through a bounce or a loitering effect on the physical metric, we
find that the Big Rip singularity is unavoidable in the EiBI phantom model,
although it can be postponed to a slightly later cosmic time than the same
singularity in models based on standard general relativity with the same
matter content described above.
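To make the finiteness of the Big Rip time concrete, the following hedged Python sketch computes the comoving-time interval to the Rip for the standard-GR baseline the EiBI model is compared against: a flat universe dominated by phantom energy with equation of state w < -1, in units of 1/H0. The value of w and the integration grid are illustrative assumptions, not parameters from the paper.

```python
import math

def time_to_big_rip(w=-1.2, n=200000, a_max=1e12):
    """Cosmic-time interval from a = 1 to a = a_max for a flat,
    phantom-dominated universe in standard GR, in units of 1/H0.
    With H(a) = H0 * a**(-1.5 * (1 + w)) and w < -1, the integral
    t = int da / (a * H(a)) converges as a -> infinity, so the scale
    factor diverges (the Big Rip) at a finite time.
    All parameter values are illustrative."""
    # Substitute u = ln(a); the integrand becomes exp(1.5 * (1 + w) * u),
    # which decays for w < -1. Integrate by the trapezoid rule.
    u_max = math.log(a_max)
    h = u_max / n
    total = 0.0
    for i in range(n + 1):
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(1.5 * (1 + w) * i * h)
    return total * h

t_rip = time_to_big_rip()
```

For w = -1.2 the integral has the closed form 1 / (0.3 * H0) = 10/3 in these units, providing a check on the numerical result.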
|
We employ radio-frequency spectroscopy to investigate a polarized
spin-mixture of ultracold ${}^6$Li atoms close to a broad Feshbach scattering
resonance. Focusing on the regime of strong repulsive interactions, we observe
well-defined coherent quasiparticles even for unitarity-limited interactions.
We characterize the many-body system by extracting the key properties of
repulsive Fermi polarons: the energy $E_+$, the effective mass $m^*$, the
residue $Z$ and the decay rate $\Gamma$. Above a critical interaction, $E_+$ is
found to exceed the Fermi energy of the bath while $m^*$ diverges and even
turns negative, thereby indicating that the repulsive Fermi liquid state
becomes energetically and thermodynamically unstable.
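The quasiparticle properties quoted above are read off from a spectral peak. The following hedged Python sketch does this for a synthetic Lorentzian lineshape: the weight Z, energy E+, and width Gamma used below are made-up placeholders (in units of the Fermi energy), not measured 6Li values, and the peak extraction is a crude stand-in for a proper fit.

```python
import numpy as np

def quasiparticle_peak(omega, e_plus=0.4, gamma=0.1, z=0.8):
    """Illustrative RF lineshape: a Lorentzian quasiparticle peak of
    weight z at energy e_plus with half-width gamma. Placeholder values."""
    return z * (gamma / np.pi) / ((omega - e_plus) ** 2 + gamma ** 2)

def extract_peak(omega, spectrum):
    """Read off the peak position and full width at half maximum directly
    from a sampled spectrum -- a crude stand-in for a nonlinear fit."""
    i_max = np.argmax(spectrum)
    above = omega[spectrum >= spectrum[i_max] / 2.0]
    return omega[i_max], above[-1] - above[0]  # (E+, FWHM ~ 2 * gamma)

omega = np.linspace(-1.0, 2.0, 2001)
e_est, fwhm = extract_peak(omega, quasiparticle_peak(omega))
```

For a Lorentzian the full width at half maximum equals 2 * gamma, so the extracted values should recover the inputs up to the grid resolution.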
|
Measurements in the quantum domain can exceed what is possible classically.
This concerns fundamental questions about the nature of the measurement
process itself, as well as applications, such as the use of measurements as
building blocks of quantum information processing protocols. In this paper we
explore the notion
of entanglement for detection devices in theory and experiment. A method is
devised that allows one to determine nonlocal quantum coherence of
positive-operator-valued measures via negative contributions in a joint
distribution that fully describes the measurement apparatus under study. This
approach is then applied to experimental data for detectors that ideally
project onto Bell states. In particular, we describe the reconstruction of the
aforementioned entanglement quasidistributions from raw data and compare the
resulting negativities with those expected from theory. Therefore, our method
provides a versatile toolbox for analyzing measurements regarding their
quantum-correlation features for quantum science and quantum technology.
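The negativity criterion described above reduces, once a joint quasidistribution is in hand, to summing its negative contributions. The following hedged Python sketch shows that final step on a toy 2x2 quasidistribution; the matrix is a made-up example, not reconstructed experimental data.

```python
import numpy as np

def total_negativity(quasi):
    """Sum of the absolute values of the negative entries of a joint
    quasidistribution -- a simple witness of nonclassical features
    such as detector entanglement. Zero for any true probability
    distribution."""
    q = np.asarray(quasi, dtype=float)
    return -q[q < 0.0].sum()

# Toy normalized quasidistribution (entries sum to 1) with two
# negative contributions; illustrative values only.
toy = np.array([[0.6, -0.1],
                [-0.1, 0.6]])
neg = total_negativity(toy)
```

Any genuine (classical) joint probability distribution gives zero here, so a strictly positive value flags the nonlocal quantum coherence of the measurement.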
|
We present an operator approach to deriving Mehler's formula and the Rogers
formula for the bivariate Rogers-Szeg\"{o} polynomials $h_n(x,y|q)$. The proof
of Mehler's formula can be considered as a new approach to the nonsymmetric
Poisson kernel formula for the continuous big $q$-Hermite polynomials
$H_n(x;a|q)$ due to Askey, Rahman and Suslov. Mehler's formula for $h_n(x,y|q)$
involves a ${}_3\phi_2$ sum and the Rogers formula involves a ${}_2\phi_1$ sum.
The proofs of these results are based on parameter augmentation with respect to
the $q$-exponential operator and the homogeneous $q$-shift operator in two
variables. By extending recent results on the Rogers-Szeg\"{o} polynomials
$h_n(x|q)$ due to Hou, Lascoux and Mu, we obtain another Rogers-type formula
for $h_n(x,y|q)$. Finally, we give a change of base formula for $H_n(x;a|q)$
which can be used to evaluate some integrals by using the Askey-Wilson
integral.
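For readers wanting to experiment with these polynomials numerically, the following Python sketch computes Gaussian (q-)binomial coefficients and evaluates $h_n(x,y|q)$ under one common normalization, $h_n(x,y|q)=\sum_k \binom{n}{k}_q x^k y^{n-k}$; papers differ in conventions, so treat this normalization as an assumption.

```python
def q_binomial(n, k, q):
    """Gaussian (q-)binomial coefficient [n choose k]_q, computed as the
    product prod_{i=0}^{k-1} (1 - q^(n-i)) / (1 - q^(i+1)); valid for
    float q with |q| < 1 (and any q avoiding division by zero)."""
    if k < 0 or k > n:
        return 0.0
    result = 1.0
    for i in range(k):
        result *= (1 - q ** (n - i)) / (1 - q ** (i + 1))
    return result

def h(n, x, y, q):
    """Bivariate Rogers-Szego polynomial h_n(x, y | q) under the assumed
    normalization h_n = sum_k [n choose k]_q * x**k * y**(n-k)."""
    return sum(q_binomial(n, k, q) * x ** k * y ** (n - k)
               for k in range(n + 1))
```

At q -> 1 the coefficients reduce to ordinary binomials and h_n to (x + y)^n, which gives a quick sanity check; for example h_2(x, y | q) = y^2 + (1 + q) x y + x^2.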
|