The strategy of band convergence of multi-valley conduction bands or
multi-peak valence bands has been widely used to search for or improve
thermoelectric materials. However, the phonon-assisted intervalley scattering
that accompanies multiple band degeneracy is usually neglected in the
thermoelectric community. In this work, we investigate the (thermo)electric
properties of non-polar monolayer $\beta$- and $\alpha$-antimonene considering
full mode- and momentum-resolved electron-phonon interactions. We also
thoroughly analyze the selection rules on electron-phonon matrix elements
using group-theory arguments. Our calculations reveal strong intervalley
scattering between the nearly degenerate valley states in both $\beta$- and
$\alpha$-antimonene, and the commonly used deformation potential
approximation, which neglects the dominant intervalley scattering, gives
inaccurate estimates of the electron-phonon scattering and thermoelectric
transport properties. By considering full electron-phonon interactions within
the rigid-band approximation, we find that the maximum value of the
thermoelectric figure of merit $zT$ at room temperature is reduced to 0.37 in
$\beta$-antimonene, a factor of 5.7 lower than the value predicted with the
constant relaxation-time approximation. Our work not only provides an accurate
prediction of the thermoelectric performance of antimonenes, revealing the key
role of intervalley scattering in determining the electronic part of $zT$, but
also showcases a computational framework for thermoelectric materials.
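For context, the deformation potential approximation criticized above is
commonly applied in its Bardeen-Shockley form for 2D materials; a standard
version of the acoustic-phonon-limited mobility (written in conventional
notation, not taken from this paper) reads
$$\mu_{2D} = \frac{e\hbar^{3}C_{2D}}{k_{B}T\,m^{*}m_{d}\,E_{1}^{2}},$$
where $C_{2D}$ is the 2D elastic modulus, $m^{*}$ and $m_{d}$ are the
transport and density-of-states effective masses, and $E_{1}$ is the
deformation potential constant. Because only intravalley acoustic scattering
enters, any dominant intervalley channel is missed by construction.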
|
We study photonic, neutrino and charged particle signatures from slow decays
of gravitino dark matter in supersymmetric theories where R-parity is
explicitly broken by trilinear operators. Photons and (anti-)fermions from loop
and tree-level processes give rise to spectra with distinct features, which, if
observed, can give crucial input on the possible mass of the gravitino and the
magnitude and flavour structure of R-violating operators. Within this
framework, we make detailed comparisons of the theoretical predictions to the
recent experimental data from PAMELA, ATIC and Fermi LAT.
|
Mathematical models and computer algorithms are developed to calculate
dynamic stress concentration and fracture wave propagation in a reinforced
composite sheet. The composite consists of a regular system of alternating
extensible fibers and pliable adhesive layers. In computer simulations, we
derive difference algorithms that prevent or minimize the parasitic
distortions caused by mesh dispersion, and we obtain accurate numerical
solutions of the plane fracture problem for a sheet pre-stretched along the
fibers. The interactive effects of microscale dynamic deformation and multiple
damage in the fibers and adhesive are studied. Two engineering models of the
composite are considered: the first assumes that the adhesive can be
represented by inertialess bonds of constant stiffness, while in the second
the adhesive is described as an inertial medium sustaining shear stresses. A
comparison of the results allows us to evaluate the capabilities of the two
models in analyzing wave and fracture patterns.
|
The X-ray spectra of accreting stellar-mass black hole systems exhibit
spectral features due to reflection, especially broad iron K alpha emission
lines. We investigate the reflection by the accretion disc that can be expected
in the high/soft state of such a system. First, we perform a self-consistent
calculation of the reflection that results from illumination of a hot, inner
portion of the disc with its atmosphere in hydrostatic equilibrium. Then we
present reflection spectra for a range of illumination strengths and disc
temperatures under the assumption of a constant-density atmosphere. Reflection
by a hot accretion disc differs in important ways from that of a much cooler
disc, such as that expected in an active galactic nucleus.
|
Artificial intelligence has not yet revolutionized the design of materials
and molecules. In this perspective, we identify four barriers preventing the
integration of atomistic deep learning, molecular science, and high-performance
computing. We outline focused research efforts to address the opportunities
presented by these challenges.
|
One of the first widespread uses of multi-user multiple-input multiple-output
(MU-MIMO) is in 5G networks, where each base station has an advanced antenna
system (AAS) that is connected to the baseband unit (BBU) with a
capacity-constrained fronthaul. In the AAS configuration, multiple passive
antenna elements and radio units are integrated into a single box. This paper
considers precoded downlink transmission over a single-cell MU-MIMO system. We
study optimized linear precoding for AAS with a limited-capacity fronthaul,
which requires the precoding matrix to be quantized. We propose a new precoding
design that is aware of the fronthaul quantization and minimizes the
mean-squared error at the receiver side. We compute the precoding matrix using
a sphere decoding (SD) approach. We also propose a heuristic low-complexity
approach to quantized precoding. This heuristic is computationally efficient
enough for massive MIMO systems. The numerical results show that our proposed
precoding significantly outperforms quantization-unaware precoding and other
previous approaches in terms of the sum rate. The performance loss for our
heuristic method compared to quantization-aware precoding is insignificant
considering the complexity reduction, which makes the heuristic method feasible
for real-time applications. We consider both perfect and imperfect channel
state information.
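To illustrate the gap between quantization-unaware and quantization-aware
precoding, here is a minimal numpy sketch: an MMSE-type precoder is first
rounded entry-wise to a small fronthaul codebook, then refined by coordinate
descent on the receiver-side MSE. The codebook, problem sizes, and the
coordinate-descent search are illustrative stand-ins for the paper's
sphere-decoding approach, not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    K, M = 2, 4  # users, antennas (toy sizes)
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    # Unquantized regularized zero-forcing precoder (up to a power scaling).
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + 0.1 * np.eye(K))

    # Limited-capacity fronthaul: entries restricted to a small codebook,
    # here 4 phases x 2 amplitudes (an assumed, illustrative codebook).
    codebook = np.concatenate([r * np.exp(1j * np.pi / 2 * np.arange(4))
                               for r in (0.5, 1.0)])

    def mse(Wc):
        # Receiver-side mean-squared-error proxy: distance of H @ W from identity.
        return np.linalg.norm(np.eye(K) - H @ Wc, "fro") ** 2

    # Quantization-unaware baseline: round each entry independently.
    Wq = np.array([[codebook[np.argmin(np.abs(codebook - w))] for w in row]
                   for row in W])
    print("unaware MSE:", mse(Wq))

    # Quantization-aware refinement: coordinate descent over codebook entries.
    for _ in range(5):  # a few sweeps suffice at this size
        for i in range(M):
            for j in range(K):
                trials = []
                for c in codebook:
                    Wt = Wq.copy()
                    Wt[i, j] = c
                    trials.append((mse(Wt), c))
                Wq[i, j] = min(trials, key=lambda t: t[0])[1]
    print("aware MSE:  ", mse(Wq))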
|
After the precise observations of the Cosmic Microwave Background (CMB)
anisotropy power spectrum, attention is now being focused on the higher order
statistics of the CMB anisotropies. Since linear evolution preserves the
statistical properties of the initial conditions, observed non-Gaussianity of
the CMB will mirror primordial non-Gaussianity. Single field slow-roll
inflation robustly predicts negligible non-Gaussianity so an indication of
non-Gaussianity will suggest alternative scenarios need to be considered. In
this paper we calculate the information on primordial non-Gaussianity encoded
in the polarization of the CMB. After deriving the optimal weights for a cubic
estimator we evaluate the Signal-to-Noise ratio of the estimator for WMAP,
Planck and an ideal cosmic variance limited experiment. We find that when the
experiment can observe CMB polarization with good sensitivity, the constraint
on primordial non-Gaussianity tightens by roughly a factor of two. We also
test the weakly non-Gaussian assumption used to derive the optimal weight
factor by calculating the degradation factor produced by the gravitational
lensing induced connected four-point function. The physical scales in the
radiative transfer functions are largely irrelevant for the constraints on the
primordial non-Gaussianity. We show that the total (S/N)^2 is simply
proportional to the number of observed pixels on the sky.
|
We present a Bayesian inference analysis of the Markevitch (1998) and Allen &
Fabian (1998) cooling flow corrected X-ray cluster temperature catalogs that
constrains the slope and the evolution of the empirical X-ray cluster
luminosity-temperature (L-T) relation. We find that for the luminosity range
10^44.5 erg s^-1 < L_bol < 10^46.5 erg s^-1 and the redshift range z < 0.5,
L_bol is proportional to T^2.80(+0.15/-0.15)(1+z)^(0.91-1.12q_0)(+0.54/-1.22).
We also determine the L-T relation that one should use when fitting the Press-
Schechter mass function to X-ray cluster luminosity catalogs such as the
Einstein Medium Sensitivity Survey (EMSS) and the Southern Serendipitous High-
Redshift Archival ROSAT Catalog (Southern SHARC), for which cooling flow
corrected luminosities are not determined and a universal X-ray cluster
temperature of T = 6 keV is assumed. In this case, L_bol is proportional to
T^2.65(+0.23/-0.20)(1+z)^(0.42-1.26q_0)(+0.75/-0.83) for the same luminosity
and redshift ranges.
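As a usage note, the fitted relation can be evaluated directly; in the sketch
below the normalization L6 (the luminosity of a T = 6 keV cluster at z = 0) is
left as a free constant since it is not quoted in this abstract:

    # Best-fit scaling: L_bol proportional to T^2.80 (1+z)^(0.91 - 1.12 q0).
    def l_bol(T_keV, z, q0=0.5, L6=1.0):
        # L6 is the (unquoted) normalization at T = 6 keV, z = 0.
        return L6 * (T_keV / 6.0) ** 2.80 * (1.0 + z) ** (0.91 - 1.12 * q0)

    # Example: luminosity of an 8 keV cluster at z = 0.3, relative to 6 keV at z = 0.
    print(l_bol(8.0, 0.3) / l_bol(6.0, 0.0))  # about 2.5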
|
Text data are an important source of detailed information about social and
political events. Automated systems parse large volumes of text data to infer
or extract structured information that describes actors, actions, dates, times,
and locations. One of these sub-tasks is geocoding: predicting the geographic
coordinates associated with events or locations described by a given text. We
present an end-to-end probabilistic model for geocoding text data.
Additionally, we collect a novel data set for evaluating the performance of
geocoding systems. We compare the model-based solution, called ELECTRo-map, to
the current state-of-the-art open source system for geocoding texts for event
data. Finally, we discuss the benefits of end-to-end model-based geocoding,
including principled uncertainty estimation and the ability of these models to
leverage contextual information.
|
Understanding the nature of the excitation spectrum in quantum spin liquids
is of fundamental importance, in particular for the experimental detection of
candidate materials. However, current theoretical and numerical techniques have
limited capabilities, especially in obtaining the dynamical structure factor,
which gives a crucial characterization of the ultimate nature of the quantum
state and may be directly assessed by inelastic neutron scattering. In this
work, we investigate the low-energy properties of the $S=1/2$ Heisenberg model
on the triangular lattice, including both nearest-neighbor $J_1$ and
next-nearest-neighbor $J_2$ super-exchanges, by a dynamical variational Monte
Carlo approach that allows accurate results on spin models. For $J_2=0$, our
calculations are compatible with the existence of a well-defined magnon in the
whole Brillouin zone, with gapless excitations at $K$ points (i.e., at the
corners of the Brillouin zone). The strong renormalization of the magnon branch
(also including roton-like minima around the $M$ points, i.e., midpoints of the
border zone) is described by our Gutzwiller-projected state, where Abrikosov
fermions are subject to a non-trivial magnetic $\pi$-flux threading half of the
triangular plaquettes. When increasing the frustrating ratio $J_2/J_1$, we
detect a progressive softening of the magnon branch at $M$, which eventually
becomes gapless within the spin-liquid phase. This feature is captured by the
band structure of the unprojected wave function (with $2$ Dirac points for each
spin component). In addition, we observe an intense signal at low energies
around the $K$ points, which cannot be understood within the unprojected
picture and emerges only when the Gutzwiller projection is considered,
suggesting the relevance of gauge fields for the low-energy physics of spin
liquids.
|
The behavior of LuLiF4 scheelite (I41/a, Z = 4) under hydrostatic pressure was
investigated by means of first-principles calculations. A ferroelastic phase
transition from the tetragonal structure of LuLiF4 to a fergusonite structure
(C12/c1, Z = 4) was found at 10.5 GPa and determined to be of second order.
|
Central limit theorems are established for the sum, over a spatial region, of
observations from a linear process on a $d$-dimensional lattice. This region
need not be rectangular, but can be irregularly-shaped. Separate results are
established for the cases of positive strong dependence, short range
dependence, and negative dependence. We provide approximations to asymptotic
variances that reveal differential rates of convergence under the three types
of dependence. Further, in contrast to the one dimensional (i.e., the time
series) case, it is shown that the form of the asymptotic variance in
dimensions $d>1$ critically depends on the geometry of the sampling region
under positive strong dependence and under negative dependence and that there
can be non-trivial edge-effects under negative dependence for $d>1$. Precise
conditions for the presence of edge effects are also given.
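For orientation, a minimal version of the setting (notation chosen here for
illustration) is a linear random field and its regional sum
$$X(\mathbf{s})=\sum_{\mathbf{j}\in\mathbb{Z}^{d}} a_{\mathbf{j}}\,\varepsilon(\mathbf{s}-\mathbf{j}),\qquad S_{n}=\sum_{\mathbf{s}\in R_{n}\cap\mathbb{Z}^{d}} X(\mathbf{s}),$$
with i.i.d. innovations $\varepsilon(\cdot)$ and a possibly irregularly-shaped
sampling region $R_n$; roughly speaking, the three regimes correspond to
coefficients $a_{\mathbf{j}}$ that are non-summable (positive strong
dependence), absolutely summable with $\sum_{\mathbf{j}}a_{\mathbf{j}}\neq 0$
(short-range dependence), or summing to zero (negative dependence).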
|
We consider a phase field crystal modeling approach for binary mixtures of
interacting active and passive particles. The approach makes it possible to
describe generic properties of such systems within a continuum model. We validate the
approach by reproducing experimental results, as well as results obtained with
agent-based simulations, for the whole spectrum from highly dilute suspensions
of passive particles to interacting active particles in a dense background of
passive particles.
|
In this study, the effects of sulfur substitution on the structural,
mechanical, electronic, optical, and thermodynamic properties of RbTaO3-xSx
have been investigated using the WIEN2k code in the framework of density
functional theory (DFT). The cubic phase of RbTaO3 transforms to tetragonal
for RbTaO2S and RbTaOS2, and the latter transforms back to a cubic phase with
further sulfur addition in RbTaS3. The results show that substituting S for O
anions in RbTaO3 effectively decreases the band gap, which takes values of
2.717 eV, 1.438 eV, 0.286 eV, and 0.103 eV for the RbTaO3, RbTaO2S, RbTaOS2,
and RbTaS3 compounds, respectively. The optical constants, such as the
dielectric constants, refractive index, absorption coefficient,
photoconductivity, reflectivity, and loss function, have been calculated and
analyzed. The elastic constants and moduli, and their anisotropic nature, were
also investigated. Finally, the Debye temperature, thermal conductivity,
melting temperature, specific heat capacities, and thermal expansion
coefficients were computed and analyzed using established formalisms. The
reduced band gap (1.438 eV) and high absorption coefficient (~10^6 cm^-1) of
RbTaO2S make it suitable for solar cell applications and other visible-light
devices. The reduction of the band gap and phonon thermal conductivity owing
to S substitution is expected to enhance the thermoelectric performance of the
S-containing phases.
|
Flexible microfluidics have found extensive utility in the biological and
biomedical fields. A leading substrate material for compliant devices is
polydimethylsiloxane (PDMS). Despite its many advantages, PDMS is inherently
hydrophobic and consequently its use in passive (pumpless) microfluidics
becomes problematic. To this end, many physical and chemical modifications have
been introduced to render PDMS hydrophilic, ranging from amphiphilic molecule
additions to surface plasma treatments. However, when transitioning from lab
benchtop to realized medical devices, these modifications must exhibit
long-term stability. Unfortunately, these modifications are often presented but
their mechanisms and long-term stability are not studied in detail. We have
investigated an array of PDMS modifications, utilizing contact angle goniometry
to study surface energy over a 30-day evolution period. Samples were stored in
air and water, and Fourier Transform Infrared-Attenuated Total Reflectance
(FTIR-ATR) analysis was used to confirm surface functional group uniformity. We
have identified preferred modification techniques for long-lasting PDMS devices
and characterized often overlooked material stability.
|
We study the embedding of D7 brane probes in five geometries that are
deformations of AdS_5 x S^5. Each case corresponds to the inclusion of quark
fields in a dual gauge theory where we are interested in investigating whether
chiral symmetry breaking occurs. We use a supersymmetric geometry describing an
N=2 theory on its moduli space and a dilaton driven non-supersymmetric flow to
establish criteria for a chiral symmetry breaking embedding. We develop a
simple spherical D7 embedding that tests the repulsion of the core of the
geometry and signals dynamical symmetry breaking. We then use this tool in more
complicated geometries to show that an N=2* theory and a non-supersymmetric
theory with scalar masses do not induce a chiral condensate. Finally we provide
evidence that the Yang Mills* geometry does.
|
Edge computing has been recently introduced as a way to bring computational
capabilities closer to end users of modern network-based services, in order to
support existing and future delay-sensitive applications by effectively
addressing the high propagation delay issue that affects cloud computing.
However, the problem of efficiently and fairly managing the system resources
presents particular challenges due to the limited capacity of both edge nodes
and wireless access networks, as well as the heterogeneity of resources and
services' requirements. To this end, we propose a techno-economic market where
service providers act as buyers, securing both radio and computing resources
for the execution of their associated end users' jobs, while being constrained
by a budget limit. We design an allocation mechanism that employs convex
programming in order to find the unique market equilibrium point that maximizes
fairness, while making sure that all buyers receive their preferred resource
bundle. Additionally, we derive theoretical properties that confirm how the
market equilibrium approach strikes a balance between fairness and efficiency.
We also propose alternative allocation mechanisms and give a comparison with
the market-based mechanism. Finally, we conduct simulations in order to
numerically analyze and compare the performance of the mechanisms and confirm
the theoretical properties of the market model.
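The fairness-maximizing market equilibrium described here is in the spirit of
an Eisenberg-Gale convex program; the sketch below (toy sizes and made-up
linear utilities, not the paper's model) shows how such an equilibrium
allocation is obtained with off-the-shelf convex programming:

    import cvxpy as cp
    import numpy as np

    n_buyers, n_resources = 3, 2           # service providers; radio and computing
    B = np.array([1.0, 2.0, 1.0])          # budgets
    V = np.array([[1.0, 0.5],              # linear valuations per unit of resource
                  [0.4, 1.2],
                  [0.8, 0.8]])
    cap = np.array([1.0, 1.0])             # resource capacities

    X = cp.Variable((n_buyers, n_resources), nonneg=True)
    u = cp.sum(cp.multiply(V, X), axis=1)              # each buyer's utility
    objective = cp.Maximize(cp.sum(cp.multiply(B, cp.log(u))))  # Eisenberg-Gale
    prob = cp.Problem(objective, [cp.sum(X, axis=0) <= cap])
    prob.solve()
    print(X.value)  # equilibrium bundles; constraint duals act as market prices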
|
The interaction between dark matter and dark energy has become a focal point
in contemporary cosmological research, particularly in addressing current
cosmological tensions. This study explores the cubic Galileon model's
interaction with dark matter, where the interaction potential in the dark
sector is proportional to the dark energy density of the Galileon field. By
employing dimensionless variables, we transform the field equations into an
autonomous dynamical system. We calculate the critical points of the
corresponding autonomous systems and demonstrate the existence of a stable de
Sitter epoch. Our investigation proceeds in two phases. First, we conduct a
detailed analysis of the exact interacting cubic Galileon (ICG) model, derived
from the precise solution of the equations of motion. Second, we explore an
approximate tracker solution, labeled TICG, assuming a small coupling parameter
between dark matter and dark energy. We evaluate the evolution of these models
using data from two experiments, aiming to resolve the tensions surrounding
$H_0$ and $S_8$. The analysis of the TICG model indicates a preference for a
phantom regime and provides a negative coupling parameter in the dark sector at
a $68\%$ confidence level. This model also shows that the current tensions
regarding $H_0$ and $S_8$ are alleviated. Conversely, the ICG model, despite
its preference for the phantom regime, is plagued by an excess in today's
matter density and a higher expansion rate, easing only the $H_0$ tension.
|
We establish square function estimates for integral operators on uniformly
rectifiable sets by proving a local $T(b)$ theorem and applying it to show that
such estimates are stable under the so-called big pieces functor. More
generally, we consider integral operators associated with Ahlfors-David regular
sets of arbitrary codimension in ambient quasi-metric spaces. The local $T(b)$
theorem is then used to establish an inductive scheme in which square function
estimates on so-called big pieces of an Ahlfors-David regular set are proved to
be sufficient for square function estimates to hold on the entire set.
Extrapolation results for $L^p$ and Hardy space versions of these estimates are
also established. Moreover, we prove square function estimates for integral
operators associated with variable coefficient kernels, including the Schwartz
kernels of pseudodifferential operators acting between vector bundles on
subdomains with uniformly rectifiable boundaries on manifolds.
|
Theories including a collapse mechanism were proposed several years ago. They
are based on a modification of standard quantum mechanics in which nonlinear
and stochastic terms are added to the evolution equation. Their principal
merit derives from the fact that they are mathematically precise schemes
accounting, on the basis of a unique universal dynamical principle, both for
the quantum behavior of microscopic systems and for the reduction associated
with measurement processes and the classical behavior of macroscopic objects.
Since such theories qualify not as new interpretations but as modifications of
the standard theory, they can, in principle, be tested against quantum
mechanics. Recently, various investigations identifying possible crucial tests
have been discussed. In spite of the extreme difficulty of performing such
tests, it seems that recent technological developments at least allow precise
limits to be put on the parameters characterizing the modifications of the
evolution equation. Here we briefly mention some of the recent investigations
in this direction, and then concentrate our attention on the way in which
collapse theories account for definite perceptual processes. The differences
between reductions induced by perceptions and those related to measurement
procedures by means of standard macroscopic devices are discussed. On this
basis, we suggest a precise experimental test of collapse theories involving
conscious observers. By discussing a toy model in detail, we make it plausible
that the modified dynamics can give rise to quite small but systematic errors
in the visual perceptual process.
|
Interactions between dark matter and dark energy, allowing both conformal and
disformal couplings, are studied in detail. We discuss the background
evolution, anisotropies in the cosmic microwave background and large scale
structures. One of our main findings is that a large conformal coupling is not
necessarily disallowed in the presence of a general disformal term. On the
other hand, we find that negative disformal couplings very often lead to
instabilities in the scalar field. Studying the background evolution and linear
perturbations only, our results show that it is observationally challenging to
disentangle disformal from purely conformal couplings.
|
Encryption techniques demonstrate a great deal of security when implemented
in an optical system (such as holography) due to the inherent physical
properties of light and the precision it demands. However, such systems have
been shown to be vulnerable in digital implementations under various
cryptanalysis attacks. One of the primary reasons for this is the predictable
nature of the security keys (i.e., simulated random keys) used in the
encryption process. To alleviate this, in this work we present a Physically
Unclonable Function (PUF) for producing a robust security key for digital
encryption systems. Specifically, a correlation function of the scattered
perfect optical vortex (POV) beams is utilized to generate the encryption
keys. To the best of our knowledge, this is the first report on properly
utilizing scattered POV beams in an optical encryption system. To validate the
generated key, one of the standard optical encryption systems, i.e., Double
Random Phase Encoding, is adopted. Experimental and simulation results
validate that the proposed key generation method is an effective alternative
to digital keys.
|
We review recent results on the study of the isoperimetric problem on
Riemannian manifolds with Ricci lower bounds. We focus on the validity of sharp
second order differential inequalities satisfied by the isoperimetric profile
of possibly noncompact Riemannian manifolds with Ricci lower bounds. We give a
self-contained overview of the methods employed for the proof of such a result,
which exploit modern tools and ideas from nonsmooth geometry. The latter
methods are needed for achieving the result even in the smooth setting. Next,
we show applications of the differential inequalities of the isoperimetric
profile, providing simplified proofs of: the sharp and rigid isoperimetric
inequality on manifolds with nonnegative Ricci and Euclidean volume growth,
existence of isoperimetric sets for large volumes on manifolds with nonnegative
Ricci and Euclidean volume growth, the classical L\'{e}vy-Gromov isoperimetric
inequality. On the way, we discuss relations of these results and methods with
the existing literature, pointing out several open problems.
|
In this paper we will define an invariant $mc_{\infty}(f)$ of maps $f:X
\rightarrow Y_{\mathbb{Q}}$ between a finite CW-complex and a rational space
$Y_{\mathbb{Q}}$. We prove that this invariant is complete, i.e.
$mc_{\infty}(f)=mc_{\infty}(g)$ if and only if $f$ and $g$ are homotopic. We
will also construct an $L_{\infty}$-model for the based mapping space
$Map_*(X,Y_{\mathbb{Q}})$ from a $C_{\infty}$-coalgebra and an
$L_{\infty}$-algebra.
|
Sr$_{3}$Cr$_{2}$O$_{8}$ consists of a lattice of spin-1/2 Cr$^{5+}$ ions,
which form hexagonal bilayers and which are paired into dimers by the dominant
antiferromagnetic intrabilayer coupling. The dimers are coupled
three-dimensionally by frustrated interdimer interactions. A structural
distortion from hexagonal to monoclinic leads to orbital order and lifts the
frustration giving rise to spatially anisotropic exchange interactions. We have
grown large single crystals of Sr$_{3}$Cr$_{2}$O$_{8}$ and have performed DC
susceptibility, high field magnetisation and inelastic neutron scattering
measurements. The neutron scattering experiments reveal three gapped and
dispersive singlet to triplet modes arising from the three twinned domains that
form below the transition thus confirming the picture of orbital ordering. The
exchange interactions are extracted by comparing the data to a Random Phase
Approximation model and the dimer coupling is found to be $J_{0}=5.55$ meV,
while the ratio of interdimer to intradimer exchange constants is
$J'/J_{0}=0.64$. The results are compared to those for other gapped magnets.
|
We discuss a relativistic chiral theory of nuclear matter with $\sigma$ and
$\omega$ exchange using a formulation of the $\sigma$ model in which all the
chiral constraints are automatically fulfilled. We establish a relation between
the nuclear response to the scalar field and the QCD one which includes the
nucleonic parts. It allows a comparison between nuclear and QCD information.
Going beyond the mean field approach we introduce the effects of the pion loops
supplemented by the short-range interaction. The corresponding Landau-Migdal
parameters are taken from spin-isospin physics results. The parameters linked
to the scalar meson exchange are extracted from lattice QCD results. These
inputs lead to a reasonable description of the saturation properties,
illustrating the link between QCD and nuclear physics. We also derive from the
corresponding equation of state the density dependence of the quark condensate
and of the QCD susceptibilities.
|
We prove that rationally essential manifolds with suitably large fundamental
groups do not admit any maps of non-zero degree from products of closed
manifolds of positive dimension. Particular examples include all manifolds of
non-positive sectional curvature of rank one and all irreducible locally
symmetric spaces of non-compact type. For closed manifolds from certain
classes, say non-positively curved ones, or certain surface bundles over
surfaces, we show that they do admit maps of non-zero degree from non-trivial
products if and only if they are virtually diffeomorphic to products.
|
The nonet meson properties are studied in the Nambu-Jona-Lasinio model at
finite temperature and chemical potential using dimensional regularization.
This study leads to a reasonable description that is broadly similar to the
one obtained in the model with the cutoff regularization. However, remarkable
differences between the two regularizations are observed in the behavior of the
chiral phase transition at finite chemical potential.
|
In situ hybridisation gene expression information helps biologists identify
where a gene is expressed. However, the databases that republish the
experimental information are often both incomplete and inconsistent. This paper
examines a system, Argudas, designed to help tackle these issues. Argudas is an
evolution of an existing system, and so that system is reviewed as a means of
both explaining and justifying the behaviour of Argudas. Throughout the
discussion of Argudas a number of issues will be raised including the
appropriateness of argumentation in biology and the challenges faced when
integrating apparently similar online biological databases.
|
The quantum behaviour of electrons in materials lays the foundation for
modern electronic and information technology. Quantum materials with novel
electronic and optical properties have been proposed as the next frontier, but
much remains to be discovered to actualize the promise. Here we report the
first observation of topological quantum properties of chiral crystals in the
RhSi family. We demonstrate that this material hosts a novel phase of matter
exhibiting nearly ideal topological surface properties that emerge as a
consequence of the crystals' structural chirality or handedness. We also
demonstrate that the electrons on the surface of this crystal show a highly
unusual helicoid structure that spirals around two high-symmetry momenta
signalling its topological electronic chirality. Such helicoid Fermi arcs on
the surface experimentally characterize the topological charges of $\pm{2}$,
which arise from the bulk chiral fermions. The existence of bulk high-fold
degenerate fermions is guaranteed by the crystal symmetries; however, in order
to determine the topological charge in the chiral crystals it is essential to
identify and study the helical arc states. Remarkably, these topological
conductors we discovered exhibit helical Fermi arcs which are of length $\pi$,
stretching across the entire Brillouin zone and orders of magnitude larger than
those found in all known Weyl semimetals. Our results demonstrate a novel
electronic topological state of matter on a structurally chiral crystal
featuring helicoid Fermi arc surface states. The exotic electronic chiral
fermion state realised in these materials can be used to detect a quantised
photogalvanic optical response or the chiral magnetic effect and its optical
version in future devices, as described by G. Chang \textit{et al.},
`Topological quantum properties of chiral crystals' Nature Mat. 17, 978-985
(2018).
|
Expansion dynamics of the Universe is one of the important subjects in modern
cosmology. The dark energy equation of state determines this dynamics so that
the Universe is in an accelerating phase. However, the dark matter can also
affect the accelerated expansion of the Universe through its equation of state.
In the present work, we explore the expansion dynamics of the Universe in the
presence of dark matter pressure. In this regard, applying the dark matter
equation of state from the observational data related to the rotational curves
of galaxies, we calculate the evolution of dark matter density. Moreover, the
Hubble parameter, history of scale factor, luminosity distance, and
deceleration parameter are studied while the dark matter pressure is taken into
account. Our results verify that the dark matter pressure leads to higher
values of the Hubble parameter at each redshift, and that the expansion of the
Universe is enhanced by the dark matter pressure.
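Schematically, the effect of a dark matter equation of state
$w_{\rm dm}=p_{\rm dm}/\rho_{\rm dm}$ enters through the standard continuity
and Friedmann equations (written in conventional notation, not quoted from the
paper):
$$\dot{\rho}_{\rm dm}+3H\left(1+w_{\rm dm}\right)\rho_{\rm dm}=0,\qquad H^{2}=\frac{8\pi G}{3}\left(\rho_{\rm dm}+\rho_{\rm de}\right),$$
so a non-zero $w_{\rm dm}$ changes how $\rho_{\rm dm}$ dilutes with the scale
factor and hence the value of $H$ at each redshift.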
|
We present the data release of the Gemini-South GMOS spectroscopy in the
fields of 11 galaxy groups at $0.8<z<1$, within the COSMOS field. This forms
the basis of the Galaxy Environment Evolution Collaboration 2 (GEEC2) project
to study galaxy evolution in haloes with $M\sim 10^{13}M_\odot$ across cosmic
time. The final sample includes $162$ spectroscopically confirmed members with
$R<24.75$, and is $>50$ per cent complete for galaxies within the virial
radius, and with stellar mass $M_{\rm star}>10^{10.3}M_\odot$. Including
galaxies with photometric redshifts we have an effective sample size of $\sim
400$ galaxies within the virial radii of these groups. We present group
velocity dispersions, dynamical and stellar masses. Combining with the GCLASS
sample of more massive clusters at the same redshift we find the total stellar
mass is strongly correlated with the dynamical mass, with
$\log{M_{200}}=1.20\left(\log{M_{\rm star}}-12\right)+14.07$. This stellar
fraction of $\sim 1$ per cent is lower than predicted by some halo occupation
distribution models, though the weak dependence on halo mass is in good
agreement. Most groups have an easily identifiable most massive galaxy (MMG)
near the centre of the galaxy distribution, and we present the spectroscopic
properties and surface brightness fits to these galaxies. The total stellar
mass distribution in the groups, excluding the MMG, compares well with an NFW
profile with concentration $4$, for galaxies beyond $\sim 0.2R_{200}$. This is
more concentrated than the number density distribution, demonstrating that
there is some mass segregation.
|
One of the greatest unsolved issues of the physics of this century is to find
a quantum field theory of gravity. According to a vast amount of literature
unification of quantum field theory and gravitation requires a gauge theory of
gravity which includes torsion and an associated spin field. Various models
including either massive or massless torsion fields have been suggested. We
present arguments for a massive torsion field, where the probable rest mass of
the corresponding spin three gauge boson is the Planck mass.
|
Neural networks trained with backpropagation often struggle to identify
classes that have been observed a small number of times. In applications where
most class labels are rare, such as language modelling, this can become a
performance bottleneck. One potential remedy is to augment the network with a
fast-learning non-parametric model which stores recent activations and class
labels into an external memory. We explore a simplified architecture where we
treat a subset of the model parameters as fast memory stores. This can help
retain information over longer time intervals than a traditional memory, and
does not require additional space or compute. In the case of image
classification, we display faster binding of novel classes on an Omniglot image
curriculum task. We also show improved performance for word-based language
models on news reports (GigaWord), books (Project Gutenberg) and Wikipedia
articles (WikiText-103) --- the latter achieving a state-of-the-art perplexity
of 29.2.
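One plausible minimal reading of "treating a subset of the model parameters as
fast memory stores" is a Hebbian-style rapid update of the output (softmax)
embeddings when a rare class is observed. The numpy sketch below is our
illustrative guess at such a mechanism, not the authors' architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_classes = 16, 100
    W = rng.standard_normal((n_classes, d)) * 0.01  # output embeddings: the fast slots

    def fast_bind(y, h, lr=0.5):
        # Rapidly bind class y to activation h, without a gradient step.
        W[y] = (1 - lr) * W[y] + lr * h

    h = rng.standard_normal(d)                      # activation for a novel class
    print("argmax before:", np.argmax(W @ h))
    fast_bind(42, h)
    print("argmax after :", np.argmax(W @ h))       # now almost surely class 42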
|
Step skew products with interval fibres and a subshift as a base are
considered. It is proved that if the fibre maps are continuous, piecewise
monotone, expanding and surjective and the subshift has the specification
property and a periodic orbit such that the composition of the fibre maps along
this orbit is mixing, then the corresponding step skew product has the
specification property.
|
Classical homogenization theory based on the Hashin-Shtrikman coated
ellipsoids is used to model the changes in the complex valued conductivity (or
admittivity) of a lung during tidal breathing. Here, the lung is modeled as a
two-phase composite material where the alveolar air-filling corresponds to the
inclusion phase. The theory predicts a linear relationship between the real and
the imaginary parts of the change in the complex valued conductivity of a lung
during tidal breathing, and where the loss cotangent of the change is
approximately the same as of the effective background conductivity and hence
easy to estimate. The theory is illustrated with numerical examples, as well as
by using reconstructed Electrical Impedance Tomography (EIT) images based on
clinical data from an ongoing study within the EU-funded CRADL project. The
theory may be potentially useful for improving the imaging algorithms and
clinical evaluations in connection with lung EIT for respiratory management and
monitoring in neonatal intensive care units.
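For reference, in the isotropic special case of spherical inclusions the
Hashin-Shtrikman (Maxwell-Garnett) estimate for the effective complex
conductivity takes the form
$$\sigma^{*}=\sigma_{\rm b}\,\frac{1+2f\beta}{1-f\beta},\qquad \beta=\frac{\sigma_{\rm i}-\sigma_{\rm b}}{\sigma_{\rm i}+2\sigma_{\rm b}},$$
where $\sigma_{\rm b}$ and $\sigma_{\rm i}$ are the background (tissue) and
inclusion (alveolar air) admittivities and $f$ is the air volume fraction; the
coated-ellipsoid construction used in the paper generalizes this, and
linearizing $\sigma^{*}$ in $f$ already suggests why the real and imaginary
parts of the admittivity change are linearly related.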
|
The downward closure of a word language is the set of all (not necessarily
contiguous) subwords of its members. It is well-known that the downward closure
of any language is regular. While the downward closure appears to be a powerful
abstraction, algorithms for computing a finite automaton for the downward
closure of a given language have been established only for few language
classes.
This work presents a simple general method for computing downward closures.
For language classes that are closed under rational transductions, it is shown
that the computation of downward closures can be reduced to checking a certain
unboundedness property.
This result is used to prove that downward closures are computable for (i)
every language class with effectively semilinear Parikh images that are closed
under rational transductions, (ii) matrix languages, and (iii) indexed
languages (equivalently, languages accepted by higher-order pushdown automata
of order 2).
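As a concrete illustration of the object being computed: the downward closure
of $\{a^n b^n : n \ge 0\}$ is the regular language $a^* b^*$. For a finite
language it can be enumerated directly; the small sketch below (ours, for
intuition only) checks the scattered-subword relation and enumerates a
downward closure:

    from itertools import combinations

    def is_subword(u, w):
        # True iff u is a (not necessarily contiguous) subword of w.
        it = iter(w)
        return all(ch in it for ch in u)

    def downward_closure(language):
        # All subwords of all members of a finite language.
        closure = set()
        for w in language:
            for k in range(len(w) + 1):
                for idxs in combinations(range(len(w)), k):
                    closure.add("".join(w[i] for i in idxs))
        return closure

    print(is_subword("ab", "aabb"))                # True
    print(sorted(downward_closure({"ab", "ba"})))  # ['', 'a', 'ab', 'b', 'ba']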
|
Citation recommendation systems aim to recommend citations for either a
complete paper or a small portion of text called a citation context. The
process of recommending citations for citation contexts is called local
citation recommendation and is the focus of this paper. Firstly, we develop
citation recommendation approaches based on embeddings, topic modeling, and
information retrieval techniques. We combine, for the first time to the best of
our knowledge, the best-performing algorithms into a semi-genetic hybrid
recommender system for citation recommendation. We evaluate the single
approaches and the hybrid approach offline based on several data sets, such as
the Microsoft Academic Graph (MAG) and the MAG in combination with arXiv and
ACL. We further conduct a user study for evaluating our approaches online. Our
evaluation results show that a hybrid model containing embedding and
information retrieval-based components outperforms its individual components
and further algorithms by a large margin.
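A minimal sketch of the kind of score fusion such a hybrid recommender
performs; the min-max normalization, the weight alpha, and the example scores
are invented for illustration (the paper's semi-genetic hybrid combines its
components differently):

    import numpy as np

    def hybrid_scores(embed_scores, ir_scores, alpha=0.6):
        # Blend normalized embedding-similarity and IR (e.g. BM25-style) scores.
        def norm(s):
            s = np.asarray(s, dtype=float)
            span = s.max() - s.min()
            return (s - s.min()) / span if span > 0 else np.zeros_like(s)
        return alpha * norm(embed_scores) + (1 - alpha) * norm(ir_scores)

    # Rank three candidate citations for one citation context.
    scores = hybrid_scores([0.9, 0.2, 0.5], [1.2, 7.5, 3.3])
    print(np.argsort(scores)[::-1])  # candidate indices, best first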
|
Deploying convolutional neural networks (CNNs) for embedded applications
presents many challenges in balancing resource-efficiency and task-related
accuracy. These two aspects have been well-researched in the field of CNN
compression. In real-world applications, a third important aspect comes into
play, namely the robustness of the CNN. In this paper, we thoroughly study the
robustness of uncompressed, distilled, pruned and binarized neural networks
against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool,
LocalSearch and GenAttack). These new insights facilitate defensive training
schemes or reactive filtering methods, where the attack is detected and the
input is discarded and/or cleaned. Experimental results are shown for distilled
CNNs, agent-based state-of-the-art pruned models, and binarized neural networks
(BNNs) such as XNOR-Net and ABC-Net, trained on CIFAR-10 and ImageNet datasets.
We present evaluation methods to simplify the comparison between CNNs under
different attack schemes using loss/accuracy levels, stress-strain graphs,
box-plots and class activation mapping (CAM). Our analysis reveals susceptible
behavior of uncompressed and pruned CNNs against all kinds of attacks. The
distilled models exhibit their strength against all white box attacks with an
exception of C&W. Furthermore, binary neural networks exhibit resilient
behavior compared to their baselines and other compressed variants.
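Of the attacks evaluated, FGSM is the simplest; a standard one-step
implementation looks like the following (generic PyTorch, not the authors'
evaluation code):

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        # One-step FGSM: move x by eps in the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()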
|
The geometric approach to study the dynamics of U(1)-invariant membranes is
developed. The approach reveals an important role of the Abel nonlinear
differential equation of the first type with variable coefficients depending on
time and one of the membrane extendedness parameters. The general solution of
the Abel equation is constructed. Exact solutions of the whole system of
membrane equations in the D=5 Minkowski space-time are found and classified. It
is shown that if the radial component of the membrane world vector is only time
dependent then the dynamics is described by the pendulum equation.
|
We demonstrate mitigation of inter-channel nonlinear interference noise
(NLIN) in WDM systems for several amplification schemes. Using a practical
decision directed recursive least-squares algorithm, we take advantage of the
temporal correlations of NLIN to achieve a notable improvement in system
performance.
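For reference, a generic decision-directed RLS update with forgetting factor
$\lambda$ (textbook form; the notation is ours, not the paper's) is
$$\mathbf{k}_n=\frac{\mathbf{P}_{n-1}\mathbf{x}_n}{\lambda+\mathbf{x}_n^{H}\mathbf{P}_{n-1}\mathbf{x}_n},\quad e_n=\hat{d}_n-\mathbf{w}_{n-1}^{H}\mathbf{x}_n,\quad \mathbf{w}_n=\mathbf{w}_{n-1}+\mathbf{k}_n e_n^{*},\quad \mathbf{P}_n=\lambda^{-1}\bigl(\mathbf{P}_{n-1}-\mathbf{k}_n\mathbf{x}_n^{H}\mathbf{P}_{n-1}\bigr),$$
where $\hat{d}_n$ is the hard decision on the received symbol; because NLIN is
temporally correlated, such a tracker can follow it without pilot overhead.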
|
We propose a high resolution spatial diagnostic method via inserting a
millimeter-gap grating into the collimated terahertz beam to monitor the minute
variation of the terahertz beam in strong-field terahertz sources, which is
difficult to resolve in conventional terahertz imaging systems. To verify the
method, we intentionally introduce tiny variations of the terahertz beam by
tuning the iris of the infrared pump beam before the tilted pulse-front
pumping (TPFP) setup. The phenomena are well explained by the theory of the
tilted pulse-front technique and terahertz diffraction. We believe our
observation not only helps to further understand the mechanism of intense
terahertz generation, but may also be useful for strong-field terahertz
applications.
|
In this paper we present our finding that good representations can be learned
via continuous attention during the interaction between Unsupervised Learning
(UL) and Reinforcement Learning (RL) modules driven by intrinsic motivation.
Specifically, we designed intrinsic rewards, generated from UL modules, that
drive the RL agent to focus on objects for a period of time and to learn good
representations of objects for a later object recognition task. We evaluate
our proposed algorithm in settings both with and without extrinsic rewards.
Experiments with end-to-end training in simulated environments
with applications to few-shot object recognition demonstrated the effectiveness
of the proposed algorithm.
|
Electromagnetic simulations of complex geologic settings are computationally
expensive. One reason for this is the fact that a fine mesh is required to
accurately discretize the electrical conductivity model of a given setting.
This conductivity model may vary over several orders of magnitude and these
variations can occur over a large range of length scales. Using a very fine
mesh for the discretization of this setting leads to the necessity to solve a
large system of equations that is often difficult to deal with. To keep the
simulations computationally tractable, coarse meshes are often employed for the
discretization of the model. Such coarse meshes typically fail to capture the
fine-scale variations in the conductivity model resulting in inaccuracies in
the predicted data. In this work, we introduce a framework for constructing a
coarse-mesh or upscaled conductivity model based on a prescribed fine-mesh
model. Rather than using analytical expressions, we opt to pose upscaling as a
parameter estimation problem. By solving an optimization problem, we obtain a
coarse-mesh conductivity model. The optimization criterion can be tailored to
the survey setting in order to produce coarse models that accurately reproduce
the predicted data generated on the fine mesh. This allows us to upscale
arbitrary conductivity structures, as well as to better understand the meaning
of the upscaled quantity. We use 1D and 3D examples to demonstrate that the
proposed framework is able to emulate the behavior of the heterogeneity in the
fine-mesh conductivity model, and to produce an accurate description of the
desired predicted data obtained by using a coarse mesh in the simulation
process.
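A toy version of upscaling posed as parameter estimation (1D, with a
deliberately simple forward model; all names and sizes are illustrative, not
the paper's code): find coarse-cell conductivities whose predicted data match
the fine-mesh predictions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_fine, n_coarse = 100, 5
    block = n_fine // n_coarse
    sigma_fine = np.exp(rng.normal(0.0, 1.0, n_fine))   # fine-mesh conductivity
    dz_fine, dz_coarse = 1.0 / n_fine, 1.0 / n_coarse

    # "Predicted data": resistance of each depth block for 1D vertical current flow.
    d_fine = np.array([np.sum(dz_fine / sigma_fine[i * block:(i + 1) * block])
                       for i in range(n_coarse)])

    def misfit(log_sigma):
        d_coarse = dz_coarse / np.exp(log_sigma)        # coarse-mesh forward model
        return np.sum((d_coarse - d_fine) ** 2)

    res = minimize(misfit, x0=np.zeros(n_coarse))
    print(np.exp(res.x))  # upscaled conductivities

For this toy model the optimum reduces to the harmonic mean of each block, but
the same recipe applies to arbitrary forward models and survey settings where
no closed-form average exists, which is the point of posing upscaling as an
optimization problem.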
|
In continuum physics it is presupposed that general-relativistic balance
equations are valid which are created from the Lorentz-covariant ones by
application of the equivalence principle. Consequently, the question arises,
how to make these general-covariant balances compatible with Einstein's field
equations. The compatibility conditions are derived by performing a modified
Belinfante-Rosenfeld symmetrization for the non-symmetric and not
divergence-free general-relativistic energy-momentum tensor. The procedure
results in the Mathisson-Papapetrou equations.
|
We have determined spectroscopic orbits for five single-lined spectroscopic
binaries, HD 100167, HD 135991, HD 140667, HD 158222, HD 217924. Their periods
range from 60.6 to 2403 days and the eccentricities, from 0.20 to 0.84. Our
spectral classes for the stars confirm that they are of solar type, F9 to G5,
and all are dwarfs. Their [Fe/H] abundances, determined spectroscopically, are
close to the solar value and on average are 0.12 greater than abundances from a
photometric calibration. Four of the five stars are rotating faster than their
predicted pseudosynchronous rotational velocities.
|
When the process underlying DNA substitutions varies across evolutionary
history, the Markov models underlying standard phylogenetic methods
are mathematically inconsistent. The most prominent example is the general time
reversible model (GTR) together with some, but not all, of its submodels. To
rectify this deficiency, Lie Markov models have been developed as the class of
models that are consistent in the face of a changing process of DNA
substitutions. Some well-known models in popular use are within this class, but
are either overly simplistic (e.g. the Kimura two-parameter model) or overly
complex (the general Markov model). On a diverse set of biological data sets,
we test a hierarchy of Lie Markov models spanning the full range of parameter
richness. Compared against the benchmark of the ever-popular GTR model, we find
that as a whole the Lie Markov models perform remarkably well, with the best
performing models having eight parameters and the ability to recognise the
distinction between purines and pyrimidines.
|
The magnetic network observed on the solar surface harbors a sizable fraction
of the total quiet Sun flux. However, its origin and maintenance are not well
known. Here we investigate the contribution of internetwork magnetic fields to
the network flux. Internetwork fields permeate the interior of supergranular
cells and show large emergence rates. We use long-duration sequences of
magnetograms acquired by Hinode and an automatic feature tracking algorithm to
follow the evolution of network and internetwork flux elements. We find that
14% of the quiet Sun flux is in the form of internetwork fields, with little
temporal variations. Internetwork elements interact with network patches and
modify the flux budget of the network, either by adding flux (through merging
processes) or by removing it (through cancellation events). Mergings appear to
be dominant, so the net flux contribution of the internetwork is positive. The
observed rate of flux transfer to the network is 1.5 x 10^24 Mx day^-1 over the
entire solar surface. Thus, the internetwork supplies as much flux as is
present in the network in only 9-13 hours. Taking into account that not all the
transferred flux is incorporated into the network, we find that the
internetwork would be able to replace the entire network flux in approximately
18-24 hours. This renders the internetwork the most important contributor to
the network, challenging the view that ephemeral regions are the main source of
flux in the quiet Sun. About 40% of the total internetwork flux eventually ends
up in the network.
|
We study the question of existence, i.e. stability with respect to
dissociation, of the spin-quartet, permutation- and reflection-symmetric
${}^4(-3)^+_g$ ($S_z=-3/2, M=-3$) state of the $(\alpha\alpha e e e)$ Coulomb
system: the ${\rm He}_2^+$ molecular ion, placed in a magnetic field $0 \le B
\le 10000$ a.u. We assume that the $\alpha$-particles are infinitely massive
(Born-Oppenheimer approximation of zero order) and adopt the parallel
configuration, when the molecular axis and the magnetic field direction
coincide, as the optimal configuration. The study of the stability is performed
variationally with a physically adequate trial function. To achieve this goal,
we explore several Helium-containing compounds in strong magnetic fields, in
particular, we study the spin-quartet ground state of ${\rm He}^-$ ion, and the
ground (spin-triplet) state of the Helium atom, both for a magnetic field in
$100 \leq B\leq 10000$ a.u. The main result is that the ${\rm He}_2^+$
molecular ion in the state ${}^4(-3)^+_g$ is stable towards all possible decay
modes for magnetic fields $B \gtrsim 120$ a.u. and with the magnetic field
increase the ion becomes more tightly bound and compact with a cigar-type form
of electronic cloud. At $B=1000$ a.u., the dissociation energy of ${\rm
He}_2^+$ into ${\rm He}^- + \alpha$ is $\sim 701.8$ eV and the dissociation
energy for the decay channel to ${\rm He} + \alpha + e$ is $\sim 729.1$ eV;
both energies lie in the energy window of one of the observed absorption
features of the isolated neutron star 1E1207.4-5209.
|
Motivated by the ideas of analogue gravity, we have performed experiments in
a flume where an analogue White Hole horizon is generated, in the form of a
wave blocking region, by suitably tuned uniform fluid (water) flow and
counter-propagating shallow water waves. We corroborate earlier experimental
observations by finding a critical wave frequency for a particular discharge
above which the waves are effectively blocked beyond the horizon. An obstacle,
in the form of a bottom wave, is introduced to generate a sharp blocking zone.
All previous studies used such an obstacle. A novel part of our experiment is
that we do not introduce the obstacle and
find that wave blocking still takes place, albeit in a more diffused zone.
Lastly we replace the fixed bottom wave obstacle by a movable sand bed to study
the sediment transport and the impact of the horizon or wave blocking
phenomenon on the sediment profile. We find signatures of the wave blocking
zone in the ripple pattern.
|
Electrical transport through a normal metal / superconductor contact at
biases smaller than the energy gap can occur via the reflection of an electron
as a hole of opposite wave vector. The same mechanism of electron-hole
reflection gives rise to low energy states at the surface of unconventional
superconductors having nodes in their order parameter. The occurrence of
electron-hole reflections at normal metal / superconductor interfaces was
predicted independently by Saint James and de Gennes and by Andreev, and their
spectroscopic features discussed in detail by Saint James in the early sixties.
They are generally called Andreev reflections but, for that reason, we call
them Andreev - Saint James (ASJ) reflections. We present a historical review of
ASJ reflections and spectroscopy in conventional superconductors, and review
their application to the High $T_c$ cuprates. The occurrence of ASJ reflections
in all studied cuprates is well documented for a broad range of doping levels,
implying that there is no large asymmetry between electrons and holes near the
Fermi level in the superconducting state. In the underdoped regime, where the
pseudo-gap phenomenon has been observed by other methods such as NMR, ARPES and
Giaever tunneling, gap values obtained from ASJ spectroscopy are smaller than
pseudo-gap values, indicating a lack of coherence in the pseudo-gap energy
range.
|
Normal fetal adipose tissue (AT) development is essential for perinatal
well-being. AT, or simply fat, stores energy in the form of lipids.
Malnourishment may result in excessive or depleted adiposity. Although previous
studies showed a correlation between the amount of AT and perinatal outcome,
prenatal assessment of AT is limited by the lack of quantitative methods. Using
magnetic resonance imaging (MRI), 3D fat- and water-only images of the entire
fetus can be obtained from two point Dixon images to enable AT lipid
quantification. This paper is the first to present a methodology for developing
a deep learning based method for fetal fat segmentation based on Dixon MRI. It
optimizes radiologists' manual fetal fat delineation time to produce annotated
training dataset. It consists of two steps: 1) model-based semi-automatic fetal
fat segmentations, reviewed and corrected by a radiologist; 2) automatic fetal
fat segmentation using DL networks trained on the resulting annotated dataset.
Three DL networks were trained. We show a significant improvement in
segmentation times (3:38 hours to < 1 hour) and observer variability (Dice of
0.738 to 0.906) compared to manual segmentation. Automatic segmentation of 24
test cases with the 3D Residual U-Net, nn-UNet and SWIN-UNetR transformer
networks yields a mean Dice score of 0.863, 0.787 and 0.856, respectively.
These results are better than the manual observer variability, and comparable
to automatic adult and pediatric fat segmentation. A radiologist reviewed and
corrected six new independent cases segmented using the best performing
network, resulting in a Dice score of 0.961 and a significantly reduced
correction time of 15:20 minutes. Using these novel segmentation methods and
short MRI acquisition time, whole body subcutaneous lipids can be quantified
for individual fetuses in the clinic and large-cohort research.
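The Dice scores quoted above are the standard overlap measure between two
segmentation masks; for binary masks they can be computed as follows (a
generic utility, not the paper's code):

    import numpy as np

    def dice(a, b, eps=1e-8):
        # Dice similarity: 2 |A and B| / (|A| + |B|).
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

    pred = np.zeros((4, 4)); pred[:2, :2] = 1
    truth = np.zeros((4, 4)); truth[:2, :3] = 1
    print(dice(pred, truth))  # 0.8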
|
In the era of Agile methodologies, organizations are exploring strategies to
scale development across teams. Various scaling strategies have emerged, from
"SAFe" to "LeSS", with some organizations creating their own methods. Despite
numerous studies on organizational challenges with these approaches, none have
empirically compared their impact on Agile team effectiveness. This study aims
to evaluate the effectiveness of Agile teams using different scaling methods,
focusing on factors like responsiveness, stakeholder satisfaction, and
management approach. We surveyed 15,078 Agile team members and 1,841
stakeholders, followed by statistical analyses. The results showed minor
differences in effectiveness across scaling strategies. In essence, the choice
of scaling strategy does not significantly impact team effectiveness, and
organizations should select based on their culture and management style.
|
This paper introduces the notion of learning from contradictions (a.k.a
Universum learning) for deep one class classification problems. We formalize
this notion for the widely adopted one class large-margin loss, and propose the
Deep One Class Classification using Contradictions (DOC3) algorithm. We show
that learning from contradictions incurs lower generalization error by
comparing the Empirical Rademacher Complexity (ERC) of DOC3 against its
traditional inductive learning counterpart. Our empirical results demonstrate
the efficacy of DOC3 compared to popular baseline algorithms on several
real-life data sets.
|
In this manuscript we study the modeling of experimental data and its impact
on the resulting integral experimental covariance and correlation matrices. By
investigating a set of three low enriched and water moderated UO2 fuel rod
arrays we found that modeling the same set of data with different, yet
reasonable assumptions concerning the fuel rod composition and its geometric
properties leads to significantly different covariance matrices or correlation
coefficients. Following a Monte Carlo sampling approach, we show for nine
different modeling assumptions the corresponding correlation coefficients and
sensitivity profiles for each pair of effective neutron multiplication
factors keff. Within the 95% confidence interval the correlation coefficients
vary from 0 to 1, depending on the modeling assumptions. Our findings show that
the choice of modeling can have a huge impact on integral experimental
covariance matrices. When the latter are used in a validation procedure to
derive a bias, this procedure can be affected by the choice of modeling
assumptions, too. The correct consideration of correlated data seems to be
inevitable if the experimental data in a validation procedure is limited or one
cannot rely on a sufficient number of uncorrelated data sets, e.g. from
different laboratories using different setups etc.
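Schematically, the Monte Carlo sampling approach yields an ensemble of keff
values per experiment for each set of modeling assumptions, from which
correlation coefficients follow directly; a minimal sketch with synthetic
numbers (not the benchmark data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    shared = rng.normal(size=n)           # shared uncertain parameter (e.g. fuel data)
    private = rng.normal(size=(2, n))     # experiment-specific uncertainties

    # The shared-component weight (0.8 here) encodes the modeling assumption
    # and controls the resulting integral experimental correlation.
    keff_1 = 1.0 + 1e-3 * (0.8 * shared + 0.2 * private[0])
    keff_2 = 1.0 + 1e-3 * (0.8 * shared + 0.2 * private[1])

    print(np.corrcoef(keff_1, keff_2)[0, 1])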
|
Most proof systems for concurrent programs assume the underlying memory model
to be sequentially consistent (SC), an assumption which does not hold for
modern multicore processors. These processors, for performance reasons,
implement relaxed memory models. As a result of this relaxation a program,
proved correct on the SC memory model, might execute incorrectly. To ensure its
correctness under relaxation, fence instructions are inserted in the code. In
this paper we show that the SC proof of correctness of an algorithm, carried
out in the proof system of [Sou84], identifies per-thread instruction orderings
sufficient for this SC proof. Further, to correctly execute this algorithm on
an underlying relaxed memory model it is sufficient to respect only these
orderings by inserting fence instructions.
|
We present deep near-IR photometry for Galactic bulge stars in Baade's
Window, $(l,b) = (1.0\deg, -3.9\deg),$ and another minor axis field at $(l,b) =
(0^\circ,-6^\circ)$. We combine our data with previously published photometry
and construct a luminosity function over the range $5.5 \leq K_0 \leq 16.5$,
deeper than any previously published. The slope of this luminosity function and
the magnitude of the tip of the first ascent giant branch are consistent with
theoretical values derived from isochrones with appropriate age and
metallicity.
We use the relationship between [Fe/H] and the giant branch slope derived
from near-IR observations of metal rich globular clusters by Kuchinski {\it et
al.} [AJ, 109, 1131 (1995)] to calculate the mean metallicity for several bulge
fields along the minor axis. For Baade's Window we derive $\langle
{\rm[Fe/H]}\rangle = -0.28 \pm 0.16$, consistent with the recent estimate of
McWilliam \& Rich [ApJS, 91, 749 (1994)], but somewhat lower than previous
estimates based on CO and TiO absorption bands and the $JHK$ colors of M giants
by Frogel {\it et al.} [ApJ, 353, 494 (1990)]. Between $b = -3\deg$ and
$-12\deg$ we find a gradient in $\langle {\rm [Fe/H]}\rangle$ of $-0.06 \pm
0.03$ dex/degree or $-0.43 \pm 0.21$ dex/kpc for $R_0 = 8$ kpc, consistent with
other independent derivations. We derive a helium abundance for Baade's Window
with the $R$ and $R^\prime$ methods and find that $Y = 0.27 \pm 0.03$ implying
$\Delta Y / \Delta Z = 3.3 \pm 1.3$.
Next, we find that the bolometric corrections for bulge K giants ($V - K \leq
2$) are in excellent agreement with empirical derivations based on observations
of globular cluster and local field stars. However, for the redder M giants we
|
Measurements of polarization and temperature dependent soft x-ray absorption
have been performed on Na_xCoO_2 single crystals with x=0.4 and x=0.6. They
show a deviation of the local trigonal symmetry of the CoO_6 octahedra, which
is temperature independent in a temperature range between 25 K and 372 K. This
deviation was found to be different for Co^{3+} and Co^{4+} sites. With the
help of a cluster calculation we are able to interpret the Co L_{23}-edge
absorption spectrum and find a doping dependent energy splitting between the
t_{2g} and the e_g levels (10Dq) in Na_xCoO_2.
|
Changing the set of independent variables of Poincare gauge theory and
considering, in a manner similar to the second order formalism of general
relativity, the Riemannian part of the Lorentz connection as function of the
tetrad field, we construct theories that do not contain second or higher order
derivatives in the field variables, possess a full general relativity limit in
the absence of spinning matter fields, and allow for propagating torsion fields
in the general case. A concrete model is discussed and the field equations are
reduced by means of a Yasskin type ansatz to a conventional Einstein-Proca
system. Approximate solutions describing the exterior of a spin polarized
neutron star are presented and the possibility of an experimental detection of
the torsion fields is briefly discussed.
|
We present optical photometry of the afterglow of the long GRB 180205A with
the COATLI telescope from 217 seconds to about 5 days after the {\itshape
Swift}/BAT trigger. We analyse this photometry in conjunction with the
X-ray light curve from {\itshape Swift}/XRT. The late-time light curves and
spectra are consistent with the standard forward-shock scenario. However, the
early-time optical and X-ray light curves show non-typical behavior; the
optical light curve exhibits a flat plateau while the X-ray light curve shows a
flare. We explore several scenarios and conclude that the most likely
explanation for the early behavior is late activity of the central engine.
|
The lifetimes of non-covalent A:a knob-hole bonds in fibrin probed with the
optical trap-based force-clamp first increases ("catch bonds") and then
decreases ("slip bonds") with increasing tensile force. Molecular modeling of
"catch-to-slip" transition using the atomic structure of the A:a complex
reveals that the movable flap serves as a tension-dependent molecular switch.
Flap dissociation from the regulatory B-domain in the $\gamma$-nodule and
translocation from the periphery to knob `A' triggers the hole `a' closure and
interface remodeling, which results in the increased binding affinity and
prolonged bond lifetimes. Fluctuating bottleneck theory is developed to
understand the "catch-to-slip" transition in terms of the interface stiffness
$\kappa =$ 15.7 pN nm $^{-1}$, interface size fluctuations 0.7-2.7 nm, knob `A'
escape rate constant $k_0 =$ 0.11 nm$^2$ s$^{-1}$, and transition distance for
dissociation $\sigma_y =$ 0.25 nm. Strengthening of the A:a knob-hole bonds
under small tension might favor formation and reinforcement of nascent fibrin
clots under hydrodynamic shear.
|
We consider a class of two-player turn-based zero-sum games on graphs with
reachability objectives, known as reachability games, where the objective of
Player 1 (P1) is to reach a set of goal states, and that of Player 2 (P2) is to
prevent this. In particular, we consider the case where the players have
asymmetric information about each other's action capabilities: P2 starts with
an incomplete information (misperception) about P1's action set, and updates
the misperception when P1 uses an action previously unknown to P2. When P1 is
made aware of P2's misperception, the key question is whether P1 can control
P2's perception so as to deceive P2 into selecting actions to P1's advantage.
We show that there might exist a deceptive winning strategy for P1 that ensures
P1's objective is achieved with probability one from a state that would
otherwise be losing for P1, had the information been symmetric and complete. We
present three key
results: First, we introduce a dynamic hypergame model to capture the
reachability game with evolving misperception of P2. Second, we present a
fixed-point algorithm to compute the Deceptive Almost-Sure Winning (DASW)
region and DASW strategy. Finally, we show that DASW strategy is at least as
powerful as Almost-Sure Winning (ASW) strategy in the game in which P1 does not
account for P2's misperception. We illustrate our algorithm using a robot
motion planning in an adversarial environment.
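The DASW computation builds on the standard attractor fixed point for
turn-based reachability games; a minimal sketch of that underlying fixed point
(without the hypergame layer that tracks P2's evolving misperception) might
look as follows:

```python
# Minimal sketch: P1's winning region for a reachability objective,
# computed as the usual attractor fixed point. The paper's DASW algorithm
# additionally exploits P2's misperception, which is omitted here.
def attractor(states, edges, owner, goal):
    # states: iterable of states; edges: dict state -> set of successors;
    # owner: dict state -> 1 or 2 (whose turn it is); goal: set of states.
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succ = edges[s]
            if (owner[s] == 1 and succ & win) or \
               (owner[s] == 2 and succ and succ <= win):
                win.add(s)
                changed = True
    return win

# Tiny example: P1 owns s0, P2 owns s1; the goal is {"g"}.
states = {"s0", "s1", "g"}
edges = {"s0": {"s1", "g"}, "s1": {"s0"}, "g": set()}
owner = {"s0": 1, "s1": 2, "g": 1}
print(attractor(states, edges, owner, {"g"}))   # {'s0', 's1', 'g'}
```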
|
We prove upper and lower bounds for the threshold of the q-overlap-k-Exact
cover problem.
These results are motivated by the one-step replica symmetry breaking
approach of Statistical Physics, and the hope of using an approach based on
that of Mezard et al. (2005) to rigorously prove that for some values of the
order parameter the overlap distribution of k-Exact Cover has discontinuous
support.
|
The definition and basic properties of the Burnside ring of compact Lie
groups are presented, with emphasis on the analogy with the construction of the
Burnside ring of finite groups.
|
We consider graphs $G$ with $\Delta=3$ such that $\chi'(G)=4$ and
$\chi'(G-e)=3$ for every edge $e$, so-called \emph{critical} graphs. Jakobsen
noted that the Petersen graph with a vertex deleted, $P^*$, is such a graph and
has average degree only $\frac83$. He showed that every critical graph has
average degree at least $\frac83$, and asked if $P^*$ is the only graph where
equality holds. A result of Cariolaro and Cariolaro shows that this is true. We
strengthen this average degree bound further. Our main result is that if $G$ is
a subcubic critical graph other than $P^*$, then $G$ has average degree at
least $\frac{46}{17}\approx2.706$. This bound is best possible, as shown by the
Hajos join of two copies of $P^*$.
|
In this article we consider the problem of pricing and hedging
high-dimensional Asian basket options by Quasi-Monte Carlo simulation. We
assume a Black-Scholes market with time-dependent volatilities and show how to
compute the deltas by the aid of the Malliavin Calculus, extending the
procedure employed by Montero and Kohatsu-Higa (2003). Efficient
path-generation algorithms, such as Linear Transformation and Principal
Component Analysis, exhibit a high computational cost in a market with
time-dependent volatilities. We present a new and fast Cholesky algorithm for
block matrices that makes the Linear Transformation even more convenient.
Moreover, we propose a new path-generation technique based on a Kronecker
Product Approximation. This construction returns the same accuracy of the
Linear Transformation used for the computation of the deltas and the prices in
the case of correlated asset returns while requiring a lower computational
time. All these techniques can be easily employed for stochastic volatility
models based on the mixture of multi-dimensional dynamics introduced by Brigo
et al. (2004).
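As a rough illustration of the setting, the sketch below prices an arithmetic
Asian basket call by Quasi-Monte Carlo with a PCA path construction; all
market parameters are invented, and the paper's block-Cholesky and Kronecker
constructions are not reproduced:

```python
# Hedged sketch: QMC pricing of an arithmetic Asian basket call under
# Black-Scholes with a PCA path construction (Sobol points). Parameters
# are illustrative only.
import numpy as np
from scipy.stats import qmc, norm

d_assets, n_steps = 2, 16
T, r, K, S0, sigma = 1.0, 0.03, 100.0, 100.0, 0.2
dt = T / n_steps
dim = d_assets * n_steps

times = dt * np.arange(1, n_steps + 1)
cov_t = np.minimum.outer(times, times)      # Brownian-motion covariance
cov = np.kron(np.eye(d_assets), cov_t)      # independent assets, for brevity

w, v = np.linalg.eigh(cov)                  # PCA: leading QMC coordinates
A = v[:, ::-1] * np.sqrt(w[::-1])           # carry the most variance

z = norm.ppf(qmc.Sobol(d=dim, scramble=True, seed=7).random(2**14))
W = (z @ A.T).reshape(-1, d_assets, n_steps)

S = S0 * np.exp((r - 0.5 * sigma**2) * times + sigma * W)
basket_avg = S.mean(axis=(1, 2))            # average over assets and dates
price = np.exp(-r * T) * np.maximum(basket_avg - K, 0.0).mean()
print(f"QMC price estimate: {price:.3f}")
```

The PCA ordering is what makes low-discrepancy points effective here: most of
the payoff variance is pushed into the first few Sobol coordinates.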
|
The analytical package written in FORM presented in this paper allows the
computation of the complete set of Feynman Rules producing the Rational terms
of kind R2 contributing to the virtual part of NLO amplitudes in the Standard
Model of the Electroweak interactions. Building-block topologies filled by
means of generic scalars, vectors and fermions, which allow these Feynman Rules
to be built in terms of specific elementary particles, are explicitly given in
the Rxi gauge class, together with the automatic dressing procedure to obtain
the
Feynman Rules from them. The results in more specific gauges, like the 't Hooft
Feynman one, follow as particular cases, in both the HV and the FDH dimensional
regularization schemes. As a check on our formulas, the gauge independence of
the total Rational contribution (R1 + R2) to renormalized S-matrix elements is
verified by considering the specific example of the H --> gamma-gamma decay
process at 1-loop. This package can be of interest for people aiming at a
better understanding of the nature of the Rational terms. It is organized in a
modular way, allowing further use of some of its files even in different
contexts. Furthermore, it can be considered as a first seed in the effort
towards a complete automation of the process of the analytical calculation of
the R2 effective vertices, given the Lagrangian of a generic gauge theory of
particle interactions.
|
In this paper, we analytically investigate the effect of adding an external
magnetic field in presence of Born-Infeld corrections to a holographic
superconductor in the probe limit. The technique employed is based on the
matching of the solutions to the field equations near the horizon and the
asymptotic AdS region. We obtain expressions for the critical temperature and
the condensation values explicitly to all orders in the Born-Infeld parameter.
The value of the critical magnetic field is finally obtained and is found to
get affected by the Born-Infeld parameter.
|
We report on the detection of single photons with {\lambda} = 8 {\mu}m using
a superconducting hot-electron microbolometer. The sensing element is a
titanium transition-edge sensor with a volume ~ 0.1 {\mu}m^3 fabricated on a
silicon substrate. Poisson photon counting statistics including simultaneous
detection of 3 photons was observed. The width of the photon-number peaks was
0.11 eV, 70% of the photon energy, at 50-100 mK. This achieved energy
resolution is one of the best figures reported so far for superconducting
devices. Such devices can be suitable for single photon calorimetric
spectroscopy throughout the mid-infrared and even the far-infrared.
|
Change in structure and magnetic properties of SmCo5-xFex permanent magnets
for different values of x ranging from 0 to 2 have been studied. Structural
investigation from X-ray diffraction (XRD) patterns confirms the hexagonal
CaCu5-type structure of the SmCo5-xFex ribbons for 0<x<2. The decrease in the
angular position of the diffraction peaks points to lattice expansion due to
the substitution of Co atoms by larger Fe atoms. A mixture of phases occurs for
x>2 and has been confirmed by both XRD studies and magnetic measurements. A
nucleation effect induced by the Fe additive enhances the coercivity (Hc) up to
27 kOe, much larger than the 4.5 kOe obtained for pure SmCo5.
|
Tree-based networks are a class of phylogenetic networks that attempt to
formally capture what is meant by "tree-like" evolution. A given non-tree-based
phylogenetic network, however, might appear to be very close to being
tree-based, or very far from it. In this paper, we formalise the notion of
proximity to
tree-based for unrooted phylogenetic networks, with a range of proximity
measures. These measures also provide characterisations of tree-based networks.
One measure in particular, related to the nearest neighbour interchange
operation, allows us to define the notion of "tree-based rank". This provides a
subclassification within the tree-based networks themselves, identifying those
networks that are "very" tree-based. Finally, we prove results relating
tree-based networks in the settings of rooted and unrooted phylogenetic
networks, showing effectively that an unrooted network is tree-based if and
only if it can be made a rooted tree-based network by rooting it and orienting
the edges appropriately. This leads to a clarification of the contrasting
decision problems for tree-based networks, which are polynomial in the rooted
case but NP-complete in the unrooted case.
|
Nuclear spins are among the potential candidates prospected for quantum
information technology. A recent breakthrough made it possible to atomically
resolve
their interaction with the electron spin, the so-called hyperfine interaction,
within individual atoms utilizing scanning tunneling microscopy (STM).
Intriguingly, this was only realized for a few species placed on a
two-layer-thick MgO film. Here, we systematically quantify from first
principles the hyperfine
interactions of the whole series of 3d transition adatoms deposited on various
thicknesses of MgO, NaF, NaCl, h--BN and Cu$_2$N films. We identify the
adatom-substrate complexes with the largest hyperfine interactions and unveil
the main trends and exceptions. We reveal the core mechanisms at play, such as
the interplay of the local bonding geometry and the chemical nature of the thin
films, which trigger transitions between high- and low-spin states accompanied
with subtle internal rearrangements of the magnetic electrons. By providing a
general map of hyperfine interactions, our work has immediate implications in
future STM investigations aiming at detecting and realizing quantum concepts
hinging on nuclear spins.
|
We investigate the evolution of the faint-end slope of the luminosity
function, $\alpha$, using semi-analytical modeling of galaxy formation. In
agreement with observations, we find that the slope can be fitted well by
$\alpha(z) = a + bz$, with $a=-1.13$ and $b=-0.1$. The main driver for the
evolution
in $\alpha$ is the evolution in the underlying dark matter mass function.
Sub-L_* galaxies reside in dark matter halos that occupy a different part of
the mass function. At high redshifts, this part of the mass function is steeper
than at low redshifts and hence $\alpha$ is steeper. Supernova feedback in
general causes the same relative flattening with respect to the dark matter
mass function. The faint-end slope at low redshifts is dominated by field
galaxies and at high redshifts by cluster galaxies. The evolution of
$\alpha(z)$ in each of these environments is different, with field galaxies
having a slope b=-0.14 and cluster galaxies b=-0.05. The transition from
cluster-dominated to field-dominated faint-end slope occurs roughly at a
redshift $z_* \sim 2$, and suggests that a single linear fit to the overall
evolution of $\alpha(z)$ might not be appropriate. Furthermore, this result
indicates that tidal disruption of dwarf galaxies in clusters cannot play a
significant role in explaining the evolution of $\alpha(z)$ at z< z_*. In
addition we find that different star formation efficiencies a_* in the
Schmidt-Kennicutt-law and supernovae-feedback efficiencies $\epsilon$ generally
do not strongly influence the evolution of $\alpha(z)$.
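For reference, the fitted evolution can be evaluated directly; the sketch
below assumes, purely for illustration, the same intercept a for the field and
cluster fits (the abstract only quotes their slopes):

```python
# Evaluate the fitted faint-end slope evolution alpha(z) = a + b*z with
# the abstract's a = -1.13, b = -0.1, and the field/cluster slopes.
a, b_all, b_field, b_cluster = -1.13, -0.10, -0.14, -0.05

for z in (0.0, 1.0, 2.0, 3.0):
    print(f"z={z:.0f}: all {a + b_all * z:+.2f}  "
          f"field {a + b_field * z:+.2f}  cluster {a + b_cluster * z:+.2f}")
```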
|
We investigate percolation in binary and ternary mixtures of patchy colloidal
particles theoretically and using Monte Carlo simulations. Each particle has
three identical patches, with distinct species having different types of patch.
Theoretically we assume tree-like clusters and calculate the bonding
probabilities using Wertheim's first-order perturbation theory for association.
For ternary mixtures we find up to eight fundamentally different percolated
states. The states differ in terms of the species and pairs of species that
have percolated. The strongest gel is a trigel or tricontinuous gel, in which
each of the three species has percolated. The weakest gel is a mixed gel in
which all of the particles have percolated, but none of the species percolates
by itself. The competition between entropy of mixing and internal energy of
bonding determines the stability of each state. Theoretical and simulation
results are in very good agreement. The only significant difference is the
temperature at the percolation threshold, which is overestimated by the theory
due to the absence of closed loops in the theoretical description.
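In the tree-like (loopless) limit the one-component percolation criterion
reduces to the classical Flory-Stockmayer condition, sketched below; the
mixture case of the paper generalizes this to a matrix condition that is not
shown here:

```python
# Flory-Stockmayer criterion: particles with f identical patches percolate
# on a loopless (tree-like) cluster once the bond probability exceeds
# 1 / (f - 1), i.e. 1/2 for the three-patch particles considered here.
def percolates(p_bond: float, f: int = 3) -> bool:
    return p_bond > 1.0 / (f - 1)

for p in (0.3, 0.5, 0.7):
    print(f"p_bond = {p}: percolates = {percolates(p)}")
```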
|
We analyze topological objects in pure QCD in the presence of external quarks
by calculating the distributions of instanton and monopole densities around
static color sources. We find a suppression of the densities close to external
sources and the formation of a flux tube between a static quark--antiquark
pair. The similarity in the behavior of instantons and monopoles around static
sources might be due to a local correlation between these topological objects.
On an $8^{3} \times 4$ lattice at $\beta=5.6$, it turns out that topological
quantities are correlated over approximately two lattice spacings.
|
Graph generative models are a highly active branch of machine learning. Given
the steady development of new models of ever-increasing complexity, it is
necessary to provide a principled way to evaluate and compare them. In this
paper, we enumerate the desirable criteria for such a comparison metric and
provide an overview of the status quo of graph generative model comparison in
use today, which predominantly relies on the maximum mean discrepancy (MMD). We
perform a systematic evaluation of MMD in the context of graph generative model
comparison, highlighting some of the challenges and pitfalls researchers may
inadvertently encounter. After conducting a thorough analysis of the
behaviour of MMD on synthetically-generated perturbed graphs as well as on
recently-proposed graph generative models, we are able to provide a suitable
procedure to mitigate these challenges and pitfalls. We aggregate our findings
into a list of practical recommendations for researchers to use when evaluating
graph generative models.
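A minimal sketch of the kind of MMD evaluation under discussion, using degree
histograms and a Gaussian kernel; the kernel and feature choices are exactly
the knobs whose pitfalls are analysed in the paper, and the graph sets below
are synthetic stand-ins:

```python
# Sketch: MMD^2 between a "reference" and a "generated" set of graphs,
# represented by normalized degree histograms under a Gaussian kernel.
import numpy as np
import networkx as nx

def degree_hist(g, n_bins=10):
    h = np.bincount([d for _, d in g.degree()], minlength=n_bins)[:n_bins]
    return h / max(h.sum(), 1)

def mmd2(X, Y, sigma=1.0):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
ref = [nx.erdos_renyi_graph(30, 0.20, seed=int(s))
       for s in rng.integers(0, 9999, 50)]
gen = [nx.erdos_renyi_graph(30, 0.25, seed=int(s))
       for s in rng.integers(0, 9999, 50)]

X = np.array([degree_hist(g) for g in ref])
Y = np.array([degree_hist(g) for g in gen])
print(f"MMD^2 estimate: {mmd2(X, Y):.4f}")   # sensitive to sigma and features
```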
|
Experimental progress in the miniaturisation of electronic devices has made
routinely available in the laboratory small electronic systems, on the
micron or sub-micron scale, which at low temperature are sufficiently well
isolated from their environment to be considered as fully coherent. Some of
their most important properties are dominated by the interaction between
electrons. Understanding their behaviour therefore requires a description of
the interplay between interference effects and interactions.
The goal of this review is to address this relatively broad issue, and more
specifically to address it from the perspective of the quantum chaos community.
I will therefore present some of the concepts developed in the field of quantum
chaos which have some application to study many-body effects in mesoscopic and
nanoscopic systems. Their implementation is illustrated on a few examples of
experimental relevance such as persistent currents, mesoscopic fluctuations of
Kondo properties or Coulomb blockade. I will furthermore try to bring out,
from the various physical illustrations, some of the more general advantages
of the quantum-chaos-based approach.
|
How to train a machine learning model while keeping the data private and
secure? We present CodedPrivateML, a fast and scalable approach to this
critical problem. CodedPrivateML keeps both the data and the model
information-theoretically private, while allowing efficient parallelization of
training across distributed workers. We characterize CodedPrivateML's privacy
threshold and prove its convergence for logistic (and linear) regression.
Furthermore, via extensive experiments on Amazon EC2, we demonstrate that
CodedPrivateML provides significant speedup over cryptographic approaches based
on multi-party computing (MPC).
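The information-theoretic privacy rests on secret sharing; the sketch below
shows only a Shamir-style core over a prime field with invented parameters,
not CodedPrivateML's actual quantization and Lagrange-coding scheme:

```python
# Shamir-style secret sharing of an integer data matrix mod a prime P:
# any t_priv shares reveal nothing; t_priv + 1 shares reconstruct.
import numpy as np

P = 2_147_483_647                    # prime modulus (2^31 - 1)
rng = np.random.default_rng(3)

def share(secret, n_workers=5, t_priv=2):
    coeffs = [secret % P] + [rng.integers(0, P, secret.shape)
                             for _ in range(t_priv)]
    xs = list(range(1, n_workers + 1))
    shares = [sum((pow(x, k, P) * (c % P)) % P
                  for k, c in enumerate(coeffs)) % P for x in xs]
    return xs, shares

def reconstruct(xs, shares):
    total = np.zeros_like(shares[0])
    for xi, yi in zip(xs, shares):
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lam = num * pow(den, -1, P) % P     # Lagrange coefficient at x = 0
        total = (total + yi * lam) % P
    return total

data = rng.integers(0, 100, (2, 3))
xs, shares = share(data)
assert np.array_equal(reconstruct(xs[:3], shares[:3]), data)  # 3 = t_priv + 1
print(reconstruct(xs[:3], shares[:3]))
```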
|
We analyze the density and size dependence of the relaxation time for
kinetically constrained spin models (KCSM) intensively studied in the physical
literature as simple models sharing some of the features of a glass transition.
KCSM are interacting particle systems on $\Z^d$ with Glauber-like dynamics,
reversible w.r.t. a simple product i.i.d. Bernoulli($p$) measure. The
essential
feature of a KCSM is that the creation/destruction of a particle at a given
site can occur only if the current configuration of empty sites around it
satisfies certain constraints which completely define each specific model. No
other interaction is present in the model. From the mathematical point of view,
the basic issues concerning positivity of the spectral gap inside the
ergodicity region and its scaling with the particle density $p$ remained open
for most KCSM (with the notable exception of the East model in $d=1$
\cite{Aldous-Diaconis}). Here for the first time we: i) identify the ergodicity
region by establishing a connection with an associated bootstrap percolation
model; ii) develop a novel multi-scale approach which proves positivity of the
spectral gap in the whole ergodic region; iii) establish, sometimes optimal,
bounds on the behavior of the spectral gap near the boundary of the ergodicity
region and iv) establish pure exponential decay for the persistence function.
Our techniques are flexible enough to allow a variety of constraints and our
findings disprove certain conjectures which appeared in the physical literature
on the basis of numerical simulations.
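For intuition, the East model in $d=1$, the previously understood case, is
straightforward to simulate; a back-of-the-envelope sketch of its constrained
Glauber dynamics and persistence function:

```python
# East model sketch: site i may be resampled (Bernoulli(p)) only when its
# right neighbour is empty. We track the persistence function, i.e. the
# fraction of sites never yet updated.
import numpy as np

rng = np.random.default_rng(5)
L, p, sweeps = 200, 0.7, 2000

sigma = (rng.random(L) < p).astype(int)     # 1 = particle, 0 = empty
never_updated = np.ones(L, dtype=bool)

for _ in range(sweeps * L):                 # random single-site updates
    i = rng.integers(L)
    if sigma[(i + 1) % L] == 0:             # East constraint (periodic ring)
        sigma[i] = int(rng.random() < p)
        never_updated[i] = False

print(f"persistence after {sweeps} sweeps: {never_updated.mean():.3f}")
# As p -> 1 vacancies become rare, updates stall, and relaxation slows.
```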
|
In this paper we show that the weighted Bernstein-Walsh inequality in
logarithmic potential theory is sharp up to some new universal constant,
provided that the external field is given by a logarithmic potential. Our main
tool for such results is a new technique of discretization of logarithmic
potentials, where we take the same starting point as in earlier work of Totik
and of Levin \& Lubinsky, but add an important new ingredient, namely some new
mean value property for the cumulative distribution function of the underlying
measure. As an application, we revisit the work of Beckermann \& Kuijlaars on
the superlinear convergence of conjugate gradients. These authors have
determined the asymptotic convergence factor for sequences of systems of linear
equations with an asymptotic eigenvalue distribution. There was some numerical
evidence suggesting that the integral mean of Green functions occurring in
their work should also yield inequalities for the rate of convergence if one
makes a suitable link between measures and the eigenvalues of a single matrix
of coefficients. We prove this conjecture, at least for a class of measures
which is of particular interest for applications.
|
In this paper, we present an analysis of the dynamics and segregation of
galaxies in rich clusters from z~0.32 to z~0.48 taken from the CFHT Optical
PDCS (COP) survey and from the CNOC survey (Carlberg et al. 1997). Our results
from the COP survey are based upon the recent observational work of Adami et
al. (2000) and Holden et al. (2000) and use new spectroscopic and photometric
data on six clusters selected from the Palomar Distant Cluster Survey (PDCS;
Postman et al. 1996). We have compared the COP and CNOC samples to the ESO
Nearby Abell Cluster Survey (ENACS: z~0.07). Our sample shows that the z<0.4
clusters have the same velocity dispersion versus magnitude, morphological type
and radius relationships as nearby Abell clusters. The z~0.48 clusters exhibit,
however, departures from these relations. Furthermore, there appears to be a
higher fraction of late-type (or bluer, e.g. Butcher and Oemler, 1984) galaxies
in the distant clusters compared to the nearby ones. The classical scenario in
which massive galaxies virialize before they evolve from late into early type
explains our observations. In such a scenario, the clusters of our sample began
to form before a redshift of ~0.8 and the late-type galaxy population underwent
continuous infall into the clusters.
|
Task and Motion Planning (TAMP) algorithms solve long-horizon robotics tasks
by integrating task planning with motion planning; the task planner proposes a
sequence of actions towards a goal state and the motion planner verifies
whether this action sequence is geometrically feasible for the robot. However,
state-of-the-art TAMP algorithms do not scale well with the difficulty of the
task and require an impractical amount of time to solve relatively small
problems. We propose Constraints and Streams for Task and Motion Planning
(COAST), a probabilistically-complete, sampling-based TAMP algorithm that
combines stream-based motion planning with an efficient, constrained task
planning strategy. We validate COAST on three challenging TAMP domains and
demonstrate that our method outperforms baselines in terms of cumulative task
planning time by an order of magnitude. You can find more supplementary
materials on our project
\href{https://branvu.github.io/coast.github.io}{website}.
|
We investigate quantum tunneling in smooth symmetric and asymmetric
double-well potentials. Exact solutions for the ground and first excited states
are used to study the dynamics. We introduce Wigner's quasi-probability
distribution function to highlight and visualize the non-classical nature of
spatial correlations arising in tunneling.
|
Pathology has played a crucial role in the diagnosis and evaluation of
patient tissue samples obtained from surgeries and biopsies for many years. The
advent of Whole Slide Scanners and the development of deep learning
technologies have significantly advanced the field, leading to extensive
research and development in pathology AI (Artificial Intelligence). These
advancements have contributed to reducing the workload of pathologists and
supporting decision-making in treatment plans. Recently, large-scale AI models
known as Foundation Models (FMs), which are more accurate and applicable to a
wide range of tasks compared to traditional AI, have emerged, and expanded
their application scope in the healthcare field. Numerous FMs have been
developed in pathology, and there are reported cases of their application in
various tasks, such as disease diagnosis, rare cancer diagnosis, patient
survival prognosis prediction, biomarker expression prediction, and the scoring
of immunohistochemical expression intensity. However, several challenges remain
for the clinical application of FMs, which healthcare professionals, as users,
must be aware of. Research is ongoing to address these challenges. In the
future, it is expected that the development of Generalist Medical AI, which
integrates pathology FMs with FMs from other medical domains, will progress,
leading to the effective utilization of AI in real clinical settings to promote
precision and personalized medicine.
|
In this paper we consider the pairwise kidney exchange game. This game
naturally appears in situations where some service providers benefit from
pairwise allocations on a network, such as the kidney exchanges between
hospitals.
Ashlagi et al. present a $2$-approximation randomized truthful mechanism for
this problem. This is the best known result in this setting with multiple
players. However, we note that the variance of the utility of an agent in this
mechanism may be as large as $\Omega(n^2)$, which is not desirable in a real
application. In this paper we resolve this issue by providing a
$2$-approximation randomized truthful mechanism in which the variance of the
utility of each agent is at most $2+\epsilon$.
Interestingly, we could apply our technique to design a deterministic
mechanism such that, if an agent deviates from the mechanism, she does not gain
more than $2\lceil \log_2 m\rceil$. We call such a mechanism an almost truthful
mechanism. Indeed, in a practical scenario, an almost truthful mechanism is
likely to imply a truthful mechanism. We believe that our approach can be used
to design low risk or almost truthful mechanisms for other problems.
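Underneath such mechanisms, the allocation step is a maximum matching on the
compatibility graph; the sketch below shows that step alone, while the
incentive layer, the paper's actual subject, is not modelled:

```python
# Pairwise exchanges as a maximum-cardinality matching on the
# compatibility graph between patient-donor pairs (edges are invented).
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])

matching = nx.max_weight_matching(g, maxcardinality=True)
print(f"{len(matching)} exchanges: {sorted(map(sorted, matching))}")
```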
|
We present an algorithm that achieves almost optimal pseudo-regret bounds
against adversarial and stochastic bandits. Against adversarial bandits the
pseudo-regret is $O(K\sqrt{n \log n})$ and against stochastic bandits the
pseudo-regret is $O(\sum_i (\log n)/\Delta_i)$. We also show that no algorithm
with $O(\log n)$ pseudo-regret against stochastic bandits can achieve
$\tilde{O}(\sqrt{n})$ expected regret against adaptive adversarial bandits.
This complements previous results of Bubeck and Slivkins (2012) that show
$\tilde{O}(\sqrt{n})$ expected adversarial regret with $O((\log n)^2)$
stochastic pseudo-regret.
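For orientation, plain UCB1 attains the $O(\sum_i (\log n)/\Delta_i)$
stochastic pseudo-regret used as the benchmark above but offers no adversarial
guarantee; the sketch below shows that baseline, not the paper's algorithm:

```python
# UCB1 on Bernoulli arms: stochastic pseudo-regret grows logarithmically.
import math
import numpy as np

rng = np.random.default_rng(9)
means = [0.5, 0.45, 0.4]           # arm means; Delta_i = gaps to the best
K, n = len(means), 50_000
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, n + 1):
    if t <= K:
        a = t - 1                  # play each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * math.log(t) / counts)
        a = int(np.argmax(ucb))
    sums[a] += rng.random() < means[a]
    counts[a] += 1

pseudo_regret = n * max(means) - float(counts @ np.array(means))
print(f"pseudo-regret: {pseudo_regret:.1f} (grows like log n)")
```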
|
We report first-principles calculations for one of the few materials that is
believed to be a ferroelectric ferromagnet, Bi$_2$NiMnO$_6$. Our calculations
show that, contrary to what has been reported so far, bulk Bi$_2$NiMnO$_6$
does not have a polarization. Instead, like BiMnO$_3$, it crystallizes into a
centrosymmetric structure with space group $C2/c$. We also predict that
Bi$_2$NiMnO$_6$ will indeed be a ferroelectric ferromagnet if it is grown as an
epitaxial film on a substrate with in-plane square symmetry and a lattice
constant around 4~\AA, such as BaTiO$_3$ or PbZr$_{1-x}$Ti$_{x}$O$_{3}$.
|
The convolutional layers of standard convolutional neural networks (CNNs) are
equivariant to translation. However, the convolution and fully-connected layers
are not equivariant or invariant to other affine geometric transformations.
Recently, a new class of CNNs is proposed in which the conventional layers of
CNNs are replaced with equivariant convolution, pooling, and
batch-normalization layers. The final classification layer in equivariant
neural networks is invariant to different affine geometric transformations such
as rotation, reflection and translation, and the scalar value is obtained
either by eliminating the spatial dimensions of filter responses using
convolution and down-sampling throughout the network, or by averaging over the
filter responses. In this work, we propose to integrate orthogonal moments,
which give the high-order statistics of the function, as an effective means of
encoding global invariance with respect to rotation, reflection and
translation in fully-connected layers. As a result, the intermediate layers of
the network
become equivariant while the classification layer becomes invariant. The most
widely used Zernike, pseudo-Zernike and orthogonal Fourier-Mellin moments are
considered for this purpose. The effectiveness of the proposed work is
evaluated by integrating the invariant transition and fully-connected layer in
the architecture of group-equivariant CNNs (G-CNNs) on rotated MNIST and
CIFAR10 datasets.
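The property being exploited is that the magnitudes of Zernike moments are
rotation invariants; a small numerical sketch with unnormalized moments on a
coarse grid (so the invariance is only approximate) is given below:

```python
# Zernike moment magnitudes before and after a 90-degree image rotation.
import numpy as np
from math import factorial

def zernike_R(n, m, rho):
    m = abs(m)
    return sum((-1)**k * factorial(n - k)
               / (factorial(k) * factorial((n + m)//2 - k)
                  * factorial((n - m)//2 - k)) * rho**(n - 2*k)
               for k in range((n - m)//2 + 1))

def zernike_moment(img, n, m):                 # unnormalized, for comparison
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    V = zernike_R(n, m, rho) * np.exp(-1j * m * theta)
    return (img * V * (rho <= 1.0)).sum() * (n + 1) / np.pi

rng = np.random.default_rng(2)
img = rng.random((64, 64))
rot = np.rot90(img)
for n, m in [(2, 2), (3, 1), (4, 2)]:
    a, b = abs(zernike_moment(img, n, m)), abs(zernike_moment(rot, n, m))
    print(f"|A_{n}{m}|  original: {a:.3f}  rotated: {b:.3f}")
```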
|
Tunable, battery-free light emission is demonstrated in a solid state device
that is compatible with lab on a chip technology and easily fabricated via
solution processing techniques. A porous one dimensional (1D) photonic crystal
(also called Bragg stack or mirror) is infiltrated by chemiluminescence
rubrene-based reagents. The Bragg mirror has been designed to have the photonic
band gap overlapping with the emission spectrum of rubrene. The
chemiluminescence reaction occurs within the pores of the photonic crystal and
the emission spectrum of the dye is modulated according to the photonic band
gap position. This is a compact, powerless emitting source that can be
exploited in disposable photonic chips for sensing and point-of-care
applications.
|
We consider a distribution grid used to charge electric vehicles such that
voltage drops stay bounded. We model this as a class of resource-sharing
networks, known as bandwidth-sharing networks in the communication network
literature. We focus on resource-sharing networks that are driven by a class of
greedy control rules that can be implemented in a decentralized fashion. For a
large number of such control rules, we can characterize the performance of the
system by a fluid approximation. This leads to a set of dynamic equations that
take into account the stochastic behavior of EVs. We show that the invariant
point of these equations is unique and can be computed by solving a specific
ACOPF problem, which admits an exact convex relaxation. We illustrate our
findings with a case study using the SCE 47-bus network and several special
cases that allow for explicit computations.
|
On any Calabi-Yau manifold X one can define a certain sheaf of chiral N=2
superconformal field theories, known as the chiral de Rham complex of X. It
depends only on the complex structure of X, and its local structure is
described by a simple free field theory. We show that the cohomology of this
sheaf can be identified with the infinite-volume limit of the half-twisted
sigma-model defined by E. Witten more than a decade ago. We also show that the
correlators of the half-twisted model are independent of the Kahler moduli to
all orders in worldsheet perturbation theory, and that the relation to the
chiral de Rham complex can be violated only by worldsheet instantons.
|
The concept of a decentralized ledger usually implies that each node of a
blockchain network stores the entire blockchain. However, in the case of
popular blockchains, which each weigh several hundreds of GB, the large amount
of data to be stored can incite new or low-capacity nodes to run lightweight
clients. Such nodes do not participate to the global storage effort and can
result in a centralization of the blockchain by very few nodes, which is
contrary to the basic concepts of a blockchain.
To avoid this problem, we propose new low-storage nodes that store a reduced
amount of data generated from the blockchain by using erasure codes. The
properties of this technique ensure that any block of the chain can easily be
rebuilt from a small number of such nodes. This system should encourage
low-storage nodes to contribute to the storage of the blockchain and to
maintain decentralization despite the globally increasing size of the
blockchain. It paves the way to new types of blockchains which would be
managed only by low-capacity nodes.
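A toy sketch of the k-of-n idea follows: any k surviving coded chunks rebuild
a block via polynomial interpolation. A small prime field keeps the sketch
short; real deployments would use Reed-Solomon codes over GF(2^8):

```python
# Systematic k-of-n erasure code over a prime field: chunks 1..k hold the
# data, chunks k+1..n hold parity; any k chunks rebuild the block.
P = 257                                    # prime > 255, so bytes fit

def lagrange_eval(points, x):
    # Evaluate the unique degree < k polynomial through `points` at x.
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(data_chunks, n_nodes):
    k = len(data_chunks)
    pts = list(enumerate(data_chunks, start=1))
    return pts + [(x, lagrange_eval(pts, x)) for x in range(k + 1, n_nodes + 1)]

block = [10, 200, 33, 7]                   # a 4-byte "block", k = 4
coded = encode(block, n_nodes=6)           # stored on 6 low-storage nodes
survivors = [coded[1], coded[3], coded[4], coded[5]]   # any 4 suffice
rebuilt = [lagrange_eval(survivors, x) for x in (1, 2, 3, 4)]
assert rebuilt == block
print("rebuilt block:", rebuilt)
```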
|
The general solution of M\o ller's field equations in case of spherical
symmetry is derived. The previously obtained solutions are verified as special
cases of the general solution.
|
We have isolated a sample of 14 candidate variable objects with extended
image structure to Bj = 22.5 in 0.284 deg^2 of Selected Area 57. The majority
of candidates are blue (U-B<0) and relatively compact. At fainter magnitudes,
there is a steep rise in the number of variable objects. These objects are also
compact and blue, and some of them are likely to be truly stellar. Twelve of
the Bj <= 22.5 candidates have been observed spectroscopically over limited
ranges of wavelength and a variety of resulting signal-to-noise ratios. Five
candidates display spectra consistent with Seyfert-like activity. In most cases
where we have not been able to confirm a Seyfert spectroscopic type, the
spectra are of insufficient quality or coverage to rule out such a
classification. The majority of candidates have luminosities less than 10% of
the nominal demarcation between QSOs and AGN (M(B) = -23). The surface density
of confirmed M(B) > -23 AGN to Bj = 22, including stellar sources, is
~40/deg^2, in good agreement with other surveys at this depth. The confirmed
AGN in extended sources make up 36% of this population. Thus, the application
of a variability criterion to images with extended structure enhances the
completeness of the census of active nuclei. If the majority of our candidates
are bona fide AGN, the surface density could be as high as 82/deg^2 for M(B) >
-23, and 162/deg^2 for all luminosities to Bj = 22, with extended sources
contributing up to 33% of the total. (abridged)
|
In an atmosphere, a cloud condensation region is characterized by a strong
vertical gradient in the abundance of the related condensing species. On Earth,
the ensuing gradient of mean molecular weight has relatively few dynamical
consequences because N$_2$ is heavier than water vapor, so that only the
release of latent heat significantly impacts convection. By contrast, in a
hydrogen-dominated atmosphere (e.g. giant planets), all condensing species are
significantly heavier than the background gas. This can stabilize the
atmosphere against convection near a cloud deck if the enrichment in the given
species exceeds a critical threshold. This raises two questions. What is
transporting energy in such a stabilized layer, and how affected can the
thermal profile of giant planets be? To answer these questions, we first carry
out a linear analysis of the convective and double-diffusive instabilities in a
condensable medium showing that an efficient condensation can suppress
double-diffusive convection. This suggests that a stable radiative layer can
form near a cloud condensation level, leading to an increase in the temperature
of the deep adiabat. Then, we investigate the impact of the condensation of the
most abundant species---water---with a steady-state atmosphere model. Compared
to standard models, the temperature increase can reach several hundred degrees
at the quenching depth of key chemical tracers. Overall, this effect could have
many implications for our understanding of the dynamical and chemical state of
the atmosphere of giant planets, for their future observations (with Juno for
example), and for their internal evolution.
|
Many data mining and statistical machine learning algorithms have been
developed to select a subset of covariates to associate with a response
variable. Spurious discoveries can easily arise in high-dimensional data
analysis due to the enormous number of possible selections. How can we know
statistically whether our discoveries are better than those obtained by
chance? In this paper, we
define a measure of goodness of spurious fit, which shows how good a response
variable can be fitted by an optimally selected subset of covariates under the
null model, and propose a simple and effective LAMM algorithm to compute it. It
coincides with the maximum spurious correlation for linear models and can be
regarded as a generalized maximum spurious correlation. We derive the
asymptotic distribution of such goodness of spurious fit for generalized linear
models and $L_1$ regression. Such an asymptotic distribution depends on the
sample size, ambient dimension, the number of variables used in the fit, and
the covariance information. It can be consistently estimated by multiplier
bootstrapping and used as a benchmark to guard against spurious discoveries. It
can also be applied to model selection, which considers only candidate models
with goodness of fits better than those by spurious fits. The theory and method
are convincingly illustrated by simulated examples and an application to the
binary outcomes from German Neuroblastoma Trials.
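In the simplest case, a linear model with subset size one, the benchmark is
the maximum spurious correlation, which can be estimated by direct Monte
Carlo; the sketch below illustrates only that special case, not the LAMM
algorithm:

```python
# Null distribution of the maximum absolute correlation between a
# response and p covariates when there is no signal at all.
import numpy as np

rng = np.random.default_rng(11)
n, p, reps = 100, 1000, 200

max_corrs = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)             # null model: y independent of X
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    max_corrs.append(np.abs(Xc.T @ yc).max() / n)

print(f"mean maximum spurious correlation: {np.mean(max_corrs):.3f}")
# Even with no signal, the best single covariate shows a sizeable
# correlation; this null distribution is the benchmark to guard against.
```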
|
Empathy is a cognitive and emotional reaction to the observed situation of
others. It has recently attracted interest because it has numerous
applications in psychology and AI, but it is unclear how different forms of
empathy (e.g., self-report vs counterpart other-report, concern vs. distress)
interact with other affective phenomena or demographics like gender and age. To
better understand this, we created the {\it Empathic Conversations} dataset of
annotated negative, empathy-eliciting dialogues in which pairs of participants
converse about news articles. People differ in their perception of the empathy
of others. These differences are associated with certain characteristics such
as personality and demographics. Hence, we collected detailed characterization
of the participants' traits, their self-reported empathetic response to news
articles, their conversational partner other-report, and turn-by-turn
third-party assessments of the level of self-disclosure, emotion, and empathy
expressed. This dataset is the first to present empathy in multiple forms along
with personal distress, emotion, personality characteristics, and person-level
demographic information. We present baseline models for predicting some of
these features from conversations.
|
No abstract; review only
|
Previous experiments have found mixed results on whether honesty is intuitive
or requires deliberation. Here we add to this literature by building on prior
work of Capraro (2017). We report a large study (N=1,389) manipulating time
pressure vs time delay in a deception game. We find that, in this setting,
people are more honest under time pressure, and that this result is not driven
by confounds present in earlier work.
|
The 1-loop renormalization group equations for the minimal Z' models
encompassing a type-I seesaw mechanism are studied in light of the 125 GeV
Higgs boson discovery. This model is taken as a benchmark for the general case
of singlet extensions of the standard model. The most important result is that
negative scalar mixing angles are favoured with respect to positive values.
Further, a minimum value for the latter exists, as well as a maximum value for
the masses of the heavy neutrinos, depending on the vacuum expectation value of
the singlet scalar.
|