We investigate the problem of the equivalence of $L^q$-Sobolev norms in
Malliavin spaces for $q\in [1,\infty)$, focusing on the graph norm of the
$k$-th Malliavin derivative operator and the full Sobolev norm involving all
derivatives up to order $k$, where $k$ is any positive integer. The case $q=1$
in the infinite-dimensional setting is challenging, since at this endpoint the
standard approach based on Meyer's inequalities fails. In this direction, we
are able to establish the mentioned equivalence for $q=1$ and $k=2$ relying on
a vector-valued Poincar\'e inequality that we prove independently and that
turns out to be new at this level of generality, while for $q=1$ and $k>2$ the
equivalence issue remains open, although we obtain some functional estimates of
independent interest. With our argument (which also relies on the Wiener chaos)
we are able to recover the case $q\in (1,\infty)$ in the infinite-dimensional
setting; the latter has been known since the eighties, but our proof is more
direct than those existing in the literature and allows us to give explicit
bounds on all the multiplying constants involved in the functional
inequalities. Finally, we also deal with the finite-dimensional case for all
$q\in [1,\infty)$ (where the equivalence, without explicit constants, follows
from standard compactness arguments): our proof in such setting is much
simpler, relying on Gaussian integration-by-parts formulas and an adaptation of
Sobolev inequalities in Euclidean spaces, and it provides again quantitative
bounds on the multiplying constants, which however blow up when the dimension
diverges to $\infty$ (whence the need for a different approach in the
infinite-dimensional setting).
|
We study the analytic continuation in the complex plane of the free energy of
the Ising model on diamond-like hierarchical lattices. It is known that the
singularities of the free energy of this model lie on the Julia set of a
rational endomorphism $f$ related to the action of the Migdal-Kadanoff
renormalization group. We study the asymptotics of the free energy as the
temperature goes along hyperbolic geodesics to the boundary of an attractive
basin of $f$. We prove that for almost all (with respect to the harmonic
measure) geodesics the complex critical exponent is the same, and we compute it.
|
We derive a link representation for all tree amplitudes in N=8 supergravity,
starting from a recent conjecture by Cachazo and Skinner. The new formula
explicitly
writes amplitudes as contour integrals over constrained link variables, with an
integrand naturally expressed in terms of determinants, or equivalently tree
diagrams. Important symmetries of the amplitude, such as supersymmetry, parity
and (partial) permutation invariance, are kept manifest in the formulation. We
also comment on rewriting the formula in a GL(k)-invariant manner, which may
serve as a starting point for the generalization to possible Grassmannian
contour integrals.
|
We introduce a class of discrete time stationary trawl processes taking real
or integer values and written as sums of past values of independent `seed'
processes on shrinking intervals (`trawl heights'). Related trawl processes in
continuous time were studied in Barndorff-Nielsen (2011) and Barndorff-Nielsen
et al. (2014); in our case, however, the i.i.d. seed processes can be very
general and need not be infinitely divisible. When the trawl height
decays with the lag as $j^{-\alpha}$ for some $1< \alpha < 2 $, the trawl
process exhibits long memory and its covariance decays as $j^{1-\alpha}$. We
show that under general conditions on the generic seed process, the normalized
partial sums of such a trawl process may tend either to a fractional Brownian
motion or to an $\alpha$-stable L\'evy process.
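As a hedged illustration of the construction (not from the paper), the sketch below simulates a discrete-time trawl process with i.i.d. Brownian-motion seeds, a particular choice the paper does not require, and trawl heights $a_j \sim j^{-\alpha}$, then compares the empirical covariance with the theoretical value $\gamma(h)=\sum_k \min(a_k, a_{k+h})$; all variable names (`J`, `W`, etc.) are illustrative.

```python
import numpy as np

# Illustrative simulation: X(t) = sum_j V_{t-j}(a_j) with trawl heights
# a_j ~ j^{-alpha}, 1 < alpha < 2, and i.i.d. Brownian-motion seeds V_s,
# so Cov(V(x), V(y)) = min(x, y) and the covariance of X decays ~ j^{1-alpha}.
rng = np.random.default_rng(0)
alpha, n, J = 1.5, 10_000, 200
a = np.arange(1, J + 1) ** (-alpha)              # trawl heights, decreasing

# Evaluate each seed's Brownian path at the heights a_j (built on the
# ascending grid of heights, then reordered so that W[s, j] = V_s(a_j)).
asc = a[::-1]
inc = np.diff(np.concatenate(([0.0], asc)))      # BM variance increments
W = np.cumsum(rng.standard_normal((n + J, J)) * np.sqrt(inc), axis=1)[:, ::-1]

# X(t) = sum_j V_{t-j}(a_j): sum every past seed at its current trawl height.
X = np.zeros(n)
for j in range(J):
    X += W[J - 1 - j : J - 1 - j + n, j]

# Theoretical covariance: gamma(h) = sum_k min(a_k, a_{k+h}) = sum_k a_{k+h}.
theo_var = a.sum()
theo_cov1 = a[1:].sum()
```

With these seeds the covariance at lag $h$ reduces to the tail sum of the heights, which decays like $h^{1-\alpha}$, matching the long-memory claim above.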
|
The main purpose of this paper is to introduce the geometric difference
sequence space
$l_\infty^{G} (\Delta_G)$ and prove that $l_\infty^{G} ({\Delta}_{G})$ is a
Banach space with respect to the norm $\left\|\cdot\right\|^G_{{\Delta}_G}.$ We
also compute the $\alpha$-dual, $\beta$-dual and $\gamma$-dual spaces. Finally,
we obtain the geometric Newton-Gregory interpolation formulae.
|
Results of an exploratory study of the antinucleon-nucleon interaction within
chiral effective field theory are reported. The antinucleon-nucleon potential
is derived up to next-to-next-to-leading order, based on a modified Weinberg
power counting, in close analogy to pertinent studies of the nucleon-nucleon
interaction. The low-energy constants associated with the arising contact
interactions are fixed by a fit to phase shifts and inelasticities provided by
a recently published phase-shift analysis of antiproton-proton scattering data.
The overall quality of the achieved description of the antinucleon-nucleon
amplitudes is comparable to the one found in the case of the nucleon-nucleon
interaction at the same order. For most S-waves and several P-waves good
agreement with the antinucleon-nucleon phase shifts and inelasticities is
obtained up to laboratory energies of around 200 MeV.
|
We explore how the slopes and scatters of the scaling relations of disk
galaxies (Vm-L[-M], R-L[-M], and Vm-R) change when moving from the B to the K
band and to stellar and baryonic quantities. For our compiled sample of 76
normal, non-interacting high and low surface brightness galaxies, we find some
changes, which point to evolutionary effects, mainly related to gas infall and star
formation (SF). We also explore correlations among the (B-K) color, stellar
mass fraction fs, mass M (luminosity L), and surface density (SB), as well as
correlations among the residuals of the scaling relations. Some of our findings
are: (i) the scale length Rb is a third parameter in the baryonic TF relation
and the residuals of this relation follow a trend (slope ~-0.15) with the
residuals of the Rb-Mb relation; for the stellar and K-band cases, R is no
longer a third parameter and the mentioned trend disappears; (ii) among the
TFRs, the B-band TFR is the most scattered; in this case, the color is a third
parameter; (iii) the LSB galaxies break some observed trends, which suggests a
threshold in the gas surface density Sg, below which the SF becomes independent
of the gas infall rate and Sg. Our results are interpreted and discussed in the
light of LCDM-based models of galaxy evolution. The models explain not only the
baryonic scaling relations, but also most of the processes responsible for the
observed changes in the slopes, scatters, and correlations among the residuals
when changing to stellar and luminous quantities. The baryon fraction is
required to be smaller than 0.05 on average. We detect some potential
difficulties for the models: the observed color-M and surface density-M
correlations are steeper, and the intrinsic scatter in the baryonic TFR is
smaller than those predicted. [abridged]
|
In a secure coded caching system, a central server balances the traffic flow
between peak and off-peak periods by distributing some public data to the
users' caches in advance. Meanwhile, these data are securely protected against
the possible colluding users, who might share their cache. We model the system
as a flow network and study its capacity region via a network
information-theoretic approach. Since characterizing the capacity region
directly is difficult, our approach is twofold from the perspective of network
information theory. On the one hand, we establish an inner bound on the
capacity region by proposing a coded caching scheme that achieves low-load
secure data transmission. On the other hand, we establish outer bounds on the
capacity region, which show that our proposed scheme is order-optimal under
specific circumstances.
|
Recording simultaneous activity of hundreds of neurons is now possible.
Existing methods can model such population activity, but do not directly reveal
the computations used by the brain. We present a fully unsupervised method that
models neuronal activity and reveals the computational strategy. The method
constructs a topological model of neuronal dynamics consisting of
interconnected loops. Transitions between loops mark computationally-salient
decisions. We accurately model the activation of hundreds of neurons in the
primate cortex during a working memory task. The dynamics of a recurrent neural
network (RNN) trained on the same task are topologically identical, suggesting
that a similar computational strategy is used. An RNN trained on a modified
dataset, however, reveals a different topology. This topology predicts, with
near-perfect accuracy, specific novel stimuli that consistently elicit
incorrect responses. Thus, our methodology yields a quantitative model of neuronal
activity and reveals the computational strategy used to solve the task.
|
The transport properties of a simple model for a finite level structure (a
molecule or a dot) connected to metal electrodes in an alternating-current
scanning tunneling microscope (AC-STM) configuration are studied. The finite
level structure is assumed to have strong binding properties with the metallic
substrate, and the bias between the STM tip and the hybrid metal-molecule
interface has both an AC and a DC component. The finite frequency current
response and the zero frequency photo-assisted shot noise are computed using
the Keldysh technique, and examples for a single site molecule (a quantum dot)
and for a two-site molecule are examined. The model may be useful for the
interpretation of recent experiments using an AC-STM for the study of both
conducting and insulating surfaces, where the third harmonic component of the
current is measured. The zero frequency photo-assisted shot noise serves as a
useful diagnostic for analyzing the energy level structure of the molecule. The
present work motivates the need for further analysis of current fluctuations in
electronic molecular transport.
|
In their article, Wang et al. [1] report a new scheme for THz heterodyne
detection using a laser-driven LTG-GaAs photomixer [2, 3] and make the
impressive claim of achieving near quantum-limited sensitivity at room
temperature. Unfortunately, their experimental methodology is incorrect, and
furthermore the paper provides no information on the mixer conversion loss, an
important quantity that could readily have been measured and reported as a
consistency check. The paper thus offers no reliable experimental evidence that
substantiates the claimed sensitivities. To the contrary, the very high value
reported for their photomixer impedance strongly suggests that the conversion
loss is quite poor and that the actual sensitivity is far worse than claimed.
|
Building on previous work, we calculate the temperature- and
frequency-dependent {\it anomalous} Hall conductivity for the putative
multiband chiral superconductor Sr$_2$RuO$_4$ using a simple microscopic two-orbital
model without impurities. A Hall effect arises in this system without the
application of an external magnetic field due to the time-reversal-symmetry
breaking chiral superconducting state. The anomalous Hall conductivity is
nonzero only when there is more than one superconducting order parameter,
involving inter- as well as intra-band Cooper pairing. We find that such a
multiband superconducting state gives rise to a distinctive resonance in the
frequency-dependence of the Hall conductivity at a frequency close to the
inter-orbital hopping energy scale that describes hopping between Ru $d_{xz}$
and $d_{yz}$ orbitals. The detection of this feature, robust to temperature and
impurity effects in the superconducting phase, would thus constitute compelling
evidence in favour of a multiband origin of superconductivity in Sr$_2$RuO$_4$, with
strong superconductivity on the $\alpha$ and $\beta$ bands. The temperature
dependences of the Hall conductivity and the Kerr rotation angle are studied
within this model in the one-loop approximation.
|
We introduce AdaCoSeg, a deep neural network architecture for adaptive
co-segmentation of a set of 3D shapes represented as point clouds. Differently
from the familiar single-instance segmentation problem, co-segmentation is
intrinsically contextual: how a shape is segmented can vary depending on the
set it is in. Hence, our network features an adaptive learning module to
produce a consistent shape segmentation which adapts to a set. Specifically,
given an input set of unsegmented shapes, we first employ an offline
pre-trained part prior network to propose per-shape parts. Then, the
co-segmentation network iteratively and jointly optimizes the part labelings
across the set subject to a novel group consistency loss defined by matrix
ranks. While the part prior network can be trained with noisy and
inconsistently segmented shapes, the final output of AdaCoSeg is a consistent
part labeling for the input set, with each shape segmented into up to (a
user-specified) K parts. Overall, our method is weakly supervised, producing
segmentations tailored to the test set, without consistent ground-truth
segmentations. We show qualitative and quantitative results from AdaCoSeg and
evaluate it via ablation studies and comparisons to state-of-the-art
co-segmentation methods.
|
For a smooth irreducible affine algebraic variety we study a class of gauge
modules admitting compatible actions of both the algebra $A$ of functions and
the Lie algebra $\mathcal{V}$ of vector fields on the variety. We prove that a
gauge module corresponding to a simple $\mathfrak{gl}_N$-module is irreducible
as a module over the Lie algebra of vector fields unless it appears in the de
Rham complex.
|
We deal with a model where a set of observations is obtained by a linear
superposition of unknown components called sources. The problem consists in
recovering the sources without knowing the linear transform. We extend the
well-known Independent Component Analysis (ICA) methodology. Instead of
assuming independent source components, we assume that the source vector is a
probability mixture of two distributions. Only one distribution satisfies the
ICA assumptions, while the other one is concentrated on a specific but unknown
support. Sample points from the latter are clustered based on a data-driven
distance in a fully unsupervised approach. A theoretical grounding is provided
through a link with the Christoffel function. Simulation results validate our
approach and illustrate that it is an extension of a formerly proposed method.
|
We study Landau damping in the 1+1D Vlasov-Poisson system using a
Fourier-Hermite spectral representation. We describe the propagation of free
energy in phase space using forwards and backwards propagating Hermite modes
recently developed for gyrokinetics [Schekochihin et al. (2014)]. The change in
the electric field corresponds to the net Hermite flux via a free energy
evolution equation. In linear Landau damping, decay in the electric field
corresponds to forward propagating Hermite modes; in nonlinear damping, the
initial decay is followed by a growth phase characterised by the generation of
backwards propagating Hermite modes by the nonlinear term. The free energy
content of the backwards propagating modes increases exponentially until
balancing that of the forward propagating modes. Thereafter there is no
systematic net Hermite flux, so the electric field cannot decay and the
nonlinearity effectively suppresses Landau damping. These simulations are
performed using the fully-spectral 5D gyrokinetics code SpectroGK [Parker et
al. 2014], modified to solve the 1+1D Vlasov-Poisson system. This captures
Landau damping via an iterated Lenard-Bernstein collision operator or via
Hou-Li filtering in velocity space. Therefore the code is applicable even in
regimes where phase-mixing and filamentation are dominant.
|
We propose a simple yet effective and robust method for contrastive
captioning: generating discriminative captions that distinguish target images
from very similar alternative distractor images. Our approach is built on a
pragmatic inference procedure that formulates captioning as a reference game
between a speaker, which produces possible captions describing the target, and
a listener, which selects the target given the caption. Unlike previous methods
that derive both speaker and listener distributions from a single captioning
model, we leverage an off-the-shelf CLIP model to parameterize the listener.
Compared with captioner-only pragmatic models, our method benefits from rich
vision language alignment representations from CLIP when reasoning over
distractors. Like previous methods for discriminative captioning, our method
uses a hyperparameter to control the tradeoff between the informativity (how
likely captions are to allow a human listener to discriminate the target image)
and the fluency of the captions. However, we find that our method is
substantially more robust to the value of this hyperparameter than past
methods, which allows us to automatically optimize the captions for
informativity, outperforming past methods for discriminative captioning by 11%
to 15% accuracy in human evaluations.
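The speaker-listener tradeoff can be sketched as follows, with a toy similarity matrix standing in for real CLIP scores; `lam` plays the role of the informativity hyperparameter, and all names and numbers are illustrative, not the paper's implementation.

```python
import numpy as np

def pragmatic_rerank(speaker_logp, sim, target, lam):
    """Score candidate captions by (1 - lam) * speaker log-prob (fluency) plus
    lam * listener log-prob (informativity), where the listener distribution
    is a softmax over images of caption-image similarities (a CLIP-style
    stand-in here). Returns the index of the best caption for the target."""
    listener_logp = sim - np.logaddexp.reduce(sim, axis=1, keepdims=True)
    scores = (1 - lam) * speaker_logp + lam * listener_logp[:, target]
    return int(np.argmax(scores))

# Toy example: caption 0 is fluent but matches both images equally well;
# caption 1 is less fluent but matches only the target image 0.
speaker_logp = np.array([-1.0, -3.0])
sim = np.array([[2.0, 2.0],
                [2.0, -1.0]])
```

At `lam = 0` the fluent generic caption wins; raising `lam` shifts the choice to the caption that lets the listener pick out the target among the distractors.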
|
We continue our investigation of the parameter space of families of
polynomial skew products. Assuming that the base polynomial has a Julia set not
totally disconnected and is neither a Chebyshev nor a power map, we prove that,
near any bifurcation parameter, one can find parameters where $k$ critical
points bifurcate \emph{independently}, with $k$ up to the dimension of the
parameter space. This is a striking difference with respect to the
one-dimensional case. The proof is based on a variant of the inclination lemma,
applied to the postcritical set at a Misiurewicz parameter. By means of an
analytical criterion for the non-vanishing of the self-intersections of the
bifurcation current, we deduce the equality of the supports of the bifurcation
current and the bifurcation measure for such families. Combined with results by
Dujardin and Taflin, this also implies that the support of the bifurcation
measure in these families has non-empty interior. As part of our proof we
construct, in these families, subfamilies of codimension 1 where the
bifurcation locus has non-empty interior. This provides a new, independent
proof of the existence of holomorphic families of arbitrarily large dimension
whose bifurcation locus has non-empty interior. Finally, it shows that the Hausdorff
dimension of the support of the bifurcation measure is maximal at any point of
its support.
|
Dielectric constant and absorption measurements on boron doped silicon
samples show that transitions between the acceptor energy levels can be induced
by an applied resonant ac electric field and the Stark tuning of level spacing
with an external DC electric field. Relatively long decoherence times were
observed by electric echo measurements in a Si sample with low boron dopant
concentration. The scalable acceptor-based system is a promising charge-qubit
candidate for quantum computing.
|
A significant amount of high-impact contemporary scientific research occurs
where biology, computer science, engineering and chemistry converge. Although
programmes have been put in place to support such work, the complex dynamics of
interdisciplinarity are still poorly understood. In this paper we highlight
potential barriers to effective research across disciplines, and suggest, using
a case study, possible mechanisms for removing these impediments.
|
We describe a search for H-alpha emission-lined stars in M31, M33, and seven
dwarfs in or near the Local Group (IC 10, NGC 6822, WLM, Sextans B, Sextans A,
Pegasus and the Phoenix dwarf) using interference filter imaging with the KPNO
and CTIO 4-m telescopes and Mosaic cameras. The survey is aimed primarily at
identifying new Luminous Blue Variables (LBVs) from their spectroscopic
similarity to known LBVs, avoiding the bias towards photometric variability,
which may require centuries to manifest itself if LBVs go through long
quiescent periods. Follow-up spectroscopy with WIYN confirms that our survey
detected a wealth of stars whose spectra are similar to the known LBVs. We
"classify" the spectra of known LBVs, and compare these to the spectra of the
new LBV candidates. We demonstrate spectacular spectral variability for several
of the new LBV candidates, such as AM2, previously classified as a Wolf-Rayet
star, which now shows FeI, FeII and Balmer emission lines but neither the NIII
4634,42 nor HeII 4686 emission that it did in 1982. Profound spectral changes
are also noted for other suspected and known LBVs. Several of the LBV
candidates also show >0.5 mag changes in V over the past 10-20 years. The
number of known or suspected LBVs is now 24 in M31, 37 in M33, 1 in NGC 6822,
and 3 in IC 10. We estimate that the total number of LBVs in M31 and M33 may be
several hundred, in contrast to the 8 known historically through large-scale
photometric variability. This has significant implications for the time scale
of the LBV phase. We also identify a few new WRs and peculiar emission-lined
objects.
|
In scattering experiments, physicists observe so-called resonances as peaks
at certain energy values in the measured scattering cross sections per solid
angle. These peaks are usually associated with certain scattering processes,
e.g., emission, absorption, or excitation of certain particles and systems. On
the other hand, mathematicians define resonances as poles of an analytic
continuation of the resolvent operator through complex dilations. A major
challenge is to relate these scattering and resonance theoretical notions,
e.g., to prove that the poles of the resolvent operator induce the above
mentioned peaks in the scattering matrix. In the case of quantum mechanics,
this problem was addressed in numerous works that culminated in Simon's seminal
paper [33] in which a general solution was presented for a large class of pair
potentials. However, in quantum field theory the analogous problem has been
open for several decades despite the fact that scattering and resonance
theories have been well-developed for many models. In certain regimes these
models describe very fundamental phenomena, such as emission and absorption of
photons by atoms, from which quantum mechanics originated. In this work we
present a first non-perturbative formula that relates the scattering matrix to
the resolvent operator in the massless Spin-Boson model. This result can be
seen as major progress compared to our previous works [13] and [12], in which
we only managed to derive a perturbative formula.
|
The "Condor Array Telescope" or "Condor" is a high-performance "array
telescope" comprised of six apochromatic refracting telescopes of objective
diameter 180 mm, each equipped with a large-format, very low-read-noise
($\approx 1.2$ e$^-$), very rapid-read-time ($< 1$ s) CMOS camera. Condor is
located at a very dark astronomical site in the southwest corner of New Mexico,
at the Dark Sky New Mexico observatory near Animas, roughly midway between (and
more than 150 km from either) Tucson and El Paso. Condor enjoys a wide field of
view ($2.29 \times 1.53$ deg$^2$ or 3.50 deg$^2$), is optimized for measuring
both point sources and extended, very low-surface-brightness features, and for
broad-band images can operate at a cadence of 60 s (or even less) while
remaining sky-noise limited with a duty cycle near 100\%. In its normal mode of
operation, Condor obtains broad-band exposures of exposure time 60 s over dwell
times spanning dozens or hundreds of hours. In this way, Condor builds up deep,
sensitive images while simultaneously monitoring tens or hundreds of thousands
of point sources per field at a cadence of 60 s. Condor is also equipped with
diffraction gratings and with a set of He II 468.6 nm, [O III] 500.7 nm, He I
587.5 nm, H$\alpha$ 656.3 nm, [N II] 658.4 nm, and [S II] 671.6 nm narrow-band
filters, allowing it to address a variety of broad- and narrow-band science
issues. Given its unique capabilities, Condor can access regions of
"astronomical discovery space" that have never before been studied. Here we
introduce Condor and describe various aspects of its performance.
|
Linear nested codes, where two or more sub-codes are nested in a global code,
have been proposed as candidates for reliable multi-terminal communication. In
this paper, we consider nested array-based spatially coupled low-density
parity-check (SC-LDPC) codes and propose a line-counting-based optimization
scheme for minimizing the number of dominant absorbing sets in order to improve
their performance in the high signal-to-noise-ratio regime. Since the
parity-check matrices of different nested sub-codes partially overlap, the
optimization of one nested sub-code imposes constraints on the optimization of
the other sub-codes. To tackle these constraints, a multi-step optimization
process is applied first to one of the nested codes, then sequential
optimization of the remaining nested codes is carried out based on the
constraints imposed by the previously optimized sub-codes. Results show that
the order of optimization has a significant impact on the number of dominant
absorbing sets in the Tanner graph of the code, resulting in a tradeoff between
the performance of a nested code structure and its optimization sequence: the
code which is optimized without constraints has fewer harmful structures than
the code which is optimized with constraints. We also show that for certain
code parameters, dominant absorbing sets in the Tanner graphs of all nested
codes are completely removed using our proposed optimization strategy.
|
An elastic disk is coated with an elastic rod, uniformly prestressed with a
tensile or compressive axial force. The prestress state is assumed to be
induced by three different models of external radial load or by 'shrink-fit'
forcing the coating onto the disk. The prestressed coating/disk system, when
loaded with an additional and arbitrary incremental external load, experiences
incremental displacement, strain, and stress, which are solved via complex
potentials. The analysis incorporates models for both perfect and imperfect
bonding at the coating/disk interface. The derived solution highlights the
significant influence not only of the prestress but also of the method employed
to generate it. These two factors lead, in different ways, to a loss or an
increase in incremental stiffness for compressive or tensile prestress. The
first bifurcation load of the structure (which differs between methods of
prestress generation) is determined in a perturbative way. The results emphasize the
importance of modelling the load and may find applications in flexible
electronics and robot arms subject to pressure or uniformly-distributed radial
forces.
|
Besides the structure of interactions within networks, the interactions
between networks are also of the utmost importance. We therefore study the outcome
of the public goods game on two interdependent networks that are connected by
means of a utility function, which determines how payoffs on both networks
jointly influence the success of players in each individual network. We show
that an unbiased coupling allows the spontaneous emergence of interdependent
network reciprocity, which is capable of maintaining healthy levels of public
cooperation even in extremely adverse conditions. The mechanism, however,
requires simultaneous formation of correlated cooperator clusters on both
networks. If this does not emerge or if the coordination process is disturbed,
network reciprocity fails, resulting in the total collapse of cooperation.
Network interdependence can thus be exploited effectively to promote
cooperation past the limits imposed by isolated networks, but only if the
coordination between the interdependent networks is not disturbed.
|
Assuming Lehmer's conjecture, we estimate the degree of the trace field
$K(M_{p/q})$ of a hyperbolic Dehn filling $M_{p/q}$ of a 1-cusped hyperbolic
3-manifold $M$ by $$ \dfrac{1}{C}(\max\;\{|p|,|q|\})\leq \text{deg }K(M_{p/q})
\leq C(\max\;\{|p|,|q|\}) $$ where $C=C_M$ is a constant that depends on $M$.
|
Click-through rate (CTR) prediction is a fundamental technique in
recommendation and advertising systems. Recent studies have shown that
learning a unified model to serve multiple domains is effective in improving
the overall performance. However, it is still challenging to improve generalization
across domains under limited training data, and hard to deploy current
solutions due to their computational complexity. In this paper, we propose a
simple yet effective framework, AdaSparse, for multi-domain CTR prediction,
which learns an adaptively sparse structure for each domain, achieving better
generalization across domains with lower computational cost. In AdaSparse, we
introduce domain-aware neuron-level weighting factors to measure the importance
of neurons, so that for each domain our model can prune redundant neurons to
improve generalization. We further add flexible sparsity regularizations to
control the sparsity ratio of learned structures. Offline and online
experiments show that AdaSparse outperforms previous multi-domain CTR models
significantly.
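The neuron-pruning idea can be rendered as a minimal sketch, not the paper's implementation: a hypothetical gating network maps a domain embedding to per-neuron weighting factors, small factors are clipped to zero, and an L1-style term stands in for the flexible sparsity regularization; names such as `Wg`, `bg`, and `eps` are illustrative assumptions.

```python
import numpy as np

def domain_aware_gate(h, d, Wg, bg, eps=0.05):
    """Sketch of domain-aware neuron-level weighting: a small gating net maps
    the domain embedding d to per-neuron factors in (0, 1); factors below eps
    are clipped to exactly zero, pruning those neurons for that domain."""
    pi = 1.0 / (1.0 + np.exp(-(d @ Wg + bg)))   # neuron importance factors
    pi = np.where(pi < eps, 0.0, pi)            # hard-prune near-zero neurons
    sparsity = float((pi == 0).mean())          # fraction of pruned neurons
    l1_penalty = float(np.abs(pi).sum())        # a simple sparsity regularizer
    return h * pi, sparsity, l1_penalty

# Toy domain whose embedding switches off half of a 4-neuron layer:
d = np.array([1.0, -1.0])
Wg = np.array([[5.0, -5.0, 0.0, 0.0],
               [0.0, 0.0, 5.0, -5.0]])
bg = np.zeros(4)
gated, sparsity, penalty = domain_aware_gate(np.ones(4), d, Wg, bg)
```

A different domain embedding would produce a different pruning pattern, which is the sense in which the learned structure is adaptively sparse per domain.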
|
Time-dependent data-generating distributions have proven to be difficult for
gradient-based training of neural networks, as the greedy updates result in
catastrophic forgetting of previously learned knowledge. Despite the progress
in the field of continual learning to overcome this forgetting, we show that a
set of common state-of-the-art methods still suffers from substantial
forgetting upon starting to learn new tasks, except that this forgetting is
temporary and followed by a phase of performance recovery. We refer to this
intriguing but potentially problematic phenomenon as the stability gap. The
stability gap has likely remained under the radar due to the field's standard
practice of evaluating continual learning models only after each task.
Instead, we establish a framework for continual evaluation that uses
per-iteration evaluation and we define a new set of metrics to quantify
worst-case performance. Empirically we show that experience replay,
constraint-based replay, knowledge-distillation, and parameter regularization
methods are all prone to the stability gap; and that the stability gap can be
observed in class-, task-, and domain-incremental learning benchmarks.
Additionally, a controlled experiment shows that the stability gap increases
when tasks are more dissimilar. Finally, by disentangling gradients into
plasticity and stability components, we propose a conceptual explanation for
the stability gap.
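The difference between task-boundary and per-iteration evaluation can be made concrete with a tiny sketch; the metric name `min_acc` and the accuracy numbers are illustrative, in the spirit of the worst-case metrics described above rather than the paper's exact definitions.

```python
def min_acc(acc_curve):
    """Worst-case accuracy on a previously learned task, taken over every
    training iteration after that task was learned (per-iteration
    evaluation), rather than only at task boundaries."""
    return min(acc_curve)

# Illustrative accuracy curve for task 1 while task 2 is being learned:
# a sharp transient drop (the stability gap) followed by recovery.
curve = [0.90, 0.55, 0.65, 0.80, 0.86]
task_boundary_acc = curve[-1]   # what evaluation only after the task reports
worst_case = min_acc(curve)     # what continual evaluation reveals
```

Evaluating only at the task boundary (0.86) would conceal the transient drop to 0.55, which is exactly why per-iteration evaluation is needed to observe the stability gap.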
|
With the widespread use of sophisticated machine learning models in sensitive
applications, understanding their decision-making has become an essential task.
Models trained on tabular data have witnessed significant progress in
explanations of their underlying decision making processes by virtue of having
a small number of discrete features. However, applying these methods to
high-dimensional inputs such as images is not a trivial task. Images are
composed of pixels at an atomic level and do not carry any interpretability by
themselves. In this work, we seek to use annotated high-level interpretable
features of images to provide explanations. We leverage the Shapley value
framework from game theory, which has garnered wide acceptance in general XAI
problems. By developing a pipeline to generate counterfactuals and subsequently
using it to estimate Shapley values, we obtain contrastive and interpretable
explanations with strong axiomatic guarantees.
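Because the number of annotated high-level features is small, Shapley values can be computed exactly by enumerating feature coalitions. The sketch below assumes a generic `value(S)` oracle; in the pipeline described above this would be the model's output on a counterfactual generated for coalition `S`, but here an additive toy model stands in.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for n features; value(S) is the model output with
    exactly the features in S present. Exhaustive enumeration is feasible
    because n (the number of high-level features) is small."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k not containing i.
                w = factorial(k) * factorial(n - 1 - k) / factorial(n)
                phi[i] += w * (value(frozenset(S) | {i}) - value(frozenset(S)))
    return phi

# For an additive model, each feature's Shapley value is its own contribution,
# and the values sum to value(all) - value(empty) (the efficiency axiom).
weights = {0: 1.0, 1: 2.0, 2: -0.5}
phi = shapley_values(lambda S: sum(weights[i] for i in S), 3)
```

The axiomatic guarantees mentioned above (efficiency, symmetry, additivity) are properties of this exact formula; sampling-based estimators trade them for speed when the feature count grows.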
|
We review recent results on the analysis of singular stochastic partial
differential equations in the language of paracontrolled distributions.
|
Recent observations show that our universe is accelerating due to dark energy,
so it is important to investigate its thermodynamical properties. The Undulant
Universe is a model with equation of state $\omega(a)=-\cos(b\ln a)$ for dark
energy, for which we show that neither the event horizon nor the particle
horizon exists. However, as a boundary preserving thermodynamical properties,
the apparent horizon is a good holographic screen. The Universe has a thermal
equilibrium inside the apparent horizon, so the Unified First Law and the
Generalized Second Law of thermodynamics are satisfied. As a thermodynamical
whole, the evolution of the Undulant Universe behaves very well in the current
phase. However, when considering the unification theory, the failure of
conversation law at the epoch of the matter dominated or near singularity need
some more consideration for the form of the Undulant Universe.
|
In this paper, we study a stochastic disclosure control problem using
information-theoretic methods. The useful data to be disclosed depend on
private data that should be protected. Thus, we design a privacy mechanism to
produce new data which maximizes the disclosed information about the useful
data under a strong $\chi^2$-privacy criterion. For sufficiently small leakage,
the privacy mechanism design problem can be geometrically studied in the space
of probability distributions by a local approximation of the mutual
information. By using methods from Euclidean information geometry, the original
highly challenging optimization problem can be reduced to a problem of finding
the principal right-singular vector of a matrix, which characterizes the
optimal privacy mechanism. In two extensions we first consider a scenario where
an adversary receives a noisy version of the user's message and then we look
for a mechanism which finds $U$ based on observing $X$, maximizing the mutual
information between $U$ and $Y$ while satisfying the privacy criterion on $U$
and $Z$ under the Markov chain $(Z,Y)-X-U$.
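The reduction described above, from a highly challenging optimization to finding the principal right-singular vector, can be sketched in a few lines. The matrix entries here are illustrative placeholders, not quantities derived from the paper:

```python
import numpy as np

# Toy matrix standing in for the linearized statistic arising from the
# local (Euclidean information geometry) approximation of mutual information.
B = np.array([[0.6, 0.2, 0.1],
              [0.1, 0.5, 0.3]])

# The optimal privacy mechanism is characterized by the principal
# right-singular vector of this matrix.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
v1 = Vt[0]       # principal right-singular vector (unit norm)
gain = s[0]      # largest singular value: best achievable local gain
```

NumPy returns singular values in descending order, so `Vt[0]` is always the principal direction.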
|
The detection of periodic signals from transiting exoplanets is often impeded
by extraneous aperiodic photometric variability, either intrinsic to the star
or arising from the measurement process. Frequently, these variations are
autocorrelated, wherein later flux values are correlated with previous ones. In
this work, we present the methodology of the Autoregressive Planet Search (ARPS)
project which uses Autoregressive Integrated Moving Average (ARIMA) and related
statistical models that treat a wide variety of stochastic processes, as well
as nonstationarity, to improve detection of new planetary transits. Provided
that a time series is evenly spaced or can be placed on an evenly spaced grid
with missing values, these low-dimensional parametric models can prove very
effective. We introduce a planet-search algorithm to detect periodic transits
in the residuals after the application of ARIMA models. Our matched-filter
algorithm, the Transit Comb Filter (TCF), is closely related to the traditional
Box-fitting Least Squares and provides an analogous periodogram. Finally, if a
previously identified or simulated sample of planets is available, selected
scalar features from different stages of the analysis -- the original light
curves, ARIMA fits, TCF periodograms, and folded light curves -- can be
collectively used with a multivariate classifier to identify promising
candidates while efficiently rejecting false alarms. We use Random Forests for
this task, in conjunction with Receiver Operating Characteristic (ROC) curves,
to define discovery criteria for new, high fidelity planetary candidates. The
ARPS methodology can be applied to both evenly spaced satellite light curves
and densely cadenced ground-based photometric surveys.
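A toy illustration of the matched-filter idea, phase-folding the residuals and scoring periodic dips, is sketched below. This is a deliberate simplification, not the actual Transit Comb Filter implementation, and the synthetic residual series is made up:

```python
def comb_periodogram(resid, periods):
    """Toy matched-filter periodogram for periodic transit-like dips in
    model residuals: for each trial period (in samples) the series is
    phase-folded and the deepest phase-binned dip is the statistic."""
    overall = sum(resid) / len(resid)
    power = []
    for p in periods:
        bins = [[] for _ in range(p)]
        for i, r in enumerate(resid):
            bins[i % p].append(r)
        dips = [overall - (sum(b) / len(b)) for b in bins if b]
        power.append(max(dips))
    return power

# Synthetic residuals: flat baseline with a dip every 7 samples
resid = [0.0] * 70
for i in range(0, 70, 7):
    resid[i] = -1.0
trial = list(range(2, 15))
pw = comb_periodogram(resid, trial)
best = trial[pw.index(max(pw))]  # recovers the 7-sample period
```

In the real pipeline the statistic is computed on ARIMA residuals and the resulting periodogram is analogous to Box-fitting Least Squares, as the abstract states.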
|
For a symplectic space $V$ of dimension $2n$ over $\mathbb{F}_{q}$, we
compute the eigenvalues of its orthogonality graph. This is the simple graph
with vertices the $2$-dimensional non-degenerate subspaces of $V$ and edges
between orthogonal vertices. As a consequence of Garland's method, we obtain
vanishing results on the homology groups of the frame complex of $V$, which is
the clique complex of this graph. We conclude that if $n < q+3$ then the poset
of frames of size $\neq 0,n-1$, which is homotopy equivalent to the frame
complex, is Cohen-Macaulay over a field of characteristic $0$. However, we also
show that this poset is not Cohen-Macaulay if the dimension is big enough.
|
We show that some sets of quantum observables are unique up to an isometry
and have a contextuality witness that attains the same value for any initial
state. We prove that these two properties make it possible to certify any of
these sets by looking at the statistics of experiments with sequential
measurements and using any initial state of full rank, including thermal and
maximally mixed states. We prove that this ``certification with any full-rank
state'' (CFR) is possible for any quantum system of finite dimension $d \ge 3$
and is robust and experimentally useful in dimensions 3 and 4. In addition, we
prove that complete Kochen-Specker sets can be Bell self-tested if and only if
they enable CFR. This establishes a fundamental connection between these two
methods of certification, shows that both methods can be combined in the same
experiment, and opens new possibilities for certifying quantum devices.
|
The coupled cluster method is applied to a spin-half model at zero
temperature ($T=0$), which interpolates between Heisenberg antiferromagnets
(HAF's) on a kagome and a square lattice. With respect to an underlying
triangular lattice the strengths of the Heisenberg bonds joining the
nearest-neighbor (NN) kagome sites are $J_{1} \geq 0$ along two of the
equivalent directions and $J_{2} \geq 0$ along the third. Sites connected by
$J_{2}$ bonds are themselves connected to the missing NN non-kagome sites of
the triangular lattice by bonds of strength $J_{1}' \geq 0$. When
$J_{1}'=J_{1}$ and $J_{2}=0$ the model reduces to the square-lattice HAF. The
magnetic ordering of the system is investigated and its $T=0$ phase diagram
discussed. Results for the kagome HAF limit are among the best available.
|
Using large-scale fully-kinetic two-dimensional particle-in-cell simulations,
we investigate the effects of shock rippling on electron acceleration at
low-Mach-number shocks propagating in high-$\beta$ plasmas, in application to
merger shocks in galaxy clusters. We find that the electron acceleration rate
increases considerably when the rippling modes appear. The main acceleration
mechanism is stochastic shock-drift acceleration, in which electrons are
confined at the shock by pitch-angle scattering off turbulence and gain energy
from the motional electric field. The presence of multi-scale magnetic
turbulence at the shock transition and the region immediately behind the main
shock overshoot is essential for electron energization. Wide-energy non-thermal
electron distributions are formed both upstream and downstream of the shock.
The maximum energy of the electrons is sufficient for their injection into
diffusive shock acceleration. We show for the first time that the downstream
electron spectrum has a power-law form with index $p\approx 2.5$, in agreement
with observations.
|
If a finite group $G$ is isomorphic to a subgroup of $SO(3)$, then $G$ has
the D2-property. Let $X$ be a finite complex satisfying Wall's D2-conditions.
If $\pi_1(X)=G$ is finite, and $\chi(X) \geq 1-Def(G)$, then $X \vee S^2$ is
simple homotopy equivalent to a finite $2$-complex, whose simple homotopy type
depends only on $G$ and $\chi(X)$.
|
When small RNAs are loaded onto Argonaute proteins they can form the
RNA-induced silencing complexes (RISCs), which mediate RNA interference.
RISC-formation is dependent on a shared pool of Argonaute proteins and RISC
loading factors, and is thus susceptible to competition among small RNAs for
loading. We present a mathematical model that aims to understand how small RNA
competition for the resources of post-transcriptional regulation (PTR) affects
target gene repression. We argue that small RNA activity is limited by
RISC-formation, RISC-degradation and the
availability of Argonautes. Together, these observations explain a number of
PTR saturation effects encountered experimentally. We show that different
competition conditions for RISC-loading result in different signatures of PTR
activity determined also by the amount of RISC-recycling taking place. In
particular, we find that the small RNAs less efficient at RISC-formation, using
fewer resources of the PTR pathway, can perform as well in the low
RISC-recycling range as their more effective counterparts. Additionally, we predict a
novel signature of PTR in target expression levels. Under conditions of low
RISC-loading efficiency and high RISC-recycling, the variation in target levels
increases linearly with the target transcription rate. Furthermore, we show
that RISC-recycling determines the effect that Argonaute scarcity conditions
have on target expression variation. Our observations taken together offer a
framework of predictions which can be used in order to infer from experimental
data the particular characteristics of underlying PTR activity.
|
The impact of blending by RCGs (red clump giants, or relatively metal-rich
red horizontal branch stars) is discussed as it relates to RRab and classical
Cepheids, and to the establishment of an improved distance scale. An analysis
of OGLE Magellanic Cloud variables reaffirms that blending with RCGs may
advantageously thrust remote extragalactic stars into the range of
detectability. Specifically, simulations of Magellanic Cloud RRab and RCG
blends partly reproduce bright non-canonical trends readily observed in
amplitude-magnitude space ($I_c$ vs.~$A_{I_c}$). Conversely, the larger
magnitude offset between classical Cepheids and RCGs causes the latter's
influence to be challenging to address. The relative invariance of a Wesenheit
function's slope to metallicity (e.g., $W_{VI_c}$) implies that a deviation
from the trend could reveal blending and photometric inaccuracies (e.g.,
standardization), as blending by RCGs (a proxy of an evolved red stellar
demographic) can flatten period-Wesenheit relations owing to the increased
impact on less-luminous shorter-period Cepheids. That could partly explain both
a shallower inferred Wesenheit function and over-estimated $H_0$ values. A
consensus framework to identify and exploit blending is desirable, as presently
$H_0$ estimates from diverse teams are unwittingly leveraged without
homogenizing the disparate approaches (e.g., no blending correction to a
sizable $\simeq 0.^{\rm m}3$).
|
We present a new paradigm in the field of cavity QED by bringing out
remarkable features associated with the avoided crossing of the dressed state
levels of the Jaynes Cummings model. We demonstrate how the parametric
couplings, realized by a second order nonlinearity in the cavity, can turn the
crossing of dressed states into avoided crossings. We show how one can generate
coherence between the avoided crossing of dressed states. Such coherences
result, for example, in quantum beats in the excitation probability of the
qubit. The quality of quantum beats can be considerably improved by
adiabatically turning on the parametric interaction. We show how these avoided
crossings can be used to generate superpositions of even or odd Fock states
with the remarkable property of equal weights for the states in superposition.
The fidelity of generation is more than 95\%. In addition, we show strong
entanglement between the cavity field and the qubit with the concurrence
parameter exceeding 90\%.
|
We consider spin and charge flow in normal metals. We employ the Keldysh
formalism to find transport equations in the presence of spin-orbit
interaction, interaction with magnetic impurities, and non-magnetic impurity
scattering. Using the quasiclassical approximation, we derive diffusion
equations which include contributions from skew scattering, side-jump
scattering and the anomalous spin-orbit induced velocity. We compute the
magnitude of various spin Hall effects in experimentally relevant geometries and
discuss when the different scattering mechanisms are important.
|
The Painleve test is very useful for constructing not only the Laurent-series
solutions but also the elliptic and trigonometric ones. Such single-valued
functions are solutions of some polynomial first order differential equations.
To find the elliptic solutions we transform the initial nonlinear differential
equation into a nonlinear algebraic system in the parameters of the
Laurent-series solutions of the initial equation. The number of unknowns in the
resulting nonlinear system does not depend on the number of arbitrary
coefficients of the first order equation used. In this paper we describe the
corresponding algorithm, which has been implemented in REDUCE and Maple.
|
Sequence modeling has important applications in natural language processing
and computer vision. Recently, the transformer-based models have shown strong
performance on various sequence modeling tasks, which rely on attention to
capture pairwise token relations, and position embedding to inject positional
information. While showing good performance, the transformer models are
inefficient to scale to long input sequences, mainly due to the quadratic
space-time complexity of attention. To overcome this inefficiency, we propose
to model sequences with a relative position encoded Toeplitz matrix and use a
Toeplitz matrix-vector production trick to reduce the space-time complexity of
the sequence modeling to log linear. A lightweight sub-network called relative
position encoder is proposed to generate relative position coefficients with a
fixed budget of parameters, enabling the proposed Toeplitz neural network to
deal with varying sequence lengths. In addition, despite being trained on
512-token sequences, our model can extrapolate input sequence length up to 14K
tokens in inference with consistent performance. Extensive experiments on
autoregressive and bidirectional language modeling, image modeling, and the
challenging Long-Range Arena benchmark show that our method achieves better
performance than its competitors in most downstream tasks while being
significantly faster. The code is available at
https://github.com/OpenNLPLab/Tnn.
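The log-linear Toeplitz matrix-vector product referred to above is the standard circulant-embedding FFT trick, which can be sketched as follows. This is a generic illustration of the technique, not the released Tnn code:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the n x n Toeplitz matrix T (first column c, first row r,
    with c[0] == r[0]) by x in O(n log n) time via circulant embedding."""
    n = len(x)
    # Embed T in a 2n x 2n circulant matrix: its first column is
    # [c, 0, r reversed without r[0]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    xe = np.concatenate([x, np.zeros(n)])
    # A circulant matrix is diagonalized by the DFT, so its matvec is a
    # pointwise product in Fourier space.
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xe))
    return y[:n].real

c = np.array([1.0, 2.0, 3.0, 4.0])   # first column
r = np.array([1.0, 5.0, 6.0, 7.0])   # first row (r[0] == c[0])
x = np.array([1.0, -1.0, 2.0, 0.5])
y = toeplitz_matvec(c, r, x)
```

This is the "Toeplitz matrix-vector production trick" that reduces the sequence-modeling cost to log linear in the sequence length.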
|
Ultrahigh energy cosmic rays that produce giant extensive showers of charged
particles and photons when they interact in the Earth's atmosphere provide a
unique tool to search for new physics. Of particular interest is the
possibility of detecting a very small violation of Lorentz invariance such as
may be related to the structure of space-time near the Planck scale of $\sim
10^{-35}$m. We discuss here the possible signature of Lorentz invariance
violation on the spectrum of ultrahigh energy cosmic rays as compared with
present observations of giant air showers. We also discuss the possibilities of
using more sensitive detection techniques to improve searches for Lorentz
invariance violation in the future. Using the latest data from the Pierre Auger
Observatory, we derive a best fit to the LIV parameter of $3.0^{+1.5}_{-3.0}
\times 10^{-23}$, corresponding to an upper limit of $4.5 \times 10^{-23}$ at a
proton Lorentz factor of $\sim 2 \times 10^{11}$. This result has fundamental
implications for quantum gravity models.
|
Human luteinizing hormone (LH) and chorionic gonadotropin (hCG) have been
considered biologically equivalent because of their structural similarities and
their binding to the same receptor, the LH/CGR. However, accumulating evidence
suggests that the LH/CGR responds differentially to the two hormones, triggering
differential intracellular signaling and steroidogenesis. The mechanistic basis
of such differential responses remains mostly unknown. Here, we compared the
abilities of recombinant rhLH and rhCG to elicit cAMP, \beta-arrestin 2
activation, and steroidogenesis in HEK293 cells and mouse Leydig tumor cells
(mLTC-1). For this, BRET and FRET technologies were used allowing quantitative
analyses of hormone activities in real-time and in living cells. Our data
indicate that rhLH and rhCG differentially promote cell responses mediated by
LH/CGR revealing interesting divergences in their potencies, efficacies and
kinetics: rhCG was more potent than rhLH in both HEK293 and mLTC-1 cells.
Interestingly, partial effects of rhLH were found on \beta-arrestin recruitment
and on progesterone production compared to rhCG. Such a link was further
supported by knockdown experiments. These pharmacological differences
demonstrate that rhLH and rhCG act as natural biased agonists. The discovery of
novel mechanisms associated with gonadotropin-specific action may ultimately
help improve and personalize assisted reproduction technologies.
|
This volume contains selected papers from the proceedings of the First
International Workshop on Strategies in Rewriting, Proving, and Programming
(IWS 2010), which was held on July 9, 2010, in Edinburgh, UK. Strategies are
ubiquitous in programming languages, automated deduction and reasoning systems.
In the two communities of Rewriting and Programming on one side, and of
Deduction and Proof engines (Provers, Assistants, Solvers) on the other side,
workshops have been launched to make progress towards a deeper understanding of
the nature of strategies, their descriptions, their properties, and their
usage, in all kinds of computing and reasoning systems. More recently,
strategies have also been playing an important role in rewrite-based
programming languages, and in verification tools and techniques like SAT/SMT
engines or termination provers. Moreover, strategies have come to be viewed more generally
as expressing complex designs for control in computing, modeling, proof search,
program transformation, and access control. IWS 2010 was organized as a
satellite workshop of FLoC 2010. FLoC 2010 provided an excellent opportunity to
foster exchanges between the communities of Rewriting and Programming on one
side, and of Deduction and Proof engines on the other side. IWS 2010 was a joint
follow-up of two series of workshops, held since 1997: the Strategies workshops
held by the CADE-IJCAR community and the Workshops on Reduction Strategies
(WRS) held by the RTA-RDP community.
|
In this paper, we consider an active eavesdropping scenario in a cooperative
system consisting of a source, a destination, and an active eavesdropper with
multiple decode-and-forward relays. Considering an existing assumption in which
an eavesdropper is also a part of network, a proactive relay selection by the
eavesdropper is proposed. The best relay which maximizes the eavesdropping rate
is selected by the eavesdropper. A relay selection scheme is also proposed to
improve the secrecy of the system by minimizing the eavesdropping rate.
Performances of these schemes are compared with two passive eavesdropping
scenarios in which the eavesdropper performs selection and maximal ratio
combining on the relayed links. A realistic channel model with independent
non-identical links between nodes and direct links from the source to both the
destination and eavesdropper are assumed. Closed-form expressions for the
secrecy outage probability (SOP) of these schemes in Rayleigh fading channel
are obtained. It is shown that relay selection by the proactive eavesdropper is
the most detrimental to the system: not only does the SOP increase with the
number of relays, but the system's diversity also remains unchanged.
|
Assisted text input techniques can save time and effort and improve text
quality. In this paper, we investigate how grounded and conditional extensions
to standard neural language models can bring improvements in the tasks of word
prediction and completion. These extensions incorporate a structured knowledge
base and numerical values from the text into the context used to predict the
next word. Our automated evaluation on a clinical dataset shows extended models
significantly outperform standard models. Our best system uses both
conditioning and grounding, because of their orthogonal benefits. For word
prediction with a list of 5 suggestions, it improves recall from 25.03% to
71.28% and for word completion it improves keystroke savings from 34.35% to
44.81%, where the theoretical bound for this dataset is 58.78%. We also perform a
qualitative investigation of how models with lower perplexity occasionally fare
better at the tasks. We found that at test time numbers have more influence on
the document level than on individual word probabilities.
|
Iron-based oxypnictides substituted with yttrium have been prepared via a
conventional solid state reaction. The product after the first 50 hours of
reaction showed a diamagnetic-like transition at around 10 K but was not
superconducting, while an additional 72 hours of high-temperature heat
treatment was required to yield a superconducting sample, which was doped with
fluoride. The temperature dependence of the susceptibility shows both screening
and the Meissner effect at around 10 K, while the resistance as a function of
temperature displays a drop at around the same temperature.
|
In this note we present an invariant formulation of the d'Alembert principle and
classical time-dependent Lagrangian mechanics with holonomic constraints from
the perspective of moving frames.
|
We present a linear-response formulation of density cumulant theory (DCT)
that provides a balanced and accurate description of many electronic states
simultaneously. In the original DCT formulation, only information about a
single electronic state (usually, the ground state) is obtained. We discuss the
derivation of linear-response DCT, present its implementation for the ODC-12
method (LR-ODC-12), and benchmark its performance for excitation energies in
small molecules (N$_2$, CO, HCN, HNC, C$_2$H$_2$, and H$_2$CO), as well as
challenging excited states in ethylene, butadiene, and hexatriene. For small
molecules, LR-ODC-12 shows smaller mean absolute errors in excitation energies
than equation-of-motion coupled cluster theory with single and double
excitations (EOM-CCSD), relative to the reference data from EOM-CCSDT. In a
study of butadiene and hexatriene, LR-ODC-12 correctly describes the relative
energies of the singly-excited $1^1\mathrm{B_{u}}$ and the doubly-excited
$2^1\mathrm{A_{g}}$ states, in excellent agreement with highly accurate
semistochastic heat-bath configuration interaction results, while EOM-CCSD
overestimates the energy of the $2^1\mathrm{A_{g}}$ state by almost 1 eV. Our
results demonstrate that linear-response DCT is a promising theoretical
approach for excited states of molecules.
|
We present Arc-Flag TB, a journey planning algorithm for public transit
networks which combines Trip-Based Public Transit Routing (TB) with the
Arc-Flags speedup technique. Compared to previous attempts to apply Arc-Flags
to public transit networks, which saw limited success, our approach uses
stronger pruning rules to reduce the search space. Our experiments show that
Arc-Flag TB achieves a speedup of up to two orders of magnitude over TB,
offering query times of less than a millisecond even on large countrywide
networks. Compared to the state-of-the-art speedup technique Trip-Based Public
Transit Routing Using Condensed Search Trees (TB-CST), our algorithm achieves
similar query times but requires significantly less additional memory. Other
state-of-the-art algorithms which achieve even faster query times, e.g., Public
Transit Labeling, require enormous memory usage. In contrast, Arc-Flag TB
offers a tradeoff between query performance and memory usage due to the fact
that the number of regions in the network partition required by our algorithm
is a configurable parameter. We also identify an issue in the transfer
precomputation of TB that affects both TB-CST and Arc-Flag TB, leading to
incorrect answers for some queries. This has not been previously recognized by
the author of TB-CST. We provide discussion on how to resolve this issue in the
future. Currently, Arc-Flag TB answers 1-6% of queries incorrectly, compared to
over 20% for TB-CST on some networks.
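The Arc-Flags idea underlying the abstract can be sketched on a plain weighted digraph: precompute, for every arc, a flag per target region indicating whether the arc lies on some shortest path into that region, then prune unflagged arcs at query time. This minimal sketch uses a hypothetical toy network, not the paper's Trip-Based variant:

```python
import heapq

def dijkstra(adj, s):
    """Standard Dijkstra; adj maps node -> list of (neighbor, weight)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def arc_flags(adj, region):
    """Flag arc (u, v) for region r if it lies on a shortest path towards
    some node of r (found by backward searches from every target node)."""
    rev = {}
    for u, nbrs in adj.items():
        for v, w in nbrs:
            rev.setdefault(v, []).append((u, w))
    nodes = {n for nbrs in adj.values() for n, _ in nbrs} | set(adj)
    flags = {}
    for t in nodes:
        dist_to_t = dijkstra(rev, t)   # dist_to_t[u] = d(u, t)
        for u, nbrs in adj.items():
            for v, w in nbrs:
                dv, du = dist_to_t.get(v), dist_to_t.get(u)
                if dv is not None and du is not None and dv + w == du:
                    flags.setdefault((u, v), set()).add(region[t])
    return flags

def query(adj, flags, region, s, t):
    """Dijkstra that relaxes only arcs flagged for the target's region."""
    goal = region[t]
    pruned = {u: [(v, w) for v, w in nbrs if goal in flags.get((u, v), set())]
              for u, nbrs in adj.items()}
    return dijkstra(pruned, s).get(t, float("inf"))

# Toy network with two regions and hypothetical stops "a".."d"
adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
region = {"a": 0, "b": 0, "c": 1, "d": 1}
flags = arc_flags(adj, region)
```

The number of regions trades preprocessing space against pruning power, which is the configurable memory/speed tradeoff the abstract highlights.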
|
We present improved measurements of the branching fractions for the decays B0
--> Ds+pi- and B0-bar --> Ds+K- using a data sample of 657x10^6 BB-bar events
collected at the Y(4S) resonance with the Belle detector at the KEKB
asymmetric-energy e+e- collider. The results are BF(B0 --> Ds+pi-) = (1.99 +/-
0.26 +/- 0.18)x10^-5 and BF(B0-bar --> Ds+K-) = (1.91 +/- 0.24 +/- 0.17)x10^-5,
where the uncertainties are statistical and systematic, respectively. Based on
these results, we determine the ratio between amplitudes of the doubly Cabibbo
suppressed decay B0 --> D+pi- and the Cabibbo favored decay B0 --> D-pi+, R_Dpi
= [1.71 +/- 0.11(stat) +/- 0.09(syst) +/- 0.02(theo)]%, where the last term
denotes the theory error.
|
We present a new method to analyse and reduce chemical networks and apply
this technique to the chemistry in molecular clouds. Using the technique, we
investigated the possibility of reducing the number of chemical reactions and
species in the UMIST 95 database simultaneously. In addition, we did the same
reduction but with the ``objective technique'' in order to compare both
methods. We found that it is possible to compute the abundance of carbon
monoxide and fractional ionisation accurately with significantly reduced
chemical networks in the case of pure gas-phase chemistry. For gas-grain
chemistry involving surface reactions, reduction is not worthwhile. Compared to
the ``objective technique'' our reduction method is more effective but more
time-consuming as well.
|
The first generations of stars left their chemical fingerprints on metal-poor
stars in the Milky Way and its surrounding dwarf galaxies. While instantaneous
and homogeneous enrichment implies that groups of co-natal stars should have
the same element abundances, small amplitudes of abundance scatter are seen at
fixed [Fe/H]. Measurements of intrinsic abundance scatter have been made with
small, high-resolution spectroscopic datasets where measurement uncertainty is
small compared to this scatter. In this work, we present a method to use
mid-resolution survey data, which has larger errors, to make this measurement.
Using APOGEE DR17, we calculate the intrinsic scatter of Al, O, Mg, Si, Ti, Ni,
and Mn relative to Fe for 333 metal-poor stars across 6 classical dwarf
galaxies around the Milky Way, and 1604 stars across 19 globular clusters. We
first calibrate the reported abundance errors in bins of signal-to-noise and
[Fe/H] using a high-fidelity halo dataset. We then apply these calibrated
errors to the APOGEE data, and find small amplitudes of average intrinsic
abundance scatter in dwarf galaxies ranging from 0.032 $-$ 0.14 dex with a
median value of 0.043 dex. For the globular clusters, we find intrinsic
scatters ranging from 0.018 $-$ 0.21 dex, with particularly high scatter for Al
and O. Our measurements of intrinsic abundance scatter place important upper
limits on the intrinsic scatter in these systems, as well as constraints on
their underlying star formation history and mixing, that we can look to
simulations to interpret.
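The basic estimate behind such measurements, subtracting the calibrated measurement variance from the observed abundance variance, can be sketched as follows. This moment-based estimator is a simplified stand-in for the actual statistical procedure used in the work, and the numbers are invented:

```python
import math

def intrinsic_scatter(abundances, errors):
    """Moment estimate of intrinsic scatter: sample variance of the
    abundances minus the mean squared (calibrated) measurement error,
    floored at zero before taking the square root."""
    n = len(abundances)
    mean = sum(abundances) / n
    var_obs = sum((a - mean) ** 2 for a in abundances) / (n - 1)
    var_meas = sum(e * e for e in errors) / n
    return math.sqrt(max(var_obs - var_meas, 0.0))

# Made-up [X/Fe] values and calibrated per-star errors
scatter = intrinsic_scatter([0.0, 0.1, -0.1, 0.2, -0.2], [0.05] * 5)
```

When the measurement errors are large relative to the intrinsic scatter, as for mid-resolution survey data, careful error calibration (here assumed done upstream) is what makes this subtraction meaningful.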
|
It is well known that defining a local refractive index for a metamaterial
requires that the wavelength be large with respect to the scale of its
microscopic structure (generally the period). However, the converse does not
hold. There are simple structures, such as the infinite, perfectly conducting
wire medium, which remain non-local for arbitrarily large wavelength-to-period
ratios. In this work we extend these results to the more realistic and relevant
case of finite wire media with finite conductivity. In the quasi-static regime
the metamaterial is described by a non-local permittivity which is obtained
analytically using a two-scale renormalization approach. Its accuracy is tested
and confirmed numerically via full vector 3D finite element calculations.
Moreover, finite wire media exhibit large absorption with small reflection,
while their low fill factor allows considerable freedom to control other
characteristics of the metamaterial such as its mechanical, thermal or chemical
robustness.
|
The controversy over whether ultraluminous X-ray sources (ULXs) contain a new
intermediate-mass class of black holes (IMBHs) remains unresolved. We present
new analyses of the deepest XMM-Newton observations of ULXs that address their
underlying nature. We examine both empirical and physical modelling of the
X-ray spectra of a sample of thirteen of the highest quality ULX datasets, and
find that there are anomalies in modelling ULXs as accreting IMBHs with
properties simply scaled-up from Galactic black holes. Most notably, spectral
curvature above 2 keV in several sources implies the presence of an
optically-thick, cool corona. We also present a new analysis of a 100 ks
observation of Holmberg II X-1, in which a rigorous analysis of the temporal
data limits the mass of its black hole to no more than 100 solar masses. We
argue that a combination of these results points towards many (though not
necessarily all) ULXs containing black holes that are at most a few 10s of
solar mass in size.
|
Identifiability is a crucial property for a statistical model since
distributions in the model uniquely determine the parameters that produce them.
In phylogenetics, the identifiability of the tree parameter is of particular
interest since it means that phylogenetic models can be used to infer
evolutionary histories from data. In this paper we introduce a new
computational strategy for proving the identifiability of discrete parameters
in algebraic statistical models that uses algebraic matroids naturally
associated to the models. We then use this algorithm to prove that the tree
parameters are generically identifiable for 2-tree CFN and K3P mixtures. We
also show that the $k$-cycle phylogenetic network parameter is identifiable
under the K2P and K3P models.
|
The modeling of physical systems must be consistent with unavoidable physical
laws. In this work, in the context of classical,
isothermal, finite-time, and weak drivings, I demonstrate that physical
systems, driven simultaneously at the same rate in two or more external
parameters, must have the Fourier transform of their relaxation functions
composing a positive-definite matrix to satisfy the Second Law of
Thermodynamics. By evaluating them in the limit of near-to-equilibrium
processes, I identify that such coefficients are the Casimir-Onsager ones. The
result is verified in paradigmatic models of the overdamped and underdamped
white noise Brownian motions. Finally, an extension to thermally isolated
systems is made by using the time-averaged Casimir-Onsager matrix, in which the
example of the harmonic oscillator is presented.
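The positive-definiteness condition on the matrix of Fourier-transformed relaxation functions can be checked numerically via its eigenvalues. The 2x2 matrix below is purely illustrative and is not taken from the models in the text:

```python
import numpy as np

def is_positive_definite(M, tol=1e-12):
    """Check positive-definiteness of a (Hermitian) matrix, e.g. of
    Fourier-transformed relaxation functions, via its eigenvalues."""
    Mh = (M + M.conj().T) / 2            # keep only the Hermitian part
    return bool(np.all(np.linalg.eigvalsh(Mh) > tol))

# Illustrative Casimir-Onsager-like coefficient matrix (values made up)
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
```

A matrix failing this test would correspond to a driving protocol violating the Second Law in the framework described above.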
|
In 1999, S. V. Konyagin and V. N. Temlyakov introduced the so-called
Thresholding Greedy Algorithm. Since then, there have been many interesting and
useful characterizations of greedy-type bases in Banach spaces. In this
article, we study and extend several characterizations of greedy and almost
greedy bases in the literature. Along the way, we give various examples to
complement our main results. Furthermore, we propose a new version of the
so-called Weak Thresholding Greedy Algorithm (WTGA) and show that the
convergence of this new algorithm is equivalent to the convergence of the WTGA.
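The Thresholding Greedy Algorithm itself is simple to state: the m-th greedy approximant keeps the m coefficients of largest modulus and discards the rest. A minimal finite-dimensional sketch with real coefficients (the infinite-dimensional Banach-space setting of the article is of course richer):

```python
def tga_approximation(coeffs, m):
    """m-th greedy approximant: keep the m coefficients of largest
    modulus (ties broken by index), zeroing all others."""
    idx = sorted(range(len(coeffs)), key=lambda i: (-abs(coeffs[i]), i))[:m]
    keep = set(idx)
    return [c if i in keep else 0 for i, c in enumerate(coeffs)]

x = [0.1, -0.9, 0.5, 0.05, 0.4]
g2 = tga_approximation(x, 2)   # keeps -0.9 and 0.5
```

Greedy-type bases are precisely those for which such thresholding approximants are (up to a constant) as good as the best m-term approximation.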
|
Coherent elastic neutrino-nucleus scattering (CE$\nu$NS) offers a unique way
to study neutrino properties and to search for new physics beyond the Standard
Model. Nuclear reactors are promising sources to explore this process at low
energies since they deliver large fluxes of (anti-)neutrinos with typical
energies of a few MeV. In this paper, a new-generation experiment to study
CE$\nu$NS is described. The NUCLEUS experiment will use cryogenic detectors
which feature an unprecedentedly low energy threshold and a time response fast
enough to be operated in above-ground conditions. Both sensitivity to
low-energy nuclear recoils and a high event rate tolerance are stringent
requirements to measure CE$\nu$NS of reactor antineutrinos. A new experimental
site, denoted the Very-Near-Site (VNS) at the Chooz nuclear power plant in
France is described. The VNS is located between the two 4.25 GW$_{\mathrm{th}}$
reactor cores and matches the requirements of NUCLEUS. First results of on-site
measurements of neutron and muon backgrounds, the expected dominant background
contributions, are given. In this paper a preliminary experimental setup with
dedicated active and passive background reduction techniques is presented.
Furthermore, the feasibility to operate the NUCLEUS detectors in coincidence
with an active muon-veto at shallow overburden is studied. The paper concludes
with a sensitivity study pointing out the promising physics potential of
NUCLEUS at the Chooz nuclear power plant.
|
We present Gaptron, a randomized first-order algorithm for online multiclass
classification. In the full information setting we show expected mistake bounds
with respect to the logistic loss, hinge loss, and the smooth hinge loss with
constant regret, where the expectation is with respect to the learner's
randomness. In the bandit classification setting we show that Gaptron is the
first linear time algorithm with $O(K\sqrt{T})$ expected regret, where $K$ is
the number of classes. Additionally, the expected mistake bound of Gaptron does
not depend on the dimension of the feature vector, contrary to previous
algorithms with $O(K\sqrt{T})$ regret in the bandit classification setting. We
present a new proof technique that exploits the gap between the zero-one loss
and surrogate losses rather than exploiting properties such as exp-concavity or
mixability, which are traditionally used to prove logarithmic or constant
regret bounds.
|
We perform Monte Carlo simulations of large two-dimensional Gaussian Ising
spin glasses down to very low temperatures $\beta=1/T=50$. Equilibration is
ensured by using a cluster algorithm including Monte Carlo moves consisting of
flipping fundamental excitations. We study the thermodynamic behavior using the
Binder cumulant, the spin-glass susceptibility, the distribution of overlaps,
the overlap with the ground state and the specific heat. We confirm that
$T_c=0$. All results are compatible with an algebraic divergence of the
correlation length with an exponent $\nu$. We find $-1/\nu=-0.295(30)$, which
is compatible with the value for the domain-wall and droplet exponent
$\theta\approx-0.29$ found previously in ground-state studies. Hence the
thermodynamic behavior of this model seems to be governed by one single
exponent.
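The Binder cumulant named above can be estimated directly from the overlap between two independent replicas; the following is a minimal sketch on synthetic, uncorrelated configurations (not the paper's cluster-algorithm data), where the cumulant should come out near zero as for a paramagnet:

```python
import numpy as np

# Estimate the Binder cumulant g = (3 - <q^4>/<q^2>^2)/2 from samples of the
# spin overlap q between two independent replicas (synthetic configurations).
rng = np.random.default_rng(0)
N = 64
s1 = rng.choice([-1, 1], size=(10000, N))   # replica 1 (uncorrelated, illustrative)
s2 = rng.choice([-1, 1], size=(10000, N))   # replica 2

q = (s1 * s2).mean(axis=1)                  # overlap per measurement
g = 0.5 * (3.0 - (q**4).mean() / (q**2).mean()**2)
# For a Gaussian overlap distribution (paramagnetic phase) g is close to 0.
```

In an actual spin-glass simulation the replicas would come from equilibrated Monte Carlo runs at the same temperature, and g would be compared across system sizes to locate the transition.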
|
Let $G$ be a finite group and $N$ a normal subgroup of $G$. Let $J=J(F[N])$
denote the Jacobson radical of $F[N]$ and $I={\rm Ann}(J)=\{\alpha \in F[G]
\mid J\alpha =0\}$. We thus obtain another algebra $F[G]/I$. We study the
decomposition of the Cartan matrix of $F[G]$ according to $F[G/N]$ and
$F[G]/I$. This decomposition establishes connections between Cartan invariants
and chief composition factors of $G$. We find that the existence of a
zero-defect $p$-block in $N$ depends on the properties of $I$ in $F[G]$, or on
Cartan invariants. When we consider the Cartan invariants of a block algebra
$B$ of $G$, the decomposition is related to the kind of blocks of $N$ covered
by $B$. We mainly consider a block $B$ of $G$ which covers a block $b$ of $N$
with $l(b)=1$. In two cases we prove that Willems' conjecture holds for these
blocks, which includes some cases established by Holm and Willems.
Furthermore, we give an affirmative answer, in our cases, to a question of
Holm and Willems. Some other results on Cartan invariants are also presented.
|
Let $X$ be the surface $\T\times\T$ where $\T$ is the complex torus. This
paper is the third in a series studying the fundamental group of the Galois
cover of $X$ with respect to a generic projection onto $\C\P^2$.
The van Kampen theorem gives a presentation of the fundamental group of the
complement of the branch curve, with 54 generators and more than 2000
relations. Here we introduce a certain natural quotient (obtained by
identifying pairs of generators), prove it is a quotient of a Coxeter group
related to the degeneration of $X$, and show that this quotient is virtually
nilpotent.
|
Two-dimensional materials are emerging as a promising platform for ultrathin
channels in field-effect transistors. To this aim, novel high-mobility
semiconductors need to be found or engineered. While extrinsic mechanisms can
in general be minimized by improving fabrication processes, the suppression of
intrinsic scattering (driven e.g. by electron-phonon interactions) requires
modifying the electronic or vibrational properties of the material. Since
intervalley scattering critically affects mobilities, a powerful approach to
enhance transport performance relies on engineering the valley structure. We
show here the power of this strategy using uniaxial strain to lift degeneracies
and suppress scattering into entire valleys, dramatically improving
performance. This is shown in detail for arsenene, where a 2% strain stops
scattering into 4 of the 6 valleys, and leads to a 600% increase in mobility.
The mechanism is general and can be applied to many other materials, including
in particular the isostructural antimonene and blue phosphorene.
|
A mathematical model of Subject behaviour choice is proposed. The background
of the model is the concept of two preference relations determining Subject
behaviour. These are an "internal" or subjective preference relation and an
"external" or objective preference relation.
The first (internal) preference relation is defined by some partial order on
a set of states of the Subject. The second (external) preference relation on
the state set is defined by a mapping from the state set to another partially
ordered set. The mapping will be called evaluation mapping (function).
We study the process of external preference maximization in a fashion that
uses the external preference as little as possible; by contrast, the Subject
may use the internal preference without any restriction.
The complexity of a maximization procedure depends on the disagreement
between these preferences. To solve the problem we apply a kind of
successive-approximation method. In terms of evaluation mappings, this method
operates on a decomposition of the mapping into a superposition of several
standard operations and "easy" mappings (see the details below).
Superpositions obtained in this way are called approximating forms.
We construct several such forms and present two applications. One of them is
concerned with a hypothetical origin of logic. The other application provides a
new interpretation of the well known model of human choice by Lefebvre. The
interpretation seems to suggest a justification different from the one proposed
by Lefebvre himself.
|
The Internet of Things is an essential component in the growth of an
ecosystem that enables quick and precise judgments to be made for communication
on the battleground. The usage of the battlefield of things (BoT) is, however,
subject to several restrictions for a variety of reasons. There is a potential
for instances of replay, data manipulation, breaches of privacy, and other
similar occurrences. As a direct result of this, the implementation of a
security mechanism to protect the communication that occurs within BoT has
turned into an absolute requirement. To this aim, we propose a blockchain-based
solution that is both safe and private for use in communications inside the BoT
ecosystem. In addition, research is conducted on the benefits of integrating
blockchain technology and cybersecurity into BoT application implementations.
This work elaborates on the importance of integrating cybersecurity and
blockchain-based tools, techniques and methodologies for BoT.
|
This work presents a technique for statistically modeling errors introduced
by reduced-order models. The method employs Gaussian-process regression to
construct a mapping from a small number of computationally inexpensive `error
indicators' to a distribution over the true error. The variance of this
distribution can be interpreted as the (epistemic) uncertainty introduced by
the reduced-order model. To model normed errors, the method employs existing
rigorous error bounds and residual norms as indicators; numerical experiments
show that the method leads to a near-optimal expected effectivity in contrast
to typical error bounds. To model errors in general outputs, the method uses
dual-weighted residuals---which are amenable to uncertainty control---as
indicators. Experiments illustrate that correcting the reduced-order-model
output with this surrogate can improve prediction accuracy by an order of
magnitude; this contrasts with existing `multifidelity correction' approaches,
which often fail for reduced-order models and suffer from the curse of
dimensionality. The proposed error surrogates also lead to a notion of
`probabilistic rigor', i.e., the surrogate bounds the error with specified
probability.
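The core construction described here, a Gaussian-process map from a cheap error indicator to a distribution over the true error, can be sketched with scikit-learn; the synthetic training pairs and kernel choice below are illustrative assumptions, not the paper's reduced-order-model setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: residual-norm indicators rho paired with the
# measured true error of the reduced-order model at sampled configurations.
rng = np.random.default_rng(0)
rho = rng.uniform(0.1, 1.0, size=(50, 1))            # error indicators
err = 2.0 * rho[:, 0] + 0.1 * rng.normal(size=50)    # "true" errors (synthetic)

# Fit a GP mapping indicator -> distribution over the true error.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(rho, err)

# At a new configuration: the predictive mean can correct the ROM output,
# and the predictive std quantifies the (epistemic) ROM-induced uncertainty.
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
```

The predictive standard deviation is what gives the "probabilistic rigor" flavor: one can report an interval that bounds the error with a specified probability under the GP model.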
|
Using a high-sensitivity cavity perturbation technique (40 to 180 GHz), we
have probed the angle dependent interlayer magneto-electrodynamic response
within the vortex state of the extreme two-dimensional organic superconductor
k-(BEDT-TTF)2Cu(NCS)2. A previously reported Josephson plasma resonance [M.
Mola et al., Phys. Rev. B 62, 5965 (2000)] exhibits a dramatic re-entrant
behavior for fields very close (<1 degree) to alignment with the layers. In
this same narrow angle range, a new resonant mode develops which appears to be
associated with the non-equilibrium critical state. Fits to the angle
dependence of the Josephson plasma resonance provide microscopic information
concerning the superconducting phase correlation function within the vortex
state. We also show that the effect of an in-plane magnetic field on the
temperature dependence of the interlayer phase coherence is quite different
from what we have previously observed for perpendicular magnetic fields.
|
Using archival data of low-redshift (z < 0.01) Type Ia supernovae (SN Ia) and
recent observations of high-redshift (0.16 < z <0.64; Matheson et al. 2005) SN
Ia, we study the "uniformity" of the spectroscopic properties of nearby and
distant SN Ia. We find no difference in the measures we describe here. In this
paper, we base our analysis solely on line-profile morphology, focusing on
measurements of the velocity location of maximum absorption (vabs) and peak
emission (vpeak). We find that the evolution of vabs and vpeak for our sample
lines (Ca II 3945, Si II 6355, and S II 5454, 5640) is similar for both the
low- and high-redshift samples. We find that vabs for the weak S II 5454, 5640
lines, and vpeak for S II 5454, can be used to identify fast-declining [dm15 >
1.7] SN Ia, which are also subluminous. In addition, we give the first direct
evidence in two high-z SN Ia spectra of a double-absorption feature in Ca II
3945, an event also observed, though infrequently, in low-redshift SN Ia
spectra (6/22 SN Ia in our local sample). We report for the first time the
unambiguous and systematic intrinsic blueshift of peak emission of optical
P-Cygni line profiles in Type Ia spectra, by as much as 8000 km/s. All the
high-z SN Ia analyzed in this paper were discovered and followed up by the
ESSENCE collaboration, and are now publicly available.
|
Let $(X, \Delta)$ be a projective klt pair of dimension $2$ and let $L$ be a
nef $\mathbb{Q}$-divisor on $X$ such that $K_X + \Delta + L$ is nef. As a
complement to the Generalized Abundance Conjecture by Lazi\'c and Peternell, we
prove that if $K_X + \Delta$ and $L$ are not proportional modulo numerical
equivalence, then $K_X + \Delta + L$ is semiample. An example due to Lazi\'c
shows that this is no longer true in any dimension $n \ge 3$.
|
We use a tight-binding Bogoliubov-de Gennes (BdG) formalism to
self-consistently calculate the proximity effect, Josephson current, and local
density of states in ballistic graphene SNS Josephson junctions. Both short and
long junctions, with respect to the superconducting coherence length, are
considered, as well as different doping levels of the graphene. We show that
self-consistency does not notably change the current-phase relationship derived
earlier for short junctions using the non-self-consistent Dirac-BdG formalism
but predict a significantly increased critical current with a stronger junction
length dependence. In addition, we show that in junctions with no Fermi level
mismatch between the N and S regions superconductivity persists even in the
longest junctions we can investigate, indicating a diverging Ginzburg-Landau
superconducting coherence length in the normal region.
|
After the study of non-inertial frames in special relativity with emphasis on
the problem of clock synchronization (i.e. of how to define 3-space), an
overview is given of Einstein canonical gravity in the York canonical basis and
of its Hamiltonian Post-Minkowskian (PM) linearization in 3-orthogonal gauges.
It is shown that the York time (the trace of the extrinsic curvature of
3-spaces) is the inertial gauge variable describing the general relativistic
remnant of the clock synchronization gauge freedom. The dark matter implied by
the rotation curves of galaxies can be explained with a choice of the York time
implying a PM extension of the Newtonian celestial frame ICRS.
|
We prove that successors of singular limits of strongly compact cardinals
have the strong tree property. We also prove that $\aleph_{\omega+1}$ can
consistently satisfy the strong tree property.
|
We give a new proof of $l^2$ decoupling for the parabola inspired from
efficient congruencing. Making quantitative this proof matches a bound obtained
by Bourgain for the discrete restriction problem for the parabola. We
illustrate similarities and differences between this new proof and efficient
congruencing and the proof of decoupling by Bourgain and Demeter. We also show
where tools from decoupling such as $l^2 L^2$ decoupling, Bernstein, and ball
inflation come into play.
|
Mobile Crowdsensing has become a mainstream paradigm for researchers to
collect behavioral data from citizens at large scale. This valuable data can
be leveraged to create centralized repositories that can be used to train
advanced Artificial Intelligence (AI) models for various services that benefit
society in all aspects. Although decades of research have explored the
viability of Mobile Crowdsensing in terms of incentives, and many attempts
have been made to reduce the participation barriers, the overshadowing privacy
concerns regarding sharing personal data still remain. Recently a new pathway
has emerged to shift the MCS paradigm towards more privacy-preserving
collaborative learning, namely Federated Learning. In this paper, we posit a
first-of-its-kind framework for this emerging paradigm. We demonstrate the
functionalities of our framework through a case study of diversifying two
vision algorithms to learn the representation of ordinary sidewalk obstacles,
as part of enhancing navigation for the visually impaired.
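The Federated Learning idea invoked above can be sketched minimally: each client trains on its private data and only model weights are aggregated by a server (FedAvg-style averaging on synthetic linear-model clients; the paper's actual framework and vision models are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth model (synthetic)

def local_fit(n=200):
    # Each client's private data stays local; only fitted weights are shared.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Server-side federated averaging over a handful of clients: raw behavioral
# data is never centralized, only the model parameters are.
client_weights = [local_fit() for _ in range(5)]
global_w = np.mean(client_weights, axis=0)
```

In a real MCS deployment the local step would be several epochs of SGD on a deep vision model, and the averaging would be repeated over many communication rounds.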
|
Let $(R,\mathfrak{m})$ be a commutative Noetherian local ring, $\mathfrak{a}$
be a proper ideal of $R$ and $M$ be an $R$-complex in $\mathrm{D}(R)$. We prove
that if $M\in\mathrm{D}^f_\sqsubset(R)$ (respectively,
$M\in\mathrm{D}^f_\sqsupset(R)$), then
$\mathrm{id}_R\mathbf{R}\Gamma_{\mathfrak{a}}(M)=\mathrm{id}_R M$
(respectively, $\mathrm{fd}_R\mathbf{R}\Gamma_{\mathfrak{a}}(M)=\mathrm{fd}_R
M$). Next, it is proved that the right derived section functor of a complex
$M\in\mathrm{D}_\sqsubset(R)$ ($R$ is not necessarily local) can be computed
via a genuine left-bounded complex $G\simeq M$ of Gorenstein injective modules.
We show that if $R$ has a dualizing complex and $M$ is an $R$-complex in
$\mathrm{D}^f_\square(R)$, then
$\mathrm{Gfd}_R\mathbf{R}\Gamma_{\mathfrak{a}}(M)=\mathrm{Gfd}_R M$ and
$\mathrm{Gid}_R\mathbf{R}\Gamma_{\mathfrak{a}}(M)=\mathrm{Gid}_R M$. Also, we
show that if $M$ is a relative Cohen-Macaulay $R$-module with respect to
$\mathfrak{a}$ (respectively, Cohen-Macaulay $R$-module of dimension $n$), then
$\mathrm{Gfd}_R\mathbf{H}^{\mathrm{ht_M\mathfrak{a}}}_{\mathfrak{a}}(M)=\mathrm{Gfd}_RM+n$
(respectively,
$\mathrm{Gid}_R\mathbf{H}^n_{\mathfrak{m}}(M)=\mathrm{Gid}_RM-n$). The above
results generalize some known results and provide characterizations of
Gorenstein rings.
|
Two-dimensional (2D) antimony (Sb, antimonene) recently attracted interest
due to its peculiar electronic properties and its suitability as anode material
in next generation batteries. Sb however exhibits a large
polymorphic/allotropic structural diversity, which is also influenced by the
Sb's support. Thus understanding Sb heterostructure formation is key in 2D Sb
integration. Particularly 2D Sb/graphene interfaces are of prime importance as
contacts in electronics and electrodes in batteries. We thus study here
few-layered 2D Sb/graphene heterostructures by atomic-resolution (scanning)
transmission electron microscopy. We find the co-existence of two Sb
morphologies: First is a 2D growth morphology of layered beta-Sb with
beta-Sb(001)||graphene(001) texture. Second are one-dimensional (1D) Sb
nanowires which can be matched to beta-Sb with beta-Sb[2-21] perpendicular to
graphene(001) texture and are structurally also closely related to
thermodynamically non-preferred cubic Sb(001)||graphene(001). Importantly, both
Sb morphologies show rotational van-der-Waals epitaxy with the graphene
support. Both Sb morphologies are resilient against environmental bulk
oxidation, although superficial Sb-oxide layer formation merits consideration,
including the formation of novel epitaxial Sb2O3(111)/beta-Sb(001)
heterostructures. The exact Sb growth behavior is sensitive to the employed
processing
and substrate properties including, notably, the nature of the support
underneath the direct graphene support. This introduces the substrate
underneath a direct 2D support as a key parameter in 2D Sb heterostructure
formation. Our work provides insights into the rich phase and epitaxy landscape
in 2D Sb and 2D Sb/graphene heterostructures.
|
We consider the problem of learning stabilizable systems governed by
the nonlinear state equation $h_{t+1}=\phi(h_t,u_t;\theta)+w_t$. Here $\theta$ is
the unknown system dynamics, $h_t $ is the state, $u_t$ is the input and $w_t$
is the additive noise vector. We study gradient based algorithms to learn the
system dynamics $\theta$ from samples obtained from a single finite trajectory.
If the system is run by a stabilizing input policy, we show that
temporally-dependent samples can be approximated by i.i.d. samples via a
truncation argument by using mixing-time arguments. We then develop new
guarantees for the uniform convergence of the gradients of empirical loss.
Unlike existing work, our bounds are noise sensitive which allows for learning
ground-truth dynamics with high accuracy and small sample complexity. Together,
our results facilitate efficient learning of the general nonlinear system under
stabilizing policy. We specialize our guarantees to entry-wise nonlinear
activations and verify our theory in various numerical experiments.
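A toy sketch of this setting, assuming an entry-wise tanh activation, a small stabilizing random input policy, and illustrative dimensions (all placeholders, not the paper's experiments): plain gradient descent on the empirical squared loss over one finite trajectory recovers the dynamics parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
dh, du, T = 2, 1, 500
theta_true = 0.3 * rng.normal(size=(dh, dh + du))

def phi(theta, z):
    # Entry-wise nonlinearity applied to a linear map of state and input.
    return np.tanh(theta @ z)

# Generate a single finite trajectory under a small random (stabilizing) input.
H = np.zeros((T + 1, dh))
U = 0.1 * rng.normal(size=(T, du))
for t in range(T):
    z = np.concatenate([H[t], U[t]])
    H[t + 1] = phi(theta_true, z) + 0.01 * rng.normal(size=dh)

# Gradient descent on the empirical squared one-step prediction loss.
Z = np.hstack([H[:T], U])                        # (T, dh+du) regressors
theta = np.zeros_like(theta_true)
for _ in range(500):
    pred = np.tanh(Z @ theta.T)                  # one-step predictions
    grad = ((pred - H[1:]) * (1.0 - pred**2)).T @ Z
    theta -= 0.1 * grad
```

The temporal dependence between the samples in `Z` is exactly what the paper's mixing-time truncation argument handles when proving uniform convergence of these gradients.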
|
Information provision experiments are a popular way to study causal effects
of beliefs on behavior. Researchers estimate these effects using TSLS. I show
that existing TSLS specifications do not estimate the average partial effect;
they have weights proportional to belief updating in the first stage. If people
whose decisions depend on their beliefs gather information before the
experiment, the information treatment may shift beliefs more for people with
weak belief effects. This attenuates TSLS estimates. I propose researchers use
a local-least-squares (LLS) estimator that I show consistently estimates the
average partial effect (APE) under Bayesian updating, and apply it to Settele
(2022).
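A minimal synthetic illustration of the TSLS specification discussed (with a homogeneous belief effect, so TSLS is consistent in this toy setup; the LLS estimator itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.integers(0, 2, n)                        # information treatment (instrument)
belief0 = rng.normal(0, 1, n)                    # prior belief
belief = belief0 + 0.8 * z + 0.3 * rng.normal(0, 1, n)   # first stage: treatment shifts beliefs
y = 1.5 * belief + rng.normal(0, 1, n)           # behavior with a homogeneous belief effect

# Two-stage least squares: regress belief on z, then y on the fitted belief.
X1 = np.column_stack([np.ones(n), z])
b1 = np.linalg.lstsq(X1, belief, rcond=None)[0]
belief_hat = X1 @ b1
X2 = np.column_stack([np.ones(n), belief_hat])
b2 = np.linalg.lstsq(X2, y, rcond=None)[0]
# b2[1] is the TSLS estimate of the belief effect (1.5 in this synthetic setup)
```

The paper's point is that when the belief effect is heterogeneous and correlated with first-stage updating, this estimator weights individuals by how much the treatment moves their beliefs, which can attenuate the estimate relative to the average partial effect.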
|
Using Fisher information matrices, we forecast the uncertainties $\sigma_M$
on the measurement of a "Planet X" at heliocentric distance $d_X$ via its tidal
gravitational field's action on the known planets. Using planetary measurements
currently in hand, including ranging from the Juno, Cassini, and Mars-orbiting
spacecraft, we forecast a median uncertainty (over all possible sky positions)
of $\sigma_M=0.22M_\oplus (d_X/400\,\textrm{AU})^3$. A definitive $(5\sigma)$
detection of a $5M_\oplus$ Planet X at $d_X=400$ AU should be possible over the
full sky but over only 5% of the sky at $d_X=800$ AU. The gravity of an
undiscovered Earth- or Mars-mass object should be detectable over 90% of the
sky to a distance of 260 or 120 AU, respectively. Upcoming Mars ranging
improves these limits only slightly. We also investigate the power of
high-precision astrometry of $\approx8000$ Jovian Trojans over the 2023--2035
period from the upcoming Legacy Survey of Space and Time (LSST). We find that
the dominant systematic errors in optical Trojan astrometry (photocenter
motion, non-gravitational forces, and differential chromatic refraction) can be
solved internally with minimal loss of information. The Trojan data allow
useful cross-checks with Juno/Cassini/Mars ranging, but do not significantly
improve the best-achievable $\sigma_M$ values until they are $\gtrsim10\times$
more accurate than expected from LSST. The ultimate limiting factor in searches
for a Planet X tidal field is confusion with the tidal field created by the
fluctuating quadrupole moment of the Kuiper Belt as its members orbit. This
background will not, however, become the dominant source of Planet X
uncertainty until the data get substantially better than they are today.
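The Fisher-matrix forecasting machinery behind a $\sigma_M$ estimate can be sketched in a few lines; the design matrix and noise level below are placeholders, not the paper's ephemeris model:

```python
import numpy as np

# Toy Fisher forecast: given partial derivatives of N measurements with
# respect to the parameters (the tidal-field amplitude of interest plus
# nuisance parameters), the forecast uncertainty on parameter 0 is the square
# root of the corresponding diagonal element of the inverse Fisher matrix.
rng = np.random.default_rng(0)
n_obs, n_par = 200, 3
J = rng.normal(size=(n_obs, n_par))    # d(model)/d(params), hypothetical
sigma_obs = 0.05                       # per-measurement noise (hypothetical)

F = J.T @ J / sigma_obs**2             # Fisher information matrix
cov = np.linalg.inv(F)                 # forecast parameter covariance
sigma_M = np.sqrt(cov[0, 0])           # marginalized uncertainty on parameter 0
```

Marginalizing over the nuisance parameters (inverting the full matrix rather than taking $1/\sqrt{F_{00}}$) is what captures the degeneracy with, e.g., the Kuiper Belt quadrupole discussed above.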
|
The phase space of a Hamiltonian system is symplectic. However, the
post-Newtonian Hamiltonian formulation of spinning compact binaries in existing
publications does not have this property, when position, momentum and spin
variables $[X, P, S_1, S_2]$ compose its phase space. This may give a
convenient application of perturbation theory to the derivation of the
post-Newtonian formulation, but it also makes the classical theory of
symplectic Hamiltonian systems hard to apply, especially in diagnosing
integrability and nonintegrability from a dynamical system theory
perspective. To completely understand the dynamical characteristic of the
integrability or nonintegrability for the binary system, we construct a set of
conjugate spin variables and reexpress the spin Hamiltonian part so as to make
the complete Hamiltonian formulation symplectic. As a result, it is directly
shown with the least number of independent isolating integrals that a
conservative Hamiltonian compact binary system with both one spin and the pure
orbital part to any post-Newtonian order is typically integrable and not
chaotic. A conservative binary system consisting of two spins, restricted to
the leading-order spin-orbit interaction and the pure orbital part at all
post-Newtonian orders, is also integrable, independently of the mass ratio. For
all other various spinning cases, the onset of chaos is possible.
|
We extend to the beta-divergence (Itakura-Saito) case beta=0 the comparative
bi-stochasticization analyses, previously conducted (arXiv:1208.3428) for the
(Kullback-Leibler) beta=1 and (squared-Euclidean) beta=2 cases, of the
3,107-county 1995-2000 U.S. migration network. A heuristic, "greedy"
algorithm is devised. While the largest 25,329 entries of the 735,531 non-zero
entries of the bi-stochasticized table - in the beta=1 case - are required to
complete the widely-applied two-stage (double-standardization and
strong-component hierarchical clustering) procedure, 105,363 of the 735,531 are
needed (reflective of greater uniformity of entries) in the beta=0 instance.
The North Carolina counties of Mecklenburg (Charlotte) and Wake (Raleigh) are
considerably relatively more cosmopolitan in the beta=0 study. The Colorado
county of El Paso (Colorado Springs) replaces the Florida Atlantic county of
Brevard (the "Space Coast") as the most cosmopolitan, with Brevard becoming the
second-most. Honolulu County splinters away from the other four (still-grouped)
Hawaiian counties, becoming the fifth most cosmopolitan county nation-wide. The
five counties of Rhode Island remain intact as a regional entity, but the eight
counties of Connecticut fragment, leaving only five counties clustered.
|
We report on the development and characterization of the low-noise, low
power, mixed analog-digital SIRIUS ASICs for both the LAD and WFM X-ray
instruments of LOFT. The ASICs we developed are reading out large area silicon
drift detectors (SDD). Stringent requirements in terms of noise (ENC of 17 e-
to achieve an energy resolution on the LAD of 200 eV FWHM at 6 keV) and power
consumption (650 {\mu}W per channel) were the basis for the ASIC design. The
SIRIUS ASICs are developed to match the SDD detector characteristics:
16-channel ASICs adapted for the LAD (970 micron pitch) and 64-channel ASICs
for the WFM (145 micron pitch) will be fabricated. The ASICs were developed
with the 180 nm
mixed technology of TSMC.
|
We study the mass spectrum and decay constants of light and heavy mesons in a
soft-wall holographic approach, using the correspondence of string theory in
Anti-de Sitter space and conformal field theory in physical space-time.
|
We present near-infrared $J$ band photometric observations of the
intermediate polar WX Pyx. The frequency analysis indicates the presence of a
period at 1559.2 $\pm$ 0.2 seconds which is attributed to the spin period of
the white dwarf. The spin period inferred from the infrared data closely
matches that determined from X-ray and optical observations. WX Pyx is a system
whose orbital period has not been measured directly and which is not too well
constrained. From the IR observations, a likely peak at 5.30 $\pm$ 0.02 hour is
seen in the power spectrum of the object. It is suggested that this corresponds
to the orbital period of the system. If this is indeed the true orbital
period, some of the system physical parameters may be estimated. Our analysis
indicates that the secondary star is of M2 spectral type and the distance to
the object is 1.53 kpc. An upper limit of 30$^\circ$ for the angle of
inclination of the system is suggested. The mass transfer rate and the magnetic
moment of the white dwarf are estimated to be (0.95 - 1.6)$\times 10^{-9}
M_\odot$ yr$^{-1}$ and (1.9 - 2.4)$ \times 10^{33}$ G cm$^{3}$ respectively.
|
In this short paper, the authors report a new computational approach in the
context of Density Functional Theory (DFT). It is shown how it is possible to
speed up the self-consistent cycle (iteration) characterizing one of the most
well-known DFT implementations: FLAPW. Generating the Hamiltonian and overlap
matrices and solving the associated generalized eigenproblems $Ax = \lambda Bx$
constitute the two most time-consuming fractions of each iteration. Two
promising directions, implementing the new methodology, are presented that will
ultimately improve the performance of the generalized eigensolver and save
computational time.
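As a concrete reference point for the second time-consuming fraction, the generalized eigenproblem $Ax = \lambda Bx$ with symmetric $A$ and positive-definite $B$ can be solved directly with LAPACK-backed routines; the toy matrices below stand in for the FLAPW Hamiltonian and overlap:

```python
import numpy as np
from scipy.linalg import eigh

# Small synthetic stand-ins for the Hamiltonian A and overlap B matrices
# (in FLAPW these are dense Hermitian, with B positive definite).
rng = np.random.default_rng(0)
n = 6
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                       # symmetric "Hamiltonian"
C = rng.normal(size=(n, n))
B = C @ C.T + n * np.eye(n)             # symmetric positive-definite "overlap"

# Solve the generalized eigenproblem A x = lambda B x in one call.
w, V = eigh(A, B)

# Residual of the defining relation for the lowest eigenpair.
resid = np.linalg.norm(A @ V[:, 0] - w[0] * B @ V[:, 0])
```

The self-consistent cycle repeats this solve at every iteration, which is why speeding up the generalized eigensolver translates directly into saved wall-clock time.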
|
Training classification models on imbalanced data tends to result in bias
towards the majority class. In this paper, we demonstrate how variable
discretization and cost-sensitive logistic regression help mitigate this bias
on an imbalanced credit scoring dataset, and further show the application of
the variable discretization technique on the data from other domains,
demonstrating its potential as a generic technique for classifying imbalanced
data beyond credit scoring. The performance measurements include ROC curves,
Area under ROC Curve (AUC), Type I Error, Type II Error, accuracy, and F1
score. The results show that proper variable discretization and cost-sensitive
logistic regression with the best class weights can reduce the model bias
and/or variance. From the perspective of the algorithm, cost-sensitive logistic
regression is beneficial for increasing the value of predictors even if they
are not in their optimized forms while maintaining monotonicity. From the
perspective of predictors, the variable discretization performs better than
cost-sensitive logistic regression, provides more reasonable coefficient
estimates for predictors which have nonlinear relationships against their
empirical logit, and is robust to penalty weights on misclassifications of
events and non-events determined by their a priori proportions.
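The two techniques can be combined in a few lines with scikit-learn; the synthetic imbalanced data below is a stand-in for the credit-scoring set, and the specific binning and weighting choices are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Synthetic imbalanced data (~5% events) standing in for a credit dataset.
X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Variable discretization (quantile binning) feeding a cost-sensitive logistic
# regression; class_weight='balanced' reweights inversely to class frequency.
model = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="quantile"),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(Xtr, ytr)
auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
```

In practice the class weights would be tuned (rather than fixed at 'balanced') and the bin edges chosen to preserve monotonicity against the empirical logit, as the abstract describes.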
|
Randomness and frustration are considered to be the key ingredients for the
existence of spin glass (SG) phase. In a canonical system, these ingredients
are realized by the random mixture of ferromagnetic (FM) and antiferromagnetic
(AF) couplings. The study by Bartolozzi {\it et al.} [Phys. Rev. B {\bf 73},
224419 (2006)], who observed the presence of an SG phase in the AF Ising model
on a scale-free network (SFN), is stimulating. It is a new type of SG system
where
randomness and frustration are not caused by the presence of FM and AF
couplings. To further elaborate this type of system, here we study Heisenberg
model on AF SFN and search for the SG phase. The canonical SG Heisenberg model
is not observed in $d$-dimensional regular lattices for $d \leq 3$. We can
make an analogy for the connectivity density ($m$) of SFN with the
dimensionality of the regular lattice. It should be plausible to find the
critical value of $m$ for the existence of SG behaviour, analogous to the lower
critical dimension ($d_l$) for the canonical SG systems. Here we study system
with $m=2,3,4$ and $5$. We used the replica-exchange Monte Carlo method and
calculated the SG order parameter. We observed an SG phase for each value of
$m$ and estimated its corresponding critical temperature.
|
We utilize classical molecular dynamics to study surface effects on the
piezoelectric properties of ZnO nanowires as calculated under uniaxial loading.
An important point to our work is that we have utilized two types of surface
treatments, those of charge compensation and surface passivation, to eliminate
the polarization divergence that otherwise occurs due to the polar (0001)
surfaces of ZnO. In doing so, we find that if appropriate surface treatments
are utilized, the elastic modulus and the piezoelectric properties for ZnO
nanowires having a variety of axial and surface orientations are all reduced as
compared to the bulk value as a result of polarization-reduction in the polar
[0001] direction. The reduction in effective piezoelectric constant is found to
be independent of the expansion or contraction of the polar (0001) surface in
response to surface stresses. Instead, the surface polarization and thus
effective piezoelectric constant is substantially reduced due to a reduction in
the bond length of the Zn-O dimer closest to the polar (0001) surface.
Furthermore, depending on the nanowire axial orientation, we find in the
absence of surface treatment that the piezoelectric properties of ZnO are
either effectively lost due to unphysical transformations from the wurtzite to
non-piezoelectric d-BCT phases, or also become smaller with decreasing nanowire
size. The overall implication of this study is that if enhancement of the
piezoelectric properties of ZnO is desired, then continued miniaturization of
square or nearly square cross section ZnO wires to the nanometer scale is not
likely to achieve this result.
|
We analyze the dynamics of the power grid with a high penetration of
renewable energy sources using the ORNL-PSERC-Alaska (OPA) model. In particular
we consider the power grid of the Balearic Islands with a high share of solar
photovoltaic power as a case study. Day-to-day fluctuations of the solar
generation and the use of storage are included in the model. Resilience is
analyzed through the blackout distribution and performance is measured as the
average fraction of the demand covered by solar power generation. We find that
with the present consumption patterns and moderate storage, solar generation
can replace conventional power plants without compromising reliability up to
$30\%$ of the total installed capacity. We also find that, using source
redundancy, it is possible to cover up to $80\%$ or more of the demand with
solar plants, while keeping the risk similar to that with fully conventional
generation. However, this requires oversizing the installed solar power to at
least $2.5$ times the average demand. The potential of wind energy is also
briefly discussed.
|
We have examined the teaching practices of faculty members who adopted
research-based instructional strategies as part of the Carl Wieman Science
Education Initiative (CWSEI) at the University of British Columbia. Of the 70
who adopted such strategies with the support of the CWSEI program, only one
subsequently stopped using them. This is a tiny fraction of the 33%
stopping rate for physics faculty in general [Henderson, Dancy, and
Niewiadomska-Bugaj, PRST-PER, 8, 020104 (2012)]. Nearly all of these UBC
faculty members who had an opportunity to subsequently use RBIS in other
courses (without CWSEI support) did so. We offer possible explanations for the
difference in quitting rates. The direct support of the faculty member by a
trained science education specialist in the discipline during the initial
implementation of the new strategies may be the most important factor.
|
The magnetic susceptibility of a new one-dimensional, S=1 system, the
vanadium oxide LiVGe2O6, has been measured. In contrast to previously studied
S=1 chains, it exhibits an abrupt drop at 22 K, typical of a spin-Peierls
transition, and is
consistent with a gapless spectrum above this temperature. We propose that this
behaviour is due to the presence of a significant biquadratic exchange
interaction, a suggestion supported by quantum chemistry calculations that take
into account the quasi-degeneracy of the t2g levels.
|
In this article we consider a consistent matrix equation $AXB = C$ for which a
particular solution $X_{0}$ is given, and we present a new form of the general
solution, which contains both reproductive and non-reproductive solutions
(depending on the form of the particular solution $X_{0}$). We also analyse the
solutions of some matrix systems using the concept of reproductivity and we
give a new form of the condition for the consistency of the matrix equation
$AXB=C$.
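As a point of comparison (this is the classical Penrose representation, not the new form proposed in the article), the general solution of a consistent equation $AXB = C$ can be written as $X = A^{+}CB^{+} + Z - A^{+}AZBB^{+}$ for an arbitrary matrix $Z$, where $A^{+}$ denotes the Moore-Penrose inverse. The sketch below checks this numerically on arbitrary matrices; all shapes and the random seed are illustrative choices.

```python
import numpy as np

# Construct a consistent system A X B = C by fixing a particular solution X0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X0 = rng.standard_normal((3, 5))
C = A @ X0 @ B  # consistency is guaranteed by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Consistency criterion: A A+ C B+ B = C.
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# Classical general solution: X = A+ C B+ + Z - A+ A Z B B+, with Z arbitrary.
Z = rng.standard_normal((3, 5))
X = Ap @ C @ Bp + Z - Ap @ A @ Z @ B @ Bp
assert np.allclose(A @ X @ B, C)
```

Every choice of $Z$ yields a solution, since $A(A^{+}CB^{+})B = C$ by consistency and the remaining terms cancel via $AA^{+}A = A$ and $BB^{+}B = B$.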
|
In this paper, we propose the beginnings of a formal framework for modeling
narrative \textit{qua} narrative. Our framework affords the ability to discuss
key qualities of stories and their communication, including the flow of
information from a Narrator to a Reader, the evolution of a Reader's story
model over time, and Reader uncertainty. We demonstrate its applicability to
computational narratology by giving explicit algorithms for measuring the
accuracy with which information was conveyed to the Reader, together with two
novel measures of story coherence.
|
We investigate the effect of quantum metric fluctuations on qubits that are
gravitationally coupled to a background spacetime. In our first example, we
study the propagation of a qubit in flat spacetime whose metric is subject to
flat quantum fluctuations with a Gaussian spectrum. We find that these
fluctuations cause two changes in the state of the qubit: they lead to a phase
drift, as well as the expected exponential suppression (decoherence) of the
off-diagonal terms in the density matrix. Second, we calculate the
decoherence of a qubit in a circular orbit around a Schwarzschild black hole.
The no-hair theorems suggest a quantum state for the metric in which the black
hole's mass fluctuates with a thermal spectrum at the Hawking temperature.
Again, we find that the orbiting qubit undergoes decoherence and a phase drift
that both depend on the temperature of the black hole. Third, we study the
interaction of coherent and squeezed gravitational waves with a qubit in
uniform motion. Finally, we investigate the decoherence of an accelerating
qubit in Minkowski spacetime due to the Unruh effect. In this case decoherence
is not due to fluctuations in the metric, but instead is caused by coupling
(which we model with a standard Hamiltonian) between the qubit and the thermal
cloud of Unruh particles bathing it. When the accelerating qubit is entangled
with a stationary partner, the decoherence should induce a corresponding loss
in teleportation fidelity.
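The generic effect described above — a phase drift together with exponential suppression of the off-diagonal density-matrix terms — can be sketched for a single qubit under a pure-dephasing channel. The rates `delta` and `Gamma` below are arbitrary placeholders, not values computed in the paper.

```python
import numpy as np

# Illustrative dephasing model: the coherence (off-diagonal) term acquires a
# phase and decays, while the populations (diagonal) are untouched:
#   rho_01(t) = rho_01(0) * exp(i*delta*t) * exp(-Gamma*t)
delta, Gamma = 2.0, 0.5  # assumed drift and decoherence rates (arbitrary units)
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)  # the |+><+| state

def evolve(rho, t):
    """Apply phase drift and dephasing to a single-qubit density matrix."""
    rho_t = rho.copy()
    factor = np.exp(1j * delta * t) * np.exp(-Gamma * t)
    rho_t[0, 1] *= factor
    rho_t[1, 0] *= np.conj(factor)
    return rho_t

rho_t = evolve(rho0, t=3.0)
# Populations are preserved; the coherence magnitude decays as exp(-Gamma*t).
assert np.allclose(np.diag(rho_t), np.diag(rho0))
assert np.isclose(abs(rho_t[0, 1]), 0.5 * np.exp(-Gamma * 3.0))
```

In this simple form the decoherence rate and the drift act independently: `Gamma` shrinks the magnitude of the coherence, while `delta` only rotates its phase.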
|
Beams of light with a large topological charge significantly change their
spatial structure when they are strongly focused. Physically, this can be
explained by an emerging electromagnetic field component in the direction of
propagation, which is neglected in the simplified scalar wave picture in
optics. Here we ask: Is this a specific photonic behavior, or can similar
phenomena also be predicted for other species of particles? We show that the
same modification of the spatial structure exists for relativistic electrons as
well as for focused gravitational waves. However, this is for different
physical reasons: for electrons, which are described by the Dirac equation, the
spatial structure changes due to spin-orbit coupling in the relativistic
regime. In gravitational waves described with linearized general relativity,
the curvature of space-time between the transverse and propagation direction
leads to the modification of the spatial structure. Thus, this universal
phenomenon exists for both massive and massless elementary particles with spin
1/2, 1, and 2. It would be very interesting to know whether other types of
particles, such as composite systems (neutrons or C$_{60}$) or neutrinos, show
a similar behaviour, and how this phenomenon can be explained in a unified
physical way.
|