The topology of the magnetic interactions of the copper spins in the
nitrosonium nitratocuprate (NO)[Cu(NO3)3] suggests that it could be a
realization of the Nersesyan-Tsvelik model, whose ground state was argued to be
either a resonating valence bond (RVB) state or a valence bond crystal (VBC).
Measurements of thermodynamic and resonant properties reveal behavior
inherent to low-dimensional spin S = 1/2 systems and indeed provide no
evidence for the formation of long-range magnetic order down to 1.8 K.
|
Fine-grained visual categorization (FGVC), which aims at classifying objects
with small inter-class variances, has been significantly advanced in recent
years. However, ultra-fine-grained visual categorization (ultra-FGVC), which
aims to identify subclasses with extremely similar patterns, has received much
less attention. In ultra-FGVC datasets, the samples per category become scarce
as the granularity moves down, which leads to overfitting. Moreover, the
differences among categories are too subtle to distinguish even for
professional experts. Motivated by these issues, this
paper proposes a novel compositional feature embedding and similarity metric
(CECS). Specifically, in the compositional feature embedding module, we
randomly select patches in the original input image, and these patches are then
replaced by patches from the images of different categories or masked out. Then
the replaced and masked images are used to augment the original input images,
which provides more diverse samples and thus largely alleviates the overfitting
problem resulting from limited training samples. Besides, learning with diverse
samples forces the model to learn not only the most discriminative features but
also other informative features in remaining regions, enhancing the
generalization and robustness of the model. In the compositional similarity
metric module, a new similarity metric is developed to improve the
classification performance by narrowing the intra-category distance and
enlarging the inter-category distance. Experimental results on two ultra-FGVC
datasets and one FGVC dataset with recent benchmark methods consistently
demonstrate that the proposed CECS method achieves state-of-the-art
performance.
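
The patch replacement-and-masking augmentation lends itself to a compact
sketch. Below is a minimal NumPy version, assuming square patches on a regular
grid; the patch size, patch count and mask value are illustrative choices, not
the paper's settings:

import numpy as np

def compose_augment(image, donor, patch=32, n_patches=4, mask_value=0, rng=None):
    # Randomly replace grid patches with patches from a different-category
    # image (donor), or mask them out, as in compositional augmentation.
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    for cell in rng.choice(rows * cols, size=n_patches, replace=False):
        r0, c0 = (cell // cols) * patch, (cell % cols) * patch
        if rng.random() < 0.5:  # replace from the other category
            out[r0:r0 + patch, c0:c0 + patch] = donor[r0:r0 + patch, c0:c0 + patch]
        else:                   # mask out
            out[r0:r0 + patch, c0:c0 + patch] = mask_value
    return out

Training on such composites alongside the originals is what pushes the model to
exploit informative regions beyond the most discriminative ones.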
|
Let $k$ be a field and $A$ a finite-dimensional $k$-algebra of global
dimension $\leq 2$. We construct a triangulated category $\mathcal{C}_A$ associated to
$A$ which, if $A$ is hereditary, is triangle equivalent to the cluster category
of $A$. When $\mathcal{C}_A$ is $\operatorname{Hom}$-finite, we prove that it is 2-CY and endowed
with a canonical cluster-tilting object. This new class of categories contains
some of the stable categories of modules over a preprojective algebra studied
by Geiss-Leclerc-Schr{\"o}er and by Buan-Iyama-Reiten-Scott. Our results also
apply to quivers with potential. Namely, we introduce a cluster category
$\mathcal{C}_{(Q,W)}$ associated to a quiver with potential $(Q,W)$. When it is
Jacobi-finite we prove that it is endowed with a cluster-tilting object whose
endomorphism algebra is isomorphic to the Jacobian algebra $\mathcal{J}(Q,W)$.
|
We review the main results of the study of the Standard Model propagating in
two universal extra dimensions. Gauge bosons give rise to heavy spin-1 and
spin-0 particles. The latter constitute the lightest Kaluza-Klein (KK) particle
at level (1,0). Particles at level (1,1) can be produced in the s-channel. The main signals
at the Tevatron and the LHC will be several $t\bar t$ resonances.
|
We consider the large-$N$ asymptotics of a system of discrete orthogonal
polynomials on an infinite regular lattice of mesh $\frac{1}{N}$, with weight
$e^{-NV(x)}$, where $V(x)$ is a real analytic function with sufficient growth
at infinity. The proof is based on the formulation of an interpolation problem for
discrete orthogonal polynomials, which can be converted to a Riemann-Hilbert
problem, and steepest descent analysis of this Riemann-Hilbert problem.
|
Detecting spoofing attempts of automatic speaker verification (ASV) systems
is challenging, especially when using only one modeling approach. For
robustness, we use both deep neural networks and traditional machine learning
models and combine them as ensemble models through logistic regression. They
are trained to detect logical access (LA) and physical access (PA) attacks on
the dataset released as part of the ASV Spoofing and Countermeasures Challenge
2019. We propose dataset partitions that ensure different attack types are
present during training and validation to improve system robustness. Our
ensemble model outperforms all our single models and the baselines from the
challenge for both attack types. We investigate why some models on the PA
dataset strongly outperform others and find that spoofed recordings in the
dataset tend to have longer silences at the end than genuine ones. By removing
them, the PA task becomes much more challenging, with the tandem detection cost
function (t-DCF) of our best single model rising from 0.1672 to 0.5018 and
equal error rate (EER) increasing from 5.98% to 19.8% on the development set.
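
Score-level fusion by logistic regression, as used for the ensembles above, can
be sketched as follows (a minimal scikit-learn version under the assumption
that each single model already outputs one score per utterance; all names are
illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_scores(dev_scores, dev_labels, eval_scores):
    # dev_scores: (n_utterances, n_models) matrix of single-model scores on a
    # held-out partition; dev_labels: 1 = spoofed, 0 = genuine.
    fusion = LogisticRegression()
    fusion.fit(dev_scores, dev_labels)
    # the spoof-class probability serves as the fused detection score
    return fusion.predict_proba(eval_scores)[:, 1]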
|
Data structures are critical in any data-driven scenario, but they are
notoriously hard to design due to a massive design space and the dependence of
performance on workload and hardware, which evolve continuously. We present a
design engine, the Data Calculator, which enables interactive and
semi-automated design of data structures. It brings two innovations. First, it
offers a set of fine-grained design primitives that capture the first
principles of data layout design: how data structure nodes lay data out, and
how they are positioned relative to each other. This allows for a structured
description of the universe of possible data structure designs that can be
synthesized as combinations of those primitives. The second innovation is
computation of performance using learned cost models. These models are trained
on diverse hardware and data profiles and capture the cost properties of
fundamental data access primitives (e.g., random access). With these models, we
synthesize the performance cost of complex operations on arbitrary data
structure designs without having to: 1) implement the data structure, 2) run
the workload, or even 3) access the target hardware. We demonstrate that the
Data Calculator can assist data structure designers and researchers by
accurately answering rich what-if design questions on the order of a few
seconds or minutes, i.e., computing how the performance (response time) of a
given data structure design is impacted by variations in the: 1) design, 2)
hardware, 3) data, and 4) query workloads. This makes it effortless to test
numerous designs and ideas before embarking on lengthy implementation,
deployment, and hardware acquisition steps. We also demonstrate that the Data
Calculator can synthesize entirely new designs, auto-complete partial designs,
and detect suboptimal design choices.
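
The composition of operation costs from learned models of fundamental access
primitives can be sketched generically (hypothetical primitive names and a
generic regressor, not the Data Calculator's actual interface):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_primitive_model(profile_features, measured_latencies):
    # One model per primitive (e.g. "random_access"), trained offline on
    # hardware/data profiles paired with benchmarked latencies.
    return GradientBoostingRegressor().fit(profile_features, measured_latencies)

def synthesize_cost(plan, profile, models):
    # Cost of a complex operation as the sum of learned primitive costs,
    # with `plan` a list of (primitive_name, multiplicity) pairs -- no
    # implementation, workload run, or target hardware needed.
    x = np.asarray(profile, dtype=float).reshape(1, -1)
    return sum(n * models[name].predict(x)[0] for name, n in plan)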
|
It may prove useful in cosmology to understand the behavior of the energy
distribution in a scalar field that interacts only with gravity and with itself
by a pure quartic potential, because if such a field existed it would be
gravitationally produced, as a squeezed state, during inflation. It is known
that the mean energy density in such a field after inflation varies with the
expansion of the universe in the same way as radiation. I show that if the
field initially is close to homogeneous, with small energy density contrast
$\delta\rho/\rho$ and coherence length $L$, the energy density fluctuations
behave like acoustic oscillations in an ideal relativistic fluid for a time on
the order of $L/|\delta\rho/\rho|$. This ends with the appearance of features
that resemble shock waves, but which interact in a close-to-elastic way that reversibly
disturbs the energy distribution.
|
In recent years, collisional charging has been proposed to promote the growth
of pebbles in early phases of planet formation. Ambient pressure in
protoplanetary disks spans a wide range from below $10^{-9}$ mbar up to well
beyond a millibar. Yet, experiments on collisional charging of same-material
surfaces have so far only been conducted under Earth atmospheric pressure,
under Martian pressure, and more generally down to $10^{-2}$ mbar. This work
presents the first pressure-dependent charge measurements of same-material
collisions between
$10^{-8}$ and $10^3$ mbar. Strong charging occurs down to the lowest pressure.
In detail, our observations show a strong similarity to the pressure dependence
of the breakdown voltage between two electrodes and we suggest that breakdown
also determines the maximum charge on colliding grains in protoplanetary disks.
We conclude that collisional charging can occur in all parts of protoplanetary
disks relevant for planet formation.
|
We investigate, theoretically and experimentally, absorption on an
excited-state atomic transition in a thermal vapor where the lower state is
coherently pumped. We find that the transition linewidth can be sub-natural,
i.e. less than the combined linewidth of the lower and upper states. For the
specific case of the $6P_{3/2} \rightarrow 7S_{1/2}$ transition in
room-temperature cesium
vapor, we measure a minimum linewidth of 6.6 MHz compared with the natural
width of 8.5 MHz. Using perturbation techniques, an expression for the complex
susceptibility is obtained which provides excellent agreement with the measured
spectra.
|
We investigate the configuration dynamics of a flexible polymer chain in a
bath of active particles with dynamic chirality, i.e., particles that rotate
with a deterministic angular velocity $\omega$ besides self-propelling, by
Langevin dynamics simulations in two-dimensional space. Particular attention is
paid to how the gyration radius $R_{g}$ changes with the propulsion velocity
$v_{0}$, the angular velocity $\omega$, and the chain length. We find that in a chiral
bath with a typical nonzero $\omega$, the chain first collapses into a small
compact cluster and then swells again with increasing $v_{0}$, in sharp
contrast to the case of a normal achiral bath $(\omega=0)$, wherein a flexible
chain swells with increasing $v_{0}$. More interestingly, the polymer can even
form a closed ring if the chain length is large enough, which may oscillate
with the cluster
if $v_{0}$ is large. Consequently, the gyration radius $R_{g}$ shows nontrivial
non-monotonic dependence on $v_{0}$, i.e., it exhibits a minimum for
relatively short chains, and two minima with a maximum in between for longer
chains. Our analysis shows that such interesting phenomena are mainly due to
the competition between two roles played by the chiral active bath: while the
persistent motion due to particle activity tends to stretch the chain, the
circular motion of the particles may lead to an effective osmotic pressure that
tends to collapse the chain. In addition, the radius of the circular motion
$R_{0}=v_{0}/\omega$ plays an important role, in that the compact clusters and
closed rings are both observed at nearly the same values of $R_{0}$ for
different $\omega$.
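
For reference, the gyration radius tracked in these simulations is computed
from the monomer coordinates as follows (a minimal NumPy sketch):

import numpy as np

def gyration_radius(positions):
    # R_g: root-mean-square distance of the monomers from the chain's
    # center of mass; positions is an (N, 2) array in 2D.
    com = positions.mean(axis=0)
    return np.sqrt(((positions - com) ** 2).sum(axis=1).mean())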
|
The paper is devoted to numerical solutions of fractional PDEs based on
their probabilistic interpretation, that is, we construct approximate solutions
via certain Monte Carlo simulations. The main results represent the upper bound
of errors between the exact solution and the Monte Carlo approximation, the
estimate of the fluctuation via an appropriate central limit theorem (CLT) and
the construction of confidence intervals. Moreover, we provide rates of
convergence in the CLT via Berry-Esseen type bounds. Concrete numerical
computations and illustrations are included.
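
The CLT-based confidence-interval construction can be illustrated generically;
the sketch below assumes an i.i.d. sampler for the underlying stochastic
representation and uses the normal 95% quantile, whereas the paper's bounds are
specific to the fractional setting:

import numpy as np

def mc_estimate_with_ci(sampler, n, z=1.96, rng=None):
    # Monte Carlo mean of n i.i.d. draws with a CLT-based ~95% interval;
    # sampler(rng) returns one realization of the functional.
    rng = np.random.default_rng() if rng is None else rng
    draws = np.array([sampler(rng) for _ in range(n)])
    mean, half = draws.mean(), z * draws.std(ddof=1) / np.sqrt(n)
    return mean, (mean - half, mean + half)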
|
We review the most recent results from experiments studying systems
containing charmed quarks. The selection reflects the presenter's bias, and
there is an emphasis on decays of open charm. We discuss precision measurements
of various sorts, various new states in the charmonium system, measurements
aimed at testing Lattice QCD, and the latest searches for charm mixing. We
conclude with a discussion of upcoming experiments at existing and future
facilities.
|
The study of transiently accreting neutron stars provides a powerful means to
elucidate the properties of neutron star crusts. We present extensive numerical
simulations of the evolution of the neutron star in the transient low-mass
X-ray binary MAXI J0556--332. We model nearly twenty observations obtained
during the quiescence phases after four different outbursts of the source in
the past decade, considering the heating of the star during accretion by the
deep crustal heating mechanism complemented by some shallow heating source. We
show that cooling data are consistent with a single source of shallow heating
acting during the last three outbursts, while a very different and powerful
energy source is required to explain the extremely high effective temperature
of the neutron star, ~350 eV, when it exited the first observed outburst. We
propose that a gigantic thermonuclear explosion, a "hyperburst" from unstable
burning of neutron-rich isotopes of oxygen or neon, occurred a few weeks before
the end of the first outburst, releasing 10^44 ergs at densities of the order
of 10^11 g/cm^3. This would be the first observation of a hyperburst, and such
events would be extremely rare, as the build-up of the exploding layer requires
about a millennium of accretion history. Despite its large energy output, the
hyperburst did not produce, due to its depth, any noticeable increase in
luminosity during the accretion phase and is only identifiable by its imprint
on the later cooling of the neutron star.
|
Attribute reduction is one of the most important research topics in the
theory of rough sets, and many rough sets-based attribute reduction methods
have thus been presented. However, most of them are specifically designed for
dealing with either labeled data or unlabeled data, while many real-world
applications come in the form of partial supervision. In this paper, we propose
a rough sets-based semi-supervised attribute reduction method for partially
labeled data. Particularly, with the aid of prior class distribution
information about the data, we first develop a simple yet effective strategy to
produce the proxy labels for unlabeled data. Then the concept of information
granularity is integrated into the information-theoretic measure, based on
which, a novel granular conditional entropy measure is proposed, and its
monotonicity is proved in theory. Furthermore, a fast heuristic algorithm is
provided to generate the optimal reduct of partially labeled data, which could
accelerate the process of attribute reduction by removing irrelevant examples
and excluding redundant attributes simultaneously. Extensive experiments
conducted on UCI data sets demonstrate that the proposed semi-supervised
attribute reduction method is promising and even compares favourably with the
supervised methods on labeled data and unlabeled data with true labels in terms
of classification performance.
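
The heuristic reduct search can be sketched as a greedy forward selection; here
`measure` stands in for the proposed granular conditional entropy (whose
monotonicity makes the scheme well defined), and the stopping threshold is
illustrative:

def greedy_reduct(attributes, measure, threshold=1e-6):
    # Repeatedly add the attribute that most decreases measure(subset),
    # stopping when no attribute improves it by more than threshold.
    reduct, best = [], measure([])
    candidates = set(attributes)
    while candidates:
        a = min(candidates, key=lambda a: measure(reduct + [a]))
        if best - measure(reduct + [a]) <= threshold:
            break
        reduct.append(a)
        candidates.remove(a)
        best = measure(reduct)
    return reduct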
|
We test the predictions of Emergent Gravity using matter densities of
relaxed, massive clusters of galaxies using observations from optical and X-ray
wavebands. We improve upon previous work in this area by including the baryon
mass contribution of the brightest cluster galaxy in each system, in addition
to total mass profiles from gravitational lensing and mass profiles of the
X-ray emitting gas from Chandra. We use this data in the context of Emergent
Gravity to predict the "apparent" dark matter distribution from the observed
baryon distribution, and vice-versa. We find that although the inclusion of the
brightest cluster galaxy in the analysis improves the agreement with
observations in the inner regions of the clusters ($r \lesssim 10-30$ kpc), at
larger radii ($r \sim 100-200$ kpc) the Emergent Gravity predictions for mass
profiles and baryon mass fractions are discrepant with observations by a factor
of up to $\sim2-6$, though the agreement improves at radii near $r_{500}$. At
least in its current form, Emergent Gravity does not appear to reproduce the
observed characteristics of relaxed galaxy clusters as well as cold dark matter
models.
|
We investigate the fluctuations in the number of integral lattice points on
the Heisenberg groups which lie inside a Cygan-Kor{\'a}nyi norm ball of large
radius. Let
$\mathcal{E}_{q}(x)=\big|\mathbb{Z}^{2q+1}\cap\delta_{x}\mathcal{B}\big|-\mathrm{vol}\big(\mathcal{B}\big)x^{2q+2}$
denote the error term which occurs for this lattice point counting problem on
the Heisenberg group $\mathbb{H}_{q}$, where $\mathcal{B}$ is the unit ball in
the Cygan-Kor{\'a}nyi norm and $\delta_{x}$ is the Heisenberg-dilation by
$x>0$. For $q\geq3$ we consider the suitably normalized error term
$\mathcal{E}_{q}(x)/x^{2q-1}$, and prove it has a limiting value distribution
which is absolutely continuous with respect to the Lebesgue measure. We show
that the defining density for this distribution, denoted by
$\mathcal{P}_{q}(\alpha)$, can be extended to the whole complex plane
$\mathbb{C}$ as an entire function of $\alpha$ and satisfies for any
non-negative integer $j$ and any $\alpha\in\mathbb{R}$ with
$|\alpha|>\alpha_{q,j}$, the bound
\begin{equation*}
\big|\mathcal{P}^{(j)}_{q}(\alpha)\big| \leq \exp\Big(-|\alpha|^{4-\beta/\log\log|\alpha|}\Big),
\end{equation*}
where $\beta>0$ is an absolute constant. In addition, we
give an explicit formula for the $j$-th integral moment of the density
$\mathcal{P}_{q}(\alpha)$ for any integer $j\geq1$.
|
An extension of unimodular Einsteinian gravity in the context of $F(R)$
gravities is used to construct a class of anisotropic evolution scenarios. In
unimodular GR the determinant of the metric is constrained to be a fixed number
or a function. However, the metric of a generic anisotropic universe is not
compatible with the unimodular constraint, so that a redefinition of the
metric, to properly take the constraint into account, needs to be performed. The
unimodular constraint is imposed on $F(R)$ gravity in the Jordan frame by means
of a Lagrangian multiplier, to get the equations of motion. The resulting
equations can be viewed as a reconstruction method, which allows one to
determine which function of the Ricci scalar can realize the desired evolution.
For the
sake of clarity, some characteristic examples are invoked to show how this
reconstruction method works explicitly. The de Sitter spacetime here
considered, in the context of unimodular $F(R)$ gravity, is suitable to
describe both the early- and late-time epochs of the history of the universe.
|
In this paper we consider finite-size effects for strings in the
$\beta$-deformed $AdS_{5}\times T^{1,1}$ background. We analyze the finite-size
corrections for the cases of the giant magnon and the single spike string
solution. The finite-size corrections for the undeformed case are obtained
straightforwardly by sending the deformation parameter to zero.
|
We consider a single server system with infinite waiting room in a random
environment. The service system and the environment interact in both
directions. Whenever the environment enters a prespecified subset of its state
space the service process is completely blocked: Service is interrupted and
newly arriving customers are lost. We prove a necessary and sufficient condition for a
product form steady state distribution of the joint queueing-environment
process. A consequence is a strong insensitivity property for such systems.
We discuss several applications, e.g. from inventory theory and reliability
theory, and show that our result extends and generalizes several theorems found
in the literature, e.g. of queueing-inventory processes.
We further investigate classical loss systems, where loss of customers
occurs due to the finite waiting room. In connection with loss of customers due
to blocking by the environment and service interruptions, new phenomena arise.
We further investigate the embedded Markov chains at departure epochs and
show that the behaviour of the embedded Markov chain is often considerably
different from that of the continuous time Markov process. This is different
from the behaviour of the standard M/G/1, where the steady state of the
embedded Markov chain and the continuous time process coincide.
For exponential queueing systems we show that there is a product form
equilibrium of the embedded Markov chain under rather general conditions. For
systems with non-exponential service times more restrictive constraints are
needed, which we prove by a counterexample where the environment represents an
inventory attached to an M/D/1 queue. Such integrated queueing-inventory
systems have been dealt with in the literature previously and are revisited here in
detail.
|
In this work we prove that the giant component of the Erd\H{o}s--R\'enyi random
graph $G(n,c/n)$, for $c$ a constant greater than 1 (sparse regime), is not
Gromov $\delta$-hyperbolic for any positive $\delta$, with probability tending
to one as $n\to\infty$. As a corollary we provide an alternative proof that the
giant component of $G(n,c/n)$ with $c>1$ has zero spectral gap almost surely as
$n\to\infty$.
|
Subgroup analyses are common in epidemiologic and clinical research.
Unfortunately, restriction to subgroup members to test for heterogeneity can
yield imprecise effect estimates. If the true effect differs between members
and non-members due to different distributions of other measured effect measure
modifiers (EMMs), leveraging data from non-members can improve the precision of
subgroup effect estimates. We obtained data from the PRIME RCT of panitumumab
in patients with metastatic colon and rectal cancer from Project Datasphere(TM)
to demonstrate this method. We weighted non-Hispanic White patients to resemble
Hispanic patients in measured potential EMMs (e.g., age, KRAS distribution,
sex), combined Hispanic and weighted non-Hispanic White patients in one data
set, and estimated 1-year differences in progression-free survival (PFS). We
obtained percentile-based 95% confidence limits for this 1-year difference in
PFS from 2,000 bootstraps. To show when the method is less helpful, we also
reweighted male patients to resemble female patients and mutant-type KRAS (no
treatment benefit) patients to resemble wild-type KRAS (treatment benefit)
patients. The PRIME RCT included 795 non-Hispanic White and 42 Hispanic
patients with complete data on EMMs. While the Hispanic-only analysis estimated
a one-year PFS change of -17% (95% CI: -45%, 8.8%) with panitumumab, the
combined weighted estimate was more precise (-8.7%, 95% CI -22%, 5.3%) while
differing from the full population estimate (1.0%, 95% CI: -5.9%, 7.5%). When
targeting wild-type KRAS patients the combined weighted estimate incorrectly
suggested no benefit (one-year PFS change: 0.9%, 95% CI: -6.0%, 7.2%). Methods
to extend inferences from study populations to specific targets can improve
the precision of subgroup effect estimates when their assumptions are met.
Violations of those assumptions can lead to bias, however.
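
The reweighting step can be sketched with a propensity-style model: fit the
probability of subgroup membership given the measured EMMs and give each
non-member the corresponding odds as a weight (members keep weight 1). A
minimal scikit-learn version, with illustrative names:

import numpy as np
from sklearn.linear_model import LogisticRegression

def membership_weights(emm_features, is_member):
    # P(member | EMMs); non-members weighted by p/(1-p) so that their EMM
    # distribution resembles the members' (e.g. Hispanic patients).
    p = LogisticRegression().fit(emm_features, is_member).predict_proba(emm_features)[:, 1]
    return np.where(is_member, 1.0, p / (1 - p))

The weighted non-members and unweighted members are then pooled, and the
percentile bootstrap would typically refit the weights within each resample.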
|
We show that a para-quaternion nearly Kähler manifold is necessarily a
para-quaternion Kähler manifold.
|
Mobile Crowd-Sensing (MCS) has appeared as a prospective solution for
large-scale data collection, leveraging built-in sensors and social
applications in mobile devices that enable a variety of Internet of Things
(IoT) services. However, human involvement in MCS results in a high
possibility of unintentionally contributing corrupted and falsified data or
intentionally spreading disinformation for malevolent purposes, consequently
undermining IoT services. Therefore, recruiting trustworthy contributors plays
a crucial role in collecting high-quality data and providing better quality of
services while minimizing the vulnerabilities and risks to MCS systems. In this
article, a novel trust model called Experience-Reputation (E-R) is proposed for
evaluating trust relationships between any two mobile device users in an MCS
platform. To enable the E-R model, virtual interactions among the users are
established by assessing the quality of the data contributed by such users.
Based on these interactions, two indicators of trust called
Experience and Reputation are calculated accordingly. By incorporating the
Experience and Reputation trust indicators (TIs), trust relationships between
the users are established, evaluated and maintained. Based on these trust
relationships, a novel trust-based recruitment scheme is carried out for
selecting the most trustworthy MCS users to contribute to data sensing tasks.
In order to evaluate the performance and effectiveness of the proposed
trust-based mechanism as well as the E-R trust model, we deploy several
recruitment schemes in an MCS testbed which consists of both normal and
malicious users. The results highlight the strength of the trust-based scheme
as it delivers better quality for MCS services while being able to detect
malicious users.
|
Knowledge Distillation (KD) utilizes training data as a transfer set to
transfer knowledge from a complex network (Teacher) to a smaller network
(Student). Several works have recently identified many scenarios where the
training data may not be available due to data privacy or sensitivity concerns
and have proposed solutions under this restrictive constraint for the
classification task. Unlike existing works, we, for the first time, solve a
much more challenging problem, i.e., "KD for object detection with zero
knowledge about the training data and its statistics". Our proposed approach
prepares pseudo-targets and synthesizes corresponding samples (termed as
"Multi-Object Impressions"), using only the pretrained Faster RCNN Teacher
network. We use this pseudo-dataset as a transfer set to conduct zero-shot KD
for object detection. We demonstrate the efficacy of our proposed method
through several ablations and extensive experiments on benchmark datasets like
KITTI, Pascal and COCO. Our approach, with no training samples, achieves a
respectable mAP of 64.2% and 55.5% on Students with the same and half the
capacity, respectively, while performing distillation from a ResNet-18 Teacher
with 73.3% mAP on KITTI.
|
The game-theoretical approach to non-extensive entropy measures of
statistical physics is based on an abstract measure of complexity from which
the entropy measure is derived in a natural way. A wide class of possible
complexity measures is considered and a property of factorization investigated.
The property reflects a separation between the system being observed and the
observer. Apparently, the property is also related to escorting. It is shown
that only those complexity measures which are connected with Tsallis entropy
have the factorization property.
|
We compute the classical $r$-matrix for the relativistic generalization of
the Calogero-Moser model, or Ruijsenaars-Schneider model, at all values of the
speed-of-light parameter $\lambda$. We connect it with the non-relativistic
Calogero-Moser $r$-matrix $(\lambda \rightarrow -1)$ and the $\lambda = 1$
sine-Gordon soliton limit.
|
We present a simulation of twelve globular clusters with different
concentration, distance, and background population, whose properties are
transformed into Gaia observables with the help of the latest Gaia science
performance prescriptions. We adopt simplified crowding recipes, based on
five years of simulations performed by DPAC (Data Processing and Analysis
Consortium) scientists, to explore the effect of crowding and to give a basic
idea of what will be made possible by Gaia in the field of Galactic globular
cluster observations.
|
Researchers often summarize their work in the form of scientific posters.
Posters provide a coherent and efficient way to convey core ideas expressed in
scientific papers. Generating a good scientific poster, however, is a complex
and time-consuming cognitive task, since such posters need to be readable,
informative, and visually aesthetic. In this paper, for the first time, we
study the challenging problem of learning to generate posters from scientific
papers. To this end, a data-driven framework that utilizes graphical models is
proposed. Specifically, given content to display, the key elements of a good
poster, including attributes of each panel and arrangements of graphical
elements, are learned and inferred from data. During the inference stage, a MAP
inference framework is employed to incorporate some design principles. In order
to bridge the gap between panel attributes and the composition within each
panel, we also propose a recursive page splitting algorithm to generate the
panel layout for a poster. To learn and validate our model, we collect and
release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which
consists of scientific papers and corresponding posters with exhaustively
labelled panels and attributes. Qualitative and quantitative results indicate
the effectiveness of our approach.
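
A toy version of recursive page splitting conveys the idea: alternate cuts
along the longer side, allocating area in proportion to panel sizes. The real
algorithm infers the split structure from the learned panel attributes;
everything below is illustrative:

def split_page(rect, areas):
    # rect = (x, y, w, h); areas = relative panel areas, returned in order.
    if len(areas) == 1:
        return [rect]
    k = len(areas) // 2
    frac = sum(areas[:k]) / sum(areas)
    x, y, w, h = rect
    if w >= h:  # cut the longer side
        first, second = (x, y, w * frac, h), (x + w * frac, y, w * (1 - frac), h)
    else:
        first, second = (x, y, w, h * frac), (x, y + h * frac, w, h * (1 - frac))
    return split_page(first, areas[:k]) + split_page(second, areas[k:])

# e.g. split_page((0, 0, 1.0, 1.414), [2, 1, 1, 3]) yields four panel rectangles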
|
This study aims to analyse the effects of reducing Received Dose Intensity
(RDI) in chemotherapy treatment for osteosarcoma patients on their survival by
using a novel approach. In this scenario, toxic side effects are risk factors
for mortality and predictors of future exposure levels, introducing
post-assignment confounding.
Chemotherapy administration data from BO03 and BO06 Randomized Clinical
Trials (RCTs) in osteosarcoma are employed to emulate a target trial with three
RDI-based exposure strategies: 1) standard, 2) reduced, and 3) highly-reduced
RDI. Investigations are conducted between subgroups of patients characterised
by poor or good Histological Responses (HRe). Inverse Probability of Treatment
Weighting (IPTW) is first used to transform the original population into a
pseudo-population which mimics the target randomized cohort. Then, a Marginal
Structural Cox Model with effect modification is employed. Conditional Average
Treatment Effects (CATEs) are ultimately measured as the difference between the
Restricted Mean Survival Time of reduced/highly-reduced RDI strategy and the
standard one. Confidence Intervals for CATEs are obtained using a novel
IPTW-based bootstrap procedure.
Significant effect modifications based on HRe were found. Increasing
RDI-reductions led to contrasting trends for poor and good responders: the
higher the reduction, the better (worse) the survival in poor (good)
responders. This study introduces a novel approach to (i) comprehensively
address the challenges related to the analysis of chemotherapy data, (ii)
mitigate the toxicity-treatment-adjustment bias, and (iii) repurpose existing
RCT data for retrospective analyses extending beyond the original trials'
intended scopes.
|
Interference at the radio receiver is a key source of degradation in quality
of service of wireless communication systems. This paper presents a unified
framework for OFDM/FBMC interference characterization and analysis in an
asynchronous environment. Multi-user interference is caused by timing
synchronization errors, which destroy the orthogonality between subcarriers. In
this paper, we develop a theoretical analysis of the
asynchronous interference considering the multi-path effects on the
interference signal. We further propose an accurate model for interference that
provides a useful computational tool in order to evaluate the performance of an
OFDM/FBMC system in a frequency-selective fading environment. Finally,
simulation results confirm the accuracy of the proposed model.
|
In flat spacetime, the vacuum neutrino flavour oscillations are known to be
sensitive only to the difference between the squared masses, and not to the
individual masses, of neutrinos. In this work, we show that the lensing of
neutrinos induced by a gravitational source substantially modifies this
standard picture and it gives rise to a novel contribution through which the
oscillation probabilities also depend on the individual neutrino masses. A
gravitating mass located between a source and a detector deflects the neutrinos
in their journey, and at a detection point, neutrinos arriving through
different paths can lead to the phenomenon of interference. The flavour
transition probabilities computed in the presence of such interference depend
on the individual masses of neutrinos whenever there is a non-zero path
difference between the interfering neutrinos. We demonstrate this explicitly by
considering an example of weak lensing induced by a Schwarzschild mass. Through
the simplest two-flavour case, we show that the oscillation probability in the
presence of lensing is sensitive to the sign of $\Delta m^2 = m_2^2 -m_1^2$,
for non-maximal mixing between two neutrinos, unlike in the case of standard
vacuum oscillation in flat spacetime. Further, the probability itself
oscillates with respect to the path difference and the frequency of such
oscillations depends on the absolute mass scale $m_1$ or $m_2$. We also give
results for the realistic three-flavour case and discuss various implications
of gravitationally modified neutrino oscillations and means of observing them.
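
For orientation, the textbook flat-spacetime two-flavour probability that the
lensed result is contrasted with reads (a standard expression, not the paper's
lensed formula):
\begin{equation*}
P_{\alpha\to\beta} = \sin^2 2\theta \, \sin^2\!\Big(\frac{\Delta m^2 L}{4E}\Big),
\qquad \Delta m^2 = m_2^2 - m_1^2,
\end{equation*}
which depends only on $\Delta m^2$ (and not on its sign, nor on $m_1$ and $m_2$
individually); the gravitationally lensed probability breaks precisely this
degeneracy.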
|
This paper proposes a new Helmholtz decomposition based windowed Green
function (HD-WGF) method for solving the time-harmonic elastic scattering
problems on a half-space with Dirichlet boundary conditions in both 2D and 3D.
The Helmholtz decomposition is applied to separate the pressure and shear
waves, which satisfy the Helmholtz and Helmholtz/Maxwell equations,
respectively, and the corresponding boundary integral equations of type
$(\mathbb{I}+\mathbb{T})\boldsymbol{\phi}=\boldsymbol{f}$, that couple these two waves on the
unbounded surface, are derived based on the free-space fundamental solution of
Helmholtz equation. This approach avoids the treatment of the complex elastic
displacement tensor and traction operator that are involved in the classical
integral equation method for elastic problems. Then a smooth ``slow-rise''
windowing function is introduced to truncate the boundary integral equations
and a ``correction'' strategy is proposed to ensure the uniformly fast
convergence for all incident angles of plane-wave incidence. Numerical experiments
for both two and three dimensional problems are presented to demonstrate the
accuracy and efficiency of the proposed method.
|
[Abridged] The environment where galaxies are found heavily influences their
evolution. Close groupings, like the cores of galaxy clusters or compact
groups, evolve in ways far more dramatic than their isolated counterparts. We
have conducted a multiwavelength study of HCG7, consisting of four giant
galaxies: 3 spirals and 1 lenticular. We use Hubble Space Telescope (HST)
imaging to identify and characterize the young and old star cluster
populations. We find young massive clusters (YMC) mostly in the three spirals,
while the lenticular features a large, unimodal population of globular clusters
(GC) but no detectable clusters with ages less than about a Gyr. The spatial and
approximate age distributions of the ~300 YMCs and ~150 GCs thus hint at a
regular star formation history in the group over a Hubble time. While at first
glance the HST data show the galaxies as undisturbed, our deep ground-based,
wide-field imaging that extends the HST coverage reveals faint signatures of
stellar material in the intra-group medium. We do not detect the intra-group
medium in HI or Chandra X-ray observations, signatures that would be expected
to arise from major mergers. We find that the HI gas content of the individual
galaxies and the group as a whole are a third of the expected abundance. The
appearance of quiescence is challenged by spectroscopy that reveals an intense
ionization continuum in one galaxy nucleus, and post-burst characteristics in
another. Our spectroscopic survey of dwarf galaxy members yields one dwarf
elliptical in an apparent tidal feature. We therefore suggest an evolutionary
scenario for HCG7, whereby the galaxies convert most of their available gas
into stars without major mergers, eventually resulting in a dry merger. As the conditions
governing compact groups are reminiscent of galaxies at intermediate redshift,
we propose that HCGs are appropriate for studying galaxy evolution at z~1-2.
|
Conditional Variational Auto-Encoders (CVAEs) are gathering significant
attention as an Explainable Artificial Intelligence (XAI) tool. The codes in
the latent space provide a theoretically sound way to produce counterfactuals,
i.e. alterations resulting from an intervention on a targeted semantic feature.
To be applied to real images, more complex models are needed, such as the
Hierarchical CVAE. This comes with a challenge, as naive conditioning is no
longer effective. In this paper we show how relaxing the effect of the
posterior leads to successful counterfactuals, and we introduce VAEX, a
Hierarchical VAE designed for this approach that can visually audit a
classifier in applications.
|
Non-homogeneous renewal processes are not yet well established. One of the
tools necessary for studying these processes is the non-homogeneous time
convolution. Renewal theory has great relevance in general in economics and in
particular in actuarial science, however most actuarial problems are connected
with the age of the insured person. The introduction of non-homogeneity in the
renewal processes brings actuarial applications closer to the real world. This
paper will define the non-homogeneous time convolutions and try to give order
to the non-homogeneous renewal processes. The numerical aspects of these
processes are dealt with and, finally, a real data application to an aspect of
motorcar insurance is proposed.
|
In this paper we present results of the lowest eigenvalues of random
Hamiltonians for both fermion and boson systems. We show that an empirical
formula for evaluating the lowest eigenvalues of random Hamiltonians in terms
of energy centroids and widths of eigenvalues is applicable to many different
systems (except for $d$ boson systems). We improve the accuracy of the formula
by adding moments higher than two. We also suggest a new formula to evaluate
the lowest eigenvalues for random matrices with large dimensions (20-5000).
These empirical formulas are shown to be applicable not only to the evaluation
of the lowest energy but also to the evaluation of excited energies of systems
under random two-body interactions.
|
The AdS/CFT duality has established a mapping between quantities in the bulk
AdS black-hole physics and observables in a boundary finite-temperature field
theory. Such a relationship appears to be valid for an arbitrary number of
spacetime dimensions, extrapolating the original formulations of Maldacena's
correspondence. In the same sense properties like the hydrodynamic behavior of
AdS black-hole fluctuations have been proved to be universal. We investigate in
this work the complete quasinormal spectra of gravitational perturbations of
$d$-dimensional plane-symmetric AdS black holes (black branes). Holographically
the frequencies of the quasinormal modes correspond to the poles of two-point
correlation functions of the field-theory stress-energy tensor. The important
issue of the correct boundary condition to be imposed on the gauge-invariant
perturbation fields at the AdS boundary is studied and elucidated in a fully
$d$-dimensional context. We obtain the dispersion relations of the first few
modes in the low-, intermediate- and high-wavenumber regimes. The sound-wave
(shear-mode) behavior of the scalar (vector)-type low-frequency quasinormal modes is
analytically and numerically confirmed. These results are found employing both
a power series method and a direct numerical integration scheme.
|
This paper addresses the problem of single-target tracker performance
evaluation. We consider the performance measures, the dataset and the
evaluation system to be the most important components of tracker evaluation and
propose requirements for each of them. The requirements are the basis of a new
evaluation methodology that aims at a simple and easily interpretable tracker
comparison. The ranking-based methodology addresses tracker equivalence in
terms of statistical significance and practical differences. A fully-annotated
dataset with per-frame annotations with several visual attributes is
introduced. The diversity of its visual properties is maximized in a novel way
by clustering a large number of videos according to their visual attributes.
This makes it the most meticulously constructed and annotated dataset to date.
A multi-platform evaluation system allowing easy integration of third-party
trackers is presented as well. The proposed evaluation methodology was tested
on the VOT2014 challenge on the new dataset and 38 trackers, making it the
largest benchmark to date. Most of the tested trackers are indeed
state-of-the-art since they outperform the standard baselines, resulting in a
highly-challenging benchmark. An exhaustive analysis of the dataset from the
perspective of tracking difficulty is carried out. To facilitate tracker
comparison, a new performance visualization technique is proposed.
|
Taken together and viewed holistically, recent theory, low temperature (T)
transport, photoelectron spectroscopy and quantum oscillation experiments have
built a very strong case that the paradigmatic mixed valence insulator SmB6 is
currently unique as a three-dimensional strongly correlated topological
insulator (TI). As such, its many-body T-dependent bulk gap brings an extra
richness to the physics beyond that of the weakly correlated TI materials. How
will the robust, symmetry-protected TI surface states evolve as the gap closes
with increasing T? For SmB6 exploiting this opportunity first requires
resolution of other important gap-related issues, its origin, its magnitude,
its T-dependence and its role in bulk transport. In this paper we report
detailed T-dependent angle resolved photoemission spectroscopy (ARPES)
measurements that answer all these questions in a unified way.
|
We have performed three-dimensional magneto-hydrodynamical simulations of
stellar accretion disks, using the PLUTO code, and studied the accretion of gas
onto a Jupiter-mass planet and the structure of the circumplanetary gas flow
after opening a gap in the disk. We compare our results with simulations of
laminar, yet viscous disks with different levels of an $\alpha$-type viscosity.
In all cases, we find that the accretion flow across the surface of the Hill
sphere of the planet is not spherically or azimuthally symmetric, and is
predominantly restricted to the mid-plane region of the disk. Even in the
turbulent case, we find no significant vertical flow of mass into the Hill
sphere. The outer parts of the circumplanetary disk are shown to rotate
significantly below Keplerian speed, independent of viscosity, while the
circumplanetary disk density (therefore the angular momentum) increases with
viscosity. For a simulation of a magnetized turbulent disk, where the global
averaged alpha stress is $\alpha_{MHD}=10^{-3}$, we find the accretion rate
onto the planet to be $\dot{M}\approx 2\times10^{-6}\,M_{J}\,{\rm yr}^{-1}$ for a gap
surface density of $12\,{\rm g\,cm}^{-2}$. This is about a third of the accretion rate
obtained in a laminar viscous simulation with equivalent $\alpha$ parameter.
|
In the classic problem of sequence prediction, a predictor receives a
sequence of values from an emitter and tries to guess the next value before it
appears. The predictor masters the emitter if there is a point after which all
of the predictor's guesses are correct. In this paper we consider the case in
which the predictor is an automaton and the emitted values are drawn from a
finite set; i.e., the emitted sequence is an infinite word. We examine the
predictive capabilities of finite automata, pushdown automata, stack automata
(a generalization of pushdown automata), and multihead finite automata. We
relate our predicting automata to purely periodic words, ultimately periodic
words, and multilinear words, describing novel prediction algorithms for
mastering these sequences.
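
As a concrete instance of the finite-automaton case, a predictor that always
guesses the value seen one period earlier masters every word that is ultimately
periodic with that period; a small Python sketch (the interface is
illustrative):

from collections import deque

def predict_periodic(emitted, period):
    # Guess each next value as the value seen `period` steps earlier; the
    # fixed-size buffer makes this realizable by a finite automaton, and
    # only finitely many guesses are wrong on an ultimately periodic word.
    history, guesses = deque(maxlen=period), []
    for value in emitted:
        guesses.append(history[0] if len(history) == period else None)
        history.append(value)
    return guesses  # guesses[i] is the prediction for emitted[i]

# word = [9, 7] + [0, 1, 2] * 10   # ultimately periodic with period 3
# predict_periodic(word, 3)[i] == word[i] for every i >= 5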
|
We investigate the quasiparticle band structure of anatase TiO2, a wide gap
semiconductor widely employed in photovoltaics and photocatalysis. We obtain GW
quasiparticle energies starting from density-functional theory (DFT)
calculations including Hubbard U corrections. Using a simple iterative
procedure we determine the value of the Hubbard parameter yielding a vanishing
quasiparticle correction to the fundamental band gap of anatase TiO2. The band
gap (3.3 eV) calculated using this optimal Hubbard parameter is smaller than
the value obtained by applying many-body perturbation theory to standard DFT
eigenstates and eigenvalues (3.7 eV). We extend our analysis to the rutile
polymorph of TiO2 and reach similar conclusions. Our work highlights the role
of the starting non-interacting Hamiltonian in the calculation of GW
quasiparticle energies in TiO2, and suggests an optimal Hubbard parameter for
future calculations.
|
The well-known bluer-when-brighter trend observed in quasar variability is a
signature of the complex processes in the accretion disk, and can be a probe of
the quasar variability mechanism. Using a sample of 604 variable quasars with
repeat spectra in SDSS-I/II, we construct difference spectra to investigate the
physical causes of this bluer-when-brighter trend. The continuum of our
composite difference spectrum is well-fit by a power-law, with a spectral index
in excellent agreement with previous results. We measure the spectral
variability relative to the underlying spectra of the quasars, which is
independent of any extinction, and compare to model predictions. We show that
our SDSS spectral variability results cannot be produced by global accretion
rate fluctuations in a thin disk alone. However, we find that a simple model of
an inhomogeneous disk with localized temperature fluctuations will produce
power-law spectral variability over optical wavelengths. We show that the
inhomogeneous disk will provide good fits to our observed spectral variability
if the disk has large temperature fluctuations in many independently varying
zones, in excellent agreement with independent constraints from quasar
microlensing disk sizes, their strong UV spectral continuum, and single-band
variability amplitudes. Our results provide an independent constraint on quasar
variability models, and add to the mounting evidence that quasar accretion
disks have large localized temperature fluctuations.
|
We show that the Masur-Veech volumes and area Siegel-Veech constants can be
obtained by intersection numbers on the strata of Abelian differentials with
prescribed orders of zeros. As applications, we evaluate their large genus
limits and compute the saddle connection Siegel-Veech constants for all strata.
We also show that the same results hold for the spin and hyperelliptic
components of the strata.
|
We perform the first lattice study on the mixing of the isoscalar
pseudoscalar meson $\eta$ and the pseudoscalar glueball $G$ in $N_f=2$ QCD
at the pion mass $m_\pi\approx 350$ MeV. The $\eta$ mass is determined to be
$m_\eta=714(6)(16)$ MeV. Through the Witten-Veneziano relation, this value can
be matched to a mass value of $\sim 981$ MeV for the $\mathrm{SU(3)}$
counterpart of $\eta$. Based on a large gauge ensemble, the $\eta-G$ mixing
energy and the mixing angle are determined to be $|x|=107(15)(2)$ MeV and
$|\theta|=3.46(46)^\circ$ from the $\eta-G$ correlators that are calculated
using the distillation method. We conclude that the $\eta-G$ mixing is tiny and
that the topology-induced interaction contributes most of the $\eta$ mass owing
to the QCD $\mathrm{U_A(1)}$ anomaly.
|
We present two-dimensional general relativistic radiative
magnetohydrodynamical simulations of accretion disks around a non-rotating
stellar-mass black hole. We study the evolution of an equilibrium accreting
torus in different grid resolutions to determine an adequate resolution to
produce a stable turbulent disk driven by magneto-rotational instability. We
evaluate the quality parameter, $Q_{\theta}$, from the ratio of the MRI
wavelength to the grid zone size and examine the effect of resolution on
various quantitative values such as the accretion rate, magnetisation, fluxes of
physical quantities and disk scale-height. We also analyse how the resolution
affects the formation of plasmoids produced in the magnetic reconnection
events.
|
We construct a new family of models of our Galaxy in which dark matter and
disc stars are both represented by distribution functions that are analytic
functions of the action integrals of motion. The potential that is
self-consistently generated by the dark matter, stars and gas is determined,
and parameters in the distribution functions are adjusted until the model is
compatible with observational constraints on the circular-speed curve, the
vertical density profile of the stellar disc near the Sun, the kinematics of
nearly 200 000 giant stars within 2 kpc of the Sun, and estimates of the
optical depth to microlensing of bulge stars. We find that the data require a
dark halo in which the phase-space density is approximately constant for
actions |J| \lesssim 140 kpc km s^-1. In real space these haloes have core radii
~ 2 kpc.
|
Continuous time random walks (CTRWs) are versatile models for anomalous
diffusion processes that have found widespread application in the quantitative
sciences. Their scaling limits are typically non-Markovian, and the computation
of their finite-dimensional distributions is an important open problem. This
paper develops a general semi-Markov theory for CTRW limit processes in
$\mathbb{R}^d$ with infinitely many particle jumps (renewals) in finite time
intervals. The particle jumps and waiting times can be coupled and vary with
space and time. By augmenting the state space to include the scaling limits of
renewal times, a CTRW limit process can be embedded in a Markov process.
Explicit analytic expressions for the transition kernels of these Markov
processes are then derived, which allow the computation of all finite
dimensional distributions for CTRW limits. Two examples illustrate the proposed
method.
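
A toy uncoupled special case shows the objects involved: heavy-tailed waiting
times with infinite mean produce infinitely many renewals in the scaling limit,
while the limit law itself is what the semi-Markov theory computes. The
simulation below is only such a special case (Pareto waits, Gaussian jumps),
not the coupled, space-time-varying setting of the paper:

import numpy as np

def ctrw_position(t, beta=0.7, max_jumps=1_000_000, rng=None):
    # Position at time t of a CTRW with Pareto(>= 1, tail index beta < 1)
    # waiting times and unit Gaussian jumps.
    rng = np.random.default_rng() if rng is None else rng
    clock, x = 0.0, 0.0
    for _ in range(max_jumps):
        clock += 1.0 + rng.pareto(beta)  # heavy-tailed wait, infinite mean
        if clock > t:
            return x
        x += rng.normal()
    raise RuntimeError("increase max_jumps")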
|
Supernova (SN) H0pe was discovered as a new transient in James Webb Space
Telescope (JWST) NIRCam images of the galaxy cluster PLCK G165.7+67.0 taken as
part of the "Prime Extragalactic Areas for Reionization and Lensing Science"
(PEARLS) JWST GTO program (# 1176) on 2023 March 30 (AstroNote 2023-96; Frye et
al. 2023). The transient is a compact source associated with a background
galaxy that is stretched and triply-imaged by the cluster's strong
gravitational lensing. This paper reports spectra in the 950-1370 nm observer
frame of two of the galaxy's images obtained with Large Binocular Telescope
(LBT) Utility Camera in the Infrared (LUCI) in longslit mode two weeks after
the JWST observations. The individual average spectra show the [OII] doublet
and the Balmer and 4000 Angstrom breaks at redshift z=1.783+/-0.002. The CIGALE
best-fit model of the spectral energy distribution indicates that SN H0pe's
host galaxy is massive (Mstar~6x10^10 Msun after correcting for a magnification
factor ~7) with a predominant intermediate age (~2 Gyr) stellar population,
moderate extinction, and a magnification-corrected star formation rate ~13
Msun/yr, consistent with being below the main sequence of star formation. These
properties suggest that H0pe might be a type Ia SN. Additional observations of
SN H0pe and its host recently carried out with JWST (JWST-DD-4446; PI: B. Frye)
will be able to both determine the SN classification and confirm its
association with the galaxy analyzed in this work.
|
As an instance of the B-polynomial, the circuit, or cycle, polynomial
P(G(Gamma); w) of the generalized rooted product G(Gamma) of graphs was studied
by Farrell and Rosenfeld ({\em Jour. Math. Sci. (India)}, 2000, \textbf{11}(1),
35--47) and Rosenfeld and Diudea ({\em Internet Electron. J. Mol. Des.}, 2002,
\textbf{1}(3), 142--156). In both cases, the rooted product G(Gamma) was
considered without any restrictions on graphs G and Gamma. Herein, we present a
new general result and its corollaries concerning the case when the core graph
G is restricted to be bipartite. The last instance of G(Gamma), as well as all
its predecessors, can find chemical applications.
|
We compute the electromagnetic charged kaon form factor in the time-like
region by employing a Poincar\'e covariant formalism of the Bethe-Salpeter
equation to study quark-antiquark bound states in conjunction with the
Schwinger-Dyson equation for the quark propagator. Following a recent kindred
calculation of the time-like electromagnetic pion form factor, we include the
most relevant intermediate composite particles permitted by their quantum
numbers in the interaction kernel to allow for a decay mechanism for the
resonances involved. This term augments the usual gluon mediated interaction
between quarks. For a sufficiently low energy time-like probing photon, the
electromagnetic form factor is saturated by the $\rho(770)$ and $\phi(1020)$
resonances. We assume $SU(2)$ isospin symmetry throughout. Our results for the
absolute value squared of the electromagnetic form factor agree qualitatively
rather well and quantitatively moderately so with available experimental data.
|
We probe the isotropy of the Universe with the largest all-sky photometric
redshift dataset currently available, namely WISE~$\times$~SuperCOSMOS. We
search for dipole anisotropy of galaxy number counts in multiple redshift
shells within the $0.10 < z < 0.35$ range, for two subsamples drawn from the
same parent catalogue. Our results show that the dipole directions are in good
agreement with most of the previous analyses in the literature, and in most
redshift bins the dipole amplitudes are well consistent with $\Lambda$CDM-based
mocks in the cleanest sample of this catalogue. In the $z<0.15$ range, however,
we obtain a persistently large anisotropy in both subsamples of our dataset.
Overall, we report no significant evidence against the isotropy assumption in
this catalogue except for the lowest redshift ranges. The origin of the latter
discrepancy is unclear, and improved data may be needed to explain it.
|
The decay $J/\psi \rightarrow \omega p \bar{p}$ has been studied, using
$225.3\times 10^{6}$ $J/\psi$ events accumulated at BESIII. No significant
enhancement near the $p\bar{p}$ invariant-mass threshold (denoted as
$X(p\bar{p})$) is observed. The upper limit of the branching fraction
$\mathcal{B}(J/\psi \rightarrow \omega X(p\bar{p}) \rightarrow \omega p
\bar{p})$ is determined to be $3.9\times10^{-6}$ at the 95% confidence level.
The branching fraction of $J/\psi \rightarrow \omega p \bar{p}$ is measured to
be $\mathcal{B}(J/\psi \rightarrow \omega p \bar{p}) =(9.0 \pm 0.2\
(\text{stat.})\pm 0.9\ (\text{syst.})) \times 10^{-4}$.
|
The class-agnostic counting (CAC) problem has caught increasing attention
recently due to its wide societal applications and arduous challenges. To count
objects of different categories, existing approaches rely on user-provided
exemplars, which are hard to obtain and limit their generality. In this paper,
we aim to empower the framework to recognize adaptive exemplars within the
whole images. A zero-shot Generalized Counting Network (GCNet) is developed,
which uses a pseudo-Siamese structure to automatically and effectively learn
pseudo exemplar clues from inherent repetition patterns. In addition, a
weakly-supervised scheme is presented to reduce the burden of laborious density
maps required by all contemporary CAC models, allowing GCNet to be trained
using count-level supervisory signals in an end-to-end manner. Without
providing any spatial location hints, GCNet is capable of adaptively capturing
them through a carefully-designed self-similarity learning strategy. Extensive
experiments and ablation studies on the prevailing benchmark FSC147 for
zero-shot CAC demonstrate the superiority of our GCNet. It performs on par with
existing exemplar-dependent methods and shows stunning cross-dataset generality
on crowd-specific datasets, e.g., ShanghaiTech Part A, Part B and UCF_QNRF.
|
This paper has been superseded by quant-ph/9908074.
|
In this work, a tensor completion problem is studied, which aims to perfectly
recover the tensor from partial observations. Existing theoretical guarantees
require the involved transform to be orthogonal, which hinders their
application. In this paper, jumping out of the constraints of isotropy or
self-adjointness, the theoretical guarantee of exact tensor completion with
arbitrary linear transforms is established. To that end, we define a new
tensor-tensor product, which leads us to a new definition of the tensor nuclear
norm. Equipped with these tools, an efficient algorithm based on the
alternating direction method of multipliers is designed to solve the
transformed tensor completion
program and the theoretical bound is obtained. Our model and proof greatly
enhance the flexibility of tensor completion and extensive experiments validate
the superiority of the proposed method.
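To make the construction concrete, here is a minimal sketch (an illustration of the general idea, not the authors' code) of a tensor-tensor product induced by an arbitrary invertible linear transform L along the third mode, together with the corresponding transformed tensor nuclear norm; the transform matrix and test tensors are assumptions for demonstration:

import numpy as np

def t_product(A, B, L):
    # C = L^{-1}[ L(A) facewise-times L(B) ]: multiply frontal slices in the transform domain.
    Ah = np.einsum('kt,ijt->ijk', L, A)         # transform along mode 3
    Bh = np.einsum('kt,ijt->ijk', L, B)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)      # slice-wise matrix products
    return np.einsum('kt,ijt->ijk', np.linalg.inv(L), Ch)

def transformed_nuclear_norm(A, L):
    # Sum of matrix nuclear norms of the frontal slices in the transform domain.
    Ah = np.einsum('kt,ijt->ijk', L, A)
    return sum(np.linalg.norm(Ah[:, :, k], 'nuc') for k in range(Ah.shape[2]))

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))                 # arbitrary invertible transform
A = rng.standard_normal((4, 5, 3))
B = rng.standard_normal((5, 4, 3))
print(t_product(A, B, L).shape, transformed_nuclear_norm(A, L))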
|
The Regge exchange model used by Dzierba et al. is shown to be questionable,
since the pion pole term is not allowed. Hence the Regge amplitudes in their
calculation are exaggerated. The amount of kinematic reflection in the mass
spectrum of the (nK+) system, which is one decay channel of a possible
pentaquark, is not well justified in the fitting procedure used by Dzierba et
al., as shown by comparison with the (K+K-) invariant mass spectrum, which is
one decay channel of the a_2 and f_2 tensor mesons. While kinematic reflections
are still a concern in some papers that have presented evidence for the
pentaquark, better quantitative calculations are needed to demonstrate the
significance of this effect.
|
The relationship between the M-species stochastic Lotka-Volterra competition
(SLVC) model and the M-allele Moran model of population genetics is explored
via timescale separation arguments. When selection for species is weak and the
population size is large but finite, precise conditions are determined for the
stochastic dynamics of the SLVC model to be mappable to the neutral Moran
model, the Moran model with frequency-independent selection and the Moran model
with frequency-dependent selection (equivalently, a game-theoretic formulation
of the Moran model). We demonstrate how these mappings can be used to calculate
extinction probabilities and the times until a species' extinction in the SLVC
model.
|
The framework of Baikov-Gazizov-Ibragimov approximate symmetries has proven
useful for many examples where a small perturbation of an ordinary differential
equation (ODE) destroys its local symmetry group. For the perturbed model, some
of the local symmetries of the unperturbed equation may (or may not) re-appear
as approximate symmetries, and new approximate symmetries can appear.
Approximate symmetries are useful as a tool for the construction of approximate
solutions. We show that for algebraic and first-order differential equations,
to every point symmetry of the unperturbed equation, there corresponds an
approximate point symmetry of the perturbed equation. For second and
higher-order ODEs, this is not the case: some point symmetries of the original
ODE may be unstable, that is, they do not arise in the approximate point
symmetry classification of the perturbed ODE. We show that such unstable point
symmetries correspond to higher-order approximate symmetries of the perturbed
ODE, and can be systematically computed. Two detailed examples, including a
fourth-order nonlinear Boussinesq equation, are presented. Examples of the use
of higher-order approximate symmetries and approximate integrating factors to
obtain approximate solutions of higher-order ODEs are provided.
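For context, the basic invariance condition of this framework can be stated compactly (standard formulation; the notation here is ours). A generator $X = X_0 + \epsilon X_1$ is an approximate point symmetry of the perturbed equation $F_0(z) + \epsilon F_1(z) = 0$ if
$$
X^{(k)}\left(F_0 + \epsilon F_1\right)\Big|_{F_0 + \epsilon F_1 = 0} = O(\epsilon^2),
$$
where $X^{(k)}$ denotes the prolongation of $X$ to the derivatives appearing in the equation; unstable symmetries are precisely those $X_0$ for which no $X_1$ makes this condition hold at the point-symmetry level.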
|
The 11-year sunspot cycle has many irregularities, the most prominent
amongst them being the grand minima when sunspots may not be seen for several
cycles. After summarizing the relevant observational data about the
irregularities, we introduce the flux transport dynamo model, the currently
most successful theoretical model for explaining the 11-year sunspot cycle.
Then we analyze the respective roles of nonlinearities and random fluctuations
in creating the irregularities. We also discuss how it has recently been
realized that the fluctuations in meridional circulation also can be a source
of irregularities. We end by pointing out that fluctuations in the poloidal
field generation and fluctuations in meridional circulation together can
explain the occurrences of grand minima.
|
Despite advancements in the areas of parallel and distributed computing, the
complexity of programming on High Performance Computing (HPC) resources has
deterred many domain experts, especially in the areas of machine learning and
artificial intelligence (AI), from utilizing performance benefits of such
systems. Researchers and scientists favor high-productivity languages to avoid
the inconvenience of programming in low-level languages and the costs of
acquiring the necessary skills required for programming at this level. In
recent years, Python, with the support of linear algebra libraries like NumPy,
has gained popularity despite limitations that prevent such code from running
in a distributed fashion. Here we present a solution which maintains both
high-level programming abstractions as well as parallel and distributed
efficiency. Phylanx is an
asynchronous array processing toolkit which transforms Python and NumPy
operations into code which can be executed in parallel on HPC resources by
mapping Python and NumPy functions and variables into a dependency tree
executed by HPX, a general purpose, parallel, task-based runtime system written
in C++. Phylanx additionally provides introspection and visualization
capabilities for debugging and performance analysis. We have tested the
foundations of our approach by comparing our implementation of widely used
machine learning algorithms to accepted NumPy standards.
|
We study the relaxation of the exciton spin (longitudinal relaxation time
$T_{1}$) in single asymmetrical quantum dots due to an interplay of the
short-range exchange interaction and acoustic phonon deformation. The
calculated relaxation rates are found to depend strongly on the dot size,
magnetic field and temperature. For typical quantum dots and temperatures below
100 K, the zero-magnetic-field relaxation times are long compared to the
exciton lifetime, yet they are strongly reduced in high magnetic fields. We
discuss explicitly quantum dots based on (In,Ga)As and (Cd,Zn)Se semiconductor
compounds.
|
We explore the formation and evolution of the black hole X-ray binary system
M33 X-7. In particular, we examine whether accounting for systematic errors in
the stellar parameters inherent to single star models, as well as the
uncertainty in the distance to M33, can explain the discrepancy between the
observed and expected luminosity of the ~70 solar mass companion star. Our
analysis assumes no prior interactions between the companion star and the black
hole progenitor. We use four different stellar evolution codes, modified to
include a variety of current stellar wind prescriptions. For the models
satisfying the observational constraints on the donor star's effective
temperature and luminosity, we recalculate the black hole mass, the orbital
separation, and the mean X-ray luminosity. Our best model, satisfying
simultaneously all observational constraints except the observationally
inferred companion mass, consists of a ~13 solar masses black hole and a ~54
solar masses companion star. We conclude that a star with the observed mass and
luminosity can not be explained via single star evolution models, and that a
prior interaction between the companion star and the black hole progenitor
should be taken into account.
|
Many real networks are not isolated from each other but form networks of
networks, often interrelated in nontrivial ways. Here, we analyze an epidemic
spreading process taking place on top of two interconnected complex networks.
We develop a heterogeneous mean field approach that allows us to calculate the
conditions for the emergence of an endemic state. Interestingly, a global
endemic state may arise in the coupled system even though the epidemic is not
able to propagate on either network separately, and even when the number of
coupling connections is small. Our analytic results are successfully confirmed
by large-scale numerical simulations.
|
Phases of the spherical harmonic analysis of full-sky cosmic microwave
background (CMB) temperature data contain useful information complementary to
the ubiquitous angular power spectrum. In this letter we present a new method
of phase analysis on incomplete sky maps. It is based on Fourier phases of
equal-latitude pixel rings of the map, which are related to the mean angle of
the trigonometric moments from the full-sky phases. They have an advantage for
probing regions of interest without touching the polluted Galactic plane area,
and can localize non-Gaussian features and departures from statistical isotropy
in the CMB.
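As a minimal illustration of the basic operation (synthetic ring values standing in for real map pixels), the Fourier phases of one equal-latitude pixel ring can be extracted as follows:

import numpy as np

ring = np.random.default_rng(1).standard_normal(256)  # pixels of one equal-latitude ring
coeffs = np.fft.rfft(ring)                            # Fourier coefficients of the ring
phases = np.angle(coeffs[1:])                         # drop the monopole, keep the phases
print(phases[:5])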
|
The main goal of this paper is to estimate the regional acoustic and
geoacoustic shallow-water environment from data collected by a vertical
hydrophone array and transmitted by distant time-harmonic point sources. We aim
at estimating the statistical properties of the random fluctuations of the
index of refraction in the water column and the characteristics of the sea
bottom. We first explain from first principles how acoustic wave propagation
can be expressed as Markovian dynamics for the complex mode amplitudes of the
sound pressure, which makes it possible to express the cross moments of the
sound pressure in terms of the parameters to be estimated. We then show how,
using this formulation, the estimation problem can be cast as a nonlinear
inverse problem that can be solved by minimizing a misfit function. We apply
this method to experimental data collected by the ALMA system (Acoustic
Laboratory for Marine Applications).
|
In this thesis we investigate several aspects related to the theory of
fluctuations in the Cosmic Microwave Background. We develop a new algorithm to
calculate the angular power spectrum of the anisotropies which is two orders of
magnitude faster than the standard Boltzmann hierarchy approach (Chapter 3).
The new algorithm will become essential when comparing the observational
results of the next generation of CMB experiments with theoretical predictions.
The parameter space of the models is so large that an exhaustive exploration to
find the best fit model will only be feasible with this new type of algorithm.
We also investigate the polarization properties of the CMB field. We develop a
new formalism to describe the statistics of the polarization variables that
takes into account their spin two nature (Chapter 2). In Chapter 4 we explore
several physical effects that create distinct features in the polarization
power spectrum. We study the signature of the reionization of the universe and
a stochastic background of gravitational waves. We also describe how the
polarization correlation functions can be used to test the causal structure of
the universe. Finally in Chapter 5 we quantify the amount of information the
next generation of satellites can obtain by measuring both temperature and
polarization anisotropies. We calculate the expected error bars on the
cosmological parameters for the specifications of the MAP and Planck satellite
missions.
|
The optimal discrimination of non-orthogonal quantum states with minimum
error probability is a fundamental task in quantum measurement theory as well
as an important primitive in optical communication. In this work, we propose
and experimentally realize a new and simple quantum measurement strategy
capable of discriminating two coherent states with smaller error probabilities
than can be obtained using the standard measurement devices: the Kennedy
receiver and the homodyne receiver.
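For orientation, the standard textbook error probabilities for discriminating $|\alpha\rangle$ from $|-\alpha\rangle$ with equal priors can be compared numerically (this is our own illustration of the benchmark curves, not the authors' analysis):

import numpy as np
from scipy.special import erfc

def p_homodyne(alpha):
    return 0.5 * erfc(np.sqrt(2.0) * alpha)            # Gaussian overlap error

def p_kennedy(alpha):
    # Displace |-alpha> to vacuum; an error occurs when |2 alpha> gives no click.
    return 0.5 * np.exp(-4.0 * alpha**2)

def p_helstrom(alpha):
    return 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * alpha**2)))

for alpha in (0.2, 0.5, 1.0):
    print(alpha, p_homodyne(alpha), p_kennedy(alpha), p_helstrom(alpha))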
|
Polymer vesicles are stable robust vesicles made from block copolymer
amphiphiles. Recent progress in the chemical design of block copolymers opens
up the exciting possibility of creating a wide variety of polymer vesicles with
varying fine structure, functionality and geometry. Polymer vesicles not only
constitute useful systems for drug delivery and micro/nano-reactors but also
provide an invaluable arena for exploring the ordering of matter on curved
surfaces embedded in three dimensions. By choosing suitable liquid-crystalline
polymers for one of the copolymer components one can create vesicles with
smectic stripes. Smectic order on shapes of spherical topology inevitably
possesses topological defects (disclinations) that are themselves distinguished
regions for potential chemical functionalization and nucleators of vesicle
budding. Here we report on glassy striped polymer vesicles formed from
amphiphilic block copolymers in which the hydrophobic block is a smectic liquid
crystal polymer containing cholesteryl-based mesogens. The vesicles exhibit
two-dimensional smectic order and are ellipsoidal in shape with defects, or
possible additional budding into isotropic vesicles, at the poles.
|
Thermal conduction has been suggested as a possible mechanism by which
sufficient energy is supplied to the central regions of galaxy clusters to
balance the effect of radiative cooling. Here we present the results of a
simulated, high-resolution, 3-d Virgo cluster for different values of thermal
conductivity (1, 1/10, 1/100, 0 times the full Spitzer value). Starting from an
initially isothermal cluster atmosphere we allow the cluster to evolve freely
over timescales of roughly $ 1.3-4.7 \times 10^{9} $ yrs. Our results show that
thermal conductivity at the Spitzer value can increase the central ICM
radiative cooling time by a factor of roughly 3.6. In addition, for larger
values of thermal conductivity the simulated temperature and density profiles
match the observations significantly better than for the lower values. However,
no physically meaningful value of thermal conductivity was able to postpone the
cooling catastrophe (characterised by a rapid increase in the central density)
for longer than a fraction of the Hubble time nor explain the absence of a
strong cooling flow in the Virgo cluster today. We also calculate the effective
adiabatic index of the cluster gas for both simulation and observational data
and compare the values with theoretical expectations. Using this method it
appears that the Virgo cluster is being heated in the cluster centre by a
mechanism other than thermal conductivity. Based on this and our simulations it
is also likely that the thermal conductivity is suppressed by a factor of at
least 10 and probably more. Thus, we suggest that thermal conductivity, if
present at all, has the effect of slowing down the evolution of the ICM, by
radiative cooling, but only by a factor of a few.
|
In this paper, we study a nonlocal elliptic problem with the fractional
Laplacian on $R^n$. We show that the problem has infinitely many positive
solutions in $C^\tau(R^n)\bigcap H^\alpha_{loc}(R^n)$. Moreover, each of these
solutions tends to some positive constant limit at infinity. We extend Lin's
result to
the nonlocal problem on $R^n$.
|
Given two finite sets $S_0$ and $S_1$ of quantum states, we show necessary and
sufficient conditions for distinguishing them by a measurement.
|
The focus of the present work is on the Cauchy problem for the quadratic
gravity models introduced in \cite{stelle}-\cite{stelle2}. These are
renormalizable higher order derivative models of gravity, but at the cost of
ghostly states propagating in the phase space. A previous work on the subject
is \cite{noakes}. The techniques employed here differ slightly from those in
\cite{noakes}, but the main conclusions agree. Furthermore, the analysis of the
initial value formulation in \cite{noakes} is enlarged and the use of harmonic
coordinates is clarified. In particular, it is shown that the initial
constraints found in \cite{noakes} include a redundant one. In other words, this
constraint is satisfied when the equations of motion are taken into account. In
addition, some terms that are not specified in \cite{noakes} are derived
explicitly. This procedure facilitates the application of some of the mathematical
theorems given in \cite{ringstrom}. As a consequence of these theorems, the
existence of both $C^\infty$ solutions and maximal globally hyperbolic
developments is proved. The obtained equations may be relevant for the
stability analysis of the solutions under small perturbations of the initial
data.
|
The Probability Density Function (PDF) provides an estimate of the
photometric redshift (zphot) prediction error. It is crucial for current and
future sky surveys, characterized by strict requirements on the zphot
precision, reliability and completeness. The present work stands on the
assumption that properly defined rejection criteria, capable of identifying and
rejecting potential outliers, can increase the precision of zphot estimates and
of their cumulative PDF, without sacrificing much in terms of completeness of
the sample. We provide a way to assess rejection through proper cuts on the
shape descriptors of a PDF, such as the width and height of the PDF's maximum
peak. In this work we tested these rejection criteria on galaxies with
photometry extracted from the Kilo Degree Survey (KiDS) ESO Data Release 4,
proving that such approach could lead to significant improvements to the zphot
quality: e.g., for the clipped sample showing the best trade-off between
precision and completeness, we achieve a reduction in outliers fraction of
$\simeq 75\%$ and an improvement of $\simeq 6\%$ for NMAD, with respect to the
original data set, while preserving $\simeq 93\%$ of its content.
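A minimal sketch of this kind of PDF-shape rejection follows (threshold values and the half-maximum width definition are illustrative assumptions, not the paper's exact cuts), together with a common NMAD definition used to quantify precision:

import numpy as np

def keep_galaxy(pdf, z_grid, min_peak=0.1, max_width=0.1):
    # Keep a galaxy only if its PDF peak is high enough and narrow enough.
    peak = pdf.max()
    above = z_grid[pdf >= 0.5 * peak]                  # z-range above half maximum
    width = above.max() - above.min() if above.size else np.inf
    return peak >= min_peak and width <= max_width

def nmad(z_phot, z_spec):
    # Normalized median absolute deviation of the redshift residuals.
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))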
|
The squashed Kaluza-Klein (KK) black holes differ from the Schwarzschild
black holes with asymptotic flatness or the black strings even at energies for
which the KK modes are not excited yet, so that squashed KK black holes open a
window in higher dimensions. Another important feature is that the squashed KK
black holes are apparently stable and thereby allow one to avoid the
Gregory-Laflamme instability. In the present paper, the evolution of scalar and
gravitational perturbations in time and frequency domains is considered for
these squashed KK black holes. The scalar field perturbations are analyzed for
general rotating squashed KK black holes. Gravitational perturbations for the
so-called zero mode are shown to decay for non-rotating black holes, in
concordance with the stability of the squashed KK black holes. The correlation
of quasinormal frequencies with the size of the extra dimension is discussed.
|
The paper shows how a generalization of the elasticity theory to four
dimensions and to space-time allows for a consistent description of the
homogeneous and isotropic universe, including the accelerated expansion. The
analogy is manifested by the inclusion in the traditional Lagrangian of general
relativity of an additional term accounting for the strain induced in the
manifold (i.e. in space-time) by the curvature, be it induced by the presence
of a texture defect or by a matter/energy distribution. The additional term is
sufficient to account for various observed features of the universe and to give
a simple interpretation for the so-called dark energy. Then, we show how the
same approach can be adopted back in three dimensions to obtain the equilibrium
configuration of a given solid subject to strain induced by defects or applied
forces. Finally, it is shown how concepts coming from the familiar elasticity
theory can inspire new approaches to cosmology and in return how methods
appropriate to General Relativity can be applied back to classical problems of
elastic deformations in three dimensions.
|
Cartesian impedance control is a type of motion control strategy for robots
that improves safety in partially unknown environments by achieving a compliant
behavior of the robot with respect to external forces. This compliant robot
behavior has the added benefit of allowing physical human guidance of the
robot. In this paper, we propose a C++ implementation of compliance control
valid for any torque-commanded robotic manipulator. The proposed controller
implements Cartesian impedance control to track a desired end-effector pose.
Additionally, joint impedance is projected in the nullspace of the Cartesian
robot motion to track a desired robot joint configuration without perturbing
the Cartesian motion of the robot. The proposed implementation also allows the
robot to apply desired forces and torques to its environment. Several safety
features such as filtering, rate limiting, and saturation are included in the
proposed implementation. The core functionalities are in a re-usable base
library and a Robot Operating System (ROS) ros_control integration is provided
on top of that. The implementation was tested with the KUKA LBR iiwa robot and
the Franka Emika Robot (Panda) both in simulation and with the physical robots.
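The control law described above can be summarized in a few lines (a simplified sketch with a kinematic nullspace projector and without the filtering, rate limiting, and saturation mentioned in the paper):

import numpy as np

def impedance_torque(J, pose_err, xdot, q, qdot, q_des,
                     Kp_cart, Kd_cart, Kp_joint, Kd_joint):
    # Cartesian impedance: spring toward the desired pose, damper on the twist.
    F = Kp_cart @ pose_err - Kd_cart @ xdot
    tau_task = J.T @ F
    # Joint impedance, projected into the nullspace of the Cartesian task
    # so that it does not perturb the end-effector motion.
    tau_joint = Kp_joint @ (q_des - q) - Kd_joint @ qdot
    N = np.eye(J.shape[1]) - J.T @ np.linalg.pinv(J).T
    return tau_task + N @ tau_joint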
|
The energy market encompasses the behavior of energy supply and trading
within a platform system. By utilizing centralized or distributed trading,
energy can be effectively managed and distributed across different regions,
thereby achieving market equilibrium and satisfying both producers and
consumers. However, recent years have presented unprecedented challenges and
difficulties for the development of the energy market. These challenges include
regional energy imbalances, volatile energy pricing, high computing costs, and
issues related to transaction information disclosure. Researchers widely
acknowledge that the security features of blockchain technology can enhance the
efficiency of energy transactions and establish the fundamental stability and
robustness of the energy market. This type of blockchain-enabled energy market
is commonly referred to as an energy blockchain. Currently, there is a
burgeoning amount of research in this field, encompassing algorithm design,
framework construction, and practical application. It is crucial to organize
and compare these research efforts to facilitate the further advancement of
energy blockchain. This survey aims to comprehensively review the fundamental
characteristics of blockchain and energy markets, highlighting the significant
advantages of combining the two. Moreover, based on existing research outcomes,
we will categorize and compare the current energy market research supported by
blockchain in terms of algorithm design, market framework construction, and the
policies and practical applications adopted by different countries. Finally, we
will address current issues and propose potential future directions for
improvement, to provide guidance for the practical implementation of blockchain
in the energy market.
|
It has been established under very general conditions that the ergodic
properties of Markov processes are inherited by their conditional distributions
given partial information. While the existing theory provides a rather complete
picture of classical filtering models, many infinite-dimensional problems are
outside its scope. Far from being a technical issue, the infinite-dimensional
setting gives rise to surprising phenomena and new questions in filtering
theory. The aim of this paper is to discuss some elementary examples,
conjectures, and general theory that arise in this setting, and to highlight
connections with problems in statistical mechanics and ergodic theory. In
particular, we exhibit a simple example of a uniformly ergodic model in which
ergodicity of the filter undergoes a phase transition, and we develop some
qualitative understanding as to when such phenomena can and cannot occur. We
also discuss closely related problems in the setting of conditional Markov
random fields.
|
Recent studies in deepfake detection have yielded promising results when the
training and testing face forgeries are from the same dataset. However, the
problem remains challenging when one tries to generalize the detector to
forgeries created by unseen methods in the training dataset. This work
addresses the generalizable deepfake detection from a simple principle: a
generalizable representation should be sensitive to diverse types of forgeries.
Following this principle, we propose to enrich the "diversity" of forgeries by
synthesizing augmented forgeries with a pool of forgery configurations and
strengthen the "sensitivity" to the forgeries by enforcing the model to predict
the forgery configurations. To effectively explore the large forgery
augmentation space, we further propose to use the adversarial training strategy
to dynamically synthesize the most challenging forgeries to the current model.
Through extensive experiments, we show that the proposed strategies are
surprisingly effective (see Figure 1), achieving superior performance compared
with the current state-of-the-art methods. Code is available at
\url{https://github.com/liangchen527/SLADD}.
|
This is a follow-up to a paper with the same title and by the same authors.
In that paper, all groups were assumed to be abelian, and we are now aiming to
generalize the results to nonabelian groups.
The motivating point is Pedersen's theorem, which does hold for an arbitrary
locally compact group $G$, saying that two actions $(A,\alpha)$ and $(B,\beta)$
of $G$ are outer conjugate if and only if the dual coactions
$(A\rtimes_{\alpha}G,\widehat\alpha)$ and $(B\rtimes_{\beta}G,\widehat\beta)$
of $G$ are conjugate via an isomorphism that maps the image of $A$ onto the
image of $B$ (inside the multiplier algebras of the respective crossed
products).
We do not know of any examples of a pair of non-outer-conjugate actions such
that their dual coactions are conjugate, and our interest is therefore
exploring the necessity of the latter condition involving the images, and we have
decided to use the term "Pedersen rigid" for cases where this condition is
indeed redundant.
There is also a related problem, concerning the possibility of a so-called
equivariant coaction having a unique generalized fixed-point algebra, that we
call "fixed-point rigidity". In particular, if the dual coaction of an action
is fixed-point rigid, then the action itself is Pedersen rigid, and no example
of a non-fixed-point-rigid coaction is known.
|
We consider resonant absorption in a spectral line in the outflowing plasma
within several tens of Schwarzschild radii from a compact object. We take into
account both Doppler and gravitational shifting effects and re-formulate the
theory of P-Cygni profiles in these new circumstances. It is found that a
spectral line may have multiple absorption and emission components depending on
how far the region of interaction is from the compact object and on the
distribution of velocity and opacity. Profiles of spectral lines produced near
a neutron star or a black hole can be strongly distorted by Doppler blue-, or
red-shifting, and gravitational red-shifting. These profiles may have both red-
and blue-shifted absorption troughs. The result should be contrasted with
classical P-Cygni profiles which consist of red-shifted emission and
blue-shifted absorption features.
We suggest that this property of line profiles, namely the presence of
complicated narrow absorption and emission components under strong gravity, may
help to study spectroscopically the innermost parts of an outflow.
|
The TMD soft function can be obtained by formulating the Wilson line in terms
of auxiliary 1-dimensional fermion fields on the lattice. In this formulation,
the directional vector of the auxiliary field in Euclidean space has the form
$\tilde n = (in^0, \vec 0_\perp, n^3)$, where the time component is purely
imaginary. The components of these complex directional vectors in the Euclidean
space can be mapped directly to the rapidities of the Minkowski space soft
function. We present the results of the one-loop calculation of the Euclidean
space analog to the soft function using these complex directional vectors. As a
result, we show that the calculation is valid only when the directional vectors
obey the relation: $|r| = |n^3/n^0| > 1$, and that this result corresponds to a
computation in Minkowski space with space-like directed Wilson lines. Finally,
we show that a lattice calculable object can be constructed that has the
desired properties of the soft function.
|
We construct the covariant $\kappa$-symmetric superstring action for type
$IIB$ superstring on plane wave space supported by Ramond-Ramond background.
The action is defined as a 2d sigma-model on the coset superspace. We fix the
fermionic and bosonic light-cone gauges in the covariant Green-Schwarz
superstring action and find the light-cone string Lagrangian and the
Hamiltonian. The resulting light-cone gauge action is quadratic in both the
bosonic and fermionic superstring 2d fields, and therefore, this model can be
explicitly quantized. We also obtain a realization of the generators of the
basic superalgebra in terms of the superstring 2d fields in the light-cone
gauge.
|
Recent advances in large pre-trained language models have demonstrated strong
results in generating natural language and significantly improved performance
for many natural language generation (NLG) applications such as machine
translation and text summarization. However, when the generation tasks are more
open-ended and the content is under-specified, existing techniques struggle to
generate long-term coherent and creative content. Moreover, the models exhibit
and even amplify social biases that are learned from the training corpora. This
happens because the generation models are trained to capture the surface
patterns (i.e. sequences of words), instead of capturing underlying semantics
and discourse structures, as well as background knowledge including social
norms. In this paper, I introduce our recent works on controllable text
generation to enhance the creativity and fairness of language generation
models. We explore hierarchical generation and constrained decoding, with
applications to creative language generation including stories, poetry, and
figurative language, as well as bias mitigation for generation models.
|
This paper presents Learning-based Autonomous Guidance with RObustness and
Stability guarantees (LAG-ROS), which provides machine learning-based nonlinear
motion planners with formal robustness and stability guarantees, by designing a
differential Lyapunov function using contraction theory. LAG-ROS utilizes a
neural network to model a robust tracking controller independently of a target
trajectory, for which we show that the Euclidean distance between the target
and controlled trajectories is exponentially bounded linearly in the learning
error, even under the existence of bounded external disturbances. We also
present a convex optimization approach that minimizes the steady-state bound of
the tracking error to construct the robust control law for neural network
training. In numerical simulations, it is demonstrated that the proposed method
indeed possesses superior properties of robustness and nonlinear stability
resulting from contraction theory, whilst retaining the computational
efficiency of existing learning-based motion planners.
|
The gravitational shock waves have provided crucial insights into
entanglement structures of black holes in the AdS/CFT correspondence. Recent
progress on the soft hair physics suggests that these developments from
holography may also be applicable to geometries beyond negatively curved
spacetime. In this work, we derive a remarkably simple thermodynamic relation
which relates the gravitational shock wave to a microscopic area deformation.
Our treatment is based on the covariant phase space formalism and is applicable
to any Killing horizon in a generic static spacetime governed by an arbitrary
covariant theory of gravity. The central idea is to probe the
gravitational shock wave, which shifts the horizon in the $u$ direction, by the
Noether charge constructed from a vector field which shifts the horizon in the
$v$ direction. As an application, we illustrate its use for the Gauss-Bonnet
gravity. We also derive a simplified form of the gravitational scattering
unitary matrix and show that its leading-order contribution is nothing but the
exponential of the horizon area: $\mathcal{U}=\exp(i \text{Area})$.
|
The scalar contributions to the radiative decays of light vector mesons into
a pair of neutral pseudoscalars, $V\to P^0P^0\gamma$, are studied within the
framework of the Linear Sigma Model. This model has the advantage of
incorporating not only the scalar resonances in an explicit way but also the
constraints required by chiral symmetry. The experimental data on
$\phi\to\pi^0\pi^0\gamma$, $\phi\to\pi^0\eta\gamma$, $\rho\to\pi^0\pi^0\gamma$
and $\omega\to\pi^0\pi^0\gamma$ are satisfactorily accommodated in our
framework. Theoretical predictions for $\phi\to K^0\bar K^0\gamma$,
$\rho\to\pi^0\eta\gamma$, $\omega\to\pi^0\eta\gamma$ and the ratio $\phi\to
f_0\gamma/a_0\gamma$ are also given.
|
Hashing methods have been widely used for efficient similarity retrieval on
large-scale image databases. Traditional hashing methods learn hash functions to
generate binary codes from hand-crafted features, which achieve limited
accuracy since the hand-crafted features cannot optimally represent the image
content and preserve the semantic similarity. Recently, several deep hashing
methods have shown better performance because the deep architectures generate
more discriminative feature representations. However, these deep hashing
methods are mainly designed for supervised scenarios, which only exploit the
semantic similarity information, but ignore the underlying data structures. In
this paper, we propose the semi-supervised deep hashing (SSDH) approach, to
perform more effective hash function learning by simultaneously preserving
semantic similarity and underlying data structures. The main contributions are
as follows: (1) We propose a semi-supervised loss to jointly minimize the
empirical error on labeled data, as well as the embedding error on both labeled
and unlabeled data, which can preserve the semantic similarity and capture the
meaningful neighbors on the underlying data structures for effective hashing.
(2) A semi-supervised deep hashing network is designed to extensively exploit
both labeled and unlabeled data, in which we propose an online graph
construction method to benefit from the evolving deep features during training
to better capture semantic neighbors. To the best of our knowledge, the
proposed deep network is the first deep hashing method that can perform hash
code learning and feature learning simultaneously in a semi-supervised fashion.
Experimental results on 5 widely-used datasets show that our proposed approach
outperforms the state-of-the-art hashing methods.
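Schematically, the semi-supervised objective can be read as a supervised code-agreement term on labeled pairs plus a graph-embedding term on all pairs; the sketch below is our interpretation of the abstract, with hypothetical tensor conventions, not the authors' released code:

import torch

def ssdh_loss(b, sim_labeled, mask_labeled, w_graph, alpha=1.0):
    # b: (n, k) relaxed hash codes in [-1, 1]
    # sim_labeled: (n, n) semantic similarity (+1/-1), valid where mask_labeled is 1
    # w_graph: (n, n) kNN-graph weights over labeled and unlabeled data
    inner = b @ b.t() / b.shape[1]                     # code agreement in [-1, 1]
    sup = ((inner - sim_labeled) ** 2 * mask_labeled).sum() / mask_labeled.sum()
    emb = (w_graph * torch.cdist(b, b) ** 2).sum() / w_graph.sum()
    return sup + alpha * emb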
|
We present a detailed analysis from new multi-wavelength observations of the
exceptional galaxy cluster ACT-CL J0102-4915 "El Gordo," likely the most
massive, hottest, most X-ray luminous and brightest Sunyaev-Zeldovich (SZ)
effect cluster known at z>0.6. The Atacama Cosmology Telescope collaboration
discovered El Gordo as the most significant SZ decrement in a sky survey area
of 755 deg^2. Our VLT/FORS2 spectra of 89 member galaxies yield a cluster
redshift, z=0.870, and velocity dispersion, s=1321+/-106 km/s. Our Chandra
observations reveal a hot and X-ray luminous system with an integrated
temperature of Tx=14.5+/-1.0 keV and 0.5-2.0 keV band luminosity of
Lx=(2.19+/-0.11)x10^45 h70^-2 erg/s. We obtain several statistically consistent
cluster mass estimates; using mass scaling relations with velocity dispersion,
X-ray Yx, and integrated SZ, we estimate a cluster mass of
M200a=(2.16+/-0.32)x10^15 M_sun/h70. The Chandra and VLT/FORS2 optical data
also reveal that El Gordo is undergoing a major merger between components with
a mass ratio of approximately 2 to 1. The X-ray data show significant
temperature variations from a low of 6.6+/-0.7 keV at the merging low-entropy,
high-metallicity, cool core to a high of 22+/-6 keV. We also see a wake in the
X-ray surface brightness caused by the passage of one cluster through the
other. Archival radio data at 843 MHz reveal diffuse radio emission that, if
associated with the cluster, indicates the presence of an intense double radio
relic, hosted by the highest redshift cluster yet. El Gordo is possibly a
high-redshift analog of the famous Bullet Cluster. Such a massive cluster at
this redshift is rare, although consistent with the standard L-CDM cosmology in
the lower part of its allowed mass range. Massive, high-redshift mergers like
El Gordo are unlikely to be reproduced in the current generation of numerical
N-body cosmological simulations.
|
In this study, we use THz-assisted atom probe tomography (APT) to analyse
silica matrices used to encapsulate biomolecules. This technique provides the
chemical composition and 3D structure without significantly heating the
biosample, which is crucial for studying soft organic molecules such as
proteins. Our results show that THz pulses and a positive static field trigger
controlled evaporation of silica matrices, enabling 4D imaging with chemical
sensitivity comparable to UV laser-assisted APT. To support the interpretation
of these experimental results, we devise a computational model based on
time-dependent density functional theory to describe the interaction between
silica matrices and THz radiation. This model captures the nonlinear dynamics
driven by THz-pulses and the interplay between the THz source and the static
electric field in real time. This interdisciplinary approach expands the
capabilities of APT and holds promise for other THz-based analyses offering new
insights into material dynamics in complex biological environments.
|
We prove regularity properties of the self-energy, to all orders in
perturbation theory, for systems with singular Fermi surfaces which contain Van
Hove points where the gradient of the dispersion relation vanishes. In this
paper, we show for spatial dimensions $d \ge 3$ that despite the Van Hove
singularity, the overlapping loop bounds we proved together with E. Trubowitz
for regular non-nested Fermi surfaces [J. Stat. Phys. 84 (1996) 1209] still
hold, provided that the Fermi surface satisfies a no-nesting condition. This
implies that for a fixed interacting Fermi surface, the self-energy is a
continuously differentiable function of frequency and momentum, so that the
quasiparticle weight and the Fermi velocity remain close to their values in the
noninteracting system to all orders in perturbation theory. In a companion
paper, we treat the more singular two-dimensional case.
|
This paper sheds light on the current development in major industrialized
countries (such as Germany, Japan, and Switzerland): the trend towards
highly-integrated and autonomous production systems. The question is how such a
transition of a production infrastructure can take place most efficiently. This
research uses the system dynamics method to address this complex transition
process from a legacy production system to a modern and highly integrated
production system (an Industry 4.0 system). The findings mainly relate to the
identification of system structures that are relevant for an Industry 4.0
perspective. Our research is the first of its kind to present a causal
model that addresses the transition to Industry 4.0.
|
Some consumers, particularly households, are unwilling to face volatile
electricity prices, and they can perceive price differentiation within the same
local area as unfair. For these reasons, nodal prices in distribution networks
are rarely employed. However, the increasing availability of renewable
resources and emerging price-elastic behaviours pave the way for the effective
introduction of marginal nodal pricing schemes in distribution networks. The
aim of the proposed framework is to show how traditional non-flexible consumers
can coexist with flexible users in a local distribution area. Flexible users
will pay nodal prices, whereas non-flexible consumers will be charged a fixed
price derived from the underlying nodal prices. Moreover, the developed
approach shows how a distribution system operator should manage the local grid
by optimally determining the lines to be expanded, and the collected network
tariff levied on grid users, while accounting for both congestion rent and
investment costs. The proposed model is formulated as a non-linear integer
bilevel program, which is then recast as an equivalent single optimization
problem, by using integer algebra and complementarity relations. The power
flows in the distribution area are modelled by resorting to a second-order cone
relaxation, whose solution is exact for radial networks under mild assumptions.
The final model results in a mixed-integer quadratically constrained program,
which can be solved with off-the-shelf solvers. Numerical test cases based on
both 5-bus and 33-bus networks are reported to show the effectiveness of the
proposed method.
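For instance, one natural (purely illustrative; not necessarily the paper's exact rule) way to derive the single fixed price charged to non-flexible consumers from the underlying nodal prices is a load-weighted average:

import numpy as np

nodal_price = np.array([42.0, 45.5, 39.8, 41.2])  # price at each node, e.g. EUR/MWh
inflex_load = np.array([1.2, 0.8, 2.0, 1.5])      # non-flexible demand at each node, MW
fixed_price = np.average(nodal_price, weights=inflex_load)
print(f"fixed price: {fixed_price:.2f} EUR/MWh")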
|
We investigate the small-scale conformity in color between bright galaxies
and their faint companions in the Virgo cluster. Cluster member galaxies are
spectroscopically determined using the Extended Virgo Cluster Catalog (EVCC)
and the Sloan Digital Sky Survey Data Release 12 (SDSS DR12). We find that the
luminosity-weighted mean color of faint galaxies depends on the color of
adjacent bright galaxy as well as on the cluster-scale environment
(gravitational potential index). From this result for the entire area of the
Virgo cluster, it is not distinguishable whether the small-scale conformity is
genuine or is artificially produced due to cluster-scale variation of galaxy
color. To disentangle this degeneracy, we divide the Virgo cluster area into
three sub-areas so that the cluster-scale environmental dependence is
minimized: A1 (central), A2 (intermediate) and A3 (outermost). We find
conformity in color between bright galaxies and their faint companions
(color-color slope significance S ~ 2.73 sigma and correlation coefficient cc ~
0.50) in A2, where the cluster-scale environmental dependence is almost
negligible. On the other hand, the conformity is not significant or very
marginal (S ~ 1.75 sigma and cc ~ 0.27) in A1. The conformity is not
significant either in A3 (S ~ 1.59 sigma and cc ~ 0.44), but the sample size is
too small in this area. These results are consistent with a scenario in which
the small-scale conformity in a cluster is a vestige of infallen groups and
these groups lose conformity as they come closer to the cluster center.
|
We determine the geometrical and viewing angle parameters of the Large
Magellanic Cloud (LMC) using the Leavitt law based on a sample of more than
$3500$ common classical Cepheids (FU and FO) in optical ($V,I$), near-infrared
($JHK_{s}$) and mid-infrared ($[3.6]~\mu$m and $[4.5]~\mu$m) photometric bands.
Statistical reddening values and reddening-free distance moduli for each of the
individual Cepheids are obtained using a simultaneous multi-band fit to the
apparent distance moduli from the analysis of the resulting Leavitt laws in
these seven photometric bands. A reddening map of the LMC obtained from
the analysis shows good agreement with the other maps available in the
literature. Extinction free distance measurements along with the information of
the equatorial coordinates $(\alpha,\delta)$ for individual stars are used to
obtain the corresponding Cartesian coordinates with respect to the plane of the
sky. By fitting a plane solution of the form $z=f(x,y)$ to the observed three
dimensional distribution, the following viewing angle parameters of the LMC are
obtained: inclination angle $i=25^{\circ}.110\pm 0^{\circ}.365$, position angle
of line of nodes $\theta_{\text{lon}}=154^{\circ}.702\pm1^{\circ}.378$. On the
other hand, modelling the observed three dimensional distribution of the
Cepheids as a triaxial ellipsoid, the following values of the geometrical axes
ratios of the LMC are obtained: $1.000\pm 0.003:1.151\pm0.003:1.890\pm 0.014$
with the viewing angle parameters: inclination angle of $i=11^{\circ}.920\pm
0^{\circ}.315$ with respect to the longest axis from the line of sight and
position angle of line of nodes $\theta_{\rm lon} = 128^{\circ}.871\pm
0^{\circ}.569$. The position angles are measured eastwards from north.
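The plane-fitting step can be illustrated as follows (the axis conventions, and hence the sign of the position angle, are assumptions here; the paper's exact transformation from $(\alpha,\delta)$ and distance to Cartesian coordinates is not reproduced):

import numpy as np

def fit_plane(x, y, z):
    # Least-squares fit of z = a*x + b*y + c to the 3D Cepheid distribution.
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

def viewing_angles(a, b):
    # Inclination: angle between the fitted plane and the plane of the sky.
    inc = np.degrees(np.arccos(1.0 / np.sqrt(1.0 + a * a + b * b)))
    # Line of nodes: intersection with the sky plane; the sign convention
    # depends on how x and y are oriented on the sky (assumed x east, y north).
    theta = np.degrees(np.arctan2(-a, b))
    return inc, theta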
|
The alloying effect on the lattice parameters, isostructural mixing
enthalpies and ductility of the ternary nitride systems Cr1-xTMxN (TM=Sc, Y;
Ti, Zr, Hf; V, Nb, Ta; Mo, W) in the cubic B1 structure has been investigated
using first-principles calculations. The maximum mixing enthalpy, caused by the
large lattice mismatch in the Cr1-xYxN solid solution, indicates a strong
preference for phase separation, while Cr1-xTaxN exhibits a negative mixing
enthalpy in the whole
compositional range with respect to cubic B1 structured CrN and TaN, thus being
unlikely to decompose spinodally. The near-to-zero mixing enthalpies of
Cr1-xScxN and Cr1-xVxN are ascribed to the mutually counteracted electronic and
lattice mismatch effects. Additions of small amounts of V, Nb, Ta, Mo or W into
CrN coatings increase its ductility.
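For reference, the isostructural mixing enthalpy referred to above is conventionally computed from total energies as (the standard definition, which we assume here)
$$
\Delta H_{\text{mix}}(x) = E\left(\mathrm{Cr}_{1-x}\mathrm{TM}_{x}\mathrm{N}\right) - (1-x)\,E\left(\mathrm{CrN}\right) - x\,E\left(\mathrm{TMN}\right),
$$
with all total energies evaluated in the cubic B1 structure, so that a positive value signals a tendency toward phase separation and a negative value a tendency toward mixing.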
|
High energy inclusive hadron production in the central kinematical region is
analyzed within models of the unitarized pomeron. It is shown that the sum of
multipomeron exchanges with intercept $\alpha_P(0)>1$ qualitatively reproduces
the contribution of the triple-pole (at $t=0$) pomeron to the inclusive cross
section. Based on this analogy, we then suggest a general form of unitarized
pomeron
contributions (in particular the dipole or tripole pomeron) to inclusive cross
section. They lead to a parabolic form of the rapidity distribution giving
$<n>\propto \ln^3s$ (tripole) or $<n>\propto \ln^2s$ (dipole). The considered
models, with the suggested parametrization of the $p_t$-dependence of the cross
sections, describe well the rapidity distribution data in $pp$ and $\bar pp$
interactions at energies $\sqrt{s}\geq 200$ GeV. Predictions for one-particle
inclusive production at LHC energies are given.
|
We consider the stochastic multi-armed bandit problem with a prior
distribution on the reward distributions. We are interested in studying
prior-free and prior-dependent regret bounds, very much in the same spirit as
the usual distribution-free and distribution-dependent bounds for the
non-Bayesian stochastic bandit. Building on the techniques of Audibert and
Bubeck [2009] and Russo and Van Roy [2013], we first show that Thompson Sampling
attains an optimal prior-free bound in the sense that for any prior
distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$. This
result is unimprovable in the sense that there exists a prior distribution such
that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20}
\sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al.
[2013] (where the optimal mean is known as well as a lower bound on the
smallest gap) and we show that in this case the regret of Thompson Sampling is
in fact uniformly bounded over time, thus showing that Thompson Sampling can
greatly take advantage of the nice properties of these priors.
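For the reader's convenience, here is a minimal sketch of Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors, the classic instance behind the bounds above (arm means and horizon are arbitrary illustrations):

import numpy as np

def thompson_sampling(means, n_rounds, seed=0):
    rng = np.random.default_rng(seed)
    K = len(means)
    wins, losses = np.ones(K), np.ones(K)             # Beta(1, 1) priors on each arm
    regret = 0.0
    for _ in range(n_rounds):
        arm = int(np.argmax(rng.beta(wins, losses)))  # sample beliefs, act greedily
        reward = float(rng.random() < means[arm])     # Bernoulli reward
        wins[arm] += reward
        losses[arm] += 1.0 - reward
        regret += max(means) - means[arm]
    return regret

print(thompson_sampling([0.4, 0.5, 0.55], 10_000))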
|