Finite-time coherent sets [8] have recently been defined by a measure-based
objective function describing the degree to which sets hold together, along
with a Frobenius-Perron transfer operator method to produce optimally coherent
sets. Here we present an extension that generalizes the concept to
hierarchically defined relatively coherent sets, obtained by adjusting the
finite-time coherent sets to use relative measure restricted to sets that are
developed iteratively and hierarchically in a tree of partitions. Several
examples clarify the meaning and expected behavior of the techniques: the
nonautonomous double gyre, the standard map, an idealized stratospheric flow,
and empirical data from the Gulf of Mexico during the 2010 oil spill. For the
sake of computational-complexity analysis, we include an appendix on the
complexity of developing the Ulam-Galerkin matrix estimates of the
Frobenius-Perron operator used centrally here.
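As a concrete illustration of the Ulam-Galerkin construction discussed in the
appendix, the following minimal Python sketch (our own illustration with
arbitrary grid resolution and sampling parameters, not the paper's code)
estimates the Frobenius-Perron transfer matrix for the Chirikov standard map,
one of the examples above; the work scales with (number of boxes) x (samples
per box).

    import numpy as np

    def standard_map(theta, p, K=0.97):
        # Chirikov standard map on the torus [0, 2*pi)^2
        p_new = (p + K * np.sin(theta)) % (2 * np.pi)
        theta_new = (theta + p_new) % (2 * np.pi)
        return theta_new, p_new

    def ulam_matrix(n_boxes=50, samples_per_box=100, seed=0):
        # Ulam-Galerkin estimate of the Frobenius-Perron operator:
        # partition the torus into n_boxes x n_boxes cells, push sample
        # points through the map, and count cell-to-cell transitions.
        rng = np.random.default_rng(seed)
        P = np.zeros((n_boxes**2, n_boxes**2))
        h = 2 * np.pi / n_boxes
        for i in range(n_boxes):
            for j in range(n_boxes):
                theta = (i + rng.random(samples_per_box)) * h
                p = (j + rng.random(samples_per_box)) * h
                theta2, p2 = standard_map(theta, p)
                i2 = np.clip((theta2 // h).astype(int), 0, n_boxes - 1)
                j2 = np.clip((p2 // h).astype(int), 0, n_boxes - 1)
                np.add.at(P, (i * n_boxes + j, i2 * n_boxes + j2), 1.0)
        return P / samples_per_box  # row-stochastic transition matrix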
|
An idea that has become unavoidable in the study of zero-entropy symbolic
dynamics is that the dynamical properties of a system induce a combinatorial
structure in it. An old problem addressing this intuition is finding a
structure theorem for linear-growth-complexity subshifts using the S-adic
formalism. It is known as the S-adic conjecture and has motivated several
influential results in the theory. In this article, we completely solve the
conjecture by providing an S-adic structure for this class. Our methods extend
to nonsuperlinear-complexity subshifts. An important consequence of our main
results is that these complexity classes gain access to the S-adic machinery.
We show how this provides a unified framework and simplified proofs of several
known results, including Cassaigne's pioneering 1996 theorem.
|
Let ${\mathfrak{g}}$ be a complex semisimple Lie algebra with Borel
subalgebra ${\mathfrak{b}}$ and corresponding nilradical ${\mathfrak{n}}$. We
show that singular Whittaker modules $M$ are simple if and only if the space
$\hbox{Wh}\,M$ of Whittaker vectors is $1$-dimensional. For arbitrary locally
${\mathfrak{n}}$-finite ${\mathfrak{g}}$-modules $V$, an immediate corollary is
that the dimension of $\hbox{Wh}\,V$ is bounded by the composition length of
$V$.
|
We have measured the multiplicity fractions and separation distributions of
seven young star-forming regions using a uniform sample of young binaries. Both
the multiplicity fractions and separation distributions are similar in the
different regions. A tentative decline in the multiplicity fraction with
increasing stellar density is apparent, even for binary systems with
separations too close (19-100au) to have been dynamically processed. The
separation distributions in the different regions are statistically
indistinguishable over most separation ranges, and the regions with higher
densities do not exhibit a lower proportion of wide (300-620au) relative to
close (62-300au) binaries as might be expected from the preferential
destruction of wider pairs. Only the closest (19-100au) separation range, which
would be unaffected by dynamical processing, shows a possible difference in
separation distributions between different regions. The combined set of young
binaries, however, shows a distinct difference when compared to field binaries,
with a significant excess of close (19-100au) systems among the younger
binaries. Based on both the similarities and differences between individual
regions, and between all seven young regions and the field, especially over
separation ranges too close to be modified by dynamical processing, we conclude
that multiple star formation is not universal and, by extension, the star
formation process is not universal.
|
We present and experimentally realize a quantum algorithm for efficiently
solving the following problem: given an $N\times N$ matrix $\mathcal{M}$, an
$N$-dimensional vector $\textbf{\emph{b}}$, and an initial vector
$\textbf{\emph{x}}(0)$, obtain a target vector $\textbf{\emph{x}}(t)$ as a
function of time $t$ according to the constraint
$d\textbf{\emph{x}}(t)/dt=\mathcal{M}\textbf{\emph{x}}(t)+\textbf{\emph{b}}$.
We show that our algorithm exhibits an exponential speedup over its classical
counterpart in certain circumstances. In addition, we demonstrate our quantum
algorithm for a $4\times4$ linear differential equation using a 4-qubit nuclear
magnetic resonance quantum information processor. Our algorithm provides a key
technique for solving many important problems which rely on the solutions to
linear differential equations.
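For orientation, assuming a time-independent and invertible $\mathcal{M}$ (a
standard fact about the classical problem, not a statement about the
algorithm's internals), the constraint integrates in closed form to
$$\textbf{\emph{x}}(t)=e^{\mathcal{M}t}\,\textbf{\emph{x}}(0)+\left(e^{\mathcal{M}t}-I\right)\mathcal{M}^{-1}\textbf{\emph{b}},$$
which, up to normalization, is the target state to be encoded in the
amplitudes of the quantum register.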
|
Privacy-enhancing technologies are technologies that implement fundamental
data protection principles. With respect to biometric recognition, different
types of privacy-enhancing technologies have been introduced for protecting
stored biometric data which are generally classified as sensitive. In this
regard, various taxonomies and conceptual categorizations have been proposed
and standardization activities have been carried out. However, these efforts
have mainly been devoted to certain sub-categories of privacy-enhancing
technologies and therefore lack generalization. This work provides an overview
of concepts of privacy-enhancing technologies for biometrics in a unified
framework. Key aspects and differences between existing concepts are
highlighted in detail at each processing step. Fundamental properties and
limitations of existing approaches are discussed and related to data protection
techniques and principles. Moreover, scenarios and methods for the assessment
of privacy-enhancing technologies for biometrics are presented. This paper is
meant as a point of entry to the field of biometric data protection and is
directed towards experienced researchers as well as non-experts.
|
With the Westerbork Synthesis Radio Telescope, we performed HI observations
of a sample of known X-ray emitting Gigahertz-peaked-spectrum galaxies with
compact-symmetric-object morphology (GPS/CSOs) that lacked an HI absorption
detection. We combined radio and X-ray data of the full sample of X-ray
emitting GPS/CSOs and found a significant, positive correlation between the
column densities of the total and neutral hydrogen ($N_{\rm H}$ and $N_{\rm
HI}$, respectively). Using a Bayesian approach, we simultaneously quantified
the parameters of the $N_{\rm H} - N_{\rm HI}$ relation and the intrinsic
spread of the data set. For a specific subset of our sample, we found $N_{\rm
H} \propto N_{\rm HI}^b$, with $b=0.93^{+0.49}_{-0.33}$, and $\sigma_{int}
(N_{\rm H})= 1.27^{+1.30}_{-0.40}$. The $N_{\rm H} - N_{\rm HI}$ correlation
suggests a connection between the physical properties of the radio and X-ray
absorbing gas.
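For readers who want the shape of such a fit, the following sketch (our own
illustration with placeholder numbers, not the survey data) maximizes the
likelihood of a log-linear relation $\log N_{\rm H} = a + b \log N_{\rm HI}$
with Gaussian intrinsic scatter; the paper's Bayesian treatment would
additionally place priors on $(a, b, \sigma_{int})$ and sample the full
posterior rather than just locating its peak.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder column densities (log10 cm^-2), NOT the survey data
    logNHI = np.array([20.1, 20.6, 21.0, 21.3, 21.8])
    logNH = np.array([21.0, 21.4, 22.1, 22.3, 23.0])

    def neg_log_likelihood(params):
        # Model: log N_H = a + b * log N_HI with Gaussian intrinsic
        # scatter sigma (parameterized by log_sigma for positivity)
        a, b, log_sigma = params
        sigma2 = np.exp(2.0 * log_sigma)
        resid = logNH - (a + b * logNHI)
        return 0.5 * np.sum(resid**2 / sigma2 + np.log(2 * np.pi * sigma2))

    fit = minimize(neg_log_likelihood, x0=[0.0, 1.0, -1.0])
    a_hat, b_hat, log_sigma_hat = fit.x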
|
Building upon the strength of modern large language models (LLMs), generative
error correction (GEC) has emerged as a promising paradigm that can elevate the
performance of modern automatic speech recognition (ASR) systems. One
representative approach is to leverage in-context learning to prompt LLMs so
that a better hypothesis can be generated by the LLMs based on a
carefully-designed prompt and an $N$-best list of hypotheses produced by ASR
systems. However, it is yet unknown whether the existing prompts are the most
effective ones for the task of post-ASR error correction. In this context, this
paper first explores alternative prompts to identify an initial set of
effective prompts, and then proposes to employ an evolutionary prompt
optimization algorithm to refine the initial prompts. Evaluation results on
the CHiME-4 subset of Task $1$ of the SLT $2024$ GenSEC challenge show the
effectiveness and potential of the proposed algorithms.
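A minimal sketch of the evolutionary idea (ours; score_prompt and the mutation
operator are hypothetical stand-ins for the paper's components, which would
evaluate a candidate prompt by running the LLM on the $N$-best lists and
scoring, e.g., word error rate on a development set):

    import random

    def score_prompt(prompt):
        # Hypothetical stand-in: in the paper's setting this would prompt
        # the LLM with the N-best list and return, e.g., negative word
        # error rate on a development set.
        return (len(prompt) * 7) % 13  # dummy fitness, illustration only

    def mutate(prompt, phrases):
        # Insert a random phrase at a random position
        words = prompt.split()
        words.insert(random.randrange(len(words) + 1), random.choice(phrases))
        return " ".join(words)

    phrases = ["carefully", "step by step", "considering acoustics"]
    population = ["Correct the ASR hypothesis:", "Pick the best hypothesis:"]
    for generation in range(10):
        population += [mutate(p, phrases) for p in population]
        population.sort(key=score_prompt, reverse=True)   # fittest first
        population = population[: max(2, len(population) // 2)]
    best_prompt = population[0]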
|
This paper introduces Targeted Function Balancing (TFB), a covariate
balancing weights framework for estimating the average treatment effect of a
binary intervention. TFB first regresses an outcome on covariates, and then
selects weights that balance functions (of the covariates) that are
probabilistically near the resulting regression function. This yields balance
in the regression function's predicted values and the covariates, with the
regression function's estimated variance determining how much balance in the
covariates is sufficient. Notably, TFB demonstrates that intentionally leaving
imbalance in some covariates can increase efficiency without introducing bias,
challenging traditions that warn against imbalance in any variable.
Additionally, TFB is entirely defined by a regression function and its
estimated variance, turning the problem of how best to balance the covariates
into how best to model the outcome. Kernel regularized least squares and the
LASSO are considered as regression estimators. With the former, TFB contributes
to the literature of kernel-based weights. As for the LASSO, TFB uses the
regression function's estimated variance to prioritize balance in certain
dimensions of the covariates, a feature that can be greatly exploited by
choosing a sparse regression estimator. This paper also introduces a balance
diagnostic, Targeted Function Imbalance, that may have useful applications.
|
For any polarized variety (X,L), we show that test configurations and, more
generally, R-test configurations (defined as finitely generated filtrations of
the section ring) can be analyzed in terms of Fubini-Study functions on the
Berkovich analytification of X with respect to the trivial absolute value on
the ground field. Building on non-Archimedean pluripotential theory, we
describe the (Hausdorff) completion of the space of test configurations, with
respect to two natural pseudo-metrics, in terms of plurisubharmonic functions
and measures of finite energy on the Berkovich space. We also describe the
Hausdorff quotient of the space of all filtrations, and establish a 1--1
correspondence between divisorial norms and divisorial measures, both being
determined in terms of finitely many divisorial valuations.
|
Free-standing thin films of magnetic ion intercalated transition metal
dichalcogenides are produced using ultramicrotoming techniques. Films of
thicknesses ranging from 30 nm to 250 nm were achieved and characterized using
transmission electron diffraction and X-ray magnetic circular dichroism.
Diffraction measurements visualize the long range crystallographic ordering of
the intercalated ions, while the dichroism measurements directly assess the
orbital contributions to the total magnetic moment. We thus verify the
unquenched orbital moment in Fe$_{0.25}$TaS$_2$ and measure the fully quenched
orbital contribution in Mn$_{0.25}$TaS$_2$. Such films can be used in a wide variety of
ultrafast X-ray and electron techniques that benefit from transmission
geometries, and allow measurements of ultrafast structural, electronic, and
magnetization dynamics in space and time.
|
In this work, we developed a new Bayesian method for variable selection in
function-on-scalar regression (FOSR). Our method uses a hierarchical Bayesian
structure and latent variables to enable an adaptive covariate selection
process for FOSR. Extensive simulation studies show the proposed method's main
properties, such as its accuracy in estimating the coefficients and high
capacity to select variables correctly. Furthermore, we conducted a substantial
comparative analysis with the main competing methods, the BGLSS (Bayesian Group
Lasso with Spike and Slab prior) method, the group LASSO (Least Absolute
Shrinkage and Selection Operator), the group MCP (Minimax Concave Penalty), and
the group SCAD (Smoothly Clipped Absolute Deviation). Our results demonstrate
that the proposed methodology is superior in correctly selecting covariates
compared with the existing competing methods while maintaining a satisfactory
level of goodness of fit. In contrast, the competing methods could not balance
selection accuracy with goodness of fit. We also considered a COVID-19 dataset
and some socioeconomic data from Brazil as an application and obtained
satisfactory results. In short, the proposed Bayesian variable selection model
is highly competitive, showing significant predictive and selective quality.
|
We study theoretically the electronic structure of three-dimensional (3D)
higher-order topological insulators in the presence of step edges. We
numerically find that a 1D conducting state with a helical spin structure,
which also has a linear dispersion near zero energy, emerges at a step edge
and on the opposite surface of the step edge. We also find that the 1D helical
conducting state on the opposite surface of a step edge emerges when the
electron hopping in the direction perpendicular to the step is weak. In other
words, the existence of the 1D helical conducting state on the opposite surface
of a step edge can be understood by considering the addition of two
independent blocks of 3D higher-order topological insulators of different sizes.
On the other hand, when the electron hopping in the direction perpendicular to
the step is strong, the location of the emergent 1D helical conducting state
moves from the opposite surface of a step edge to the dip ($270^{\circ}$ edge)
just below the step edge. In this case, the existence at the dip below the step
edge can be understood by assigning each surface with a sign ($+$ or $-$) of
the mass of the surface Dirac fermions. These two physical pictures are
connected continuously without the bulk bandgap closing. Our finding paves the
way for on-demand creation of 1D helical conducting states from 3D higher-order
topological insulators employing experimental processes commonly used in
thin-film devices, which could lead to, e.g., a realization of high-density
Majorana qubits.
|
We study a family of sparse estimators defined as minimizers of some
empirical Lipschitz loss function, examples being the hinge loss, the logistic
loss and the quantile regression loss, with a convex, sparse or group-sparse
regularization. In particular, we consider the L1 norm on the coefficients, its
sorted Slope version, and the Group L1-L2 extension. We propose a new
theoretical framework that uses common assumptions in the literature to
simultaneously derive new high-dimensional L2 estimation upper bounds for all
three regularization schemes. For L1 and Slope regularizations, our bounds
scale as $(k^*/n) \log(p/k^*)$, where $n\times p$ is the size of the design
matrix and $k^*$ the dimension of the theoretical loss minimizer
$\boldsymbol{\beta}^*$, and match the optimal minimax rate achieved for the
least-squares case. For Group L1-L2 regularization, our bounds scale as
$(s^*/n) \log\left( G / s^* \right) + m^* / n$, where $G$ is the total number
of groups and $m^*$ the number of coefficients in the $s^*$ groups which
contain $\boldsymbol{\beta}^*$, and improve over the least-squares case. We
show that, when the signal is strongly group-sparse, Group L1-L2 is superior
to L1 and Slope. In addition, we adapt our approach to the sub-Gaussian linear
regression framework and reach the optimal minimax rate for the Lasso, and an
improved rate for the Group-Lasso. Finally, we release an accelerated proximal
algorithm that computes the nine main convex estimators of interest when the
number of variables is of the order of $100{,}000$.
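To fix ideas, here is a plain (un-accelerated) proximal gradient sketch of our
own for one instance of this family, the logistic loss with L1 penalty; the
released solver is the accelerated version and also covers the hinge and
quantile losses and the Slope and Group L1-L2 penalties.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the L1 norm
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista_logistic_l1(X, y, lam, step, n_iter=500):
        # Plain proximal gradient for (1/n) sum_i log(1 + exp(-y_i x_i'beta))
        # + lam * ||beta||_1, with labels y in {-1, +1}.
        # A valid step size satisfies step <= 4 * n / ||X||_2^2.
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(n_iter):
            margins = y * (X @ beta)
            grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
            beta = soft_threshold(beta - step * grad, step * lam)
        return beta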
|
In a recent work [Reible et al., Phys. Rev. Res. 5, 023156, 2023], it has
been shown that the mean particle-particle interaction across an ideal surface
that divides a system into two parts can be employed to estimate the size
dependence of the thermodynamic accuracy of the system. In this work we
propose its application to systems with finite-range interactions that model
dense quantum gases, and we derive an approximate size-dependence scaling law. In
addition, we show that the application of the criterion is equivalent to the
determination of a free energy response to a perturbation. The latter result
confirms the complementarity of the criterion to other estimates of finite-size
effects based on direct simulations and empirical structure or energy
convergence criteria.
|
A novel model of particle acceleration in the magnetospheres of rotating
active galactic nuclei (AGN) is constructed. The particle energies may be
boosted up to $10^{21}$ eV in a two-step mechanism: In the first stage, the
Langmuir waves are centrifugally excited and amplified by means of a parametric
process that efficiently pumps rotational energy to excite electrostatic
fields. In the second stage, the electrostatic energy is transferred to
particle kinetic energy via Landau damping made possible by rapid "Langmuir
collapse". The time-scale for parametric pumping of Langmuir waves turns out to
be small compared to the kinematic time-scale, indicating high efficiency of
the first process. The second process of "Langmuir collapse" - the creation of
caverns or low density regions - also happens rapidly for the characteristic
parameters of the AGN magnetosphere. The Langmuir collapse creates appropriate
conditions for transferring electric energy to boost up already high particle
energies to much higher values. It is further shown that various energy loss
mechanisms are relatively weak and do not impose any significant constraints
on the maximum achievable energies.
|
We discuss an application of the method of angular quantization to the
reconstruction of form-factors of local fields in massive integrable models.
The general formalism is illustrated with examples of the Klein-Gordon,
sinh-Gordon and Bullough-Dodd models. For the latter two models the angular
quantization approach makes it possible to obtain free field representations
for form-factors of exponential operators. We discuss an intriguing relation
between the free field representations and deformations of the Virasoro
algebra. The deformation associated with the Bullough-Dodd model appears to be
different from the known deformed Virasoro algebra.
|
We present an up-to-date analysis for a precise determination of the
effective fine structure constant and discuss the prospects for future
improvements. We advocate a determination monitored by the Adler function,
which allows us to exploit perturbative QCD in an optimal, well-controlled
way. Together with a long-term program of hadronic cross section measurements
at energies up to a few GeV, a determination of alpha(M_Z) at a precision
comparable to that of the Z mass M_Z should be feasible. Presently alpha(E) at
E>1 GeV is the least precisely known of the fundamental parameters of the SM.
Since, in spite of substantial progress due to new BaBar exclusive data, the
region 1.4 to 2.4 GeV remains the most problematic one, a major step in the
reduction of the uncertainties is expected from VEPP-2000 and from a possible
``high-energy'' option DAFNE-2 at Frascati. The up-to-date evaluation reads
Delta alpha^{(5)}_{had}(M_Z^2) = 0.027515 +/- 0.000149 or
alpha^{-1}(M_Z) = 128.957 +/- 0.020.
|
Spectroscopic phase curves provide unique access to the three-dimensional
properties of transiting exoplanet atmospheres. However, a modeling framework
must be developed to deliver accurate inferences of atmospheric properties for
these complex data sets. Here, we develop an approach to retrieve temperature
structures and molecular abundances from phase curve spectra at any orbital
phase. In the context of a representative hot Jupiter with a large day-night
temperature contrast, we examine the biases in typical one-dimensional (1D)
retrievals as a function of orbital phase/geometry, compared to two-dimensional
(2D) models that appropriately capture the disk-integrated phase geometry. We
guide our intuition by applying our new framework on a simulated HST+Spitzer
phase curve data set in which the "truth" is known, followed by an application
to the spectroscopic phase curve of the canonical hot Jupiter, WASP-43b. We
also demonstrate the retrieval framework on simulated JWST phase curve
observations. We apply our new geometric framework to a joint fit of all
spectroscopic phases, assuming longitudinal molecular abundance homogeneity,
resulting in a factor of 2 improvement in abundance precision when compared
to individual phase constraints. With a 1D retrieval model on simulated
HST+Spitzer data, we find strongly biased molecular abundances for CH$_4$ and
CO$_2$ at most orbital phases. With 2D, the day and night profiles retrieved
from WASP-43b remain consistent throughout the orbit. JWST retrievals show that
a 2D model is strongly favored at all orbital phases. Based on our new 2D
retrieval implementation, we provide recommendations on when 1D models are
appropriate and when more complex phase geometries involving multiple TP
profiles are required to obtain an unbiased view of tidally locked planetary
atmospheres.
|
We present a multi-wavelength study of the young stellar population in the
Cygnus-X DR15 region. We studied young stars forming or recently formed at and
around the tip of a prominent molecular pillar and an infrared dark cloud.
Using a combination of ground based near-infrared, space based infrared and
X-ray data, we constructed a point source catalog from which we identified 226
young stellar sources, which we classified into evolutionary classes. We
studied their spatial distribution across the molecular gas structures and
identified several groups possibly belonging to distinct young star clusters.
We obtained samples of these groups and constructed K-band luminosity functions
that we compared with those of artificial clusters, allowing us to make first
order estimates of the mean ages and age spreads of the groups. We used a
$^{13}$CO(1-0) map to investigate the gas kinematics at the prominent gaseous
envelope of the central cluster in DR15, and we infer that the removal of this
envelope is relatively slow compared to other cluster regions, in which the
gas dispersal timescale could be similar to or shorter than the circumstellar
disk dissipation timescale. The presence of other groups with slightly older ages,
associated with much less prominent gaseous structures may imply that the
evolution of young clusters in this part of the complex proceeds in periods
that last 3 to 5 Myr, perhaps after a slow dissipation of their dense molecular
cloud birthplaces.
|
Unsupervised extractive document summarization aims to select important
sentences from a document without using labeled summaries during training.
Existing methods are mostly graph-based with sentences as nodes and edge
weights measured by sentence similarities. In this work, we find that
transformer attentions can be used to rank sentences for unsupervised
extractive summarization. Specifically, we first pre-train a hierarchical
transformer model using unlabeled documents only. Then we propose a method to
rank sentences using sentence-level self-attentions and pre-training
objectives. Experiments on CNN/DailyMail and New York Times datasets show our
model achieves state-of-the-art performance on unsupervised summarization. We
also find in experiments that our model is less dependent on sentence
positions. When using a linear combination of our model and a recent
unsupervised model explicitly modeling sentence positions, we obtain even
better results.
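The following toy sketch (ours; the paper's criterion combines sentence-level
self-attentions with the pre-training objectives rather than this simple sum)
shows the basic attention-as-salience idea:

    import numpy as np

    def rank_sentences(attn, sent_spans, k=3):
        # attn: (T, T) self-attention matrix, averaged over heads/layers,
        # where attn[i, j] is attention paid by token i to token j.
        # sent_spans: list of (start, end) token index ranges per sentence.
        # Score a sentence by the mean attention its tokens receive, then
        # return the indices of the top-k sentences as the summary.
        incoming = attn.sum(axis=0)  # attention received per token
        scores = [incoming[s:e].mean() for s, e in sent_spans]
        return np.argsort(scores)[::-1][:k]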
|
Space variant beams are of great importance as a variety of applications have
emerged in recent years. As such, manipulation of their degrees of freedom is
highly desired. Here, by exploiting the circular dichroism and circular
birefringence in a Zeeman-shifted Rb medium, we study the general interaction
of space variant beams with such a medium. We present two particular cases of
radial polarization and hybrid polarization beams where the control of the
polarization states is demonstrated experimentally. Moreover, we show that a
Zeeman-shifted atomic system can be used as an analyzer for such space variant
beams.
|
We prove that the (real or complex) chromatic roots of a series-parallel
graph with maxmaxflow $\Lambda$ lie in the disc $|q-1| < (\Lambda-1)/\log 2$.
More generally, the same bound holds for the (real or complex) roots of the
multivariate Tutte polynomial when the edge weights lie in the "real
antiferromagnetic regime" $-1 \le v_e \le 0$. This result is within a factor
$1/\log 2 \approx 1.442695$ of being sharp.
|
The development and production of radio frequency quadrupoles (RFQs), which
are used to accelerate low-energy ions to high energies, has continued since
the 1970s. The
development of RFQ design software packages, which can provide ease of use with
a graphical interface, can visualize the behavior of the ion beam inside the
RFQ, and can run on both Unix and Windows platforms, has become inevitable due
to increasing interest around the world. In this context, a new RFQ design
software package, DEMIRCI, has been under development. To meet the user
expectations, a number of new features have been recently added to DEMIRCI.
Apart from being usable via both graphical interface and command line, DEMIRCI
has been enriched with beam dynamics calculations. This new module gives users
the possibility to define and track an input beam and to monitor its behavior
along the RFQ. Additionally, the Windows OS has been added to the list of
supported platforms. Finally, the addition of more realistic eight-term
potential results is ongoing. This note summarizes the latest developments and
results from the DEMIRCI RFQ design software.
|
The SARS-CoV-2 pandemic is the most prominent issue many countries face
today. The frequent changes in infections, recoveries, and deaths reflect the
dynamic nature of this pandemic. Accurately predicting the spreading rate of
the virus is crucial for decision making aimed at fighting infection and for
tracking and controlling virus transmission in the community. We develop a
prediction model using statistical time series models, namely SARIMA and
FBProphet, to monitor the daily active, recovered, and death cases of COVID-19
accurately. Then, with the help of various details about each individual
patient (such as height, weight, and gender), we design a set of rules using
the Semantic Web Rule Language and some mathematical models for dealing with
COVID-19 infected cases on an individual basis. After combining all the
models, a COVID-19 ontology is developed, and various SPARQL queries are
performed on the designed ontology to accumulate risk factors and provide
appropriate diagnoses, precautions, and preventive suggestions for COVID-19
patients. After comparing the performance of SARIMA and FBProphet, it is
observed that the SARIMA model performs better in forecasting COVID-19 cases.
For individual-level COVID-19 case prediction, approximately 497 individual
samples have been tested and classified into five different COVID-19 classes:
Having COVID, No COVID, High Risk COVID case, Medium to High Risk case, and
Control needed case.
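A minimal sketch of the SARIMA component using statsmodels (our illustration;
the order and the weekly seasonal period are placeholder choices that a study
like this would select by a criterion such as AIC):

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def forecast_cases(daily_cases: pd.Series, horizon: int = 14):
        # Fit a SARIMA model with weekly seasonality (s=7) and forecast
        # the next `horizon` days of case counts.
        model = SARIMAX(daily_cases, order=(1, 1, 1),
                        seasonal_order=(1, 1, 1, 7))
        fitted = model.fit(disp=False)
        return fitted.forecast(steps=horizon)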
|
Although deeper and larger neural networks have achieved better performance,
the complex network structure and increasing computational cost cannot meet the
demands of many resource-constrained applications. Existing methods usually
choose to execute or skip an entire specific layer, which can only alter the
depth of the network. In this paper, we propose a novel method called Dynamic
Multi-path Neural Network (DMNN), which provides more path selection choices in
terms of network width and depth during inference. The inference path of the
network is determined by a controller, which takes into account both previous
state and object category information. The proposed method can be easily
incorporated into most modern network architectures. Experimental results on
ImageNet and CIFAR-100 demonstrate the superiority of our method on both
efficiency and overall classification accuracy. To be specific, DMNN-101
significantly outperforms ResNet-101 with an encouraging 45.1% FLOPs reduction,
and DMNN-50 performs comparably to ResNet-101 while saving 42.1% of the parameters.
|
Motivated by the discovery of a number of radio relics we investigate the
fate of fossil radio plasma during a merger of clusters of galaxies using
cosmological smoothed-particle hydrodynamics simulations. Radio relics are
extended, steep-spectrum radio sources that do not seem to be associated with a
host galaxy. One proposed scenario whereby these relics form is through the
compression of fossil radio plasma during a merger between clusters. The
ensuing compression of the plasma can lead to a substantial increase in
synchrotron luminosity and this appears as a radio relic. Our simulations show
that relics are most likely to be found at the periphery of the cluster at the
positions of the outgoing merger shock waves. Relics are expected to be very
rare in the centre of the cluster, where the lifetime of relativistic electrons
is short and shock waves are weaker than in the cooler, peripheral regions of
the cluster. These predictions can soon be tested with upcoming low-frequency
radio telescopes.
|
In certain classes of subharmonic functions u on C distinguished in terms of
lower bounds for the Riesz measure of u, a sharp estimate is obtained for the
rate of approximation by functions of the form log |f(z)|, where f is an entire
function. The results complement and generalize those recently obtained by Yu.
Lyubarskii and Eu. Malinnikova.
|
The thermoelectric (TE) properties of a material are dramatically altered
when electron-electron interactions become the dominant scattering mechanism.
In the degenerate hydrodynamic regime, the thermal conductivity is reduced and
becomes a {\it decreasing} function of the electronic temperature, due to a
violation of the Wiedemann-Franz (WF) law. We here show how this peculiar
temperature dependence gives rise to new striking TE phenomena. These include
an 80-fold increase in TE efficiency compared to the WF regime, dramatic
qualitative changes in the steady state temperature profile, and an anomalously
large Thomson effect. In graphene, which we pay special attention to here,
these effects are further amplified due to a doubling of the thermopower.
|
We present Im2Flow2Act, a scalable learning framework that enables robots to
acquire manipulation skills from diverse data sources. The key idea behind
Im2Flow2Act is to use object flow as the manipulation interface, bridging
domain gaps between different embodiments (i.e., human and robot) and training
environments (i.e., real-world and simulated). Im2Flow2Act comprises two
components: a flow generation network and a flow-conditioned policy. The flow
generation network, trained on human demonstration videos, generates object
flow from the initial scene image, conditioned on the task description. The
flow-conditioned policy, trained on simulated robot play data, maps the
generated object flow to robot actions to realize the desired object movements.
By using flow as input, this policy can be directly deployed in the real world
with a minimal sim-to-real gap. By leveraging real-world human videos and
simulated robot play data, we bypass the challenges of teleoperating physical
robots in the real world, resulting in a scalable system for diverse tasks. We
demonstrate Im2Flow2Act's capabilities in a variety of real-world tasks,
including the manipulation of rigid, articulated, and deformable objects.
|
We report the latest electroweak and QCD results from two Tevatron
experiments.
|
We show that a sequence $\{\Phi_n\}$ of quantum channels strongly converges
to a quantum channel $\Phi_0$ if and only if there exist a common environment
for all the channels and a corresponding sequence $\{V_n\}$ of Stinespring
isometries strongly converging to a Stinespring isometry $V_0$ of the channel
$\Phi_0$.
We also give a quantitative description of the above characterization of the
strong convergence in terms of the appropriate metrics on the sets of quantum
channels and Stinespring isometries. As a result, the uniform selective
continuity of the complementary operation with respect to the strong
convergence is established.
We show the discontinuity of the unitary dilation by constructing a strongly
converging sequence of channels that cannot be represented as a reduction of
a strongly converging sequence of unitary channels.
The Stinespring representation of strongly converging sequences of quantum
channels allows us to prove the lower semicontinuity of the entropic
disturbance as a function of a pair (channel, input ensemble). Some corollaries of this
property are considered.
|
The search for diffuse non-thermal inverse Compton (IC) emission from galaxy
clusters at hard X-ray energies has been undertaken with many instruments, with
most detections being either of low significance or controversial. Background
and contamination uncertainties present in the data of non-focusing
observatories result in lower sensitivity to IC emission and a greater chance
of false detection. We present 266 ks NuSTAR observations of the Bullet cluster,
detected from 3-30 keV. NuSTAR's unprecedented hard X-ray focusing capability
largely eliminates confusion between diffuse IC and point sources; however, at
the highest energies the background still dominates and must be well
understood. To this end, we have developed a complete background model
constructed of physically inspired components constrained by extragalactic
survey field observations, the specific parameters of which are derived locally
from data in non-source regions of target observations. Applying the background
model to the Bullet cluster data, we find that the spectrum is well, though
not perfectly, described as an isothermal plasma with kT=14.2+/-0.2 keV. To
slightly improve the fit, a second temperature component is added, which
appears to account for lower temperature emission from the cool core, pushing
the primary component to kT~15.3 keV. We see no convincing need to invoke an IC
component to describe the spectrum of the Bullet cluster, and instead argue
that it is dominated at all energies by emission from purely thermal gas. The
conservatively derived 90% upper limit on the IC flux of 1.1e-12 erg/s/cm^2
(50-100 keV), implying a lower limit of B > 0.2 $\mu$G, is barely consistent with
detected fluxes previously reported. In addition to discussing the possible
origin of this discrepancy, we remark on the potential implications of this
analysis for the prospects for detecting IC in galaxy clusters in the future.
|
It is shown that the main variable Z of the Null Surface Formulation of GR is
the generating function of a constrained Lagrange submanifold that lives on the
energy surface H=0 and that its level surfaces Z=const. are Legendre
submanifolds on that energy surface.
The behaviour of the variable Z at the caustic points is analysed and a
generalization of this variable is discussed.
|
The effective representation, processing, analysis, and visualization of
large-scale structured data, especially those related to complex domains such
as networks and graphs, are among the key questions in modern machine
learning. Graph signal processing (GSP), a vibrant branch of signal processing
models and algorithms that aims at handling data supported on graphs, opens new
paths of research to address this challenge. In this article, we review a few
important contributions made by GSP concepts and tools, such as graph filters
and transforms, to the development of novel machine learning algorithms. In
particular, our discussion focuses on the following three aspects: exploiting
data structure and relational priors, improving data and computational
efficiency, and enhancing model interpretability. Furthermore, we provide new
perspectives on future development of GSP techniques that may serve as a bridge
between applied mathematics and signal processing on one side, and machine
learning and network science on the other. Cross-fertilization across these
different disciplines may help unlock the numerous challenges of complex data
analysis in the modern age.
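As a small concrete example of the graph-filter toolset mentioned above (a
generic polynomial Laplacian filter, our own sketch rather than any specific
algorithm from the article):

    import numpy as np

    def polynomial_graph_filter(A, x, coeffs):
        # Apply h(L) x = sum_k coeffs[k] * L^k x, where L is the
        # combinatorial graph Laplacian built from the adjacency matrix A.
        # Polynomial filters are localized: L^k only mixes k-hop neighbors.
        L = np.diag(A.sum(axis=1)) - A
        out = np.zeros_like(x, dtype=float)
        z = x.astype(float)
        for c in coeffs:
            out += c * z
            z = L @ z
        return out

    # Mild low-pass smoothing of a signal on a 4-node path graph
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                  [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    x = np.array([1.0, 0.2, 0.9, 0.1])
    y = polynomial_graph_filter(A, x, coeffs=[1.0, -0.3])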
|
Math Word Problems (MWPs) are crucial for evaluating the capability of Large
Language Models (LLMs), with current research primarily focusing on questions
with concise contexts. However, as real-world math problems often involve
complex circumstances, LLMs' ability to solve long MWPs is vital for their
applications in these scenarios, yet remains under-explored. This study
pioneers the exploration of Context Length Generalizability (CoLeG), the
ability of LLMs to solve long MWPs. We introduce Extended Grade-School Math
(E-GSM), a collection of MWPs with lengthy narratives. Two novel metrics are
proposed to assess the efficacy and resilience of LLMs in solving these
problems. Our examination of existing zero-shot prompting techniques and both
proprietary and open-source LLMs reveals a general deficiency in CoLeG. To
alleviate these challenges, we propose distinct approaches for different
categories of LLMs. For proprietary LLMs, a new instructional prompt is
proposed to mitigate the influence of long context. For open-source LLMs, a new
data augmentation task is developed to improve CoLeG. Our comprehensive results
demonstrate the effectiveness of our proposed methods, showing not only
improved performance on E-GSM but also generalizability across several other
MWP benchmarks. Our findings pave the way for future research in employing LLMs
for complex, real-world applications, offering practical solutions to current
limitations and opening avenues for further exploration of model
generalizability and training methodologies.
|
Successfully reproducing the galaxy luminosity function and the bimodality in
the galaxy distribution requires a mechanism that can truncate star formation
in massive haloes. Current models of galaxy formation consider two such
truncation mechanisms: strangulation, which acts on satellite galaxies, and AGN
feedback, which predominantly affects central galaxies. The efficiencies of
these processes set the blue fraction of galaxies as a function of galaxy
luminosity and halo mass. In this paper we use a galaxy group catalogue
extracted from the Sloan Digital Sky Survey (SDSS) to determine these
fractions. To demonstrate the potential power of this data as a benchmark for
galaxy formation models, we compare the results to the semi-analytical model
for galaxy formation of Croton et al. (2006). Although this model accurately
fits the global statistics of the galaxy population, as well as the shape of
the conditional luminosity function, there are significant discrepancies when
the blue fraction of galaxies as a function of mass and luminosity is compared
between the observations and the model. In particular, the model predicts (i)
too many faint satellite galaxies in massive haloes, (ii) a blue fraction of
satellites that is much too low, and (iii) a blue fraction of centrals that is
too high and with an inverted luminosity dependence. In the same order, we
argue that these discrepancies stem from (i) the neglect of tidal stripping in
the semi-analytical model, (ii) the oversimplified treatment of strangulation, and
(iii) improper modeling of dust extinction and/or AGN feedback. The data
presented here will prove useful to test and calibrate future models of galaxy
formation and in particular to discriminate between various models for AGN
feedback and other star formation truncation mechanisms.
|
We deal with values taken by various pseudopower functions at a singular
cardinal that is not a fixed point of the aleph function.
|
In this paper, we continue the study of the total domination game in graphs
introduced in [Graphs Combin. 31(5) (2015), 1453--1462], where the players
Dominator and Staller alternately select vertices of $G$. Each vertex chosen
must strictly increase the number of vertices totally dominated, where a vertex
totally dominates another vertex if they are neighbors. This process eventually
produces a total dominating set $S$ of $G$ in which every vertex is totally
dominated by a vertex in $S$. Dominator wishes to minimize the number of
vertices chosen, while Staller wishes to maximize it. The game total domination
number, $\gamma_{\rm tg}(G)$, of $G$ is the number of vertices chosen when
Dominator starts the game and both players play optimally. Henning, Klav\v{z}ar
and Rall [Combinatorica, to appear] posed the $\frac{3}{4}$-Game Total
Domination Conjecture, which states that if $G$ is a graph on $n$ vertices in
which every component contains at least three vertices, then $\gamma_{\rm
tg}(G) \le \frac{3}{4}n$. In this paper, we prove this conjecture over the
class of graphs $G$ that satisfy both the condition that the degree sum of
adjacent vertices in $G$ is at least $4$ and the condition that no two vertices
of degree $1$ are at distance $4$ apart in $G$. In particular, we prove that by
adopting a greedy strategy, Dominator can complete the total domination game
played in a graph with minimum degree at least $2$ in at most $3n/4$ moves.
|
The generalized sine-Gordon (sG) equation $u_{tx}=(1+\nu\partial_x^2)\sin\,u$
was derived as an integrable generalization of the sG equation. In a previous
paper (Matsuno Y 2010 J. Phys. A: Math. Theor. {\bf 43} 105204) which is
referred to as I, we developed a systematic method for solving the generalized
sG equation with $\nu=-1$. Here, we address the equation with $\nu=1$. By
solving the equation analytically, we find that the structure of solutions
differs substantially from that of the former equation. In particular, we show
that the equation exhibits kink and breather solutions and does not admit
multi-valued solutions like loop solitons as obtained in I. We also demonstrate
that the equation reduces to the short pulse and sG equations in appropriate
scaling limits. The limiting forms of the multisoliton solutions are also
presented. Last, we provide a recipe for deriving an infinite number of
conservation laws by using a novel B\"acklund transformation connecting
solutions of the sG and generalized sG equations.
|
The ability to manipulate clouds of ultra-cold atoms is crucial for modern
experiments on quantum many-body systems and quantum thermodynamics, as well
as for future metrological applications of Bose-Einstein condensates. While optical
manipulation offers almost arbitrary flexibility, the precise control of the
resulting dipole potentials and the mitigation of unwanted disturbances is
quite involved and only heuristic algorithms with rather slow convergence rates
are available up to now. This paper thus suggests the application of iterative
learning control (ILC) methods to generate fine-tuned effective potentials in
the presence of uncertainties and external disturbances. Therefore, the given
problem is reformulated to obtain a one-dimensional tracking problem by using a
quasicontinuous input mapping which can be treated by established ILC methods.
Finally, the performance of the proposed concept is illustrated in a simulation
scenario.
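A minimal sketch of the core ILC update law on a toy one-dimensional tracking
problem (ours; the paper's contribution lies in the reformulation via a
quasicontinuous input mapping and the treatment of disturbances, which is not
reproduced here):

    import numpy as np

    def ilc_trial(u, plant, reference, learning_gain=0.5):
        # One iterative-learning-control update: run a trial with input u,
        # measure the tracking error, and correct the next trial's input
        # with the proportional learning law u_{k+1} = u_k + L * e_k.
        y = plant(u)
        e = reference - y
        return u + learning_gain * e, np.linalg.norm(e)

    # Toy plant: unknown static gain plus a repeatable disturbance
    disturbance = 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
    plant = lambda u: 0.8 * u + disturbance
    reference = np.linspace(0.0, 1.0, 100)

    u = np.zeros(100)
    for k in range(20):
        u, err = ilc_trial(u, plant, reference)  # err shrinks each trial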
|
We present a class of linear programming approximations for constrained
optimization problems. In the case of mixed-integer polynomial optimization
problems, if the intersection graph of the constraints has bounded tree-width
our construction yields a class of linear size formulations that attain any
desired tolerance. As a result, we obtain an approximation scheme for the
"AC-OPF" problem on graphs with bounded tree-width. We also describe a more
general construction for pure binary optimization problems where individual
constraints are available through a membership oracle; if the intersection
graph for the constraints has bounded tree-width our construction is of linear
size and exact. This improves on a number of results in the literature, both
from the perspective of formulation size and generality.
|
We investigate the condensate mechanism of the low-lying excitations in the
matrix models of 4-dimensional quantum Hall fluids recently proposed by us. It
is shown that there exist some hierarchies of 4-dimensional quantum Hall fluid
states in the matrix models, and they are similar to Haldane's hierarchy in
the 2-dimensional quantum Hall fluids. However, these hierarchical fluid states
appear consistently in our matrix models without requiring any modification
of the matrix models.
|
We compute fourth sound for superfluids dual to a charged scalar and a gauge
field in an AdS_4 background. For holographic superfluids with condensates that
have a large scaling dimension (greater than approximately two), we find that
fourth sound approaches first sound at low temperatures. For condensates that
have a small scaling dimension, it exhibits non-conformal behavior at low
temperatures which may be tied to the non-conformal behavior of the order
parameter of the superfluid. We show that by introducing an appropriate scalar
potential, conformal invariance can be enforced at low temperatures.
|
We consider a (small) quantum mechanical system which is operated by an
external agent, who changes the Hamiltonian of the system according to a fixed
scenario. In particular, we assume that the agent (who may be called a demon)
performs a measurement followed by feedback, i.e., it makes a measurement of
the system and changes the protocol according to the outcome. We extend to this
setting the generalized Jarzynski relations, recently derived by Sagawa and
Ueda for classical systems with feedback. One of the two relations by Sagawa
and Ueda is derived here in error-free quantum processes, while the other is
derived only when the measurement process involves classical errors. The first
relation leads to a second law which takes into account the efficiency of the
feedback.
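For orientation, the classical relation of Sagawa and Ueda that is extended
here reads
$$\left\langle e^{-\beta(W-\Delta F)-I}\right\rangle = 1,$$
where $W$ is the work, $\Delta F$ the free energy difference, and $I$ the
mutual information acquired by the measurement; Jensen's inequality then
yields the feedback-modified second law
$\langle W\rangle \ge \Delta F - k_{\rm B}T\,\langle I\rangle$ referred to above.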
|
Field star BD+20 307 is the dustiest known main sequence star, based on the
fraction of its bolometric luminosity, 4%, that is emitted at infrared
wavelengths. The particles that carry this large IR luminosity are unusually
warm, comparable to the temperature of the zodiacal dust in the solar system,
and their existence is likely to be a consequence of a fairly recent collision
of large objects such as planets or planetary embryos. Thus, the age of BD+20
307 is potentially of interest in constraining the era of terrestrial planet
formation. The present project was initiated with an attempt to derive this age
using the Chandra X-ray Observatory to measure the X-ray flux of BD+20 307 in
conjunction with extensive photometric and spectroscopic monitoring
observations from Fairborn Observatory. However, the recent realization that
BD+20 307 is a short period, double-line, spectroscopic binary whose components
have very different lithium abundances, vitiates standard methods of age
determination. We find the system to be metal-poor; this, combined with its
measured lithium abundances, indicates that BD+20 307 may be several to many
Gyr old. BD+20 307 affords astronomy a rare peek into a mature planetary system
in orbit around a close binary star (because such systems are not amenable to
study by the precision radial velocity technique).
|
This paper investigates the phenomenon of emergence of spatial curvature.
This phenomenon is absent in the Standard Cosmological Model, which has a flat
and fixed spatial curvature (small perturbations are considered in the Standard
Cosmological Model but their global average vanishes, leading to spatial
flatness at all times). This paper shows that with the nonlinear growth of
cosmic structures the global average deviates from zero. The analysis is based
on the {\em silent universes} (a wide class of inhomogeneous cosmological
solutions of the Einstein equations). The initial conditions are set in the
early universe as perturbations around the $\Lambda$CDM model with $\Omega_m =
0.31$, $\Omega_\Lambda = 0.69$, and $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$. As the
growth of structures becomes nonlinear, the model deviates from the
$\Lambda$CDM model and, at the present instant, averaging over a domain ${\cal
D}$ with volume $V = (2150\,{\rm Mpc})^3$ (at these scales the cosmic variance
is negligibly small) gives: $\Omega_m^{\cal D} = 0.22$, $\Omega_\Lambda^{\cal
D} = 0.61$, $\Omega_{\cal R}^{\cal D} = 0.15$ (in the FLRW limit $\Omega_{\cal
R}^{\cal D} \to \Omega_k$), and $\langle H \rangle_{\cal D} = 72.2$ km s$^{-1}$
Mpc$^{-1}$. Given the fact that low-redshift observations favor higher values
of the Hubble constant and lower values of matter density, compared to the CMB
constraints, the emergence of the spatial curvature in the low-redshift
universe could be a possible solution to these discrepancies.
|
Recurrent neural networks (RNNs) are state-of-the-art in voice
awareness/understanding and speech recognition. On-device computation of RNNs
on low-power mobile and wearable devices would be key to applications such as
zero-latency voice-based human-machine interfaces. Here we present Chipmunk, a
small (<1 mm${}^2$) hardware accelerator for Long-Short Term Memory RNNs in UMC
65 nm technology capable of operating at a measured peak efficiency of up to
3.08 Gop/s/mW at 1.24 mW peak power. To implement big RNN models without
incurring huge memory-transfer overhead, multiple Chipmunk engines can cooperate to
form a single systolic array. In this way, the Chipmunk architecture in a 75
tiles configuration can achieve real-time phoneme extraction on a demanding RNN
topology proposed by Graves et al., consuming less than 13 mW of average power.
|
We present PrimDiffusion, the first diffusion-based framework for 3D human
generation. Devising diffusion models for 3D human generation is difficult due
to the intensive computational cost of 3D representations and the articulated
topology of 3D humans. To tackle these challenges, our key insight is operating
the denoising diffusion process directly on a set of volumetric primitives,
which models the human body as a number of small volumes with radiance and
kinematic information. This volumetric primitives representation marries the
capacity of volumetric representations with the efficiency of primitive-based
rendering. Our PrimDiffusion framework has three appealing properties: 1)
compact and expressive parameter space for the diffusion model, 2) flexible 3D
representation that incorporates human prior, and 3) decoder-free rendering for
efficient novel-view and novel-pose synthesis. Extensive experiments validate
that PrimDiffusion outperforms state-of-the-art methods in 3D human generation.
Notably, compared to GAN-based methods, our PrimDiffusion supports real-time
rendering of high-quality 3D humans at a resolution of $512\times512$ once the
denoising process is done. We also demonstrate the flexibility of our framework
on training-free conditional generation such as texture transfer and 3D
inpainting.
|
Modelling term dependence in IR aims to identify co-occurring terms that are
too heavily dependent on each other to be treated as a bag of words, and to
adapt the indexing and ranking accordingly. Dependent terms are predominantly
identified using lexical frequency statistics, assuming that (a) if terms
co-occur often enough in some corpus, they are semantically dependent; (b) the
more often they co-occur, the more semantically dependent they are. This
assumption is not always correct: the frequency of co-occurring terms can be
separate from the strength of their semantic dependence. E.g. "red tape" might
be overall less frequent than "tape measure" in some corpus, but this does not
mean that "red"+"tape" are less dependent than "tape"+"measure". This is
especially the case for non-compositional phrases, i.e. phrases whose meaning
cannot be composed from the individual meanings of their terms (such as the
phrase "red tape" meaning bureaucracy). Motivated by this lack of distinction
between the frequency and strength of term dependence in IR, we present a
principled approach for handling term dependence in queries, using both lexical
frequency and semantic evidence. We focus on non-compositional phrases,
extending a recent unsupervised model for their detection [21] to IR. Our
approach, integrated into ranking using Markov Random Fields [31], yields
effectiveness gains over competitive TREC baselines, showing that there is
still room for improvement in the very well-studied area of term dependence in
IR.
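As a toy illustration of the frequency-vs-association distinction (ours, using
pointwise mutual information as a simple stand-in for the unsupervised
detection model of [21]):

    from collections import Counter
    from math import log2

    # Toy corpus: "tape measure" occurs twice, "red tape" only once,
    # yet PMI ranks "red tape" as the more strongly associated pair,
    # because "red" rarely occurs at all.
    tokens = ("red tape . tape measure . tape measure . tape recorder . "
              "measure twice . measure once . sticky tape").split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    N = len(tokens)

    def pmi(w1, w2):
        # Pointwise mutual information: co-occurrence normalized by marginals
        p_xy = bigrams[(w1, w2)] / (N - 1)
        return log2(p_xy / ((unigrams[w1] / N) * (unigrams[w2] / N)))

    print(bigrams[("tape", "measure")], round(pmi("tape", "measure"), 2))  # 2 1.07
    print(bigrams[("red", "tape")], round(pmi("red", "tape"), 2))          # 1 2.07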
|
A finite automaton is called bideterministic if it is both deterministic and
codeterministic -- that is, if it is deterministic and its transpose is
deterministic as well. The study of such automata in a weighted setting is
initiated. All trim bideterministic weighted automata over integral domains and
over positive semirings are proved to be minimal. On the contrary, it is
observed that this property does not hold over commutative rings in general:
non-minimal trim bideterministic weighted automata do exist over all semirings
that are not zero-divisor free, and over many such semirings, these automata
might not even admit equivalents that are both minimal and bideterministic. The
problem of determining whether a given rational series is realised by a
bideterministic automaton is shown to be decidable over fields and over
tropical semirings. An example of a positive semiring over which this problem
becomes undecidable is given as well.
|
We diagonalize Q-operators for rational homogeneous sl(2)-invariant
Heisenberg spin chains using the algebraic Bethe ansatz. After deriving the
fundamental commutation relations relevant for this case from the Yang-Baxter
equation we demonstrate that the Q-operators act diagonally on the Bethe
vectors if the Bethe equations are satisfied. In this way we provide a direct
proof that the eigenvalues of the Q-operators studied here are given by
Baxter's Q-functions.
|
This work outlines a time-domain numerical integration technique for linear
hyperbolic partial differential equations sourced by distributions (Dirac
$\delta$-functions and their derivatives). Such problems arise when studying
binary black hole systems in the extreme mass ratio limit. We demonstrate that
such source terms may be converted to effective domain-wide sources when
discretized, and we introduce a class of time-steppers that directly account
for these discontinuities in time integration. Moreover, our time-steppers are
constructed to respect time reversal symmetry, a property that has been
connected to conservation of physical quantities like energy and momentum in
numerical simulations. To illustrate the utility of our method, we numerically
study a distributionally-sourced wave equation that shares many features with
the equations governing linear perturbations to black holes sourced by a point
mass.
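One common way to make the conversion of a point source into grid values
concrete (our sketch; the paper's effective domain-wide sources and
time-steppers go well beyond this): a Dirac $\delta$-function on a uniform
grid can be replaced by a hat-function assignment to the two neighboring
nodes, which preserves its unit integral.

    import numpy as np

    def discrete_delta(x_grid, x_p):
        # Represent delta(x - x_p) on a uniform grid as a grid function:
        # a hat-function (linear) assignment to the two neighboring nodes
        # preserves the unit integral, sum(d) * dx == 1.
        dx = x_grid[1] - x_grid[0]
        d = np.zeros_like(x_grid)
        i = int((x_p - x_grid[0]) // dx)
        w = (x_p - x_grid[i]) / dx  # fractional position within cell i
        d[i], d[i + 1] = (1.0 - w) / dx, w / dx
        return d

    x = np.linspace(0.0, 1.0, 101)
    d = discrete_delta(x, x_p=0.371)
    assert abs(d.sum() * (x[1] - x[0]) - 1.0) < 1e-9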
|
The Borodin-Kostochka Conjecture states that for a graph $G$, if
$\Delta(G)\geq9$, then $\chi(G)\leq\max\{\Delta(G)-1,\omega(G)\}$. We use $P_t$
and $C_t$ to denote a path and a cycle on $t$ vertices, respectively. Let
$C=v_1v_2v_3v_4v_5v_1$ be an induced $C_5$. A {\em $C_5^+$} is a graph obtained
from $C$ by adding a $C_3=xyzx$ and a $P_2=t_1t_2$ such that (1) $x$ and $y$
are both exactly adjacent to $v_1,v_2,v_3$ in $V(C)$, $z$ is exactly adjacent
to $v_2$ in $V(C)$, $t_1$ is exactly adjacent to $v_4,v_5$ in $V(C)$ and $t_2$
is exactly adjacent to $v_1,v_4,v_5$ in $V(C)$, (2) $t_1$ is exactly adjacent
to $z$ in $\{x,y,z\}$ and $t_2$ has no neighbors in $\{x,y,z\}$. In this paper,
we show that the Borodin-Kostochka Conjecture holds for ($P_6,C_4,H$)-free
graphs, where $H\in \{K_7,C_5^+\}$. This generalizes some results of Gupta and
Pradhan in \cite{GP21,GP24}.
|
The populations of both quiescent and actively star-forming galaxies at 1<z<2
are still under-represented in our spectroscopic census of galaxies throughout
the history of the Universe. In the light of galaxy formation models, however,
the evolution of galaxies at these redshifts is of pivotal importance and
merits further investigation. We therefore designed a spectroscopic observing
campaign of a sample of both massive, quiescent and star-forming galaxies at
z>1.4, called Galaxy Mass Assembly ultra-deep Spectroscopic Survey (GMASS). To
determine redshifts and physical properties, such as metallicity, dust content,
dynamical masses, and star formation history, we performed ultra-deep
spectroscopy with the red-sensitive optical spectrograph FORS2 at the VLT. Our
sample consists of objects, within the CDFS/GOODS area, detected at 4.5 micron,
to be sensitive to stellar mass rather than star formation intensity. The
spectroscopic targets were selected with a photometric redshift constraint
(z>1.4) and magnitude constraints (B(AB)<26, I(AB)<26.5), which should ensure
that these are faint, distant, and fairly massive galaxies. We present the
sample selection, survey design, observations, data reduction, and
spectroscopic redshifts. Up to 30 hours of spectroscopy of 174 spectroscopic
targets and 70 additional objects enabled us to determine 210 redshifts, of
which 145 are at z>1.4. From the redshifts and photometry, we deduce that the
BzK selection criteria are efficient (82%) and suffer low contamination (11%).
Several papers based on the GMASS survey show its value for studies of galaxy
formation and evolution. We publicly release the redshifts and reduced spectra.
In combination with existing and on-going additional observations in
CDFS/GOODS, this data set provides a legacy for future studies of distant
galaxies.
|
We report on the 20 ksec observation of Vela X-1 performed by BeppoSAX on
1996 July 14 during its Science Verification Phase. We observed the source in
two intensity states, characterized by a change in luminosity of a factor ~ 2,
and a change in absorption of a factor ~ 10. The individual Narrow Field
Instrument pulse-averaged spectra are each well fit by a power law, with significantly different
indices. This is in agreement with the observed changes of slope in the
wide-band spectrum: a first change of slope at ~ 10 keV, and a second one at ~
35 keV. To mimic this behaviour we used a double power law modified by an
exponential cutoff --- the so-called NPEX model --- to fit the whole 2-100 keV
continuum. This functional form adequately describes the data, especially
the low intensity state. We found an absorption-like feature at ~ 57 keV,
clearly visible in the ratio with the Crab spectrum. We interpreted this
feature as a cyclotron resonance, corresponding to a neutron star surface
magnetic strength of 4.9 x 10^12 Gauss. The BeppoSAX data do not require the
presence of a cyclotron resonance at ~ 27 keV as found in earlier works.
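For reference, the NPEX continuum is commonly written as $F(E) = (A_1 E^{-\alpha_1} + A_2 E^{+\alpha_2}) \exp(-E/kT)$, i.e., negative and positive power laws sharing an exponential cutoff; the indices and normalizations here are generic placeholders rather than the fitted Vela X-1 values.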
|
The dynamical system described herein uses a hybrid cellular automata (CA)
mechanism to attain reversibility, and this approach is adapted to create a
novel block cipher algorithm called HCA. CA are widely used for modeling
complex systems and employ an inherently parallel model. Therefore,
applications derived from CA have a tendency to fit very well in the current
computational paradigm where scalability and multi-threading potential are
quite desirable characteristics. The HCA model has recently been granted a
patent by the Brazilian agency INPI. Several evaluations and analyses performed on the
model are presented here, such as theoretical discussions related to its
reversibility and an analysis based on graph theory, which reduces HCA security
to the well-known Hamiltonian cycle problem that belongs to the NP-complete
class. Finally, the cryptographic robustness of HCA is empirically evaluated
through several tests, including avalanche property compliance and the NIST
randomness suite.
|
Let $M$ be a complete non-compact Riemannian manifold satisfying the doubling
volume property. Let $\overrightarrow{\Delta}$ be the Hodge-de Rham Laplacian
acting on 1-differential forms. According to the Bochner formula,
$\overrightarrow{\Delta}=\nabla^*\nabla+R_+-R_-$ where $R_+$ and $R_-$ are
respectively the positive and negative part of the Ricci curvature and $\nabla$
is the Levi-Civita connection. We study the boundedness of the Riesz transform
$d^*(\overrightarrow{\Delta})^{-\frac{1}{2}}$ from $L^p(\Lambda^1T^*M)$ to
$L^p(M)$ and of the Riesz transform $d(\overrightarrow{\Delta})^{-\frac{1}{2}}$
from $L^p(\Lambda^1T^*M)$ to $L^p(\Lambda^2T^*M)$. We prove that, if the heat
kernel on functions $p_t(x,y)$ satisfies a Gaussian upper bound and if the
negative part $R_-$ of the Ricci curvature is $\epsilon$-sub-critical for some
$\epsilon\in[0,1)$, then $d^*(\overrightarrow{\Delta})^{-\frac{1}{2}}$ is
bounded from $L^p(\Lambda^1T^*M)$ to $L^p(M)$ and
$d(\overrightarrow{\Delta})^{-\frac{1}{2}}$ is bounded from
$L^p(\Lambda^1T^*M)$ to $L^p(\Lambda^2T^* M)$ for $p\in(p_0',2]$ where $p_0>2$
depends on $\epsilon$ and on a constant appearing in the doubling volume
property. A duality argument gives the boundedness of the Riesz transform
$d(\Delta)^{-\frac{1}{2}}$ from $L^p(M)$ to $L^p(\Lambda^1T^*M)$ for $p\in
[2,p_0)$ where $\Delta$ is the non-negative Laplace-Beltrami operator. We also
give a condition on $R_-$ to be $\epsilon$-sub-critical under both analytic and
geometric assumptions.
|
We present the results of a search for gravitationally-lensed giant arcs
conducted on a sample of 825 SDSS galaxy clusters. Both a visual inspection of
the images and an automated search were performed and no arcs were found. This
result is used to set an upper limit on the arc probability per cluster. We
present selection functions for our survey, in the form of arc detection
efficiency curves plotted as functions of arc parameters, both for the visual
inspection and the automated search. The selection function is such that we are
sensitive only to long, high surface brightness arcs with g-band surface
brightness mu_g < 24.8 and length-to-width ratio l/w > 10. Our upper limits on
the arc probability are compatible with previous arc searches. Lastly, we
report on a serendipitous discovery of a giant arc in the SDSS data, known
inside the SDSS Collaboration as Hall's arc.
|
Can machines know what a twin prime is? From the composition of this phrase,
machines may guess that a twin prime is a certain kind of prime, but it is still
difficult to deduce exactly what twin stands for without additional knowledge.
Here, twin prime is jargon - a specialized term used by experts in a
particular field. Explaining jargon is challenging since it usually requires
domain knowledge to understand. Recently, there has been increasing interest in
extracting and generating definitions of words automatically. However, existing
approaches, whether extractive or generative, perform poorly on jargon. In this
paper, we propose to combine extraction and generation for jargon definition
modeling: first extract self- and correlative definitional information of
target jargon from the Web and then generate the final definitions by
incorporating the extracted definitional information. Our framework is
remarkably simple but effective: experiments demonstrate our method can
generate high-quality definitions for jargon and outperform state-of-the-art
models significantly, e.g., BLEU score from 8.76 to 22.66 and human-annotated
score from 2.34 to 4.04.
|
With the rapid advancement of technology, the design of virtual humans has
led to a very realistic user experience, such as in movies, video games, and
simulations. As a result, virtual humans are becoming increasingly similar to
real humans. However, following the Uncanny Valley (UV) theory, users tend to
feel discomfort when watching entities with anthropomorphic traits that differ
from real humans. This phenomenon is related to social identity theory, where
the observer looks for something familiar. In Computer Graphics (CG),
techniques used to create virtual humans with dark skin tones often rely on
approaches initially developed for rendering characters with white skin tones.
Furthermore, most CG characters portrayed in various media, including movies
and games, predominantly exhibit white skin tones. Consequently, it is
pertinent to explore people's perceptions regarding different groups of virtual
humans. Thus, this paper aims to examine and evaluate the human perception of
CG characters from different media, comparing two types of skin colors. The
findings indicate that individuals felt more comfortable and perceived less
realism when watching characters with dark-colored skin than those with
white-colored skin. Our central hypothesis is that dark-colored characters,
rendered with classically developed algorithms, are perceived as more
cartoonish than realistic, placing them to the left of the valley in the UV chart.
|
We give necessary and sufficient conditions for the Hardy operator to be
bounded on a rearrangement invariant quasi-Banach space in terms of its Boyd
indices.
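For orientation, the Hardy operator in question is $(Hf)(t) = \frac{1}{t}\int_0^t f(s)\,ds$; in the Banach-space case, Boyd's classical theorem characterizes its boundedness on a rearrangement invariant space $X$ via the condition $\overline{\alpha}_X < 1$ on the upper Boyd index (in one common normalization of the indices), and the quasi-Banach setting calls for an analogous condition.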
|
Previously published CTEQ6 parton distributions adopt the conventional
zero-mass parton scheme; these sets are most appropriate for use with massless
hard-scattering matrix elements commonly found in most physics applications.
For precision observables which are sensitive to charm and bottom quark mass
effects, we provide in this paper an additional CTEQ6HQ parton distribution set
determined in the more general variable flavor number scheme which incorporates
heavy flavor mass effects. The results are obtained by combining these parton
distributions with consistently matched DIS structure functions computed in the
same scheme. We describe the analysis procedure, examine the predominant
features of the new distributions, and compare with previous distributions.
|
We have recently shown [Blunt et al., Science 322, 1077 (2008)] that
p-terphenyl-3,5,3',5'-tetracarboxylic acid adsorbed on graphite self-assembles
into a two-dimensional rhombus random tiling. This tiling is close to ideal,
displaying long range correlations punctuated by sparse localised tiling
defects. In this paper we explore the analogy between dynamic arrest in this
type of random tilings and that of structural glasses. We show that the
structural relaxation of these systems is via the propagation--reaction of
tiling defects, giving rise to dynamic heterogeneity. We study the scaling
properties of the dynamics, and discuss connections with kinetically
constrained models of glasses.
|
In this contribution we consider stochastic growth models in the
Kardar-Parisi-Zhang universality class in 1+1 dimensions. We discuss the
large-time distributions and processes and their dependence on the class of
initial condition. This means that the scaling exponents do not uniquely
determine the large-time surface statistics; one has to divide further into
subclasses. Some of the fluctuation laws were first discovered in random
matrix models. Moreover, the limit process for the curved limit shape turns
out to show up in a dynamical version of hermitian random matrices, but this
analogy does not extend to the case of symmetric matrices. Therefore the
connection between growth models and random matrices is only partial.
|
This is a survey article on distance-squared mappings and related topics.
|
This paper is concerned with long-time strong approximations of SDEs with
non-globally Lipschitz coefficients. Under certain non-globally Lipschitz
conditions, a long-time version of the fundamental strong convergence theorem
is established for general one-step time discretization schemes. With the aid
of this theorem, we prove the expected strong convergence rate over infinite
time for two types of schemes, namely the backward Euler method and the
projected Euler method, in non-globally Lipschitz
settings. Numerical examples are finally reported to confirm our findings.
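To convey the flavor of the two schemes on a standard test problem (a minimal sketch assuming the cubic drift $dX = (X - X^3)\,dt + \sigma\,dW$, which is not necessarily an example from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    h, T, sigma = 1e-2, 10.0, 0.5

    def drift(x):                       # one-sided Lipschitz, cubic growth
        return x - x**3

    def backward_euler_step(xn, dW):    # solve the implicit equation by Newton
        x = xn
        for _ in range(20):
            f = x - xn - h * drift(x) - sigma * dW
            fp = 1.0 - h * (1.0 - 3.0 * x**2)
            x -= f / fp
        return x

    def projected_euler_step(xn, dW):   # explicit step, then projection onto
        x = xn + h * drift(xn) + sigma * dW     # a ball that grows as h shrinks
        r = h ** (-0.25)                # illustrative exponent, not the paper's
        return np.clip(x, -r, r)

    xb = xp = 1.0
    for _ in range(int(T / h)):
        dW = np.sqrt(h) * rng.standard_normal()
        xb, xp = backward_euler_step(xb, dW), projected_euler_step(xp, dW)
    print(xb, xp)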
|
We propose a new method for solving imaging inverse problems using
text-to-image latent diffusion models as general priors. Existing methods using
latent diffusion models for inverse problems typically rely on simple null text
prompts, which can lead to suboptimal performance. To address this limitation,
we introduce a method for prompt tuning, which jointly optimizes the text
embedding on-the-fly while running the reverse diffusion process. This allows
us to generate images that are more faithful to the diffusion prior. In
addition, we propose a method to keep the evolution of latent variables within
the range space of the encoder, by projection. This helps to reduce image
artifacts, a major problem when using latent diffusion models instead of
pixel-based diffusion models. Our combined method, called P2L, outperforms both
image- and latent-diffusion model-based inverse problem solvers on a variety of
tasks, such as super-resolution, deblurring, and inpainting.
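A schematic of one reverse-diffusion step in this spirit (a hedged sketch: denoiser, encoder, decoder, and data_fidelity are placeholder callables, not the authors' API, and the update rules are illustrative):

    import torch

    def p2l_style_step(z_t, t, c, y, denoiser, encoder, decoder, data_fidelity,
                       lr_prompt=1e-3, lr_data=1.0):
        # 1) Prompt tuning: nudge the text embedding c to better explain y.
        c = c.detach().requires_grad_(True)
        loss = data_fidelity(decoder(denoiser(z_t, t, c)), y)
        c = (c - lr_prompt * torch.autograd.grad(loss, c)[0]).detach()

        # 2) Data-consistency gradient step on the latent.
        z = z_t.detach().requires_grad_(True)
        loss = data_fidelity(decoder(denoiser(z, t, c)), y)
        z = z - lr_data * torch.autograd.grad(loss, z)[0]

        # 3) Projection: keep the latent in the range space of the encoder.
        return encoder(decoder(z)).detach(), c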
|
Let us consider the set of joint quantum correlations arising from
two-outcome local measurements on a bipartite quantum system. We prove that no
finite dimension is sufficient to generate all these sets. We approach the
problem in two different ways, constructing explicit examples for every
dimension d which demonstrate that there exist bipartite correlations that
necessitate d-dimensional local quantum systems in order to generate them. We
also show that at least 10 two-outcome measurements must be carried out by the
two parties altogether so as to generate bipartite joint correlations not
achievable by two-dimensional local systems. The smallest explicit example we
found involves 11 settings.
|
Microscopic pyramidal pits in a reflective surface, a geometry similar to a
retroreflector, are frequently used to enhance signal strength. The enhancement
effect is generally attributed to surface plasmons; however, the sub-wavelength
to near-wavelength dimensions of the pyramidal 3D geometry suggest
contributions from diffraction and near-field effects. Our theoretical analysis
of the light intensity distribution in the similar (but simpler) 2D geometry
assuming a perfect conductor screen, that is, in the absence of any plasmon
effects, shows that interference patterns forming within the cavity cause a
significant resonant increase in local intensity. Such an effect can be
important for many applications, especially the widely used Raman spectroscopy.
Plasmon-free resonant enhancement of the emitted Raman signal due to the
enhanced local field amplitude is also possible, which implies that the
geometry practically implements a Raman laser. Comparison of diffraction
patterns obtained with near-field and far-field approaches reveals that the
near-field component is responsible for the observed dramatic intensity
enhancement, and thus the Raman enhancement as well.
|
We present a scheme to entangle two microwave fields by using the nonlinear
magnetostrictive interaction in a ferrimagnet. The magnetostrictive interaction
enables the coupling between a magnon mode (spin wave) and a mechanical mode in
the ferrimagnet, and the magnon mode simultaneously couples to two microwave
cavity fields via the magnetic dipole interaction. The magnon-phonon coupling
is enhanced by directly driving the ferrimagnet with a strong red-detuned
microwave field, and the driving photons are scattered onto two sidebands
induced by the mechanical motion. We show that two cavity fields can be
prepared in a stationary entangled state if they are respectively resonant with
two mechanical sidebands. The present scheme illustrates a new mechanism for
creating entangled states of optical fields, and enables potential applications
in quantum information science and quantum tasks that require entangled
microwave fields.
|
In this manuscript, we determine the optimal approximation rate for Skorohod
integrals of sufficiently regular integrands. This generalizes the optimal
approximation results for It\^o integrals. However, without adaptedness and the
It\^o isometry, new proof techniques are required. The main tools are a
characterization via S-transform and a reformulation of the Wiener chaos
decomposition in terms of Wick-analytic functionals.
|
The applicability of the highly idealized secondary infall model to
`realistic' initial conditions is investigated. The collapse of proto-halos
seeded by $3\sigma$ density perturbations in an Einstein--de Sitter universe is
studied here for a variety of scale-free power spectra with spectral indices
ranging from $n=1$ to $-2$. Initial conditions are set by the constrained
realization algorithm and the dynamical evolution is calculated both
analytically and numerically. The analytic calculation is based on the simple
secondary infall model where spherical symmetry is assumed. A full numerical
simulation is performed by a Tree N-body code where no symmetry is assumed. A
hybrid calculation has been performed by using a monopole term code, where no
symmetry is imposed on the particles but the force is approximated by the
monopole term only. The main purpose of using such code is to suppress
off-center mergers. In all cases studied here the rotation curves calculated by
the two numerical codes are in agreement over most of the mass of the halos,
excluding the very inner region, and these are compared with the analytically
calculated ones. The main result obtained here, which reinforces the findings
of many N-body experiments, is that the collapse proceeds 'gently' and not {\it
via} violent relaxation. There is a strong correlation of the final energy of
individual particles with the initial one. In particular we find a preservation
of the ranking of particles according to their binding energy. In cases where
the analytic model predicts non-increasing rotation curves its predictions are
confirmed by the simulations. Otherwise, sensitive dependence on initial
conditions is found and the analytic model fails completely.
|
Imitation learning with visual observations is notoriously inefficient when
addressed with end-to-end behavioural cloning methods. In this paper, we
explore an alternative paradigm which decomposes reasoning into three phases.
First, a retrieval phase, which informs the robot what it can do with an
object. Second, an alignment phase, which informs the robot where to interact
with the object. And third, a replay phase, which informs the robot how to
interact with the object. Through a series of real-world experiments on
everyday tasks, such as grasping, pouring, and inserting objects, we show that
this decomposition brings unprecedented learning efficiency, and effective
inter- and intra-class generalisation. Videos are available at
https://www.robot-learning.uk/retrieval-alignment-replay.
|
This paper is a contribution to the problem of counting geometric graphs on
point sets. More concretely, we look at the maximum numbers of non-crossing
spanning trees and forests. We show that the so-called double chain point
configuration of N points has Omega(12.52^N) non-crossing spanning trees and
Omega(13.61^N) non-crossing forests. This improves the previous lower bounds on
the maximum number of non-crossing spanning trees and of non-crossing forests
among all sets of N points in general position given by Dumitrescu, Schulz,
Sheffer and T\'oth. Our analysis relies on the tools of analytic combinatorics,
which enable us to count certain families of forests on points in convex
position, and to estimate their average number of components. A new upper bound
of O(22.12^N) for the number of non-crossing spanning trees of the double chain
is also obtained.
|
We propose a low-complexity transmission strategy in multi-user
multiple-input multiple-output downlink systems. The adaptive strategy adjusts
the precoding methods, denoted as the transmission mode, to improve the system
sum rates while maintaining the number of simultaneously served users. Three
linear precoding transmission modes are discussed, i.e., the block
diagonalization zero-forcing, the cooperative zero-forcing (CZF), and the
cooperative matched-filter (CMF). Considering both the number of data streams
and the multiple-antenna configuration of users, we modify the common CZF and
CMF modes by allocating data streams. Then, the transmission mode is selected
between the modified ones according to the asymptotic sum rate analyses. As
instantaneous channel state information is not needed for the mode selection,
the computational complexity is significantly reduced. Numerical simulations
confirm our analyses and demonstrate that the proposed scheme achieves
substantial performance gains with very low computational complexity.
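For reference, the two cooperative baselines reduce to one-line precoders (a generic sketch; the stream-allocation modifications proposed here are not shown):

    import numpy as np

    rng = np.random.default_rng(1)
    K, M = 4, 8                                # user antennas vs. BS antennas
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / 2**0.5

    W_czf = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # cooperative ZF
    W_cmf = H.conj().T                                   # cooperative MF

    for name, W in (("CZF", W_czf), ("CMF", W_cmf)):
        W = W / np.linalg.norm(W)                        # power normalization
        print(name, np.round(np.abs(H @ W), 2))  # CZF: diagonal; CMF: leakage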
|
We introduce a topological field theory with a Bogomol'nyi structure
permitting BPS electric, magnetic and dyonic monopoles. From the general
arguments given by Montonen and Olive the particle spectrum and mass compare
favourably with that of the intermediate vector bosons. In most, if not in all,
of its essential features the topological field theory introduced here provides
an example of a dual field theory, the existence of which was conjectured by
Montonen and Olive.
|
Silicon carbide has recently been developed as a platform for optically
addressable spin defects. In particular, the neutral divacancy in the 4H
polytype displays an optically addressable spin-1 ground state and
near-infrared optical emission. Here, we present the Purcell enhancement of a
single neutral divacancy coupled to a photonic crystal cavity. We utilize a
combination of nanolithographic techniques and a dopant-selective
photoelectrochemical etch to produce suspended cavities with quality factors
exceeding 5,000. Subsequent coupling to a single divacancy leads to a Purcell
factor of ~50, which manifests as increased photoluminescence into the
zero-phonon line and a shortened excited-state lifetime. Additionally, we
measure coherent control of the divacancy ground state spin inside the cavity
nanostructure and demonstrate extended coherence through dynamical decoupling.
This spin-cavity system represents an advance towards scalable long-distance
entanglement protocols using silicon carbide that require the interference of
indistinguishable photons from spatially separated single qubits.
|
A noisy CDMA downlink channel operating under a strict complexity constraint
on the receiver is introduced. According to this constraint, detected bits,
obtained by performing hard decisions directly on the channel's matched filter
output, must be the same as the transmitted binary inputs. This channel
setting, allowing the use of the simplest receiver scheme, seems to be
worthless, making reliable communication at any rate impossible. However,
recently this communication paradigm was shown to yield valuable information
rates in the case of a noiseless channel. This finding calls for the
investigation of this attractive complexity-constrained transmission scheme for
the more practical noisy channel case. By adopting the statistical mechanics
notion of metastable states of the renowned Hopfield model, it is proved that
under a bounded noise assumption such complexity-constrained CDMA channel gives
rise to a non-trivial Shannon-theoretic capacity, rigorously analyzed and
corroborated using finite-size channel simulations. For unbounded noise the
channel's outage capacity is addressed and specifically described for the
popular additive white Gaussian noise.
|
The way people respond to messaging from public health organizations on
social media can provide insight into public perceptions on critical health
issues, especially during a global crisis such as COVID-19. It could be
valuable for high-impact organizations such as the US Centers for Disease
Control and Prevention (CDC) or the World Health Organization (WHO) to
understand how these perceptions impact reception of messaging on health policy
recommendations. We collect two datasets of public health messages and their
responses from Twitter relating to COVID-19 and Vaccines, and introduce a
predictive method which can be used to explore the potential reception of such
messages. Specifically, we harness a generative model (GPT-2) to directly
predict probable future responses and demonstrate how it can be used to
optimize expected reception of important health guidance. Finally, we introduce
a novel evaluation scheme with extensive statistical testing which allows us to
conclude that our models capture the semantics and sentiment found in actual
public health responses.
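The core generation step can be sketched with an off-the-shelf model (a minimal sketch: the base gpt2 checkpoint and the prompt format are illustrative stand-ins for the paper's fine-tuned setup):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Health message: Wear a mask in crowded indoor spaces.\nResponse:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.9, max_new_tokens=40,
                         num_return_sequences=5, pad_token_id=tok.eos_token_id)
    for o in out:                        # sample several probable responses
        print(tok.decode(o[ids.shape[1]:], skip_special_tokens=True))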
|
Realisation of experiments even on small and medium-scale quantum computers
requires an optimisation of several parameters to achieve high-fidelity
operations. As the size of the quantum register increases, the characterisation
of quantum states becomes more difficult since the number of parameters to be
measured grows as well and finding efficient observables in order to estimate
the parameters of the model becomes a crucial task. Here we propose a method
relying on application of Bayesian inference that can be used to determine
systematic, unknown phase shifts of multi-qubit states. This method offers
important advantages as compared to Ramsey-type protocols. First, application
of Bayesian inference allows the selection of an adaptive basis for the
measurements which yields the optimal amount of information about the phase
shifts of the state. Secondly, this method can process the outcomes of
different observables at the same time. This leads to a substantial decrease in
the resources needed for the estimation of phases, speeding up the state
characterisation and optimisation in experimental implementations. The proposed
Bayesian inference method can be applied in various physical platforms that are
currently used as quantum processors.
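A single-parameter caricature of the update (a hedged sketch: the cosine likelihood and the grid posterior are generic assumptions, not the multi-qubit model of the paper):

    import numpy as np

    rng = np.random.default_rng(2)
    phi_true = 0.8
    grid = np.linspace(-np.pi, np.pi, 1000)
    post = np.ones_like(grid) / grid.size          # flat prior over the phase

    for _ in range(50):
        mean = np.sum(grid * post)
        theta = mean + np.pi / 2       # adaptive basis: measure where the
        p_plus = 0.5 * (1 + np.cos(phi_true - theta))   # outcome prob. is steep
        like_plus = 0.5 * (1 + np.cos(grid - theta))
        like = like_plus if rng.random() < p_plus else 1 - like_plus
        post = post * like
        post /= post.sum()             # Bayes update and renormalization

    print(np.sum(grid * post), phi_true)   # posterior mean vs. true phase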
|
Providing a measure of market risk is an important issue for investors and
financial institutions. However, the existing models for this purpose are by
definition symmetric. The current paper introduces an asymmetric capital asset
pricing model for measurement of the market risk. It explicitly accounts for
the fact that falling prices determine the risk for a long position in the
risky asset and rising prices govern the risk for a short position. Thus, the
position-dependent market risk measure provided here accords better with
reality. The empirical application reveals that Apple stock is more volatile
than the market only for the short seller. Surprisingly, the investor that has
a long position in this stock is facing a lower volatility than the market.
This property is not captured by the standard asset pricing model, which has
important implications for the expected returns and hedging designs.
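For orientation (a generic construction, not necessarily the estimator introduced here), asymmetry can be encoded by side-dependent betas such as $\beta^- = \mathrm{Cov}(r_i, r_m \mid r_m < \mu_m)/\mathrm{Var}(r_m \mid r_m < \mu_m)$ for falling markets, with an upside analogue $\beta^+$ conditioning on $r_m > \mu_m$, so that long and short positions face different risk measures.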
|
Many surface reconstruction methods incorporate normal integration, which is
a process to obtain a depth map from surface gradients. In this process, the
input may represent a surface with discontinuities, e.g., due to
self-occlusion. To reconstruct an accurate depth map from the input normal map,
hidden surface gradients occurring from the jumps must be handled. To model
these jumps correctly, we design a novel discretization scheme for the domain
of normal integration. Our key idea is to introduce auxiliary edges, which
bridge between piecewise-smooth patches in the domain so that the magnitude of
hidden jumps can be explicitly expressed. Using the auxiliary edges, we design
a novel algorithm to optimize the discontinuity and the depth map from the
input normal map. Our method optimizes discontinuities by using a combination
of iterative re-weighted least squares and iterative filtering of the jump
magnitudes on auxiliary edges to provide strong sparsity regularization.
Compared to previous discontinuity-preserving normal integration methods, which
model the magnitudes of jumps only implicitly, our method reconstructs subtle
discontinuities accurately thanks to our explicit representation of jumps
allowing for strong sparsity regularization.
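For contrast, the discontinuity-free baseline such methods build on can be written in a few lines (a minimal sketch: plain least-squares normal integration with unit weights on all finite-difference edges, which an IRLS stage would then reweight):

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    def integrate(p, q):
        """Least-squares depth from gradients p = dz/dx, q = dz/dy (no jumps)."""
        H, W = p.shape
        idx = np.arange(H * W).reshape(H, W)
        A = lil_matrix((2 * H * W, H * W))
        b = np.zeros(2 * H * W)
        r = 0
        for i in range(H):
            for j in range(W):
                if j + 1 < W:           # horizontal finite-difference edge
                    A[r, idx[i, j + 1]], A[r, idx[i, j]], b[r] = 1, -1, p[i, j]
                    r += 1
                if i + 1 < H:           # vertical finite-difference edge
                    A[r, idx[i + 1, j]], A[r, idx[i, j]], b[r] = 1, -1, q[i, j]
                    r += 1
        z = lsqr(A.tocsr()[:r], b[:r])[0]
        return z.reshape(H, W)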
|
Spin-orbit coupled dynamics are of fundamental interest in both quantum
optical and condensed matter systems alike. In this work, we show that photonic
excitations in pseudospin-1/2 atomic lattices exhibit an emergent spin-orbit
coupling when the geometry is chiral. This spin-orbit coupling arises naturally
from the electric dipole interaction between the lattice sites and leads to
spin polarized excitation transport. Using a general quantum optical model, we
determine analytically the conditions that give rise to spin-orbit coupling and
characterize the behavior under various symmetry transformations. We show that
chirality-induced spin textures are associated with a topologically nontrivial
Zak phase that characterizes the chiral setup. Our results demonstrate that
chiral atom arrays are a robust platform for realizing spin-orbit coupled
topological states of matter.
|
Well-known conductive molecular wires, like cumulene or polyyne, provide a
model for interconnecting molecular electronics circuits. In a recent
experiment, the appearance of a carbon wire bridging two-dimensional
electrodes - graphene sheets - was observed [PRL 102, 205501 (2009)], thus
demonstrating a mechanical way of producing cumulene. In this work, we study the structure
and conductance properties of the carbon wire suspended between carbon
nanotubes (CNTs) of different chiralities (zigzag and armchair), and
corresponding conductance variation upon stretching. We find the geometrical
structure of the carbon wire bridging CNTs similar to the experimentally
observed structures in the carbon wire obtained between graphene electrodes. We
show a capability to modulate the conductance by changing bridging sites
between the carbon wire and CNTs without breaking the wire. The observed
current modulation via stretching/elongation of the cumulene wire, together
with the stability of the CNT junction, makes it a promising candidate for a
mechano-switching device in molecular nanoelectronics.
|
There have now been three supernova-associated gamma-ray bursts (GRBs) at
redshift z < 0.17, namely 980425, 030329, and 031203, but the nearby and
under-luminous GRBs 980425 and 031203 are distinctly different from the
`classical' or standard GRBs. It has been suggested that they could be
classical GRBs observed away from their jet axes, or they might belong to a
population of under-energetic GRBs. Recent radio observations of the afterglow
of GRB 980425 suggest that different engines may be responsible for the
observed diversity of cosmic explosions. Given this assumption, a crude
constraint on a luminosity function for faint GRBs with a mean luminosity
similar to that of GRB 980425 and an upper limit on the rate density of
980425-type events, we simulate the redshift distribution of under-luminous
GRBs assuming BATSE and Swift sensitivities. A local rate density of about 0.6%
of the local supernova Type Ib/c rate yields simulated probabilities for
under-luminous events to occur at rates comparable to the BATSE GRB
low-redshift distribution. In this scenario the probability of BATSE/HETE
detecting at least one GRB at z<0.05 is 0.78 over 4.5 years, a result that is
comparable with observation. Swift has the potential to detect 1--5
under-luminous GRBs during one year of observation.
|
We develop an effective potential approach for assessing the flow of charge
within a two-dimensional donor-acceptor/metal network based on core-level
shifts. To do so, we perform both density functional theory (DFT) calculations
and x-ray photoemission spectroscopy (XPS) measurements of the core-level
shifts for three different monolayers adsorbed on a Ag substrate. Specifically,
we consider perfluorinated pentacene (PFP), copper phthalocyanine (CuPc) and
their 1:1 mixture (PFP+CuPc) adsorbed on Ag(111).
|
Phyllosilicate minerals are an emerging class of naturally occurring layered
insulators with large bandgap energy that have gained attention from the
scientific community. This class of lamellar materials has been recently
explored at the ultrathin two-dimensional level due to their specific
mechanical, electrical, magnetic, and optoelectronic properties, which are
crucial for engineering novel devices (including heterostructures). Due to
these properties, phyllosilicate minerals can be considered promising low-cost
nanomaterials for future applications. In this Perspective article, we will
present relevant features of these materials for their use in potential
2D-based electronic and optoelectronic applications, also discussing some of
the major challenges in working with them.
|
Let V(1) be the Smith-Toda complex at the prime 3. We prove that there exists
a map v_2^9: \Sigma^{144}V(1) \to V(1) that is a K(2) equivalence. This map is
used to construct various v_2-periodic infinite families in the 3-primary
stable homotopy groups of spheres.
|
This paper presents some new results on Parisian ruin under a Levy insurance
risk process, where ruin occurs when the process has stayed below a fixed level
from the last record maximum, also known as the high-water mark or drawdown,
for a fixed consecutive period of time. The law of the ruin time and the
position at ruin is given in terms of their joint Laplace transform. Identities are
presented semi-explicitly in terms of the scale function and the law of the
Levy process. They are established using recent developments on fluctuation
theory of drawdown of spectrally negative Levy process. In contrast to the
Parisian ruin of Levy process below a fixed level, ruin under drawdown occurs
in finite time with probability one.
|
In networks, the well-documented tendency for people with similar
characteristics to form connections is known as the principle of homophily.
Being able to quantify homophily into a number has a significant real-world
impact, ranging from government fund-allocation to finetuning the parameters in
a sociological model. This paper introduces the Popularity-Homophily Index (PH
Index) as a new metric to measure homophily in directed graphs. The PH Index
takes into account the popularity of each actor in the interaction network,
with popularity precalculated with an importance algorithm like Google
PageRank. The PH Index improves upon other existing measures by weighting a
homophilic edge leading to an important actor over a homophilic edge leading to
a less important actor. The PH Index can be calculated not only for a single
node in a directed graph but also for the entire graph. This paper calculates
the PH Index of two sample graphs and concludes with an overview of the
strengths and weaknesses of the PH Index, and its potential applications in the
real world.
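The ingredients can be illustrated in a few lines (an illustrative reading only: the exact PH Index formula is given in the paper, and the weighting below is a hypothetical stand-in consistent with the description):

    import networkx as nx

    # Directed toy network with node groups; weight homophilic edges by the
    # PageRank popularity of the actor they lead to.
    G = nx.DiGraph()
    G.add_edges_from([(1, 2), (2, 3), (3, 1), (4, 1), (2, 4)])
    group = {1: "a", 2: "a", 3: "b", 4: "a"}
    pr = nx.pagerank(G)

    homophilic = sum(pr[v] for u, v in G.edges if group[u] == group[v])
    total = sum(pr[v] for u, v in G.edges)
    print(homophilic / total)   # popularity-weighted homophily fraction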
|
Double descent is a surprising phenomenon in machine learning, in which as
the number of model parameters grows relative to the number of data, test error
drops as models grow ever larger into the highly overparameterized (data
undersampled) regime. This drop in test error flies against classical learning
theory on overfitting and has arguably underpinned the success of large models
in machine learning. This non-monotonic behavior of test loss depends on the
number of data, the dimensionality of the data and the number of model
parameters. Here, we briefly describe double descent, then provide an
explanation of why double descent occurs in an informal and approachable
manner, requiring only familiarity with linear algebra and introductory
probability. We provide visual intuition using polynomial regression, then
mathematically analyze double descent with ordinary linear regression and
identify three interpretable factors that, when simultaneously all present,
together create double descent. We demonstrate that double descent occurs on
real data when using ordinary linear regression, then demonstrate that double
descent does not occur when any of the three factors are ablated. We use this
understanding to shed light on recent observations in nonlinear models
concerning superposition and double descent. Code is publicly available.
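The linear-regression demonstration can be reproduced in miniature (a generic sketch; the data model and sizes are illustrative, not the paper's experiments):

    import numpy as np

    rng = np.random.default_rng(3)
    n_train, n_test, d = 30, 500, 60
    w = rng.standard_normal(d)
    X, Xt = rng.standard_normal((n_train, d)), rng.standard_normal((n_test, d))
    y = X @ w + 0.5 * rng.standard_normal(n_train)
    yt = Xt @ w

    for p in (5, 15, 25, 30, 35, 60):       # number of features actually used
        Xp, Xtp = X[:, :p], Xt[:, :p]
        w_hat = np.linalg.pinv(Xp) @ y      # min-norm least-squares solution
        err = np.mean((Xtp @ w_hat - yt) ** 2)
        print(f"p={p:3d}  test MSE={err:10.2f}")  # spikes near p = n_train,
                                                  # then drops again: double descent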
|
We consider one or more independent random walks on the $d\ge 3$ dimensional
discrete torus. The walks start from vertices chosen independently and
uniformly at random. We analyze the fluctuation behavior of the size of some
random sets arising from the trajectories of the random walks at a time
proportional to the size of the torus. Examples include vacant sets and the
intersection of ranges. The proof relies on a refined analysis of tail
estimates for hitting time and can be applied for other vertex-transitive
graphs.
|
For backward elastic scattering of deuterons by ^3He, cross sections \sigma
and tensor analyzing power T_{20} are measured at E_d=140-270 MeV. The data are
analyzed by the PWIA and by the general formula which includes virtual
excitations of other channels, with the assumption of the proton transfer from
^3He to the deuteron. Using ^3He wave functions calculated by the Faddeev
equation, the PWIA describes global features of the experimental data, while
the virtual excitation effects are important for quantitative fits to the
T_{20} data. Theoretical predictions on T_{20}, K_y^y (polarization transfer
coefficient) and C_{yy} (spin correlation coefficient) are provided up to GeV
energies.
|
Freight train services in a railway network system are generally divided into
two categories: one is the unscheduled train, whose operating frequency
fluctuates with origin-destination (OD) demands; the other is the scheduled
train, which runs on a regular timetable, just like passenger trains. The
timetable is released to the public once determined and is not influenced by
OD demands. Typically, the total capacity of scheduled trains can satisfy the
predicted demands of express cargos on average. In practice, however, the
demands fluctuate. Therefore, how to distribute shipments between different
stations across unscheduled and scheduled train services has become an
important research field in railway transportation. This
paper focuses on the coordinated optimization of the rail express cargos
distribution in two service networks. On the premise of fully utilizing the
capacity of the scheduled service network first, we establish a Car-to-Train
(CTT) assignment model to assign rail express cargos to scheduled and
unscheduled trains scientifically. The objective function is to maximize the
net income of transporting the rail express cargos. The constraints include the
capacity restriction on the service arcs, flow balance constraints, logical
relationship constraint between two groups of decision variables and the due
date constraint. The last constraint is to ensure that the total transportation
time of a shipment would not be longer than its predefined due date. Finally,
we discuss linearization techniques to simplify the model proposed in this
paper, which make it possible to obtain a globally optimal solution using
commercial software.
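A toy version of the assignment flavor, with made-up data (a hedged sketch using PuLP; it omits flow balance and the full service-network structure):

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    shipments = {"s1": dict(cars=3, due=48), "s2": dict(cars=5, due=24)}
    income = {("s1", "sched"): 9, ("s1", "unsched"): 7,
              ("s2", "sched"): 15, ("s2", "unsched"): 12}
    ttime = {"sched": 20, "unsched": 36}       # transport time per service type
    cap_sched = 6                              # cars the scheduled train can take

    m = LpProblem("CTT_toy", LpMaximize)
    x = {(s, k): LpVariable(f"x_{s}_{k}", cat="Binary")
         for s in shipments for k in ("sched", "unsched")}
    m += lpSum(income[s, k] * x[s, k] for (s, k) in x)            # net income
    for s in shipments:
        m += lpSum(x[s, k] for k in ("sched", "unsched")) == 1    # assign once
        for k in ("sched", "unsched"):                            # due dates
            if ttime[k] > shipments[s]["due"]:
                m += x[s, k] == 0
    m += lpSum(shipments[s]["cars"] * x[s, "sched"] for s in shipments) <= cap_sched
    m.solve()
    print({v.name: v.value() for v in m.variables()})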
|
Recent experimental progress has revealed Meissner and Vortex phases in
low-dimensional ultracold atom systems. Atomtronic setups can realize ring
ladders, while explicitly taking the finite size of the system into account.
This enables the engineering of quantized chiral currents and phase slips
in-between them. We find that the mesoscopic scale modifies the current. Full
control of the lattice configuration reveals a reentrant behavior of Vortex and
Meissner phases. Our approach allows a feasible diagnostic of the currents'
configuration through time of flight measurements.
|
We present stellar-dynamical measurements of the central supermassive black
hole (SMBH) in the S0 galaxy NGC 307, using adaptive-optics IFU data from
VLT-SINFONI. We investigate the effects of including dark-matter haloes as well
as multiple stellar components with different mass-to-light (M/L) ratios in the
dynamical modeling. Models with no halo and a single stellar component yield a
relatively poor fit with a low value for the SMBH mass ($7.0 \pm 1.0 \times
10^{7} M_{\odot}$) and a high stellar M/L ratio (K-band M/L = $1.3 \pm 0.1$).
Adding a halo produces a much better fit, with a significantly larger SMBH mass
($2.0 \pm 0.5 \times 10^{8} M_{\odot}$) and a lower M/L ratio ($1.1 \pm 0.1$).
A model with no halo but with separate bulge and disc components produces a
similarly good fit, with a slightly larger SMBH mass ($3.0 \pm 0.5 \times
10^{8} M_{\odot}$) and an identical M/L ratio for the bulge component, though
the disc M/L ratio is biased high (disc M/L $ = 1.9 \pm 0.1$). Adding a halo to
the two-stellar-component model results in a much more plausible disc M/L ratio
of $1.0 \pm 0.1$, but has only a modest effect on the SMBH mass ($2.2 \pm 0.6
\times 10^{8} M_{\odot}$) and leaves the bulge M/L ratio unchanged. This
suggests that measuring SMBH masses in disc galaxies using just a single
stellar component and no halo has the same drawbacks as it does for elliptical
galaxies, but also that reasonably accurate SMBH masses and bulge M/L ratios
can be recovered (without the added computational expense of modeling haloes)
by using separate bulge and disc components.
|
We study a U(N) gauged matrix quantum mechanics which, in the large N limit,
is closely related to the chiral WZW conformal field theory. This manifests
itself in two ways. First, we construct the left-moving Kac-Moody algebra from
matrix degrees of freedom. Secondly, we compute the partition function of the
matrix model in terms of Schur and Kostka polynomials and show that, in the
large $N$ limit, it coincides with the partition function of the WZW model.
This same matrix model was recently shown to describe non-Abelian quantum Hall
states and the relationship to the WZW model can be understood in this
framework.
|
One-bit compressive sensing has gained popularity in signal processing and
communications due to its low storage costs and low hardware complexity.
However, it has been a challenging task to recover the signal only by
exploiting the one-bit (the sign) information. In this paper, we appropriately
formulate the one-bit compressive sensing into a double-sparsity constrained
optimization problem. The first-order optimality conditions for this nonconvex
and discontinuous problem are established via the newly introduced
$\tau$-stationarity, based on which, a gradient projection subspace pursuit
(\texttt{GPSP}) algorithm is developed. It is proven that \texttt{GPSP} can
converge globally and terminate within finite steps. Numerical experiments have
demonstrated its excellent performance in terms of a high order of accuracy
with a fast computational speed.
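For orientation, a classic baseline in the same family, binary iterative hard thresholding (BIHT), also alternates a sign-consistency gradient step with a projection onto sparse vectors (a sketch of BIHT, not of GPSP itself):

    import numpy as np

    rng = np.random.default_rng(4)
    n, m, s = 100, 400, 5                  # dimension, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)                 # one-bit data fix only the direction
    A = rng.standard_normal((m, n))
    y = np.sign(A @ x)

    xh = np.zeros(n)
    for _ in range(300):
        a = xh + A.T @ (y - np.sign(A @ xh)) / m   # sign-consistency step
        keep = np.argsort(np.abs(a))[-s:]          # keep s largest entries
        xh = np.zeros(n)
        xh[keep] = a[keep]
        xh /= np.linalg.norm(xh)                   # unit-norm representative
    print(np.linalg.norm(x - xh))                  # small direction error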
|
Constructing the Semi-Unitary Transformation (SUT) to obtain the
supersymmetric partner Hamiltonians for a one-dimensional harmonic oscillator,
it has been shown that under this transformation the supersymmetric partner
loses its ground state in T^{4}-space, while its eigenfunctions constitute a
complete orthonormal basis in a subspace of the full Hilbert space.
Keywords: Supersymmetry, Superluminal Transformations, Semi Unitary
Transformations.
PACS No: 14.80Lv
|