The thin plate spline smoother is a classical model for finding a smooth
function from the knowledge of its observations at scattered locations, which
may be corrupted by random noise. We consider a nonconforming Morley finite element method to
approximate the model. We prove the stochastic convergence of the finite
element method which characterizes the tail property of the probability
distribution function of the finite element error. We also propose a
self-consistent iterative algorithm to determine the smoothing parameter based
on our theoretical analysis. Numerical examples are included to confirm the
theoretical analysis and to show the competitive performance of the self-
consistent algorithm for finding the smoothing parameter.
|
Here I obtain the conditions necessary for the conservation of the Dirac
current when one substitutes the assumption $\gamma^A_{\ \ |B}=0$ for
$\gamma^A_{\ \ |B}=[V_B,\gamma^A]$, where the $\gamma^A$s are the Dirac
matrices and "$|$" represents the components of the covariant derivative. As an
application, I apply these conditions to the model used in Ref. [M. Novello,
Phys. Rev. {\bf D8}, 2398 (1973)].
|
Consider $n$ independent measurements, with the additional information of the
times at which measurements are performed. This paper deals with testing
statistical hypotheses when $n$ is large and only a small number of
observations, concentrated in short time intervals, are relevant to the study. We
define a testing procedure in terms of multiple likelihood ratio (LR)
statistics obtained by splitting the observations into groups, and in
accordance with the following principles: P1) each LR statistic is formed by
gathering the data included in $G$ consecutive vectors of observations, where
$G$ is a suitable time window defined a priori with respect to an arbitrary
choice of the `origin of time'; P2) the null statistical hypothesis is rejected
only if at least $k$ LR statistics are sufficiently small, for a suitable
choice of $k$. We show that the application of the classical Wilks' theorem may
be affected by the arbitrary choice of the `origin of time', in connection with
P1). We then introduce a Wilks' theorem for grouped data which leads to a
testing procedure that overcomes the problem of the arbitrary choice of the
`origin of time', while fulfilling P1) and P2). Such a procedure is more
powerful than the corresponding procedure based on Wilks' theorem.
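As an illustration of principles P1) and P2), the following minimal sketch (an assumption-laden toy, not the paper's actual procedure) splits Gaussian observations with known unit variance into consecutive groups of size G, computes each group's statistic -2 log Lambda for H0: mu = 0, and rejects the null only when at least k group statistics are extreme (equivalently, at least k likelihood ratios are small):

```python
import math
import random

def group_lr_statistics(x, G, sigma=1.0):
    """Principle P1): gather the data into consecutive groups of size G and
    compute one likelihood-ratio statistic per group, here for a Gaussian
    model with known sigma, testing H0: mu = 0 against the unrestricted MLE."""
    stats = []
    for i in range(0, len(x) - G + 1, G):
        grp = x[i:i + G]
        xbar = sum(grp) / G
        # -2 log Lambda reduces to G * xbar^2 / sigma^2 in this Gaussian case
        stats.append(G * xbar * xbar / (sigma * sigma))
    return stats

def reject_null(x, G, k, threshold=6.635):  # 6.635 ~ chi2(1) 0.99 quantile
    """Principle P2): reject H0 only if at least k group statistics exceed
    the threshold, i.e. at least k likelihood ratios are sufficiently small."""
    stats = group_lr_statistics(x, G)
    return sum(s > threshold for s in stats) >= k

random.seed(0)
null_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
# A short burst of signal concentrated in a single time window:
signal_data = (null_data[:500]
               + [random.gauss(2.0, 1.0) for _ in range(20)]
               + null_data[520:])
print(reject_null(null_data, G=20, k=2))
print(reject_null(signal_data, G=20, k=1))
```

With the burst confined to one window of size G, a single extreme group already triggers rejection, while pure noise rarely produces several extreme groups at once.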
|
We study the gluon cascade generated via successive medium-induced branchings
by an energetic parton propagating through a dense QCD medium. We focus on the
high-energy regime where the energy $E$ of the leading particle is much larger
than the characteristic medium scale $\omega_c=\hat q L^2/2$, with $\hat q$ the
jet quenching parameter and $L$ the distance travelled through the medium. In
this regime the leading particle loses only a small fraction
$\sim\alpha_s(\omega_c/E)$ of its energy and can be treated as a steady source
of radiation for gluons with energies $\omega\le\omega_c$. For this effective
problem with a source, we obtain exact analytic solutions for the gluon
spectrum and the energy flux. The solutions exhibit wave turbulence: the basic
physical process is a continuing fragmentation which is `quasi-democratic'
(i.e. quasi-local in energy) and which provides an energy transfer from the
source to the medium at a rate (the energy flux $\mathcal{F}$) which is
quasi-independent of $\omega$. The locality of the branching process implies a
spectrum of the Kolmogorov-Obukhov type, i.e. a power-law spectrum which is a
fixed point of the branching process and whose strength is proportional to the
energy flux: $D(\omega)\sim\mathcal{F}/\sqrt\omega$ for $\omega\ll\omega_c$.
Via this turbulent flow, the gluon cascade loses towards the medium an energy
$\Delta E\sim\alpha_s^2\omega_c$, which is independent of the initial energy
$E$ of the leading particle and of the details of the thermalization mechanism
at the low-energy end of the cascade. This energy is carried away by very soft
gluons, which propagate at very large angles with respect to the jet axis. Our
predictions for the value of $\Delta E$ and for its angular distribution appear
to agree quite well, qualitatively and even semi-quantitatively, with the
phenomenology of di-jet asymmetry in nucleus-nucleus collisions at the LHC.
|
In this paper, let $n=2m$ and $d=3^{m+1}-2$ with $m\geq2$ and
$\gcd(d,3^n-1)=1$. By studying the weight distribution of the ternary
Zetterberg code and counting the numbers of solutions of some equations over
the finite field $\mathbb{F}_{3^n}$, the correlation distribution between a
ternary $m$-sequence of period $3^n-1$ and its $d$-decimation sequence is
completely determined. This is the first time that the correlation distribution
for a non-binary Niho decimation has been determined since 1976.
|
Signed graphs encode similarity and dissimilarity relationships among
different entities with positive and negative edges. In this paper, we study
the problem of community recovery over signed graphs generated by the signed
stochastic block model (SSBM) with two equal-sized communities. Our approach is
based on the maximum likelihood estimation (MLE) of the SSBM. Unlike many
existing approaches, our formulation reveals that the positive and negative
edges of a signed graph should be treated unequally. We then propose a simple
two-stage iterative algorithm for solving the regularized MLE. It is shown that
in the logarithmic degree regime, the proposed algorithm can exactly recover
the underlying communities in nearly-linear time at the information-theoretic
limit. Numerical results on both synthetic and real data are reported to
validate and complement our theoretical developments and demonstrate the
efficacy of the proposed method.
|
Scientific machine learning (SciML) has emerged as a versatile approach to
address complex computational science and engineering problems. Within this
field, physics-informed neural networks (PINNs) and deep operator networks
(DeepONets) stand out as the leading techniques for solving partial
differential equations by incorporating both physical equations and
experimental data. However, training PINNs and DeepONets requires significant
computational resources, including long computational times and large amounts
of memory. In search of computational efficiency, training neural networks
using half precision (float16) rather than the conventional single (float32) or
double (float64) precision has gained substantial interest, given the inherent
benefits of reduced computational time and memory consumption. However, we find
that float16 cannot be applied to SciML methods because of gradient divergence
at the start of training, weight updates going to zero, and the inability to
converge to a local minimum. To overcome these limitations, we explore mixed
precision, which is an approach that combines the float16 and float32 numerical
formats to reduce memory usage and increase computational speed. Our
experiments showcase that mixed precision training not only substantially
decreases training times and memory demands but also maintains model accuracy.
We also reinforce our empirical observations with a theoretical analysis. The
research has broad implications for SciML in various computational
applications.
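The float16 failure mode and the standard loss-scaling remedy used in mixed precision can be seen in a few lines of NumPy (an illustrative sketch, not the paper's training setup; the loss-scale value 2**16 is an arbitrary choice):

```python
import numpy as np

# Gradients below float16's smallest subnormal (about 6e-8) underflow to
# exactly zero, so the corresponding weight update vanishes -- one of the
# failure modes noted above (weight updates going to zero).
grad = 1e-8
print(np.float16(grad))   # 0.0 -- the update is lost in half precision
print(np.float32(grad))   # still representable in single precision

# Mixed precision avoids this with loss scaling: multiply the loss (and hence
# the gradients) by a large factor before the float16 backward pass, then
# unscale in float32 before the weight update.
loss_scale = 2.0 ** 16
scaled_grad = np.float16(grad * loss_scale)   # now well inside float16's range
update = np.float32(scaled_grad) / np.float32(loss_scale)
print(update)             # recovers approximately 1e-8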
|
Distributed Virtual Reality systems enable globally dispersed users to
interact with each other in a shared virtual environment. In such systems,
different types of latencies occur. For a good VR experience, they need to be
controlled. The time delay between the user's head motion and the corresponding
display output of the VR system might lead to adverse effects such as a reduced
sense of presence or motion sickness. Additionally, high network latency among
worldwide locations makes collaboration between users more difficult and leads
to misunderstandings. To evaluate the performance and optimize dispersed VR
solutions, it is therefore important to measure these delays. In this work, a
novel, easy-to-set-up, and inexpensive method to measure local and remote
system latency will be described. The measuring setup consists of a
microcontroller, a microphone, a piezo buzzer, a photosensor, and a
potentiometer. With these components, it is possible to measure
motion-to-photon and mouth-to-ear latency of various VR systems. By using
GPS-receivers for timecode-synchronization it is also possible to obtain the
end-to-end delays between different worldwide locations. The described system
was used to measure local and remote latencies of two HMD-based distributed VR
systems.
|
Given a set of $n$ points $P$ in the plane, each colored with one of the $t$
given colors, a color-spanning set $S\subset P$ is a subset of $t$ points with
distinct colors. The minimum diameter color-spanning set (MDCS) is a
color-spanning set whose diameter is minimum (among all color-spanning sets of
$P$). Symmetrically, the largest closest pair color-spanning set
(LCPCS) is a color-spanning set whose closest pair is the largest (among all
color-spanning sets of $P$). Both MDCS and LCPCS have been shown to be
NP-complete, but whether they are fixed-parameter tractable (FPT) when $t$ is a
parameter is still open. Motivated by this question, we consider the FPT
tractability of some matching problems under this color-spanning model, where
$t=2k$ is the parameter. The problems are summarized as follows: (1) MinSum
Matching Color-Spanning Set, namely, computing a matching of $2k$ points with
distinct colors such that their total edge length is minimized; (2) MaxMin
Matching Color-Spanning Set, namely, computing a matching of $2k$ points with
distinct colors such that the minimum edge length is maximized; (3) MinMax
Matching Color-Spanning Set, namely, computing a matching of $2k$ points with
distinct colors such that the maximum edge length is minimized; and (4)
$k$-Multicolored Independent Matching, namely, computing a matching of $2k$
vertices in a graph such that the vertices of the edges in the matching do not
share common edges in the graph. We show that the first three problems are
polynomially solvable (hence in FPT), while problem (4) is W[1]-hard.
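To make the problem definitions concrete, the following exhaustive-search sketch solves a tiny instance of problem (1), MinSum Matching Color-Spanning Set. It is exponential-time and purely illustrative, not the polynomial algorithm established in the paper, and the point set is invented for the example:

```python
from itertools import product
import math

def all_matchings(items):
    """Yield every perfect matching of an even-sized list as a list of pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for sub in all_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def minsum_matching_css(points_by_color):
    """Brute force: pick one point per color (a color-spanning set of t = 2k
    points), then take the perfect matching with minimum total edge length."""
    best_cost, best_matching = math.inf, None
    for reps in product(*points_by_color.values()):
        for matching in all_matchings(list(reps)):
            cost = sum(math.dist(p, q) for p, q in matching)
            if cost < best_cost:
                best_cost, best_matching = cost, matching
    return best_cost, best_matching

points = {  # t = 4 colors, i.e. k = 2 (toy data for illustration)
    "red":    [(0, 0), (5, 5)],
    "green":  [(1, 0), (6, 6)],
    "blue":   [(0, 3), (9, 0)],
    "yellow": [(1, 3), (7, 7)],
}
cost, matching = minsum_matching_css(points)
print(cost)   # 2.0: red-green at distance 1 plus blue-yellow at distance 1
```

The search space is the product of the color classes times the (2k-1)!! perfect matchings, which is exactly why the paper's polynomial-time results for problems (1)-(3) are of interest.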
|
CP-violation is one of the least understood phenomena in our field. There are
major experimental programs in all high-energy laboratories around the world
which will hopefully remedy this within the next decade. The study of
CP-violating effects in B meson decays will allow stringent tests of the
Standard Model to be made and may point the way to New Physics.
The Beauty97 conference provided a forum for these experiments to discuss
their physics potential and experimental challenges relating to these studies.
This paper reviews the ongoing and future experimental B-physics projects. I
will summarize the status and future plans of these projects, as well as the
highlights of the physics and key R&D results presented at the conference. At
the end, a critical comparison of the CP-violation B experiments will be given.
|
Usability is often defined as the ability of a system to carry out specific
tasks by specific users in a specific context. Usability evaluation involves
testing the system for its expected usability. Usability testing is performed
in a natural environment (the field) or an artificial environment (a
laboratory). The result of a usability evaluation is affected by the
environment in which it is carried out. Previous studies have focused on the
effect of the physical environment (lab vs. field) on the results, but have
rarely considered the effect of the social environment (the people present
during testing). Therefore, this study aims
to review how important it is to take context into account during usability
evaluation. Context is explored through the theory of behaviour settings,
according to which behaviour of individuals is strongly influenced by the
physical as well as the social environment in which they function. The result
of this review indicates that the physical and social context plays a
substantial role in usability evaluations. Further, it also suggests that the
usability evaluation model should encompass context as an important component
in the framework.
|
In this paper, we study functions of bounded variation on a complete and
connected metric space with finite one-dimensional Hausdorff measure. The
definition of BV functions on a compact interval based on pointwise variation
is extended to this general setting. We show this definition of BV functions is
equivalent to the BV functions introduced by Miranda. Furthermore, we study the
necessity of conditions on the underlying space in Federer's characterization
of sets of finite perimeter on metric measure spaces. In particular, our
examples show that the doubling and Poincar\'e inequality conditions are
essential in showing that a set has finite perimeter if the codimension one
Hausdorff measure of the measure-theoretic boundary is finite.
|
Considering rhombohedral alpha-boron, rh-B12, as a matrix hosting
interstitials, particularly triatomic linear ones as in B12C3, better known as
B4C, the subnitride B12N3 is proposed herein. The N3 triatomic linear
alignment, labeled N2-N1-N2, resembles that found in the ionic sodium azide
NaN3, characterized by a very short d(N1-N2) = 1.16 A. Within DFT-based
calculations, B12N3 is found to be more cohesive than pristine B12, with a
larger inter-nitrogen separation of d(N1-N2) = 1.38 A. The N1-N2 elongation is
explained by the
bonding of the two terminal N2 with one of the two B12 boron substructures
forming 3B-N2-N1-N2-3B-like complex. A resulting non-bonding charge density
localized on central N1 leads to the onset of a magnetic moment of 1 Bohr
Magneton in a half-metal ferromagnetic ground state illustrated by projections
of the magnetic charge density and the site and spin electronic density of
states DOS.
|
We present the Sloan Low-mass Wide Pairs of Kinematically Equivalent Stars
(SLoWPoKES), a catalog of 1342 very-wide (projected separation >500 AU),
low-mass (at least one mid-K--mid-M dwarf component) common proper motion pairs
identified from astrometry, photometry, and proper motions in the Sloan Digital
Sky Survey. A Monte Carlo based Galactic model is constructed to assess the
probability of chance alignment for each pair; only pairs with a probability of
chance alignment <= 0.05 are included in the catalog. The overall fidelity of
the catalog is expected to be 98.35%. The selection algorithm is purposely
exclusive to ensure that the resulting catalog is efficient for follow-up
studies of low-mass pairs. The SLoWPoKES catalog is the largest sample of wide,
low-mass pairs to date and is intended as an ongoing community resource for
detailed study of bona fide systems. Here we summarize the general
characteristics of the SLoWPoKES sample and present preliminary results
describing the properties of wide, low-mass pairs. While the majority of the
identified pairs are disk dwarfs, there are 70 halo subdwarf pairs and 21 white
dwarf-disk dwarf pairs, as well as four triples. Most SLoWPoKES pairs violate
the previously defined empirical limits for maximum angular separation or
binding energies. However, they are well within the theoretical limits and
should prove very useful in putting firm constraints on the maximum size of
binary systems and on different formation scenarios. We find a lower limit of
1.1% for the wide binary frequency among the mid-K--mid-M spectral types that
constitute our sample. This frequency decreases as a function of Galactic
height, indicating a time evolution of the wide binary frequency. [See text for
full abstract.]
|
We introduce Codex, a GPT language model fine-tuned on publicly available
code from GitHub, and study its Python code-writing capabilities. A distinct
production version of Codex powers GitHub Copilot. On HumanEval, a new
evaluation set we release to measure functional correctness for synthesizing
programs from docstrings, our model solves 28.8% of the problems, while GPT-3
solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling
from the model is a surprisingly effective strategy for producing working
solutions to difficult prompts. Using this method, we solve 70.2% of our
problems with 100 samples per problem. Careful investigation of our model
reveals its limitations, including difficulty with docstrings describing long
chains of operations and with binding operations to variables. Finally, we
discuss the potential broader impacts of deploying powerful code generation
technologies, covering safety, security, and economics.
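Repeated-sampling results of this kind are typically reported via the pass@k metric; the following is a sketch of the standard unbiased estimator, where c of n generated samples pass the unit tests and k samples are drawn without replacement (the numbers below are illustrative, not the paper's):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased estimator of pass@k: the probability that at least one of k
    samples, drawn without replacement from n generations of which c are
    correct, passes the unit tests."""
    if n - c < k:
        return 1.0  # too few failures to fill a draw of k -> certain success
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: with 30 of 100 samples correct, a single draw succeeds
# 30% of the time, while ten draws almost always contain a working solution.
print(round(pass_at_k(100, 30, 1), 3))    # 0.3
print(round(pass_at_k(100, 30, 10), 3))
```

Computing 1 - C(n-c, k)/C(n, k) directly, rather than averaging over explicit subsets, keeps the estimator numerically exact even for large n.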
|
Ba(Fe1-xCox)2As2 is the most tunable of the Fe-based superconductors (FBS) in
terms of acceptance of high densities of self-assembled and artificially
introduced pinning centres which are effective in significantly increasing the
critical current density, Jc. Moreover, FBS are very sensitive to strain, which
induces an important enhancement in critical temperature, Tc, of the material.
In this paper we demonstrate that strain induced by the substrate can further
improve Jc of both single and multilayer films by more than that expected
simply due to the increase in Tc. The multilayer deposition of Ba(Fe1-xCox)2As2
on CaF2 increases the pinning force density Fp by more than 60% compared to a
single layer film, reaching a maximum of 84 GN/m^3 at 22.5 T and 4.2 K, the
highest value ever reported in any 122 phase.
|
Levitated optomechanics is showing potential for precise force measurements.
Here, we report a case study to show experimentally the capacity of such a
force sensor, using an electric field to detect a Coulomb force
applied to a levitated nanosphere. We experimentally observe the spatial
displacement of up to 6.6 nm of the levitated nanosphere by imposing a DC
field. We further apply an AC field and demonstrate resonant enhancement of
force sensing when a driving frequency, $\omega_{AC}$, and the frequency of the
levitated mechanical oscillator, $\omega_0$, converge. We directly measure a
force of $(3.0 \pm 1.5) \times 10^{-20}$ N with a 10-second integration time, at a
centre of mass temperature of 3 K and at a pressure of $1.6 \times 10^{-5}$
mbar.
|
Collaboration among researchers is an essential component of the modern
scientific enterprise, playing a particularly important role in
multidisciplinary research. However, we continue to wrestle with allocating
credit to the coauthors of publications with multiple authors, since the
relative contribution of each author is difficult to determine. At the same
time, the scientific community runs an informal field-dependent credit
allocation process that assigns credit in a collective fashion to each work.
Here we develop a credit allocation algorithm that captures the coauthors'
contribution to a publication as perceived by the scientific community,
reproducing the informal collective credit allocation of science. We validate
the method by identifying the authors of Nobel-winning papers that are credited
for the discovery, independent of their positions in the author list. The
method can also compare the relative impact of researchers working in the same
field, even if they did not publish together. The ability to accurately measure
the relative credit of researchers could affect many aspects of credit
allocation in science, potentially impacting hiring, funding, and promotion
decisions.
|
Recently Braga and Mello conjectured that for a given natural number n there
is a piecewise linear system with two zones in the plane with exactly n limit
cycles. In this paper we prove a result from which the conjecture is an
immediate consequence. Several explicit examples are given in which the
location and stability of the limit cycles are provided.
|
The derivation of determinant representations for the space-, time-, and
temperature-dependent correlation functions of the impenetrable Gaudin-Yang
model in the presence of a trapping potential is presented. These
representations are valid in both equilibrium and nonequilibrium scenarios like
the ones initiated by a sudden change of the confinement potential. In the
equal-time case our results are shown to be equivalent to a multicomponent
generalization of Lenard's formula from which Painlev\'e transcendent
representations for the correlators can be obtained in the case of harmonic
trapping and Dirichlet and Neumann boundary conditions. For a system in the
quantum Newton's cradle setup the determinant representations allow for an
exact numerical investigation of the dynamics and even hydrodynamization, which
is outside the reach of Generalized Hydrodynamics or other approximate methods.
In the case of a sudden change in the trap's frequency we predict a many-body
bounce effect, not present in the evolution of the density profile, which
causes a nontrivial periodic narrowing of the momentum distribution with
amplitude depending on the statistics of the particles.
|
The lifetime of the 3d^2D_5/2-level in singly-ionized calcium has been
measured by the electron-shelving technique on different samples of rf trapped
ions. The metastable state has been directly populated by exciting the
dipole-forbidden 4S_1/2 - 3D_5/2 transition. In ion clouds, the natural
lifetime of this metastable level has been measured to be (1095 $\pm$ 27) ms.
For the single-ion case, we determined a lifetime of (1152 $\pm$ 20) ms. The
1$\sigma$ error bars at the 2% level have different origins for the two kinds of
experiments: data-fitting methods for lifetime measurements in an ion cloud and
control of experimental parameters for a single ion. De-shelving effects are
extensively discussed. The influence of differing approaches for the processing
of the single-ion quantum jump data on the lifetime values is shown. Comparison
with recent measurements shows excellent agreement when evaluated from a given
method.
|
Let $M^n$ be a closed Riemannian manifold on which the integral of the scalar
curvature is nonnegative. Suppose $\mathfrak{a}$ is a symmetric $(0,2)$ tensor
field whose dual $(1,1)$ tensor $\mathcal{A}$ has $n$ distinct eigenvalues, and
$\mathrm{tr}(\mathcal{A}^k)$ are constants for $k=1,\cdots, n-1$. We show that
all the eigenvalues of $\mathcal{A}$ are constants, generalizing a theorem of
de Almeida and Brito \cite{dB90} to higher dimensions.
As a consequence, a closed hypersurface $M^n$ in $S^{n+1}$ is isoparametric
if one takes $\mathfrak{a}$ above to be the second fundamental form, giving
affirmative evidence to Chern's conjecture.
|
In this article, we present a structured Kalman filter associated with the
transformation matrix for observable Kalman canonical decomposition from
conventional Kalman filter (CKF) in order to generate a more accurate time
scale. The conventional Kalman filter is a special case of the proposed
structured Kalman filter which yields the same predicted unobservable or
observable states when some conditions are satisfied. We consider an
optimization problem with respect to the transformation matrix, where the
objective function is associated with not only the expected value of prediction
error but also its variance. We reveal that such an objective function is a
convex function and show conditions under which the CKF is precisely the
optimal algorithm when ideal computation is possible without numerical error. A
numerical example is presented to show the robustness of the proposed method in
terms of the initial error covariance.
|
The recent findings about two distinct quasiparticle inelastic scattering
rates in angle-dependent magnetoresistance (ADMR) experiments on overdoped
high-$T_c$ cuprate superconductors have motivated many discussions related to
the link between superconductivity, pseudogap, and transport properties in
these materials. After computing dynamical self-energy corrections in the
framework of the $t-J$ model, the inelastic scattering rate was introduced as
usual. Two distinct scattering rates were obtained showing the main features
observed in ADMR experiments. Predictions for underdoped cuprates are
discussed. The implications of these two scattering rates for the resistivity
were also studied as a function of doping and temperature and confronted with
experimental measurements.
|
Neutrinos with a magnetic dipole moment propagating in a medium with a
velocity larger than the phase velocity of light emit photons by the Cerenkov
process. The Cerenkov radiation is a helicity-flip process via which a
left-handed neutrino in a supernova core may change into a sterile right-handed
one and free-stream out of the core. Assuming that the luminosity of such
sterile right-handed neutrinos is less than $10^{53}$ ergs/sec gives an upper
bound on the neutrino magnetic dipole moment $\mu_\nu < 0.2 \times 10^{-13}
\mu_B$. This is two orders of magnitude more stringent than the previously
established bounds on $\mu_\nu$ from considerations of supernova cooling rate
by right-handed neutrinos.
|
We report the results of magnetic measurements on a powder sample of
$NiCu(pba)(D_2O)_3 \cdot 2D_2O$ (pba = 1,3-propylenebis(oxamato)), which is one
of the prototypical examples of an $S$=1/2 and 1 ferrimagnetic chain. The
susceptibility ($\chi$) shows a monotonic increase with decreasing temperature
(T) and reaches a maximum at about 7 K. In the plot of $\chi T$ versus $T$, the
experimental data exhibit a broad minimum and are fit to the $\chi T$ curve
calculated for the ferrimagnetic Heisenberg chain composed of S=1/2 and 1. From
this fit, we have evaluated the nearest-neighbor exchange constant $J/k_B = 121$
K and the g-values of Ni$^{2+}$ and Cu$^{2+}$: $g_{Ni}$ = 2.22 and $g_{Cu}$ =
2.09, respectively. The dependence of $\chi T$ on the applied external field at
low temperatures
is reproduced fairly well by the calculation for the same ferrimagnetic model.
|
Small Beowulf clusters can effectively serve as personal or group
supercomputers. In such an environment, a cluster can be optimally designed for
a specific problem (or a small set of codes). We discuss how theoretical
analysis of the code and benchmarking on similar hardware lead to optimal
systems.
|
Magnetization reversal mechanisms and impact of magnetization direction are
studied in square arrays of interconnected circular permalloy nanorings using
MOKE, local imaging, numerical simulations and transport techniques.
|
We report thermal expansion measurements on Ca(Fe_(1-x)Co_x)_2As_2 single
crystals with different thermal treatment, with samples chosen to represent
four different ground states observed in this family. For all samples thermal
expansion is anisotropic with different signs of the in-plane and c-axis
thermal expansion coefficients in the high temperature, tetragonal phase. The
features in thermal expansion associated with the phase transitions are of
opposite signs as well, pointing to a different response of transition
temperatures to the in-plane and the c-axis stress. These features, and
consequently the inferred pressure derivatives, are very large, clearly and
substantially exceeding those in the Ba(Fe_(1-x)Co_x)_2As_2 family. For all
transitions the c-axis response is dominant.
|
A general model for treating the effects of three-dimensional interface
roughness (IFR) in layered semiconductor structures has been derived and
experimentally verified. Configurational averaging of the IFR potential
produces an effective grading potential in the out-of-plane direction, greatly
altering the energy spectrum of the structures. IFR scattering self-energy is
also derived for the general case; when IFR is strong, its scattering effect is
shown to dominate over phonon interaction and impurity scattering. When applied
to intersubband transitions, the theoretical predictions explain the
experimental observation of the anomalous energy shift and unusual broadening
of the ISB transitions in III-Nitride thin-layered superlattices.
|
Current problems encountered in the spectroscopic determination of
photospheric abundances are outlined and exemplified in a reevaluation of C, N,
O, Ne, Mg, Si, and Fe, taking effects of NLTE and granulation into account.
Updated abundances of these elements are given in Table 2.
Specific topics addressed are (1) the correlation between photospheric matter
and CI chondrites, and the condensation temperature below which it breaks down
(Figure 1), and (2) the question of whether the metallicity of the Sun is typical for
its age and position in the Galaxy.
|
We study the properties of the straight segments forming in N-body
simulations of the galactic discs. The properties of these features are
consistent with the observational ones summarized by Chernin et al. (2001).
Unlike some previous suggestions that explain the straight segments as
gas-dynamical instabilities, in our models they form in the stellar system. We
suggest that the straight segments form as a response of the rotating disc to
the gravity of regions of enhanced density (overdensities) corotating with the
disc. The kinematics of stars near the prominent overdensities is
consistent with this hypothesis.
|
The energy and flux budget (EFB) closure theory for a passive scalar
(non-buoyant and non-inertial particles or gaseous admixtures) is developed for
stably stratified turbulence. The physical background of the EFB turbulence
closures is based on the budget equations for the turbulent kinetic and
potential energies and turbulent fluxes of momentum and buoyancy, as well as
the turbulent flux of particles. The EFB turbulence closure is designed for
stratified geophysical flows from neutral to very stable stratification and it
implies that turbulence is maintained by the velocity shear at any
stratification. In a steady-state, expressions for the turbulent flux of
passive scalar and the anisotropic non-symmetric turbulent diffusion tensor are
derived, and universal flux Richardson number dependencies of the components of
this tensor are obtained. The diagonal component in the vertical direction of
the turbulent diffusion tensor is suppressed by strong stratification, while
the diagonal components in the horizontal directions are not suppressed, and
they are dominant in comparison with the other components of turbulent
diffusion tensor. This implies that any initially strongly inhomogeneous
particle cloud evolves into a thin pancake in the horizontal plane, with a very
slow increase of its thickness in the vertical direction. The
turbulent Schmidt number increases linearly with the gradient Richardson
number. Considering the applications of these results to the atmospheric
boundary-layer turbulence, theoretical relationships are derived that allow one
to determine the turbulent diffusion tensor as a function of the vertical
coordinate measured in units of the local Obukhov length scale. The
obtained relations are potentially useful in modelling applications of particle
dispersion in the atmospheric boundary-layer turbulence and free atmosphere
turbulence.
|
Ultra-deep ACS and WFC3/IR HUDF+HUDF09 data, along with the wide-area
GOODS+ERS+CANDELS data over the CDF-S GOODS field, are used to measure UV
colors, expressed as the UV-continuum slope beta, of star-forming galaxies over
a wide range in luminosity (0.1L*(z=3) to 2L*(z=3)) at high redshift (z~7 to
z~4). Beta is measured using all ACS and WFC3/IR passbands uncontaminated by
Ly_alpha and spectral breaks. Extensive tests show that our beta measurements
are only subject to minimal biases. Using a different selection procedure,
Dunlop et al. recently found large biases in their beta measurements. To
reconcile these different results, we simulated both approaches and found that
beta measurements for faint sources are subject to large biases if the same
passbands are used both to select the sources and to measure beta.
High-redshift galaxies show a well-defined rest-frame UV color-magnitude (CM)
relationship that becomes systematically bluer towards fainter UV luminosities.
No evolution is seen in the slope of the UV CM relationship in the first 1.5
Gyr, though there is a small evolution in the zero-point to redder colors from
z~7 to z~4. This suggests that galaxies are evolving along a well-defined
sequence in the L(UV)-color (beta) plane (a "star-forming sequence"?). Dust
appears to be the principal factor driving changes in the UV color (beta) with
luminosity. These new larger beta samples lead to improved dust extinction
estimates at z~4-7 and confirm that the extinction is still essentially zero at
low luminosities and high redshifts. Inclusion of the new dust extinction
results leads to (i) excellent agreement between the SFR density at z~4-8 and
that inferred from the stellar mass density, and (ii) higher SSFRs at z>~4,
suggesting the SSFR may evolve modestly (by factors of ~2) from z~4-7 to z~2.
|
In this work we propose a new deep learning tool called deep dictionary
learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at
a time. This requires solving only a simple (shallow) dictionary learning
problem whose solution is well known. We apply the proposed technique to some
benchmark deep learning datasets. We compare our results with other deep
learning tools like stacked autoencoder and deep belief network; and state of
the art supervised dictionary learning tools like discriminative KSVD and label
consistent KSVD. Our method yields better results than all of these.
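As a rough illustration of the greedy layer-at-a-time idea described above, the sketch below learns a two-level dictionary with plain NumPy. The alternating ridge-regularized updates are an illustrative stand-in for a sparse-coding solver; none of the names or parameter choices come from the paper.

```python
import numpy as np

def learn_layer(X, n_atoms, n_iter=50, lam=0.1, seed=0):
    # One shallow dictionary-learning problem, X ~ D @ Z, solved by
    # alternating ridge-regularized least squares. This is a simplified
    # stand-in for sparse coding; all parameters are illustrative.
    rng = np.random.default_rng(seed)
    d, _ = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # coefficient update (ridge regression, all samples at once)
        Z = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ X)
        # dictionary update, then renormalize the atoms
        D = X @ Z.T @ np.linalg.pinv(Z @ Z.T + lam * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, Z

def deep_dictionary(X, layer_sizes):
    # Greedy, layer-at-a-time training: the codes of one layer become
    # the input features of the next layer.
    features, dicts = X, []
    for n_atoms in layer_sizes:
        D, features = learn_layer(features, n_atoms)
        dicts.append(D)
    return dicts, features
```

Each layer is trained independently once the previous layer's codes are fixed, which is what makes the scheme "greedy".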
|
It is well known that helical magnetic fields undergo a so-called inverse
cascade by which their correlation length grows due to the conservation of
magnetic helicity in classical ideal magnetohydrodynamics (MHD). At high
energies above approximately $10$ MeV, however, classical MHD is necessarily
extended to chiral MHD and then the conserved quantity is
$\langle\mathcal{H}\rangle + 2 \langle\mu_5\rangle / \lambda$ with
$\langle\mathcal{H}\rangle$ being the mean magnetic helicity and
$\langle\mu_5\rangle$ being the mean chiral chemical potential of charged
fermions. Here, $\lambda$ is a (phenomenological) chiral feedback parameter. In
this paper, we study the evolution of the chiral MHD system with the initial
condition of nonzero $\langle\mathcal{H}\rangle$ and vanishing $\mu_5$. We
present analytic derivations for the time evolution of
$\langle\mathcal{H}\rangle$ and $\langle\mu_5\rangle$ that we compare to a
series of laminar and turbulent three-dimensional direct numerical simulations.
We find that the late-time evolution of $\langle\mathcal{H}\rangle$ depends on
the magnetic and kinetic Reynolds numbers ${\rm Re}_{_\mathrm{M}}$ and ${\rm
Re}_{_\mathrm{K}}$. For a high ${\rm Re}_{_\mathrm{M}}$ and ${\rm
Re}_{_\mathrm{K}}$ where turbulence occurs, $\langle\mathcal{H}\rangle$
eventually evolves in the same way as in classical ideal MHD where the inverse
correlation length of the helical magnetic field scales with time $t$ as
$k_\mathrm{p} \propto t^{-2/3}$. For low Reynolds numbers, where the velocity
field is negligible, the scaling changes to $k_\mathrm{p} \propto
t^{-1/2}\ln\left(t/t_\mathrm{log}\right)$. After being rapidly
generated, $\langle\mu_5\rangle$ always decays together with $k_\mathrm{p}$,
i.e. $\langle\mu_5\rangle \approx k_\mathrm{p}$, with a time evolution that
depends on whether the system is in the limit of low or high Reynolds numbers.
|
It is well known that one can ignore parts of a belief network when computing
answers to certain probabilistic queries. It is also well known that the
ignorable parts (if any) depend on the specific query of interest and,
therefore, may change as the query changes. Algorithms based on jointrees,
however, do not seem to take computational advantage of these facts given that
they typically construct jointrees for worst-case queries; that is, queries for
which every part of the belief network is considered relevant. To address this
limitation, we propose in this paper a method for reconfiguring jointrees
dynamically as the query changes. The reconfiguration process aims at
maintaining a jointree which corresponds to the underlying belief network after
it has been pruned given the current query. Our reconfiguration method is
marked by three characteristics: (a) it is based on a non-classical definition
of jointrees; (b) it is relatively efficient; and (c) it can reuse some of the
computations performed before a jointree is reconfigured. We present
preliminary experimental results which demonstrate significant savings over
using static jointrees when query changes are considerable.
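For intuition on query-dependent pruning, here is a minimal sketch of one classical pruning rule: iteratively removing "barren" nodes, i.e. leaves of the belief network that are neither queried nor observed. This is not the paper's reconfiguration algorithm, and the dict-based graph representation is an assumption for illustration only.

```python
def prune_barren(parents, query, evidence):
    # parents: dict mapping each node to the set of its parents (a DAG).
    # Repeatedly remove leaves that are neither in the query nor in the
    # evidence; what remains is the part of the network relevant to the
    # current query under this single pruning rule.
    nodes = set(parents)
    keep = set(query) | set(evidence)
    changed = True
    while changed:
        changed = False
        children = {v: set() for v in nodes}
        for v in nodes:
            for p in parents[v]:
                if p in nodes:
                    children[p].add(v)
        for v in list(nodes):
            if not children[v] and v not in keep:
                nodes.discard(v)
                changed = True
    return nodes
```

As the query changes, the surviving set of nodes changes too, which is precisely why a jointree built for the worst-case query can be larger than necessary.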
|
In this talk I shall first make some brief remarks on quaternionic quantum
mechanics, and then describe recent work with A.C. Millard in which we show
that standard complex quantum field theory can arise as the statistical
mechanics of an underlying noncommutative dynamics.
|
Strong quantum fluctuations in magnetic systems can create disordered quantum
spin liquid phases of matter which are not predicted by classical physics. The
complexity of the exotic phenomena on display in spin liquids has led to a
great deal of theoretical and experimental interest. However, understanding the
fundamental nature of the excitations in these systems remains challenging. In
this work, we consider the Lifshitz quantum critical point in a two-dimensional
frustrated $XY$ antiferromagnet. At this point, quantum fluctuations destroy
long range order, leading to the formation of an algebraic Lifshitz spin
liquid. We demonstrate that the bosonic magnon excitations are long-lived and
well-defined in the Lifshitz spin liquid phase, though paradoxically, the
dynamic structure factor has a broad non-Lorentzian frequency distribution with
no single-particle weight. We resolve this apparent contradiction by showing
that the Lifshitz spin liquid suffers from an infrared catastrophe: An external
physical probe always excites an infinite number of arbitrarily low energy
quasiparticles, which leads to significant radiative broadening of the
spectrum.
|
We study a variation of the Stable Marriage problem, where every man and
every woman express their preferences as preference lists which may be
incomplete and contain ties. This problem is called the Stable Marriage problem
with Ties and Incomplete preferences (SMTI). We consider three optimization
variants of SMTI, Max Cardinality, Sex-Equal and Egalitarian, and empirically
compare the following methods to solve them: Answer Set Programming, Constraint
Programming, and Integer Linear Programming. For Max Cardinality, we compare these
methods with Local Search methods as well. We also empirically compare Answer
Set Programming with Propositional Satisfiability, for SMTI instances. This
paper is under consideration for acceptance in Theory and Practice of Logic
Programming (TPLP).
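For readers unfamiliar with SMTI, the hedged sketch below checks weak stability of a matching under ties and incomplete lists: a pair blocks only if both members strictly prefer each other to their current situation. The rank-dictionary encoding (smaller rank = more preferred, equal ranks = tie, missing entry = unacceptable) is an illustrative assumption, not taken from the paper.

```python
def weakly_stable(matching, men_prefs, women_prefs):
    # matching: dict man -> woman (or None if unmatched).
    # prefs: dict person -> dict of acceptable partner -> rank.
    wife = matching
    husband = {w: m for m, w in matching.items() if w is not None}

    def strictly_prefers(prefs, new, current):
        if new not in prefs:
            return False      # unacceptable partners never block
        if current is None:
            return True       # any acceptable partner beats being single
        return prefs[new] < prefs[current]

    for m in men_prefs:
        for w in men_prefs[m]:
            if (strictly_prefers(men_prefs[m], w, wife.get(m))
                    and strictly_prefers(women_prefs[w], m, husband.get(w))):
                return False  # (m, w) is a blocking pair
    return True
```

Under this weak-stability notion, a matching left unchanged by a tie (equal ranks) is not blocked, which is what makes the Max Cardinality, Sex-Equal and Egalitarian variants well posed over a nonempty set of stable matchings.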
|
Within the general framework of the previously introduced
quartet-metric/multi-component gravity, a theory of a massive scalar graviton
supplementing the massless tensor one is consistently deduced. The
peculiarities of the scalar-graviton field compared to the canonical scalar one
are demonstrated. The light scalar graviton is treated as an emergent dark
substance of the Universe: dark matter and/or dark energy depending on the
solution. The case with scalar graviton as dark energy responsible for the
late-time accelerated expansion of the Universe is studied in more detail. In
particular, it is shown that due to an attractor solution for the light scalar
graviton there naturally emerges at the classical level a tiny nonzero
effective cosmological constant, even in the absence of the Lagrangian one. The
prospects of going beyond the LCDM model via the scalar graviton are briefly indicated.
|
Plug-and-play Image Restoration (IR) has been widely recognized as a flexible
and interpretable method for solving various inverse problems by utilizing any
off-the-shelf denoiser as the implicit image prior. However, most existing
methods focus on discriminative Gaussian denoisers. Although diffusion models
have shown impressive performance for high-quality image synthesis, their
potential to serve as a generative denoiser prior for plug-and-play IR
methods remains to be further explored. While several other attempts have been
made to adopt diffusion models for image restoration, they either fail to
achieve satisfactory results or typically require an unacceptable number of
Neural Function Evaluations (NFEs) during inference. This paper proposes
DiffPIR, which integrates the traditional plug-and-play method into the
diffusion sampling framework. Compared to plug-and-play IR methods that rely on
discriminative Gaussian denoisers, DiffPIR is expected to inherit the
generative ability of diffusion models. Experimental results on three
representative IR tasks, including super-resolution, image deblurring, and
inpainting, demonstrate that DiffPIR achieves state-of-the-art performance on
both the FFHQ and ImageNet datasets in terms of reconstruction faithfulness and
perceptual quality with no more than 100 NFEs. The source code is available at
{\url{https://github.com/yuanzhi-zhu/DiffPIR}}
|
The p-d model which well describes the CuO_2 planes of the high-Tc
superconductors is studied by means of the Composite Operator Method (COM). The
relevant quasi-particle excitations are represented by composite operators. As
a result of taking into account spin excitations, we find a p-like band near
the Fermi level. The dispersion of this band gives a Fermi surface which is in
good agreement with the experimental measurements. Due to the strong mixing of
the relevant excitations, the spectral weight of this band is reduced and gives
a large Fermi surface in the moderately doped region. The dependence of the
calculated physical quantities on model parameters, temperature and doping, is
in very good agreement with the available Quantum Monte Carlo results.
|
Distributed machine learning approaches, including a broad class of federated
learning (FL) techniques, present a number of benefits when deploying machine
learning applications over widely distributed infrastructures. The benefits are
highly dependent on the details of the underlying machine learning topology,
which specifies the functionality executed by the participating nodes, their
dependencies and interconnections. Current systems lack the flexibility and
extensibility necessary to customize the topology of a machine learning
deployment. We present Flame, a new system that provides flexibility of the
topology configuration of distributed FL applications around the specifics of a
particular deployment context, and is easily extensible to support new FL
architectures. Flame achieves this via a new high-level abstraction Topology
Abstraction Graphs (TAGs). TAGs decouple the ML application logic from the
underlying deployment details, making it possible to specialize the application
deployment with reduced development effort. Flame is released as an open source
project, and its flexibility and extensibility support a variety of topologies
and mechanisms, and can facilitate the development of new FL methodologies.
|
We argue that the experimentally measured color transparency ratio is
directly related to the interacting hadron wave function at small transverse
separation, $b^2<1/Q^2$. We show that the present experimental data are
consistent with pure scaling behavior of the hadron-hadron and lepton-hadron
scattering inside the nuclear medium.
|
Let $h$ and $l$ be integers such that $0\le h\le 2$, $0\le l\le 4$. We obtain
asymptotic formulas for the numbers of solutions of the equations $n-3m=h$,
$n-5m=l$ in positive integers $m$ and $n$ of a special kind, $m\le X$.
|
Universality of correlation functions obtained in parametric random matrix
theory is explored in a multi-parameter formalism, through the introduction of
a diffusion matrix $D_{ij}(R)$, and compared to results from a multi-parameter
chaotic model. We show that certain universal correlation functions in 1-d are
no longer well defined by the metric distance between the points in parameter
space, due to a global topological dependence on the path taken. By computing
the density of diabolical points, which is found to increase quadratically
with the dimension of the space, we find a universal measure of the density of
diabolical points in chaotic systems.
|
The symmetry-breaking first-order phase transition between superfluid phases
$^3$He-A and $^3$He-B can be triggered extrinsically by ionising radiation or
heterogeneous nucleation arising from the details of the sample cell
construction. However, the role of potential homogeneous intrinsic nucleation
mechanisms remains elusive. Discovering and resolving the intrinsic processes
may have cosmological consequences, since an analogous first-order phase
transition, and the production of gravitational waves, has been predicted for
the very early stages of the expanding Universe in many extensions of the
Standard Model of particle physics. Here we introduce a new approach for
probing the phase transition in superfluid $^3$He. The setup consists of a
novel stepped-height nanofluidic sample container with close to atomically
smooth walls. The $^3$He is confined in five tiny nanofabricated volumes and
assayed non-invasively by NMR. Tuning of the state of $^3$He by confinement is
used to isolate each of these five volumes so that the phase transitions in
them can occur independently and free from any obvious sources of heterogeneous
nucleation. The small volumes also ensure that the transitions triggered by
ionising radiation are strongly suppressed. Here we present the preliminary
measurements using this setup, showing both strong supercooling of $^3$He-A and
superheating of $^3$He-B, with stochastic processes dominating the phase
transitions between the two. The objective is to study the nucleation as a
function of temperature and pressure over the full phase diagram, to both
better test the proposed extrinsic mechanisms and seek potential parallel
intrinsic mechanisms.
|
A strategy for obtaining low band gap oxide ferroelectrics based on charge
imbalance is described and illustrated by first principles studies of the
hypothetical compound Bi$_6$Ti$_4$O$_{17}$, which is an alternate stacking of
the ferroelectric Bi$_4$Ti$_3$O$_{12}$. We find that this compound is
ferroelectric, similar to Bi$_4$Ti$_3$O$_{12}$ although with a reduced
polarization. Importantly, calculations of the electronic structure with the
recently developed functional of Tran and Blaha yield a much reduced band gap
of 1.83 eV for this material compared to Bi$_4$Ti$_3$O$_{12}$. Therefore,
Bi$_6$Ti$_4$O$_{17}$ is predicted to be a low band gap ferroelectric material.
|
We use high-resolution angle-resolved photoemission spectroscopy to
investigate the electronic structure of the antiferromagnetic heavy fermion
compound CePt2In7, which is a member of the CeIn3-derived heavy fermion
material family. Weak hybridization between the 4f electron states and the
conduction bands was identified in CePt2In7 at low temperature, much weaker
than that in other heavy fermion compounds such as CeIrIn5 and CeRhIn5. The Ce
4f spectrum
shows fine structures near the Fermi energy, reflecting the crystal electric
field splitting of the 4f^1_5/2 and 4f^1_7/2 states. Also, we find that the
Fermi surface has a strongly three-dimensional topology, in agreement with
density-functional theory calculations.
|
This is the problem of the 8$^\mathrm{th}$ International Experimental Physics
Olympiad (EPO). The task of EPO8 is to determine the Planck constant
$\hbar=h/2\pi$ using the given set-up with an LED. If you have an idea how to do
it, do it and send us the result; skip the reading of the detailed step by step
instructions of increasing difficulty. We expect the participants to follow
the suggested items -- they are instructive for physics education in general.
Only the reading of the historical remarks given in the first section can be
omitted during the Olympiad without loss of generality. Participants should
try to solve as many tasks as they can without paying attention to the age
categories: give your best.
|
We present fully general relativistic (GR) simulations of binary white
dwarf-neutron star (WDNS) inspiral and merger. The initial binary is in a
circular orbit at the Roche critical separation. The goal is to determine the
ultimate fate of such systems. We focus on binaries whose total mass exceeds
the maximum mass (Mmax) a cold, degenerate EOS can support against
gravitational collapse. The time and length scales span many orders of
magnitude, making fully general relativistic hydrodynamic (GRHD) simulations
computationally prohibitive. For this reason, we model the WD as a
"pseudo-white dwarf" (pWD) as in our binary WDNS head-on collisions study
[PRD83:064002,2011]. Our GRHD simulations of a pWDNS system with a
0.98-solar-mass WD and a 1.4-solar-mass NS show that the merger remnant is a
spinning Thorne-Zytkow-like Object (TZlO) surrounded by a massive disk. The
final total rest mass exceeds Mmax, but the remnant does not collapse promptly.
To assess whether the object will ultimately collapse after cooling, we
introduce radiative thermal cooling. We first apply our cooling algorithm to
TZlOs formed in WDNS head-on collisions, and show that these objects collapse
and form black holes on the cooling time scale, as expected. However, when we
cool the spinning TZlO formed in the merger of a circular-orbit WDNS binary,
the remnant does not collapse, demonstrating that differential rotational
support is sufficient to prevent collapse. Given that the final total mass
exceeds Mmax, magnetic fields and/or viscosity may redistribute angular
momentum and ultimately lead to delayed collapse to a BH. We infer that the
merger of realistic massive WDNS binaries likely will lead to the formation of
spinning TZlOs that undergo delayed collapse.
|
We have investigated the atomic structure of superconducting Ca-intercalated
bilayer graphene on a SiC(0001) substrate using total-reflection high-energy
positron diffraction. By comparing the experimental rocking-curves with ones
calculated for various structural models using a full-dynamical theory, we have
found that Ca atoms are intercalated in the graphene-buffer interlayer, rather
than between the two graphene layers. From transport measurements, the
superconducting transition was observed at Tc_onset = 4 K for this
structure. This study is the first to clearly identify the relation between the
atomic arrangement and superconductivity in Ca-intercalated bilayer graphene.
|
We prove some improved estimates for the Ginzburg-Landau energy (with or
without magnetic field) in two dimensions, relating the asymptotic energy of an
arbitrary configuration to its vortices and their degrees, with possibly
unbounded numbers of vortices. The method is based on a localisation of the
``ball construction method" combined with a mass displacement idea which allows
one to compensate for negative errors in the ball construction estimates by
energy ``displaced" from nearby.
Under good conditions, our main estimate yields a lower bound on the
energy which includes a finite order ``renormalized energy" of vortex
interaction, up to the best possible precision, i.e. with only a $o(1)$ error
per vortex, and is complemented by local compactness results on the vortices.
This is used crucially in a forthcoming paper relating minimizers of the
Ginzburg-Landau energy with the Abrikosov lattice. It can also serve to provide
lower bounds for weighted Ginzburg-Landau energies.
|
We study the critical Ising model on the square lattice in bounded simply
connected domains with + and free boundary conditions. We relate the energy
density of the model to a fermionic observable and compute its scaling limit by
discrete complex analysis methods. As a consequence, we obtain a simple exact
formula for the scaling limit of the energy field one-point function in terms
of hyperbolic metric. This confirms the predictions originating in physics, but
also provides a higher precision.
|
We study the possibility of producing single fourth-SM-family fermions at
electron-positron colliders via anomalous gamma-f4-f interactions.
|
The topology of closed manifolds forces interacting charges to appear in
pairs. We take advantage of this property in the setting of the conformal
boundary of $\mathrm{AdS}_5$ spacetime, topologically equivalent to the closed
manifold $S^1\times S^3$, by considering the coupling of two massless opposite
charges on it. Taking the interaction potential as the analog of Coulomb
interaction (derived from a fundamental solution of the $S^3$ Laplace-Beltrami
operator), a conformal $S^1\times S^3$ metric deformation is proposed, such
that free motion on the deformed metric is equivalent to motion on the round
metric in the presence of the interaction potential. We give explicit
expressions for the generators of the conformal algebra in the representation
induced by the metric deformation.
By identifying the charge as the color degree of freedom in QCD, and the
two-charge system as a quark--anti-quark system, we argue that the associated
conformal wave operator equation could provide a realistic quantum mechanical
description of the simplest QCD system, the mesons.
Finally, we discuss the possibility of employing the compactification radius,
$R$, as another scale along $\Lambda_{QCD}$, by means of which, upon
reparametrizing $Q^2c^2$ as $\left( Q^2c^2 +\hbar^2 c^2/R^2\right)$, a
perturbative treatment of processes in the infrared could be approached.
|
Digital signatures are one of the simplest cryptographic building blocks that
provide appealing security characteristics such as authenticity,
unforgeability, and undeniability. In 1984, Shamir developed the first
Identity-based signature (IBS) to simplify public key infrastructure and
circumvent the need for certificates. It makes the process uncomplicated by
enabling users to verify digital signatures using only the identifiers of
signers, such as email addresses or phone numbers. Nearly all existing IBS
protocols rely on hard problems based on theoretical assumptions.
Unfortunately, these hard problems are not secure in the quantum realm. Thus,
designing IBS algorithms that can withstand quantum attacks and ensure
long-term security is an important direction for future research. Quantum
cryptography (QC) is one such approach. In this paper, we propose an IBS based
on QC. Our scheme's security is based on the laws of quantum mechanics. It
thereby achieves long-term security and provides resistance against quantum
attacks. We verify the proposed design's correctness and feasibility by
simulating it in a prototype quantum device and the IBM Qiskit quantum
simulator. The implementation code in Qiskit, as a Jupyter notebook, is provided
in the Annexure. Moreover, we discuss the application of our design in secure
email communication.
|
LiDAR has become one of the primary sensors in robotics and autonomous systems
for high-accuracy situational awareness. In recent years, multi-modal LiDAR
systems emerged, and among them, LiDAR-as-a-camera sensors provide not only 3D
point clouds but also fixed-resolution 360{\deg} panoramic images by encoding
either depth, reflectivity, or near-infrared light in the image pixels. This
potentially brings computer vision capabilities on top of the potential of
LiDAR itself. In this paper, we are specifically interested in utilizing LiDARs
and LiDAR-generated images for tracking Unmanned Aerial Vehicles (UAVs) in
real time, which can benefit applications including docking, remote
identification, or counter-UAV systems, among others. This is, to the best of
our knowledge, the first work that explores the possibility of fusing the
images and point cloud generated by a single LiDAR sensor to track a UAV
without a priori known initialized position. We trained a custom YOLOv5 model
for detecting UAVs based on the panoramic images collected in an indoor
experiment arena with a MOCAP system. By integrating with the point cloud, we
are able to continuously provide the position of the UAV. Our experiment
demonstrated the effectiveness of the proposed UAV tracking approach compared
with methods based only on point clouds or images. Additionally, we evaluated
the real-time performance of our approach on the Nvidia Jetson Nano, a popular
mobile computing platform.
|
Let G be a discrete group which acts properly and isometrically on a complete
CAT(0)-space X. Consider an integer d with d=1 or d greater or equal to 3 such
that the topological dimension of X is bounded by d. We show the existence of a
G-CW-model E_fin(G) for the classifying space for proper G-actions with
dim(E_fin(G)) less or equal to d. Provided that the action is also cocompact,
we prove the existence of a G-CW-model E_vcyc(G) for the classifying space of
the family of virtually cyclic subgroups such that dim(E_vcyc(G)) is less or
equal to d+1.
|
Preliminary trajectory design is a global search problem that seeks multiple
qualitatively different solutions to a trajectory optimization problem. Due to
its high dimensionality and non-convexity, and the frequent adjustment of
problem parameters, the global search becomes computationally demanding. In
this paper, we exploit the clustering structure in the solutions and propose an
amortized global search (AmorGS) framework. We use deep generative models to
predict trajectory solutions that share similar structures with previously
solved problems, which accelerates the global search for unseen parameter
values. Our method is evaluated using De Jong's 5th function and a low-thrust
circular restricted three-body problem.
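De Jong's 5th function (also known as Shekel's foxholes), used above as an evaluation benchmark, is a standard highly multimodal test problem with 25 narrow basins on a 5x5 grid. A minimal NumPy implementation of its conventional definition is sketched below; the grid ordering is the usual convention, assumed here rather than taken from the paper.

```python
import numpy as np

def dejong5(x1, x2):
    # De Jong's 5th function: 25 "foxholes" centered on the grid
    # {-32, -16, 0, 16, 32}^2; each foxhole j contributes a narrow basin.
    vals = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])
    a1 = np.tile(vals, 5)      # first coordinate varies fastest
    a2 = np.repeat(vals, 5)
    j = np.arange(1, 26)
    s = np.sum(1.0 / (j + (x1 - a1) ** 6 + (x2 - a2) ** 6))
    return 1.0 / (0.002 + s)
```

The global minimum sits at the j = 1 foxhole near (-32, -32), with a value close to 1, while the basins become progressively shallower as j grows; this many-basin structure is what makes it a useful test of global-search methods like the one proposed above.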
|
Starting from a microscopic model of liquids, we construct an effective
theory of an overlap field through duplication of the system and
coarse-graining. We then propose a recipe to extract a relaxation time and two
characteristic length scales of a supercooled liquid from this effective field
theory. Appealing to the Ginzburg-Landau-Wilson paradigm near the putative
critical point, we further conclude that this effective field theory resides
within the Ising universality class.
|
This article reports the measurement of the ionization quenching factor in
germanium for nuclear recoil energies between 0.4 and 6.3 keV$_{nr}$. Precise
knowledge of this factor in this energy range is relevant for coherent elastic
neutrino-nucleus scattering and low mass dark matter searches with
germanium-based detectors. Nuclear recoils were produced in a thin high-purity
germanium target with a very low energy threshold via irradiation with
monoenergetic neutron beams. The energy dependence of the ionization quenching
factor was directly measured via kinematically constrained coincidences with
surrounding liquid scintillator based neutron detectors. The systematic
uncertainties of the measurements are discussed in detail. With measured
quenching factors between 0.16 and 0.23 in the [0.4, 6.3] keV$_{nr}$ energy
range, the data are compatible with the Lindhard theory with a parameter $k$ of
0.162 $\pm$ 0.004 (stat+sys).
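For reference, the Lindhard prediction quoted above can be evaluated from the standard parametrization $\epsilon = 11.5\,Z^{-7/3}E_R$, $g(\epsilon)=3\epsilon^{0.15}+0.7\epsilon^{0.6}+\epsilon$, $Q = kg/(1+kg)$, with $E_R$ in keV$_{nr}$. The sketch below uses the fitted $k=0.162$ and $Z=32$ for germanium; this is the textbook formula, not the paper's analysis code.

```python
def lindhard_quenching(E_keV, k=0.162, Z=32):
    # Standard Lindhard parametrization of the ionization quenching
    # factor for a nuclear recoil of energy E_keV (in keV_nr) in a
    # target of atomic number Z (Z = 32 for germanium).
    eps = 11.5 * Z ** (-7.0 / 3.0) * E_keV
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)
```

Over the measured [0.4, 6.3] keV$_{nr}$ range this evaluates to roughly 0.16-0.22, consistent with the quenching factors reported above.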
|
We study theoretically the replication of Kinetoplast DNA consisting of
several thousand separate mini-circles found in organisms of the class
Kinetoplastida. When the cell is not actively dividing these are topologically
connected in a marginally linked network of rings with only one connected
component. During cell division each mini-circle is removed from the network,
duplicated and then re-attached, along with its progeny. We study this process
under the hypothesis that there is a coupling between the topological state of
the mini-circles and the expression of genetic information encoded on them,
leading to the production of Topoisomerase. This model describes a
self-regulating system capable of full replication that reproduces several
previous experimental findings. We find that the fixed point of the system
depends on a primary free parameter of the model: the ratio between the rate of
removal of mini-circles from the network (R) and their (re)attachment rate (A).
The final topological state is found to be that of a marginally linked network
structure in which the fraction of mini-circles linked to the largest connected
component approaches unity as R/A decreases. Finally we discuss how this may
suggest an evolutionary trade-off between the speed of replication and the
accuracy with which a fully topologically linked state is produced.
|
The adiabatic quantum-flux-parametron (AQFP) is an energy-efficient
superconductor logic family that utilizes adiabatic switching. AQFP gates are
powered and clocked by ac excitation current; thus, to operate AQFP circuits at
high clock frequencies, it is required to carefully design the characteristic
impedance of excitation lines (especially above AQFP gates) so that microwave
excitation current can propagate without reflections in the entire circuit. In
the present study, we design the characteristic impedance of the excitation
line using InductEx, which is a three-dimensional parameter extractor for
superconductor devices. We adjust the width of an excitation line using
InductEx such that the characteristic impedance becomes 50 {\Omega} even above
an AQFP gate. Then, we fabricate test circuits to verify the impedance of the
excitation line. We measure the impedance using time-domain reflectometry
(TDR). We also measure the S parameters of the excitation line to investigate
the maximum available clock frequency. Our experimental results indicate that
the characteristic impedance of the excitation line agrees well with the design
value even above AQFP gates, and that clock frequencies beyond 5 GHz are
available in large-scale AQFP circuits.
|
We summarize our recent results that model the formation of uniform spherical
silver colloids prepared by mixing iso-ascorbic acid and silver-amine complex
solutions in the absence of dispersants. We found that the experimental results
can be modeled effectively by the two-stage formation mechanism used previously
to model the preparation of colloidal gold spheres. The equilibrium
concentration of silver atoms and the surface tension of silver precursor
nanocrystals are both treated as free parameters, and the experimental reaction
time scale is fit by a narrow region of this two-parameter space. The kinetic
parameter required to match the final particle size is found to be very close
to that used previously in modeling the formation of uniform gold particles,
suggesting that similar kinetics governs the aggregation process. The model
also reproduces semi-quantitatively the effects of temperature and solvent
viscosity on particle synthesis.
|
Atmospherical mesoscale models can offer unique potentialities to
characterize and discriminate potential astronomical sites. Our team has
recently completely validated the Meso-Nh model above Dome C (Lascaux et al.
2009, 2010). Using all the measurements of CN2 profiles (15 nights) performed
so far at Dome C during winter (Trinquet et al. 2008), we proved that
the model can reconstruct, on rich statistical samples, reliable values of all
the three most important parameters characterizing the turbulence features of
an antarctic site: the surface layer thickness, the seeing in the free
atmosphere and in the surface layer. Using the same Meso-Nh model configuration
validated above Dome C, an extended study is now on-going for other sites above
the antarctic plateau, more precisely South Pole and Dome A. In this
contribution we present the most important results obtained in the model
validation process and the results obtained in the comparison between different
astronomical sites above the internal plateau. The Meso-Nh model confirms its
ability to discriminate between different optical turbulence behaviors, and
there is evidence that the three sites have different characteristics regarding
the seeing and the surface layer thickness. We highlight that this study
provides the first homogeneous estimate, done with comparable statistics, of
the optical turbulence developed in the whole 20-22 km above the ground at Dome
C, South Pole and Dome A.
|
Epidemic models with inhomogeneous populations have been used to study major
outbreaks and recently Britton and Lindenstrand \cite{BL} described the case
when latency and infectivity have independent gamma distributions. They found
that variability in these random variables had opposite effects on the epidemic
growth rate. That rate increased with greater variability in latency but
decreased with greater variability in infectivity. Here we extend their result
by using the McKay bivariate gamma distribution for the joint distribution of
latency and infectivity, recovering the above effects of variability but
allowing possible correlation. We use methods of stochastic rate processes to
obtain explicit solutions for the growth of the epidemic and the evolution of
the inhomogeneity and information entropy. We obtain a closed analytic solution
to the evolution of the distribution of the number of uninfected individuals as
the epidemic proceeds, and a concomitant expression for the decay of entropy.
The family of McKay bivariate gamma distributions has a tractable information
geometry which provides a framework in which the evolution of distributions can
be studied as the outbreak grows, with a natural distance structure for
quantitative tracking of progress.
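The additive construction behind the McKay bivariate gamma distribution makes correlated latency-infectivity pairs easy to simulate. The sketch below is our own illustration (not the paper's code), using the standard parametrization in which $X\sim\Gamma(a,1/c)$, $W\sim\Gamma(b,1/c)$ independent, and $Y=X+W$, giving $\mathrm{corr}(X,Y)=\sqrt{a/(a+b)}$:

```python
import random

def sample_mckay(a, b, c, n, seed=0):
    """Sample n pairs (X, Y) from a McKay bivariate gamma distribution.

    Additive construction: X ~ Gamma(a, scale=1/c), W ~ Gamma(b, scale=1/c)
    independent, Y = X + W, so 0 < X < Y almost surely.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gammavariate(a, 1.0 / c)
        w = rng.gammavariate(b, 1.0 / c)
        pairs.append((x, x + w))
    return pairs

pairs = sample_mckay(a=2.0, b=3.0, c=1.0, n=20000)
xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / len(xs)
sx = (sum((u - mx) ** 2 for u in xs) / len(xs)) ** 0.5
sy = (sum((v - my) ** 2 for v in ys) / len(ys)) ** 0.5
corr = cov / (sx * sy)  # theory: sqrt(a/(a+b)) = sqrt(0.4) ~ 0.632
```

Because the correlation depends only on the shape parameters, the variability of latency and infectivity can be tuned while keeping their dependence fixed.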
|
Two applications of Nash-Williams' theory of barriers to sequences on Banach
spaces are presented. The first is the $c_0$-saturation of $C(K)$ for
countable compacta $K$. The second is the construction of weakly null
sequences generalizing the example of Maurey-Rosenthal.
|
We investigate the quantum entanglement in rapidity space of the soft gluon
wave function of a quarkonium, in theories with non-trivial rapidity
evolutions. We find that the rapidity evolution drastically changes the
behavior of the entanglement entropy, at any given order in perturbation
theory. At large $N_c$, the reduced density matrices that "resum" the leading
rapidity-logs can be explicitly constructed, and shown to satisfy
Balitsky-Kovchegov (BK)-like evolution equations. We study their entanglement
entropy in a simplified $1+1$ toy model, and in 3D QCD. The entanglement
entropy in these cases, after re-summation, is shown to saturate the
Kolmogorov-Sinai bound of 1. Remarkably, in 3D QCD the essential growth rate of
the entanglement entropy is found to vanish at large rapidities, a result of
kinematical "quenching" in transverse space. The one-body reduction of the
entangled density matrix obeys a BFKL evolution equation, which can be recast
as an evolution in an emergent AdS space, at large impact-parameter and large
rapidity. This observation allows the extension of the perturbative wee parton
evolution at low-x, to a dual non-perturbative evolution of string bits in
curved AdS$_5$ space, with manifest entanglement entropy in the confining
regime.
|
The development of cloud computing makes it possible to move Big Data into
hybrid cloud services. This requires study of the processing systems and data
structures involved in order to provide QoS. Because there are many
bottlenecks, a monitoring and control system is required when performing a
query. We create models and optimization criteria for the design of systems in
hybrid cloud infrastructures. In this article we suggest approaches and
present the results obtained.
|
We live in a period in which bioinformatics is rapidly expanding: a
significant quantity of genomic data has been produced as a result of advances
in high-throughput genome sequencing technology, raising concerns about the
costs associated with data storage and transmission. The question of how to
properly compress genomic sequence data is still open. Many compression
methods for DNA have previously been proposed, both with and without machine
learning. Extending previous research, we propose a new architecture, a
modified DeepDNA, along with a new methodology that deploys a double-base
strategy for compressing DNA sequences. We validate the results by
experimenting on datasets of three sizes: 100, 243, and 356. The experimental
outcomes highlight our improved approach's superiority over existing
approaches for analyzing human mitochondrial genome data, such as DeepDNA.
|
We explore the entire form of S-Matrix elements of a potential $C_{n-1}$
Ramond-Ramond (RR) form field, a tachyon and two transverse scalar fields on
both world volume and transverse directions of type IIB and IIA superstring
theories. Apart from $<V_{C^{-2}}V_{\phi^{0}}V_{\phi ^{0}}V_{T ^{0}}>$ the
other scattering amplitude, namely $<V_{C^{-1}}V_{\phi^{-1}}V_{\phi ^{0}}V_{T
^{0}}>$ is also revealed. We then start to compare all singularity structures
of symmetric and asymmetric analysis, generating all infinite singularity
structures as well as all order $\alpha'$ contact interactions on the whole
directions. This leads to deriving various new contact terms and several new
restricted Bianchi identities in both type IIB and IIA. It is also shown that
just some of the new couplings of type IIB (IIA) string theory can be
re-verified in an Effective Field Theory (EFT) by pull-back of branes. To
construct the rest of S-matrix elements one needs to first derive restricted
world volume (or bulk) Bianchi identities and then discover new EFT couplings
in both type IIB and IIA. Finally the presence of commutator of scalar fields
inside the exponential of Wess-Zumino action for non-BPS branes has been
confirmed as well.
|
We propose a basic theory of nonrelativistic spinful electrons on curves and
surfaces. In particular, we discuss the presence and effects of spin
connections, which describe how spinors and vectors couple to the geometry of
curves and surfaces. We derive explicit expressions of spin connections by
performing simple dimensional reduction from the three-dimensional flat space.
The spin connections act on electrons as spin-dependent magnetic fields, which
have been known as `pseudomagnetic fields' in the context of, e.g., graphene
and Dirac/Weyl semimetals. We propose that these spin-dependent magnetic fields
are present universally on curves and surfaces, acting on electrons regardless
of the kinds of their spinorial degrees of freedom and their dispersion
relations. We discuss how curvature effects via spin connections induce the
spin Hall effect and Dzyaloshinskii--Moriya interactions
between magnetic moments on curved surfaces, without relying on the
relativistic spin-orbit couplings. We also mention the importance of spin
connections on orbital physics of electrons on curved geometry.
|
The auxiliary/dynamic decoupling method of hep-th/0609001 applies to
perturbations of any co-homogeneity 1 background (such as a spherically
symmetric space-time or a homogeneous cosmology). Here it is applied to compute
the perturbations around a Schwarzschild black hole in an arbitrary dimension.
The method provides a clear insight for the existence of master equations. The
computation is straightforward, coincides with previous results of
Regge-Wheeler, Zerilli and Kodama-Ishibashi but does not require any ingenuity
in either the definition of variables or in fixing the gauge. We note that the
method's emergent master fields are canonically conjugate to the standard ones.
In addition, our action approach yields the auxiliary sectors.
|
Interacting argon atoms are simulated with a recently developed quantum
Langevin transport treatment that takes approximate account of the quantum
fluctuations inherent in microscopic many-body descriptions based on wave
packets. The mass distribution of the atomic clusters is affected significantly
near the critical temperature and thus it may be important to take account of
quantum fluctuations in molecular-dynamics simulations of cluster formation
processes.
|
The Zeldovich approximation (ZA) predicts the formation of a web of
singularities. While these singularities may only exist in the most formal
interpretation of the ZA, they provide a powerful tool for the analysis of
initial conditions. We present a novel method to find the skeleton of the
resulting cosmic web based on singularities in the primordial deformation
tensor and its higher order derivatives. We show that the A_3-lines predict the
formation of filaments in a two-dimensional model. We continue with
applications of the adhesion model to visualise structures in the local (z <
0.03) universe.
|
Property inference attacks allow an adversary to extract global properties of
the training dataset from a machine learning model. Such attacks have privacy
implications for data owners sharing their datasets to train machine learning
models. Several existing approaches for property inference attacks against deep
neural networks have been proposed, but they all rely on the attacker training
a large number of shadow models, which induces a large computational overhead.
In this paper, we consider the setting of property inference attacks in which
the attacker can poison a subset of the training dataset and query the trained
target model. Motivated by our theoretical analysis of model confidences under
poisoning, we design an efficient property inference attack, SNAP, which
obtains higher attack success and requires lower amounts of poisoning than the
state-of-the-art poisoning-based property inference attack by Mahloujifar et
al. For example, on the Census dataset, SNAP achieves 34% higher success rate
than Mahloujifar et al. while being 56.5x faster. We also extend our attack to
infer whether a certain property was present at all during training and
estimate the exact proportion of a property of interest efficiently. We
evaluate our attack on several properties of varying proportions from four
datasets and demonstrate SNAP's generality and effectiveness. An open-source
implementation of SNAP can be found at https://github.com/johnmath/snap-sp23.
|
Previous works for LiDAR-based 3D object detection mainly focus on the
single-frame paradigm. In this paper, we propose to detect 3D objects by
exploiting temporal information in multiple frames, i.e., the point cloud
videos. We empirically categorize the temporal information into short-term and
long-term patterns. To encode the short-term data, we present a Grid Message
Passing Network (GMPNet), which considers each grid (i.e., the grouped points)
as a node and constructs a k-NN graph with the neighbor grids. To update
features for a grid, GMPNet iteratively collects information from its
neighbors, thus mining the motion cues in grids from nearby frames. To further
aggregate the long-term frames, we propose an Attentive Spatiotemporal
Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA)
module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance
the vanilla GRU to focus on small objects and better align the moving objects.
Our overall framework supports both online and offline video object detection
in point clouds. We implement our algorithm based on prevalent anchor-based and
anchor-free detectors. The evaluation results on the challenging nuScenes
benchmark show the superior performance of our method, which achieved 1st
place on the leaderboard without any bells and whistles at the time the paper
was submitted.
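The grid message-passing idea can be sketched in a few lines of plain Python. This is a simplified illustration of k-NN graph construction and neighbor aggregation only; the toy coordinates, features, and the fixed 50/50 mixing rule are our own assumptions, not GMPNet's learned layers:

```python
import math

def knn_graph(centers, k):
    """Connect each node to its k nearest neighbors by Euclidean distance."""
    neighbors = []
    for i, ci in enumerate(centers):
        dists = sorted(
            (math.dist(ci, cj), j) for j, cj in enumerate(centers) if j != i
        )
        neighbors.append([j for _, j in dists[:k]])
    return neighbors

def message_passing(features, neighbors, steps):
    """Iteratively mix each node's feature with the mean of its neighbors'."""
    feats = [list(f) for f in features]
    for _ in range(steps):
        new = []
        for i, nbrs in enumerate(neighbors):
            agg = [sum(feats[j][d] for j in nbrs) / len(nbrs)
                   for d in range(len(feats[i]))]
            new.append([0.5 * (a + b) for a, b in zip(feats[i], agg)])
        feats = new
    return feats

# Three nearby grids and one distant grid, each with a 1-D feature.
centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
features = [[1.0], [0.0], [0.0], [10.0]]
out = message_passing(features, knn_graph(centers, k=2), steps=3)
```

After a few rounds each grid's feature mixes in information from its spatial neighbors, which is the intuition behind mining short-term motion cues across nearby frames.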
|
The fixed point spectra of Morava E-theory $E_n$ under the action of finite
subgroups of the Morava stabilizer group $\mathbb{G}_n$ and their K(n)-local
Spanier--Whitehead duals can be used to approximate the K(n)-local sphere in
certain cases. For any finite subgroup F of the height 2 Morava stabilizer
group at p=2 we prove that the K(2)-local Spanier--Whitehead dual of the
spectrum $E_2^{hF}$ is $\Sigma^{44}E_2^{hF}$. These results are analogous to
the known results at height 2 and p=3. The main computational tool we use is
the topological duality resolution spectral sequence for the spectrum
$E_2^{h\mathbb{S}_2^1}$ at p=2.
|
Recent quantum reconstruction projects demand pure unitary time evolution
which seems to contradict the collapse postulate. Inspired by Zurek's
environment assisted invariance idea, a natural unitary realization of
wavefunction collapse is proposed using Grothendieck group construction for the
tensor product commutative monoid.
|
We study the geometry and mechanics (both classical and quantum) of potential
wells described by squares of Chebyshev polynomials. We show that in a small
neighbourhood of the locus cut out by them in the space of hyperelliptic
curves, these systems exhibit low-orders/low-orders resurgence, where
perturbative fluctuations about the vacuum determine perturbative fluctuations
about non-perturbative saddles.
|
Chalcogenide glasses possess several outstanding properties that enable
groundbreaking applications, such as optical discs, infrared cameras, and
thermal imaging systems. Despite the ubiquitous use of these glasses, the
composition-property relationships in these materials remain poorly understood.
Here, we use a large experimental dataset comprising approx 24000 glass
compositions made of 51 distinct elements from the periodic table to develop
machine learning models for predicting 12 properties, namely, annealing point,
bulk modulus, density, Vickers hardness, Littleton point, Young's modulus, shear
modulus, softening point, thermal expansion coefficient, glass transition
temperature, liquidus temperature, and refractive index. These models, by far,
are the largest for chalcogenide glasses. Further, we use SHAP, a
game-theory-based algorithm, to interpret the output of the machine learning
models by analyzing the contribution of each element to the model's prediction
of a property. This provides a powerful tool for experimentalists to interpret
the models' predictions and hence design new glass compositions with targeted
properties. Finally, using the models, we develop several glass selection
charts that can potentially aid in the rational design of novel chalcogenide
glasses for various applications.
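As a toy stand-in for this kind of attribution analysis (SHAP computes Shapley-value attributions; here we substitute plain permutation importance, and the "composition to property" model and its features are invented purely for illustration), one can rank which inputs drive a fitted model's predictions:

```python
import random

# Invented toy model: the property depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
def glass_model(x):
    return 5.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, X, y, feature, seed=0):
    """Mean increase in squared error when one feature column is shuffled."""
    rng = random.Random(seed)
    base = sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    Xp = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    perm = sum((model(x) - t) ** 2 for x, t in zip(Xp, y)) / len(X)
    return perm - base

rng = random.Random(1)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [glass_model(x) for x in X]
imps = [permutation_importance(glass_model, X, y, f) for f in range(3)]
```

The importances recover the designed ranking: feature 0 dominates, feature 1 matters slightly, and the unused feature scores zero, mirroring how per-element SHAP contributions guide composition design.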
|
The neutrino mixing matrix has been measured to be of a form consistent with
tribimaximal mixing, while the quark mixing matrix is almost diagonal. A scheme
based on flavour A_4 symmetry for understanding these patterns simultaneously
is presented.
|
Generative Adversarial Networks (GANs) have become a very popular tool for
implicitly learning high-dimensional probability distributions. Several
improvements have been made to the original GAN formulation to address some of
its shortcomings like mode collapse, convergence issues, entanglement, poor
visual quality etc. While a significant effort has been directed towards
improving the visual quality of images generated by GANs, it is rather
surprising that objective image quality metrics have neither been employed as
cost functions nor as regularizers in GAN objective functions. In this work, we
show how a distance metric that is a variant of the Structural SIMilarity
(SSIM) index (a popular full-reference image quality assessment algorithm), and
a novel quality aware discriminator gradient penalty function that is inspired
by the Natural Image Quality Evaluator (NIQE, a popular no-reference image
quality assessment algorithm) can each be used as excellent regularizers for
GAN objective functions. Specifically, we demonstrate state-of-the-art
performance using the Wasserstein GAN gradient penalty (WGAN-GP) framework over
CIFAR-10, STL10 and CelebA datasets.
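For reference, the SSIM index underlying the proposed regularizer combines luminance, contrast, and structure comparisons. Below is a minimal single-window version in pure Python, illustrating the metric only; the paper's regularizer uses a differentiable, windowed variant inside the WGAN-GP loss. The constants follow the usual K1 = 0.01, K2 = 0.03 convention:

```python
def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two equal-length signals."""
    n = len(x)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.3]
s_same = ssim(img, img)                   # identical signals score 1.0
s_diff = ssim(img, [0.2, 0.4, 0.8, 0.4])  # mild distortion scores below 1.0
```

A distance such as 1 - ssim(...) (or the variant used in the paper) can then be added to a GAN objective as a quality-aware regularization term.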
|
The Gemini Planet Imager is an extreme AO instrument with an integral field
spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope
and the GPI instrument are very complex systems. Our goal is that the combined
telescope and instrument system may be run by one observer operating the
instrument, and one operator controlling the telescope and the acquisition of
light to the instrument. This requires a smooth integration between the two
systems and easily operated control interfaces. We discuss the definition of
the software and hardware interfaces, their implementation and testing, and the
integration of the instrument with the telescope environment.
|
We present a study of planar physical solutions to the Lorentz-Dirac equation
in a constant electromagnetic field. In this case, we reduce the Lorentz-Dirac
equation to a single second-order differential equation. We obtain the
asymptotics of physical solutions to this equation at large proper times. It
turns out that, in the crossed constant uniform electromagnetic field with
vanishing invariants, a charged particle goes to a universal regime at large
times. We find that the ratio of momentum components tends to a constant
determined only by the external field. This effect is essentially due to the
radiation reaction; there is no such effect for the Lorentz equation in this
field.
|
The concept of stability, originally introduced for polynomials, will be
extended to apply to the class of entire functions. This generalization will be
called Hurwitz stability, and the class of Hurwitz stable functions will serve
as the main focus of this paper. A first theorem will show how, given a
function of either of the Stieltjes classes, a Hurwitz stable function might be
constructed. A second approach to constructing Hurwitz stable functions, based
on using additional functions from the Laguerre-P\'{o}lya class, will be
presented in a second theorem.
|
Like a silver thread, quantum entanglement [1] runs through the foundations
and breakthrough applications of quantum information theory. It cannot arise
from local operations and classical communication (LOCC) and therefore
represents a more intimate relationship among physical systems than we may
encounter in the classical world. The `nonlocal' character of entanglement
manifests itself through a number of counterintuitive phenomena encompassing
Einstein-Podolsky-Rosen paradox [2,3], steering [4], Bell nonlocality [5] or
negativity of entropy [6,7]. Furthermore, it extends our abilities to process
information. Here, entanglement is used as a resource which needs to be shared
between several parties, possibly placed at remote locations. However,
entanglement is not the only manifestation of quantum correlations. Notably,
also separable quantum states can be used as a shared resource for quantum
communication. The experiment presented in this paper highlights the
quantumness of correlations in separable mixed states and the role of classical
information in quantum communication by demonstrating entanglement distribution
using merely a separable ancilla mode.
|
LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and
Mapping) serves as a basis for indoor cleaning, navigation, and many other
useful applications in both industry and household. From a series of LiDAR
scans, it constructs an accurate, globally consistent model of the environment
and estimates a robot position inside it. SLAM is inherently computationally
intensive; it is a challenging problem to realize a fast and reliable SLAM
system on mobile robots with a limited processing capability. To overcome such
hurdles, in this paper, we propose a universal, low-power, and
resource-efficient accelerator design for 2D LiDAR SLAM targeting
resource-limited FPGAs. As scan matching is at the heart of SLAM, the proposed
accelerator consists of dedicated scan matching cores on the programmable logic
part, and provides software interfaces to facilitate the use. Our accelerator
can be integrated into various SLAM methods, including the ROS (Robot Operating
System)-based ones, and users can switch to a different method without
modifying and re-synthesizing the logic part. We integrate the accelerator into
three widely-used methods, i.e., scan matching, particle filter, and
graph-based SLAM. We evaluate the design in terms of resource utilization,
speed, and quality of output results using real-world datasets. Experiment
results on a Pynq-Z2 board demonstrate that our design accelerates scan
matching and loop-closure detection tasks by up to 14.84x and 18.92x, yielding
4.67x, 4.00x, and 4.06x overall performance improvement in the above methods,
respectively. Our design enables the real-time performance while consuming only
2.4W and maintaining accuracy, which is comparable to the software counterparts
and even the state-of-the-art methods.
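Scan matching, the kernel the accelerator targets, can be illustrated by a tiny brute-force correlative matcher. This is a pure-software toy under simplified assumptions: the integer occupancy grid, the scan, and the translation-only search window are all invented here and are unrelated to the actual FPGA cores:

```python
def match_scan(occupied, scan, search):
    """Brute-force correlative scan matching on an integer grid.

    occupied: set of (x, y) occupied map cells; scan: list of (x, y) points;
    search: half-width of the square search window for the translation.
    Scores each candidate translation by how many shifted scan points land
    on occupied cells, and keeps the best.
    """
    best_score, best_shift = -1, (0, 0)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            score = sum((x + dx, y + dy) in occupied for x, y in scan)
            if score > best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift, best_score

occ = {(2, 3), (3, 3), (4, 3), (4, 4)}
scan = [(0, 0), (1, 0), (2, 0), (2, 1)]  # the same shape, shifted by (-2, -3)
shift, score = match_scan(occ, scan, search=5)
```

Real systems also search over rotation and use multi-resolution grids, but the inner loop, a large number of independent score evaluations, is exactly the kind of workload that maps well onto dedicated hardware cores.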
|
In this work, we perform a covariant treatment of quark-antiquark systems. We
calculate the spectra and wave functions using a formalism based on the
Covariant Spectator Theory (CST). Our results not only reproduce very well the
experimental data with a very small set of global parameters, but they also
allow a direct test of the predictive power of covariant kernels.
|
We consider a class of SUSY models in which the MSSM gauge group is
supplemented with a gauged $U(1)_{B-L}$ symmetry and a global $U(1)_{R}$
symmetry. This extension introduces only electrically neutral states, and the
new SUSY partners effectively double the number of states in the neutralino
sector that now includes a blino (from $B-L$) and singlino from a gauge singlet
superfield. If the DM density is saturated by a LSP neutralino, the model
yields quite a rich phenomenology depending on the DM composition. The LSP
relic density constraint provides a lower bound on the stop and gluino masses
of about 3 TeV and 4 TeV, respectively, which is testable in near-future
collider experiments such as the HL-LHC. The chargino mass lies between 0.24 TeV
and about 2.0 TeV, which can be tested based on the allowed decay channels. We
also find $m_{\tilde{\tau}_{1}}\gtrsim 500$ GeV, and
$m_{\tilde{e}},m_{\tilde{\mu}},m_{\tilde{\nu}^{S,P}} \gtrsim 1$ TeV. We
identify chargino-neutralino coannihilation processes in the mass region $0.24
\,{\rm TeV} \lesssim m_{\tilde{\chi}_{1}^{0}}\approx
m_{\tilde{\chi}_{1}^{\pm}}\lesssim 1.5$ TeV, and also coannihilation processes
involving stau, selectron, smuon and sneutrinos for masses around 1 TeV. In
addition, $A_{2}$ resonance solutions are found around 1 TeV, and $H_{2}$ and
$H_{3}$ resonance solutions are also shown around 0.5 TeV and 1 TeV. Some of
the $A_{2}$ resonance solutions with $\tan\beta \gtrsim 20$ may be tested by
the $A/H\rightarrow \tau^{+}\tau^{-}$ LHC searches. While the relic density
constraint excludes the bino-like DM, it is still possible to realize higgsino,
singlino and blino-like DM for various mass scales. We show that all these
solutions will be tested in future direct detection experiments such as
LUX-Zeplin and Xenon-nT.
|
The primary benefit of identifying a valid surrogate marker is the ability to
use it in a future trial to test for a treatment effect with shorter follow-up
time or less cost. However, previous work has demonstrated potential
heterogeneity in the utility of a surrogate marker. When such heterogeneity
exists, existing methods that use the surrogate to test for a treatment effect
while ignoring this heterogeneity may lead to inaccurate conclusions about the
treatment effect, particularly when the patient population in the new study has
a different mix of characteristics than the study used to evaluate the utility
of the surrogate marker. In this paper, we develop a novel test for a treatment
effect using surrogate marker information that accounts for heterogeneity in
the utility of the surrogate. We compare our testing procedure to a test that
uses primary outcome information (gold standard) and a test that uses surrogate
marker information, but ignores heterogeneity. We demonstrate the validity of
our approach and derive the asymptotic properties of our estimator and variance
estimates. Simulation studies examine the finite sample properties of our
testing procedure and demonstrate when our proposed approach can outperform the
testing approach that ignores heterogeneity. We illustrate our methods using
data from an AIDS clinical trial to test for a treatment effect using CD4 count
as a surrogate marker for RNA.
|
Given $\beta\in(1,2)$ the fat Sierpinski gasket $\mathcal S_\beta$ is the
self-similar set in $\mathbb R^2$ generated by the iterated function system
(IFS)
\[
f_{\beta,d}(x)=\frac{x+d}{\beta},\quad d\in\mathcal A:=\{(0, 0), (1,0),
(0,1)\}.
\] Then for each point $P\in\mathcal S_\beta$ there exists a sequence
$(d_i)\in\mathcal A^\mathbb N$ such that $P=\sum_{i=1}^\infty d_i/\beta^i$, and
the infinite sequence $(d_i)$ is called a \emph{coding} of $P$. In general, a
point in $\mathcal S_\beta$ may have multiple codings since the overlap region
$\mathcal O_\beta:=\bigcup_{c,d\in\mathcal A, c\ne
d}f_{\beta,c}(\Delta_\beta)\cap f_{\beta,d}(\Delta_\beta)$ has non-empty
interior, where $\Delta_\beta$ is the convex hull of $\mathcal S_\beta$. In
this paper we are interested in the invariant set
\[
\widetilde{\mathcal U}_\beta:=\left\{\sum_{i=1}^\infty \frac{d_i}{\beta^i}\in
\mathcal S_\beta: \sum_{i=1}^\infty\frac{d_{n+i}}{\beta^i}\notin\mathcal
O_\beta~\forall n\ge 0\right\}.
\]
Then each point in $ \widetilde{\mathcal U}_\beta$ has a unique coding. We
show that there is a transcendental number $\beta_c\approx 1.55263$ related to
the Thue-Morse sequence, such that $\widetilde{\mathcal U}_\beta$ has positive
Hausdorff dimension if and only if $\beta>\beta_{c}$. Furthermore, for
$\beta=\beta_c$ the set $\widetilde{\mathcal U}_\beta$ is uncountable but has
zero Hausdorff dimension, and for $\beta<\beta_c$ the set $\widetilde{\mathcal
U}_\beta$ is at most countable. Consequently, we also answer a conjecture of
Sidorov (2007). Our strategy is using combinatorics on words based on the
lexicographical characterization of $\widetilde{\mathcal U}_\beta$.
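For intuition, a coding can be evaluated numerically by truncating the series $P=\sum_{i=1}^\infty d_i/\beta^i$; the pure-Python sketch below (our own illustration) does this for $\beta=1.5$:

```python
def point_from_coding(digits, beta):
    """Evaluate P = sum_i d_i / beta^i for a finite prefix of a coding.

    digits: sequence of d_i from the alphabet {(0, 0), (1, 0), (0, 1)}.
    """
    px = py = 0.0
    scale = 1.0
    for dx, dy in digits:
        scale /= beta
        px += dx * scale
        py += dy * scale
    return (px, py)

beta = 1.5
# The constant coding (1,0)(1,0)... converges to the fixed point of
# f_{beta,(1,0)}, namely (1/(beta - 1), 0) = (2, 0) for beta = 1.5.
p = point_from_coding([(1, 0)] * 60, beta)
```

Truncating after 60 digits already determines the point to within $\beta^{-60}$, which is how codings pin down points of $\mathcal S_\beta$ in practice.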
|
We show that the complex absorbing potential (CAP) method for computing
scattering resonances applies to an abstractly defined class of black box
perturbations of the Laplacian in $\mathbb{R}^n$ which can be analytically
extended from $\mathbb{R}^n$ to a conic neighborhood in $\mathbb{C}^n$ near
infinity. The black box setting allows a unifying treatment of diverse problems
ranging from obstacle scattering to scattering on finite volume surfaces.
|
This paper has three principal goals: (1) to survey what is known about
mapping class and Torelli groups of simply connected compact Kaehler
manifolds, (2) to supplement these results, and (3) to present a list of
questions and open problems to stimulate future work. Apart from reviewing
general background, the paper
focuses on the case of hypersurfaces in projective space. We explain how older
results of Carlson--Toledo arXiv:alg-geom/9708002 and recent results of
Kreck--Su arXiv:2009.08054 imply that the homomorphism from the fundamental
group of the moduli space of hypersurfaces in P^4 to the mapping class group of
the underlying manifold has a very large kernel (contains a free group of rank
2) and has image of infinite index. This is in contrast to the case of curves,
where the homomorphism is an isomorphism.
|
We study the free complexification operation for compact quantum groups,
$G\to G^c$. We prove that, with suitable definitions, this induces a one-to-one
correspondence between free orthogonal quantum groups of infinite level, and
free unitary quantum groups satisfying $G=G^c$.
|
We developed a new method for determining the bulk etch rate based on both
cone height and base diameter measurements of the etched tracks. This
method is applied here for the calibration of CR39 and Makrofol nuclear track
detectors exposed to 158 A GeV In^{49+} and Pb^{82+} ions, respectively. For
CR39 the peaks corresponding to indium ions and their different fragments are
well separated from Z/beta = 7 to 49: the detection threshold is at REL ~ 50
MeV cm^2 g^{-1}, corresponding to a nuclear fragment with Z/beta = 7. The
calibration of Makrofol with Pb^{82+} ions has shown all peaks due to lead ions
and their fragments from Z/beta ~ 51 to 83 (charge pickup). The detection
threshold of Makrofol is at REL ~ 2700 MeV cm^2 g^{-1}, corresponding to a
nuclear fragment with Z/beta = 51.
|
Image based modeling and laser scanning are two commonly used approaches in
large-scale architectural scene reconstruction nowadays. In order to generate a
complete scene reconstruction, an effective way is to completely cover the
scene using ground and aerial images, supplemented by laser scanning on certain
regions with low texture and complicated structure. Thus, the key issue is to
accurately calibrate cameras and register laser scans in a unified framework.
To this end, we propose a three-step pipeline for complete scene
reconstruction by merging images and laser scans. First, images are captured
around the architecture in a multi-view and multi-scale way and are fed into a
structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the
SfM result, the laser scanning locations are automatically planned by
considering textural richness, structural complexity of the scene and spatial
layout of the laser scans. Finally, the images and laser scans are accurately
merged in a coarse-to-fine manner. Experimental evaluations on two ancient
Chinese architecture datasets demonstrate the effectiveness of our proposed
complete scene reconstruction pipeline.
|
In the matroid center problem, which generalizes the $k$-center problem, we
need to pick a set of centers that is an independent set of a matroid with rank
$r$. We study this problem in streaming, where elements of the ground set
arrive in the stream. We first show that any randomized one-pass streaming
algorithm that computes a better than $\Delta$-approximation for
partition-matroid center must use $\Omega(r^2)$ bits of space, where $\Delta$
is the aspect ratio of the metric and can be arbitrarily large. This shows a
quadratic separation between matroid center and $k$-center, for which the
Doubling algorithm gives an $8$-approximation using $O(k)$-space and one pass.
To complement this, we give a one-pass algorithm for matroid center that stores
at most $O(r^2\log(1/\varepsilon)/\varepsilon)$ points (viz., stream summary)
among which a $(7+\varepsilon)$-approximate solution exists, which can be found
by brute force, or a $(17+\varepsilon)$-approximation can be found with an
efficient algorithm. If we are allowed a second pass, we can compute a
$(3+\varepsilon)$-approximation efficiently; this also achieves almost the
known-best approximation ratio (of $3+\varepsilon$) with total running time of
$O((nr + r^{3.5})\log(1/\varepsilon)/\varepsilon + r^2(\log
\Delta)/\varepsilon)$, where $n$ is the number of input points.
We also consider the problem of matroid center with $z$ outliers and give a
one-pass algorithm that outputs a set of
$O((r^2+rz)\log(1/\varepsilon)/\varepsilon)$ points that contains a
$(15+\varepsilon)$-approximate solution. Our techniques extend to knapsack
center and knapsack center with outliers in a straightforward way, and we get
algorithms that use space linear in the size of a largest feasible set (as
opposed to quadratic space for matroid center).
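For contrast with the matroid constraint, the classical offline $k$-center heuristic of Gonzalez (farthest-point traversal, a 2-approximation) is easy to state. This sketch illustrates plain $k$-center only, not the streaming Doubling algorithm or the matroid-center algorithms above, and the toy point set is invented:

```python
import math

def gonzalez_k_center(points, k):
    """Greedy farthest-point traversal: a 2-approximation for k-center."""
    centers = [points[0]]
    while len(centers) < k:
        # Pick the point farthest from its nearest chosen center.
        farthest = max(points,
                       key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (9.0, 0.0)]
centers, radius = gonzalez_k_center(pts, k=3)
```

In matroid center the chosen centers must additionally form an independent set of a rank-$r$ matroid, which is what makes the streaming problem provably harder, as the $\Omega(r^2)$ lower bound shows.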
|