We study a supergravity model of inflation that depends essentially on one
parameter, which can be identified with the slope of the potential at the
origin. In this type of model the inflaton rolls at high energy from negative
values in a region of positive potential curvature. Inflation starts at the
point defined by the equation $\eta=1$. The curvature of the potential
eventually turns negative, and inflation ends when $\eta=-1$. No spontaneous
symmetry breaking mechanism is required here for inflation of the new type to
occur. The model naturally gives a bounded \textit{total} number of e-folds,
typically close to the number required for observable inflation and
independent of the initial conditions for the inflaton. The energy scale
introduced is fixed by the amplitude of the anisotropies and is of the order
of the supersymmetry breaking scale. The model can also accommodate a spectral
index larger or smaller than one without extreme fine-tuning. We show that it
is possible to obtain reasonable values for the cosmological parameters and,
as an example, we reproduce values obtained recently by Tegmark \textit{et
al.} from WMAP and SDSS data alone.
|
Quantum Annealing (QA) is a computational framework where a quantum system's
continuous evolution is used to find the global minimum of an objective
function over an unstructured search space. It can be seen as a general
metaheuristic for optimization problems, including NP-hard ones if we allow an
exponentially large running time. While QA is widely studied from a heuristic
point of view, little is known about theoretical guarantees on the quality of
the solutions obtained in polynomial time.
In this paper we use a technique borrowed from theoretical physics, the
Lieb-Robinson (LR) bound, and develop new tools proving that short,
constant-time quantum annealing guarantees constant-factor approximation
ratios for some optimization problems when restricted to bounded-degree
graphs. Informally, on bounded-degree graphs the LR bound allows us to
retrieve a (relaxed) locality argument, through which the approximation ratio
can be deduced by studying subgraphs of bounded radius.
We illustrate our tools on the problems MaxCut and Maximum Independent Set
for cubic graphs, providing explicit approximation ratios and the runtimes
needed to obtain them. Our results are of similar flavor to the well-known
ones obtained in the different but related QAOA (Quantum Approximate
Optimization Algorithm) framework.
Finally, we discuss theoretical and experimental arguments for further
improvements.
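For orientation, one common form of the Lieb-Robinson bound reads as follows
(constants and conventions vary across references; this statement is standard
background, not taken from the paper):
\[
\bigl\| [A_X(t), B_Y] \bigr\| \;\le\; C\, \|A_X\|\, \|B_Y\|\,
e^{-\mu\,(d(X,Y) - v|t|)},
\]
where $A_X(t)$ is the Heisenberg evolution of an observable supported on the
vertex set $X$, $d(X,Y)$ is the graph distance between the supports, and $v$
is the Lieb-Robinson velocity. On bounded-degree graphs $v$ is finite, which
is what makes the relaxed locality argument above possible.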
|
In this paper we describe a model of concurrency together with an algebraic
structure reflecting parallel composition. For the sake of simplicity we
restrict ourselves to linear concurrent programs, i.e., those with neither
loops nor branching. Such programs are given a semantics using cubical areas;
such a semantics is said to be geometric. The collection of all these cubical
areas enjoys a tensor product structure in the category of semilattices with
zero. These results naturally extend to fully fledged concurrent programs up
to some technical tricks.
|
In recent years, algebraic studies of the differential calculus and integral
calculus in the forms of differential algebra and Rota-Baxter algebra have been
merged together to reflect the close relationship between the two calculi
through the First Fundamental Theorem of Calculus. In this paper we study this
relationship from a categorical point of view in the context of distributive
laws, which can be traced back to the distributive law of multiplication over
addition. The monad giving Rota-Baxter algebras and the comonad giving
differential algebras are constructed. Then a mixed distributive law of the
monad over the comonad is established. As a consequence, we obtain monads and
comonads giving the composite structures of differential and Rota-Baxter
algebras.
|
Machine Learning (ML) for Mineral Prospectivity Mapping (MPM) remains a
challenging problem as it requires the analysis of associations between
large-scale multi-modal geospatial data and few historical mineral commodity
observations (positive labels). Recent MPM works have explored Deep Learning
(DL) as a modeling tool with more representation capacity. However, these
overparameterized methods may be more prone to overfitting due to their
reliance on scarce labeled data. While a large quantity of unlabeled geospatial
data exists, no prior MPM works have considered using such information in a
self-supervised manner. Our MPM approach uses a masked image modeling framework
to pretrain a backbone neural network in a self-supervised manner using
unlabeled geospatial data alone. After pretraining, the backbone network
provides feature extraction for downstream MPM tasks. We evaluated our approach
alongside existing methods to assess mineral prospectivity of Mississippi
Valley Type (MVT) and Clastic-Dominated (CD) Lead-Zinc deposits in North
America and Australia. Our results demonstrate that self-supervision promotes
robustness in learned features, improving prospectivity predictions.
Additionally, we leverage explainable artificial intelligence techniques to
demonstrate that individual predictions can be interpreted from a geological
perspective.
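As a sketch of the self-supervised pretraining idea, the following minimal
masked-image-modeling loop illustrates the mechanism; the backbone, patch
size, masking ratio, and data shapes are illustrative assumptions, not the
authors' configuration.

    # Minimal masked-image-modeling pretraining sketch (illustrative only).
    import torch
    import torch.nn as nn

    class TinyBackbone(nn.Module):
        """Stand-in encoder-decoder; a real MPM backbone would be larger."""
        def __init__(self, channels: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
            self.decoder = nn.Conv2d(32, channels, 3, padding=1)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def mim_step(model, batch, mask_ratio=0.6, patch=16):
        """One self-supervised step: hide random patches, reconstruct them."""
        b, c, h, w = batch.shape
        mask = (torch.rand(b, 1, h // patch, w // patch) < mask_ratio).float()
        mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
        recon = model(batch * (1 - mask))     # model sees only visible pixels
        return ((recon - batch) ** 2 * mask).sum() / mask.sum().clamp(min=1)

    model = TinyBackbone()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    unlabeled = torch.randn(4, 8, 64, 64)     # fake multi-modal raster stack
    loss = mim_step(model, unlabeled)
    loss.backward(); opt.step()

After pretraining, the encoder would be kept as the feature extractor for the
downstream prospectivity classifier.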
|
We report on magnetic, thermodynamic, thermal-expansion, and detailed optical
experiments on the layered compound $\alpha$-RuCl$_3$, focusing on the THz and
sub-gap optical response across the structural phase transition from the
monoclinic high-temperature to the rhombohedral low-temperature structure,
where the stacking sequence of the molecular layers is changed. This type of
phase transition is characteristic of a variety of trihalides crystallizing
in a layered honeycomb-type structure, and it is so far unique in that the
low-temperature phase exhibits the higher symmetry. One motivation is to
unravel the microscopic nature of spin-orbital excitations via a study of
temperature and symmetry-induced changes. We document a number of highly
unusual findings: a characteristic two-step hysteresis of the structural phase
transition, accompanied by a dramatic change of the reflectivity; an electronic
excitation that appears in a narrow temperature range just across the
structural phase transition; and a complex dielectric loss spectrum in the THz
regime, which could indicate remnants of Kitaev physics. Despite significant
symmetry changes across the monoclinic to rhombohedral phase transition, phonon
eigenfrequencies and the majority of spin-orbital excitations are not strongly
influenced. Evidently, the symmetry of the individual molecular layers
determines the eigenfrequencies of most of these excitations. Finally, from
this combined terahertz, far- and mid-infrared study we try to shed some light
on the so far unresolved low-energy ($<1$ eV) electronic structure of the
ruthenium $4d^5$
electrons in $\alpha$-RuCl$_3$.
|
We present updates to \textsc{prism}, a photometric transit-starspot model,
and \textsc{gemc}, a hybrid optimisation code combining MCMC and a genetic
algorithm. We then present high-precision photometry of four transits in the
WASP-6 planetary system, two of which contain a starspot anomaly. All four
transits were modelled using \textsc{prism} and \textsc{gemc}, and the physical
properties of the system calculated. We find the mass and radius of the host
star to be $0.836\pm 0.063\,{\rm M}_\odot$ and $0.864\pm0.024\,{\rm R}_\odot$,
respectively. For the planet we find a mass of $0.485\pm 0.027\,{\rm M}_{\rm
Jup}$, a radius of $1.230\pm0.035\,{\rm R}_{\rm Jup}$ and a density of
$0.244\pm0.014\,\rho_{\rm Jup}$. These values are consistent with those found
in the literature. In the likely hypothesis that the two spot anomalies are
caused by the same starspot or starspot complex, we measure the star's rotation
period and velocity to be $23.80 \pm 0.15$\,d and $1.78 \pm
0.20$\,km\,s$^{-1}$, respectively, at a co-latitude of 75.8$^\circ$. We find
that the sky-projected angle between the stellar spin axis and the planetary
orbital axis is $\lambda = 7.2^{\circ} \pm 3.7^{\circ}$, indicating axial
alignment. Our results are consistent with and more precise than published
spectroscopic measurements of the Rossiter-McLaughlin effect. These results
suggest that WASP-6\,b formed at a much greater distance from its host star and
suffered orbital decay through tidal interactions with the protoplanetary disc.
|
A robust variational approach is used to investigate the sensitivity of the
rotation-vibration spectrum of phosphine (PH$_3$) to a possible cosmological
variation of the proton-to-electron mass ratio, $\mu$. Whilst the majority of
computed sensitivity coefficients, $T$, involving the low-lying vibrational
states acquire the expected values of $T\approx-1$ and $T\approx-1/2$ for
rotational and ro-vibrational transitions, respectively, anomalous
sensitivities are uncovered for the $A_1\!-\!A_2$ splittings in the
$\nu_2/\nu_4$, $\nu_1/\nu_3$ and $2\nu_4^{\ell=0}/2\nu_4^{\ell=2}$ manifolds of
PH$_3$. A pronounced Coriolis interaction between these states in conjunction
with accidentally degenerate $A_1$ and $A_2$ energy levels produces a series of
enhanced sensitivity coefficients. Phosphine is expected to occur in a number
of different astrophysical environments and has potential for investigating a
drifting constant. Furthermore, the displayed behaviour hints at a wider trend
in molecules of ${\bf C}_{3\mathrm{v}}\mathrm{(M)}$ symmetry, thus
demonstrating that the splittings induced by higher-order ro-vibrational
interactions are well suited for probing $\mu$ in other symmetric top molecules
in space, since these low-frequency transitions can be straightforwardly
detected by radio telescopes.
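For reference, the sensitivity coefficients quoted above follow the
conventional definition (our notation):
\[
\frac{\Delta\nu}{\nu} \;=\; T\,\frac{\Delta\mu}{\mu},
\qquad
T \;=\; \frac{\mu}{\nu}\,\frac{\mathrm{d}\nu}{\mathrm{d}\mu},
\]
so that a purely rotational frequency, $\nu\propto\mu^{-1}$, gives $T=-1$,
while a harmonic vibrational frequency, $\nu\propto\mu^{-1/2}$, gives
$T=-1/2$; near-degeneracies can make $|T|$ far larger.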
|
A novel atomistic effective Hamiltonian scheme, incorporating an original and
simple bilinear energetic coupling, is developed and used to investigate the
temperature dependent physical properties of the prototype antiferroelectric
PbZrO3 (PZO) system. This scheme reproduces very well the known experimental
hallmarks of the complex Pbam orthorhombic phase at low temperatures and the
cubic paraelectric state of Pm-3m symmetry at high temperatures. Unexpectedly,
it further predicts a novel intermediate state also of Pbam symmetry, but in
which anti-phase oxygen octahedral tiltings have vanished with respect to the
Pbam ground state. Interestingly, this new state exhibits a large dielectric
response and thermal expansion that remarkably agree with previous experimental
observations and the x-ray experiments we performed. We also conducted direct
first-principles calculations at 0 K, which further support this low-energy
phase. Within this fresh framework, a re-examination of the properties of PZO
is thus called for.
|
We establish upper and lower bounds for the 2-limited broadcast domination
number of various grid graphs, in particular the Cartesian product of two
paths, a path and a cycle, and two cycles. The upper bounds are derived by
explicit constructions. The lower bounds are obtained via linear programming
duality by finding lower bounds for the fractional 2-limited multipacking
numbers of these graphs.
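In symbols (our notation, not necessarily the paper's): for any graph $G$,
\[
\mathrm{mp}_{2,f}(G) \;=\; \gamma_{b,2,f}(G) \;\le\; \gamma_{b,2}(G),
\]
where the equality of the fractional 2-limited multipacking and fractional
2-limited broadcast domination numbers is strong LP duality, and the
inequality holds because the integer optimum can only be larger; hence any
feasible fractional multipacking certifies a lower bound.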
|
Neural networks are well-known to be vulnerable to imperceptible
perturbations in the input, called adversarial examples, that result in
misclassification. Generating adversarial examples for source code poses an
additional challenge compared to the domains of images and natural language,
because source code perturbations must retain the functional meaning of the
code. We identify a striking relationship between token frequency statistics
and learned token embeddings: the L2 norm of learned token embeddings increases
with the frequency of the token except for the highest-frequency tokens. We
leverage this relationship to construct a simple and efficient gradient-free
method for generating state-of-the-art adversarial examples on models of code.
Our method empirically outperforms competing gradient-based methods with less
information and less computational effort.
|
The advantage of electronic and mobile learning platforms is the ease with
which they disseminate learning content, but the two platforms operate
differently when exchanging learning content between server and client.
Integrating these learning platforms to operate as a single platform and to
exchange content based on learners' requests could improve learning efficiency
and reduce operational cost. This work introduces a Web Services approach
based on the client-server model to develop an integrated architecture that
joins the two learning platforms. In this paper the architecture of the
learning platforms is presented and explained. Furthermore, an adapter in the
form of Web services is developed as a middleware for the client-server
communication.
|
We study a well-known communication abstraction called Uniform Reliable
Broadcast (URB). URB is central in the design and implementation of
fault-tolerant distributed systems, as many non-trivial fault-tolerant
distributed applications require communication with provable guarantees on
message deliveries. Our study focuses on fault-tolerant implementations for
time-free message-passing systems that are prone to node-failures. Moreover, we
aim at the design of an even more robust communication abstraction. We do so
through the lenses of self-stabilization---a very strong notion of
fault-tolerance. In addition to node and communication failures,
self-stabilizing algorithms can recover after the occurrence of arbitrary
transient faults; these faults represent any violation of the assumptions
according to which the system was designed to operate (as long as the algorithm
code stays intact).
This work proposes the first self-stabilizing URB solution for time-free
message-passing systems that are prone to node-failures. The proposed algorithm
has an O(bufferUnitSize) stabilization time (in terms of asynchronous cycles)
from arbitrary transient faults, where bufferUnitSize is a predefined constant
that can be set according to the available memory. Moreover, the communication
costs of our algorithm are similar to the ones of the non-self-stabilizing
state-of-the-art. The main differences are that our proposal considers repeated
gossiping of O(1)-bit messages and deals with bounded space (which is a
prerequisite for self-stabilization). Specifically, each node needs to store up
to bufferUnitSize · n records, and each record is of size O(v + n log n) bits,
where n is the number of nodes in the system and v is the number of bits needed
to encode a single URB instance.
|
The assembly history of the Milky Way (MW) is a rapidly evolving subject,
with numerous small accretion events and at least one major merger proposed in
the MW's history. Accreted alongside these dwarf galaxies are globular clusters
(GCs), which act as spatially coherent remnants of these past events. Using
high precision differential abundance measurements from our recently published
study, we investigate the likelihood that the MW clusters NGC 362 and NGC 288
are galactic siblings, accreted as part of the Gaia-Sausage-Enceladus (GSE)
merger. To do this, we compare the two GCs at the 0.01 dex level for 20+
elements for the first time. Strong similarities are found, with the two
showing chemical similarity of the same order as that seen among the three
LMC GCs, NGC 1786, NGC 2210 and NGC 2257. However, when comparing GC abundances
directly to GSE stars, marked differences are observed. NGC 362 shows good
agreement with GSE stars in the ratio of Eu to Mg and Si, as well as a clear
dominance in the r- compared to the s-process, while NGC 288 exhibits only a
slight r-process dominance. When fitting the two GC abundances with a GSE-like
galactic chemical evolution model, NGC 362 shows agreement with both the model
predictions and GSE abundance ratios (considering Si, Ni, Ba and Eu) at the
same metallicity. This is not the case for NGC 288. We propose that the two are
either not galactic siblings, or GSE was chemically inhomogeneous enough to
birth two similar, but not identical clusters with distinct chemistry relative
to constituent stars.
|
We consider the possibility of using neural networks in experimental data
analysis in Daphne. We analyze the process $\gamma\gamma\to \pi^+ \pi^- \pi^0$
and its backgrounds using neural networks, and we compare their performance
with traditional methods of applying cuts on several kinematical variables. We
find that the neural networks are more efficient and can be of great help for
processes with a small number of produced events.
|
Turbulent flows are ubiquitous in astrophysical environments, and
understanding density structures and their statistics in turbulent media is of
great importance in astrophysics. In this paper, we study the density power
spectra, $P_{\rho}$, of transonic and supersonic turbulent flows through one
and three-dimensional simulations of driven, isothermal hydrodynamic turbulence
with root-mean-square Mach number in the range of $1 \la M_{\rm rms} \la 10$.
From one-dimensional experiments we find that the slope of the density power
spectra becomes gradually shallower as the rms Mach number increases. This is
because the density distribution transforms from a profile with {\it
discontinuities} having $P_{\rho} \propto k^{-2}$ for $M_{\rm rms} \sim 1$ to
a profile with {\it peaks} having $P_{\rho} \propto k^0$ for $M_{\rm rms} \gg
1$. We also find that the same trend carries over to three dimensions; that is,
the density power spectrum flattens as the Mach number increases. But the
density power spectrum of the flow with $M_{\rm rms} \sim 1$ has the Kolmogorov
slope. The flattening is the consequence of the dominant density structures of
{\it filaments} and {\it sheets}. Observations have claimed different slopes of
density power spectra for electron density and cold H I gas in the interstellar
medium. We argue that while the Kolmogorov spectrum for electron density
reflects the {\it transonic} turbulence of $M_{\rm rms} \sim 1$ in the warm
ionized medium, the shallower spectrum of cold H I gas reflects the {\it
supersonic} turbulence of $M_{\rm rms} \sim$ a few in the cold neutral medium.
|
For stratified Mukai flops of type $A_{n,k}, D_{2k+1}$ and $E_{6,I}$, it is
shown that the fiber product induces isomorphisms on Chow motives. In contrast to
(standard) Mukai flops, the cup product is generally not preserved. For $A_{n,
2}$, $D_5$ and $E_{6, I}$ flops, quantum corrections are found through
degeneration/deformation to ordinary flops.
|
Bright gamma-ray flares observed from sources far beyond our Galaxy are best
explained if enormous amounts of energy are liberated by black holes. The
highest-energy particles in nature--the ultra-high energy cosmic rays--cannot
be confined by the Milky Way's magnetic field, and must originate from sources
outside our Galaxy. Here we summarize the themes of our book, "High Energy
Radiation from Black Holes: Gamma Rays, Cosmic Rays, and Neutrinos", just
published by Princeton University Press. In this book, we develop a
mathematical framework that can be used to help establish the nature of
gamma-ray sources, to evaluate evidence for cosmic-ray acceleration in blazars,
GRBs and microquasars, to decide whether black holes accelerate the ultra-high
energy cosmic rays, and to determine whether the Blandford-Znajek mechanism for
energy extraction from rotating black holes can explain the differences between
gamma-ray blazars and radio-quiet AGNs.
|
For the past decade there has been a considerable debate about the existence
of chaos in the mixmaster cosmological model. The debate has been hampered by
the coordinate, or observer, dependence of standard chaotic indicators such as
Lyapunov exponents. Here we use coordinate-independent, fractal methods to show
that the mixmaster universe is indeed chaotic.
|
High-level transformation languages like Rascal include expressive features
for manipulating large abstract syntax trees: first-class traversals,
expressive pattern matching, backtracking and generalized iterators. We present
the design and implementation of an abstract interpretation tool, Rabit, for
verifying inductive type and shape properties for transformations written in
such languages. We describe how to perform abstract interpretation based on
operational semantics, specifically focusing on the challenges arising when
analyzing the expressive traversals and pattern matching. Finally, we evaluate
Rabit on a series of transformations (normalization, desugaring, refactoring,
code generators, type inference, etc.) showing that we can effectively verify
stated properties.
|
Finding exact Ramsey numbers is a problem typically restricted to relatively
small graphs. The flag algebra method was developed to find asymptotic results
for very large graphs, so it seems that the method is not suitable for finding
small Ramsey numbers. But this intuition is wrong, and we will develop a
technique to do just that in this paper.
We find new upper bounds for many small graph and hypergraph Ramsey numbers.
As a result, we prove the exact values $R(K_4^-,K_4^-,K_4^-)=28$, $R(K_8,C_5)=
29$, $R(K_9,C_6)= 41$, $R(Q_3,Q_3)=13$, $R(K_{3,5},K_{1,6})=17$, $R(C_3, C_5,
C_5)= 17$, and $R(K_4^-,K_5^-;3)= 12$.
We hope that this technique will be adapted to address other questions for
smaller graphs with the flag algebra method.
|
We propose a method to substantially improve the signal-to-noise ratio of
lattice correlation functions for bosonic operators or other operator
combinations with disconnected contributions. The technique is applicable for
correlations between operators on two planes (zero momentum correlators) when
the dimension of the plane is larger than the separation between the two planes
which are correlated. In this case, the correlation arises primarily from
points whose in-plane coordinates are close, but noise arises from all pairs of
points. By breaking each plane into bins and computing bin-bin correlations, it
is possible to capture these short-distance correlators exactly while replacing
(small) correlators at large spatial extent with a fit, with smaller
uncertainty than the data. The cost is only marginally larger than averaging
each plane before correlating, but the improvement in signal-to-noise can be
substantial. We test the method on correlators of the gradient-flowed
topological charge density and squared field strength, finding noise reductions
by a factor of $\sim$ 3$-$7 compared to the conventional approach on the same
ensemble of configurations.
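A minimal sketch of the binning idea (illustrative Python; the array shapes,
normalization, and fit interface are our assumptions, not the authors' code):

    import numpy as np

    def binned_plane_correlator(q, t_sep, r_keep, fit):
        """q: (ncfg, nt, nbins) bin-summed operator values on each plane.
        Correlate planes separated by t_sep; measured bin-bin correlators
        are kept for in-plane bin distance <= r_keep, while the small, noisy
        large-distance values are replaced by the fitted model fit(r)."""
        ncfg, nt, nbins = q.shape
        corr = np.empty(nbins)
        for r in range(nbins):
            shifted = np.roll(np.roll(q, -t_sep, axis=1), -r, axis=2)
            corr[r] = np.mean(q * shifted)  # avg over configs, planes, bins
        dist = np.minimum(np.arange(nbins), nbins - np.arange(nbins))
        # plane-plane correlator at separation t_sep, up to normalization
        return sum(corr[r] if dist[r] <= r_keep else fit(dist[r])
                   for r in range(nbins))

In practice fit would be, e.g., an exponential tail calibrated on the measured
large-distance bins.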
|
The Davey-Stewartson equations are used to describe the long-time evolution
of three-dimensional packets of surface waves. Assuming that the argument
functions are quadratic in the spatial variables, we find in this paper various
exact solutions, modulo the known symmetry transformations, for the
Davey-Stewartson equations.
|
The lowest-lying states of the Borromean nucleus $^{17}$Ne ($^{15}$O+$p$ +
$p$) and its mirror nucleus $^{17}$N ($^{15}$N+$n$ + $n$) are compared by using
the hyperspherical adiabatic expansion. Three-body resonances are computed by use
of the complex scaling method. The measured size of $^{15}$O and the low-lying
resonances of $^{16}$F ($^{15}$O+$p$) are first used as constraints to
determine both central and spin-dependent two-body interactions. The
interaction obtained reproduces relatively accurately both experimental
three-body spectra. The Thomas-Ehrman shifts, involving excitation energy
differences, are computed and found to be less than 3% of the total Coulomb
energy shift for all states.
|
We consider improving the performance of a recently proposed sound-based
vehicle speed estimation method. In the original method, an intermediate
feature, referred to as the modified attenuation (MA), has been proposed for
both vehicle detection and speed estimation. The MA feature maximizes at the
instant of the vehicle's closest point of approach, which represents a training
label extracted from the video recording of the vehicle's pass-by. In this paper,
we show that the original labeling approach is suboptimal and propose a method
for label correction. The method is tested on the VS10 dataset, which contains
304 audio-video recordings of ten different vehicles. The results show that the
proposed label correction method reduces average speed estimation error from
7.39 km/h to 6.92 km/h. If the speed is discretized into 10 km/h classes, the
accuracy of correct class prediction is improved from 53.2% to 53.8%, whereas
when tolerance of one class offset is allowed, accuracy is improved from 93.4%
to 94.3%.
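The class-based accuracy figures correspond to a computation like the
following sketch (hypothetical helper; only the 10 km/h class width and the
one-class tolerance come from the text):

    import numpy as np

    def class_accuracy(true_speeds, pred_speeds, width=10.0, tolerance=0):
        """Fraction of predictions within `tolerance` classes of the true
        speed class (tolerance=0: exact class; 1: one-class offset allowed)."""
        true_cls = np.floor(np.asarray(true_speeds) / width)
        pred_cls = np.floor(np.asarray(pred_speeds) / width)
        return float(np.mean(np.abs(true_cls - pred_cls) <= tolerance))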
|
Recent inclusive and differential cross section measurements of the
associated production of top quark pairs with gauge bosons or heavy-flavor jets
are reported. A search for physics beyond the standard model in the top quark
sector is also presented. All measurements are based on data samples of
proton-proton collisions at $\sqrt{s}=13$ TeV collected by the ATLAS and CMS
experiments at the CERN LHC. No significant deviation from the standard model
predictions is observed.
|
In this paper, we study a disordered pinning model induced by a random walk
whose increments have a finite fourth moment and vanishing first and third
moments. It is known that this model is marginally relevant, and moreover, it
undergoes a phase transition in an intermediate disorder regime. We show that,
in the critical window, the point-to-point partition functions converge to a
unique limiting random measure, which we call the critical disordered pinning
measure. We also obtain an analogous result for a continuous counterpart to the
pinning model, which is closely related to two other models: one is a critical
stochastic Volterra equation that gives rise to a rough volatility model, and
the other is a critical stochastic heat equation with multiplicative noise that
is white in time and delta in space.
|
We present a general theory of three-dimensional nonparaxial
spatially-accelerating waves of the Maxwell equations. These waves constitute a
two-dimensional structure exhibiting shape-invariant propagation along
semicircular trajectories. We provide classification and characterization of
possible shapes of such beams, expressed through the angular spectra of
parabolic, oblate and prolate spheroidal fields. Our results facilitate the
design of accelerating beams with novel structures, broadening the scope and
potential applications of accelerating beams.
|
Understanding the temperature dependence of the optical properties of thin
metal films is critical for designing practical devices for high temperature
applications in a variety of research areas, including plasmonics and
near-field radiative heat transfer. Even though the optical properties of bulk
metals at elevated temperatures have been studied, the temperature-dependent
data for thin metal films, with thicknesses ranging from few tens to few
hundreds of nanometers, is largely missing. In this work we report on the
optical constants of single- and polycrystalline gold thin films at elevated
temperatures in the wavelength range from 370 to 2000 nm. Our results show that
while the real part of the dielectric function changes marginally with
increasing temperature, the imaginary part changes drastically. For
200-nm-thick single- and polycrystalline gold films the imaginary part of the
dielectric function at 500 °C becomes nearly twice as large as that at room
temperature. In contrast, in thinner films (50 nm and 30 nm) the imaginary part
can show either increasing or decreasing behavior within the same temperature
range, and eventually at 500 °C it becomes nearly 3-4 times larger than that at
room temperature. The increase in the imaginary part at elevated temperatures
significantly reduces the surface plasmon polariton propagation length and the
quality factor of the localized surface plasmon resonance for a spherical
particle. We provide experiment-fitted models to describe the
temperature-dependent gold dielectric function as a sum of one Drude and two
Critical Point oscillators. These causal analytical models could enable
accurate multiphysics modelling of gold-based nanophotonic and plasmonic
elements in both frequency and time domains.
|
Quantitative and qualitative analysis of acoustic backscattered signals from
the seabed bottom to the sea surface is used worldwide for fish stocks
assessment and marine ecosystem monitoring. Huge amounts of raw data are
collected yet require tedious expert labeling. This paper focuses on a case
study where the ground truth labels are non-obvious: echogram labeling, which
is time-consuming and critical for the quality of fisheries and ecological
analysis. We investigate how these tasks can benefit from supervised learning
algorithms and demonstrate that convolutional neural networks trained with
non-stationary datasets can be used to flag parts of a new dataset needing
human expert correction. Further development of this approach paves the way
toward a standardization of the labeling process in fisheries acoustics and is
a good case study for non-obvious data labeling processes.
|
Growth on transition metal substrates is becoming a method of choice to
prepare large-area graphene foils. In the case of nickel, where carbon has a
significant solubility, such a growth process includes at least two elementary
steps: (1) carbon dissolution into the metal, and (2) graphene precipitation at
the surface. Here, we dissolve calibrated amounts of carbon in nickel films,
using carbon ion implantation, and annealing at 725 °C or 900 °C. We then
use transmission electron microscopy to analyse the precipitation process in
detail: the latter appears to imply carbon diffusion over large distances and
at least two distinct microscopic mechanisms.
|
We show uniqueness in law for the critical SPDE \begin{eqnarray} \label{qq1}
dX_t = AX_t\,dt + (-A)^{1/2}F(X_t)\,dt + dW_t,\;\;
X_0 = x \in H, \end{eqnarray} where $A : \text{dom}(A) \subset H \to H$ is
a negative definite self-adjoint operator on a separable Hilbert space $H$
having $A^{-1}$ of trace class and $W$ is a cylindrical Wiener process on $H$.
Here $F: H \to H $ can be locally H\"older continuous with at most linear
growth (some functions $F$ which grow more than linearly can also be
considered). This leads to new uniqueness results for generalized stochastic
Burgers equations and for three-dimensional stochastic Cahn-Hilliard type
equations which have interesting applications. We do not know if uniqueness
holds under the sole assumption of continuity of $F$ plus growth condition as
stated in [Priola, Ann. of Prob. 49 (2021)]. To get weak uniqueness we use an
infinite dimensional localization principle and an optimal regularity result
for the Kolmogorov equation $ \lambda u - L u = f$ associated to the SPDE when
$F = z \in H$ is constant and $\lambda >0$. This optimal result is similar to a
theorem of [Da Prato, J. Evol. Eq. 3 (2003)].
|
In this note, we study an optimal transportation problem arising in density
functional theory. We derive an upper bound on the semi-classical
Hohenberg-Kohn functional derived by Cotar, Friesecke and Kl\"{u}ppelberg
(2012) which can be computed in a straightforward way for a given single
particle density. This complements a lower bound derived by the aforementioned
authors. We also show that for radially symmetric densities the optimal
transportation problem arising in the semi-classical Hohenberg-Kohn functional
can be reduced to a 1-dimensional problem. This yields a simple new proof of
the explicit solution to the optimal transport problem for two particles found
by Cotar, Friesecke and Kl\"{u}ppelberg (2012). For more particles, we use our
result to demonstrate two new qualitative facts: first, that the solution can
concentrate on higher dimensional submanifolds and second that the solution can
be non-unique, even with an additional symmetry constraint imposed.
|
A non-Grassmannian path integral representation is given for the solution of
the Klein-Gordon and the Dirac equations. The trajectories of the path integral
are rendered differentiable by the relativistic corrections. The
nonrelativistic limit is briefly discussed from the point of view of the
renormalization group.
|
Given a compact space $X$ and a commutative Banach algebra $A$, the character
spaces of $A$-valued function algebras on $X$ are investigated. The class of
natural $A$-valued function algebras, those whose characters can be described
by means of characters of $A$ and point evaluation homomorphisms, is introduced
and studied. For an admissible Banach $A$-valued function algebra $\mathcal{A}$
on $X$, conditions under which the character space $M(\mathcal{A})$ is
homeomorphic to $M(\mathfrak{A}) \times M(A)$ are presented, where
$\mathfrak{A}=C(X) \cap \mathcal{A}$ is the subalgebra of $\mathcal{A}$
consisting of scalar-valued functions. An illustration of the results is given
by some examples.
|
Boolean satisfiability (SAT) is a fundamental NP-complete problem with many
applications, including automated planning and scheduling. To solve large
instances, SAT solvers have to rely on heuristics, e.g., choosing a branching
variable in DPLL and CDCL solvers. Such heuristics can be improved with machine
learning (ML) models; they can reduce the number of steps but usually worsen
the overall running time because useful models are relatively large and slow. We
suggest the strategy of making a few initial steps with a trained ML model and
then releasing control to classical heuristics; this simplifies cold start for
SAT solving and can decrease both the number of steps and overall runtime, but
requires a separate decision of when to release control to the solver.
Moreover, we introduce a modification of Graph-Q-SAT tailored to SAT problems
converted from other domains, e.g., open shop scheduling problems. We validate
the feasibility of our approach with random and industrial SAT problems.
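A schematic of the warm-start handoff described above (all interfaces here are
hypothetical stand-ins, not Graph-Q-SAT's actual API):

    def solve_with_warm_start(formula, ml_model, solver, k_ml_steps):
        """Let a trained model pick the first k branching variables, then
        release control to the solver's built-in heuristic."""
        state = solver.init(formula)
        for _ in range(k_ml_steps):
            if state.is_solved():
                return state.result()
            var, polarity = ml_model.choose_branch(state)  # slow but informed
            state = solver.branch(state, var, polarity)
        # cold start is over: the classical heuristic takes it from here
        return solver.run_to_completion(state)

The separate decision mentioned above is the choice of k_ml_steps, i.e., when
to release control.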
|
We experimentally demonstrate that the transmission of a 1030~nm, 1.3~ps
laser beam of 100 mJ energy through fog increases when its repetition rate
increases to the kHz range. Due to the efficient energy deposition by the laser
filaments in the air, a shockwave ejects the fog droplets from a substantial
volume of the beam, at a moderate energy cost. This process opens prospects for
applications requiring the transmission of laser beams through fogs and clouds.
|
A numerical method to implement a linearized Coulomb collision operator in
the two-weight $\delta f$ Monte Carlo method for multi-ion-species neoclassical
transport simulation is developed. The conservation properties and the
adjointness property of the operator in the collisions between two particle
species with different temperatures are verified. The linearized operator in a
$\delta f$ Monte Carlo code is benchmarked against two other kinetic simulations:
a $\delta f$ continuum gyrokinetic code with the same linearized collision
operator, and a full-f PIC code with the Nanbu collision operator. The benchmark
simulations of the equilibration process of plasma flow and temperature
fluctuation among several particle species show very good agreement between
$\delta f$ Monte Carlo code and the other two codes. An error in the H-theorem
in the two-weight $\delta f$ Monte Carlo method is found, which is caused by
the weight spreading phenomenon inherent in the two-weight $\delta f$ method.
It is demonstrated that the weight averaging method serves to restore the
H-theorem without causing side effects.
|
We calculate the Plancherel formula for complex semisimple quantum groups,
that is, Drinfeld doubles of $ q $-deformations of compact semisimple Lie
groups. As a consequence we obtain a concrete description of their associated
reduced group $ C^* $-algebras. The main ingredients in our proof are the
Bernstein-Gelfand-Gelfand complex and the Hopf trace formula.
|
The High Level Trigger (HLT) of the future ALICE heavy-ion experiment has to
reduce its input data rate of up to 25 GB/s to at most 1.25 GB/s for output
before the data is written to permanent storage. To cope with these data rates
a large PC cluster system is being designed to scale to several 1000 nodes,
connected by a fast network. For the software that will run on these nodes a
flexible data transport and distribution software framework, described in this
thesis, has been developed. The framework consists of a set of separate
components, that can be connected via a common interface. This allows to
construct different configurations for the HLT, that are even changeable at
runtime. To ensure a fault-tolerant operation of the HLT, the framework
includes a basic fail-over mechanism that allows to replace whole nodes after a
failure. The mechanism will be further expanded in the future, utilizing the
runtime reconnection feature of the framework's component interface. To connect
cluster nodes a communication class library is used that abstracts from the
actual network technology and protocol used to retain flexibility in the
hardware choice. It contains already two working prototype versions for the TCP
protocol as well as SCI network adapters. Extensions can be added to the
library without modifications to other parts of the framework. Extensive tests
and measurements have been performed with the framework. Their results as well
as conclusions drawn from them are also presented in this thesis. Performance
tests show very promising results for the system, indicating that it can
fulfill ALICE's requirements concerning the data transport.
|
Using tools provided by the theory of abstract convexity, we extend
conditions for zero duality gap to the context of nonconvex and nonsmooth
optimization. Mimicking the classical setting, an abstract convex function is
the upper envelope of a family of abstract affine functions (being conventional
vertical translations of the abstract linear functions). We establish new
conditions for zero duality gap under no topological assumptions on the space
of abstract linear functions. In particular, we prove that the zero duality gap
property can be fully characterized in terms of an inclusion involving
(abstract) $\varepsilon$-subdifferentials. This result is new even for the
classical convex setting. Endowing the space of abstract linear functions with
the topology of pointwise convergence, we extend several fundamental facts of
functional/convex analysis. This includes (i) the classical
Banach--Alaoglu--Bourbaki theorem, (ii) the subdifferential sum rule, and (iii)
a constraint qualification for zero duality gap which extends a fact
established by Borwein, Burachik and Yao (2014) for the conventional convex
case. As an application, we show with a specific example how our results can be
exploited to show zero duality gap for a family of nonconvex, non-differentiable
problems.
|
Random via failure is a major concern for post-fabrication reliability and a
cause of poor manufacturing yield. A promising solution to this problem is redundant via
insertion during post-routing optimization. It becomes very critical when a
multi-layer routing solution already incurs a large number of vias. Very few
global routers have addressed the unconstrained via minimization (UVM) problem
while using minimal pattern routing and layer assignment of nets; these also
include a recent floorplan-based early global routability assessment tool,
STAIRoute \cite{karb2}.
This work addresses an early version of the unconstrained via minimization
problem during early global routing by identifying a set of minimal-bend
routing regions in any floorplan, using a new recursive bipartitioning framework.
These regions facilitate monotone pattern routing of a set of nets in the
floorplan by STAIRoute. The area/number-balanced floorplan bipartitioning is
a multi-objective optimization problem known to be NP-hard \cite{majum2}.
No existing approach considered bend minimization as an objective, and some
incurred high runtime overhead. In this paper, we present greedy and
randomized neighbor search based staircase wave-front propagation
methods for obtaining optimal bipartitioning results for minimal-bend routing
through multiple routing layers, with a balanced trade-off between routability,
wirelength and congestion.
Experiments were conducted on MCNC/GSRC floorplanning benchmarks for studying
the variation of early via count obtained by STAIRoute for different values of
the trade-off parameters ($\gamma, \beta$) in this multi-objective optimization
problem, using $8$ metal layers. We studied the impact of the ($\gamma, \beta$)
values on each of the objectives, as well as on $Gain$, a linear combination
of these objectives.
|
To address the time-inefficiency issue, only potential pairs are compared in
string-matching-based source code plagiarism detection; potentiality is
defined through a fast-yet-order-insensitive similarity measurement (adapted
from Information Retrieval), and only pairs whose similarity degrees are
higher than or equal to a particular threshold are selected. Defining such a
threshold is not a trivial task, considering that the threshold should yield
a large efficiency improvement and a small effectiveness reduction (if any
reduction is unavoidable). This paper proposes two thresholding
mechanisms---namely the range-based and the pair-count-based
mechanism---that dynamically tune the threshold based on the distribution of
the resulting similarity degrees. According to our evaluation, both mechanisms
are more practical to use than manual threshold assignment since they are more
proportional to efficiency improvement and effectiveness reduction.
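A sketch of how the two mechanisms might look (our illustrative reading, not
the paper's exact formulas):

    import numpy as np

    def range_based_threshold(sims, fraction=0.5):
        """Place the cut a given fraction into the observed similarity range."""
        lo, hi = float(np.min(sims)), float(np.max(sims))
        return lo + fraction * (hi - lo)

    def pair_count_threshold(sims, keep_ratio=0.25):
        """Keep only the top keep_ratio fraction of pairs by similarity."""
        k = max(1, int(len(sims) * keep_ratio))
        return float(np.sort(sims)[-k])

    sims = np.random.rand(1000)      # similarity degrees of candidate pairs
    t = range_based_threshold(sims)
    selected = sims[sims >= t]       # only these pairs get exact matching

Both rules adapt to the distribution of similarity degrees instead of relying
on a fixed, manually chosen cut.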
|
During muscle contraction, myosin motors anchored to thick filaments bind to
and slide actin thin filaments. These motors rely on energy derived from ATP,
supplied, in part, by diffusion from the sarcoplasm to the interior of the
lattice of actin and myosin filaments. The radial spacing of filaments in this
lattice may change or remain constant during contraction. If the lattice is
isovolumetric, it must expand when the muscle shortens. If, however, the
spacing is constant or has a different pattern of axial and radial motion, then
the lattice changes volume during contraction, driving fluid motion and
assisting in the transport of molecules between the contractile lattice and the
surrounding intracellular space. We first create an
advective-diffusive-reaction flow model and show that the flow into and out of
the sarcomere lattice would be significant in the absence of lattice expansion.
Advective transport coupled to diffusion has the potential to substantially
enhance metabolite exchange within the crowded sarcomere. Using time-resolved
x-ray diffraction of contracting muscle, we next show that the contractile
lattice is neither isovolumetric nor constant in spacing. Instead, lattice
spacing is time-varying, depends on activation, and can manifest as an
effective time-varying Poisson ratio. The resulting fluid flow in the sarcomere
lattice of synchronous insect flight muscles is greater than expected for
constant lattice spacing conditions. Lattice spacing depends on a variety of
factors that produce radial force, including crossbridges, titin-like
molecules, and other structural proteins. Volume change and advective transport
vary with the phase of muscle stimulation but remain significant under all
conditions. Akin to "breathing," advective-diffusive transport in sarcomeres is
sufficient to promote metabolite exchange and may play a role in the regulation
of contraction itself.
|
We address the quantum melting phase transition of the Skyrme crystal. Based
on generic sum rules for two-dimensional, isotropic electron quantum liquids in
the lowest Landau level, we propose analytic expressions for the pair
distribution functions of spin-polarized and spin-unpolarized liquid phases at
filling factors $2/3\leq\nu\leq 1$. From the pair distribution functions we
calculate the energy of such liquid phases and compare with the energy of the
solid phase. The comparison suggests that the quantum melting phase transition
may lie much closer to $\nu=1$ than ever expected.
|
Uncloneable encryption, first introduced by Broadbent and Lord (TQC 2020), is
a quantum encryption scheme in which a quantum ciphertext cannot be distributed
between two non-communicating parties such that, given access to the decryption
key, both parties cannot learn the underlying plaintext. In this work, we
introduce a variant of uncloneable encryption in which several possible
decryption keys can decrypt a particular encryption, and the security
requirement is that two parties who receive independently generated decryption
keys cannot both learn the underlying plaintext. We show that this variant of
uncloneable encryption can be achieved device-independently, i.e., without
trusting the quantum states and measurements used in the scheme, and that this
variant works just as well as the original definition in constructing quantum
money. Moreover, we show that a simple modification of our scheme yields a
single-decryptor encryption scheme, a related notion introduced by
Georgiou and Zhandry. In particular, the resulting single-decryptor encryption
scheme achieves device-independent security with respect to a standard
definition of security against random plaintexts. Finally, we derive an
"extractor" result for a two-adversary scenario, which in particular yields a
single-decryptor encryption scheme for single bit-messages that achieves
perfect anti-piracy security without needing the quantum random oracle model.
|
Structural properties, impedance, dielectric and electric modulus spectra
have been used to investigate the sintering temperature (Ts) effect on the
single phase cubic spinel Ni0.6Zn0.4Fe2O4 (NZFO) ceramics synthesized by
standard ceramic technique. Enhancement of dielectric constants is observed
with increasing Ts. The collective contribution of n-type and p-type carriers
yields a clear peak in the dielectric spectra; this notably unusual dielectric
behavior is successfully explained by the Rezlescu model. The non-Debye-type
long-range dielectric relaxation phenomenon is explained by the electric
modulus formalism. The fast response of the grain boundaries of the sample
sintered at lower Ts leads to a small dielectric spin relaxation time, t
(several nanoseconds), determined using the electric modulus spectra for the
samples sintered at different Ts. Two clear semicircles in the impedance
Cole-Cole plot have also been successfully explained by employing two parallel
RC equivalent circuits in a series configuration, with no electrode
contribution taken into account. Such a long
relaxation time in NZFO ceramics could suitably be used in nanoscale spintronic
devices.
|
Estimating intra- and extra-axonal microstructure parameters, such as volume
fractions and diffusivities, has been one of the major efforts in brain
microstructure imaging with MRI. The Standard Model (SM) of diffusion in white
matter has unified various modeling approaches based on impermeable narrow
cylinders embedded in locally anisotropic extra-axonal space. However,
estimating the SM parameters from a set of conventional diffusion MRI (dMRI)
measurements is ill-conditioned. Multidimensional dMRI helps resolve the
estimation degeneracies, but there remains a need for clinically feasible
acquisitions that yield robust parameter maps. Here we find optimal
multidimensional protocols by minimizing the mean-squared error of machine
learning-based SM parameter estimates for two 3T scanners with corresponding
gradient strengths of $40$ and $80\,\unit{mT/m}$. We assess intra-scanner and
inter-scanner repeatability for 15-minute optimal protocols by scanning 20
healthy volunteers twice on both scanners. The coefficients of variation all SM
parameters except free water fraction are $\lesssim 10\%$ voxelwise and $1-4
\%$ for their region-averaged values. As the achieved SM reproducibility
outcomes are similar to those of conventional diffusion tensor imaging, our
results enable robust in vivo mapping of white matter microstructure in
neuroscience research and in the clinic.
|
The Omnid human-collaborative mobile manipulators are an experimental
platform for testing control architectures for autonomous and
human-collaborative multirobot mobile manipulation. An Omnid consists of a
mecanum-wheel omnidirectional mobile base and a series-elastic Delta-type
parallel manipulator, and it is a specific implementation of a broader class of
mobile collaborative robots ("mocobots") suitable for safe human
co-manipulation of delicate, flexible, and articulated payloads. Key features
of mocobots include passive compliance, for the safety of the human and the
payload, and high-fidelity end-effector force control independent of the
potentially imprecise motions of the mobile base. We describe general
considerations for the design of teams of mocobots; the design of the Omnids in
light of these considerations; manipulator and mobile base controllers to
achieve useful multirobot collaborative behaviors; and initial experiments in
human-multirobot collaborative mobile manipulation of large, unwieldy payloads.
For these experiments, the only communication among the humans and Omnids is
mechanical, through the payload.
|
We study a possible mechanism of the switching of the magnetic easy axis as a
function of hole concentration in (Ga,Mn)As epilayers. In-plane uniaxial
magnetic anisotropy along [110] is found to exceed intrinsic cubic
magnetocrystalline anisotropy above a hole concentration of p = 1.5 * 10^21
cm^-3 at 4 K. This anisotropy switching can also be realized by post-growth
annealing, and the temperature-dependent ac susceptibility is significantly
changed with increasing annealing time. On the basis of our recent scenario
[Phys. Rev. Lett. 94, 147203 (2005); Phys. Rev. B 73, 155204 (2006).], we
deduce that the growth of highly hole-concentrated cluster regions with [110]
uniaxial anisotropy is likely the predominant cause of the enhancement in [110]
uniaxial anisotropy in the high hole concentration regime. We can clearly rule
out anisotropic lattice strain as a possible origin of the switching of the
magnetic anisotropy.
|
We show that the observed quark masses seem to be consistent with a simple
scaling law. Due to the precise values of the heavy quark masses we are able to
calculate the quark masses in the light quark sector. We discuss a possible
value for the strange quark mass. We show that the u-type quark masses obey the
scaling law very well.
|
Classical geometric mechanics, including the study of symmetries, Lagrangian
and Hamiltonian mechanics, and the Hamilton-Jacobi theory, is founded on
geometric structures such as jet, symplectic and contact structures. In this paper,
we shall use a partly forgotten framework of second-order (or stochastic)
differential geometry, developed originally by L. Schwartz and P.-A. Meyer, to
construct second-order counterparts of those classical structures. These will
allow us to study symmetries of stochastic differential equations (SDEs), to
establish stochastic Lagrangian and Hamiltonian mechanics and their key
relations with second-order Hamilton-Jacobi-Bellman (HJB) equations. Indeed,
stochastic prolongation formulae will be derived to study symmetries of SDEs
and mixed-order Cartan symmetries. Stochastic Hamilton's equations will follow
from a second-order symplectic structure and canonical transformations will
lead to the HJB equation. A stochastic variational problem on Riemannian
manifolds will provide a stochastic Euler-Lagrange equation compatible with the
HJB equation and equivalent to the Riemannian version of stochastic Hamilton's
equations. A stochastic Noether's theorem will also follow. The inspirational
example, along the paper, will be the rich dynamical structure of
Schr\"odinger's problem in optimal transport, where the latter is also regarded
as a Euclidean version of the hydrodynamical interpretation of quantum mechanics.
|
In-memory ordered key-value stores are an important building block in modern
distributed applications. We present Honeycomb, a hybrid software-hardware
system for accelerating read-dominated workloads on ordered key-value stores
that provides linearizability for all operations including scans. Honeycomb
stores a B-Tree in host memory, and executes SCAN and GET on an FPGA-based
SmartNIC, and PUT, UPDATE and DELETE on the CPU. This approach enables large
stores and simplifies the FPGA implementation but raises the challenge of data
access and synchronization across the slow PCIe bus. We describe how Honeycomb
overcomes this challenge with careful data structure design, caching, request
parallelism with out-of-order request execution, wait-free read operations, and
batching synchronization between the CPU and the FPGA. For read-heavy YCSB
workloads, Honeycomb improves the throughput of a state-of-the-art ordered
key-value store by at least 1.8x. For scan-heavy workloads inspired by cloud
storage, Honeycomb improves throughput by more than 2x. The cost-performance,
which is more important for large-scale deployments, is improved by at least
1.5x on these workloads.
|
Recent spin-Seebeck experiments on thin ferromagnetic films apply a
temperature difference $\Delta T_{x}$ along the length $x$ and measure a
(transverse) voltage difference $\Delta V_{y}$ along the width $y$. The
connection between these effects is complex, involving: (1) thermal
equilibration between sample and substrate; (2) spin currents along the height
(or thickness) $z$; and (3) the measured voltage difference. The present work
studies in detail the first of these steps, and outlines the other two steps.
Thermal equilibration processes between the magnons and phonons in the sample,
as well as between the sample and the substrate, lead to two surface modes,
with surface lengths $\lambda$, that provide for thermal equilibration.
Increasing the coupling between the two modes increases the longer mode length
and decreases the shorter mode length. The applied thermal gradient along $x$
leads to a thermal gradient along $z$ that varies as $\sinh{(x/\lambda)}$,
which can in turn produce fluxes of the carriers of up- and down- spins along
$z$, and gradients of their associated \textit{magnetoelectrochemical
potentials} $\bar{\mu}_{\uparrow,\downarrow}$, which vary as
$\sinh{(x/\lambda)}$. By the inverse spin Hall effect, this spin current along
$z$ can produce a transverse (along $y$) voltage difference $\Delta V_y$, which
also varies as $\sinh{(x/\lambda)}$.
|
In recent years, significant attention has been directed towards learning
average-reward Markov Decision Processes (MDPs). However, existing algorithms
either suffer from sub-optimal regret guarantees or computational
inefficiencies. In this paper, we present the first tractable algorithm with
minimax optimal regret of $\widetilde{\mathrm{O}}(\sqrt{\mathrm{sp}(h^*) S A
T})$, where $\mathrm{sp}(h^*)$ is the span of the optimal bias function $h^*$,
$S \times A$ is the size of the state-action space and $T$ the number of
learning steps. Remarkably, our algorithm does not require prior information on
$\mathrm{sp}(h^*)$. Our algorithm relies on a novel subroutine, Projected
Mitigated Extended Value Iteration (PMEVI), to compute bias-constrained optimal
policies efficiently. This subroutine can be applied to various previous
algorithms to improve regret bounds.
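For orientation, the regret being bounded is the standard average-reward
notion (our phrasing of the usual definition):
\[
\mathrm{Reg}(T) \;=\; T\,g^{*} \;-\; \sum_{t=1}^{T} r_t,
\]
where $g^{*}$ is the optimal gain (long-run average reward) of the MDP and
$r_t$ is the reward collected at step $t$.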
|
Metal-insulator transitions driven by disorder (Delta) and/or by electron
correlations (U) are investigated within the Anderson-Hubbard model with local
binary-alloy disorder using a simple but consistent mean-field approach. The
Delta-U phase diagram is derived and discussed for T=0 and finite temperatures.
|
We investigate the weak measurement experiment demonstrated by Ritchie et al.
[N. W. M. Ritchie, J. G. Story, and R. G. Hulet, Phys. Rev. Lett. 66, 1107
(1991)] from the viewpoint of the statistical hypothesis testing for the
weak-value amplification proposed by Susa and Tanaka [Y. Susa and S. Tanaka,
Phys. Rev. A 92, 012112 (2015)]. We conclude that the weak-value amplification
is a better method to determine whether the crystal used in the experiment is
birefringent than the measurement without postselection, when the angles of two
polarizers are almost orthogonal. This result gives a physical description and
intuition of the hypothesis testing and supports the experimental usefulness of
the weak-value amplification.
|
Sentiment analysis in conversations has gained increasing attention in recent
years owing to the growing number of applications it can serve, e.g., sentiment
analysis, recommender systems, and human-robot interaction. The main difference
between conversational sentiment analysis and single sentence sentiment
analysis is the existence of context information which may influence the
sentiment of an utterance in a dialogue. How to effectively encode contextual
information in dialogues, however, remains a challenge. Existing approaches
employ complicated deep learning structures to distinguish different parties in
a conversation and then model the context information. In this paper, we
propose a fast, compact and parameter-efficient party-ignorant framework named
bidirectional emotional recurrent unit for conversational sentiment analysis.
In our system, a generalized neural tensor block followed by a two-channel
classifier is designed to perform context compositionality and sentiment
classification, respectively. Extensive experiments on three standard datasets
demonstrate that our model outperforms the state of the art in most cases.
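A minimal PyTorch sketch of a bilinear tensor layer in the spirit of the generalized neural tensor block named above; all dimensions, the initialization, and the (context, utterance) interface are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class NeuralTensorBlock(nn.Module):
    """Minimal bilinear tensor layer: composes a context vector c with an
    utterance vector u through k bilinear slices plus a linear term."""
    def __init__(self, dim, k):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(k, dim, dim))  # bilinear slices
        self.V = nn.Linear(2 * dim, k)                          # linear part
    def forward(self, c, u):            # c, u: (batch, dim)
        bilinear = torch.einsum('bd,kde,be->bk', c, self.W, u)  # c^T W_i u
        return torch.tanh(bilinear + self.V(torch.cat([c, u], dim=-1)))

# A two-channel classifier head would consume this (batch, k) representation.
```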
|
Application development in the Internet of Things (IoT) is challenging
because it involves dealing with issues that arise in different life-cycle
phases. First, the application logic has to be analyzed and then separated into
a set of distributed tasks for an underlying network. Then, the tasks have to
be implemented for the specific hardware. In this paper, we introduce the
design and implementation of IoTSuite, a suite of tools for reducing the
burden of each stage of the IoT application development process. We take
different classes of IoT applications, largely found in the IoT literature,
and demonstrate the development of these applications using IoTSuite.
These applications have been tested on several IoT technologies such as
Android, Raspberry PI, Arduino, and JavaSE-enabled devices, Messaging protocols
such as MQTT, CoAP, WebSocket, Server technologies such as Node.js, Relational
database such as MySQL, and Microsoft Azure Cloud services.
|
We prove a stability theorem for families of holomorphically-parallelizable
manifolds in the category of Hermitian manifolds.
|
Using convex optimization, we propose entanglement-assisted quantum error
correction procedures that are optimized for given noise channels. We
demonstrate through numerical examples that such an optimized error correction
method achieves higher channel fidelities than existing methods. This improved
performance, which leads to perfect error correction for a larger class of
error channels, can in at least some cases be interpreted in terms of quantum
teleportation, although this interpretation does not hold for general channels.
|
The thermal evolution of a few thermodynamic properties of the nuclear
surface like its thermodynamic potential energy, entropy and the symmetry free
energy are examined for both semi-infinite nuclear matter and finite nuclei.
The Thomas-Fermi model is employed. Three Skyrme interactions, namely, SkM$^*$,
SLy4 and SK255 are used for the calculations to gauge the dependence of the
nuclear surface properties on the energy density functionals. For finite
nuclei, the surface observables are computed from a global liquid-drop inspired
fit of the energies and free energies of a host of nuclei covering the entire
periodic table. The hot nuclear system is modelled in a subtracted Thomas-Fermi
framework. Compared to semi-infinite nuclear matter, substantial changes in the
surface symmetry energy of finite nuclei are indicated; surface thermodynamic
potential energies for the two systems are, however, not too different.
Analytic expressions to fit the temperature and asymmetry dependence of the
surface thermodynamic potential of semi-infinite nuclear matter and the
temperature dependence of the surface free energy of finite nuclei are given.
|
The problem of finding a vector with the fewest nonzero elements that
satisfies an underdetermined system of linear equations is an NP-complete
problem that is typically solved numerically via convex heuristics or
nicely-behaved nonconvex relaxations. In this work we consider elementary
methods based on projections for solving a sparse feasibility problem without
employing convex heuristics. In a recent paper Bauschke, Luke, Phan and Wang
(2014) showed that, locally, the fundamental method of alternating projections
must converge linearly to a solution to the sparse feasibility problem with an
affine constraint. In this paper we apply different analytical tools that allow
us to show global linear convergence of alternating projections under familiar
constraint qualifications. These analytical tools can also be applied to other
algorithms. This is demonstrated with the prominent Douglas-Rachford algorithm
where we establish local linear convergence of this method applied to the
sparse affine feasibility problem.
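A minimal sketch of the method under discussion: alternate between the (nonconvex) projection onto $s$-sparse vectors and the projection onto the affine set $\{x : Ax = b\}$. The full-row-rank assumption on $A$ and the iteration count are illustrative:

```python
import numpy as np

def alternating_projections(A, b, s, iters=500):
    """Alternating projections for the sparse affine feasibility problem:
    find x with ||x||_0 <= s and A x = b (A assumed full row rank)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                          # start on the affine set
    for _ in range(iters):
        # project onto the sparsity set: keep the s largest-magnitude entries
        y = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        y[idx] = x[idx]
        # project back onto the affine set {x : Ax = b}
        x = y - A_pinv @ (A @ y - b)
    return x
```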
|
We present new near-ultraviolet (NUV, $\lambda$ = 2479 $-$ 3306 $\r{A}$)
transmission spectroscopy of KELT-9b, the hottest known exoplanet, obtained
with the Colorado Ultraviolet Transit Experiment ($CUTE$) CubeSat. Two transits
were observed on September 28th and September 29th 2022, referred to as Visits
1 and 2 respectively. Using a combined transit and systematics model for each
visit, the best-fit broadband NUV light curves yield R$_{\text{p}}$/R$_{\star}$
$=$ 0.136$_{-0.0146}^{+0.0125}$ for Visit 1 and R$_{\text{p}}$/R$_{\star}$ $=$
0.111$_{-0.0190}^{+0.0162}$ for Visit 2, with the planet appearing on average
1.54$\times$ larger in the NUV than at optical wavelengths. While the systematics between
the two visits vary considerably, the two broadband NUV light curves are
consistent with each other. A transmission spectrum with 25 $\r{A}$ bins
suggests a general trend of excess absorption in the NUV, consistent with
expectations for ultra-hot Jupiters. Although we see an extended atmosphere in
the NUV, the reduced data lack the sensitivity to probe individual spectral
lines.
|
New developments in HPC technology, in terms of increasing computing power on
multi/many-core processors, high-bandwidth memory/IO subsystems and
communication interconnects, have a direct impact on software and runtime
system development. These advancements have become useful in producing
high-performance collective communication interfaces that integrate efficiently
on a wide variety of platforms and environments. However, the number of
optimization options that show up with each new technology or software
framework has resulted in a \emph{combinatorial explosion} in the feature space
for tuning collective parameters, such that finding the optimal set has become
a nearly impossible task. The applicability of the algorithmic choices
available for optimizing collective communication depends largely on the
scalability requirement of a particular use case. This problem can be further
exacerbated by any requirement to run collective problems at very large scales,
such as in the case of exascale computing, at which impractical brute-force
tuning may require many months of resources. Therefore, the application of
statistical, data mining and artificial intelligence methods, or more general
hybrid learning models, seems essential in many collective parameter
optimization problems. We explore the current state of the art in collective
communication optimization and tuning methods and conclude with possible
future directions for this problem.
|
Disordered systems have grown in importance in the past decades, with similar
phenomena manifesting themselves in many different physical systems. Because of
the difficulty of the topic, theoretical progress has mostly emerged from
numerical studies or analytical approximations. Here, we provide an exact,
analytical solution to the problem of uniform phase disorder in a system of
identical scatterers arranged with varying separations along a line. Relying on
a relationship with Legendre functions, we demonstrate a simple approach to
computing statistics of the transmission probability (or the conductance, in
the language of electronic transport), and its reciprocal (or the resistance).
Our formalism also gives the probability distribution of the conductance, which
reveals features missing from previous approaches to the problem.
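A Monte Carlo companion to the exact results described above, built from 2x2 transfer matrices with uniformly random inter-scatterer phases; the single-scatterer transmission amplitude, chain length, and sample count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmission_samples(n, t=0.9, samples=20_000):
    """Sample the transmission probability through n identical lossless
    scatterers on a line with uniform phase disorder between them.
    t is the (real) single-scatterer transmission amplitude."""
    rho = np.sqrt(1.0 - t**2)
    M_s = np.array([[1/t, 1j*rho/t],
                    [-1j*rho/t, 1/t]])      # single-scatterer transfer matrix
    T = np.empty(samples)
    for k in range(samples):
        M = np.eye(2, dtype=complex)
        for _ in range(n):
            phi = rng.uniform(0.0, 2.0*np.pi)                   # phase disorder
            M = M_s @ np.diag([np.exp(1j*phi), np.exp(-1j*phi)]) @ M
        T[k] = 1.0 / abs(M[0, 0])**2        # transmission probability
    return T

T = transmission_samples(n=20)
print("<T> =", T.mean(), "  <1/T> (mean resistance) =", (1/T).mean())
```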
|
A nonlinear kinetic theory of cosmic ray (CR) acceleration in supernova
remnants (SNRs) is employed to investigate the properties of SNR RX
J1713.7-3946. Observations of the nonthermal radio and X-ray emission spectra
as well as the H.E.S.S. measurements of the very high energy gamma-ray emission
are used to constrain the astronomical and the particle acceleration parameters
of the system. Under the assumptions that RX J1713.7-3946 was a core collapse
supernova (SN) of type II/Ib with a massive progenitor, has an age of \approx
1600 yr and is at a distance of \approx 1 kpc, the theory indeed gives a
consistent description of all the existing observational data. Specifically, it
is shown that an efficient production of nuclear CRs, leading to strong shock
modification, and a large downstream magnetic field strength B_d ~ 100 \mu G can
reproduce in detail the observed synchrotron emission from radio to X-ray
frequencies together with the gamma-ray spectral characteristics as observed by
the H.E.S.S. telescopes. Small-scale filamentary structures observed in
nonthermal X-rays provide empirical confirmation for the field amplification
scenario which leads to a strong depression of the inverse Compton and
Bremsstrahlung fluxes. Going beyond that and using a semi-empirical relation
for young SNRs between the resulting CR pressure and the amplified magnetic
field energy upstream of the outer SN shock as well as a moderate upper bound
for the mechanical explosion energy, it is possible to also demonstrate the
actual need for a considerable shock modification in RX J1713.7-3946. It is
consistent with RX J1713.7-3946 being an efficient source of nuclear cosmic
rays.
|
[Abridged] We present Far Ultraviolet Spectroscopic Explorer (FUSE)
observations of the young, compact planetary nebula (PN) SwSt 1 along the line
of sight to its central star HD 167362. We detect circumstellar absorption
lines from several species against the continuum of the central star. The
physical parameters of the nebula derived from the FUSE data differ
significantly from those found from emission lines. We derive an electron
density n_e = 8800^{+4800}_{-2400} cm^{-3} from the column density ratio of the
excited S III fine structure levels, which is at least a factor of 3 lower than
all prior estimates. The gaseous iron abundance derived from the UV lines is
quite high ([Fe/S] = -0.35+/-0.12), which implies that iron is not
significantly depleted into dust. In contrast, optical and near-infrared
emission lines indicate that Fe is more strongly depleted: [Fe/H] =
-1.64+/-0.24 and [Fe/S] = -1.15+/-0.33. We do not detect nebular H_2
absorption, to a limit N(H_2) < 7\times10^14 cm^{-2}, at least four orders of
magnitude lower than the column density estimated from infrared H_2 emission
lines. Taken together, the lack of H_2 absorption, low n_e, and high gaseous Fe
abundance derived from the FUSE spectrum provide strong evidence that dense
structures (which can shield molecules and dust from the destructive effects of
energetic stellar photons) are not present along the line of sight to the
central star. On the other hand, there is substantial evidence for dust,
molecular material, and dense gas elsewhere in SwSt 1. Therefore, we conclude
that the nebula must have an inhomogeneous structure.
|
Dark matter annihilation in so-called ``spikes'' near black holes is believed
to be an important method of indirect dark matter detection. In the case of
circular particle orbits, the density profile of dark matter has a plateau at
small radii, the maximal density being limited by the annihilation
cross-section. However, in the general case of arbitrary velocity anisotropy
the situation is different. In particular, for an isotropic velocity
distribution the density profile cannot be shallower than r^{-1/2} in the very centre.
Indeed, a detailed study reveals that in many cases the term ``annihilation
plateau'' is misleading, as the density actually continues to rise towards
small radii and forms a weak cusp, rho ~ r^{-(beta+1/2)}, where beta is the
anisotropy coefficient. The annihilation flux, however, does not change much in
the latter case, if averaged over an area larger than the annihilation radius.
|
Obtaining reliable uncertainty estimates of neural network predictions is a
long-standing challenge. Bayesian neural networks have been proposed as a
solution, but it remains open how to specify their prior. In particular, the
common practice of an independent normal prior in weight space imposes
relatively weak constraints on the function posterior, allowing it to
generalize in unforeseen ways on inputs outside of the training distribution.
We propose noise contrastive priors (NCPs) to obtain reliable uncertainty
estimates. The key idea is to train the model to output high uncertainty for
data points outside of the training distribution. NCPs do so using an input
prior, which adds noise to the inputs of the current mini-batch, and an output
prior, which is a wide distribution given these inputs. NCPs are compatible
with any model that can output uncertainty estimates, are easy to scale, and
yield reliable uncertainty estimates throughout training. Empirically, we show
that NCPs prevent overfitting outside of the training distribution and result
in uncertainty estimates that are useful for active learning. We demonstrate
the scalability of our method on the flight delays data set, where we
significantly improve upon previously published results.
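A minimal sketch of the training objective as we read it, in PyTorch: fit the data with a heteroscedastic Gaussian head, and on noised-up inputs pull the predictive distribution toward a wide output prior. The `noise_std` and `prior_std` values and the tuple-returning `model` interface are assumptions for illustration:

```python
import torch

def ncp_loss(model, x, y, noise_std=0.5, prior_std=10.0):
    """Noise contrastive prior loss sketch: data NLL plus a KL term that
    pushes predictions at perturbed inputs toward a wide output prior, so
    the model reports high uncertainty off-distribution."""
    mean, log_var = model(x)                      # heteroscedastic head
    nll = 0.5 * (log_var + (y - mean)**2 / log_var.exp()).mean()

    x_ood = x + noise_std * torch.randn_like(x)   # input prior: noised inputs
    mean_o, log_var_o = model(x_ood)
    var_o = log_var_o.exp()
    # KL( N(mean_o, var_o) || N(0, prior_std^2) ): wide output prior
    kl = 0.5 * (torch.log(prior_std**2 / var_o)
                + (var_o + mean_o**2) / prior_std**2 - 1).mean()
    return nll + kl
```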
|
We present the results from first spectropolarimetric observations of the
solar photosphere acquired at the Dunn Solar Telescope with the Interferometric
Bidimensional Spectrometer. Full Stokes profiles were measured in the Fe I
630.15 nm and Fe I 630.25 nm lines with high spatial and spectral resolutions
for 53 minutes, with a Stokes V noise of 0.003 times the continuum intensity
The dataset allows us to study the evolution of several magnetic features
associated with G-band bright points in the quiet Sun. Here we focus on the
analysis of three distinct processes, namely the coalescence, fragmentation and
cancellation of G-band bright points. Our analysis is based on a SIR inversion
of the Stokes I and V profiles of both Fe I lines. The high spatial resolution
of the G-band images combined with the inversion results helps to interpret the
undergoing physical processes. The appearance (dissolution) of high-contrast
G-band bright points is found to be related to the local increase (decrease) of
the magnetic filling factor, without appreciable changes in the field strength.
The cancellation of opposite-polarity bright points can be the signature of
either magnetic reconnection or the emergence/submergence of magnetic loops.
|
The Skyrme model and its generalisations provide a conceptually appealing
field-theory basis for the description of nuclear matter and, after its
coupling to gravity, also of neutron stars. In particular, a specific Skyrme
submodel, the so-called Bogomol'nyi-Prasad-Sommerfield (BPS) Skyrme model,
allows both for an exact field-theoretic and a mean-field treatment of neutron
stars, as a consequence of its perfect fluid property. A pure BPS Skyrme model
description of neutron stars, however, only describes the neutron star core, by
construction. Here we consider different possibilities to extrapolate a BPS
Skyrme neutron star at high baryon density to a description valid at lower
densities. In the exact field-theoretic case, a simple effective description of
the neutron star crust can be used, because the exact BPS Skyrme neutron star
solutions formally extend to sufficiently low densities. In the mean-field
case, on the other hand, the BPS Skyrme neutron star solutions always remain
above the nuclear saturation density and, therefore, must be joined to a
different nuclear physics equation of state already for the outer core. We
study the resulting neutron stars in both cases, facilitating an even more
complete comparison between Skyrmionic neutron stars and neutron stars obtained
from other approaches, as well as with observations.
|
This work provides improved guarantees for streaming principal component
analysis (PCA). Given $A_1, \ldots, A_n\in \mathbb{R}^{d\times d}$ sampled
independently from distributions satisfying $\mathbb{E}[A_i] = \Sigma$ for
$\Sigma \succeq \mathbf{0}$, this work provides an $O(d)$-space linear-time
single-pass streaming algorithm for estimating the top eigenvector of $\Sigma$.
The algorithm nearly matches (and in certain cases improves upon) the accuracy
obtained by the standard batch method that computes the top eigenvector of the
empirical covariance $\frac{1}{n} \sum_{i \in [n]} A_i$ as analyzed by the
matrix Bernstein inequality. Moreover, to achieve constant accuracy, our
algorithm improves upon the best previous known sample complexities of
streaming algorithms by either a multiplicative factor of $O(d)$ or
$1/\mathrm{gap}$ where $\mathrm{gap}$ is the relative distance between the top
two eigenvalues of $\Sigma$.
These results are achieved through a novel analysis of the classic Oja's
algorithm, one of the oldest and most popular algorithms for streaming PCA. In
particular, this work shows that simply picking a random initial point $w_0$
and applying the update rule $w_{i + 1} = w_i + \eta_i A_i w_i$ suffices to
accurately estimate the top eigenvector, with a suitable choice of $\eta_i$. We
believe our result sheds light on how to efficiently perform streaming PCA both
in theory and in practice and we hope that our analysis may serve as the basis
for analyzing many variants and extensions of streaming PCA.
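The update rule quoted above takes only a few lines; a sketch with an illustrative step-size schedule (the paper's analysis prescribes its own choice of $\eta_i$):

```python
import numpy as np

def oja_top_eigenvector(stream, d, eta=1.0):
    """Oja's algorithm: pick a random w_0 and apply
    w_{i+1} = w_i + eta_i * A_i @ w_i, normalizing for numerical stability.
    For rank-one samples A_i = x_i x_i^T, A_i @ w reduces to x_i * (x_i @ w),
    which keeps the space usage O(d)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for i, A in enumerate(stream):       # A: a d x d sample with E[A] = Sigma
        w = w + (eta / (i + 1)) * (A @ w)
        w /= np.linalg.norm(w)           # project back onto the unit sphere
    return w
```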
|
The functional calculus for normal elements in $C^*$-algebras is an important
tool of analysis. We consider polynomials $p(a,a^*)$ for elements $a$ with
small self-commutator norm $\|[a,a^*]\| \le \delta$ and show that many
properties of the functional calculus are retained modulo an error of order
$\delta$.
|
We study the disorder potential induced by random Coulomb impurities at the
surface of a topological insulator (TI). We use a simple model in which
positive and negative impurities are distributed uniformly throughout the bulk
of the TI, and we derive the magnitude of the disorder potential at the TI
surface using a self-consistent theory based on the Thomas-Fermi approximation
for screening by the Dirac mode. Simple formulas are presented for the mean
squared potential both at the Dirac point and far from it, as well as for the
characteristic size of electron/hole puddles at the Dirac point and the total
concentration of electrons/holes that they contain. We also derive an
expression for the autocorrelation function for the potential at the surface
and show that it has an unusually slow decay, which can be used to verify the
bulk origin of disorder. The implications of our model for the electron
conductivity of the surface are also presented.
|
Stationary, D-dimensional test branes, interacting with N-dimensional
Myers-Perry bulk black holes, are investigated in arbitrary brane and bulk
dimensions. The branes are asymptotically flat and axisymmetric around the
rotation axis of the black hole with a single angular momentum. They are also
spherically symmetric in all other dimensions, allowing a total O(1)xO(D-2)
symmetry group. It is shown that even though this setup is the most natural
extension of the spherical symmetric problem to the simplest rotating case in
higher dimensions, the obtained solutions are not compatible with the spherical
solutions in the sense that the latter ones are not recovered in the
non-rotating limit. The brane configurations are qualitatively different from
the spherical problem, except in the special case of a 3-dimensional brane.
Furthermore, a quasi-static phase transition between the topologically
different solutions cannot be studied here, due to the lack of a general,
stationary, equatorial solution.
|
Two-photon laser scanning microscopy is widely used in the quickly growing
field of neuroscience. It is a fluorescence imaging technique that allows
imaging of living tissue up to a very high depth to study inherent brain
structure and circuitry. Our project deals with examining images from
two-photon calcium imaging, a brain-imaging technique that allows for the
study of neuronal activity in hundreds of neurons. As statisticians, we worked
to apply various methods to better understand the sources of variations that
are inherent in neuroimages from this imaging technique that are not part of
the controlled experiment. Thus, images can be made available for studying the
effects of physical stimulation on the working brain. Currently there is no
system to examine and prepare such brain images. Thus we worked to develop
methods to work towards this end. Our data set had images of a rat's brain in
two states. In the first state the rat is sedated and merely observed and in
the other it is repeatedly stimulated via electric shocks. We first started by
controlling for the movement of the brain to more accurately observe the
physical characteristics of the brain. We analyzed how the variance of the
brain images varied between pre and post stimulus by applying Levene's Test.
Furthermore, we were able to measure how much the images were shifted to see
the overall change in movement of the brain due to electrical stimulus.
Therefore, we were able to visually observe how the brain structure and
variance change due to stimulus effects in rat brains.
|
After 10 years of operations of the Large Area Telescope (LAT), a high-energy
pair-creation telescope onboard the Fermi satellite, the Fermi Collaboration
has produced two major catalogs: the 4FGL and the 3FHL. These catalogs
represent the best sample of potential very high energy (VHE) emitters that may
be studied by Imaging Atmospheric Cherenkov Telescopes (IACTs). Several methods
are used to extrapolate the Fermi-LAT spectra to TeV energies, generally using
simple analytical functions. The recent success of IACTs has motivated the
creation of catalogs listing the discoveries of these experiments. Among these
initiatives, gamma-cat excels as an open-access tool to archive high-level
results in the VHE field, such as catalogs, spectra and light curves. By using
these resources, we present a data-driven methodology to test the reliability
of different VHE extrapolation schemes used in the literature and evaluate
their accuracy in reproducing real VHE observations.
|
Bird's-eye-view (BEV) semantic segmentation is becoming crucial in autonomous
driving systems. It realizes ego-vehicle surrounding environment perception by
projecting 2D multi-view images into 3D world space. Recently, BEV segmentation
has made notable progress, attributed to better view transformation modules,
larger image encoders, or more temporal information. However, there are still
two issues: 1) a lack of effective understanding and enhancement of BEV space
features, particularly in accurately capturing long-distance environmental
features and 2) recognizing fine details of target objects. To address these
issues, we propose OE-BevSeg, an end-to-end multimodal framework that enhances
BEV segmentation performance through global environment-aware perception and
local target object enhancement. OE-BevSeg employs an environment-aware BEV
compressor. Based on prior knowledge about the main composition of the BEV
surrounding environment varying with the increase of distance intervals,
long-sequence global modeling is utilized to improve the model's understanding
and perception of the environment. From the perspective of enriching target
object information in segmentation results, we introduce the center-informed
object enhancement module, using centerness information to supervise and guide
the segmentation head, thereby enhancing segmentation performance from a local
enhancement perspective. Additionally, we design a multimodal fusion branch
that integrates multi-view RGB image features with radar/LiDAR features,
achieving significant performance improvements. Extensive experiments show
that, whether in camera-only or multimodal fusion BEV segmentation tasks, our
approach achieves state-of-the-art results by a large margin on the nuScenes
dataset for vehicle segmentation, demonstrating superior applicability in the
field of autonomous driving.
|
We study a regularized version of Hastings-Levitov planar random growth that
models clusters formed by the aggregation of diffusing particles. In this
model, the growing clusters are defined in terms of iterated slit maps whose
capacities are given by c_n=c|\Phi_{n-1}'(e^{\sigma+i\theta_n})|^{-\alpha},
\alpha \geq 0, where c>0 is the capacity of the first particle, {\Phi_n}_n are
the composed conformal maps defining the clusters of the evolution,
{\theta_n}_n are independent uniform angles determining the positions at which
particles are attached, and \sigma>0 is a regularization parameter which we
take to depend on c. We prove that under an appropriate rescaling of time, in
the limit as c converges to 0, the clusters converge to growing disks with
deterministic capacities, provided that \sigma does not converge to 0 too fast.
We then establish scaling limits for the harmonic measure flow, showing that by
letting \alpha tend to 0 at different rates it converges to either the Brownian
web on the circle, a stopped version of the Brownian web on the circle, or the
identity map. As the harmonic measure flow is closely related to the internal
branching structure within the cluster, the above three cases intuitively
correspond to the number of infinite branches in the model being either 1, a
random number whose distribution we obtain, or unbounded, in the limit as c
converges to 0.
We also present several findings based on simulations of the model with
parameter choices not covered by our rigorous analysis.
|
Machine translation (MT) for low-resource languages such as Ge'ez, an ancient
language that is no longer the native language of any community, faces
challenges such as out-of-vocabulary words, domain mismatches, and lack of
sufficient labeled training data. In this work, we explore various methods to
improve Ge'ez MT, including transfer-learning from related languages,
optimizing shared vocabulary and token segmentation approaches, finetuning
large pre-trained models, and using large language models (LLMs) for few-shot
translation with fuzzy matches. We develop a multilingual neural machine
translation (MNMT) model based on language relatedness, which brings an
average performance improvement of about 4 BLEU compared to standard bilingual
models. We also attempt to finetune the NLLB-200 model, one of the most
advanced translation models available today, but find that it performs poorly
with only 4k training samples for Ge'ez. Furthermore, we experiment with using
GPT-3.5, a state-of-the-art LLM, for few-shot translation with fuzzy matches,
which leverages embedding similarity-based retrieval to find context examples
from a parallel corpus. We observe that GPT-3.5 achieves a remarkable BLEU
score of 9.2 with no initial knowledge of Ge'ez, but still lower than the MNMT
baseline of 15.2. Our work provides insights into the potential and limitations
of different approaches for low-resource and ancient language MT.
|
We present several supercongruences that may be viewed as $p$-adic analogues
of Ramanujan-type series for $1/\pi$ and $1/\pi^2$, and prove three of these
examples.
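For concreteness, a classical example of the type of series in question is Bauer's formula for $2/\pi$; its $p$-adic analogue, in the style of Van Hamme, truncates the sum at $k=(p-1)/2$ and asserts a congruence modulo $p^3$ with an explicit constant $c_p$ in place of the analytic value. The example is standard and not necessarily among the three proved in the paper:

```latex
\sum_{k=0}^{\infty} (-1)^k (4k+1)\,\frac{\bigl(\tfrac12\bigr)_k^{3}}{k!^{3}}
  \;=\; \frac{2}{\pi},
\qquad
\sum_{k=0}^{(p-1)/2} (-1)^k (4k+1)\,\frac{\bigl(\tfrac12\bigr)_k^{3}}{k!^{3}}
  \;\equiv\; c_p \pmod{p^{3}} .
```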
|
Neural networks (NNs) are employed to predict equations of state from a given
isotropic pair potential using the virial expansion of the pressure. The NNs
are trained with data from molecular dynamics simulations of monoatomic gases
and liquids, sampled in the $NVT$ ensemble at various densities. We find that
the NNs provide much more accurate results compared to the analytic low-density
limit estimate of the second virial coefficient. Further, we design and train
NNs for computing (effective) pair potentials from radial pair distribution
functions, $g(r)$, a task which is often performed for inverse design and
coarse-graining. Providing the NNs with additional information on the forces
greatly improves the accuracy of the predictions, since more correlations are
taken into account; the predicted potentials become smoother, are significantly
closer to the target potentials, and are more transferable as a result.
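As a point of reference for the baseline mentioned above, the low-density estimate follows from the second virial coefficient $B_2(T) = -2\pi \int_0^\infty (e^{-u(r)/k_BT} - 1)\, r^2\, dr$, giving $P \approx \rho k_B T (1 + B_2 \rho)$. A sketch of its numerical evaluation, with a Lennard-Jones potential in reduced units as a stand-in:

```python
import numpy as np
from scipy.integrate import quad

def b2_virial(u, T, r_max=10.0):
    """Second virial coefficient B2(T) = -2*pi * int (exp(-u(r)/T) - 1) r^2 dr
    for an isotropic pair potential u(r), in units with k_B = 1."""
    integrand = lambda r: (np.exp(-u(r) / T) - 1.0) * r**2
    val, _ = quad(integrand, 1e-6, r_max, limit=200)
    return -2.0 * np.pi * val

# Example: Lennard-Jones potential in reduced units (epsilon = sigma = 1)
u_lj = lambda r: 4.0 * (r**-12 - r**-6)
print(b2_virial(u_lj, T=2.0))   # low-density pressure: P ~ rho*T*(1 + B2*rho)
```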
|
I perform a complete classification of 2d, quasi-1d and 1d topological
superconductors which originate from the suitable combination of inhomogeneous
Rashba spin-orbit coupling, magnetism and superconductivity. My analysis
reveals alternative types of topological superconducting platforms for which
Majorana fermions are accessible. Specifically, I observe that for quasi-1d
systems with Rashba spin-orbit coupling and time-reversal violating
superconductivity, as for instance due to a finite Josephson current flow,
Majorana fermions can emerge even in the absence of magnetism. Furthermore, for
the classification I also consider situations where additional "hidden"
symmetries emerge, with a significant impact on the topological properties of
the system. The latter generally originate from a combination of space group
and complex conjugation operations that separately do not leave the Hamiltonian
invariant. Finally, I suggest alternative directions in topological quantum
computing for systems with additional unitary symmetries.
|
This paper explores opportunities to utilize Large Language Models (LLMs) to
make network configuration human-friendly, simplifying the configuration of
network devices and minimizing errors. We examine the effectiveness of these
models in translating high-level policies and requirements (i.e., specified in
natural language) into low-level network APIs, which requires understanding the
hardware and protocols. More specifically, we propose NETBUDDY for generating
network configurations from scratch and modifying them at runtime. NETBUDDY
splits the generation of network configurations into fine-grained steps and
relies on self-healing code-generation approaches to better take advantage of
the full potential of LLMs. We first thoroughly examine the challenges of using
these models to produce a fully functional & correct configuration, and then
evaluate the feasibility of realizing NETBUDDY by building a proof-of-concept
solution using GPT-4 to translate a set of high-level requirements into P4 and
BGP configurations and run them using the Kathar\'a network emulator.
|
It is increasingly common in many types of natural and physical systems
(especially biological systems) to have different types of measurements
performed on the same underlying system. In such settings, it is important to
align the manifolds arising from each measurement in order to integrate such
data and gain an improved picture of the system. We tackle this problem using
generative adversarial networks (GANs). Recently, GANs have been utilized to
try to find correspondences between sets of samples. However, these GANs are
not explicitly designed for proper alignment of manifolds. We present a new GAN
called the Manifold-Aligning GAN (MAGAN) that aligns two manifolds such that
related points in each measurement space are aligned together. We demonstrate
applications of MAGAN in single-cell biology in integrating two different
measurement types together. In our demonstrated examples, cells from the same
tissue are measured with both genomic (single-cell RNA-sequencing) and
proteomic (mass cytometry) technologies. We show that the MAGAN successfully
aligns them such that known correlations between measured markers are improved
compared to other recently proposed models.
|
Estimating 3D human pose from a single image is a challenging task. This work
attempts to address the uncertainty of lifting the detected 2D joints to the 3D
space by introducing an intermediate state, Part-Centric Heatmap Triplets
(HEMlets), which shortens the gap between the 2D observation and the 3D
interpretation. The HEMlets utilize three joint-heatmaps to represent the
relative depth information of the end-joints for each skeletal body part. In
our approach, a Convolutional Network (ConvNet) is first trained to predict
HEMlets from the input image, followed by a volumetric joint-heatmap
regression. We leverage the integral operation to extract the joint
locations from the volumetric heatmaps, guaranteeing end-to-end learning.
Despite the simplicity of the network design, the quantitative comparisons show
a significant performance improvement over the best-of-grade methods (e.g.
$20\%$ on Human3.6M). The proposed method naturally supports training with
"in-the-wild" images, where only weakly-annotated relative depth information of
skeletal joints is available. This further improves the generalization ability
of our model, as validated by qualitative comparisons on outdoor images.
Leveraging the strength of the HEMlets pose estimation, we further design and
append a shallow yet effective network module to regress the SMPL parameters of
the body pose and shape. We term the entire HEMlets-based human pose and shape
recovery pipeline HEMlets PoSh. Extensive quantitative and qualitative
experiments on the existing human body recovery benchmarks justify the
state-of-the-art results obtained with our HEMlets PoSh approach.
|
The problem addressed here can be concisely formulated as follows: given a
stable surface orientation with a known reconstruction and given a direction in
the plane of this surface, find the atomic structure of the steps oriented
along that direction. We report a robust and generally applicable
variable-number genetic algorithm for step structure determination and
exemplify it by determining the structure of monatomic steps on
Si(114)-$2\times 1$. We show how the location of the step edge with respect to
the terrace reconstructions, the step width (number of atoms), and the
positions of the atoms in the step region can all be simultaneously determined.
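For orientation, a bare-bones real-coded genetic algorithm skeleton; the paper's variable-number GA additionally lets the atom count in the step region vary between generations and locally relaxes each candidate structure before scoring it, both of which this generic sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_minimize(energy, pop, n_gen=200, mut_scale=0.05):
    """Generic GA: truncation selection, averaging crossover, Gaussian
    mutation. energy maps a configuration vector to a (surrogate) energy;
    pop is the initial population, shape (n_pop, n_coords)."""
    for _ in range(n_gen):
        fitness = np.array([energy(x) for x in pop])
        survivors = pop[np.argsort(fitness)[: len(pop) // 2]]   # keep best half
        children = []
        for _ in range(len(pop) - len(survivors)):
            i, j = rng.choice(len(survivors), size=2, replace=False)
            child = 0.5 * (survivors[i] + survivors[j])            # crossover
            child += mut_scale * rng.standard_normal(child.shape)  # mutation
            children.append(child)
        pop = np.vstack([survivors, children])
    fitness = np.array([energy(x) for x in pop])
    return pop[np.argmin(fitness)]
```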
|
A dominant rational self-map on a projective variety is called
$p$-cohomologically hyperbolic if the $p$-th dynamical degree is strictly
larger than other dynamical degrees. For such a map defined over
$\overline{\mathbb{Q}}$, we study lower bounds of the arithmetic degrees,
existence of points with Zariski dense orbit, and finiteness of preperiodic
points. In particular, we prove that, if $f$ is a $1$-cohomologically
hyperbolic map on a smooth projective variety, then (1) the arithmetic degree
of a $\overline{\mathbb{Q}}$-point with generic $f$-orbit is equal to the first
dynamical degree of $f$; and (2) there are $\overline{\mathbb{Q}}$-points with
generic $f$-orbit. Applying our theorem to the recently constructed rational
map with transcendental dynamical degree, we confirm that the arithmetic degree
can be transcendental.
|
This paper presents a novel framework for visual object recognition using
infinite-dimensional covariance operators of input features in the paradigm of
kernel methods on infinite-dimensional Riemannian manifolds. Our formulation
provides in particular a rich representation of image features by exploiting
their non-linear correlations. Theoretically, we provide a finite-dimensional
approximation of the Log-Hilbert-Schmidt (Log-HS) distance between covariance
operators that is scalable to large datasets, while maintaining an effective
discriminating capability. This allows us to efficiently approximate any
continuous shift-invariant kernel defined using the Log-HS distance. At the
same time, we prove that the Log-HS inner product between covariance operators
is only approximable by its finite-dimensional counterpart in a very limited
scenario. Consequently, kernels defined using the Log-HS inner product, such as
polynomial kernels, are not scalable in the same way as shift-invariant
kernels. Computationally, we apply the approximate Log-HS distance formulation
to covariance operators of both handcrafted and convolutional features,
exploiting both the expressiveness of these features and the power of the
covariance representation. Empirically, we tested our framework on the task of
image classification on twelve challenging datasets. In almost all cases, the
results obtained outperform other state-of-the-art methods, demonstrating the
competitiveness and potential of our framework.
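For intuition, the finite-dimensional counterpart of the Log-HS distance on SPD covariance matrices is the Log-Euclidean distance; a sketch of the matrix case (the paper's covariance operators live in an RKHS and carry a regularization term this version omits):

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix via its
    eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance || log A - log B ||_F between SPD matrices,
    the matrix analogue of the Log-HS distance used above."""
    return np.linalg.norm(spd_log(A) - spd_log(B), 'fro')

def reg_cov(X, eps=1e-6):
    """Regularized covariance of a feature matrix X (n_samples x d);
    the small ridge eps keeps all eigenvalues strictly positive."""
    C = np.cov(X, rowvar=False)
    return C + eps * np.eye(C.shape[0])
```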
|
Motivated by recent work in Dynamical Sampling, we prove a necessary and
sufficient condition for a frame in a separable and infinite-dimensional
Hilbert space to admit the form $\{T^{n} \varphi \}_{n \geq 0}$ with $T \in
B(H)$. Also, a characterization of all the vectors $\varphi$ for which $\{T^{n}
\varphi \}_{n \geq 0}$ is a frame for some $T \in B(H)$ is provided. Some
auxiliary results on operator representations of Riesz frames are given as
well.
|
Given the important role that the galaxy bispectrum has recently acquired in
cosmology and the scale and precision of forthcoming galaxy clustering
observations, it is timely to derive the full expression of the large-scale
bispectrum going beyond approximated treatments which neglect integrated terms
or higher-order bias terms or use the Limber approximation. On cosmological
scales, relativistic effects that arise from observing on the past light-cone
alter the observed galaxy number counts, therefore leaving their imprints on
N-point correlators at all orders. In this paper we compute for the first time
the bispectrum including all general relativistic, local and integrated,
effects at second order, the tracers' bias at second order, geometric effects
as well as the primordial non-Gaussianity contribution. This is timely
considering that future surveys will probe scales comparable to the horizon
where approximations widely used currently may not hold; neglecting these
effects may introduce biases in estimation of cosmological parameters as well
as primordial non-Gaussianity.
|
Word-embeddings are vital components of Natural Language Processing (NLP)
models and have been extensively explored. However, they consume a lot of
memory which poses a challenge for edge deployment. Embedding matrices,
typically, contain most of the parameters for language models and about a third
for machine translation systems. In this paper, we propose Distilled Embedding,
an (input/output) embedding compression method based on low-rank matrix
decomposition and knowledge distillation. First, we initialize the weights of
our decomposed matrices by learning to reconstruct the full pre-trained
word-embedding and then fine-tune end-to-end, employing knowledge distillation
on the factorized embedding. We conduct extensive experiments with various
compression rates on machine translation and language modeling, using different
data-sets with a shared word-embedding matrix for both embedding and vocabulary
projection matrices. We show that the proposed technique is simple to
replicate, with one fixed parameter controlling compression size, has higher
BLEU score on translation and lower perplexity on language modeling compared to
complex, difficult to tune state-of-the-art methods.
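A sketch of the initialization step as described: factor the pre-trained embedding by truncated SVD before end-to-end fine-tuning with distillation. The vocabulary size, dimension, and rank below are illustrative:

```python
import numpy as np

def low_rank_init(E, r):
    """Factor a (V x d) embedding matrix as E ~= A @ B with A: (V x r) and
    B: (r x d), initialized from the best rank-r reconstruction."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    A = U[:, :r] * s[:r]     # absorb singular values into the left factor
    B = Vt[:r]
    return A, B

E = np.random.randn(32000, 300).astype(np.float32)    # toy vocabulary x dim
A, B = low_rank_init(E, r=64)
print("params kept: %.1f%%" % (100 * (A.size + B.size) / E.size))
print("relative error:", np.linalg.norm(E - A @ B) / np.linalg.norm(E))
```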
|
We investigate the ability of discontinuous Galerkin (DG) methods to simulate
under-resolved turbulent flows in large-eddy simulation. The role of the
Riemann solver and the subgrid-scale model in the prediction of a variety of
flow regimes, including transition to turbulence, wall-free turbulence and
wall-bounded turbulence, is examined. Numerical and theoretical results show
the Riemann solver in the DG scheme plays the role of an implicit subgrid-scale
model and introduces numerical dissipation in under-resolved turbulent regions
of the flow. This implicit model behaves like a dynamic model and vanishes for
flows that do not contain subgrid scales, such as laminar flows, which is a
critical feature to accurately predict transition to turbulence. In addition,
for the moderate-Reynolds-number turbulence problems considered, the implicit
model provides a more accurate representation of the actual subgrid scales in
the flow than state-of-the-art explicit eddy viscosity models, including
dynamic Smagorinsky, WALE and Vreman. The results in this paper indicate new
best practices for subgrid-scale modeling are needed with high-order DG
methods.
|
Shape restricted regressions, including isotonic regression and concave
regression as special cases, are studied using priors on Bernstein polynomials
and Markov chain Monte Carlo methods. These priors have large supports, select
only smooth functions, can easily incorporate geometric information into the
prior, and can be generated without computational difficulty. Algorithms
generating priors and posteriors are proposed, and simulation studies are
conducted to illustrate the performance of this approach. Comparisons with the
density-regression method of Dette et al. (2006) are included.
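The geometric restriction is easy to see in code: a Bernstein polynomial with nondecreasing coefficients is itself nondecreasing, which is how an isotonic constraint enters the prior. A sketch of this fact (the actual priors and MCMC samplers are the paper's own):

```python
import numpy as np
from scipy.special import comb

def bernstein(beta, x):
    """Evaluate sum_k beta_k * C(n,k) x^k (1-x)^(n-k) on [0, 1]."""
    n = len(beta) - 1
    k = np.arange(n + 1)
    basis = comb(n, k) * np.power.outer(x, k) * np.power.outer(1 - x, n - k)
    return basis @ beta

x = np.linspace(0, 1, 201)
beta = np.sort(np.random.randn(11))    # increasing coefficients -> isotonic curve
f = bernstein(beta, x)
assert np.all(np.diff(f) >= -1e-12)    # numerically nondecreasing
```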
|
We establish a three-terminal heat device consisting of a cavity coupled to a
heat bath. By tuning the temperatures of the electrodes and the
phonon bath, the device can function as a heat engine or a refrigerator. We
study the characteristic performance in the linear and nonlinear regime for
both setups. Our focus here is to analyze how the efficiency of the heat
engine and the coefficient of performance of the refrigerator are affected by
nonlinear transport. With such considerations, the maximum efficiency and power
are then optimized for various energy levels, temperatures and other
parameters.
|
It is of importance to develop statistical techniques to analyze
high-dimensional data in the presence of both complex dependence and possible
outliers in real-world applications such as imaging data analyses. We propose a
new robust high-dimensional regression with coefficient thresholding, in which
an efficient nonconvex estimation procedure is proposed through a thresholding
function and the robust Huber loss. The proposed regularization method accounts
for complex dependence structures in predictors and is robust against outliers
in outcomes. Theoretically, we analyze rigorously the landscape of the
population and empirical risk functions for the proposed method. The fine
landscape enables us to establish both statistical consistency and
computational convergence under the high-dimensional setting. The
finite-sample properties of the proposed method are examined by extensive
simulation studies. An illustration of real-world application concerns a
scalar-on-image regression analysis for an association of psychiatric disorder
measured by the general factor of psychopathology with features extracted from
the task functional magnetic resonance imaging data in the Adolescent Brain
Cognitive Development study.
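For reference, the two ingredients named above in a minimal sketch; the soft-thresholding operator here stands in for the paper's thresholding function, which may differ:

```python
import numpy as np

def huber_loss(res, delta=1.345):
    """Huber loss on residuals: quadratic near zero, linear in the tails,
    which confers robustness to outlying outcomes. delta = 1.345 is the
    classical 95%-efficiency constant for Gaussian errors."""
    a = np.abs(res)
    return np.where(a <= delta, 0.5 * res**2, delta * (a - 0.5 * delta))

def soft_threshold(beta, lam):
    """Illustrative coefficient-thresholding step (soft-thresholding);
    shown for orientation only, not the paper's exact operator."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)
```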
|
The Fe1+yTe1-xSex series of materials is one of the prototype families of
Fe-based superconductors. To provide further insight into these materials we
present systematic inelastic neutron scattering measurements of the low energy
spin excitations for x=0.27, 0.36, 0.40, 0.49. These measurements show an
evolution of incommensurate spin excitations towards the (1/2 1/2 0) wave
vector with doping. Concentrations (x=0.40 and 0.49) which exhibit the most
robust superconducting properties have spin excitations closest to (1/2 1/2 0)
and also exhibit a strong spin resonance in the spin excitation spectrum below
Tc. The resonance signal appears to be closer to (1/2 1/2 0) than the
underlying spin excitations. We discuss the possible relationship between
superconductivity and spin excitations at the (1/2 1/2 0) wave vector and the
role that interstitial Fe may play.
|
We provide a new analysis of the Boltzmann equation with constant collision
kernel in two space dimensions. The scaling-critical Lebesgue space is
$L^2_{x,v}$; we prove global well-posedness and a version of scattering,
assuming that the data $f_0$ is sufficiently smooth and localized, and the
$L^2_{x,v}$ norm of $f_0$ is sufficiently small. The proof relies upon a new
scaling-critical bilinear spacetime estimate for the collision "gain" term in
Boltzmann's equation, combined with a novel application of the Kaniel-Shinbrot
iteration.
|
The distribution and evolution of the magnetic field at the solar poles
through a solar cycle is an important parameter in understanding the solar
dynamo. Accurate observation of the polar magnetic flux is very
challenging from the ecliptic view, mainly due to (a) geometric foreshortening
which limits the spatial resolution, and (b) the oblique view of predominantly
vertical magnetic flux elements, which presents rather small line-of-sight
component of the magnetic field towards the ecliptic. Due to these effects the
polar magnetic flux is poorly measured. Depending upon the measurement
technique (longitudinal versus full vector field measurement, where the latter
is extremely sensitive to the SNR achieved and to the azimuth disambiguation
problem), the polar magnetic flux could be underestimated or overestimated.
To estimate the extent of systematic errors in magnetic flux measurements at
the solar poles due to the aforementioned projection effects, we use MHD
simulations of the quiet-Sun network as a reference solar atmosphere. Using the numerical model of
the solar atmosphere we simulate the observations from the ecliptic as well as
from out-of-ecliptic vantage points, such as from a solar polar orbit at
various heliographic latitudes. Using these simulated observations we make an
assessment of the systematic errors in our measurements of the magnetic flux
due to projection effects and the extent of under- or overestimation. We
suggest that such effects could contribute to reported missing open magnetic
flux in the heliosphere and that the multi-viewpoint observations from
out-of-the-ecliptic plane together with innovative Compact Doppler
Magnetographs provide the best prospects for future measurements.
|