In the manufacturing process, sensor data collected from equipment is crucial
for building predictive models to manage processes and improve productivity.
However, in the field, it is challenging to gather sufficient data to build
robust models. This study proposes a novel predictive model based on the
Transformer, utilizing statistical feature embedding and window positional
encoding. Statistical features provide an effective representation of sensor
data, and the embedding enables the Transformer to learn both time- and
sensor-related information. Window positional encoding captures precise time
details from the feature embedding. The model's performance is evaluated in two
problems: fault detection and virtual metrology, showing superior results
compared to baseline models. This improvement is attributed to the efficient
use of parameters, which is particularly beneficial for sensor data that often
has limited sample sizes. The results support the model's applicability across
various manufacturing industries, demonstrating its potential for enhancing
process management and yield.
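As a rough illustration of the two ingredients named above, the sketch below computes simple per-window statistics of a sensor trace and a sinusoidal encoding indexed by window position. The function names and the mean/std/min/max feature set are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def window_statistics(signal, window, stride):
    """Split a 1-D sensor trace into windows and compute per-window
    statistical features (mean, std, min, max) as a compact embedding."""
    feats = []
    for start in range(0, len(signal) - window + 1, stride):
        w = signal[start:start + window]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)  # shape: (num_windows, num_features)

def window_positional_encoding(num_windows, dim):
    """Sinusoidal encoding indexed by window position rather than raw
    time step, so each embedded window keeps its temporal order."""
    pos = np.arange(num_windows)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

sig = np.sin(np.linspace(0, 10, 200))
F = window_statistics(sig, window=50, stride=25)
P = window_positional_encoding(F.shape[0], dim=4)
print(F.shape, P.shape)  # both (7, 4); F + P would feed the Transformer
```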
|
The Lambek-Grishin calculus is a symmetric extension of the Lambek calculus:
in addition to the residuated family of product, left and right division
operations of Lambek's original calculus, one also considers a family of
coproduct, right and left difference operations, related to the former by an
arrow-reversing duality. Communication between the two families is implemented
in terms of linear distributivity principles. The aim of this paper is to
complement the symmetry between (dual) residuated type-forming operations with
an orthogonal opposition that contrasts residuated and Galois connected
operations. Whereas the (dual) residuated operations are monotone, the Galois
connected operations (and their duals) are antitone. We discuss the algebraic
properties of the (dual) Galois connected operations, and generalize the
(co)product distributivity principles to include the negative operations. We
give a continuation-passing-style translation for the new type-forming
operations, and discuss some linguistic applications.
|
These lectures give an introduction to the theory of holographic
superconductors. These are superconductors that have a dual gravitational
description using gauge/gravity duality. After introducing a suitable
gravitational theory, we discuss its properties in various regimes: the probe
limit, the effects of backreaction, the zero temperature limit, and the
addition of magnetic fields. Using the gauge/gravity dictionary, these
properties reproduce many of the standard features of superconductors. Some
familiarity with gauge/gravity duality is assumed. A list of open problems is
included at the end.
|
Utilizing large language models (LLMs) to compose off-the-shelf visual tools
represents a promising avenue of research for developing robust visual
assistants capable of addressing diverse visual tasks. However, these methods
often overlook the potential for continual learning, typically by freezing the
utilized tools, thus limiting their adaptation to environments requiring new
knowledge. To tackle this challenge, we propose CLOVA, a Closed-Loop Visual
Assistant, which operates within a framework encompassing inference,
reflection, and learning phases. During the inference phase, LLMs generate
programs and execute corresponding tools to complete assigned tasks. In the
reflection phase, a multimodal global-local reflection scheme analyzes human
feedback to determine which tools require updating. Lastly, the learning phase
employs three flexible approaches to automatically gather training data and
introduces a novel prompt tuning scheme to update the tools, allowing CLOVA to
efficiently acquire new knowledge. Experimental findings demonstrate that CLOVA
surpasses existing tool-usage methods by 5% in visual question answering and
multiple-image reasoning, by 10% in knowledge tagging, and by 20% in image
editing. These results underscore the significance of the continual learning
capability in general visual assistants.
|
We investigate numerically, by a hybrid lattice Boltzmann method, the
morphology and the dynamics of an emulsion made of a polar active gel,
contractile or extensile, and an isotropic passive fluid. We focus on the case
of a highly off-symmetric ratio between the active and passive components. In
the absence of any activity, we observe a hexatic-ordered droplet phase, with some
defects in the layout. We study how the morphology of the system is affected by
activity both in the contractile and extensile case. In the extensile case a
small amount of activity favors the elimination of defects in the array of
droplets, while at higher activities, first aster-like rotating droplets
appear, and then a disordered pattern occurs. In the contractile case, at
sufficiently high values of activity, elongated structures are formed. Energy
and enstrophy behavior mark the transitions between the different regimes.
|
Electronic health records (EHRs) contain valuable patient data for
health-related prediction tasks, such as disease prediction. Traditional
approaches rely on supervised learning methods that require large labeled
datasets, which can be expensive and challenging to obtain. In this study, we
investigate the feasibility of applying Large Language Models (LLMs) to convert
structured patient visit data (e.g., diagnoses, labs, prescriptions) into
natural language narratives. We evaluate the zero-shot and few-shot performance
of LLMs using various EHR-prediction-oriented prompting strategies.
Furthermore, we propose a novel approach that utilizes LLM agents with
different roles: a predictor agent that makes predictions and generates
reasoning processes and a critic agent that analyzes incorrect predictions and
provides guidance for improving the reasoning of the predictor agent. Our
results demonstrate that with the proposed approach, LLMs can achieve decent
few-shot performance compared to traditional supervised learning methods in
EHR-based disease predictions, suggesting its potential for health-oriented
applications.
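The predictor-critic loop can be sketched as follows. Here `call_llm` is a stand-in that returns a canned string, and the prompts and the crude correctness check are illustrative assumptions, not the paper's implementation.

```python
def call_llm(prompt):
    """Stand-in for a real LLM API call; returns a canned answer here."""
    return "prediction: positive\nreasoning: elevated HbA1c suggests risk"

def predictor(narrative, feedback=None):
    prompt = f"Patient record:\n{narrative}\nPredict the outcome and explain."
    if feedback:
        prompt += f"\nCritic guidance from the previous round:\n{feedback}"
    return call_llm(prompt)

def critic(narrative, prediction, label):
    prompt = (f"Record:\n{narrative}\nPrediction:\n{prediction}\n"
              f"True label: {label}\nAnalyze the error and give guidance.")
    return call_llm(prompt)

def predict_with_critique(narrative, label, rounds=2):
    """Predictor makes a call; on a miss, the critic's analysis is fed
    back into the predictor's next prompt."""
    feedback = None
    for _ in range(rounds):
        answer = predictor(narrative, feedback)
        if label.lower() in answer.lower():   # crude correctness check
            return answer
        feedback = critic(narrative, answer, label)
    return answer

print(predict_with_critique("diagnoses: E11.9; labs: HbA1c 8.2%", "positive"))
```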
|
We analyze the distance $\mathcal{R}_T(u)$ between the first and the last
passage time of $\{X(t)-ct:t\in [0,T]\}$ at level $u$ in time horizon
$T\in(0,\infty]$, where $X$ is a centered Gaussian process with stationary
increments and $c\in\mathbb{R}$, given that the first passage time occurred
before $T$. Under some tractable assumptions on $X$, we find $\Delta(u)$ and
$G(x)$ such that
$$\lim_{u\to\infty}\mathbb{P}\left(\mathcal{R}_T(u)>\Delta(u)x\right)=G(x),$$
for $x\geq 0$. We distinguish two scenarios: $T<\infty$ and $T=\infty$, that
lead to qualitatively different asymptotics. The obtained results provide exact
asymptotics of the ultimate recovery time after ruin in the Gaussian risk
model.
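A toy Monte Carlo illustration (not the paper's asymptotic method): taking $X$ to be a standard Brownian motion, which is centered with stationary increments, one can sample $\mathcal{R}_T(u)$ directly by discretizing the path. All numbers below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_last_passage_gap(u, c, T=100.0, n=20000):
    """Sample R_T(u): simulate X(t) - c*t on [0, T] for X a standard
    Brownian motion and return last minus first passage time of level u,
    or None if the level is never reached before T."""
    dt = T / n
    path = np.cumsum(rng.normal(0.0, np.sqrt(dt), n)) \
        - c * dt * np.arange(1, n + 1)
    hits = np.nonzero(path >= u)[0]
    if hits.size == 0:
        return None
    return (hits[-1] - hits[0]) * dt

gaps = [g for g in (first_last_passage_gap(2.0, 0.05) for _ in range(200))
        if g is not None]
print(len(gaps), float(np.mean(gaps)))
```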
|
Olfactory navigation is observed across species and plays a crucial role in
locating resources for survival. In the laboratory, understanding the
behavioral strategies and neural circuits underlying odor-taxis requires a
detailed understanding of the animal's sensory environment. For small model
organisms like C. elegans and larval D. melanogaster, controlling and measuring
the odor environment experienced by the animal can be challenging, especially
for airborne odors, which are subject to subtle effects from airflow,
temperature variation, and from the odor's adhesion, adsorption or reemission.
Here we present a method to flexibly control and precisely measure airborne
odor concentration in an arena with agar while imaging animal behavior.
Crucially and unlike previous methods, our method allows continuous monitoring
of the odor profile during behavior. We construct stationary chemical
landscapes in an odor flow chamber through spatially patterned odorized air.
The odor concentration is measured with a spatially distributed array of
digital gas sensors. Careful placement of the sensors allows the odor
concentration across the arena to be accurately inferred and continuously
monitored at all points in time. We use this approach to measure the precise
odor concentration that each animal experiences as it undergoes chemotaxis
behavior and report chemotaxis strategies for C. elegans and D. melanogaster
larvae populations under different spatial odor landscapes.
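Inferring the concentration at each animal's location from a sparse sensor array can be sketched with inverse-distance weighting; the paper's actual inference method may differ, and the grid layout and readings below are hypothetical.

```python
import numpy as np

def idw_interpolate(sensor_xy, sensor_vals, query_xy, power=2.0):
    """Inverse-distance-weighted estimate of odor concentration at query
    points from a sparse array of gas-sensor readings."""
    d = np.linalg.norm(query_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # avoid division by zero at a sensor
    w = 1.0 / d ** power
    return (w * sensor_vals).sum(axis=1) / w.sum(axis=1)

# hypothetical 3x3 grid of sensors reading a linear gradient in x
xs, ys = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
sensors = np.column_stack([xs.ravel(), ys.ravel()])
readings = 10.0 * sensors[:, 0]        # ppm, increasing left to right
animal = np.array([[0.5, 0.5], [0.0, 0.5]])
print(idw_interpolate(sensors, readings, animal))
```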
|
We develop a "variational mass" expansion approach, recently introduced in
the Gross--Neveu model, to evaluate some of the order parameters of chiral
symmetry breakdown in QCD. The method relies on a reorganization of the usual
perturbation theory with the addition of an arbitrary quark mass $m$, whose
non-perturbative behaviour is inferred partly from renormalization group
properties and partly from analytic continuation in $m$. The resulting
ansatz can be optimized, and in the chiral limit $m \to 0$ we estimate the
dynamical contribution to the "constituent" masses of the light quarks
$M_{u,d,s}$, the pion decay constant $F_\pi$, and the quark condensate
$\langle \bar{q} q \rangle$.
|
The spin torque exerted on a magnetic moment is a reaction to spin filtering
when spin-polarized electrons interact with a thin ferromagnetic film. We show
that, for certain conditions, a spin transmission resonance (STR) gives rise to
a failure of spin filtering. As a consequence, no spin is transferred to the
ferromagnet. The condition for STR depends on the incoming energy of electrons
and the thickness of the film. For a simple model we find that when the STR
condition is satisfied, the ferromagnetic film is transparent to the incoming
electrons.
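The transparency described above is analogous to the textbook above-barrier transmission resonance. A minimal sketch under that standard single-particle model (units with hbar = 2m = 1; this is an illustrative stand-in, not the paper's specific model):

```python
import numpy as np

def transmission(E, V, d):
    """Transmission probability of a plane wave of energy E across a
    rectangular potential of height V (E > V) and width d, in units with
    hbar = 2m = 1. T = 1 exactly when k2*d is a multiple of pi."""
    k2 = np.sqrt(E - V)                 # wave number inside the film
    s = np.sin(k2 * d) ** 2
    return 1.0 / (1.0 + V ** 2 * s / (4 * E * (E - V)))

E, V = 2.0, 1.0
d_res = np.pi / np.sqrt(E - V)          # first resonance: k2 * d = pi
print(transmission(E, V, d_res), transmission(E, V, 0.5 * d_res))
```

At the resonant thickness the film is perfectly transparent (T = 1), so no momentum transfer from spin filtering occurs; away from resonance T drops below 1.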
|
We propose a method to generate a source of spin-polarized cold atoms which
are continuously extracted and guided from a magneto-optical trap using an
atom-diode effect. We show that it is possible to create a pipe-like potential
by overlapping two optical beams coupled with the two transitions of a
three-level system in a ladder configuration. With alkali-metal atoms, and in
particular with $^{87}$Rb, a proper choice of transitions enables both the
potential generation and optical pumping, thus polarizing the sample in a given
Zeeman state. We extend the Dalibard and Cohen-Tannoudji dressed-atom model of
radiative forces to the case of a three-level system. We derive expressions for
the average force and the different sources of momentum diffusion in the
resonant, non-perturbative regime. We show using numerical simulations that a
significant fraction of the atoms initially loaded can be guided over several
centimeters with output velocities of a few meters per second. This would
produce a collimated continuous source of slow spin-polarized atoms suitable
for atom interferometry.
|
The superconducting transition temperatures of high-Tc compounds based on
copper, iron, ruthenium and certain organic molecules are discovered to be
dependent on bond lengths, ionic valences, and Coulomb coupling between
electronic bands in adjacent, spatially separated layers [1]. Optimal
transition temperature, denoted as $T_{c0}$, is given by the universal expression
$k_B T_{c0} = e^2 \Lambda / (\ell\zeta)$, where $\ell$ is the spacing between
interacting charges within the layers, $\zeta$ is the distance between
interacting layers, and $\Lambda$ is a universal constant, equal to about twice
the reduced electron Compton wavelength (suggesting that Compton scattering
plays a role in pairing). Non-optimum compounds in which sample degradation is
evident typically exhibit $T_c < T_{c0}$. For the 31+ optimum compounds tested,
the theoretical and experimental $T_{c0}$ agree statistically to within $\pm$1.4
K. The elemental high-$T_c$ building block comprises two adjacent and spatially
separated charge layers; the factor $e^2/\zeta$ arises from Coulomb forces
between them.
The theoretical charge structure representing a room-temperature superconductor
is also presented.
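The stated expression is easy to evaluate numerically. The sketch below interprets $e^2$ in Gaussian form, i.e. as $e^2/4\pi\varepsilon_0 \approx 1.44$ eV nm (an assumption of this sketch), and the example spacings are illustrative, not values tabulated in the paper.

```python
# physical constants in eV / nm / K units
COULOMB_E2 = 1.43996       # e^2 / (4 pi eps0) in eV*nm (Gaussian-form e^2)
K_B = 8.617333e-5          # Boltzmann constant in eV/K
LAMBDA = 2 * 3.8616e-4     # ~ twice the reduced electron Compton wavelength, nm

def tc0(ell_nm, zeta_nm):
    """Evaluate T_c0 from k_B T_c0 = e^2 * Lambda / (ell * zeta)."""
    return COULOMB_E2 * LAMBDA / (ell_nm * zeta_nm) / K_B

# illustrative angstrom-scale spacings give a high-Tc-scale temperature
print(f"{tc0(0.4, 0.36):.1f} K")
```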
|
Identifying the properties of the first generation of seeds of massive black
holes is key to understanding the merger history and growth of galaxies.
Mergers between ~100 solar mass seed black holes generate gravitational waves
in the 0.1-10Hz band that lies between the sensitivity bands of existing
ground-based detectors and the planned space-based gravitational wave detector,
the Laser Interferometer Space Antenna (LISA). However, there are proposals for
more advanced detectors that will bridge this gap, including the third
generation ground-based Einstein Telescope and the space-based detector DECIGO.
In this paper we demonstrate that such future detectors should be able to
detect gravitational waves produced by the coalescence of the first generation
of light seed black-hole binaries and provide information on the evolution of
structure in that era. These observations will be complementary to those that
LISA will make of subsequent mergers between more massive black holes. We
compute the sensitivity of various future detectors to seed black-hole mergers,
and use this to explore the number and properties of the events that each
detector might see in three years of observation. For this calculation, we make
use of galaxy merger trees and two different seed black hole mass distributions
in order to construct the astrophysical population of events. We also consider
the accuracy with which networks of future ground-based detectors will be able
to measure the parameters of seed black hole mergers, in particular the
luminosity distance to the source. We show that distance precisions of ~30% are
achievable, which should be sufficient for us to say with confidence that the
sources are at high redshift.
|
The zero-shot performance of existing vision-language models (VLMs) such as
CLIP is limited by the availability of large-scale, aligned image and text
datasets in specific domains. In this work, we leverage two complementary
sources of information -- descriptions of categories generated by large
language models (LLMs) and abundant, fine-grained image classification datasets
-- to improve the zero-shot classification performance of VLMs across
fine-grained domains. On the technical side, we develop methods to train VLMs
with this "bag-level" image-text supervision. We find that simply using these
attributes at test-time does not improve performance, but our training
strategy, for example, on the iNaturalist dataset, leads to an average
improvement of 4-5% in zero-shot classification accuracy for novel categories
of birds and flowers. Similar improvements are observed in domains where a
subset of the categories was used to fine-tune the model. By prompting LLMs in
various ways, we generate descriptions that capture visual appearance, habitat,
and geographic regions and pair them with existing attributes such as the
taxonomic structure of the categories. We systematically evaluate their ability
to improve zero-shot categorization in natural domains. Our findings suggest
that geographic priors can be just as effective and are complementary to visual
appearance. Our method also outperforms prior work on prompt-based tuning of
VLMs. We release the benchmark, consisting of 14 datasets at
https://github.com/cvl-umass/AdaptCLIPZS , which will contribute to future
research in zero-shot recognition.
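A minimal stand-in for zero-shot scoring with multiple LLM-generated descriptions per class: toy vectors replace real CLIP features, and averaging per-description similarities is one simple aggregation choice, not necessarily the paper's training strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_shot_predict(image_emb, class_desc_embs):
    """Score each class by averaging cosine similarities between the image
    embedding and that class's description embeddings, then take argmax."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    img = norm(image_emb)
    scores = [(norm(d) @ img).mean() for d in class_desc_embs]
    return int(np.argmax(scores))

# toy 4-d embeddings standing in for CLIP image/text features; three
# descriptions per class (e.g. appearance, habitat, geographic region)
classes = [rng.normal(mu, 0.1, size=(3, 4))
           for mu in ([1, 0, 0, 0], [0, 1, 0, 0])]
image = np.array([0.9, 0.1, 0.0, 0.05])   # resembles class 0
print(zero_shot_predict(image, classes))   # → 0
```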
|
Prepositions are highly polysemous, and their variegated senses encode
significant semantic information. In this paper we relate each preposition's
complement and attachment, and their interplay, to the geometry of the word
vectors to the left and right of the preposition. Extracting such features
from the vast number of instances of each preposition and clustering them
yields an efficient preposition sense disambiguation (PSD) algorithm that is
comparable to, and in some cases better than, the state of the art on two
benchmark datasets. Since we rely on no external linguistic resources, we can
scale the PSD algorithm to a large WikiCorpus and learn sense-specific preposition
representations -- which we show to encode semantic relations and paraphrasing
of verb particle compounds, via simple vector operations.
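Clustering context vectors per preposition might look like the following toy sketch, where synthetic 4-d "word vectors" and a tiny k-means stand in for the real pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, init, iters=50):
    """Tiny k-means used to group instances of a preposition by the
    geometry of their context vectors."""
    centers = X[init].astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

# toy "word vectors": each instance of 'in' is represented by features of
# the words to its left and right (4-d synthetic stand-ins for real vectors)
temporal = rng.normal([1, 1, 0, 0], 0.1, size=(30, 4))   # e.g. "in 1990"
spatial = rng.normal([0, 0, 1, 1], 0.1, size=(30, 4))    # e.g. "in Paris"
X = np.vstack([temporal, spatial])
labels = kmeans(X, k=2, init=[0, 30])
print(labels[:5], labels[30:35])   # the two senses land in separate clusters
```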
|
In this paper, we consider the uplink of cell-free massive MIMO systems,
where a large number of distributed single antenna access points (APs) serve a
much smaller number of users simultaneously via limited backhaul. For the first
time, we investigate the performance of compute-and-forward (C&F) in such an
ultra dense network with a realistic channel model (including fading, pathloss
and shadowing). By exploiting the characteristics of the pathloss, a
low-complexity coefficient selection algorithm for C&F is proposed. We also
give a greedy AP
selection method for message recovery. Additionally, we compare the performance
of C&F to some other promising linear strategies for distributed massive MIMO,
such as small cells (SC) and maximum ratio combining (MRC). Numerical results
reveal that C&F not only reduces the backhaul load, but also significantly
increases the system throughput for the symmetric scenario.
|
We show that certain infinitesimal operators of the Lie-point symmetries of
the incompressible 3D Navier-Stokes equations give rise to vortex solutions
with different characteristics. This approach allows an algebraic
classification of vortices and throws light on the alignment mechanism between
the vorticity and the vortex stretching vector. The symmetry algebra associated
with the Navier-Stokes equations turns out to be infinite-dimensional. New
vortical structures, generalizing in some cases well-known configurations such
as, for example, the Burgers and Lundgren solutions, are obtained and discussed
in relation to the value of the dynamic angle. A systematic treatment of the
boundary conditions invariant under the symmetry group of the equations under
study is also performed, and the corresponding invariant surfaces are
recognized.
|
This paper introduces and evaluates a hybrid technique that fuses efficiently
the eye-tracking principles of photosensor oculography (PSOG) and video
oculography (VOG). The main concept of this novel approach is to use a few fast
and power-economic photosensors as the core mechanism for performing high speed
eye-tracking, whereas in parallel, use a video sensor operating at low
sampling-rate (snapshot mode) to perform dead-reckoning error correction when
sensor movements occur. In order to evaluate the proposed method, we simulate
the functional components of the technique and present our results in
experimental scenarios involving various combinations of horizontal and
vertical eye and sensor movements. Our evaluation shows that the developed
technique can be used to provide robustness to sensor shifts that otherwise
could induce error larger than 5 deg. Our analysis suggests that the technique
can potentially enable high speed eye-tracking at low power profiles, making it
suitable to be used in emerging head-mounted devices, e.g. AR/VR headsets.
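The dead-reckoning correction can be caricatured as a reset of accumulated drift at each low-rate video snapshot. The rates and drift magnitude below are made-up illustrative numbers, not values from the evaluation.

```python
import numpy as np

def fuse(psog_rate_hz, vog_rate_hz, duration_s, drift_per_s):
    """Toy fusion: a high-rate photosensor gaze estimate accumulates drift
    (e.g. after a sensor shift); each low-rate video frame provides an
    absolute reference that resets the accumulated offset."""
    n = int(duration_s * psog_rate_hz)
    vog_every = psog_rate_hz // vog_rate_hz
    true_gaze = np.sin(np.linspace(0, 2 * np.pi, n))      # deg, synthetic
    offset = 0.0
    fused = np.empty(n)
    for i in range(n):
        offset += drift_per_s / psog_rate_hz              # PSOG drift step
        if i % vog_every == 0:
            offset = 0.0                                  # VOG correction
        fused[i] = true_gaze[i] + offset
    return np.abs(fused - true_gaze).max()                # worst-case error

# 1000 Hz PSOG, 10 Hz VOG snapshots, 5 deg/s drift: the residual error
# stays bounded by the drift accumulated between video frames (0.5 deg)
print(fuse(1000, 10, 2.0, 5.0))
```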
|
We develop a new robust stopping criterion in Partial Least Squares
Regressions (PLSR) components construction characterised by a high level of
stability. This new criterion is defined as a universal one since it is
suitable both for PLSR and its extension to Generalized Linear Regressions
(PLSGLR). This criterion is based on a non-parametric bootstrap process and must
be computed algorithmically. It allows each successive component to be tested at
a preset significance level alpha. In order to assess its performance and
robustness with respect to different noise levels, we perform intensive
datasets simulations, with a preset and known number of components to extract,
both in the case n>p (n being the number of subjects and p the number of
original predictors), and for datasets with n<p. We then use t-tests to compare
the performance of our approach to other classical criteria. The property
of robustness is particularly tested through resampling processes on a real
allelotyping dataset. Our conclusion is that our criterion also delivers better
global predictive performances, both in the PLSR and PLSGLR (Logistic and
Poisson) frameworks.
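The criterion itself is algorithmic and specific to PLSR; the sketch below shows only the generic non-parametric bootstrap idea of testing a component's contribution at a preset level alpha. The slope statistic and percentile confidence interval are assumptions of this sketch, not the paper's exact test.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_component_test(t, y, alpha=0.05, B=2000):
    """Resample (t_i, y_i) pairs, recompute the slope of y on the
    component scores t, and retain the component only if the
    (1 - alpha) percentile CI of the slope excludes zero."""
    n = len(t)
    slopes = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        tb, yb = t[idx], y[idx]
        slopes[b] = np.cov(tb, yb, ddof=0)[0, 1] / np.var(tb)
    lo, hi = np.percentile(slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return not (lo <= 0.0 <= hi)    # True -> retain the component

n = 100
t = rng.normal(size=n)
signal_y = 2.0 * t + rng.normal(scale=0.5, size=n)   # informative component
noise_y = rng.normal(size=n)                         # uninformative one
print(bootstrap_component_test(t, signal_y), bootstrap_component_test(t, noise_y))
```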
|
In this work we report the synthesis and structural, electronic and magnetic
properties of La1.5Ca0.5CoMnO6 double-perovskite. This is a re-entrant spin
cluster material which exhibits a non-negligible negative exchange bias effect
when it is cooled in zero magnetic field from an unmagnetized state down to low
temperature. X-ray powder diffraction, X-ray photoelectron spectroscopy and
magnetometry results indicate mixed valence state at Co site, leading to
competing magnetic phases and uncompensated spins at the magnetic interfaces.
We compare the results for this Ca-doped material with those reported for the
resemblant compound La1.5Sr0.5CoMnO6, and discuss the much smaller spontaneous
exchange bias effect observed for the former in terms of its structural and
magnetic particularities. For La1.5Ca0.5CoMnO6, when successive magnetization
loops are carried out, the spontaneous exchange bias field inverts its sign from
negative to positive from the first to the second measurement. We discuss this
behavior based on the disorder at the magnetic interfaces, related to the
presence of a glassy phase. This compound also exhibits a large conventional
exchange bias, for which there is no sign inversion of the exchange bias field
for consecutive cycles.
|
In previous work we established the existence of a Ricci flow starting with a
Riemann surface coupled with a nonatomic Radon measure as a conformal factor.
In this paper we prove uniqueness. Combining these two works yields a canonical
smoothing of such rough surfaces that also regularises their geometry at
infinity.
|
This work proposes a new algorithm for training a re-weighted L2 Support
Vector Machine (SVM), inspired by the re-weighted Lasso algorithm of Cand\`es
et al. and on the equivalence between Lasso and SVM shown recently by Jaggi. In
particular, the margin required for each training vector is set independently,
defining a new weighted SVM model. These weights are selected to be binary, and
they are automatically adapted during the training of the model, resulting in a
variation of the Frank-Wolfe optimization algorithm with essentially the same
computational complexity as the original algorithm. As shown experimentally,
this algorithm is computationally cheaper to apply since it requires fewer
iterations to converge, and it produces models with a sparser representation in
terms of support vectors and which are more stable with respect to the
selection of the regularization hyper-parameter.
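The training procedure is described as a variation of Frank-Wolfe; below is a generic Frank-Wolfe iteration on the probability simplex to illustrate that optimization template. The toy objective and the classical step size 2/(k+2) are standard choices, not the paper's SVM model.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=2000):
    """Generic Frank-Wolfe on the probability simplex: the linear
    minimization oracle returns the vertex with the most negative gradient
    coordinate, and the iterate moves toward it with step 2/(k+2)."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0           # vertex minimizing <g, s>
        x += 2.0 / (k + 2.0) * (s - x)  # convex step keeps x on the simplex
    return x

# toy problem: minimize ||x - b||^2 over the simplex; since b is itself on
# the simplex, the solution is x = b
b = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda v: 2 * (v - b), np.array([1.0, 0.0, 0.0]))
print(np.round(x, 3))
```

Each iteration costs only a gradient evaluation and an argmin, which is why adapting the weights inside this loop adds essentially no computational overhead.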
|
In this work we compare and characterize the behavior of Langevin and
Dissipative Particle Dynamics (DPD) thermostats in a broad range of
non-equilibrium simulations of polymeric systems. Polymer brushes in relative
sliding motion, polymeric liquids in Poiseuille and Couette flows, and
brush-melt interfaces are used as model systems to analyze the efficiency and
limitations of different Langevin and DPD thermostat implementations. Widely
used coarse-grained bead-spring models under good and poor solvent conditions
are employed to assess the effects of the thermostats. We considered
equilibrium, transient, and steady state examples for testing the ability of
the thermostats to maintain constant temperature and to reproduce the
underlying physical phenomena in non-equilibrium situations. The common
practice of switching-off the Langevin thermostat in the flow direction is also
critically revisited. The efficiency of different weight functions for the DPD
thermostat is quantitatively analyzed as a function of the solvent quality and
the non-equilibrium situation.
|
We construct all skew braces of size $pq$ (where $p>q$ are primes) by using
Byott's classification of Hopf--Galois extensions of the same degree. For
$p\not\equiv 1 \pmod{q}$ there exists only one skew brace, the trivial
one. When $p\equiv 1 \pmod{q}$, we have $2q+2$ skew braces, two of which are of
cyclic type (so, contained in Rump's classification) and $2q$ of non-abelian
type.
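The stated counts can be packaged directly:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def skew_brace_count(p, q):
    """Number of skew braces of size p*q (p > q primes) as stated above:
    1 when p is not congruent to 1 mod q, and 2q + 2 otherwise."""
    assert is_prime(p) and is_prime(q) and p > q
    return 1 if p % q != 1 else 2 * q + 2

print(skew_brace_count(5, 3), skew_brace_count(7, 3), skew_brace_count(11, 5))
# → 1 8 12
```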
|
When considering the accuracy of sensors in an automated vehicle (AV), it is
not sufficient to evaluate the performance of any given sensor in isolation.
Rather, the performance of any individual sensor must be considered in the
context of the overall system design. Techniques like redundancy and different
sensing modalities can reduce the chances of a sensing failure. Additionally,
the use of safety models is essential to understanding whether any particular
sensing failure is relevant. Only when the entire system design is taken into
account can one properly understand the meaning of safety-relevant sensing
failures in an AV. In this paper, we will consider what should actually
constitute a sensing failure, how safety models play an important role in
mitigating potential failures, how a system-level approach to safety will
deliver a safe and scalable AV, and what an acceptable sensing failure rate
should be considering the full picture of an AV's architecture.
|
Stimulated Emission Depletion (STED) microscopy has emerged as a powerful
technique providing visualization of biological structures at the molecular
level in living samples. In this technique, the diffraction limit is broken by
selectively depleting the fluorophore's excited state by stimulated emission,
typically using a donut-shaped optical vortex beam. STED microscopy performs
remarkably well in degraded optical conditions such as living tissues.
Nevertheless, photo-bleaching and acquisition time are among the main
challenges for imaging large volumetric fields of view. In this regard, random
light beams like speckle patterns have proved to be especially promising for
three-dimensional imaging in compressed sensing schemes. Taking advantage of
the high spatial density of intrinsic optical vortices in speckles -- optical
vortices being the beam structure most commonly used in STED microscopy -- we
propose here a novel scheme that performs STED microscopy using speckles. Two
speckle patterns are generated at the excitation and the depletion wavelengths,
respectively, exhibiting inverted intensity contrasts. We illustrate spatial
resolution enhancement using complementary speckles as excitation and depletion
beams on both fluorescent beads and biological samples. Our results establish a
robust method for super-resolved three-dimensional imaging with promising
perspectives in terms of temporal resolution and photobleaching.
|
Y-kapellasite [Y3Cu9(OH)19Cl8] is a frustrated antiferromagnetic insulator
which remains paramagnetic down to a remarkably low N\'eel temperature of about
2 K. Having studied this material in the paramagnetic regime, in which phonons
are the only possible heat carriers, we report the observation of a planar
parallel thermal Hall effect coming unambiguously from phonons. This is an
advantage over the Kitaev quantum spin liquid candidates {\alpha}-RuCl3 and
Na2Co2TeO6 where in principle other heat carriers can be involved [1-4]. As it
happens, Y-kapellasite undergoes a structural transition attributed to the
positional freezing of a hydrogen atom below about 33 K. Above this transition,
the global crystal symmetry forbids the existence of a planar parallel signal -
the same situation as in Na2Co2TeO6 and cuprates [3-5]. This points to the
notion of a local symmetry breaking at the root of the phonon Hall effect. In
this context, the advantage of Y-kapellasite over Na2Co2TeO6 (with high levels
of Na disorder and stacking faults) and cuprates (with high levels of disorder
coming from dopants and oxygen vacancies) is its clean structure, where the
only degree of freedom available for local symmetry breaking is this hydrogen
atom randomly distributed over six equivalent positions above 33 K. This
provides a specific and concrete case for the general idea of local symmetry
breaking leading to the phonon Hall effect in a wide range of insulators.
|
We consider expansive group actions on a compact metric space containing a
special fixed point denoted by $0$, and endomorphisms of such systems whose
forward trajectories are attracted toward $0$. Such endomorphisms are called
asymptotically nilpotent, and we study the conditions in which they are
nilpotent, that is, map the entire space to $0$ in a finite number of
iterations. We show that for a large class of discrete groups, this property of
nil-rigidity holds for all expansive actions that satisfy a natural
specification-like property and have dense homoclinic points. Our main result
in particular shows that the class includes all residually finite solvable
groups and all groups of polynomial growth. For expansive actions of the group
$\mathbb{Z}$, we show that a very weak gluing property suffices for
nil-rigidity. For $\mathbb{Z}^2$-subshifts of finite type, we show that the
block-gluing property suffices. The study of nil-rigidity is motivated by two
aspects of the theory of cellular automata and symbolic dynamics: It can be
seen as a finiteness property for groups, which is representative of the theory
of cellular automata on groups. Nilpotency also plays a prominent role in the
theory of cellular automata as dynamical systems. As a technical tool of
possible independent interest, the proof involves the construction of tiered
dynamical systems where several groups act on nested subsets of the original
space.
|
Although automatic shot transition detection approaches have been
investigated for more than two decades, an effective universal human-level
model has not yet been proposed. Even for common shot transitions like hard cuts or
simple gradual changes, the potential diversity of analyzed video contents may
still lead to both false hits and false dismissals. Recently, deep
learning-based approaches significantly improved the accuracy of shot
transition detection using 3D convolutional architectures and artificially
created training data. Nevertheless, one hundred percent accuracy is still an
unreachable ideal. In this paper, we share the current version of our deep
network TransNet V2 that reaches state-of-the-art performance on respected
benchmarks. A trained instance of the model is provided so it can be instantly
utilized by the community for a highly efficient analysis of large video
archives. Furthermore, the network architecture, as well as our experience with
the training process, are detailed, including simple code snippets for
convenient usage of the proposed model and visualization of results.
|
We consider the nonsmooth convex composition optimization problem where the
objective is a composition of two finite-sum functions and analyze stochastic
compositional variance reduced gradient (SCVRG) methods for them. SCVRG and its
variants have recently drawn much attention given their edge over stochastic
compositional gradient descent (SCGD); but the theoretical analysis exclusively
assumes strong convexity of the objective, which excludes several important
examples such as Lasso, logistic regression, principal component analysis and
deep neural nets. In contrast, we prove non-asymptotic incremental first-order
oracle (IFO) complexity of SCVRG or its novel variants for nonsmooth convex
composition optimization and show that they are provably faster than SCGD and
gradient descent. More specifically, our method achieves the total IFO
complexity of $O\left((m+n)\log\left(1/\epsilon\right)+1/\epsilon^3\right)$
which improves that of $O\left(1/\epsilon^{3.5}\right)$ and
$O\left((m+n)/\sqrt{\epsilon}\right)$ obtained by SCGD and accelerated gradient
descent (AGD) respectively. Experimental results confirm that our methods
outperform several existing methods, e.g., SCGD and AGD, on sparse
mean-variance optimization problem.
|
Homoepitaxy of W(110) and Mo(110) is performed in a kinetically-limited
regime to yield a nanotemplate in the form of a uniaxial array of hills and
grooves aligned along the [001] direction. The topography and organization of
the grooves were studied with RHEED and STM. The nanofacets, of type {210}, are
tilted 18° away from (110). The lateral period could be varied from 4 to
12 nm by tuning the deposition temperature. Magnetic nanowires were formed in
the grooves by deposition of Fe at 150°C on such templates. Fe/W wires
display an easy axis along [001] and a mean blocking temperature Tb = 100 K.
|
Since the late 16th century, scientists have continuously innovated and
developed new microscope types for various applications. Creating a new
architecture from the ground up requires substantial scientific expertise and
creativity, often spanning years or even decades. In this study, we propose an
alternative approach called "Differentiable Microscopy," which introduces a
top-down design paradigm for optical microscopes. Using all-optical phase
retrieval as an illustrative example, we demonstrate the effectiveness of
data-driven microscopy design through $\partial\mu$. Furthermore, we conduct
comprehensive comparisons with competing methods, showcasing the consistent
superiority of our learned designs across multiple datasets, including
biological samples. To substantiate our ideas, we experimentally validate the
functionality of one of the learned designs, providing a proof of concept. The
proposed differentiable microscopy framework supplements the creative process
of designing new optical systems and would perhaps lead to unconventional but
better optical designs.
|
Building up on previous work we propose a Dark Matter (DM) model with gauged
matter parity and dynamical gauge coupling unification, driven by the same
physics responsible for scotogenic neutrino mass generation. Our construction
is based on the extended gauge group \3311, whose spontaneous breaking leaves a
residual conserved matter parity, $M_{P}$, stabilizing the DM particle
candidates of the model. A key role is played by the Majorana ${\rm
SU(3)_{L}}$-octet leptons, in allowing successful gauge coupling unification
and one-loop scotogenic neutrino mass generation. Theoretical consistency
allows for a \emph{plethora} of new particles at the $\lesssim \mathcal{O}(10)$
TeV scale, hence accessible to future collider and low-energy experiments.
|
Interpretability and explainability of AI are becoming increasingly important
in light of the rapid development of large language models (LLMs). This paper
investigates the interpretation of LLMs in the context of knowledge-based
question answering. The main hypothesis of the study is that correct and
incorrect model behavior can be distinguished at the level of hidden states.
The quantized models LLaMA-2-7B-Chat, Mistral-7B, Vicuna-7B and the MuSeRC
question-answering dataset are used to test this hypothesis. The results of the
analysis support the proposed hypothesis. We also identify the layers which
have a negative effect on the model's behavior. As a prospect of practical
application of the hypothesis, we propose to train such "weak" layers
additionally in order to improve the quality of the task solution.
|
The aim of the presented study is to identify some properties of the dynamic
behavior of the cancellous bone and to identify the link between this
mechanical behavior and the microstructural properties. 7 cylinders of bovine
cancellous bone (diameter 41 mm, thickness 14 mm) were tested in quasi-static
loading (0.001 s-1), 8 in dynamic loading (1000 s-1) and 10 in dynamic loading
(1500 s-1) with a confinement system. All the specimens were submitted to
imaging before the tests (pQCT) in order to identify two microstructural
properties: Bone Volume / Total Volume (BV/TV) and Trabeculae Thickness
(Tb.Th). The behavior of bovine cancellous bone under compression exhibits a
foam-type behavior over the whole range of strain rates explored in this study.
The results show that for the quasi-static tests only the stresses are
correlated with BV/TV. For the unconfined dynamic tests, the yield stress is
correlated to BV/TV and the plateau stress to BV/TV and Tb.Th. For the confined
tests, only the plateau stress is correlated to BV/TV and Tb.Th. The effect of
strain rate is an increase of the yield stress and the plateau stress. The
confinement has an effect on the measured values of compression stresses that
confirms the importance of marrow flow in the overall behavior.
|
Robotic exploration of underground environments is a particularly challenging
problem due to communication, endurance, and traversability constraints which
necessitate high degrees of autonomy and agility. These challenges are further
exacerbated by the need to minimize human intervention for practical
applications. While legged robots have the ability to traverse extremely
challenging terrain, they also engender new challenges for planning,
estimation, and control. In this work, we describe a fully autonomous system
for multi-robot mine exploration and mapping using legged quadrupeds, as well
as a distributed database mesh networking system for reporting data. In
addition, we show results from the DARPA Subterranean Challenge (SubT) Tunnel
Circuit demonstrating localization of artifacts after traversals of hundreds of
meters. These experiments describe fully autonomous exploration of an unknown
Global Navigation Satellite System (GNSS)-denied environment undertaken by
legged robots.
|
We introduce the notion of semibreak divisors on metric graphs (tropical
curves) and prove that every effective divisor class (of degree at most the
genus) has a semibreak divisor representative. This appropriately generalizes
the notion of break divisors (in degree equal to genus). Our method of proof is
new, even for the special case of break divisors. We provide an algorithm to
efficiently compute such semibreak representatives. Semibreak divisors provide
the tool to establish some basic properties of effective loci inside Picard
groups of metric graphs. We prove that effective loci are pure-dimensional
polyhedral sets. We also prove that a `generic' divisor class (in degree at
most the genus) has rank zero, and that the Abel-Jacobi map is `birational'
onto its image. These are analogues of classical results for Riemann surfaces.
|
We report the discovery of a locus of stars in the SDSS g-r vs. u-g
color-color diagram that connects the colors of white dwarfs and M dwarfs.
While its contrast with respect to the main stellar locus is only ~1:2300, this
previously unrecognized feature includes 863 stars from the SDSS Data Release
1. The position and shape of the feature are in good agreement with predictions
of a simple binary star model that consists of a white dwarf and an M dwarf,
with the components' luminosity ratio controlling the position along this
binary system locus. SDSS DR1 spectra for 47 of these objects strongly support
this model. The absolute magnitude--color distribution inferred for the white
dwarf component is in good agreement with the models of Bergeron et al. (1995).
|
An x-ray pulse-shaping scheme is put forward for imprinting an optical
frequency comb onto the radiation emitted on a driven x-ray transition, thus
producing an x-ray frequency comb. A four-level system is used to describe the
level structure of N ions driven by narrow-bandwidth x rays, an optical
auxiliary laser, and an optical frequency comb. By including many-particle
enhancement of the emitted resonance fluorescence, a spectrum is predicted
consisting of equally spaced narrow lines which are centered on an x-ray
transition energy and separated by the same tooth spacing as the driving
optical frequency comb. Given a known x-ray reference frequency, our comb could
be employed to determine an unknown x-ray frequency. While relying on the
quality of the light fields used to drive the ensemble of ions, the model has
validity at energies from the 100 eV to the keV range.
|
A viewing graph is a set of unknown camera poses, as the vertices, and the
observed relative motions, as the edges. Solving the viewing graph is an
essential step in a Structure-from-Motion procedure, where a set of relative
motions is obtained from a collection of 2D images. Almost all methods in the
literature solve for the rotations separately, through a rotation averaging
process, and use them for solving the positions. Obtaining positions is the
challenging part because the translation observations only tell the direction
of the motions. It becomes more challenging when the set of edges comprises
pairwise translation observations between both near and far cameras. In this
paper an iterative method is proposed that overcomes these issues. Also a
method is proposed which obtains the rotations and positions simultaneously.
Experimental results show the state-of-the-art performance of the proposed
methods.
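A minimal numpy sketch (not the method proposed here) makes the direction-only nature of translation observations concrete: with known rotations (taken as the identity below) and noise-free unit directions $d_{ij}$, the camera centers satisfy $d_{ij} \times (c_j - c_i) = 0$ and are recoverable only up to a global translation and scale.

```python
import numpy as np

def skew(v):
    # Cross-product matrix: skew(v) @ x == np.cross(v, x)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Ground-truth camera centers (rotations taken as identity for simplicity).
centers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
edges = [(0, 1), (0, 2), (1, 2)]

# Observed translations: unit direction vectors only, no scale.
dirs = {(i, j): (centers[j] - centers[i]) / np.linalg.norm(centers[j] - centers[i])
        for i, j in edges}

# Constraints d_ij x (c_j - c_i) = 0, with c_0 fixed at the origin (gauge).
n = len(centers) - 1
A = np.zeros((3 * len(edges), 3 * n))
for row, (i, j) in enumerate(edges):
    S = skew(dirs[(i, j)])
    if i > 0:
        A[3*row:3*row+3, 3*(i-1):3*i] = -S
    if j > 0:
        A[3*row:3*row+3, 3*(j-1):3*j] = S

# The remaining centers span the one-dimensional nullspace of A.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1].reshape(n, 3)
v /= v[0, 0]          # fix the leftover scale/sign ambiguity
print(np.round(v, 6))  # rows ~ (1, 0, 0) and (0, 1, 0): centers recovered
```

With noisy directions the same system is solved in a least-squares sense, which is where the near/far conditioning issue mentioned above enters.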
|
In this article the concept of enantiomorphism is developed in terms of
topological, rather than geometrical, concepts. Chirality is to be associated
with enantiomorphic pairs which induce Optical Activity, while Helicity is to
be associated with enantiomorphic pairs which induce a Faraday effect.
Experimentally, the existence of enantiomorphic pairs is associated with the
lack of a center of symmetry, which also serves as a necessary condition for
Optical Activity. However, Faraday effects may or may not require a lack of a
center of symmetry. The two species of enantiomorphic pairs are distinct, as
the rotation of the plane of polarization by Optical Activity is a reciprocal
phenomenon, while rotation of the plane of polarization by the Faraday effect
is a non-reciprocal phenomenon. From a topological viewpoint, Maxwell's
electrodynamics indicates that the concept of Chirality is to be associated
with a third rank tensor density of Topological Spin induced by the interaction
of the 4 vector potentials {A,phi} and the field excitations {D,H}. The
distinct concept of Helicity is to be associated with the third rank tensor
field of Topological Torsion induced by the interaction of the 4 vector
potentials and field intensities {E,B}.
|
Kotlin is a novel language that represents an alternative to Java, and has
been recently adopted as a first-class programming language for Android
applications. Kotlin is achieving a significant diffusion among developers, and
several studies have highlighted various advantages of the language when
compared to Java.
The objective of this paper is to analyze a set of open-source Android apps,
to evaluate their transition to the Kotlin programming language throughout
their lifespan and understand whether the adoption of Kotlin has impacts on the
success of Android apps.
We mined all the projects from the F-Droid repository of Android open-source
applications, and we found the corresponding projects on the official Google
Play Store and on the GitHub platform. We defined a set of eight metrics to
quantify the relevance of Kotlin code in the latest update and through all
releases of an application. Then, we statistically analyzed the correlation
between the presence of Kotlin code in a project and popularity metrics mined
from the platforms where the apps were released.
Of a set of 1232 projects that were updated after October 2017, nearly 20%
adopted Kotlin and about 12% had more Kotlin code than Java; most of the
projects that adopted Kotlin quickly transitioned from Java to the new
language. The projects featuring Kotlin had on average higher popularity
metrics; a statistically significant correlation has been found between the
presence of Kotlin and the number of stars on the GitHub repository.
The Kotlin language seems able to guarantee a seamless migration from Java
for Android developers. With an inspection on a large set of open-source
Android apps, we observed that the adoption of the Kotlin language is rapid
(when compared to the average lifespan of an Android project) and seems to come
at no cost in terms of popularity among the users and other developers.
|
We propose a space mapping-based optimization algorithm for microscopic
interacting particle dynamics which are inappropriate for direct optimization.
This is relevant, for example, in applications with bounded domains, where
the microscopic optimization is difficult. The space mapping algorithm exploits
the relationship of the microscopic description of the interacting particle
system and the corresponding macroscopic description as a partial differential
equation in the "many particle limit". We validate the approach with the help
of a toy problem that allows for direct optimization. Then we study the
performance of the algorithm in two applications. An evacuation dynamic is
considered and the transportation of goods on a conveyor belt is optimized. The
numerical results underline the feasibility of the proposed approach.
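The space mapping idea can be sketched on a toy one-dimensional problem (purely illustrative: the models and target below are hypothetical stand-ins for the microscopic simulation and its cheap macroscopic surrogate):

```python
# Toy aggressive-space-mapping iteration in 1D (illustrative only).
def fine(x):      # "microscopic" model: expensive to evaluate in practice
    return 1.1 * x + 0.3

def coarse(z):    # "macroscopic" surrogate: cheap to solve
    return z

target = 5.0
z_star = 5.0      # coarse-model solution of coarse(z) = target, found once

def parameter_extraction(x):
    # Coarse input whose response matches the fine response at x;
    # trivial here because the coarse model is the identity.
    return fine(x)

x = 0.0
for _ in range(50):  # shift x by the mismatch between the mapped input and z*
    x = x - (parameter_extraction(x) - z_star)

print(round(fine(x), 6))  # 5.0: the fine model now meets the target
```

Convergence of this fixed-point iteration relies on the coarse model being a reasonable approximation of the fine one, which mirrors the role of the "many particle limit" PDE above.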
|
In this paper, we develop the idea to partition the edges of a weighted graph
in order to uncover overlapping communities of its nodes. Our approach is based
on the construction of different types of weighted line graphs, i.e. graphs
whose nodes are the links of the original graph, that encapsulate differently
the relations between the edges. Weighted line graphs are argued to provide an
alternative, valuable representation of the system's topology, and are shown to
have important applications in community detection, as the usual node partition
of a line graph naturally leads to an edge partition of the original graph.
This identification allows us to use traditional partitioning methods in order
to address the long-standing problem of the detection of overlapping
communities. We apply it to the analysis of different social and geographical
networks.
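The edge-partition idea can be sketched with networkx, where greedy modularity maximization stands in for the "traditional partitioning methods" mentioned above (illustrative only, on an unweighted toy graph):

```python
# Overlapping community detection via the line graph, using networkx.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two triangles sharing node 0, so node 0 should sit in both communities.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])

L = nx.line_graph(G)  # nodes of L are the edges of G

# A node partition of the line graph ...
edge_communities = greedy_modularity_communities(L)

# ... induces an edge partition of G, hence overlapping node communities.
node_communities = []
for community in edge_communities:
    members = set()
    for u, v in community:
        members.update((u, v))
    node_communities.append(members)

print(node_communities)  # node 0 can appear in more than one community
```

For weighted line graphs as proposed here, the partitioning step is the same; only the edge weights of L change.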
|
We compute the deterministic approximation of products of Sobolev functions
of large Wigner matrices $W$ and provide an optimal error bound on their
fluctuation with very high probability. This generalizes Voiculescu's seminal
theorem [Voiculescu 1991] from polynomials to general Sobolev functions, as
well as from tracial quantities to individual matrix elements. Applying the
result to $\exp(\mathrm{i} tW)$ for large $t$, we obtain a precise decay rate
for the overlaps of several deterministic matrices with temporally well
separated Heisenberg time evolutions; thus we demonstrate the thermalisation
effect of the unitary group generated by Wigner matrices.
|
We calculate the Casimir-Lifshitz pressure in a system consisting of two
different 1D dielectric lamellar gratings having two different temperatures and
immersed in an environment having a third temperature. The calculation of the
pressure is based on the knowledge of the scattering operators, deduced using
the Fourier Modal Method. The behavior of the pressure is characterized in
detail as a function of the three temperatures of the system as well as the
geometrical parameters of the two gratings. We show that the interplay between
non-equilibrium effects and geometrical periodicity offers a rich scenario for
the manipulation of the force. In particular, we find regimes where the force
can be strongly reduced for large ranges of temperatures. Moreover, a repulsive
pressure can be obtained, whose features can be tuned by controlling the
degrees of freedom of the system. Remarkably, the transition distance between
attraction and repulsion can be decreased with respect to the case of two
slabs, implying an experimental interest for the observation of repulsion.
|
In this paper we study incidences for hyperbolas in $\mathbf{F}_p$ and show
how linear sum--product methods work for such curves. As an application we give
a purely combinatorial proof of a nontrivial upper bound for bilinear forms of
Kloosterman sums.
|
Scientists often explore and analyze large-scale scientific simulation data
by leveraging two- and three-dimensional visualizations. The data and tasks can
be complex and therefore best supported using myriad display technologies, from
mobile devices to large high-resolution display walls to virtual reality
headsets. Using a simulation of neuron connections in the human brain, we
present our work leveraging various web technologies to create a multi-platform
scientific visualization application. Users can spread visualization and
interaction across multiple devices to support flexible user interfaces and
both co-located and remote collaboration. Drawing inspiration from responsive
web design principles, this work demonstrates that a single codebase can be
adapted to develop scientific visualization applications that operate
everywhere.
|
The feeling that something belongs to someone is called "psychological
ownership." A common assumption is that writing with generative AI lowers
psychological ownership, but the extent to which this occurs and the role of
prompt length are unclear. We report on two experiments to better understand
the relationship between psychological ownership and prompt length.
Participants wrote short stories either completely by themselves or wrote
prompts of varying lengths, enforced through word limits. Results show that
when participants wrote longer prompts, they had higher levels of psychological
ownership. Their comments suggest they felt encouraged to think more about
their prompts and include more details about the story plot. However, these
benefits plateaued when the prompt length was 75-100% of the target story
length. Based on these results, we propose prompt entry interface designs that
nudge users with soft and hard constraints to write longer prompts for
increased psychological ownership.
|
We demonstrate experimentally that the long-range hydrodynamic interactions
in an incompressible quasi 2D isotropic fluid result in an anisotropic viscous
drag acting on elongated particles. The anisotropy of the drag increases with
the ratio of the particle length to the hydrodynamic scale given by
the Saffman-Delbr\"uck length. The micro-rheology data for translational and
rotational drags collected over three orders of magnitude of the effective
particle length demonstrate the validity of the current theoretical approaches
to the hydrodynamics in restricted geometry. The results also demonstrate
crossovers between the hydrodynamical regimes determined by the characteristic
length scales.
|
The emerging field of free-electron quantum optics enables electron-photon
entanglement and holds the potential for generating nontrivial photon states
for quantum information processing. Although recent experimental studies have
entered the quantum regime, rapid theoretical developments predict that
qualitatively unique phenomena only emerge beyond a certain interaction
strength. It is thus pertinent to identify the maximal electron-photon
interaction strength and the materials, geometries, and particle energies that
enable one to approach it. We derive an upper limit to the quantum vacuum
interaction strength between free electrons and single-mode photons, which
illuminates the conditions for the strongest interaction. Crucially, we obtain
an explicit energy selection recipe for electrons and photons to achieve
maximal interaction at arbitrary separations and identify two optimal regimes
favoring either fast or slow electrons over those with intermediate velocities.
We validate the limit by analytical and numerical calculations on canonical
geometries and provide near-optimal designs indicating the feasibility of
strong quantum interactions. Our findings offer fundamental intuition for
maximizing the quantum interaction between free electrons and photons and
provide practical design rules for future experiments on electron-photon and
electron-mediated photon-photon entanglement. They should also enable the
evaluation of key metrics for applications such as the maximum power of
free-electron radiation sources and the maximum acceleration gradient of
dielectric laser accelerators.
|
I compute the two-loop effective potential in the Landau gauge for a general
renormalizable field theory in four dimensions. Results are presented for the
\bar{MS} renormalization scheme based on dimensional regularization, and for
the \bar{DR} and \bar{DR}' schemes based on regularization by dimensional
reduction. The last of these is appropriate for models with softly broken
supersymmetry, such as the Minimal Supersymmetric Standard Model. I find the
parameter redefinition which relates the \bar{DR} and \bar{DR}' schemes at
two-loop order. I also discuss the renormalization group invariance of the
two-loop effective potential, and compute the anomalous dimensions for scalars
and the beta function for the vacuum energy at two-loop order in softly broken
supersymmetry. Several illustrative examples and consistency checks are
included.
|
Mn$_3$O$_4$ is a spin frustrated magnet that adopts a tetragonally distorted
spinel structure at ambient conditions and a CaMn$_2$O$_4$-type postspinel
structure at high pressure. We conducted both optical measurements and
\emph{ab} \emph{initio} calculations, and systematically studied the electronic
band structures of both the spinel and postspinel Mn$_3$O$_4$ phases. For both
phases, theoretical electronic structures are consistent with the optical
absorption spectra, and display characteristic band-splitting of the conduction
band. The band gap obtained from the absorption spectra is 1.91(6) eV for the
spinel phase, and 0.94(2) eV for the postspinel phase. Both phases are
charge-transfer type insulators. The Mn 3\emph{d} $t_{2g}$ and O 2\emph{p}
form antibonding orbitals situated at the conduction band with higher energy.
|
We address the problem of extracting key steps from unlabeled procedural
videos, motivated by the potential of Augmented Reality (AR) headsets to
revolutionize job training and performance. We decompose the problem into two
steps: representation learning and key steps extraction. We propose a training
objective, Bootstrapped Multi-Cue Contrastive (BMC2) loss to learn
discriminative representations for various steps without any labels. Different
from prior works, we develop techniques to train a light-weight temporal module
which uses off-the-shelf features for self supervision. Our approach can
seamlessly leverage information from multiple cues like optical flow, depth or
gaze to learn discriminative features for key-steps, making it amenable for AR
applications. We finally extract key steps via a tunable algorithm that
clusters the representations and samples. We show significant improvements over
prior works for the task of key step localization and phase classification.
Qualitative results demonstrate that the extracted key steps are meaningful and
succinctly represent various steps of the procedural tasks.
|
The adsorption of a single ideal polymer chain on energetically heterogeneous
and rough surfaces is investigated using a variational procedure introduced by
Garel and Orland (Phys. Rev. B 55 (1997), 226). The mean polymer size is
calculated perpendicular and parallel to the surface and is compared to the
Gaussian conformation and to the results for polymers at flat and energetically
homogeneous surfaces. The disorder-induced enhancement of adsorption is
confirmed and is shown to be much more significant for a heterogeneous
interaction strength than for spatial roughness. This difference also applies
to the localization transition, where the polymer size becomes independent of
the chain length. The localization criterion can be quantified, depending on an
effective interaction strength and the length of the polymer chain.
|
In this paper we provide an updated analysis of the neutrino magnetic moments
(NMMs), discussing both the constraints on the magnitudes of the three
transition moments $\Lambda_i$ as well as the role of the CP-violating phases
present both in the mixing matrix and in the NMM matrix. The scattering of
solar neutrinos off electrons in Borexino provides the most stringent
restrictions, due to its robust statistics and the low energies observed, below
1 MeV. Our new limit on the effective neutrino magnetic moment which follows
from the most recent Borexino data is 3.1 x 10^-11 mu_B at 90% C.L. This
corresponds to the individual transition magnetic moment constraints:
|Lambda_1| < 5.6 x10^-11 mu_B, |Lambda_2| < 4.0 x 10^-11 mu_B, and |Lambda_3| <
3.1 x 10^-11 mu_B (90% C.L.), irrespective of any complex phase. Indeed, the
incoherent admixture of neutrino mass eigenstates present in the solar flux
makes Borexino insensitive to the Majorana phases present in the NMM matrix.
For this reason we also provide a global analysis including the case of reactor
and accelerator neutrino sources, and presenting the resulting constraints for
different values of the relevant CP phases. Improved reactor and accelerator
neutrino experiments will be needed in order to underpin the full profile of
the neutrino electromagnetic properties.
|
Multi-Version Concurrency Control (MVCC) is a common mechanism for achieving
linearizable range queries in database systems and concurrent data-structures.
The core idea is to keep previous versions of nodes to serve range queries,
while still providing atomic reads and updates. Existing concurrent
data-structure implementations, that support linearizable range queries, are
either slow, use locks, or rely on blocking reclamation schemes. We present
EEMARQ, the first scheme that uses MVCC with lock-free memory reclamation to
obtain a fully lock-free data-structure supporting linearizable inserts,
deletes, contains, and range queries. Evaluation shows that EEMARQ outperforms
existing solutions across most workloads, with lower space overhead and while
providing full lock freedom.
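The core MVCC idea (version lists served at a snapshot timestamp) can be sketched in a few lines of single-threaded Python; this illustrates the versioning only, not EEMARQ's lock-free algorithm or its memory reclamation scheme:

```python
import itertools

# Global logical clock assigning a timestamp to every operation.
_clock = itertools.count(1)

class VersionedMap:
    def __init__(self):
        self._versions = {}  # key -> [(write_ts, value or None)]; None = tombstone

    def put(self, key, value):
        ts = next(_clock)
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def delete(self, key):
        ts = next(_clock)
        self._versions.setdefault(key, []).append((ts, None))  # tombstone
        return ts

    def get(self, key, ts):
        # Latest version written at or before the snapshot timestamp.
        for write_ts, value in reversed(self._versions.get(key, [])):
            if write_ts <= ts:
                return value
        return None

    def range_query(self, lo, hi):
        ts = next(_clock)  # take a snapshot; later writes stay invisible
        out = {}
        for key in sorted(self._versions):
            if lo <= key <= hi:
                value = self.get(key, ts)
                if value is not None:
                    out[key] = value
        return out

m = VersionedMap()
m.put(1, "a"); m.put(2, "b"); t3 = m.put(3, "c")
m.delete(2)                 # appends a tombstone at a later timestamp
print(m.get(2, t3))         # b: the version visible at time t3
print(m.range_query(1, 3))  # {1: 'a', 3: 'c'}: the tombstone hides key 2
```

The concurrency challenge that EEMARQ addresses is doing exactly this (atomic version installation, snapshot reads, and reclamation of stale versions) without locks.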
|
A new fully quantum method is presented for describing the penetration of a
packet from an internal well to the outside, with tunneling through a barrier
of arbitrary shape, as used in problems of quantum cosmology. The method
allows one to determine the amplitudes of the wave function, the penetrability
$T_{\rm bar}$ and reflection $R_{\rm bar}$ relative to the barrier (accuracy
of the method: $|T_{\rm bar}+R_{\rm bar}-1| < 1 \cdot 10^{-15}$), the
coefficient of penetration (i.e. the probability of the packet penetrating
from the internal well to the outside by tunneling), and the coefficient of
oscillations (describing the oscillating behavior of the packet inside the
internal well). Using the method, the evolution of the universe in the closed
Friedmann--Robertson--Walker model, with quantization in the presence of a
positive cosmological constant, radiation and a generalized Chaplygin gas
component, is studied. It is established (for the first time): (1) an
oscillating dependence of the penetrability on the starting localization of
the packet; (2) the presence of resonant values of the energy of radiation
$E_{\rm rad}$, at which the coefficient of penetration increases strongly.
From analysis of these results it follows that: (1) an initial condition must
be introduced into both non-stationary and stationary quantum models; (2)
there are definite values of the scale factor $a$ at which the start of the
expansion of the universe is most probable; (3) during the initial stage of
the expansion of the universe, its radius changes not continuously but passes
successively through definite discrete values, tending to a continuous
spectrum at later times.
|
Deep learning-based massive MIMO CSI feedback has received a lot of attention
in recent years. Now, there exists a plethora of CSI feedback models mostly
based on the auto-encoder (AE) architecture with an encoder network at the user
equipment (UE) and a decoder network at the gNB (base station). However, these
models are trained for a single user in a single-channel scenario, making them
ineffective in multi-user scenarios with varying channels and varying encoder
models across the users. In this work, we address this problem by exploiting
the techniques of multi-task learning (MTL) in the context of massive MIMO CSI
feedback. In particular, we propose methods to jointly train the existing
models in a multi-user setting while increasing the performance of some of the
constituent models. For example, through our proposed methods, CSINet, when
trained along with STNet, sees a $39\%$ increase in performance while
increasing the sum rate of the system by $0.07$ bps/Hz.
|
We extend the classical Agmon theorem on asymptotic completeness of two-body
Schroedinger operators to cover a larger class of perturbations. This is
accomplished by means of a suitable limiting absorption principle. The proof of
the latter relies on methods from harmonic analysis centered around the
Stein-Tomas and Bochner-Riesz theorems.
|
This work proposes a framework using temporal data and domain knowledge in
order to analyze complex agronomical features. The expertise is first
formalized in an ontology, under the form of concepts and relationships between
them, and then used in conjunction with raw data and mathematical models to
design a software sensor. Next, the software sensor outputs are put in relation
to product quality, assessed by quantitative measurements. This requires the
use of advanced data analysis methods, such as functional regression. The
methodology is applied to a case study involving an experimental design in
French vineyards. The temporal data consist of sap flow measurements, and the
goal is to explain fruit quality (sugar concentration and weight), using the
vine's water courses through the various phenological stages. The results are
discussed, as well as the method's genericity and robustness.
|
We introduce a new concept called scalability to adaptive control in this
paper. In particular, we analyze how to scale learning rates of adaptive weight
update laws of various adaptive control schemes with respect to given command
profiles to achieve a predictable closed-loop response. An illustrative
numerical example is provided to demonstrate the proposed concept, which
emphasizes that it can be an effective tool for validation and verification of
adaptive controllers.
|
In this paper, we incorporate seasonal variations of insolation into the
global climate model C-GOLDSTEIN. We use a new approach for modelling
insolation from the space perspective presented in the authors' earlier work
and build it into the existing climate model.
Realistic monthly temperature distributions have been obtained after running
C-GOLDSTEIN with the new insolation component. Also, the average accuracy of
modelling the insolation within the model has been increased by 2%. In
addition, new types of experiments can now be performed with C-GOLDSTEIN, such
as investigating the consequences of random variations of insolation on
temperature.
|
We present space-based ultraviolet/optical photometry and spectroscopy with
the Swift Ultra-Violet/Optical Telescope and Hubble Space Telescope,
respectively, along with ground-based optical photometry and spectroscopy and
near-infrared spectroscopy of supernova SN2017erp. The optical light curves and
spectra are consistent with a normal Type Ia supernova (SN Ia). Compared to
previous photometric samples in the near-ultraviolet (NUV), SN2017erp has
colors similar to the NUV-red category after correcting for Milky Way and host
dust reddening. We find the difference between SN2017erp and the NUV-blue
SN2011fe is not consistent with dust reddening alone but is similar to the SALT
color law, derived from rest-frame UV photometry of higher redshift SNe Ia.
This chromatic difference is dominated by the intrinsic differences in the UV
and only a small contribution from the expected dust reddening. Differentiating
the two can have important consequences for determining cosmological distances
with rest-frame UV photometry. This spectroscopic series is important for
analyzing SNe Ia with intrinsically redder NUV colors. We also show model
comparisons suggesting that metallicity could be the physical difference
between NUV-blue and NUV-red SNe Ia, with emission peaks from reverse
fluorescence near 3000 Angstroms implying a factor of ten higher metallicity in
the upper layers of SN2017erp compared to SN2011fe. Metallicity estimates are
very model-dependent, however, and there are multiple effects in the UV. Further
models and UV spectra of SNe Ia are needed to explore the diversity of SNe Ia
which show seemingly independent differences in the near-UV peaks and mid-UV
flux levels.
|
Analytical and numerical calculations are presented for the mechanical
response of fiber networks in a state of axisymmetric prestress, in the limit
where geometric non-linearities such as fiber rotation are negligible. This
allows us to focus on the anisotropy deriving purely from the non-linear
force-extension curves of individual fibers. The number of independent elastic
coefficients for isotropic, axisymmetric and fully anisotropic networks are
enumerated, before deriving expressions for the response to a locally applied
force that can be tested against e.g. microrheology experiments. Localised
forces can generate anisotropy away from the point of application, so numerical
integration of non-linear continuum equations is employed to determine the
stress field, and induced mechanical anisotropy, at points located directly
behind and in front of a force monopole. Results are presented for the wormlike
chain model in normalised forms, allowing them to be easily mapped to a range
of systems. Finally, the relevance of these findings to naturally occurring
systems and directions for future investigation are discussed.
|
Using the maximum-likelihood detector (MLD) for a soliton with timing jitter
and noise, we show that, apart from walk-off out of the bit interval, timing
jitter does not degrade the performance of the MLD. When the MLD is simulated
with the importance sampling method, even with a timing-jitter standard
deviation equal to the full-width at half-maximum (FWHM) of the soliton, the
signal-to-noise ratio (SNR) penalty is only about 0.2 dB. The MLD performs
better than the conventional scheme of lengthening the decision window, in
which the additive noise is proportional to the window width.
|
The asymptotic behavior of weak time-periodic solutions to the Navier-Stokes
equations with a drift term in the three-dimensional whole space is
investigated. The velocity field is decomposed into a time-independent and a
remaining part, and separate asymptotic expansions are derived for both parts
and their gradients. One observes that the behavior at spatial infinity is
determined by the corresponding Oseen fundamental solutions.
|
We demonstrate photonic crystal nanobeam cavities that support both TE- and
TM-polarized modes, each with a Quality factor greater than one million and a
mode volume on the order of the cubic wavelength. We show that these
orthogonally polarized modes have a tunable frequency separation and a high
nonlinear spatial overlap. We expect these cavities to have a variety of
applications in resonance-enhanced nonlinear optics.
|
We demonstrate the growth of BaZrS3 thin films by molecular beam epitaxy
(MBE). BaZrS3 forms in the orthorhombic distorted-perovskite structure with
corner-sharing ZrS6 octahedra. The single-step MBE process results in films
smooth on the atomic scale, with near-perfect BaZrS3 stoichiometry and an
atomically-sharp interface with the LaAlO3 substrate. The films grow
epitaxially via two competing growth modes: buffered epitaxy, with a
self-assembled interface layer that relieves the epitaxial strain, and direct
epitaxy, with rotated-cube-on-cube growth that accommodates the large lattice
constant mismatch between the oxide and the sulfide perovskites. This work sets
the stage for developing chalcogenide perovskites as a family of semiconductor
alloys with properties that can be tuned with strain and composition in
high-quality epitaxial thin films, as has been long-established for other
systems including Si-Ge, III-Vs, and II-VIs. The methods demonstrated here also
represent a revival of gas-source chalcogenide MBE.
|
We demonstrate the fractional Talbot effect of nonparaxial accelerating beams,
theoretically and numerically. It is based on the interference of nonparaxial
accelerating solutions of the Helmholtz equation in two dimensions. The effect
originates from the interfering lobes of a superposition of the solutions that
accelerate along concentric semicircular trajectories with different radii.
Talbot images form along certain central angles, which are referred to as the
Talbot angles. The fractional nonparaxial Talbot effect is obtained by choosing
the coefficients of beam components properly. A single nonparaxial accelerating
beam possesses a duality: it can be viewed as a Talbot effect of itself with
an infinite or zero Talbot angle. These results improve the understanding of
nonparaxial accelerating beams and the Talbot effect among them.
|
Considering the problem of finding all the integer solutions of the sum of
$M$ consecutive integer squares starting at $a^{2}$ being equal to a squared
integer $s^{2}$, it is shown that this problem has no solutions if
$M \equiv 3, 5, 6, 7, 8$, or $10 \pmod{12}$ and has integer solutions if
$M \equiv 0, 9, 24$, or $33 \pmod{72}$; or $M \equiv 1, 2$, or $16 \pmod{24}$; or
$M \equiv 11 \pmod{12}$. All the allowed values of $M$ are characterized using
necessary conditions. If $M$ is itself a square, then $M \equiv 1 \pmod{24}$ and
$(M-1)/24$ are all pentagonal numbers, except the first two.
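As a quick numerical illustration (not part of the paper's number-theoretic argument), a brute-force search confirms the small admissible cases and the emptiness of forbidden residue classes; the search bound `a_max` is an arbitrary illustrative choice:

```python
import math

def find_square_sums(M, a_max=2000):
    """Brute force: return all (a, s) with 1 <= a <= a_max such that
    a^2 + (a+1)^2 + ... + (a+M-1)^2 = s^2."""
    hits = []
    for a in range(1, a_max + 1):
        total = sum(k * k for k in range(a, a + M))
        s = math.isqrt(total)
        if s * s == total:  # total is a perfect square
            hits.append((a, s))
    return hits
```

For example, `find_square_sums(2)` contains `(3, 5)` (the 3-4-5 triple) and `find_square_sums(24)` contains `(1, 70)` (the classical cannonball case), while `M = 3` and `M = 5`, both in forbidden residue classes mod 12, return no solutions.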
|
Given a set $P$ of $n$ points in $\mathbb{R}^3$, we show that, for any
$\varepsilon >0$, there exists an $\varepsilon$-net of $P$ for halfspace
ranges, of size $O(1/\varepsilon)$. We give five proofs of this result, which
are arguably simpler than previous proofs \cite{msw-hnlls-90, cv-iaags-07,
pr-nepen-08}. We also consider several related variants of this result,
including the case of points and pseudo-disks in the plane.
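As a much simpler illustration of the $\varepsilon$-net concept (the one-dimensional interval-range analogue, not the halfspace construction of the paper), taking every $\lceil \varepsilon n \rceil$-th point in sorted order yields a net of size at most $1/\varepsilon$:

```python
import math, random

def interval_eps_net(points, eps):
    """eps-net for interval ranges on the line: every interval containing at
    least eps*n of the points must contain a net point."""
    pts = sorted(points)
    n, m = len(pts), math.ceil(eps * len(pts))
    return [pts[j] for j in range(m - 1, n, m)]   # every m-th sorted point

def is_interval_net(points, net, eps):
    """Brute-force check over all minimal heavy windows of sorted points."""
    pts = sorted(points)
    n, m = len(pts), math.ceil(eps * len(pts))
    netset = set(net)
    # any interval with >= m points contains m consecutive sorted points
    return all(any(p in netset for p in pts[i:i + m]) for i in range(n - m + 1))
```

The correctness argument is the one-dimensional shadow of the general net property: among any $m$ consecutive sorted indices there is exactly one selected index.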
|
We develop a new nonparametric approach for estimating the risk-neutral
density of asset prices and reformulate its estimation into a
double-constrained optimization problem. We evaluate our approach using the
S\&P 500 market option prices from 1996 to 2015. A comprehensive
cross-validation study shows that our approach outperforms the existing
nonparametric quartic B-spline and cubic spline methods, as well as the
parametric method based on the Normal Inverse Gaussian distribution. As an
application, we use the proposed density estimator to price long-term variance
swaps, and the model-implied prices match reasonably well with those of the
variance future downloaded from the CBOE website.
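The paper's estimator is a constrained optimization, but the relation it builds on can be illustrated with the classical Breeden-Litzenberger identity: with zero rates, the risk-neutral density is the second strike-derivative of the call price. The sketch below is a toy example (not the proposed method), recovering the lognormal density from Black-Scholes prices by finite differences:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, sigma, T):
    """Black-Scholes call price (zero rates and dividends)."""
    d1 = (math.log(S / K) + 0.5 * sigma * sigma * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def rn_density(S, sigma, T, K, h=1e-3):
    """Breeden-Litzenberger: q(K) = d^2 C / dK^2 (central finite difference)."""
    return (bs_call(S, K - h, sigma, T) - 2.0 * bs_call(S, K, sigma, T)
            + bs_call(S, K + h, sigma, T)) / (h * h)

def lognormal_pdf(S, sigma, T, K):
    """The exact risk-neutral density implied by Black-Scholes (r = 0)."""
    mu = math.log(S) - 0.5 * sigma * sigma * T
    z = (math.log(K) - mu) / (sigma * math.sqrt(T))
    return math.exp(-0.5 * z * z) / (K * sigma * math.sqrt(2.0 * math.pi * T))
```

With market prices in place of the Black-Scholes formula, the finite-difference density is noisy, which is what motivates constrained, smoothed estimators such as the one proposed in the paper.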
|
Optical levitation of multiple nanoparticles has emerged as a platform for
studying complex fundamental physics such as non-equilibrium phenomena, quantum
entanglement, and light-matter interaction, which could be applied to sensing
weak forces and torques with high sensitivity and accuracy. An optical trapping
landscape of increased complexity is needed to engineer the interaction between
levitated particles beyond the single harmonic trap. However, existing
platforms based on spatial light modulators for studying interactions between
levitated particles suffer from low efficiency, instability at the focal
points, complex optical systems, and limited scalability for sensing
applications. Here, we experimentally demonstrate that a metasurface forming
two diffraction-limited focal points, with a high numerical aperture (0.9) and
high efficiency (31%), can generate tunable optical potential wells without any
intensity fluctuations. A bistable potential and double potential wells were
observed in the experiment by varying the distance between the focal points,
and two nanoparticles were levitated in the double potential wells for hours,
which could be used for investigating the nonlinear dynamics, thermal dynamics,
and optical binding of levitated particles. This paves the way for scaling the
number of levitated optomechanical devices and for realizing parallel levitated
sensors.
|
We study the phase curves of the planets of our Solar System, which is
considered a non-compact planetary system. We focus on modeling the small
variations of the light curve based on three photometric effects: reflection,
ellipsoidal variation, and Doppler beaming. Theoretical predictions for these
photometric variations are proposed, as a hypothetical external observer would
measure them. In contrast to similar studies of multi-planetary systems, the
physical and geometrical parameters of each planet of the Solar System are well
known. Therefore, we can accurately evaluate the mathematical relations that
shape the planetary light curves for an external fictitious observer. Our
results suggest that for all the planets studied the ellipsoidal effect is very
weak, while the Doppler beaming effect is in general dominant. In fact, the
latter effect appears to be the principal cause of variations in the planetary
light curves. This conclusion may not be definitive for Mercury or Venus, where
the Doppler beaming and reflection effects have similar amplitudes. The phase
curves obtained for the Solar System planets show interesting new features that
have not been presented before, so the results are relevant for application to
other non-compact systems, since they give an idea of what to expect in their
light curves.
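A rough order-of-magnitude sketch of the three effects, using standard amplitude scalings with illustrative prefactors (`albedo`, `beam_alpha`, and `ell_beta` are assumed values, not the paper's fitted parameters), reproduces the claimed ordering for a Jupiter-like planet:

```python
import math

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                         # speed of light, m/s
M_SUN, R_SUN = 1.989e30, 6.957e8    # kg, m
M_JUP, R_JUP = 1.898e27, 7.149e7    # kg, m
AU = 1.496e11                       # m

def amplitudes(m_star, r_star, m_p, r_p, a,
               albedo=0.5, beam_alpha=1.0, ell_beta=1.0):
    """Fractional photometric amplitudes of the three effects
    (order-of-magnitude scalings, circular orbit)."""
    v_orb = math.sqrt(G * m_star / a)          # orbital speed of the planet
    K = v_orb * m_p / m_star                   # stellar reflex velocity
    beaming = 4.0 * beam_alpha * K / C
    reflection = albedo * (r_p / a) ** 2
    ellipsoidal = ell_beta * (m_p / m_star) * (r_star / a) ** 3
    return beaming, reflection, ellipsoidal
```

For a Jupiter analogue, beaming comes out of order 1e-7, reflection of order 1e-9, and the ellipsoidal amplitude far below both, consistent with the ordering stated above.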
|
Galaxy groups are key tracers of galaxy evolution, cluster evolution, and
structure formation, yet they are difficult to study at even moderate redshift.
We have undertaken a project to observe a flux-limited sample of
intermediate-redshift (0.1 < z < 0.5) group candidates identified by the
XBootes Chandra survey. When complete, this project will nearly triple the
current number of groups with measured temperatures in this redshift range.
Here we present deep Suzaku/XIS and Chandra/ACIS follow-up observations of the
first 10 targets in this project; all are confirmed sources of diffuse, thermal
emission with derived temperatures and luminosities indicative of rich
groups/poor clusters. By exploiting the multi-wavelength coverage of the
XBootes/NOAO Deep Wide Field Survey (NDWFS) field, we aim to (1) constrain
non-gravitational effects that alter the energetics of the intragroup medium,
and (2) understand the physical connection between the X-ray and optical
properties of groups. We discuss the properties of the current group sample in
the context of observed cluster scaling relations and group and cluster
evolution and outline the future plans for this project.
|
Solar active regions are associated with Evershed outflows in sunspot
penumbrae, moat outflows surrounding sunspots, and extended inflows surrounding
active regions. The latter have been identified on established active regions
by various methods. The evolution of these inflows and their dependence on
active region properties as well as their impact on the global magnetic field
are not yet understood. We aim to understand the evolution of the average
inflows around emerging active regions and to derive an empirical model for
these inflows. We analyze horizontal flows at the surface of the Sun using
local correlation tracking of solar granules observed in continuum images of
SDO/HMI. We measure average flows of a sample of 182 isolated active regions up
to seven days before and after their emergence onto the solar surface with a
cadence of 12 hours. We investigate the average inflow properties with respect
to active region characteristics of total flux and latitude. We fit a model to
these observed inflows for a quantitative analysis. We find that converging
flows of around $20$ to $30$ m/s are first visible one day prior to emergence,
in agreement with recent results. These converging flows are present
independently of active region properties of latitude or flux. We confirm a
recently found prograde flow of about $40$ m/s at the leading polarity during
emergence. We find that the time after emergence when the latitudinal inflows
increase in amplitude depends on the flux of the active region, ranging from
one to four days after emergence and increasing with flux. The largest extent
of the inflows is up to about $7 \pm 1^\circ$ away from the center of the
active region within the first six days after emergence. The inflow velocities
have amplitudes of about $50$ m/s.
|
An acyclic mapping from an $n$ element set into itself is a mapping $\phi$
such that if $\phi^k(x) = x$ for some $k$ and $x$, then $\phi(x) = x$.
Equivalently, $\phi^\ell = \phi^{\ell+1} = ...$ for $\ell$ sufficiently large.
We investigate the behavior as $n \to \infty$ of a Markov chain on the
collection of such mappings. At each step of the chain, a point in the $n$
element set is chosen uniformly at random and the current mapping is modified
by replacing the current image of that point by a new one chosen independently
and uniformly at random, conditional on the resulting mapping being again
acyclic. We can represent an acyclic mapping as a directed graph (such a graph
will be a collection of rooted trees) and think of these directed graphs as
metric spaces with some extra structure. Heuristic calculations indicate that
the metric space valued process associated with the Markov chain should, after
an appropriate time and ``space'' rescaling, converge as $n \to \infty$ to a
real tree ($\mathbb{R}$-tree) valued Markov process that is reversible with respect to
a measure induced naturally by the standard reflected Brownian bridge. The
limit process, which we construct using Dirichlet form methods, is a Hunt
process with respect to a suitable Gromov-Hausdorff-like metric. This process
is similar to one that appears in earlier work by Evans and Winter as the limit
of chains involving the subtree prune and regraft (SPR) rearrangements
from phylogenetics.
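The chain itself is easy to simulate; a minimal sketch (using rejection sampling for the conditional resampling step, which always terminates because restoring a self-loop at the chosen point is always acceptable) might look like:

```python
import random

def is_acyclic(phi):
    """phi[i] is the image of i; acyclic means every cycle is a fixed point."""
    for x in range(len(phi)):
        seen, cur = set(), x
        while cur not in seen:
            seen.add(cur)
            cur = phi[cur]
        if phi[cur] != cur:       # cur lies on a cycle of length > 1
            return False
    return True

def step(phi, rng):
    """One chain move: resample the image of a uniformly chosen point,
    conditioned on the result being acyclic (rejection sampling)."""
    n = len(phi)
    x = rng.randrange(n)
    while True:
        new = list(phi)
        new[x] = rng.randrange(n)
        if is_acyclic(new):
            return new
```

Every state of the chain is a forest of rooted trees (roots being the fixed points), which is the directed-graph picture used in the heuristic scaling argument.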
|
Many psychological experiments have subjects repeat a task to gain the
statistical precision required to test quantitative theories of psychological
performance. In such experiments, time-on-task can have sizable effects on
performance, changing the psychological processes under investigation. Most
research has either ignored these changes, treating the underlying process as
static, or sacrificed some psychological content of the models for statistical
simplicity. We use particle Markov chain Monte Carlo methods to study
psychologically plausible time-varying changes in model parameters. Using data
from three highly-cited experiments we find strong evidence in favor of a
hidden Markov switching process as an explanation of time-varying effects. This
embodies the psychological assumption of "regime switching", with subjects
alternating between different cognitive states representing different modes of
decision-making. The switching model explains key long- and short-term dynamic
effects in the data. The central idea of our approach can be applied quite
generally to quantitative psychological theories, beyond the models and data
sets that we investigate.
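A minimal two-state regime-switching sketch (a plain hidden Markov model with known parameters and a forward filter, far simpler than the particle MCMC machinery of the paper; all parameter values are illustrative):

```python
import math, random

def simulate(n, p_switch, means, sd, rng):
    """Simulate a 2-state regime-switching series with lognormal observations
    (e.g., response times in a 'fast' and a 'slow' cognitive state)."""
    z, xs, zs = 0, [], []
    for _ in range(n):
        if rng.random() < p_switch[z]:
            z = 1 - z                                  # regime switch
        xs.append(rng.lognormvariate(means[z], sd))
        zs.append(z)
    return xs, zs

def forward_filter(xs, p_switch, means, sd):
    """Forward algorithm: filtered P(state = 1 | data so far), log-likelihood."""
    def lik(x, m):
        z = (math.log(x) - m) / sd
        return math.exp(-0.5 * z * z) / (x * sd * math.sqrt(2 * math.pi))
    p, loglik, filt = [0.5, 0.5], 0.0, []
    for x in xs:
        pred = [p[0] * (1 - p_switch[0]) + p[1] * p_switch[1],
                p[0] * p_switch[0] + p[1] * (1 - p_switch[1])]
        w = [pred[0] * lik(x, means[0]), pred[1] * lik(x, means[1])]
        s = w[0] + w[1]
        loglik += math.log(s)
        p = [w[0] / s, w[1] / s]
        filt.append(p[1])
    return filt, loglik
```

In the paper the latent regime sequence and parameters are inferred jointly with particle MCMC; here the forward filter with known parameters already shows how regime switching is tracked through time.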
|
A controlled time-decaying harmonic oscillator changes the threshold on the
decay order of the potential functions required for the physical wave operators
to exist. This threshold was first reported by Ishida and Kawamoto \cite{IK}
for the non-critical case. In this paper we deal with the critical case, where
the situation changes drastically and much more rigorous analysis is required
than in the non-critical case. We study this critical behavior and clarify the
threshold in terms of the power of the logarithmic growth of the potential
functions. Consequently, this result reveals the asymptotics of the quantum
dynamics in the critical case.
|
Bundles of polymer filaments are responsible for the rich and unique
mechanical behaviors of many biomaterials, including cells and extracellular
matrices. In fibrin biopolymers, whose nonlinear elastic properties are crucial
for normal blood clotting, protofibrils self-assemble and bundle to form
networks of semiflexible fibers. Here we show that the extraordinary
strain-stiffening response of fibrin networks is a direct reflection of the
hierarchical architecture of the fibrin fibers. We measure the rheology of
networks of unbundled protofibrils and find excellent agreement with an affine
model of extensible wormlike polymers. By direct comparison with these data, we
show that physiological fibrin networks composed of thick fibers can be modeled
as networks of tight protofibril bundles. We demonstrate that the tightness of
coupling between protofibrils in the fibers can be tuned by the degree of
enzymatic intermolecular crosslinking by the coagulation Factor XIII.
Furthermore, at high stress, the protofibrils contribute independently to the
network elasticity, which may reflect a decoupling of the tight bundle
structure. The hierarchical architecture of fibrin fibers can thus account for
the nonlinearity and enormous elastic resilience characteristic of blood clots.
|
A computational method is introduced for choosing the regularization
parameter for total variation (TV) regularization. The approach is based on
computing reconstructions at a few different resolutions and various values of
regularization parameter. The chosen parameter is the smallest one resulting in
approximately discretization-invariant TV norms of the reconstructions. The
method is tested with X-ray tomography data measured from a walnut and compared
to the S-curve method. The proposed method seems to automatically adapt to the
desired resolution and noise level, and it yields useful results in the tests.
The results are comparable to those of the S-curve method; however, the S-curve
method needs a priori information about the sparsity of the unknown, while the
proposed method does not need any a priori information (apart from the choice
of a desired resolution). Mathematical analysis is presented for (partial)
understanding of the properties of the proposed parameter choice method. It is
rigorously proven that the TV norms of the reconstructions converge with any
choice of regularization parameter.
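A simplified one-dimensional sketch of the ingredients (a smoothed-TV denoiser by gradient descent, not the tomography solver used in the paper):

```python
import math

def tv_norm(x):
    """Discrete total variation of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise(y, lam, eps=1e-3, iters=3000):
    """Gradient descent on the smoothed TV objective
    0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)."""
    n = len(y)
    x = list(y)
    step = 1.0 / (1.0 + 4.0 * lam / math.sqrt(eps))  # 1/L for a Lipschitz bound
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]          # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = lam * d / math.sqrt(d * d + eps)     # smoothed-TV gradient
            g[i] -= w
            g[i + 1] += w
        x = [x[i] - step * g[i] for i in range(n)]
    return x
```

The selection rule of the paper would then compute `tv_norm` of reconstructions of the same data at a fine and a coarse resolution for each candidate `lam` and keep the smallest value for which the two norms approximately coincide.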
|
The relationship between international trade and foreign direct investment
(FDI) is one of the main features of globalization. In this paper we
investigate the effects of FDI on trade from a network perspective, since FDI
takes not only direct but also indirect channels from origin to destination
countries because of firms' incentive to reduce tax burden, to minimize
coordination costs, and to break barriers to market entry. We use a unique data
set of international corporate control as a measure of FDI stock to construct a
corporate control network (CCN) where the nodes are the countries and the edges
are the corporate control relationships. Based on the CCN, the network
measures, i.e., the shortest path length and the communicability, are computed
to capture the indirect channel of FDI. Empirically we find that corporate
control has a positive effect on trade both directly and indirectly. The result
is robust with different specifications and estimation strategies. Hence, our
paper provides strong empirical evidence of the indirect effects of FDI on
trade. Moreover, we identify a number of interplaying factors such as regional
trade agreements and the region of Asia. We also find that the indirect effects
are more pronounced for manufacturing sectors than for primary sectors such as
oil extraction and agriculture.
|
The neutron-rich $^{28,29}$F isotopes have been recently studied via knockout
and interaction cross-section measurements. The $2n$ halo in $^{29}$F has been
linked to the occupancy of $pf$ intruder configurations. We investigate bound
and continuum states in $^{29}$F, focusing on the $E1$ response of low-lying
excitations and the effect of dipole couplings on nuclear reactions.
$^{29}\text{F}$ ($^{27}\text{F}+n+n$) wave functions are built within the
hyperspherical harmonics formalism, and reaction cross sections are calculated
using the Glauber theory. Continuum states and $B(E1)$ transition probabilities
are described in a pseudostate approach using the analytical THO basis. The
corresponding structure form factors are used in CDCC calculations to describe
low-energy scattering. Parity inversion in $^{28}$F leads to a $^{29}$F ground
state characterized by 57.5% of $(p_{3/2})^2$ intruder components, a strong
dineutron configuration, and an increase of the matter radius with respect to
the core radius of $\Delta R=0.20$ fm. Glauber-model calculations for a carbon
target at 240 MeV/nucleon provide a total reaction cross section of 1370 mb, in
agreement with recent data. The model also produces a barely bound excited
state corresponding to a quadrupole excitation. $B(E1)$ calculations into the
continuum yield a total strength of 1.59 e$^2$fm$^2$ up to 6 MeV, and the $E1$
distribution exhibits a resonance at $\approx$ 0.85 MeV. Results using a
standard shell-model order for $^{28}$F lead to a considerable reduction of the
$B(E1)$ distribution. The four-body CDCC calculations for
$^{29}\text{F}+^{120}\text{Sn}$ around the Coulomb barrier are dominated by
dipole couplings, which totally cancel the Fresnel peak in the elastic cross
section. These results are consistent with a two-neutron halo and may guide
future experimental campaigns.
|
Machine learning requires data, but acquiring and labeling real-world data is
challenging, expensive, and time-consuming. More importantly, it is nearly
impossible to alter real data post-acquisition (e.g., change the illumination
of a room), making it very difficult to measure how specific properties of the
data affect performance. In this paper, we present AI Playground (AIP), an
open-source, Unreal Engine-based tool for generating and labeling virtual image
data. With AIP, it is trivial to capture the same image under different
conditions (e.g., fidelity, lighting, etc.) and with different ground truths
(e.g., depth or surface normal values). AIP is easily extendable and can be
used with or without code. To validate our proposed tool, we generated eight
datasets of otherwise identical but varying lighting and fidelity conditions.
We then trained deep neural networks to predict (1) depth values, (2) surface
normals, or (3) object labels and assessed each network's intra- and
cross-dataset performance. Among other insights, we verified that sensitivity
to different settings is problem-dependent. We confirmed the findings of other
studies that segmentation models are very sensitive to fidelity, but we also
found that they are just as sensitive to lighting. In contrast, depth and
normal estimation models seem to be less sensitive to fidelity or lighting and
more sensitive to the structure of the image. Finally, we tested our trained
depth-estimation networks on two real-world datasets and obtained results
comparable to training on real data alone, confirming that our virtual
environments are realistic enough for real-world tasks.
|
Distance-based dynamic texture recognition is an important research field in
multimedia processing with applications ranging from retrieval to segmentation
of video data. Based on the conjecture that the most distinctive characteristic
of a dynamic texture is the appearance of its individual frames, this work
proposes to describe dynamic textures as kernelized spaces of frame-wise
feature vectors computed using the Scattering transform. By combining these
spaces with a basis-invariant metric, we get a framework that produces
competitive results for nearest neighbor classification and state-of-the-art
results for nearest class center classification.
|
We investigate the possible existence of anomalous mass defects in the low
mass region of stellar sequences of strange stars.
We employ the nonperturbative equation of state derived in the framework of
the Field Correlator Method to describe the hydrostatic equilibrium of the
strange matter.
The large distance static $Q{\bar Q}$ potential $V_1$ and the gluon
condensate $G_2$ are the main parameters of the model.
We use the surface gravitational redshift measurements as a probe to
determine the ratio $({\cal P}/{\cal E})_C$ at the center of strange stars.
For $V_1=0$ and $G_2 \gtrsim 0.035\,{\rm GeV}^4$, we show that $({\cal
P}/{\cal E})_C\simeq0.262$ and the corresponding redshift $z_S\simeq0.47$ are
limiting values, at the maximum mass of the highest mass stellar sequence.
As a direct application of our study, we try to determine the values of $V_1$
and $G_2$ from astrophysical observations of the compact star 1E\,1207.4-5209.
Due to the uncertainties in the surface redshift determination, we made two
attempts to obtain the model parameters.
Our findings show that $({\cal P}/{\cal E})_C=0.073^{+0.029}_{-0.024}$ at
68\% confidence, $V_1=0.44\pm0.10$\,GeV at 90\% confidence and
$G_2=0.008\pm0.001\,{\rm GeV}^4$ at 95\% confidence in the first attempt;
and $({\cal P}/{\cal E})_C=0.087\pm0.028$ at 71\%
confidence, $V_1=0.43\pm0.085$\,GeV at 94\% confidence and
$G_2=0.0093\pm0.00092\,{\rm GeV}^4$ at 94\% confidence in the second attempt.
These values of $V_1$ and $G_2$ are in reasonable agreement with the lattice
and QCD sum rules calculations.
As a consequence of the high values of $V_1$ and $G_2$, the anomalous mass
defects of 1E\,1207.4-5209 are $|\Delta_2M|\simeq2.56\times10^{53}$\,erg\, in
the first attempt and $|\Delta_2M|\simeq2.94\times10^{53}$\,erg\, in the second
attempt.
|
Adaptive time series analysis has been applied to investigate variability of
CO2 concentration data, sampled weekly at Mauna Loa monitoring station. Due to
its ability to mitigate mode mixing, the recent time varying filter Empirical
Mode Decomposition (tvf-EMD) methodology is employed to extract local
narrowband oscillatory modes. In order to perform data analysis, we developed a
Python implementation of the tvf-EMD algorithm, referred to as pytvfemd. The
algorithm allowed us to extract the trend and both the six-month and the
one-year periodicities, without mode mixing, even though the analysed data are
noisy. Furthermore, subtracting these modes yields the residuals, which are
found to be described by a normal distribution. The occurrence of outliers was
also investigated; they are found to occur more frequently toward the end of
the dataset, corresponding to solar cycles characterised by smaller sunspot
numbers. A more pronounced oscillation of the residuals is also observed in
this regard, likewise in relation to solar-cycle activity.
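As a stand-in illustration (this is ordinary harmonic regression, not the tvf-EMD algorithm, and does not use the pytvfemd API), one can show the removal of a trend plus one-year and six-month modes and the normality check on the residuals:

```python
import math

def lstsq(X, y):
    """Least squares via normal equations and Gaussian elimination with pivoting."""
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(row[p] * yi for row, yi in zip(X, y)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def features(i, n, period=360):
    """Intercept and trend, plus annual (period) and semiannual harmonics."""
    t = i / n
    w1, w2 = 2 * math.pi * i / period, 4 * math.pi * i / period
    return [1.0, t, math.sin(w1), math.cos(w1), math.sin(w2), math.cos(w2)]
```

Fitting these features to a synthetic trend-plus-seasonal series and subtracting the fit leaves residuals whose mean is zero and whose spread matches the injected noise, mimicking the mode-subtraction step described above.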
|
Although the development of spintronic devices has advanced significantly
over the past decade with the use of ferromagnetic materials, the extensive
implementation of such devices has been limited by the notable drawbacks of
these materials. Antiferromagnets promise to resolve many of these shortcomings,
leading to faster, smaller, more energy-efficient, and more robust electronics.
Antiferromagnets exhibit many desirable properties including zero net
magnetization, imperviousness to external magnetic fields, intrinsic
high-frequency dynamics with a characteristic precession frequency on the order
of terahertz (THz), and the ability to serve as passive exchange-bias materials
in multiple magnetoresistance (MR)-based devices. In this Perspective article,
we will discuss the fundamental physics of magnetic structures in
antiferromagnets and their interactions with external stimuli such as spin
current, voltage, and magnons. A discussion on the challenges lying ahead is
also provided along with an outlook of future research directions of these
systems.
|
Cross sections for photon production in hadronic scattering processes have
been calculated according to an effective chiral field theory. For $\pi + \rho
\to \pi + \gamma$ and $\pi + \pi \to \rho + \gamma$ processes, these cross
sections have been implemented into a novel hadronic transport approach
(SMASH), which is suitable for collisions at low and intermediate energies. The
implementation is verified by systematically comparing the thermal photon rate
to theoretical expectations. The photon rates we obtain are compared to
previous works, where scattering processes mediated by $\omega$ mesons are
found to contribute significantly to the total photon production. Finally, the
impact of considering the finite width of the $\rho$ meson is investigated, and
a significant enhancement of photon production in the low-energy region is
observed. This work is the first step towards a consistent treatment of photon
emission in hybrid hydrodynamics+transport approaches. The quantification of
the importance of the hadronic stage for the resolution of the direct photon
flow puzzle is a next step and can be applied to identify equilibrium and
non-equilibrium effects in the hadronic afterburner.
|
Let $G$ be a connected reductive group over an algebraically closed field of
characteristic $p>0$. Given an indecomposable $G$-module $M$, one can ask when it
remains indecomposable upon restriction to the Frobenius kernel $G_r$, and when
its $G_r$-socle is simple (the latter being a strictly stronger condition than
the former). In this paper, we investigate these questions for $G$ having an
irreducible root system of type $A$. Using Schur functors and inverse Schur
functors as our primary tools, we develop new methods of attacking these
problems, and in the process obtain new results about classes of Weyl modules,
induced modules, and tilting modules that remain indecomposable over $G_r$.
|
Characterizing and localizing electronic energy degeneracies is important for
describing and controlling electronic energy flow in molecules. We show, using
topological phase considerations, that the Renner effect in polyatomic
molecules with more than three nuclei is necessarily accompanied by 'satellite'
conical intersections. In these intersections the non-adiabatic coupling term
is on average half an integer. We present ab initio results on the tetra-atomic
radical cation C2H2+ to demonstrate the theory.
|
Interstellar dust grains are non-spherical and, in some environments,
partially aligned along the direction of the interstellar magnetic field.
Numerous alignment theories have been proposed, all of which examine the grain
rotational dynamics. In 1999, Lazarian & Draine introduced the important
concept of thermal flipping, in which internal relaxation processes induce the
grain body to flip while its angular momentum remains fixed. Through detailed
numerical simulations, we study the role of thermal flipping on the grain
dynamics during periods of relatively slow rotation, known as `crossovers', for
the special case of a spheroidal grain with a non-uniform mass distribution.
Lazarian & Draine proposed that rapid flipping during a crossover would lead to
`thermal trapping', in which a systematic torque, fixed relative to the grain
body, would time average to zero, delaying spin-up to larger rotational speeds.
We find that the time-averaged systematic torque is not zero during the
crossover and that thermal trapping is not prevalent. As an application, we
examine whether the classic Davis-Greenstein alignment mechanism is viable, for
grains residing in the cold neutral medium and lacking superparamagnetic
inclusions. We find that Davis-Greenstein alignment is not hindered by thermal
trapping, but argue that it is, nevertheless, too inefficient to yield the
alignment of large grains responsible for optical and infrared starlight
polarization. Davis-Greenstein alignment of small grains could potentially
contribute to the observed ultraviolet polarization. The theoretical and
computational tools developed here can also be applied to analyses of alignment
via radiative torques and rotational disruption of grains.
|
As large language models (LLMs) have become the norm in NLP, demonstrating
good performance in generation and reasoning tasks, one of their most serious
shortcomings is the lack of factual correctness. Generating unfactual texts not
only lowers performance but also degrades the trust in, and validity of, their
applications. Chain-of-Thought (CoT) prompting improves trust and model
performance on complex reasoning tasks by generating interpretable reasoning
chains, but it still suffers from factuality concerns in knowledge-intensive
tasks. In this paper, we propose the Verify-and-Edit framework for CoT
prompting, which seeks to increase prediction factuality by post-editing
reasoning chains according to external knowledge. Building on top of GPT-3, our
framework leads to accuracy improvements in multiple open-domain
question-answering tasks.
|
Sparse representation (SR) and collaborative representation (CR) have been
successfully applied in many pattern classification tasks such as face
recognition. In this paper, we propose a novel Non-negative Sparse and
Collaborative Representation (NSCR) for pattern classification. The NSCR
representation of each test sample is obtained by seeking a non-negative sparse
and collaborative representation vector that represents the test sample as a
linear combination of training samples. We observe that the non-negativity can
make the SR and CR more discriminative and effective for pattern
classification. Based on the proposed NSCR, we develop an NSCR-based classifier
for pattern classification. Extensive experiments on benchmark datasets
demonstrate that the proposed NSCR-based classifier outperforms previous SR-
and CR-based approaches, as well as state-of-the-art deep approaches, on
diverse challenging pattern classification tasks.
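As an illustrative sketch only (not the paper's implementation), a representation-based classifier in this spirit can be approximated with non-negative least squares, using a ridge term as a stand-in for the collaborative regularization; the function name, the `alpha` parameter, and its value are assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def nscr_classify(D, labels, y, alpha=0.1):
    """Classify test sample y over dictionary D (columns = training samples)
    via a non-negative, ridge-regularized representation, assigning the class
    whose atoms give the smallest reconstruction residual."""
    n = D.shape[1]
    # Augmenting the system makes nnls solve
    #   min ||y - D x||^2 + alpha * ||x||^2  subject to  x >= 0,
    # where the Tikhonov term loosely plays the collaborative (ridge) role.
    D_aug = np.vstack([D, np.sqrt(alpha) * np.eye(n)])
    y_aug = np.concatenate([y, np.zeros(n)])
    x, _ = nnls(D_aug, y_aug)
    # Class-wise residuals: keep only the coefficients of each class's atoms.
    residuals = {}
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        x_c = np.where(mask, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)
```

The non-negativity constraint forces the test sample to be explained additively by training atoms, which is the property the abstract credits with making the representation more discriminative.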
|
We study the phase diagram of two models of spin-1/2 antiferromagnets
composed of corner-sharing tetrahedra, the basis of the pyrochlore structure.
Primarily, we focus on the Heisenberg antiferromagnet on the checkerboard
lattice (also called the planar pyrochlore and crossed-chains model). This
model has an anisotropic limit, in which the dimensionless ratio of the two
exchange constants satisfies J_\times/J << 1 and the model consists of
one-dimensional spin chains coupled weakly together in a frustrated fashion.
Using recently developed
techniques combining renormalization group ideas and one-dimensional
bosonization and current algebra methods, we show that in this limit the model
enters a crossed dimer state with two-fold spontaneous symmetry breaking but no
magnetic order. We complement this result by an approximate ``quadrumer triplet
boson'' calculation, which qualitatively captures the physics of the
``plaquette valence bond solid'' state believed to obtain for J_\times/J = 1.
Using these known points in parameter space, the instabilities pointed to by
the quadrumer boson calculation, and the simple limit J_\times/J >> 1, we
construct a few candidate global phase diagrams for the model, and discuss the
nature of the quantum phase transitions contained therein. Finally, we apply
our quasi-one-dimensional techniques to an anisotropic limit of the
three-dimensional pyrochlore antiferromagnet, an approximate model for
magnetism in GeCu2O4. A crossed dimer state is predicted here as well.
|
In the present article, we review the classical covariant formulation of
Yang-Mills theory and general relativity in the presence of spacetime
boundaries, focusing mainly on the derivation of the presymplectic forms and
their properties. We further revisit the introduction of the edge modes and the
conditions which justify them, in the context where only field-independent
gauge transformations are considered. In particular, we show that the presence
of edge modes is not justified by gauge invariance of the presymplectic form,
but rather by the condition that the presymplectic form be degenerate on the
initial field space, which allows one to relate this presymplectic form to the
symplectic form on the gauge-reduced field space via pullback.
|
We compare two approaches to describe the inner crust of neutron stars: on
the one hand, the simple coexistence of a liquid (clusters) and a gas phase,
and on the other hand, the energy minimization with respect to the density
profile, including Coulomb and surface effects. We find that the
phase-coexistence model gives a reasonable description of the densities in the
clusters and in the gas, but the precision is not high enough to obtain the
correct proton fraction at low baryon densities. We also discuss the surface
tension and neutron skin obtained within the energy minimization.
|
In a recent experiment by the Schaetz group \cite{weckesser2021}, the quantum
$s$-wave regime was attained for an alkali--alkaline-earth atom-ion
combination (Li-Ba$^+$). We investigate the possible outcomes of the
interaction of this ion-atom pair in the quantum regime from a theoretical
point of view. For
this purpose, Born-Oppenheimer potential energy surfaces are constructed for
the lowest three dissociation channels of (Ba-Li)$^+$ molecular system using a
multireference configuration interaction (MRCI) electronic structure
calculation. We present elastic, spin-exchange (SE), and diffusion cross
sections at different energy regimes. The collisional properties of this system
are calculated in terms of the scattering phase shifts and scattering cross
sections, and the semiclassical behavior at a relatively large energy limit is
also examined. For SE collisions, phase locking is obtained towards lower
partial waves.
|
A new scheme for amplification of coherent gamma rays is proposed. The key
elements are crystalline undulators - single crystals with periodically bent
crystallographic planes exposed to a high energy beam of charged particles
undergoing channeling inside the crystals. The scheme consists of two such
crystals separated by a vacuum gap. The beam passes the crystals successively.
The particles perform undulator motion inside the crystals following the
periodic shape of the crystallographic planes. Gamma rays passing the crystals
parallel to the beam get amplified due to interaction with the particles inside
the crystals. The term `gamma klystron' is proposed for the scheme because its
operational principles are similar to those of the optical klystron. A simpler
one-crystal scheme is also considered for comparison. It
is shown that the gamma ray amplification in the klystron scheme can be reached
at considerably lower particle densities than in the one-crystal scheme,
provided that the gap between the crystals is sufficiently large.
|