We report on the retrieval of the last missing papers of the "Senatore
folder", given by Majorana to one student of his in Naples in 1938, just before
his disappearance. The mentioned manuscript is conserved at the Domus
Galilaeana in Pisa (Italy), and was written in French, probably for a
conference in Leningrad (or in Kharkov) in 1933 (or in 1934). Majorana was
invited to attend this conference but, in the end, did not take part in
it. The retrieved text deals with quantum electrodynamics by using the
formalism of field quantization. It is here reported, for the first time, in
translation. An accurate historical and scientific account is given as well.
|
Long-lived high-spin super- and hyper-deformed isomeric states, which manifest
themselves through abnormal radioactive decays, have been observed using the 16O +
197Au and 28Si + 181Ta reactions. They make it possible to understand the
production, via secondary reactions, of the long-lived superheavy element with
Z = 112 and of the abnormally low-energy and strongly enhanced alpha-particle
groups seen in various actinide sources. They might also explain some puzzling
phenomena seen in nature.
|
The leading-order spin-orbit coupling is included in a post-Newtonian
Lagrangian formulation of spinning compact binaries, which consists of the
Newtonian term, first post-Newtonian (1PN) and 2PN non-spin terms and 2PN
spin-spin coupling. This makes a 3PN spin-spin coupling occur in the derived
Hamiltonian. The spin-spin couplings are mainly responsible for chaos in the
Hamiltonians. However, the 3PN spin-spin Hamiltonian is small and has different
signs compared with the 2PN spin-spin Hamiltonian equivalent to the 2PN
spin-spin Lagrangian. As a result, the probability of the occurrence of chaos
in the Lagrangian formulation without the spin-orbit coupling is larger than
that in the Lagrangian formulation with the spin-orbit coupling. Numerical
evidence supports this claim.
|
We introduce the term net-proliferation in the context of fisheries and
establish relations between the proliferation and net-proliferation that are
economically and sustainably favored. The resulting square root laws are
analytically derived for species following the Beverton-Holt recurrence but, we
show, can also serve as reference points for other models. The practical
relevance of these analytically derived square root laws is tested on the
Barramundi fishery in the Southern Gulf of Carpentaria, Australia. A
Beverton-Holt model, including stochasticity to account for model uncertainty,
is fitted to a time series of catch and abundance index for this fishery.
Simulations show that, despite the stochasticity, the population levels remain
sustainable under the square root law. The application, with its inherited
model uncertainty, sparks a risk sensitivity analysis regarding the probability
of populations falling below an unsustainable threshold. Characterizing this
sensitivity helps in understanding both the dangers of overfishing and
potential remedies.
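As a minimal illustration of the stochastic Beverton-Holt setup described above, the sketch below simulates the recurrence with multiplicative noise and a simple harvest rule; the parameter values and the harvest rule are assumptions for illustration, not the paper's square root law or the values fitted to the Barramundi data.

```python
import numpy as np

# Minimal sketch: stochastic Beverton-Holt dynamics with a simple harvest rule.
# Parameters r (productivity), K (scaling) and the lognormal noise level are
# illustrative assumptions, not values fitted to the Barramundi fishery.
rng = np.random.default_rng(0)
r, K, sigma = 2.0, 1000.0, 0.2
harvest_fraction = 0.1          # placeholder harvest rule, not the square root law

def beverton_holt_step(n):
    """One year of Beverton-Holt recruitment with multiplicative noise and harvest."""
    recruitment = r * n / (1.0 + n / K)
    noise = rng.lognormal(mean=0.0, sigma=sigma)
    return max(recruitment * noise - harvest_fraction * n, 0.0)

n = 500.0
trajectory = []
for year in range(100):
    n = beverton_holt_step(n)
    trajectory.append(n)

print(f"final population index: {n:.1f}, minimum over run: {min(trajectory):.1f}")
```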
|
Random multiplicative processes $w_t = \lambda_1 \lambda_2 \cdots \lambda_t$
(with $\lambda_j > 0$) lead, in the presence of a boundary constraint, to a
distribution $P(w_t)$ in the form of a power law $w_t^{-(1+\mu)}$. We provide a
simple and physically intuitive derivation of this result based on a random
walk analogy and show the following: 1) the result applies to the asymptotic
($t \to \infty$) distribution of $w_t$ and should be distinguished from the
central limit theorem which is a statement on the asymptotic distribution of
the reduced variable $\frac{1}{\sqrt{t}}(\log w_t - \langle \log w_t \rangle)$; 2) the
necessary and sufficient conditions for $P(w_t)$ to be a power law are that
$\langle \log \lambda_j \rangle < 0$ (corresponding to a drift $w_t \to 0$) and that $w_t$ not
be allowed to become too small. We discuss several models, previously
unrelated, showing the common underlying mechanism for the generation of power
laws by multiplicative processes: the variable $\log w_t$ undergoes a random
walk biased to the left but is bounded by a repulsive ``force''. We give an
approximate treatment, which becomes exact for narrow or log-normal
distributions of $\lambda$, in terms of the Fokker-Planck equation. 3) For all
these models, the exponent $\mu$ is shown exactly to be the solution of
$\langle \lambda^{\mu} \rangle = 1$ and is therefore non-universal and depends
on the distribution of $\lambda$.
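A minimal numerical sketch of the mechanism described above: simulate $w_t = \lambda_1 \cdots \lambda_t$ with a reflecting lower bound and compare the measured tail exponent with the root of $\langle \lambda^{\mu} \rangle = 1$. The two-point distribution of $\lambda$ below is an illustrative choice, not one from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: multiplicative random walk with <log lambda> < 0, repelled from a
# floor w_min, compared against the exponent predicted by <lambda^mu> = 1.
rng = np.random.default_rng(1)
lam_values = np.array([0.7, 1.4])        # illustrative two-point distribution
lam_probs = np.array([0.5, 0.5])         # <log lambda> < 0 holds for this choice
w_min = 1e-3

def mean_lambda_pow(mu):
    return np.sum(lam_probs * lam_values**mu) - 1.0

mu_pred = brentq(mean_lambda_pow, 1e-6, 50.0)   # nontrivial root of <lambda^mu> = 1

# Simulate many independent realizations reflected at w_min.
n_walkers, n_steps = 20000, 5000
w = np.ones(n_walkers)
for _ in range(n_steps):
    lam = rng.choice(lam_values, size=n_walkers, p=lam_probs)
    w = np.maximum(w * lam, w_min)

# Crude tail-exponent estimate (Hill estimator above a threshold).
tail = w[w > 10 * w_min]
mu_est = 1.0 / np.mean(np.log(tail / (10 * w_min)))
print(f"predicted mu = {mu_pred:.3f}, Hill estimate = {mu_est:.3f}")
```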
|
Using near-term quantum computers to achieve a quantum advantage requires
efficient strategies to improve the performance of the noisy quantum devices
presently available. We develop and experimentally validate two efficient error
mitigation protocols named "Noiseless Output Extrapolation" and "Pauli Error
Cancellation" that can drastically enhance the performance of quantum circuits
composed of noisy cycles of gates. By combining popular mitigation strategies
such as probabilistic error cancellation and noise amplification with efficient
noise reconstruction methods, our protocols can mitigate a wide range of noise
processes that do not satisfy the assumptions underlying existing mitigation
protocols, including non-local and gate-dependent processes. We test our
protocols on a four-qubit superconducting processor at the Advanced Quantum
Testbed. We observe significant improvements in the performance of both
structured and random circuits, with up to $86\%$ improvement in variation
distance over the unmitigated outputs. Our experiments demonstrate the
effectiveness of our protocols, as well as their practicality for current
hardware platforms.
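The protocols themselves are not fully specified in the abstract; the sketch below only illustrates the generic idea behind noise-amplification-based mitigation (measuring an observable at stretched noise levels and extrapolating back to zero noise), with a toy decay model, and is not the actual Noiseless Output Extrapolation or Pauli Error Cancellation procedure.

```python
import numpy as np

# Sketch of the generic noise-amplification + extrapolation idea: measure an
# observable at several artificially amplified noise strengths and fit a
# low-order polynomial back to the zero-noise limit. The exponential decay
# model below is a toy assumption, not the behaviour of the device or protocols.
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification levels
ideal_value = 1.0                                 # toy noiseless expectation value

def noisy_expectation(stretch, base_error=0.08, shots=20000, rng=None):
    """Toy model: exponential signal decay plus shot noise."""
    rng = rng or np.random.default_rng(42)
    mean = ideal_value * np.exp(-base_error * stretch)
    return mean + rng.normal(scale=1.0 / np.sqrt(shots))

measured = np.array([noisy_expectation(s, rng=np.random.default_rng(i))
                     for i, s in enumerate(noise_factors)])

# Linear (first-order Richardson) extrapolation to the zero-noise limit.
coeffs = np.polyfit(noise_factors, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"raw value at stretch 1: {measured[0]:.4f}")
print(f"extrapolated zero-noise estimate: {zero_noise_estimate:.4f}")
```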
|
In this paper, we find the necessary and sufficient conditions under which
two classes of (q, a, b)-metrics are projectively related to a Kropina metric.
|
I propose a model of radiative charged-lepton and neutrino masses with $A_4$
symmetry. The soft breaking of $A_4$ to $Z_3$ lepton triality is accomplished
by dimension-three terms. The breaking of $Z_3$ by dimension-two terms allows
cobimaximal neutrino mixing $(\theta_{13} \neq 0, \theta_{23} = \pi/4,
\delta_{CP} = \pm \pi/2)$ to be realized with only very small finite calculable
deviations from the residual lepton triality. This construction solves a
long-standing technical problem inherent in renormalizable $A_4$ models since
their inception.
|
In this paper we present a proof for automatic O(a) improvement in twisted
mass lattice QCD at maximal twist, which uses only the symmetries of the
leading part in the Symanzik effective action. In the process of the proof we
clarify that the twist angle is dynamically determined by vacuum expectation
values in the Symanzik theory. For maximal twist according to this definition,
we show that scaling violations of all quantities which have non-zero values in
the continuum limit are even in $a$. In addition, using Wilson Chiral
Perturbation Theory (WChPT), we investigate this definition for maximal twist
and compare it to other definitions which were already employed in actual
simulations.
|
In my previous paper, I proved the existence of the Kuranishi structure for
the moduli space $\mathfrak{M}$ of zero loci of $\mathbb{Z}/2$-harmonic spinors
on a 3-manifold. So a natural question to ask is to compute the virtual
dimension of this moduli space
$\mathfrak{M}_{g_0}:=\mathfrak{M}\cap\{g=g_0\}$. In this paper, I will first
prove that $\operatorname{v-dim}(\mathfrak{M}_{g_0})=0$. Secondly, I will generalize this
formula on 4-manifolds by using a special type of index developed by Jochen
Bruning, Robert Seeley, and Fangyun Yang.
|
We prove a version of Bressan's mixing conjecture where the advecting field
is constrained to be a shear at each time. Also, inspired by recent work of
Blumenthal, Coti Zelati and Gvalani, we construct a particularly simple example
of a shear flow which mixes at the optimal rate. The constructed vector field
alternates randomly in time between just two distinct shears.
|
This study seeks to understand conditions under which virtual gratings
produced via vibrotaction and friction modulation are perceived as similar and
to find physical origins in the results. To accomplish this, we developed two
single-axis devices, one based on electroadhesion and one based on out-of-plane
vibration. The two devices had identical touch surfaces, and the vibrotactile
device used a novel closed-loop controller to achieve precise control of
out-of-plane plate displacement under varying load conditions across a wide
range of frequencies. A first study measured the perceptual intensity
equivalence curve of gratings generated under electroadhesion and vibrotaction
across the 20-400 Hz frequency range. A second study assessed the perceptual
similarity between two forms of skin excitation given the same driving
frequency and same perceived intensity. Our results indicate that it is largely
the out-of-plane velocity that predicts vibrotactile intensity relative to
shear forces generated by friction modulation. A high degree of perceptual
similarity between gratings generated through friction modulation and through
vibrotaction is apparent and tends to scale with actuation frequency, suggesting
perceptual indifference to the manner of fingerpad actuation in the upper
frequency range.
|
We comment on a claim that axion strings show a long-term logarithmic
increase in the number of Hubble lengths per Hubble volume [arXiv:2007.04990],
thereby violating the standard "scaling" expectation of an O(1) constant. We
demonstrate that the string density data presented in [arXiv:2007.04990] are
consistent with standard scaling, at a string density consistent with that
obtained by us [arXiv:1908.03522, arXiv:2102.07723] and other groups. A
transient slow growth in Hubble lengths per Hubble volume towards its constant
scaling value is explained by standard network modelling [arXiv:2102.07723].
|
We propose an optimized parameter set for protein secondary structure
prediction using a three-layer feed-forward back-propagation neural network. The
methodology uses four parameters, viz. encoding scheme, window size, number of
neurons in the hidden layer, and type of learning algorithm. The input layer of
the network consists of 3 to 19 neurons, corresponding to different window
sizes. The hidden layer contains between 1 and 20 neurons. The output layer
consists of three neurons, each
corresponding to one of the known secondary structural classes, viz. alpha
helix, beta strand, and coil. It also uses eight different learning
algorithms and nine encoding schemes. Exhaustive experiments were performed
using a non-homologous dataset. The experimental results were compared using
performance measures such as Q3, sensitivity, specificity, Matthews correlation
coefficient and accuracy. The paper also discusses the process of obtaining a
stabilized cluster of 2530 records from a collection of 11340 records. The
graphs of these stabilized clusters of records with respect to accuracy are
concave, the convergence is monotonically increasing, and the rate of convergence
is uniform. The paper identifies BLOSUM62 as the encoding scheme, 19 as the
window size, 19 as the number of neurons in the hidden layer, and One-Step
Secant as the learning algorithm, yielding the highest accuracy of 78%. These
parameter values are proposed as the optimized parameter set for the three-layer
feed-forward back-propagation neural network for protein secondary structure
prediction.
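A minimal sketch of the reported best configuration (window of 19 residues, 19 hidden neurons, three output classes). The BLOSUM62 encoding and the One-Step Secant optimizer are not available out of the box in scikit-learn, so random placeholder features and a quasi-Newton solver stand in for them here; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Sketch of the reported architecture: one hidden layer with 19 neurons,
# inputs for a sliding window of 19 residues, 3 output classes (helix,
# strand, coil). Real inputs would be BLOSUM62-encoded windows; random
# features are placeholders for them here.
rng = np.random.default_rng(0)
window_size, encoding_dim, n_samples = 19, 20, 2530
X = rng.normal(size=(n_samples, window_size * encoding_dim))
y = rng.integers(0, 3, size=n_samples)            # 0=helix, 1=strand, 2=coil

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# scikit-learn has no One-Step Secant solver; 'lbfgs' is used as a
# quasi-Newton stand-in for this sketch.
clf = MLPClassifier(hidden_layer_sizes=(19,), solver="lbfgs",
                    max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy on placeholder data: {clf.score(X_te, y_te):.3f}")
```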
|
We quantify the extent to which extra relativistic energy density can be
concealed by a neutrino asymmetry without conflicting with the baryon asymmetry
measured by the Wilkinson Microwave Anisotropy Probe (WMAP). In the presence of
a large electron neutrino asymmetry, slightly more than seven effective
neutrinos are allowed by Big Bang Nucleosynthesis (BBN) and WMAP at $2\sigma$.
The same electron neutrino degeneracy that reconciles the BBN prediction for
the primordial helium abundance with the observationally inferred value also
reconciles the LSND neutrino with BBN by suppressing its thermalization prior
to BBN.
|
Future network services present a significant challenge for network providers
due to the high number and variety of co-existing requirements. Despite many
advancements in network architectures and management schemes, congested network
links continue to constrain the Quality of Service (QoS) for critical
applications like tele-surgery and autonomous driving. A prominent,
complementary approach consists of congestion control (CC) protocols, which
regulate bandwidth at the endpoints before network congestion occurs. However,
existing CC protocols, including recent ones, are primarily designed to handle
small numbers of requirement classes, highlighting the need for a more granular
and flexible congestion control solution.
In this paper we introduce Hercules, a novel CC protocol designed to handle
heterogeneous requirements. Hercules is based on an online learning approach
and has the capability to support any combination of requirements within an
unbounded and continuous requirements space. We have implemented Hercules as a
QUIC module and demonstrate, through extensive analysis and real-world
experiments, that Hercules can achieve up to 3.5-fold improvement in QoS
compared to state-of-the-art CC protocols.
|
Infrared (IR) image super-resolution faces challenges from homogeneous
background pixel distributions and sparse target regions, requiring models that
effectively handle long-range dependencies and capture detailed local-global
information. Recent advancements in Mamba-based models (Selective Structured
State Space Models) have shown significant potential in visual tasks,
suggesting their applicability to IR enhancement.
In this work, we introduce IRSRMamba: Infrared Image Super-Resolution via
Mamba-based Wavelet Transform Feature Modulation Model, a novel Mamba-based
model designed specifically for IR image super-resolution. This model enhances
the restoration of context-sparse target details through its advanced
dependency modeling capabilities. Additionally, a new wavelet transform feature
modulation block improves multi-scale receptive field representation, capturing
both global and local information efficiently. Comprehensive evaluations
confirm that IRSRMamba outperforms existing models on multiple benchmarks. This
research advances IR super-resolution and demonstrates the potential of
Mamba-based models in IR image processing. Code is available at
\url{https://github.com/yongsongH/IRSRMamba}.
|
Glassy polymer melts such as the plastics used in pipes, structural
materials, and medical devices are ubiquitous in daily life. They accumulate
damage over time due to their use, which limits their functionalities and
demands periodic replacement. The resulting economic and social burden could be
mitigated by the design of self-healing mechanisms that expand the lifespan of
materials. However, the characteristic low molecular mobility in glassy polymer
melts intrinsically limits the design of self-healing behavior. We demonstrate
through numerical simulations that controlled oscillatory deformations enhance
the local molecular mobility of glassy polymers without compromising their
structural or mechanical stability. We apply this principle to increase the
molecular mobility around the surface of a crack, inducing fracture repair and
recovering the mechanical properties of the pristine material. Our findings
establish a general physical mechanism of self-healing in glasses that may
inspire the design and processing of new glassy materials.
|
Detached WD+MS PCEBs are perhaps the most suitable objects for testing
predictions of close-compact binary-star evolution theories, in particular, CE
evolution. The population of WD+MS PCEBs has been simulated by several authors
in the past and compared with observations. However, most of those predictions
did not take the possible contributions to the envelope ejection from
additional sources of energy (mostly recombination energy) into account. Here
we update existing binary population models of WD+MS PCEBs by assuming that a
fraction of the recombination energy available within the envelope contributes
to ejecting the envelope. We performed Monte Carlo simulations of 10^7 MS+MS
binaries for 9 different models using standard assumptions for the initial
primary mass function, binary separations, and initial-mass-ratio distribution
and evolved these systems using the publicly available BSE code. Including a
fraction of recombination energy leads to a clear prediction of a large number
of long orbital period (>~10 days) systems mostly containing high-mass WDs. The
fraction of systems with He-core WD primaries increases with the CE efficiency
and the existence of very low-mass He WDs is only predicted for high values of
the CE efficiency (>~0.5). All models predict on average longer orbital periods
for PCEBs containing C/O-core WDs than for PCEBs containing He WDs. This effect
increases with increasing values of both efficiencies. Longer periods after the
CE phase are also predicted for systems containing more massive secondary
stars. The initial-mass-ratio distribution affects the distribution of orbital
periods, especially the distribution of secondary star masses. Our simulations,
in combination with a large and homogeneous observational sample, can provide
constraints on the values of the CE efficiencies, as well as on the
initial-mass-ratio distribution for MS+MS binary stars.
|
In this paper we derive new criterion for uniform stability assessment of the
linear periodic time-varying systems $\dot x=A(t)x,$ $A(t+T)=A(t).$ As a
corollary, the lower and upper bounds for the Floquet characteristic exponents
are established. The approach is based on the use of logarithmic norm of the
system matrix $A(t).$ Finally we analyze the robustness of the stability
property under external disturbance.
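A small numerical sketch of the underlying quantity: the Euclidean logarithmic norm $\mu_2(A) = \lambda_{\max}\bigl((A + A^{T})/2\bigr)$ evaluated along one period of an illustrative periodic matrix; a negative period average is the kind of condition such criteria exploit. The example matrix is an assumption, not one from the paper.

```python
import numpy as np

# Sketch: Euclidean logarithmic norm mu_2(A) = lambda_max((A + A^T)/2),
# averaged over one period of an illustrative periodic system matrix A(t).
def log_norm_2(A):
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

T = 2 * np.pi

def A(t):
    # Illustrative periodic matrix with A(t + T) = A(t); not taken from the paper.
    return np.array([[-1.0 + 0.5 * np.cos(2 * np.pi * t / T), 1.0],
                     [0.0, -2.0 + 0.5 * np.sin(2 * np.pi * t / T)]])

ts = np.linspace(0.0, T, 400, endpoint=False)
mus = np.array([log_norm_2(A(t)) for t in ts])

# A negative period-average of mu_2(A(t)) implies solutions of x' = A(t)x
# contract on average, giving an upper bound on the Floquet exponents.
print(f"max mu_2 over period: {mus.max():.3f}, period average: {mus.mean():.3f}")
```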
|
Many proposals for fault-tolerant quantum computation require injection of
'magic states' to achieve a universal set of operations. Qubit states above a
threshold fidelity can be converted into magic states via
'magic state distillation', a process based on stabilizer codes from quantum
error correction.
We define quantum weight enumerators that take into account the sign of the
stabilizer operators. These enumerators completely describe the magic state
distillation behavior when distilling T-type magic states. While it is
straightforward to calculate them directly by counting exponentially many
operator weights, it is also an NP-hard problem to compute them in general.
This suggests that finding a family of distillation schemes with desired
threshold properties is at least as hard as finding the weight distributions of
a family of classical codes.
Additionally, we develop search algorithms fast enough to analyze all useful
5-qubit codes and some 7-qubit codes, finding no codes that surpass the best
known threshold.
|
In small area estimation, it is sometimes necessary to use model-based
methods to produce estimates in areas with little or no data. In official
statistics, we often require that some aggregate of small area estimates agree
with a national estimate for internal consistency purposes. Enforcing this
agreement is referred to as benchmarking, and while methods currently exist to
perform benchmarking, few are ideal for applications with non-normal outcomes
and benchmarks with uncertainty. Fully Bayesian benchmarking is a theoretically
appealing approach insofar as we can obtain posterior distributions conditional
on a benchmarking constraint. However, existing implementations may be
computationally prohibitive. In this paper, we critically review benchmarking
methods in the context of small area estimation in low- and middle-income
countries with binary outcomes and uncertain benchmarks, and propose a novel
approach in which an unbenchmarked method that produces area-level samples can
be combined with a rejection sampler or Metropolis-Hastings algorithm to
produce benchmarked posterior distributions in a computationally efficient way.
To illustrate the flexibility and efficiency of our approach, we provide
comparisons to an existing benchmarking approach in a simulation, and
applications to HIV prevalence and under-5 mortality estimation. Code
implementing our methodology is available in the R package stbench.
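A minimal sketch of the rejection-sampling idea described above: area-level posterior draws from an unbenchmarked model are accepted with probability proportional to the likelihood of the (uncertain) national benchmark evaluated at their population-weighted aggregate. The draws, weights, and benchmark values below are synthetic placeholders in Python, and this does not reproduce the stbench R implementation.

```python
import numpy as np

# Sketch: benchmark area-level posterior samples against an uncertain national
# estimate via rejection sampling. All numbers below are synthetic placeholders.
rng = np.random.default_rng(0)
n_areas, n_draws = 10, 50000

# Unbenchmarked posterior draws of area-level prevalences (placeholder model).
area_means = rng.uniform(0.05, 0.25, size=n_areas)
draws = rng.beta(area_means * 80, (1 - area_means) * 80, size=(n_draws, n_areas))

pop_weights = rng.dirichlet(np.ones(n_areas))      # area population shares
bench_mean, bench_sd = 0.15, 0.01                  # uncertain national benchmark

# Acceptance probability proportional to the benchmark likelihood evaluated at
# the weighted aggregate of each draw (self-normalized by its maximum).
aggregate = draws @ pop_weights
log_like = -0.5 * ((aggregate - bench_mean) / bench_sd) ** 2
accept_prob = np.exp(log_like - log_like.max())
accepted = draws[rng.uniform(size=n_draws) < accept_prob]

print(f"accepted {len(accepted)} of {n_draws} draws")
print("benchmarked posterior means:", np.round(accepted.mean(axis=0), 3))
```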
|
Artificial intelligence (AI) was initially developed as an implicit moral
agent to solve simple and clearly defined tasks where all options are
predictable. However, it is now part of our daily life powering cell phones,
cameras, watches, thermostats, vacuums, cars, and much more. This has raised
numerous concerns and some scholars and practitioners stress the dangers of AI
and argue against its development as moral agents that can reason about ethics
(e.g., Bryson 2008; Johnson and Miller 2008; Sharkey 2017; Tonkens 2009; van
Wynsberghe and Robbins 2019). Even though we acknowledge the potential threat,
in line with most other scholars (e.g., Anderson and Anderson 2010; Moor 2006;
Scheutz 2016; Wallach 2010), we argue that AI advancements cannot be stopped
and developers need to prepare AI to sustain explicit moral agents and face
ethical dilemmas in complex and morally salient environments.
|
The availability of Large Language Models (LLMs) that can generate code has
made it possible to create tools that improve developer productivity.
Integrated development environments or IDEs which developers use to write
software are often used as an interface to interact with LLMs. Although many
such tools have been released, almost all of them focus on general-purpose
programming languages. Domain-specific languages, such as those crucial for IT
automation, have not received much attention. Ansible is one such YAML-based IT
automation-specific language. Red Hat Ansible Lightspeed with IBM Watson Code
Assistant, further referred to as Ansible Lightspeed, is an LLM-based service
designed explicitly for natural language to Ansible code generation.
In this paper, we describe the design and implementation of the Ansible
Lightspeed service and analyze feedback from thousands of real users. We
examine diverse performance indicators, classified according to both immediate
and extended utilization patterns along with user sentiments. The analysis
shows that the user acceptance rate of Ansible Lightspeed suggestions is higher
than comparable tools that are more general and not specific to a programming
language. This remains true even after we use much more stringent criteria for
what is considered an accepted model suggestion, discarding suggestions which
were heavily edited after being accepted. The relatively high acceptance rate
results in higher-than-expected user retention and generally positive user
feedback. This paper provides insights on how a comparatively small, dedicated
model performs on a domain-specific language and more importantly, how it is
received by users.
|
Spectroscopic and photometric observations of the peculiar object AM 2049-691
are presented here. Its systemic velocity is V(GSR) = (10956 +- 30) km/s, and
the derived distance (H(0) = 75 km/s/Mpc) is 146 Mpc. A bridge is observed
between two very distinct nuclei whose separation is about 10 kpc, as well as
two tails that emerge from the SW and NE extremes of the main body and extend
up to 41 and 58 kpc, respectively. The spectral characteristics of all the
observed zones are typical of H II regions of low excitation. The internal
reddening is quite high, particularly in the NE nucleus. All the derived
equivalent widths of the H(alpha)+[N II] lines indicate enhanced star formation
compared with isolated galaxies, especially in the NE nucleus; the equivalent
width corresponding to the integrated spectrum reflects starburst activity in
the whole object, and is compatible with a merger of two disk galaxies. All the
observed characteristics of AM 2049-691 indicate that it is a merger; an
overabundance of nitrogen is detected in one of the nuclei, which has the most
evolved population and would be the most massive one. The detected total IR
emission is not very high. The integrated total color B - V corresponds to a
Sc-Scd galaxy and its average integrated population is about F7 type.
Indicative B - V colors of the nuclei, corrected for internal absorption, are
in agreement with the spectroscopic results. The central radial velocity
dispersions at the nuclei suggest that the most massive galaxy would be the
progenitor of the SW component. The observed radial velocity curve shows the
presence of two subsystems, each one associated with a different nucleus.
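For reference, the quoted distance follows directly from a pure Hubble-flow estimate (neglecting the quoted velocity uncertainty):

$D \simeq V_{\rm GSR}/H_0 = (10956\ {\rm km\,s^{-1}})/(75\ {\rm km\,s^{-1}\,Mpc^{-1}}) \approx 146\ {\rm Mpc}.$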
|
It is critical that the qualities and features of synthetically generated
PMU measurements used for grid analysis match those of measurements obtained
from field-based PMUs. This ensures that analysis results generated by
researchers during grid studies replicate those outcomes typically expected by
engineers in real-life situations. In this paper, essential features associated
with industry PMU-derived data measurements are analyzed for input
considerations in the generation of vast amounts of synthetic power system
data. Inherent variabilities in PMU data as a result of the random dynamics in
power system operations, oscillatory contents, and the prevalence of bad data
are presented. Statistical results show that in the generation of large
datasets of synthetic, grid measurements, an inclusion of different data
anomalies, ambient oscillation contents, and random cases of missing data
samples due to packet drops helps to improve the realism of experimental data
used in power systems analysis.
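A minimal sketch of the kind of synthetic-measurement augmentation described above: a clean frequency trace is combined with ambient oscillation content, bad-data spikes, and randomly dropped samples. The magnitudes, rates, and reporting rate below are illustrative assumptions, not values from the paper's statistical analysis.

```python
import numpy as np

# Sketch: augment a clean synthetic PMU frequency trace with ambient
# oscillations, bad-data spikes and missing samples (packet drops).
# All magnitudes and rates are illustrative placeholders.
rng = np.random.default_rng(0)
fs, duration = 30, 60                        # 30 frames/s reporting rate, 60 s
t = np.arange(0, duration, 1 / fs)

clean = 60.0 + 0.002 * np.sin(2 * np.pi * 0.05 * t)   # slow drift around 60 Hz
ambient = 0.001 * np.sin(2 * np.pi * 0.7 * t)         # ambient oscillation content
noise = rng.normal(scale=0.0005, size=t.size)          # measurement noise

signal = clean + ambient + noise

# Bad data: occasional large spikes.
spike_idx = rng.choice(t.size, size=5, replace=False)
signal[spike_idx] += rng.normal(scale=0.5, size=5)

# Missing samples from packet drops, marked as NaN.
drop_mask = rng.uniform(size=t.size) < 0.01
signal[drop_mask] = np.nan

print(f"{np.isnan(signal).sum()} dropped samples, "
      f"{len(spike_idx)} injected bad-data spikes")
```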
|
I review recent selected developments in the theory and modeling of
ultrarelativistic heavy-ion collisions. I explain why relativistic viscous
hydrodynamics is now used to model the expansion of the matter formed in these
collisions. I give examples of first quantitative predictions, and I discuss
remaining open questions associated with the description of the freeze-out
process. I argue that while the expansion process is now well understood, our
knowledge of initial conditions is still poor. Recent analyses of two-particle
correlations have revealed fine structures known as ridge and shoulder, which
extend over a long range in rapidity. These correlations are thought to
originate from initial state fluctuations, whose modeling is still crude. I
discuss triangular flow, a simple mechanism recently put forward, through which
fluctuations generate the observed correlation pattern.
|
Let $(W,S)$ be a Coxeter system, let $G$ be a group of symmetries of $(W,S)$
and let $f : W \to \GL (V)$ be the linear representation associated with a root
basis $(V, \langle .,. \rangle, \Pi)$. We assume that $G \subset \GL (V)$, and
that $G$ leaves invariant $\Pi$ and $\langle .,. \rangle$. We show that $W^G$
is a Coxeter group, we construct a subset $\tilde \Pi \subset V^G$ so that
$(V^G, \langle .,. \rangle, \tilde \Pi)$ is a root basis of $W^G$, and we show
that the induced representation $f^G : W^G \to \GL(V^G)$ is the linear
representation associated with $(V^G, \langle .,. \rangle, \tilde \Pi)$. In
particular, the latter is faithful. The fact that $W^G$ is a Coxeter group is
already known and is due to M\"uhlherr and H\'ee, but also follows directly
from the proof of the other results.
|
Thousands of exoplanet detections have been made over the last twenty-five
years using Doppler observations, transit photometry, direct imaging, and
astrometry. Each of these methods is sensitive to different ranges of orbital
separations and planetary radii (or masses). This makes it difficult to fully
characterize exoplanet architectures and to place our solar system in context
with the wealth of discoveries that have been made. Here, we use the EXtreme
PREcision Spectrograph (EXPRES) to reveal planets in previously undetectable
regions of the mass-period parameter space for the star $\rho$ Coronae
Borealis. We add two new planets to the previously known system with one hot
Jupiter in a 39-day orbit and a warm super-Neptune in a 102-day orbit. The new
detections include a temperate Neptune planet ($M{\sin{i}} \sim 20$ M$_\oplus$)
in a 281.4-day orbit and a hot super-Earth ($M{\sin{i}} = 3.7$ M$_\oplus$) in a
12.95-day orbit. This result shows that details of planetary system
architectures have been hiding just below our previous detection limits; this
signals an exciting era for the next generation of extreme precision
spectrographs.
|
A long-standing conjecture of Richter and Thomassen states that the total
number of intersection points between any $n$ simple closed Jordan curves in
the plane, so that any pair of them intersect and no three curves pass through
the same point, is at least $(1-o(1))n^2$.
We confirm the above conjecture in several important cases, including the
case (1) when all curves are convex, and (2) when the family of curves can be
partitioned into two equal classes such that each curve from the first class is
touching every curve from the second class. (Two curves are said to be touching
if they have precisely one point in common, at which they do not properly
cross.)
An important ingredient of our proofs is the following statement: Let $S$ be
a family of the graphs of $n$ continuous real functions defined on
$\mathbb{R}$, no three of which pass through the same point. If there are $nt$
pairs of touching curves in $S$, then the number of crossing points is
$\Omega(nt\sqrt{\log t/\log\log t})$.
|
Sco X-1 is a low-mass X-ray binary (LMXB) with one of the most precisely
determined sets of binary parameters, such as the mass accretion rate, companion
mass ratio, and orbital period. For this system, as well as for a large
fraction of other well-studied LMXBs, the observationally-inferred mass
accretion rate is known to strongly exceed the theoretically expected mass
transfer rate. We suggest that this discrepancy can be solved by applying a
modified magnetic braking prescription, which accounts for increased wind mass
loss in evolved stars compared to main sequence stars. Using our mass transfer
framework based on {\tt MESA}, we explore a large range of binaries at the
onset of the mass transfer. We identify the subset of binaries for which the
mass transfer tracks cross the Sco X-1 values for the mass ratio and the
orbital period. We confirm that no solution can be found for which the standard
magnetic braking can provide the observed accretion rates, while wind-boosted
magnetic braking can provide the observed accretion rates for many progenitor
binaries that evolve to the observed orbital period and mass ratio.
|
We study the hybrid inflation with a pseudo-Nambu-Goldstone boson inflaton
and two waterfall scalar fields. The $Z_2$ symmetry for the waterfall fields
keeps inflaton potential flat against quantum corrections coming from the
waterfall couplings, and it is broken spontaneously in the vacuum without a
domain wall problem within the Hubble horizon of our universe. We show that the
$Z_2$ invariant Higgs portal couplings to the waterfall fields are responsible
for the reheating process, leading to a sufficiently large reheating
temperature after inflation. In the presence of an extra $Z'_2$ symmetry, one
of the waterfall fields or another singlet scalar field becomes a dark matter
candidate. In particular, we find that preheating is sufficient to account for
the correct relic density of the waterfall dark matter.
|
The goal of this paper is to propose and discuss a practical way to implement
the Dirac algorithm for constrained field models defined on spatial regions
with boundaries. Our method is inspired by the geometric viewpoint developed by
Gotay, Nester, and Hinds (GNH) to deal with singular Hamiltonian systems. We
pay special attention to the specific issues raised by the presence of
boundaries and provide a number of significant examples (among them field
theories related to general relativity) to illustrate the main features of our
approach.
|
Preparing many body entangled states efficiently using available interactions
is a challenging task. One solution may be to couple a system collectively with
a probe that leaves residual entanglement in the system. We investigate the
entanglement produced between two possibly distant qubits 1 and 2 that interact
locally with a third qubit 3 under unitary evolution generated by pairwise
Hamiltonians. For the case where the Hamiltonians commute, relevant to certain
quantum nondemolition measurements, the entanglement between qubits 1 and 2 is
calculated explicitly for several classes of initial states and compared with
the case of noncommuting interaction Hamiltonians. This analysis can be helpful
to identify preferable physical system interactions for entangled state
synthesis.
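A small numerical sketch of the commuting-Hamiltonian case described above: qubits 1 and 2 each couple to qubit 3 through commuting ZZ interactions, and the entanglement between 1 and 2 is quantified with the Wootters concurrence, both after tracing out qubit 3 and after a post-selected X-basis measurement of it. The couplings, evolution time, and initial product state are illustrative choices, not the specific cases analyzed in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: qubits 1 and 2 couple to qubit 3 via commuting ZZ interactions
# (a QND-type configuration). Wootters concurrence of qubits 1 and 2 is
# computed for the reduced state and for the state conditioned on measuring
# qubit 3 in the X basis. All parameters are illustrative.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
Y = np.array([[0, -1j], [1j, 0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    YY = np.kron(Y, Y)
    eigs = np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)
    lam = np.sort(np.sqrt(np.abs(eigs.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

g13, g23, t = 1.0, 1.0, np.pi / 4
H = g13 * kron(Z, I2, Z) + g23 * kron(I2, Z, Z)            # commuting pairwise terms
psi0 = np.kron(np.kron(plus, plus), plus)                   # |+>|+>|+>
psi = (expm(-1j * H * t) @ psi0).reshape(2, 2, 2)           # indices (q1, q2, q3)

# Unconditional reduced state of qubits 1 and 2 (qubit 3 traced out).
rho12 = np.einsum('abk,cdk->abcd', psi, psi.conj()).reshape(4, 4)

# State of qubits 1 and 2 conditioned on measuring qubit 3 in |+>.
psi12 = (psi @ plus).ravel()
psi12 = psi12 / np.linalg.norm(psi12)
rho12_meas = np.outer(psi12, psi12.conj())

print(f"concurrence, qubit 3 traced out : {concurrence(rho12):.3f}")
print(f"concurrence, qubit 3 measured   : {concurrence(rho12_meas):.3f}")
```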
|
We present the general theory of clean, two-dimensional, quantum Heisenberg
antiferromagnets which are close to the zero-temperature quantum transition
between ground states with and without long-range N\'{e}el order. For
N\'{e}el-ordered states, `nearly-critical' means that the ground state
spin-stiffness, $\rho_s$, satisfies $\rho_s \ll J$, where $J$ is the
nearest-neighbor exchange constant, while `nearly-critical' quantum-disordered
ground states have a energy-gap, $\Delta$, towards excitations with spin-1,
which satisfies $\Delta \ll J$. Under these circumstances, we show that the
wavevector/frequency-dependent uniform and staggered spin susceptibilities, and
the specific heat, are completely universal functions of just three
thermodynamic parameters. Explicit results for the universal scaling functions
are obtained by a $1/N$ expansion on the $O(N)$ quantum non-linear sigma model,
and by Monte Carlo simulations. These calculations lead to a variety of
testable predictions for neutron scattering, NMR, and magnetization
measurements. Our results are in good agreement with a number of numerical
simulations and experiments on undoped and lightly-doped
$\mathrm{La}_{2-\delta}\mathrm{Sr}_{\delta}\mathrm{CuO}_4$.
|
The division of one physical 5G communications infrastructure into several
virtual network slices with distinct characteristics such as bandwidth,
latency, reliability, security, and service quality is known as 5G network
slicing. Each slice is a separate logical network that meets the requirements
of specific services or use cases, such as virtual reality, gaming, autonomous
vehicles, or industrial automation. The network slice can be adjusted
dynamically to meet the changing demands of the service, resulting in a more
cost-effective and efficient approach to delivering diverse services and
applications over a shared infrastructure. This paper assesses various machine
learning techniques, including the logistic regression model, linear
discriminant model, k-nearest neighbors model, decision tree model, random
forest model, SVC model, BernoulliNB model, and GaussianNB model, to investigate the
accuracy and precision of each model in detecting network slices. The report
also gives an overview of 5G network slicing.
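A minimal sketch of the kind of model comparison described above, using the scikit-learn implementations of the listed classifiers; the features and labels below are synthetic placeholders, and the paper's actual 5G slicing dataset and preprocessing are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB, GaussianNB

# Placeholder data standing in for network-slice features and slice labels.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "linear discriminant": LinearDiscriminantAnalysis(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVC": SVC(),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = accuracy_score(y_te, pred)
    prec = precision_score(y_te, pred, average="macro")
    print(f"{name:22s} accuracy={acc:.3f} precision={prec:.3f}")
```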
|
This paper presents a novel norm-one-regularized, consensus-based imaging
algorithm, based on the Alternating Direction Method of Multipliers (ADMM).
This algorithm is capable of imaging composite dielectric and metallic targets
by using a limited amount of data. The distributed capabilities of the ADMM
accelerate the convergence of the imaging. Recently, a Compressive Reflector
Antenna (CRA) has been proposed as a way to provide high-sensing-capacity with
a minimum cost and complexity in the hardware architecture. The ADMM algorithm
applied to the imaging capabilities of the Compressive Antenna (CA) outperforms
current state of the art iterative reconstruction algorithms, such as
Nesterov-based methods, in terms of computational cost; and it ultimately
enables the use of a CA in quasi-real-time, compressive sensing imaging
applications.
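A compact sketch of the generic norm-one-regularized ADMM iteration, here solving a small synthetic l1-regularized least-squares problem; the consensus-based, distributed variant and the compressive-reflector sensing matrix used in the paper are not reproduced.

```python
import numpy as np

# Sketch: ADMM for the l1-regularized least-squares problem
#   minimize 0.5*||A x - y||^2 + lam*||x||_1
# via the split x = z, using soft-thresholding for the z-update.
rng = np.random.default_rng(0)
m, n, sparsity = 60, 120, 8
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, sparsity, replace=False)] = rng.normal(size=sparsity)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho, n_iter = 0.1, 1.0, 300
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)                      # scaled dual variable
AtA = A.T @ A
Aty = A.T @ y
lhs_inv = np.linalg.inv(AtA + rho * np.eye(n))   # factor once, reuse each iteration

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

for _ in range(n_iter):
    x = lhs_inv @ (Aty + rho * (z - u))          # x-update (least squares)
    z = soft_threshold(x + u, lam / rho)         # z-update (l1 proximal step)
    u = u + x - z                                # dual update

err = np.linalg.norm(z - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```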
|
The sharp-line spectrum of the Bp star HR 6000 has peculiarities that
distinguish it from those of the HgMn stars with which it is sometimes
associated. The position of the star close to the center of the Lupus 3
molecular cloud, whose estimated age is on the order of 9.1 +/- 2.1 Myr, has
led to the hypothesis that the anomalous peculiarities of HR 6000 can be
explained by the young age of the star. Observational material from HST
provides the opportunity to extend the abundance analysis previously performed
for the optical region and clarify the properties of this remarkable peculiar
star. Our aim was to obtain the atmospheric abundances for all the elements
observed in a broad region from 1250 to 10000 A. An LTE synthetic spectrum was
compared with a high-resolution spectrum observed with STIS equipment in the
1250-3040 A interval. The adopted model is an ATLAS12 model already used for
the abundance analysis of a high-resolution optical spectrum observed at ESO
with UVES. The stellar parameters are Teff=13450K, logg=4.3, and zero
microturbulent velocity. Abundances for 28 elements and 7 upper limits were
derived from the ultraviolet spectrum. Adding results from previous work, we
have now quantitative results for 37 elements, some of which show striking
contrasts with those of a broad sample of HgMn stars. The analysis has pointed
out ionization anomalies and line-to-line variation in the derived abundances,
in particular for silicon. The inferred discrepancies could be explained by
non-LTE effects and with the occurrence of diffusion and vertical abundance
stratification. In the framework of the last hypothesis, we obtained, by means
of trial and error, empirical step functions of abundance versus optical depth
log(tau_5000) for carbon, silicon, manganese, and gold, while we failed to find
such a function for phosphorus.
|
The linearization coefficient $\mathcal{L}(L_{n_1}(x)\dots L_{n_k}(x))$ of
classical Laguerre polynomials $L_n(x)$ is known to be equal to the number of
$(n_1,\dots,n_k)$-derangements, which are permutations with a certain
condition. Kasraoui, Stanton and Zeng found a $q$-analog of this result using
$q$-Laguerre polynomials with two parameters $q$ and $y$. Their formula
expresses the linearization coefficient of $q$-Laguerre polynomials as the
generating function for $(n_1,\dots,n_k)$-derangements with two statistics
counting weak excedances and crossings. In this paper their result is proved by
constructing a sign-reversing involution on marked perfect matchings.
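The classical ($q=1$) statement can be checked symbolically for small cases: the linearization integral $\int_0^\infty L_{n_1}(x)\cdots L_{n_k}(x)\,e^{-x}\,dx$ equals, up to the sign $(-1)^{n_1+\dots+n_k}$, the number of derangements of the corresponding multiset. The sign convention used below is the usual Even-Gillis normalization, which may differ from the paper's.

```python
from itertools import permutations
from sympy import symbols, integrate, exp, oo, laguerre

# Check the classical Laguerre linearization coefficient against a brute-force
# count of multiset derangements for a small case (n1, ..., nk).
x = symbols('x')
ns = (2, 1, 1)

# Integral of the product of Laguerre polynomials against the weight e^{-x}.
product = 1
for n in ns:
    product *= laguerre(n, x)
integral = integrate(product * exp(-x), (x, 0, oo))

# Brute force: derangements of the multiset with n_i copies of letter i,
# i.e. rearrangements with no letter left in a position holding an equal letter.
word = tuple(i for i, n in enumerate(ns) for _ in range(n))
derangements = {p for p in permutations(word)
                if all(a != b for a, b in zip(p, word))}

total = sum(ns)
print(f"integral = {integral}, "
      f"(-1)^n * #derangements = {(-1)**total * len(derangements)}")
```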
|
With large volumes of health care data comes the research area of
computational phenotyping, making use of techniques such as machine learning to
describe illnesses and other clinical concepts from the data itself. The
"traditional" approach of using supervised learning relies on a domain expert,
and has two main limitations: requiring skilled humans to supply correct labels
limits its scalability and accuracy, and relying on existing clinical
descriptions limits the sorts of patterns that can be found. For instance, it
may fail to acknowledge that a disease treated as a single condition may really
have several subtypes with different phenotypes, as seems to be the case with
asthma and heart disease. Some recent papers cite successes instead using
unsupervised learning. This shows great potential for finding patterns in
Electronic Health Records that would otherwise be hidden and that can lead to
greater understanding of conditions and treatments. This work implements a
method derived strongly from Lasko et al., porting it to Apache Spark
and Python and generalizing it to laboratory time-series data in MIMIC-III. It
is released as an open-source tool for exploration, analysis, and
visualization, available at https://github.com/Hodapp87/mimic3_phenotyping
|
In this paper, a boundary scheme is proposed for the two-dimensional
five-velocity (D2Q5) lattice Boltzmann method with heterogeneous surface
reaction, in which the unknown distribution function is determined locally
based on the kinetic flux of the incident particles. Compared with previous
boundary schemes, the proposed scheme has a clear physical picture that
reflects the consumption and production in the reaction. Furthermore, the
scheme only involves local information of boundary nodes such that it can be
easily applied to complex geometric structures. In order to validate the
accuracy of the scheme, some benchmark tests, including convection-diffusion
problems in straight and inclined channels, are conducted.
Numerical results are in excellent agreement with the analytical solutions, and
the convergence tests demonstrate that second-order spatial accuracy is
achieved for straight walls, and the order of accuracy is between 1.5 and 2.0
for general inclined walls. Finally, we simulated the density-driven flow with
dissolution reactions in a two-dimensional cylindrical array, and the results
agree well with those of previous studies.
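For orientation, a minimal D2Q5 lattice Boltzmann sketch for pure diffusion of a scalar with periodic boundaries is given below; the heterogeneous-reaction boundary scheme proposed in the paper is not implemented, and the lattice weights and relaxation time are standard textbook choices.

```python
import numpy as np

# Minimal D2Q5 lattice Boltzmann sketch for diffusion of a scalar field C.
# Periodic boundaries only; the reactive boundary scheme of the paper is
# not implemented. Standard weights: w0 = 1/3, w1..4 = 1/6, c_s^2 = 1/3.
nx, ny, steps = 64, 64, 500
e = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])   # D2Q5 velocities
w = np.array([1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6])
tau = 0.8                                   # relaxation time
D = (tau - 0.5) / 3.0                       # diffusion coefficient for c_s^2 = 1/3

# Initial condition: a concentration spot in the center.
C = np.zeros((nx, ny))
C[nx // 2 - 4:nx // 2 + 4, ny // 2 - 4:ny // 2 + 4] = 1.0
f = w[:, None, None] * C[None, :, :]        # equilibrium initialization

for _ in range(steps):
    C = f.sum(axis=0)
    feq = w[:, None, None] * C[None, :, :]  # zero-velocity equilibrium
    f += (feq - f) / tau                    # BGK collision
    for k in range(5):                      # streaming with periodic wrap
        f[k] = np.roll(f[k], shift=(e[k, 0], e[k, 1]), axis=(0, 1))

print(f"lattice diffusivity D = {D:.3f}, total mass = {f.sum():.3f}")
```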
|
Meson Green's functions and decay constants $f_{\Gamma}$ in different
channels $\Gamma$ are calculated using the Field Correlator Method. Both the
spectrum and $f_\Gamma$ appear to be expressed only through universal
constants: the string tension $\sigma$, $\alpha_s$, and the pole quark masses.
For the $S$-wave states the calculated masses agree with the experimental
numbers within $\pm 5$ MeV. For the $D$ and $D_s$ mesons the values of $f_{\rm
P} (1S)$ are equal to 210(10) and 260(10) MeV, respectively, and their ratio
$f_{D_s}/f_D$=1.24(3) agrees with recent CLEO experiment. The values $f_{\rm
P}(1S)=182, 216, 438$ MeV are obtained for the $B$, $B_s$, and $B_c$ mesons
with the ratio $f_{B_s}/f_B$=1.19(2) and $f_D/f_B$=1.14(2). The decay constants
$f_{\rm P}(2S)$ for the first radial excitations as well as the decay constants
$f_{\rm V}(1S)$ in the vector channel are also calculated. The difference of
about 20% between $f_{D_s}$ and $f_D$, $f_{B_s}$ and $f_B$ directly follows
from our analytical formulas.
|
The B$^0_s$ and B$^+$ production yields are measured in PbPb collisions at a
center-of-mass energy per nucleon pair of 5.02 TeV. The data sample, collected
with the CMS detector at the LHC, corresponds to an integrated luminosity of
1.7 nb$^{-1}$. The mesons are reconstructed in the exclusive decay channels
B$^0_s$ $\to$ J/$\psi(\mu^+\mu^-)\phi($K$^+$K$^-)$ and B$^+$ $\to$
J/$\psi(\mu^+\mu^-)$K$^+$, in the transverse momentum range 7-50 GeV/c and
absolute rapidity 0-2.4. The B$^0_s$ meson is observed with a statistical
significance in excess of five standard deviations for the first time in
nucleus-nucleus collisions. The measurements are performed as functions of the
transverse momentum of the B mesons and of the PbPb collision centrality. The
ratio of production yields of B$^0_s$ and B$^+$ is measured and compared to
theoretical models that include quark recombination effects.
|
There is a common theme to some research questions in additive combinatorics
and noise stability. Both study the following basic question: Let $\mathcal{P}$
be a probability distribution over a space $\Omega^\ell$ with all $\ell$
marginals equal. Let $\underline{X}^{(1)}, \ldots, \underline{X}^{(\ell)}$
where $\underline{X}^{(j)} = (X_1^{(j)}, \ldots, X_n^{(j)})$ be random vectors
such that for every coordinate $i \in [n]$ the tuples $(X_i^{(1)}, \ldots,
X_i^{(\ell)})$ are i.i.d. according to $\mathcal{P}$.
A central question that is addressed in both areas is:
- Does there exist a function $c_{\mathcal{P}}()$ independent of $n$ such
that for every $f: \Omega^n \to [0, 1]$ with $\mathrm{E}[f(X^{(1)})] = \mu >
0$: \begin{align*} \mathrm{E} \left[ \prod_{j=1}^\ell f(X^{(j)}) \right]
\ge c(\mu) > 0 \, ? \end{align*}
Instances of this question include the finite field model version of Roth's
and Szemer\'edi's theorems as well as Borell's result about the optimality of
noise stability of half-spaces.
Our goal in this paper is to interpolate between the noise stability theory
and the finite field additive combinatorics theory and address the question
above in further generality than considered before. In particular, we settle
the question for $\ell = 2$ and when $\ell > 2$ and $\mathcal{P}$ has bounded
correlation $\rho(\mathcal{P}) < 1$. Under the same conditions we also
characterize the obstructions to similar lower bounds in the case of $\ell$
different functions. Part of the novelty in our proof is the combination of
analytic arguments from the theories of influences and hyper-contraction with
arguments from additive combinatorics.
|
Dual-frequency comb spectroscopy has emerged as a disruptive technique for
measuring wide-spanning spectra with high resolution, yielding a particularly
powerful technique for sensitive multi-component gas analysis. We present a
spectrometer system based on dual electro-optical combs with subsequent
conversion to the mid-infrared via tunable difference frequency generation,
operating in the range from 3 to 4.7 $\mu$m. The simultaneously recorded
bandwidth is up to 454(1) GHz and a signal-to-noise ratio of 7.3(2) x 10$^2$
Hz$^{-1/2}$ can be reached. The conversion preserves the coherence of the
dual-comb within 3 s measurement time. Concentration measurements of 5 ppm
methane at 3.3 $\mu$m, 100 ppm nitrous oxide at 3.9 $\mu$m and a mixture of 15
ppm carbon monoxide and 5% carbon dioxide at 4.5 $\mu$m are presented with a
relative precision of 1.4% on average after 2 s of measurement time. The
noise-equivalent absorbance is determined to be less than 4.6(2) x 10$^{-3}$
Hz$^{-1/2}$.
|
The Active Flux scheme is a Finite Volume scheme with additional point values
distributed along the cell boundary. It is third order accurate and does not
require a Riemann solver: the continuous reconstruction serves as initial data
for the evolution of the points values. The intercell flux is then obtained
from the evolved values along the cell boundary by quadrature. This paper
focuses on the conceptual extension of Active Flux to include source terms, and
thus for simplicity assumes the homogeneous part of the equations to be linear.
To a large part, the treatment of the source terms is independent of the choice
of the homogeneous part of the system. Additionally, only systems are
considered which admit characteristics (instead of characteristic cones). This
is the case for scalar equations in any number of spatial dimensions and
systems in one spatial dimension. Here, we succeed in extending the Active Flux
method to include (possibly nonlinear) source terms while maintaining third
order accuracy of the method. This requires a novel (approximate) operator for
the evolution of point values and a modified update procedure of the cell
average. For linear acoustics with gravity, it is shown how to achieve a
well-balanced / stationarity preserving numerical method.
|
We introduce multi-centered dilatations of rings, schemes and algebraic
spaces, a basic algebraic concept. Dilatations of schemes endowed with a
structure (e.g. monoid, group or Lie algebra) are in favorable cases schemes
endowed with the same structure. As applications, we use our new formalism to
contribute to the understanding of mono-centered dilatations, to formulate and
deduce some multi-centered congruent isomorphisms and to interpret Rost double
deformation space as both "double-centered" and mono-centered dilatations.
|
Photoacoustic spectroscopy (PAS)-based methane (CH4) detectors have garnered
significant attention, with various systems developed using near-infrared (NIR)
laser sources; these require high-energy and narrow-linewidth laser sources to
achieve high-sensitivity, low-concentration gas detection. The anti-resonant
hollow-core fiber (ARHCF) lasers in the NIR and mid-infrared (MIR) spectral
domain show great potential for spectroscopy and high-resolution gas
detection. In this work, we demonstrate the generation of a frequency-comb-like
Raman laser with high pulse energy spanning from ultraviolet (UV) (328 nm) to
NIR (2065 nm wavelength) based on a hydrogen (H2)-filled 7-ring ARHCF. The
gas-filled ARHCF is pumped with a custom laser at 1044 nm with ~100
{\mu}J pulse energy and a few nanoseconds duration. Through stimulated Raman
scattering process, we employ the sixth-order Stokes as a case example, located at
~1650 nm to demonstrate how the developed high-energy and narrow-linewidth
laser source can effectively be used to detect CH4 in the NIR-II region using
the photoacoustic modality. We report the efficient detection of CH4 with
sensitivity as low as ~550 ppb with an integration time of ~40 s. In
conclusion, the main goal of this work is to demonstrate and emphasize the
potential of the gas-filled ARHCF laser technology for compact next-generation
spectroscopy across different spectral regions.
|
The head-tail modes are described for the space charge tune shift
significantly exceeding the synchrotron tune. A general equation for the modes
is derived. The spatial shapes of the modes, their frequencies, and coherent
growth rates are explored. The Landau damping rates are also found. The
suppression of the transverse mode coupling instability by the space charge is
explained.
|
The vast majority of high-performance embedded systems implement multi-level
CPU cache hierarchies. But the exact behavior of these CPU caches has
historically been opaque to system designers. Absent expensive hardware
debuggers, an understanding of cache makeup remains tenuous at best. This
enduring opacity further obscures the complex interplay among applications and
OS-level components, particularly as they compete for the allocation of cache
resources. Notwithstanding the relegation of cache comprehension to proxies
such as static cache analysis, performance counter-based profiling, and cache
hierarchy simulations, the underpinnings of cache structure and evolution
continue to elude software-centric solutions. In this paper, we explore a novel
method of studying cache contents and their evolution via snapshotting. Our
method complements extant approaches for cache profiling to better formulate,
validate, and refine hypotheses on the behavior of modern caches. We leverage
cache introspection interfaces provided by vendors to perform live cache
inspections without the need for external hardware. We present CacheFlow, a
proof-of-concept Linux kernel module which snapshots cache contents on an
NVIDIA Tegra TX1 SoC (system on chip).
|
In this note we show the existence of a family of elliptic conic bundles in
P^4 of degree 8. This family has been overlooked and in fact falsely ruled out
in a series of classification papers. Our surfaces provide a counterexample to
a conjecture of Ellingsrud and Peskine. According to this conjecture there
should be no irregular m-ruled surface in P^4 for m at least 2.
|
Authorship attribution has become increasingly accurate, posing a serious
privacy risk for programmers who wish to remain anonymous. In this paper, we
introduce SHIELD to examine the robustness of different code authorship
attribution approaches against adversarial code examples. We define four
attacks on attribution techniques, which include targeted and non-targeted
attacks, and realize them using adversarial code perturbation. We experiment
with a dataset of 200 programmers from the Google Code Jam competition to
validate our methods targeting six state-of-the-art authorship attribution
methods that adopt a variety of techniques for extracting authorship traits
from source-code, including RNN, CNN, and code stylometry. Our experiments
demonstrate the vulnerability of current authorship attribution methods to
adversarial attacks. For the non-targeted attack, the attack success rate
exceeds 98.5\%, accompanied by a degradation of the
identification confidence that exceeds 13\%. For the targeted attacks, we show
the possibility of impersonating a programmer using targeted-adversarial
perturbations with a success rate ranging from 66\% to 88\% for different
authorship attribution techniques under several adversarial scenarios.
|
Operator learning has emerged as a new paradigm for the data-driven
approximation of nonlinear operators. Despite its empirical success, the
theoretical underpinnings governing the conditions for efficient operator
learning remain incomplete. The present work develops theory to study the data
complexity of operator learning, complementing existing research on the
parametric complexity. We investigate the fundamental question: How many
input/output samples are needed in operator learning to achieve a desired
accuracy $\epsilon$? This question is addressed from the point of view of
$n$-widths, and this work makes two key contributions. The first contribution
is to derive lower bounds on $n$-widths for general classes of Lipschitz and
Fr\'echet differentiable operators. These bounds rigorously demonstrate a
``curse of data-complexity'', revealing that learning on such general classes
requires a sample size exponential in the inverse of the desired accuracy
$\epsilon$. The second contribution of this work is to show that ``parametric
efficiency'' implies ``data efficiency''; using the Fourier neural operator
(FNO) as a case study, we show rigorously that on a narrower class of
operators, efficiently approximated by FNO in terms of the number of tunable
parameters, efficient operator learning is attainable in data complexity as
well. Specifically, we show that if only an algebraically increasing number of
tunable parameters is needed to reach a desired approximation accuracy, then an
algebraically bounded number of data samples is also sufficient to achieve the
same accuracy.
|
Single-photon light detection and ranging (lidar) captures depth and
intensity information of a 3D scene. Reconstructing a scene from observed
photons is a challenging task due to spurious detections associated with
background illumination sources. To tackle this problem, there is a plethora of
3D reconstruction algorithms which exploit spatial regularity of natural scenes
to provide stable reconstructions. However, most existing algorithms have
computational and memory complexity proportional to the number of recorded
photons. This complexity hinders their real-time deployment on modern lidar
arrays which acquire billions of photons per second. Leveraging a recent lidar
sketching framework, we show that it is possible to modify existing
reconstruction algorithms such that they only require a small sketch of the
photon information. In particular, we propose a sketched version of a recent
state-of-the-art algorithm which uses point cloud denoisers to provide
spatially regularized reconstructions. A series of experiments performed on
real lidar datasets demonstrates a significant reduction of execution time and
memory requirements, while achieving the same reconstruction performance as
in the full data case.
|
For a graph G, we construct two algebras, whose dimensions are both equal to
the number of spanning trees of G. One of these algebras is the quotient of the
polynomial ring modulo certain monomial ideal, while the other is the quotient
of the polynomial ring modulo certain powers of linear forms. We describe the
set of monomials that forms a linear basis in each of these two algebras. The
basis elements correspond to G-parking functions that naturally came up in the
abelian sandpile model. These ideals are instances of the general class of
monotone monomial ideals and their deformations. We show that the Hilbert
series of a monotone monomial ideal is always bounded by the Hilbert series of
its deformation. Then we define an even more general class of monomial ideals
associated with posets and construct free resolutions for these ideals. In some
cases these resolutions coincide with Scarf resolutions. We prove several
formulas for Hilbert series of monotone monomial ideals and investigate when
they are equal to Hilbert series of deformations. In the appendix we discuss
the sandpile model.
|
Understanding the power spectrum of the magnetization noise is a long
standing problem. While earlier work considered superposition of 'elementary'
jumps, without reference to the underlying physics, recent approaches relate
the properties of the noise with the critical dynamics of domain walls. In
particular, a new derivation of the power spectrum exponent has been proposed
for the random-field Ising model. We apply this approach to experimental data,
showing its validity and limitations.
|
Life occurs in ionic solutions, not pure water. The ionic mixtures of these
solutions are very different from water and have dramatic effects on the cells
and molecules of biological systems, yet theories and simulations cannot
calculate their properties. I suggest the reason is that existing theories stem
from the classical theory of ideal or simple gases in which (to a first
approximation) atoms do not interact. Even the law of mass action describes
reactants as if they were ideal. I propose that theories of ionic solutions
should start with the theory of complex fluids because that theory is designed
to deal with interactions from the beginning. The variational theory of complex
fluids is particularly well suited to describe mixtures like the solutions in
and outside biological cells. When a component or force is added to a solution,
the theory derives - by mathematics alone - a set of partial differential
equations that captures the resulting interactions self-consistently. Such a
theory has been implemented and shown to be computable in biologically relevant
systems but it has not yet been thoroughly tested in equilibrium or flow.
|
The aetiology of head and neck squamous cell carcinoma (HNSCC) involves
multiple carcinogens such as alcohol, tobacco and infection with human
papillomavirus (HPV). As the HPV infection influences the prognosis, treatment
and survival of patients with HNSCC, it is important to determine the HPV
status of these tumours. In this paper, we propose a novel triplet-ranking loss
function and a multiple instance learning pipeline for HPV status prediction.
This achieves a new state-of-the-art performance in HPV detection using only
the routine H&E stained whole slide images (WSIs) on two HNSCC cohorts. Furthermore, a comprehensive
tumour microenvironment profiling was performed, which characterised the unique
patterns between HPV+/- HNSCC from genomic, immunology and cellular
perspectives. Positive correlations of the proposed score with different
subtypes of T cells (e.g. T cells follicular helper, CD8+ T cells), and
negative correlations with macrophages and connective cells (e.g. fibroblast)
were identified, which is in line with clinical findings. Unique gene
expression profiles were also identified with respect to HPV infection status,
and are in line with existing findings.
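The abstract does not spell out the loss; as a hedged sketch, a standard margin-based triplet ranking objective over slide-level HPV scores could look as follows (the margin, the scalar-score setup, and all variable names are illustrative assumptions, not the authors' implementation).

    import torch

    def triplet_ranking_loss(anchor, positive, negative, margin=0.5):
        # anchor and positive are scores of slides with the same HPV status,
        # negative is a score of a slide with the opposite status; the loss
        # pushes same-status scores together and opposite-status scores
        # apart by at least `margin`
        return torch.relu((anchor - positive).abs()
                          - (anchor - negative).abs() + margin).mean()

    # toy usage with random slide-level scores from an MIL aggregator
    scores_a = torch.randn(8, requires_grad=True)
    scores_p = torch.randn(8)
    scores_n = torch.randn(8)
    loss = triplet_ranking_loss(scores_a, scores_p, scores_n)
    loss.backward()
    print(float(loss))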
|
We study the dynamics of an infinite regular lattice of classical charged
oscillators. Each individual oscillator is described as a point particle
subject to a harmonic restoring potential, to the retarded electromagnetic
field generated by all the other particles, and to the radiation reaction
expressed according to the Lorentz--Dirac equation. Exact normal mode
solutions, describing the propagation of plane electromagnetic waves through
the lattice, are obtained for the complete linearized system of infinitely many
oscillators. At variance with all the available results, our method is valid
for any values of the frequency, or of the ratio between wavelength and lattice
parameter. A remarkable feature is that the proper inclusion of radiation
reaction in the dynamics of the individual oscillators does not give rise to
any extinction coefficient for the global normal modes of the lattice. The
dispersion relations resulting from our solution are numerically studied for
the case of a simple cubic lattice. New predictions are obtained in this way
about the behavior of the crystal at frequencies near the proper oscillation
frequency of the dipoles.
|
We investigate renormalization group limit cycles within the similarity
renormalization group (SRG) and discuss their signatures in the evolved
interaction. A quantitative method to detect limit cycles in the interaction
and to extract their period is proposed. Several SRG generators are compared
regarding their suitability for this purpose. As a test case, we consider the
limit cycle of the inverse square potential.
|
We give some new identities for (h, q)-Genocchi numbers and polynomials by
means of the fermionic p-adic q-integral on Z_p and the weighted q-Bernstein
polynomials.
|
We study the impact of additional couplings in the relativistic mean field
(RMF) models, in conjunction with antikaon condensation, on various neutron
star properties. We analyze different properties such as in-medium antikaon and
nucleon effective masses, antikaon energies, chemical potentials and the
mass-radius relations of neutron star (NS). We calculate the NS properties with
the RMF (NL3), E-RMF (G1, G2) and FSU2.1 models, which are quite successful in
explaining several finite nuclear properties. Our results show that the onset
of kaon condensation in NS strongly depends on the parameters of the
Lagrangian, especially the additional couplings which play a significant role
at higher densities where antikaons dominate the behavior of equation of state.
|
We report on the discovery of two emission features observed in the X-ray
spectrum of the afterglow of the gamma-ray burst (GRB) of 16 Dec. 1999 by the
Chandra X-Ray Observatory. These features are identified with the Ly$_{\alpha}$
line and the narrow recombination continuum by hydrogenic ions of iron at a
redshift $z=1.00\pm0.02$, providing an unambiguous measurement of the distance
of a GRB. Line width and intensity imply that the progenitor of the GRB was a
massive star system that ejected, before the GRB event, $\approx 0.01 M_\odot$ of
iron at a velocity $\approx 0.1 c$, probably by a supernova explosion.
|
Polymers, integral to advancements in high-tech fields, necessitate the study
of their thermal conductivity (TC) to enhance material attributes and energy
efficiency. Obtaining the TC of polymers from molecular dynamics (MD) calculations
or experimental measurements is slow, making it difficult to screen polymers
with specific TCs over a wide range. Existing machine learning (ML) techniques for
predicting polymer TC suffer from overly large feature spaces and
cannot guarantee very high accuracy. In this work, we leverage TCs from
accessible datasets to decode the Simplified Molecular Input Line Entry System
(SMILES) of polymers into ten features of distinct physical significance. A
novel evaluation model for polymer TC is formulated, employing four ML
strategies. The Gradient Boosting Decision Tree (GBDT)-based model, a focal
point of our design, achieved a prediction accuracy of R$^2$=0.88 on a dataset
containing 400 polymers. Furthermore, we used an interpretable ML approach to
discover the significant contribution of quantitative estimate of drug-likeness
and number of rotatable bonds features to TC, and analyzed the physical
mechanisms involved. The ML method we developed provides a new idea for
physical modeling of polymers, which is expected to be generalized and applied
widely in constructing polymers with specific TCs and predicting all other
properties of polymers.
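A minimal sketch of the GBDT pipeline described above, with synthetic descriptors and targets standing in for the real SMILES-derived features and measured TCs (hyperparameters are illustrative, not the paper's):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # ten physically motivated descriptors per polymer (e.g. drug-likeness,
    # rotatable-bond count, ...), here replaced by synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10))                 # 400 polymers, 10 features
    y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=400)  # mock TC

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                      max_depth=3, random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2:", r2_score(y_te, model.predict(X_te)))
    print("feature importances:", model.feature_importances_.round(3))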
|
Alternative theories of gravity and the parameterized deviation approach
allow black hole solutions to have additional parameters beyond mass, charge
and angular momentum. Matter fields could be, in principle, affected by the
additional parameters of these solutions. We compute the absorption cross
section of massless spin-0 waves by static Konoplya-Zhidenko black holes,
characterized by a deformation parameter introduced in the mass term, and
compare it with the well-known absorption of a Schwarzschild black hole with
the same mass. We compare our numerical results with the sinc approximation in
the high-frequency limit, finding excellent agreement.
|
We reveal properties of global modes of linear buoyancy instability in stars,
characterised by the celebrated Schwarzschild criterion, using non-Hermitian
topology. We identify a ring of Exceptional Points of order 4 that originates
from the pseudo-Hermitian and pseudo-chiral symmetries of the system. The ring
results from the merging of a dipole of degeneracy points in the Hermitian
stably stratified counterpart of the problem. Its existence is related to
spherically symmetric unstable modes. We obtain the conditions under which
convection grows over such radial modes. These conditions are met at early stages of
low-mass star formation. We finally show that a topological wave is robust to
the presence of convective regions by reporting the presence of a mode
transiting between the wavebands in the non-Hermitian problem, strengthening
their relevance for asteroseismology.
|
Abstractive dialogue summarization has received increasing attention
recently. Despite the fact that most of the current dialogue summarization
systems are trained to maximize the likelihood of human-written summaries and
have achieved significant results, there is still a huge gap in generating
high-quality summaries as determined by humans, such as coherence and
faithfulness, partly due to the misalignment in maximizing a single
human-written summary. To this end, we propose to incorporate different levels
of human feedback into the training process. This will enable us to guide the
models to capture the behaviors humans care about for summaries. Specifically,
we ask humans to highlight the salient information to be included in summaries
to provide the local feedback, and to make overall comparisons among summaries
in terms of coherence, accuracy, coverage, conciseness and overall quality, as the
global feedback. We then combine both local and global feedback to fine-tune
the dialog summarization policy with Reinforcement Learning. Experiments
conducted on multiple datasets demonstrate the effectiveness and generalization
of our methods over the state-of-the-art supervised baselines, especially in
terms of human judgments.
|
We study the monoid of global invariant types modulo domination-equivalence
in the context of o-minimal theories. We reduce its computation to the problem
of proving that it is generated by classes of 1-types. We show this to hold in
Real Closed Fields, where generators of this monoid correspond to invariant
convex subrings of the monster model. Combined with arxiv:1702.06504, this
allows us to compute the domination monoid in the weakly o-minimal theory of
Real Closed Valued Fields.
|
Ordinary differential equations have been used to model dynamical systems in
a broad range of fields. Model checking for parametric ordinary differential equations is
a necessary step to check whether the assumed models are plausible. In this
paper we introduce three test statistics for their different purposes. We first
give a trajectory matching-based test for the whole system. To further identify
which component function(s) would be wrongly modelled, we introduce two test
statistics that are based on integral matching and gradient matching
respectively. We investigate the asymptotic properties of the three test
statistics under the null, global and local alternative hypotheses. To achieve
these purposes, we also investigate the asymptotic properties of nonlinear
least squares estimation and two-step collocation estimation under both the
null and alternatives. The results on these estimators are also new in the
literature. To examine the performances of the tests, we conduct several
numerical simulations. A real data example about immune cell kinetics and
trafficking for influenza infection is analyzed for illustration.
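A toy version of the gradient-matching idea (not the paper's exact statistic or its asymptotic calibration): smooth the observations, then compare the smoother's derivative with the model right-hand side evaluated on the smoothed path.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def gradient_matching_stat(t, x_obs, f, theta_hat, s):
        spl = UnivariateSpline(t, x_obs, s=s)     # smoothed trajectory
        x_hat = spl(t)
        dx_hat = spl.derivative()(t)              # its derivative
        resid = dx_hat - f(t, x_hat, theta_hat)   # gradient-matching residual
        return float(np.mean(resid ** 2))

    # toy example: data generated from dx/dt = -theta * x with theta = 1
    t = np.linspace(0.0, 5.0, 200)
    rng = np.random.default_rng(0)
    x_obs = np.exp(-t) + 0.01 * rng.normal(size=t.size)
    f = lambda t, x, theta: -theta * x
    s = t.size * 0.01 ** 2                        # smoothing at the noise level
    print(gradient_matching_stat(t, x_obs, f, theta_hat=1.0, s=s))  # small
    print(gradient_matching_stat(t, x_obs, f, theta_hat=3.0, s=s))  # much larger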
|
This paper presents a distributed approach to provide persistent coverage of
an arbitrarily shaped area using heterogeneous coverage of fixed-wing unmanned
aerial vehicles (UAVs), and to recover from simultaneous failures of multiple
UAVs. The proposed approach discusses level-homogeneous deployment and
maintenance of a homogeneous fleet of fixed-wing UAVs given the boundary
information and the minimum loitering radius. The UAVs are deployed at
different altitude levels to provide heterogeneous coverage and sensing. We use
an efficient square packing method to deploy the UAVs, given the minimum loiter
radius and the area boundary. The UAVs loiter over the circles inscribed over
these packing squares in a synchronized motion to fulfill the full coverage
objective. A top-down hierarchy of the square packing, where each outer square
(super-square) is partitioned into four equal-sized inner squares (sub-square),
is exploited to introduce resilience in the deployed UAV-network. For a failed
sub-square UAV, a replacement neighbor is chosen considering the effective
coverage and deployed to the corresponding super-square at a higher altitude to
recover full coverage, trading-off with the quality of coverage of the
sub-area. This is a distributed approach as all the decision making is done
within close range of the loss region, and it can be scaled and adapted to
various large-scale area and UAV configurations. Simulation results have been
presented to illustrate and verify the applicability of the approach.
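As a rough sketch of the deployment geometry only (ignoring the altitude levels, the super/sub-square hierarchy, and the exact handling of an arbitrary boundary), one can tile a bounding box with squares of side twice the minimum loiter radius and place one loitering UAV on each inscribed circle:

    import numpy as np

    def square_packing_deployment(x_min, x_max, y_min, y_max, r_loiter):
        # Tile the bounding box with squares of side 2*r_loiter and return
        # the centres of their inscribed circles (one loitering UAV each).
        side = 2.0 * r_loiter
        xs = np.arange(x_min + r_loiter, x_max, side)
        ys = np.arange(y_min + r_loiter, y_max, side)
        return [(x, y) for x in xs for y in ys]

    centres = square_packing_deployment(0.0, 1000.0, 0.0, 600.0, r_loiter=50.0)
    print(len(centres), "UAVs, first centre:", centres[0])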
|
We analyse the effect of synchronization between noise and periodic signal in
a two-state spatially extended system analytically. Resonance features are
demonstrated. To have the maximum cooperation between signal and noise, it is
shown that noise strength at resonance should increase linearly with the
frequency of the signal. The time scale of the process at resonance is also
shown to increase linearly with the period of the signal.
|
Mutualism is a biological interaction mutually beneficial for both species
involved, such as the interaction between plants and their pollinators. Real
mutualistic communities can be understood as weighted bipartite networks and
they present a nested structure and truncated power law degree and strength
distributions. We present a novel link aggregation model that works on a
strength-preferential attachment rule based on the Individual Neutrality
hypothesis. The model generates mutualistic networks with emergent nestedness
and truncated distributions. We provide some analytical results and compare the
simulated and empirical network topology. Upon further improving the shape of
the distributions, we have also studied the role of forbidden interactions on
the model and found that the inclusion of forbidden links does not prevent for
the appearance of super-generalist species. A Python script with the model
algorithms is available.
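The released script is not reproduced here; the sketch below only illustrates the flavour of strength-preferential link aggregation (each new unit of interaction weight attaches to a plant and an animal drawn with probability proportional to current strength plus one), with all sizes chosen arbitrarily.

    import numpy as np

    def strength_preferential_network(n_plants, n_animals, n_links, seed=0):
        rng = np.random.default_rng(seed)
        W = np.zeros((n_plants, n_animals))    # weighted bipartite network
        for _ in range(n_links):
            p_strength = W.sum(axis=1) + 1.0   # plant strengths (+1 baseline)
            a_strength = W.sum(axis=0) + 1.0   # animal strengths (+1 baseline)
            i = rng.choice(n_plants, p=p_strength / p_strength.sum())
            j = rng.choice(n_animals, p=a_strength / a_strength.sum())
            W[i, j] += 1.0                     # aggregate one interaction
        return W

    W = strength_preferential_network(30, 50, 2000)
    print("max plant strength:", W.sum(axis=1).max(),
          "mean plant strength:", W.sum(axis=1).mean())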
|
Building on the analytical description of the post-merger (ringdown) waveform
of coalescing, non-precessing, spinning binary black holes (BBHs) introduced in Phys. Rev. D 90,
024054 (2014), we propose an analytic, closed form, time-domain, representation
of the $\ell=m=2$ gravitational radiation mode emitted after merger. This
expression is given as a function of the component masses and dimensionless
spins $(m_{1,2},\chi_{1,2})$ of the two inspiralling objects, as well as of the
mass $M_{\rm BH}$ and (complex) frequency $\sigma_{1}$ of the fundamental
quasi-normal mode of the remnant black hole. Our proposed template is obtained
by fitting the post-merger waveform part of several publicly available
numerical relativity simulations from the Simulating eXtreme Spacetimes (SXS)
catalog and then suitably interpolating over (symmetric) mass ratio and spins.
We show that this analytic expression accurately reproduces ($\sim$~0.01 rad)
the phasing of the post-merger data of other datasets not used in its
construction. This is notably the case of the spin-aligned run SXS:BBH:0305,
whose intrinsic parameters are consistent with the 90\% credible intervals
reported by the parameter-estimation followup of GW150914 in Phys. Rev. Lett.
116 (2016) no.24, 241102. Using SXS waveforms as "experimental" data, we
further show that our template could be used on the actual GW150914 data to
perform a new measurement of the complex frequency of the fundamental quasi-normal
mode, so as to exploit the complete (high signal-to-noise-ratio) post-merger
waveform.
|
The integration of graphene with complex-oxide heterostructures such as
LaAlO$_3$/SrTiO$_3$ offers the opportunity to combine the multifunctional
properties of an oxide interface with the electronic properties of graphene.
The ability to control interface conduction through graphene and understanding
how it affects the intrinsic properties of an oxide interface are critical to
the technological development of novel multifunctional devices. Here we
demonstrate several device archetypes in which electron transport at an oxide
interface is modulated using a patterned graphene top gate. Nanoscale devices
are fabricated at the oxide interface by conductive atomic force microscope
(c-AFM) lithography, and transport measurements are performed as a function of
the graphene gate voltage. Experiments are performed with devices written
adjacent to or directly underneath the graphene gate. Unique capabilities of
this approach include the ability to create highly flexible device
configurations, the ability to modulate carrier density at the oxide interface,
and the ability to control electron transport up to the
single-electron-tunneling regime, while maintaining intrinsic transport
properties of the oxide interface. Our results facilitate the design of a
variety of nanoscale devices that combine unique transport properties of these
two intimately coupled two-dimensional electron systems.
|
This topical review describes the methodology of continuum variational and
diffusion quantum Monte Carlo calculations. These stochastic methods are based
on many-body wave functions and are capable of achieving very high accuracy.
The algorithms are intrinsically parallel and well-suited to petascale
computers, and the computational cost scales as a polynomial of the number of
particles. A guide to the systems and topics which have been investigated using
these methods is given. The bulk of the article is devoted to an overview of
the basic quantum Monte Carlo methods, the forms and optimisation of wave
functions, performing calculations within periodic boundary conditions, using
pseudopotentials, excited-state calculations, sources of calculational
inaccuracy, and calculating energy differences and forces.
|
AliEn (ALICE Environment) is a GRID-like system for large scale job
submission and distributed data management developed and used in the context of
ALICE, the CERN LHC heavy-ion experiment. With the aim of exploiting upcoming
Grid resources to run AliEn-managed jobs and store the produced data, the
problem of AliEn-EDG interoperability was addressed and an interface was
designed. One or more EDG (European Data Grid) User Interface machines run the
AliEn software suite (Cluster Monitor, Storage Element and Computing Element),
and act as interface nodes between the systems. An EDG Resource Broker is seen
by the AliEn server as a single Computing Element, while the EDG storage is
seen by AliEn as a single, large Storage Element; files produced in EDG sites
are registered in both the EDG Replica Catalogue and in the AliEn Data
Catalogue, thus ensuring accessibility from both worlds. In fact, both
registrations are required: the AliEn one is used for the data management, the
EDG one to guarantee the integrity and access to EDG produced data. A prototype
interface has been successfully deployed using the ALICE AliEn Server and the
EDG and DataTAG Testbeds.
|
In this paper, the problem of automating pre-grasp generation for novel
3D objects is discussed. The objects, represented as clouds of 3D points,
are split into parts and organized in a tree structure, where parts are
approximated by simple box primitives. Applying grasping only on the individual
object parts may miss a good grasp which involves a combination of parts. The
problem has been addressed by traversing the decomposition tree and checking
each node of the tree for possible pre-grasps against a set of conditions.
Further, a face mask has been introduced to encode the free and blocked faces
of the box primitives. Pre-grasps are generated only for the free faces.
Finally, the proposed method is implemented on a set of twenty-four household objects
and toys, where a grasp planner based on the object slicing method is used to
compute the contact-level grasp plan.
|
Blazars are high-energy engines providing us natural laboratories to study
particle acceleration, relativistic plasma processes, magnetic field dynamics,
and black hole physics. Key information is provided by observations at
high-energy (in particular by Fermi/LAT) and very-high energy (by Cherenkov
telescopes). I give a short account of the current status of the field, with
particular emphasis on the theoretical challenges connected to the observed
ultra-fast variability events and to the emission of flat spectrum radio
quasars in the very high energy band.
|
We report results from Hubble Space Telescope WFPC2 imaging of the field of
the luminous, bursting X-ray source in the globular cluster NGC 6441. Although
the X-ray position is known to a precision of a few arcseconds, this source is
only ~6'' from the cluster center, and the field contains hundreds of stars
within the 3'' X-ray error circle, making it difficult to isolate the optical
counterpart. Nevertheless, our multicolor images reveal a single, markedly
UV-excess object with m_{336}=19.0, m_{439}=19.3, within the X-ray error
circle. Correcting for substantial reddening and bandpass differences, we infer
B_0=18.1, (U-B)_0=-1.0, clearly an unusual star for a globular cluster.
Furthermore, we observe an ultraviolet intensity variation of 30% for this
object over 0.5 hr, as well as an even greater variation in m_{439} between two
HST observations taken approximately one year apart. The combination of
considerable UV-excess and significant variability strongly favors this object
as the optical counterpart to the low-mass X-ray binary X1746-370. With a group
of five optical counterparts to high-luminosity globular cluster X-ray sources
now known, we present a homogeneous set of HST photometry on these objects, and
compare their optical properties with those of field low-mass X-ray binaries.
The mean (U-B)_0 color of the cluster sources is identical to that of the field
sources, and the mean M_{B_0} is similar to bursters in the field. However, the
ratio of optical to X-ray flux of cluster sources seems to show a significantly
larger dispersion than field sources.
|
We present a generalized reduction procedure which encompasses the one based
on the momentum map and the projection method. By using the duality between
manifolds and ring of functions defined on them, we have cast our procedure in
an algebraic context. In this framework we give a simple example of reduction
in the non-commutative setting.
|
Central exclusive processes can be studied in CMS by combining the
information of the central detector with the Precision Proton Spectrometer
(PPS). PPS detectors, placed symmetrically at more than 200 m from the
interaction point, can detect the scattered protons that survive the
interaction. PPS has taken data at high luminosity while fully integrated in
the CMS experiment. The total amount of collected data corresponds to more than
100 fb$^{-1}$ during the LHC Run 2. PPS consists of 3D silicon tracking
stations as well as timing detectors that measure both the position and
direction of protons and their time-of-flight with high precision. The
detectors are hosted in special movable vacuum chambers, the Roman Pots, which
are placed in the primary vacuum of the LHC beam pipe. The sensors reach a
distance of few mm from the beam. Detectors have to operate in vacuum and must
be able to sustain highly non-uniform irradiation: sensors used in Run 2 have
accumulated an integrated dose with a local peak of $\sim 5 \cdot 10^{15}$
protons/cm$^2$. The timing system is made with high purity scCVD diamond
sensors. A new architecture with two diamond crystals read out in parallel by
the same electronic channel has been used to enhance the detector performance.
In this paper, after a general overview of the PPS detector, we describe the
timing system in detail. The sensor and the dedicated amplification chain are
described, together with the signal digitization technique. Performance of the
detector in Run 2 is reported. Recently the sensors used in Run 2 have been
tested for efficiency and timing performance in a dedicated test beam at DESY.
Preliminary results on radiation damage are reported. Important upgrades of the
timing system are ongoing for the LHC Run 3, with the goal of reaching an
ultimate timing resolution better than 30 ps; they are also discussed here.
|
Using the theory of minimal models of quasi-projective surfaces we give a new
proof of the theorem of Lin-Zaidenberg which says that every topologically
contractible algebraic curve in the complex affine plane has equation $X^n=Y^m$
in some algebraic coordinates on the plane. This gives also a proof of the
theorem of Abhyankar-Moh-Suzuki concerning embeddings of the complex line into
the plane. Independently, we show how to deduce the latter theorem from basic
properties of $\mathbb{Q}$-acyclic surfaces.
|
We examine the magnetic correlations in quantum spin models that were derived
recently as effective low-energy theories for electronic correlation effects on
the edge states of graphene nanoribbons. For this purpose, we employ quantum
Monte Carlo simulations to access the large-distance properties, accounting for
quantum fluctuations beyond mean-field-theory approaches to edge magnetism. For
certain chiral nanoribbons, antiferromagnetic inter-edge couplings were
previously found to induce a gapped quantum disordered ground state of the
effective spin model. We find that the extended nature of the intra-edge
couplings in the effective spin model for zigzag nanoribbons leads to a quantum
phase transition at a large, finite value of the inter-edge coupling. This
quantum critical point separates the quantum disordered region from a gapless
phase of stable edge magnetism at weak intra-edge coupling, which includes the
ground states of spin-ladder models for wide zigzag nanoribbons. To study the
quantum critical behavior, the effective spin model can be related to a model
of two antiferromagnetically coupled Haldane-Shastry spin-half chains with
long-ranged ferromagnetic intra-chain couplings. The results for the critical
exponents are compared also to several recent renormalization group
calculations for related long-ranged interacting quantum systems.
|
Magic is a critical property of quantum states that plays a pivotal role in
fault-tolerant quantum computation. Simultaneously, random states have emerged
as a key element in various randomized techniques within contemporary quantum
science. In this study, we establish a direct connection between these two
notions. More specifically, our research demonstrates that when a subsystem of
a quantum state is measured, the resultant projected ensemble of the unmeasured
subsystem can exhibit a high degree of randomness that is enhanced by the
inherent 'magic' of the underlying state. We demonstrate this relationship
rigorously for quantum state 2-designs, and present compelling numerical
evidence to support its validity for higher-order quantum designs. Our findings
suggest an efficient approach for leveraging magic as a resource to generate
random quantum states.
|
This paper presents preliminary works on using Word Embedding (word2vec) for
query expansion in the context of Personalized Information Retrieval.
Traditionally, word embeddings are learned on a general corpus, like Wikipedia.
In this work we try to personalize the word embedding learning by performing
the learning on the user's profile. The word embeddings are then in the same
context as the user's interests. Our proposal is evaluated on the CLEF Social
Book Search 2016 collection. The results obtained show that further work is
needed on how to apply Word Embedding in the context of Personalized
Information Retrieval.
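A minimal sketch of the approach with gensim; the profile sentences, hyperparameters and expansion depth are all placeholder assumptions rather than the settings used in the paper.

    from gensim.models import Word2Vec

    # user profile: tokenized sentences from books the user interacted with
    profile_sentences = [
        ["space", "opera", "galactic", "empire", "rebellion"],
        ["starship", "crew", "alien", "first", "contact"],
        ["galactic", "war", "starship", "fleet"],
    ]

    # train small embeddings on the profile only, so that nearest neighbours
    # reflect this user's interests rather than general Wikipedia usage
    model = Word2Vec(sentences=profile_sentences, vector_size=50,
                     window=3, min_count=1, epochs=200, seed=0)

    def expand_query(query_terms, topn=2):
        expanded = list(query_terms)
        for term in query_terms:
            if term in model.wv:
                expanded += [w for w, _ in model.wv.most_similar(term, topn=topn)]
        return expanded

    print(expand_query(["starship"]))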
|
The Strict Avalanche Criterion (SAC) is a property of vectorial Boolean
functions that is used in the construction of strong S-boxes. We show in this
paper how to generalize the concept of SAC to address possible c-differential
attacks, in the realm of finite fields. We define the concepts of c-Strict
Avalanche Criterion (c-SAC) and c-Strict Avalanche Criterion of order m
(c-SAC(m)), and generalize results of (Li and Cusick, 2005). We also show
computationally how the new definition is not equivalent to the existing
concepts of c-bent1-ness (Stanica et al., 2020), nor (for n = m) PcN-ness
(Ellingsen et al., 2020).
|
We present a new method to analyze anisotropic flow from the genuine
correlation among a large number of particles, focusing on the practical
implementation of the method.
|
For more than 20 years it has been debated if yield stress fluids are solid
below the yield stress or actually flow; whether true yield stress fluids exist
or not. Advocates of the true yield stress picture have demonstrated that the
effective viscosity increases very rapidly as the stress is decreased towards
the yield stress. Opponents have shown that this viscosity increase levels off,
and that the material behaves as a Newtonian fluid of very high viscosity below
the yield stress. In this paper, we demonstrate experimentally (on four
different materials, using three different rheometers, five different
geometries, and two different measurement methods) that the low-stress
Newtonian viscosity is an artifact that arises in non steady state experiments.
For measurements as long as 10,000 seconds we find that the value of the
'Newtonian viscosity' increases indefinitely. This proves that the yield stress
exists and marks a sharp transition between flowing states and states where the
steady state viscosity is infinite - a solid!
|
A phase diagram for the step faceting phase, the step droplet phase, and the
Gruber-Mullins-Pokrovsky-Talapov (GMPT) phase on a crystal surface is obtained
by calculating the surface tension with the density matrix renormalization
group method. The model on which the calculations are based is the restricted
solid-on-solid (RSOS) model with a point-contact-type step-step attraction
(p-RSOS model) on a square lattice. The point-contact-type step-step attraction
represents the energy gain obtained by forming a bonding state with orbital
overlap at the meeting point of the neighbouring steps. Owing to the sticky
character of steps, there are two phase transition temperatures, $T_{f,1}$ and
$T_{f,2}$. At temperatures $T < T_{f,1}$, the anisotropic surface tension has a
disconnected shape around the (111) surface. At $T<T_{f,2}<T_{f,1}$, the
surface tension has a disconnected shape around the (001) surface. On the (001)
facet edge in the step droplet phase, the shape exponent normal to the mean
step running direction $\theta_n=2$ at $T$ near $T_{f,2}$, which is different
from the GMPT universal value $\theta_n=3/2$. On the (111) facet edge,
$\theta_n=4/3$ only at $T_{f,1}$. To understand how the system undergoes phase
transition, we focus on the connection between the p-RSOS model and the
one-dimensional spinless quasi-impenetrable attractive bosons at absolute zero.
|
A comparison of the hot and cool boundaries of the classical instability
strip with observations has been an important test for stellar structure and
evolution models of post- and main sequence stars. Over the last few years, the
number of pulsating pre-main sequence (PMS) stars has increased significantly:
36 PMS pulsators and candidates are known as of June 2007. This number allows
us to investigate the location of the empirical PMS instability region and to
compare its boundaries to those of the classical (post- and main sequence)
instability strip. Due to the structural differences of PMS and (post-)main
sequence stars, the frequency spacings for nonradial modes will be measurably
different, thus challenging asteroseismology as a diagnostic tool.
|
The discrete element method (DEM) is providing a new modeling approach for
describing sea ice dynamics. It exploits particle-based methods to characterize
the physical quantities of each sea ice floe along its trajectory under
Lagrangian coordinates. One major challenge in applying the DEM models is the
heavy computational cost when the number of floes becomes large. In this paper,
an efficient Lagrangian parameterization algorithm is developed, which aims at
reducing the computational cost of simulating the DEM models while preserving
the key features of the sea ice. The new parameterization takes advantage of a
small number of artificial ice floes, named the superfloes, to effectively
approximate a considerable number of the floes, where the parameterization
scheme satisfies several important physics constraints. The physics constraints
guarantee the superfloe parameterized system will have similar short-term
dynamical behavior as the full system. These constraints also allow the
superfloe parameterized system to accurately quantify the long-range
uncertainty, especially the non-Gaussian statistical features, of the full
system. In addition, the superfloe parameterization facilitates a systematic
noise inflation strategy that significantly advances an ensemble-based data
assimilation algorithm for recovering the unobserved ocean field underneath the
sea ice. Such a new noise inflation method avoids ad hoc tunings as in many
traditional algorithms and is computationally extremely efficient. Numerical
experiments based on an idealized DEM model with multiscale features illustrate
the success of the superfloe parameterization in quantifying the uncertainty
and assimilating both the sea ice and the associated ocean field.
|
In this paper we compute classical Minkowski spacetime solutions of pure
SU(2) and SU(3) gauge theories, in Landau gauge. The solutions are regular
everywhere except at the origin and/or infinity, are characterized by a four
momentum $k$ such that $k^2 = 0$ and resemble QED configurations. The classical
solutions suggest a particle-independent description of hadrons, similarly to
the Atomic and Nuclear energy levels, which is able to reproduce the heavy
quarkonium spectrum with a precision below 10%, with typical errors in the
theoretical mass prediction relative to the measured mass of the order of
2-4%.
|
Background: Analysing tumour architecture for metastatic potential usually
focuses on phenotypic differences due to cellular morphology or specific
genetic mutations, but often ignore the cell's position within the
heterogeneous substructure. Similar disregard for local neighborhood structure
is common in mathematical models.
Methods: We view the dynamics of disease progression as an evolutionary game
between cellular phenotypes. A typical assumption in this modeling paradigm is
that the probability of a given phenotypic strategy interacting with another
depends exclusively on the abundance of those strategies without regard to local
heterogeneities. We address this limitation by using the Ohtsuki-Nowak
transform to introduce spatial structure to the go vs. grow game.
Results: We show that spatial structure can promote the invasive (go)
strategy. By considering the change in neighbourhood size at a static boundary
-- such as a blood-vessel, organ capsule, or basement membrane -- we show an
edge effect that allows a tumour without invasive phenotypes in the bulk to
have a polyclonal boundary with invasive cells. We present an example of this
promotion of invasive (EMT positive) cells in a metastatic colony of prostate
adenocarcinoma in bone marrow.
Interpretation: Pathologic analyses that do not distinguish between cells in
the bulk and cells at a static edge of a tumour can underestimate the number of
invasive cells. We expect our approach to extend to other evolutionary game
models where interaction neighborhoods change at fixed system boundaries.
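A hedged illustration of the modelling paradigm only: plain replicator dynamics for a generic go-vs-grow payoff matrix, with illustrative payoff values and without the Ohtsuki-Nowak neighbourhood correction or the static-boundary effect analysed in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # rows/columns: 'go' (invasive) then 'grow' (proliferative); entries
    # are illustrative and chosen so that the two phenotypes coexist
    A = np.array([[0.0, 2.0],
                  [1.0, 1.0]])

    def replicator(t, x):
        fitness = A @ x
        return x * (fitness - x @ fitness)   # standard replicator equation

    sol = solve_ivp(replicator, (0.0, 50.0), [0.1, 0.9])
    print("final fraction of 'go' cells:", round(float(sol.y[0, -1]), 3))  # ~0.5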
|
In deep learning, transfer learning (TL) has become the de facto approach
when dealing with image related tasks. Visual features learnt for one task have
been shown to be reusable for other tasks, improving performance significantly.
By reusing deep representations, TL enables the use of deep models in domains
with limited data availability, limited computational resources and/or limited
access to human experts, domains which include the vast majority of real-life
applications. This paper conducts an experimental evaluation of TL, exploring
its trade-offs with respect to performance, environmental footprint, human
hours and computational requirements. Results highlight the cases where a cheap
feature extraction approach is preferable, and the situations where an
expensive fine-tuning effort may be worth the added cost. Finally, a set of
guidelines on the use of TL are proposed.
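A minimal PyTorch sketch contrasting the cheap feature-extraction setting with full fine-tuning; the backbone choice is a placeholder, and downloading the pretrained ImageNet weights is assumed.

    import torch.nn as nn
    from torchvision import models

    def build_transfer_model(num_classes, fine_tune=False):
        # ResNet-50 backbone pretrained on ImageNet: either a frozen feature
        # extractor (cheap) or a fully fine-tuned model (expensive)
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        if not fine_tune:
            for p in backbone.parameters():
                p.requires_grad = False          # freeze all pretrained layers
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        return backbone

    model = build_transfer_model(num_classes=10, fine_tune=False)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable:,} / {total:,}")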
|
High-scale supersymmetry (SUSY) with a split spectrum has become increasingly
interesting given the current experimental results. A SUSY scale above the weak
scale could be naturally associated with a heavy unstable gravitino, whose
decays populate the dark matter (DM) particles. In the mini-split scenario with
gravitino at about the PeV scale and the lightest TeV scale neutralino being (a
component of) DM, the requirement that the DM relic abundance resulting from
gravitino decays does not overclose the Universe and satisfies the indirect
detection constraints demands that the reheating temperature be below 10^9 -
10^{10} GeV. On the other hand, the BICEP2 result prefers a heavy inflaton with
mass at around 10^{13} GeV and a reheating temperature at or above 10^9 GeV
with some general assumptions. The mild tension could be alleviated if the SUSY
scale is even higher, with the gravitino mass above the PeV scale. Intriguingly,
in no-scale supergravity, gravitinos could be very heavy at about 10^{13} GeV,
the inflaton mass scale, while gauginos could still be light at the TeV scale.
|
In singularity generating spacetimes both the out-going and in-going
expansions of null geodesic congruences $\theta ^{+}$ and $\theta ^{-}$ should
become increasingly negative without bound, inside the horizon. This behavior
leads to geodetic incompleteness which in turn predicts the existence of a
singularity. In this work we inquire on whether, in gravitational collapse,
spacetime can sustain singularity-free trapped surfaces, in the sense that such
a spacetime remains geodetically complete. As a test case, we consider a well
known solution of the Einstein Field Equations which is Schwarzschild-like at
large distances and consists of a fluid with a $p=-\rho $ equation of state
near $r=0$. By following both the expansion parameters $\theta ^{+}$ and
$\theta ^{-}$ across the horizon and into the black hole we find that both
$\theta ^{+}$ and $\theta ^{+}\theta ^{-}$ have turning points inside the
trapped region. Further, we find that deep inside the black hole there is a
region $0\leq r<r_{0}$ (that includes the black hole center) which is not
trapped. Thus the trapped region is bounded both from outside and inside. The
spacetime is geodetically complete, a result which violates a condition for
singularity formation. It is inferred that in general if gravitational collapse
were to proceed with a $p=-\rho $ fluid formation, the resulting black hole may
be singularity-free.
|
I discuss the relationship between edge exponents in the statistics of work
done, dynamical phase transitions, and the role of different kinds of
excitations appearing when a non-equilibrium protocol is performed on a closed,
gapped, one-dimensional system. I show that the edge exponent in the
probability density function of the work is insensitive to the presence of
interactions and can take only one of three values: +1/2, -1/2 and -3/2. It
also turns out that there is an interesting interplay between spontaneous
symmetry breaking or the presence of bound states and the exponents. For
instantaneous global protocols, I find that the presence of the one-particle
channel creates dynamical phase transitions in the time evolution.
|
The $g$-girth-thickness $\theta(g,G)$ of a graph $G$ is the smallest number
of planar subgraphs of girth at least $g$ whose union is $G$. In this paper, we
calculate the $4$-girth-thickness $\theta(4,G)$ of the complete $m$-partite
graph $G$ when each part has an even number of vertices.
|
Aligning Large Language Models (LLMs) is crucial for enhancing their safety
and utility. However, existing methods, primarily based on preference datasets,
face challenges such as noisy labels, high annotation costs, and privacy
concerns. In this work, we introduce Alignment from Demonstrations (AfD), a
novel approach leveraging high-quality demonstration data to overcome these
challenges. We formalize AfD within a sequential decision-making framework,
highlighting its unique challenge of missing reward signals. Drawing insights
from forward and inverse reinforcement learning, we introduce divergence
minimization objectives for AfD. Analytically, we elucidate the mass-covering
and mode-seeking behaviors of various approaches, explaining when and why
certain methods are superior. Practically, we propose a computationally
efficient algorithm that extrapolates over a tailored reward model for AfD. We
validate our key insights through experiments on the Harmless and Helpful
tasks, demonstrating their strong empirical performance while maintaining
simplicity.
|
The transformation theory of optics and acoustics is developed for the
equations of linear anisotropic elasticity. The transformed equations
correspond to non-unique material properties that can be varied for a given
transformation by selection of the matrix relating displacements in the two
descriptions. This gauge matrix can be chosen to make the transformed density
isotropic for any transformation although the stress in the transformed
material is not generally symmetric. Symmetric stress is obtained only if the
gauge matrix is identical to the transformation matrix, in agreement with
Milton et al. (2006). The elastic transformation theory is applied to the case
of cylindrical anisotropy. The equations of motion for the transformed material
with isotropic density are expressed in Stroh format, suitable for modeling
cylindrical elastic cloaking. It is shown that there is a preferred approximate
material with symmetric stress that could be a useful candidate for making
cylindrical elastic cloaking devices.
|