Angular distributions of differential cross sections from the latest CLAS
data sets \cite{bradford} for the reaction $\gamma + p \to K^{+} + \Lambda$
have been analyzed using associated Legendre polynomials. This
analysis is based upon theoretical calculations in Ref. \cite{fasano} where all
sixteen observables in kaon photoproduction can be classified into four
Legendre classes. Each observable can be described by an expansion of
associated Legendre polynomial functions. One of the questions to be addressed
is how many associated Legendre polynomials are required to describe the data.
In this preliminary analysis, we used data models with different numbers of
associated Legendre polynomials. We then compared these models by calculating
posterior probabilities of the models. We found that the CLAS data set needs no
more than four associated Legendre polynomials to describe the differential
cross section data. In addition, we also show the extracted coefficients of the
best model.
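As an illustration of the model-comparison step, the sketch below fits Legendre expansions of increasing order to mock differential-cross-section data and ranks them with the Bayesian Information Criterion as a rough stand-in for the posterior-probability comparison performed in the analysis; the mock data, noise level, and range of orders are hypothetical.

```python
# Sketch: fit Legendre expansions of increasing order to mock dsigma/dOmega data
# and compare them with BIC (a rough proxy for Bayesian posterior odds).
# The data, maximum orders, and noise level below are purely illustrative.
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(0)
cos_theta = np.linspace(-0.95, 0.95, 40)
true_coeffs = [0.30, 0.12, -0.08, 0.05, 0.02]            # hypothetical A_l values
dsigma = sum(a * eval_legendre(l, cos_theta) for l, a in enumerate(true_coeffs))
sigma = 0.01
data = dsigma + rng.normal(0.0, sigma, cos_theta.size)   # add Gaussian noise

for lmax in range(1, 8):
    # Design matrix with columns P_0(x) ... P_lmax(x)
    X = np.column_stack([eval_legendre(l, cos_theta) for l in range(lmax + 1)])
    coeffs, *_ = np.linalg.lstsq(X, data, rcond=None)
    chi2 = np.sum(((data - X @ coeffs) / sigma) ** 2)
    bic = chi2 + (lmax + 1) * np.log(data.size)           # lower BIC = preferred
    print(f"l_max = {lmax}: chi2 = {chi2:8.1f}, BIC = {bic:8.1f}")
```

In a comparison of this kind the score typically stops improving once the expansion captures the angular structure of the data, which is the sense in which a small number of terms suffices.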
|
Thomas Schelling introduced his agent-based model of segregation in 1971 and
concluded that segregation will develop if people follow their individual
preferences, even when intolerance within society is low. A large body of
literature building on this framework has bolstered this claim. This paper
takes the same framework but instead looks for ways to reach an integrated
state. We focus on Allport's contact
hypothesis, which states that if there is equal status among groups, common
goals among groups, and an institutional mechanism supporting intergroup
contact, then intergroup contact can reduce prejudice. We incorporate the contact hypothesis
by having individuals adjust their intolerance based on their current
neighborhood composition and the ease of conforming to their surroundings.
Furthermore, we add in positive and negative media effects, as individuals are
likely to get information about an outgroup from the media (e.g., news, TV,
movies, etc.) that they consume. We find that a society composed of
individuals who do not easily conform to their surroundings, together with
media that displays positive examples of both groups, promotes integration
within society.
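A minimal agent-based sketch of this kind of model is given below; the grid size, the tolerance-update rule, the conformity rate, and the media term are illustrative assumptions, not the specification used in the paper.

```python
# Toy Schelling-style model with adaptive intolerance (contact hypothesis) and a
# positive-media term. All parameter names and update rules are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 40
conformity = 0.05        # how quickly agents adjust intolerance to surroundings
media_bias = 0.02        # positive media exposure lowers intolerance
grid = rng.choice([1, -1, 0], size=(N, N), p=[0.45, 0.45, 0.10])  # 0 = empty
intol = np.full((N, N), 0.4)                                      # initial intolerance

def like_fraction(g, i, j):
    """Fraction of occupied Moore neighbours sharing the agent's group."""
    group = g[i, j]
    neigh = [g[(i + di) % N, (j + dj) % N]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    occupied = [v for v in neigh if v != 0]
    return 1.0 if not occupied else sum(v == group for v in occupied) / len(occupied)

for step in range(2000):
    i, j = rng.integers(N, size=2)
    if grid[i, j] == 0:
        continue
    frac = like_fraction(grid, i, j)
    # Contact hypothesis: intolerance drifts toward the local composition,
    # reduced further by positive media portrayals of the out-group.
    intol[i, j] = np.clip(intol[i, j] + conformity * (frac - intol[i, j]) - media_bias,
                          0.0, 1.0)
    if frac < intol[i, j]:                      # unhappy agent moves to an empty cell
        empties = np.argwhere(grid == 0)
        k = rng.integers(len(empties))
        grid[tuple(empties[k])] = grid[i, j]
        grid[i, j] = 0

print("mean like-neighbour fraction:",
      np.mean([like_fraction(grid, i, j) for i in range(N) for j in range(N)
               if grid[i, j] != 0]))
```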
|
Double Fibonacci sequences are introduced and they are related to operations
with Fibonacci modules. Generalizations and examples are also discussed.
|
We examine the dynamics of a single hole in the gapless phase of the Kitaev
honeycomb model, focusing on the slow-hole regime where the bare hopping
amplitude $t$ is much less than the Kitaev exchange energy $J$. In this regime,
the hole does not generate gapped flux excitations and is dressed only by the
gapless fermion excitations. Investigating the single-hole spectral function,
we find that the hole propagates coherently with a quasiparticle weight that is
finite but approaches zero as $t/J \to 0$. This conclusion follows from two
approximate treatments, which capture the same physics in complementary ways.
Both treatments use the stationary limit as an exactly solvable starting point
to study the spectral function approximately (i) by employing a variational
approach in terms of a trial state that interpolates between the limits of a
stationary hole and an infinitely fast hole and (ii) by considering a special
point in the gapless phase that corresponds to a simplified one-dimensional
problem.
|
Reconfigurable Intelligent Surface (RIS) is one of the key technologies for
the upcoming 6th Generation (6G) communications, as it can improve the signal
strength at the receivers by adding artificial propagation paths. In the
context of Downlink (DL) Multi-User Multiple-Input Multiple-Output (MU-MIMO)
communications, designing an appropriate Beamforming (BF) scheme to take full
advantage of this reconfigured propagation environment and improve the network
capacity is a major challenge. Due to the spatial dimension provided by MIMO
systems, independent data streams can be transmitted to multiple users
simultaneously on the same radio resources. It is important to note that
serving the same subset of users over a period of time may lead to undesired
areas where the average Electromagnetic Field Exposure (EMFE) exceeds
regulatory limits. To address this challenge, in this paper, we propose a Dual
Gradient Descent (Dual-GD)-based Electromagnetic Field (EMF)-aware MU-MIMO BF
scheme that aims to optimize the overall capacity under EMFE constraints in
RIS-aided 6G cellular networks.
|
Continuum observations of molecular clouds have revealed a surprising amount
of substructure in the form of filaments a few pc in length and cores ~0.1 pc
in diameter. Understanding the evolution of these substructures towards star
formation requires the kinematic and dynamical insights provided uniquely by
sensitive line observations at high angular and spectral resolution. In this
short paper, we describe how an ngVLA can probe effectively the dynamics of
filaments and cores in nearby star-forming molecular clouds using the NH3
rotation-inversion transitions at 24 GHz. Such emission has been proven to
trace well the high column density environments of star-forming cores and
filaments but higher-resolution observations are needed to reveal important
details of how dense gas is flowing within and onto these substructures. In
particular, we describe how 150 x 18-m antennas with a maximum baseline of 1 km
can be used to sensitively map NH3 emission across high column density
locations in clouds in roughly an order of magnitude less time than with the
current Jansky VLA.
|
In this paper we solve the Helmholtz equation with multigrid preconditioned
Krylov subspace methods. The class of Shifted Laplacian preconditioners is
known to significantly speed up Krylov convergence. However, these
preconditioners have a parameter beta, a measure of the complex shift. Due to
contradictory requirements for the multigrid and Krylov convergence, the choice
of this shift parameter can be a bottleneck in applying the method. In this
paper, we propose a wavenumber-dependent minimal complex shift parameter which
is predicted by a rigorous k-grid Local Fourier Analysis (LFA) of the multigrid
scheme. We claim that, given any (regionally constant) wavenumber, this minimal
complex shift parameter provides the reader with a parameter choice that leads
to efficient Krylov convergence. Numerical experiments in one and two spatial
dimensions validate the theoretical results. It appears that the proposed
complex shift is both the minimal requirement for a multigrid V-cycle to
converge, as well as being near-optimal in terms of Krylov iteration count.
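A toy one-dimensional illustration of the complex-shifted Laplacian idea is sketched below; it uses a direct LU factorisation of the preconditioner in place of the multigrid cycle analysed in the paper, and the wavenumber, grid, and shift value are arbitrary choices.

```python
# Sketch: GMRES on a 1D Helmholtz problem preconditioned by a complex-shifted
# Laplacian. A direct LU solve stands in for the multigrid cycle used in the
# paper; k, the grid, and beta below are arbitrary illustrative values.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k, beta = 400, 40.0, 0.5
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2

A = (lap - k**2 * sp.identity(n)).astype(complex).tocsc()        # Helmholtz operator
M = (lap - (1.0 - 1j * beta) * k**2 * sp.identity(n)).tocsc()    # shifted Laplacian
M_lu = spla.splu(M)
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0 / h                                              # point source

residuals = []
x, info = spla.gmres(A, b, M=prec, restart=50, maxiter=500,
                     callback=lambda r: residuals.append(r))
print("GMRES converged" if info == 0 else f"info = {info}",
      "after", len(residuals), "iterations")
```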
|
We prove a lower bound for the entropy dissipation of the Landau equation
with Coulomb potentials by a weighted Lebesgue norm $L^3_{-5/3}$. In
particular, we enhance the weight exponent from $-5$, which was established by
Desvillettes, to $-5/3$. Moreover, we prove that the weighted Lebesgue norm
$L^3_{-5/3}$ is optimal for both exponents.
|
Block copolymers, synthesized polymer materials, have found many applications
in industry. They consist of multiple monomer sequences alternating in series
as distinct blocks. The combination of different polymers endows the material
with rich properties, which are the key to its important applications. In this
paper, we model the copolymers with the Landau-Brazovskii model with additional
constraints reflecting physical structures, formulated as a second-order
variational problem. Critical points of the functional are interpreted as
states of the polymers. By reducing to tractable cases, we find a nontrivial
periodic minimal solution. Moreover, the proof is kept as simple and
self-contained as possible in our specific case.
|
We construct a lattice action for three-dimensional ${\cal N}=4$ supersymmetric
gauge theory with matter fields in the fundamental representation.
|
We define tropical analogues of the notions of linear space and Pl\"ucker
coordinate and study their combinatorics. We introduce tropical analogues of
intersection and dualization and define a tropical linear space built by
repeated dualization and transverse intersection to be constructible. Our main
result is that all constructible tropical linear spaces have the same f-vector
and are ``series-parallel''. We conjecture that this f-vector is maximal for all
tropical linear spaces with equality precisely for the series-parallel tropical
linear spaces. We present many partial results towards this conjecture.
In addition we relate tropical linear spaces to linear spaces defined over
power series fields and give many examples and counter-examples illustrating
aspects of this relationship. We describe a family of particularly nice
series-parallel linear spaces, which we term tree spaces, that realize the
conjectured maximal f-vector and are constructed in a manner similar to the
cyclic polytopes.
|
In this work, we simulate the expected device performance and the scaling
perspectives of Carbon nanotube Field Effect Transistors (CNT-FETs), with doped
source and drain extensions. The simulations are based on the self-consistent
solution of the 3D Poisson-Schroedinger equation with open boundary conditions,
within the Non-Equilibrium Green's Function formalism, where arbitrary gate
geometry and device architecture can be considered. The investigation of short
channel effects for different gate configurations and geometry parameters shows
that double gate devices offer quasi ideal subthreshold slope and DIBL without
extremely thin gate dielectrics. Exploration of devices with parallel CNTs shows
that on-currents per unit width can be significantly larger than those of their
silicon counterparts, while high-frequency performance is very promising.
|
We study Subgraph Isomorphism on graph classes defined by a fixed forbidden
graph. Although there are several ways of forbidding a graph, we observe that
it is reasonable to focus on the minor relation since other well-known
relations lead to either trivial or equivalent problems. When the forbidden
minor is connected, we present a near dichotomy of the complexity of Subgraph
Isomorphism with respect to the forbidden minor, where the only unsettled case
is $P_{5}$, the path on five vertices. We then also consider the general case
of possibly disconnected forbidden minors. We show fixed-parameter tractable
cases and randomized XP-time solvable cases parameterized by the size of the
forbidden minor $H$. We also show that by slightly generalizing the tractable
cases, the problem becomes NP-complete. All unsettled cases are equivalent to
$P_{5}$ or the disjoint union of two $P_{5}$'s. As a byproduct, we show that
Subgraph Isomorphism is fixed-parameter tractable parameterized by vertex
integrity. Using similar techniques, we also observe that Subgraph Isomorphism
is fixed-parameter tractable parameterized by neighborhood diversity.
|
We investigate the behaviour of the QCD evolution towards high energy, in the
diffusive approximation, in the limit where the fluctuation contribution is
large. Our solution for the equivalent stochastic Fisher equation predicts the
amplitude as well as the whole set of correlators in the strong noise limit.
The speed of the front and the diffusion coefficient are obtained. We analyse
the consequences for high-energy evolution in QCD.
|
Spin pumping is becoming an established method to generate voltages from
magnetic dynamics. The standard detection method of spin pumping is based on
open circuit voltage measurement across ferromagnetic (FM) and non-magnetic
(NM) bi-layers, where the inverse spin-Hall effect (ISHE) can convert spin
currents into electrical charge accumulation. In this paper, we show that it is
also possible to measure the associated electric charge current generated in
FM/NM bi-layers by using a macroscopic closed-circuit detection method.
Using variable load resistors connected in series to the sample, we quantified
charge currents and associated electric power dissipation as a function of the
load resistance. By using basic circuit analysis, we are able to describe spin
pumping cells as a non-ideal voltage source or equivalent current source with
an internal resistor.
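The circuit analysis in question reduces to a Thevenin-type source. The sketch below shows how the source EMF and internal resistance could be extracted from a hypothetical load-resistance sweep; all numerical values are illustrative, not measured data.

```python
# Sketch: treat the spin-pumping cell as a non-ideal voltage source V_s with
# internal resistance R_int. Given a sweep of load resistors R_load and the
# measured currents I = V_s / (R_int + R_load), recover V_s and R_int from the
# linear relation I*R_load = V_s - I*R_int.  The "data" below are synthetic.
import numpy as np

V_s_true, R_int_true = 2.0e-6, 5.0            # 2 uV source, 5 ohm internal resistance
R_load = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
I = V_s_true / (R_int_true + R_load)
I += np.random.default_rng(0).normal(0, 1e-9, I.size)   # measurement noise

# Fit V_load = I * R_load = V_s - I * R_int  (linear in the unknowns V_s, R_int)
A = np.column_stack([np.ones_like(I), -I])
V_s_fit, R_int_fit = np.linalg.lstsq(A, I * R_load, rcond=None)[0]
P_load = I**2 * R_load                                    # power dissipated in the load
print(f"fitted V_s = {V_s_fit:.3e} V, R_int = {R_int_fit:.2f} ohm")
print("max power delivered near R_load = R_int:", R_load[np.argmax(P_load)], "ohm")
```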
|
CP violation in $B$ decays is reviewed in the Standard Model (SM) and beyond
the SM. The present explanation of CP violation in terms of a phase in the
Cabibbo-Kobayashi-Maskawa (CKM) matrix can be tested through a variety of CP
asymmetries in neutral and charged $B$ decays. Usually, new mechanisms of CP
nonconservation enter via $B-\bar{B}$ mixing and violate SM constraints on the
CKM parameters in a few characteristic ways. Different models can be partially
distinguished by penguin-dominated $B$ decay rate measurements. In radiative
decays, large mixing-induced asymmetries may occur due to new contributions to
the decay amplitude.
|
A set of points $X = X_B \cup X_R \subseteq \mathbb{R}^d$ is linearly
separable if the convex hulls of $X_B$ and $X_R$ are disjoint, hence there
exists a hyperplane separating $X_B$ from $X_R$. Such a hyperplane provides a
method for classifying new points, according to which side of the hyperplane
they lie on. When such a linear separation is not possible, it may still
be possible to partition $X_B$ and $X_R$ into prespecified numbers of groups,
in such a way that every group from $X_B$ is linearly separable from every
group from $X_R$. We may also discard some points as outliers, and seek to
minimize the number of outliers necessary to find such a partition. Based on
these ideas, Bertsimas and Shioda proposed the classification and regression by
integer optimization (CRIO) method in 2007. In this work we explore the integer
programming aspects of the classification part of CRIO, in particular
theoretical properties of the associated formulation. We are able to find
facet-inducing inequalities coming from the stable set polytope, hence showing
that this classification problem has exploitable combinatorial properties.
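As a small illustration of the separability notion underlying CRIO (not the integer-programming formulation studied in the paper), the following sketch checks linear separability of two synthetic point sets with a feasibility linear program.

```python
# Sketch: check whether X_B and X_R are linearly separable by searching for a
# hyperplane w.x + b with  w.x_b + b <= -1  and  w.x_r + b >= +1.  This is only
# the basic separability test, not the CRIO integer program; data are synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X_B = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(20, 2))
X_R = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(20, 2))
d = X_B.shape[1]

# Variables z = (w_1..w_d, b); constraints written as A_ub @ z <= b_ub.
A_blue = np.hstack([X_B, np.ones((len(X_B), 1))])       #   w.x + b <= -1
A_red = -np.hstack([X_R, np.ones((len(X_R), 1))])       # -(w.x + b) <= -1
A_ub = np.vstack([A_blue, A_red])
b_ub = -np.ones(len(X_B) + len(X_R))

res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (d + 1), method="highs")
print("linearly separable" if res.success else "not linearly separable")
if res.success:
    print("w =", res.x[:d], " b =", res.x[d])
```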
|
High-precision 3D printing technology opens up almost endless opportunities
to design the complex shapes present in tailored architected materials. The scope
of this work is to review the latest studies regarding 3D printed lattice
structures that involve the use of photopolymers fabricated by Material Jetting
(MJ), with a focus on the widely used Polyjet and MultiJet techniques. The main
aspects governing this printing process are introduced to determine their
influence during the fabrication of 3D printed lattices. Experimental studies,
modeling assumptions, and constitutive models used in the corresponding
numerical simulations are analyzed. Furthermore, an overview of the latest
extensively studied 3D printed architected lattice materials is presented by
emphasizing their achieved mechanical performances through the use of Ashby
plots. Then, we highlight the advantages, limitations, and challenges of the
material jetting technology to manufacture tunable architected materials for
innovative devices, oriented to several engineering applications. Finally,
possible approaches for future works and gaps to be covered by further research
are indicated, including cost and environmental-related issues.
|
The dynamics of economies and infectious disease are inextricably linked:
economic well-being influences health (sanitation, nutrition, treatment
capacity, etc.) and health influences economic well-being (labor productivity
lost to sickness and disease). Often societies are locked into "poverty traps"
of poor health and poor economy. Here, using a simplified coupled
disease-economic model with endogenous capital growth we demonstrate the
formation of poverty traps, as well as ways to escape them. We suggest two
possible mechanisms of escape both motivated by empirical data: one, through an
influx of capital (development aid), and another through changing the
percentage of GDP spent on healthcare. We find that a large influx of capital
is successful in escaping the poverty trap, but increasing health spending
alone is not. Our results demonstrate that escape from a poverty trap may be
possible, and carry important policy implications for the worldwide
distribution of aid and within-country healthcare spending.
|
Nearly 50 post-common-envelope (post-CE) close binary central stars of
planetary nebulae (CSPNe) are now known. Most contain either main sequence or
white dwarf (WD) companions that orbit the WD primary in around 0.1-1.0 days.
Only PN~G222.8-04.2 and NGC~5189 have post-CE CSPNe with a Wolf-Rayet star
primary (denoted [WR]), the low-mass analogues of massive Wolf-Rayet stars. It
is not well understood how H-deficient [WR] CSPNe form, even though they are
relatively common, appearing in over 100 PNe. The discovery and
characterisation of post-CE [WR] CSPNe is essential to determine whether
proposed binary formation scenarios are feasible to explain this enigmatic
class of stars. The existence of post-CE [WR] binaries alone suggests binary
mergers are not necessarily a pathway to form [WR] stars. Here we give an
overview of the initial results of a radial velocity monitoring programme of
[WR] CSPNe to search for new binaries. We discuss the motivation for the survey
and the associated strong selection effects. The mass functions determined for
PN~G222.8-04.2 and NGC~5189, together with literature photometric variability
data of other [WR] CSPNe, suggest that of the post-CE [WR] CSPNe yet to be
found, most will have WD or subdwarf O/B-type companions in wider orbits than
typical post-CE CSPNe (several days or months, cf. less than a day).
|
We have obtained exact results for the Ising model on a hierarchical lattice
with a scale-free degree distribution, high clustering coefficient, and
small-world behavior. By varying the probability p of long-range bonds, the
entire spectrum from an unclustered, non-small-world network to a
highly-clustered, small-world system is studied. We obtain analytical
expressions for the degree distribution P(k) and clustering coefficient C for
all p, as well as the average path length l for p=0 and 1. The Ising model on
this network is studied through an exact renormalization-group transformation
of the quenched bond probability distribution, using up to 562,500 probability
bins to represent the distribution. For p < 0.494, we find power-law critical
behavior of the magnetization and susceptibility, with critical exponents
continuously varying with p, and exponential decay of correlations away from
T_c. For p >= 0.494, where the network exhibits small-world character, the
critical behavior radically changes: We find a highly unusual phase transition,
namely an inverted Berezinskii-Kosterlitz-Thouless singularity, between a
low-temperature phase with non-zero magnetization and finite correlation length
and a high-temperature phase with zero magnetization and infinite correlation
length. Approaching T_c from below, the magnetization and the susceptibility
respectively exhibit the singularities of exp(-C/sqrt(T_c-T)) and
exp(D/sqrt(T_c-T)), with C and D positive constants. With long-range bond
strengths decaying with distance, we see a phase transition with power-law
critical singularities for all p, an unusually narrow critical region and
important corrections to power-law behavior that depend on the exponent
characterizing the decay of long-range interactions.
|
Quantum key distribution (QKD) is known to be unconditionally secure in
principle, but quantifying the security of QKD protocols from a practical
standpoint continues to remain an important challenge. Here, we focus on
phase-based QKD protocols and characterize the security of the 3-pulse and
n-pulse Differential Phase Shift Quantum Key Distribution (DPS QKD) protocols against
individual attacks. In particular, we focus on the minimum error discrimination
(MED) and cloning attacks and obtain the corresponding shrinking factor by
which the sifted key needs to be shrunk in order to get a secure key. We
compare the secure key rates thus obtained with the known lower bounds under a
general individual attack. In a departure from the theoretical lower bounds,
which have no explicit attack strategies, our work provides a practical
assessment of the security of phase-based protocols based on attacks with known
implementations.
|
We study the electroweak phase transition and the critical bubble in the
scale-invariant two Higgs doublet model taking the recent LHC data into
account. The sphaleron energy in this model is evaluated for the first time. It
is found that a strong first-order electroweak phase transition is an
inevitable consequence of consistency with the observed 125 GeV Higgs boson.
In such a case, the signal strength of the Higgs decay to two gammas and the
triple Higgs boson coupling could deviate from the SM values by $-10$% and
$+82$%, respectively.
|
Polymer-based batteries offer potentially higher power densities and a
smaller ecological footprint compared to state-of-the-art lithium-ion batteries
comprising inorganic active materials. However, in order to benefit from these
potential advantages, further research to find suitable material compositions
is required. In the present paper, we compare two different electrode
composites of poly(2,2,6,6-tetramethylpiperidinyloxy-4-ylmethacrylate) (PTMA)
and CMK-8, one produced with and one without crosslinking the PTMA. The
influence of both approaches on the corresponding electrodes is comparatively
investigated using electrochemical measurements and statistical 3D
microstructure analysis based on synchrotron X-ray tomography. A particular
focus is put on the local heterogeneity in the coating and how the crosslinking
influences the interaction between PTMA and CMK-8. It is shown that crosslinked
PTMA--compared to its non-crosslinked counterpart--exhibits a more
heterogeneous microstructure and, furthermore, leads to better surface coverage
of CMK-8, larger pores and shorter transportation pathways through the latter.
These changes improve the electrochemical properties of the electrode.
|
The brain can be considered as a system that dynamically optimizes the
structure of anatomical connections based on the efficiency requirements of
functional connectivity. To illustrate the power of this principle in
organizing the complexity of brain architecture, we portray the functional
connectivity as diffusion on the current network structure. The diffusion
drives adaptive rewiring, resulting in changes to the network to enhance its
efficiency. This dynamic evolution of the network structure generates, and thus
explains, modular small-worlds with rich club effects, features commonly
observed in neural anatomy. Taking wiring length and propagating waves into
account leads to the morphogenesis of more specific neural structures that are
stalwarts of the detailed brain functional anatomy, such as parallelism,
divergence, convergence, super-rings, and super-chains. By showing how such
structures emerge, largely independently of their specific biological
realization, we offer a new conjecture on how natural and artificial brain-like
structures can be physically implemented.
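A bare-bones version of diffusion-driven adaptive rewiring is sketched below; the heat-kernel time, the number of rewiring steps, and the specific add/remove rule are illustrative simplifications of the mechanism described above.

```python
# Sketch: adaptive rewiring driven by diffusion on the current network.  At each
# step the heat kernel exp(-t L) of the graph Laplacian serves as a proxy for
# functional connectivity: the strongest unconnected pair is wired up and the
# weakest existing edge is removed.  t, n, and the step count are illustrative.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t_diff = 60, 1.0
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random symmetric start network

for step in range(300):
    L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
    K = expm(-t_diff * L)                            # heat kernel ~ functional coupling
    off = ~np.eye(n, dtype=bool)
    # add the edge with the largest diffusion among currently unconnected pairs
    cand_add = np.where((A == 0) & off, K, -np.inf)
    i, j = np.unravel_index(np.argmax(cand_add), K.shape)
    # remove the existing edge with the smallest diffusion
    cand_del = np.where(A == 1, K, np.inf)
    k, l = np.unravel_index(np.argmin(cand_del), K.shape)
    A[i, j] = A[j, i] = 1.0
    A[k, l] = A[l, k] = 0.0

print("edges:", int(A.sum() / 2), " mean degree:", A.sum(axis=1).mean())
```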
|
We study the oscillations of a uniform longitudinal chromoelectric field in a
dynamically-evolving momentum-space anisotropic background in the weak field
limit. Evolution equations for the background are derived by taking moments of
the Boltzmann equation in two cases: (i) a fixed relaxation time and (ii) a
relaxation time that is proportional to the local inverse transverse momentum
scale of the plasma. The second case allows us to reproduce second-order viscous
hydrodynamics in the limit of small shear viscosity to entropy density ratio.
We then linearize the Boltzmann-Vlasov equation in a dynamically-evolving
background and obtain an integro-differential evolution equation for the
chromoelectric field. We present numerical solutions to this
integro-differential equation for a variety of different initial conditions and
shear viscosity to entropy density ratios. The dynamical equations obtained are
novel in that they include a non-trivial time-dependent momentum-space
anisotropic background and the effect of collisional damping for the first
time.
|
Progress in natural language processing (NLP) models that estimate
representations of word sequences has recently been leveraged to improve the
understanding of language processing in the brain. However, these models have
not been specifically designed to capture the way the brain represents language
meaning. We hypothesize that fine-tuning these models to predict recordings of
brain activity of people reading text will lead to representations that encode
more brain-activity-relevant language information. We demonstrate that a
version of BERT, a recently introduced and powerful language model, can improve
the prediction of brain activity after fine-tuning. We show that the
relationship between language and brain activity learned by BERT during this
fine-tuning transfers across multiple participants. We also show that, for some
participants, the fine-tuned representations learned from both
magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI)
are better for predicting fMRI than the representations learned from fMRI
alone, indicating that the learned representations capture
brain-activity-relevant information that is not simply an artifact of the
modality. While changes to language representations help the model predict
brain activity, they also do not harm the model's ability to perform downstream
NLP tasks. Our findings are notable for research on language understanding in
the brain.
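A minimal sketch of this fine-tuning setup is shown below, with a linear read-out from BERT to voxel responses; the public bert-base-uncased checkpoint is real, while the sentences, voxel count, and "recordings" are synthetic placeholders for the MEG/fMRI data used in the paper.

```python
# Sketch: fine-tune BERT with a linear read-out so that its representations
# predict (here: synthetic) brain responses to text.  Sentences, voxel count,
# and targets are placeholders; only the bert-base-uncased checkpoint is real.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
n_voxels = 500
readout = nn.Linear(bert.config.hidden_size, n_voxels)

sentences = ["the dog chased the ball", "she read the letter slowly"]  # placeholders
targets = torch.randn(len(sentences), n_voxels)        # stand-in for fMRI responses

opt = torch.optim.Adam(list(bert.parameters()) + list(readout.parameters()), lr=1e-5)
for epoch in range(3):
    enc = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    hidden = bert(**enc).last_hidden_state.mean(dim=1)  # mean-pooled sentence vector
    loss = nn.functional.mse_loss(readout(hidden), targets)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"epoch {epoch}: mse = {loss.item():.4f}")
```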
|
We derive here a new highly selective photoelectron-based chirality-sensing
technique that utilizes 'locally-chiral' laser pulses. We show that this
approach results in strong chiral discrimination, where the standard
forwards/backwards asymmetry of photoelectron circular dichroism (PECD) is
lifted. The resulting dichroism is much larger and more robust than
conventional PECD, is found in all hemispheres, and is not symmetric or
antisymmetric with respect to any symmetry operator. Remarkably, a CD of up to
10% survives in the angularly-integrated above-threshold ionization (ATI)
spectra, and of up to 5% in the total ionization rates. We demonstrate these
results through ab-initio calculations in the chiral molecules
Bromochlorofluoromethane, Limonene, Fenchone, and Camphor. We also explore the
parameter-space of the locally-chiral field and show that the observed CD is
strongly correlated to the degree of chirality of the light, validating it as a
measure for chiral-interaction strengths. Our results pave the way for highly
selective probing of ultrafast chirality in ATI, can potentially lead to
all-optical enantio-separation, and motivate the use of locally-chiral light
for enhancing ultrafast spectroscopies.
|
In the traditional approach to controlling superconducting qubits using
microwave pulses, the field of pulse shaping has emerged in order to assist in
the removal of leakage and increase gate fidelity. However, the challenge of
scaling microwave control electronics has created an opportunity to explore
alternative methods such as single-flux quantum (SFQ) pulses. For qubits
controlled by SFQ pulses, high fidelity gates can be achieved by optimizing the
binary control sequence. We extend the derivative removal by adiabatic gate
(DRAG) framework to a transmon qubit controlled by SFQ drivers and propose
pulse sequences that can be stored in 22 bits or fewer, with gate
fidelities exceeding 99.99%. This modest memory requirement could help reduce
the footprint of the SFQ coprocessors and power dissipation while preserving
their inherent advantages of scalability and cost-effectiveness.
|
We present the first calculations of the gravitational radiation produced by
nonaxisymmetric dynamical instability in a rapidly rotating compact star. The
star deforms into a bar shape, shedding $\sim 4\%$ of its mass and $\sim 17\%$
of its angular momentum. The gravitational radiation is calculated in the
quadrupole approximation. For a mass $M \sim 1.4$ M$_{\odot}$ and radius $R
\sim 10$ km, the gravitational waves have frequency $\sim 4$ kHz and amplitude
$h \sim 2 \times 10^{-22}$ at the distance of the Virgo Cluster. They carry off
energy $\Delta E/M \sim 0.1\%$ and radiate angular momentum $\Delta J/J \sim
0.7\%$.
|
We study the problem of recovering a planted matching in randomly weighted
complete bipartite graphs $K_{n,n}$. For some unknown perfect matching $M^*$,
the weight of an edge is drawn from one distribution $P$ if $e \in M^*$ and
another distribution $Q$ if $e \notin M^*$. Our goal is to infer $M^*$, exactly
or approximately, from the edge weights. In this paper we take
$P=\exp(\lambda)$ and $Q=\exp(1/n)$, in which case the maximum-likelihood
estimator of $M^*$ is the minimum-weight matching $M_{\text{min}}$. We obtain
precise results on the overlap between $M^*$ and $M_{\text{min}}$, i.e., the
fraction of edges they have in common. For $\lambda \ge 4$ we have almost
perfect recovery, with overlap $1-o(1)$ with high probability. For $\lambda <
4$ the expected overlap is an explicit function $\alpha(\lambda) < 1$: we
compute it by generalizing Aldous' celebrated proof of the $\zeta(2)$
conjecture for the un-planted model, using local weak convergence to relate
$K_{n,n}$ to a type of weighted infinite tree, and then deriving a system of
differential equations from a message-passing algorithm on this tree.
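The setting can be simulated directly. The sketch below plants a matching, draws exponential weights with the stated rates, recovers the minimum-weight matching with the Hungarian algorithm, and measures the overlap; the values of n and lambda are arbitrary choices.

```python
# Sketch: planted matching experiment.  Edge weights on the planted matching M*
# are Exp(lambda) (rate lambda, i.e. mean 1/lambda); all other edges are
# Exp(1/n) (mean n).  The overlap is the fraction of planted edges recovered by
# the minimum-weight matching.  n and lam below are arbitrary choices.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, lam = 500, 4.0

# Planted matching: vertex i on the left is matched to vertex i on the right.
W = rng.exponential(scale=n, size=(n, n))            # off-matching weights, mean n
W[np.arange(n), np.arange(n)] = rng.exponential(scale=1.0 / lam, size=n)

row, col = linear_sum_assignment(W)                  # minimum-weight perfect matching
overlap = np.mean(col == row)                        # fraction of planted edges kept
print(f"n = {n}, lambda = {lam}: overlap = {overlap:.3f}")
```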
|
We derive symmetries and adjunction inequalities of the knot Floer homology
groups which appear to be especially interesting for homologically essential
knots. Furthermore, we obtain an adjunction inequality for cobordism maps in
knot Floer homologies. We demonstrate the adjunction inequalities and
symmetries in explicit calculations which recover some of the main results from
[1] on longitude Floer homology and also give rise to vanishing results on knot
Floer homologies. Furthermore, using symmetries we prove that the knot Floer
homology of a fiber distinguishes $S^2 \times S^1$ from other $S^1$-bundles
over surfaces.
|
We report on a survey of the inner part of the Galactic Plane in very high
energy gamma-rays, with the H.E.S.S. Cherenkov telescope system. The Galactic
Plane between ±30 deg in longitude and ±3 deg in latitude relative to the
Galactic Centre was observed in 500 pointings for a total of 230 hours,
reaching an average flux sensitivity of 2% of the Crab Nebula at energies above
200 GeV. Fourteen previously unknown sources were detected at a significance
level greater than 4 sigma after accounting for all trials involved in the
search. Initial results on the eight most significant of these sources were
already reported elsewhere. Here we present detailed spectral and morphological
information for all the new sources, along with a discussion on possible
counterparts in other wavelength bands. The distribution in Galactic latitude
of the detected sources appears to be consistent with a scale height in the
Galactic disk for the parent population smaller than 100 pc, consistent with
expectations for supernova remnants and/or pulsar wind nebulae.
|
We present a new compilation of Type Ia supernovae (SNe Ia), a new dataset of
low-redshift nearby-Hubble-flow SNe and new analysis procedures to work with
these heterogeneous compilations. This ``Union'' compilation of 414 SN Ia,
which reduces to 307 SNe after selection cuts, includes the recent large
samples of SNe Ia from the Supernova Legacy Survey and ESSENCE Survey, the
older datasets, as well as the recently extended dataset of distant supernovae
observed with HST. A single, consistent and blind analysis procedure is used
for all the various SN Ia subsamples, and a new procedure is implemented that
consistently weights the heterogeneous data sets and rejects outliers. We
present the latest results from this Union compilation and discuss the
cosmological constraints from this new compilation and its combination with
other cosmological measurements (CMB and BAO). The constraint we obtain from
supernovae on the dark energy density is $\Omega_\Lambda=
0.713^{+0.027}_{-0.029}\,(\mathrm{stat})\,^{+0.036}_{-0.039}\,(\mathrm{sys})$, for a flat, LCDM
Universe. Assuming a constant equation of state parameter, $w$, the combined
constraints from SNe, BAO and CMB give
$w=-0.969^{+0.059}_{-0.063}(stat)^{+0.063}_{-0.066} (sys)$. While our results
are consistent with a cosmological constant, we obtain only relatively weak
constraints on a $w$ that varies with redshift. In particular, the current SN
data do not yet significantly constrain $w$ at $z>1$. With the addition of our
new nearby Hubble-flow SNe Ia, these resulting cosmological constraints are
currently the tightest available.
|
In this brief note I address a question that is not frequently asked, namely:
why did two decades pass between Einstein's first proposal of photons and the
derivation of the full Planck formula from first principles of Statistical
Mechanics, albeit with a fundamentally new approach to the counting of states?
Secondly, why did it fall to an independent inquirer, S. N. Bose, in far away
Dacca to arrive at the correct derivation of this formula, arguably a most
crucial one of the first half of the twentieth century? Reasonable hypotheses
are proposed for answers to both. I also argue that the timing of Bose's
communication to Einstein played a crucial role in Einstein's approval of de
Broglie's thesis and hence the emergence of the definitive version of quantum
mechanics in the late 1920's.
|
Recent advances in large language models (LLMs) have blurred the boundary of
high-quality text generation between humans and machines, which is favorable
for generative text steganography. However, current advanced steganographic
mappings are not suitable for LLMs, since most users are restricted to accessing
only the black-box API or user interface of the LLMs and thereby lack access to
the training vocabulary and its sampling probabilities. In this paper, we
explore a black-box generative text steganographic method based on the user
interfaces of large language models, which is called LLM-Stega. The main goal
of LLM-Stega is that the secure covert communication between Alice (sender) and
Bob (receiver) is conducted by using the user interfaces of LLMs. Specifically,
we first construct a keyword set and design a new encrypted steganographic
mapping to embed secret messages. Furthermore, to guarantee accurate extraction
of secret messages and rich semantics of the generated stego texts, an
optimization mechanism based on rejection sampling is proposed. Comprehensive experiments
demonstrate that the proposed LLM-Stega outperforms current state-of-the-art
methods.
|
For a large prime $p$, and a polynomial $f$ over a finite field $F_p$ of $p$
elements, we obtain a lower bound on the size of the multiplicative subgroup of
$F_p^*$ containing $H\ge 1$ consecutive values $f(x)$, $x = u+1, \ldots, u+H$,
uniformly over $f\in F_p[X]$ and $u \in F_p$.
|
Social platforms are heavily used by individuals to share their thoughts and
personal information. However, due to regret over time about posting
inappropriate social content, embarrassment, or even life or relationship
changes, some past posts might also pose serious privacy concerns for them. To
cope with these privacy concerns, social platforms offer deletion mechanisms
that allow users to remove their contents. Quite naturally, these deletion
mechanisms are really useful for removing past posts as and when needed.
However, these same mechanisms also leave the users potentially vulnerable to
attacks by adversaries who specifically seek the users' damaging content and
exploit the act of deletion as a strong signal for identifying such content.
Unfortunately, user experiences and contextual expectations regarding such
attacks on deletion privacy, and deletion privacy in general, are not well
understood today.
To that end, in this paper, we conduct a user survey-based exploration
involving 191 participants to unpack their prior deletion experiences, their
expectations of deletion privacy, and how effective they find the current
deletion mechanisms. We find that more than 80% of the users have deleted at
least a social media post, and users self-reported that, on average, around 35%
of their deletions happened after a week of posting. While the participants
identified the irrelevancy (due to time passing) as the main reason for content
removal, most of them believed that deletions indicate that the deleted content
includes some information damaging to the owner. Importantly, the participants
are significantly more concerned about their deletions being noticed by
large-scale data collectors (e.g., the government) than individuals from their
social circle. Finally, the participants felt that popular deletion mechanisms
are not very effective in protecting the privacy of those deletions.
|
We address stabilization of linear time-invariant (LTI), single-input
single-output (SISO) systems in the Laplace domain, with a stable controller in
a single feedback loop. Such stabilization is called strong. Plants that
satisfy a parity interlacing property are known to be strongly stabilizable.
Finding such controllers is a well known difficult problem. Existing general
methods are based on either manual search or a clever use of Nevanlinna-Pick
interpolation with polynomials of possibly high integer order. Here we present
a new, simple, and general method for strongly stabilizing systems of relative
degree less than 3. We call our method Real to Integers (RTI). Our theoretical
contributions constitute proposing the functional form used, which involves a
product of several terms of the form $\displaystyle \left ( \frac{s+a}{s+b}
\right )^m$, showing that real $m$'s will arise whenever the plant is strongly
stabilizable, and proving that integer $m$'s can be obtained by continuously
varying free parameters (i.e., the $a$'s and $b$'s). Our practical
contributions include demonstrating a simple way, based on a trigonometric
trick, to adjust the fractional powers until they take reasonable integer
values. We include brief but necessary associated discussion to make the paper
accessible to a broad audience. We also present ten numerical examples of
successful control design with varying levels of difficulty, including plants
whose transfer functions have relative degrees of 0, 1 or 2; and with right
half plane zeros of multiplicity possibly exceeding one.
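As a small numerical aid (not the RTI procedure itself), the sketch below checks whether a candidate controller of the stated form, with an integer exponent, stabilizes a given plant by examining the closed-loop characteristic polynomial; the plant, gain, and pole/zero values are made up for illustration.

```python
# Sketch: verify closed-loop stability for a candidate controller of the form
#   C(s) = K * ((s + a) / (s + b))^m   with integer m >= 0,
# by forming the characteristic polynomial den_P*den_C + num_P*num_C and
# checking its roots.  The plant and the values of K, a, b, m are illustrative;
# this is only a stability check, not the RTI design procedure of the paper.
import numpy as np

def stable(poly):
    """True if all roots of the polynomial (highest power first) lie in Re(s) < 0."""
    return bool(np.all(np.roots(poly).real < 0))

# Hypothetical plant P(s) = (s - 1) / (s^2 + 0.5 s + 2)  (relative degree 1)
num_P = np.array([1.0, -1.0])
den_P = np.array([1.0, 0.5, 2.0])

K, a, b, m = 3.0, 0.5, 4.0, 2                        # candidate controller parameters
num_C, den_C = np.array([K]), np.array([1.0])
for _ in range(m):
    num_C = np.polymul(num_C, [1.0, a])
    den_C = np.polymul(den_C, [1.0, b])

char_poly = np.polyadd(np.polymul(den_P, den_C), np.polymul(num_P, num_C))
print("controller stable:", stable(den_C))
print("closed loop stable:", stable(char_poly))
```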
|
We present a general radiative transfer model which allows the Zeeman
diagnostics of complex and unresolved solar magnetic fields. Present modeling
techniques still rely to a large extent on a-priori assumptions about the
geometry of the underlying magnetic field. In an effort to obtain a more
flexible and unbiased approach we pursue a rigorous statistical description of
the underlying atmosphere. Based on a Markov random field model the atmospheric
structures are characterized in terms of probability densities and spatial
correlations. This approach allows us to derive a stochastic transport equation
for polarized light valid in a regime with an arbitrary fluctuating magnetic
field on finite scales. One of the key ingredients of the derived stochastic
transfer equation is the correlation length which provides an additional degree
of freedom to the transport equation and can be used as a diagnostic parameter
to estimate the characteristic length scale of the underlying magnetic field.
It is shown that the stochastic transfer equation represents a natural
extension of the (polarized) line formation under the micro- and macroturbulent
assumption and contains both approaches as limiting cases. In particular, we
show how in an inhomogeneous atmosphere asymmetric Stokes profiles develop and
that the correlation length directly controls the degree of asymmetry and net
circular polarization (NCP). In a number of simple numerical model calculations
we demonstrate the importance of a finite correlation length for the polarized
line formation and its impact on the resulting Stokes line profiles.
|
It is well known that, whenever $k$ divides $n$, the complete $k$-uniform
hypergraph on $n$ vertices can be partitioned into disjoint perfect matchings.
Equivalently, the set of $k$-subsets of an $n$-set can be partitioned into
parallel classes so that each parallel class is a partition of the $n$-set.
This result is known as Baranyai's theorem, which guarantees the existence of
\emph{Baranyai partitions}. Unfortunately, the proof of Baranyai's theorem uses
network flow arguments, making this result non-explicit. In particular, there
is no known method to produce Baranyai partitions in time and space that scale
linearly with the number of hyperedges in the hypergraph. It is desirable for
certain applications to have an explicit construction that generates Baranyai
partitions in linear time. Such an efficient construction is known for $k=2$
and $k=3$. In this paper, we present an explicit recursive quadrupling
construction for $k=4$ and $n=4t$, where $t \equiv 0,3,4,6,8,9
~(\text{mod}~12)$. In a follow-up paper (Part II), the other values of~$t$,
namely $t \equiv 1,2,5,7,10,11 ~(\text{mod}~12)$, will be considered.
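For reference, one explicit linear-time construction for the $k=2$ case is the classical round-robin one-factorization of the complete graph; a short sketch follows (the quadrupling construction for $k=4$ developed in the paper is considerably more involved).

```python
# Sketch: an explicit Baranyai partition for k = 2 (a one-factorization of the
# complete graph K_n, n even), via the classical round-robin "circle" method.
# Each round r is a perfect matching, and the n-1 rounds partition all 2-subsets.
def baranyai_k2(n):
    assert n % 2 == 0, "n must be even for k = 2"
    rounds = []
    for r in range(n - 1):
        matching = [frozenset({n - 1, r})]            # fixed vertex paired with r
        for i in range(1, n // 2):
            u = (r + i) % (n - 1)
            v = (r - i) % (n - 1)
            matching.append(frozenset({u, v}))
        rounds.append(matching)
    return rounds

rounds = baranyai_k2(8)
all_pairs = {p for m in rounds for p in m}
print(len(rounds), "parallel classes,", len(all_pairs), "distinct pairs")  # 7, 28
assert all(len(set().union(*m)) == 8 for m in rounds)   # each class partitions the 8-set
```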
|
A "bigger is better" explosion in the number of parameters in deep neural
networks has made it increasingly challenging to make state-of-the-art networks
accessible in compute-restricted environments. Compression techniques have
taken on renewed importance as a way to bridge the gap. However, evaluation of
the trade-offs incurred by popular compression techniques has been centered on
high-resource datasets. In this work, we instead consider the impact of
compression in a data-limited regime. We introduce the term low-resource double
bind to refer to the co-occurrence of data limitations and compute resource
constraints. This is a common setting for NLP for low-resource languages, yet
the trade-offs in performance are poorly studied. Our work offers surprising
insights into the relationship between capacity and generalization in
data-limited regimes for the task of machine translation. Our experiments on
magnitude pruning for translations from English into Yoruba, Hausa, Igbo and
German show that in low-resource regimes, sparsity preserves performance on
frequent sentences but has a disparate impact on infrequent ones. However, it
improves robustness to out-of-distribution shifts, especially for datasets that
are very distinct from the training distribution. Our findings suggest that
sparsity can play a beneficial role at curbing memorization of low frequency
attributes, and therefore offers a promising solution to the low-resource
double bind.
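For concreteness, magnitude pruning of the kind studied here can be applied with PyTorch's pruning utilities; the tiny model below is only a stand-in for the translation models used in the experiments, and the sparsity level is an arbitrary choice.

```python
# Sketch: magnitude pruning with torch.nn.utils.prune.  The tiny feed-forward
# model is only a stand-in for the NMT models in the paper, and the 50%
# sparsity level is an arbitrary illustrative choice.
import torch
from torch import nn
from torch.nn.utils import prune

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 50% of weights with the smallest absolute magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")            # make the pruning permanent

total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
print(f"weight sparsity: {zeros / total:.2%}")
```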
|
Transistor aging is one of the major concerns that challenge designers in
advanced technologies. It profoundly degrades the reliability of circuits over
their lifetime, as it slows down transistors, resulting in errors due to
timing violations unless large guardbands are included, which leads to
considerable performance losses. When it comes to Neural Processing Units
(NPUs), where increasing the inference speed is the primary goal, such
performance losses cannot be tolerated. In this work, we are the first to
propose a reliability-aware quantization to eliminate aging effects in NPUs
while completely removing guardbands. Our technique delivers a graceful
inference accuracy degradation over time while compensating for the
aging-induced delay increase of the NPU. Our evaluation, over ten
state-of-the-art neural network architectures trained on the ImageNet dataset,
demonstrates that for an entire lifetime of 10 years, the average accuracy loss
is merely 3%. In the meantime, our technique achieves 23% higher performance
due to the elimination of the aging guardband.
|
Quantum technologies rely on the ability to coherently manipulate, process
and transfer information, encoded in quantum states, along quantum channels.
Decoherence induced by the environment introduces errors, thus setting limits
on the efficiency of any quantum-enhanced protocol or device. A fundamental
bound on the ability of a noisy quantum channel to transmit quantum (classical)
information is given by its quantum (classical) capacity. Generally, the longer
a quantum channel is, the more errors it introduces, and hence the worse its
capacity. In this Letter we show that for non-Markovian quantum channels
this is not always true: surprisingly the capacity of a longer channel can be
greater than the one of a shorter channel. We introduce a general theoretical
framework linking non-Markovianity to the capacities of quantum channels, and
demonstrate in full generality how harnessing non-Markovianity may improve the
efficiency of quantum information processing and communication.
|
Binary black hole (BBH) mergers, particularly those with component masses in
the pair-instability gap, may be produced by hierarchical mergers in the disks
surrounding Active Galactic Nuclei (AGN). While the interaction of an embedded
BBH with an AGN disk is typically assumed to facilitate a merger, recent
high-resolution hydrodynamical simulations challenge this assumption. However,
these simulations often have simplified treatments for the gas thermodynamics.
In this work, we model the possible consequence of various feedback from an
embedded BBH with a simple model that maintains an enhanced temperature profile
around each binary component. We show that when the minidisks around each BH
become hotter than the background by a factor of three, the BBH orbital
evolution switches from expansion to contraction. By analyzing the
gravitational torque profile, we find that this change in direction is driven
by a weakening of the minidisk spirals and their positive torque on the binary.
Our results highlight the important role of thermodynamics around BBHs and its
effect on their orbital evolution, suggesting that AGN disks could be efficient
factories for BBH mergers.
|
In the Steiner Tree Augmentation Problem (STAP), we are given a graph $G =
(V,E)$, a set of terminals $R \subseteq V$, and a Steiner tree $T$ spanning
$R$. The edges $L := E \setminus E(T)$ are called links and have non-negative
costs. The goal is to augment $T$ by adding a minimum cost set of links, so
that there are 2 edge-disjoint paths between each pair of vertices in $R$. This
problem is a special case of the Survivable Network Design Problem, which can
be approximated to within a factor of 2 using iterative rounding~\cite{J2001}.
We give the first polynomial time algorithm for STAP with approximation ratio
better than 2. In particular, we achieve an approximation ratio of $(1.5 +
\varepsilon)$. To do this, we employ the Local Search approach of~\cite{TZ2022}
for the Tree Augmentation Problem and generalize their main decomposition
theorem from links (of size two) to hyper-links.
We also consider the Node-Weighted Steiner Tree Augmentation Problem
(NW-STAP) in which the non-terminal nodes have non-negative costs. We seek a
cheapest subset $S \subseteq V \setminus R$ so that $G[R \cup S]$ is
2-edge-connected. Using a result of Nutov~\cite{N2010}, there exists an $O(\log
|R|)$-approximation for this problem. We provide an $O(\log^2
(|R|))$-approximation algorithm for NW-STAP using a greedy algorithm leveraging
the spider decomposition of optimal solutions.
|
Optimal control problems naturally arise in many scientific applications
where one wishes to steer a dynamical system from a certain initial state
$\mathbf{x}_0$ to a desired target state $\mathbf{x}^*$ in finite time $T$.
Recent advances in deep learning and neural network-based optimization have
contributed to the development of methods that can help solve control problems
involving high-dimensional dynamical systems. In particular, the framework of
neural ordinary differential equations (neural ODEs) provides an efficient
means to iteratively approximate continuous time control functions associated
with analytically intractable and computationally demanding control tasks.
Although neural ODE controllers have shown great potential in solving complex
control problems, the understanding of the effects of hyperparameters such as
network structure and optimizers on learning performance is still very limited.
Our work aims at addressing some of these knowledge gaps to conduct efficient
hyperparameter optimization. To this end, we first analyze how truncated and
non-truncated backpropagation through time affect runtime performance and the
ability of neural networks to learn optimal control functions. Using analytical
and numerical methods, we then study the role of parameter initializations,
optimizers, and neural-network architecture. Finally, we connect our results to
the ability of neural ODE controllers to implicitly regularize control energy.
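A stripped-down version of such a controller is sketched below, using a plain Euler discretization in place of an adaptive ODE solver; the dynamics, target, horizon, network width, and energy penalty are illustrative choices, not the configurations studied in the paper.

```python
# Sketch: learn a neural-network control u(x, t) that steers dx/dt = A x + B u
# from x0 to a target x* at time T, with a penalty on control energy.  A plain
# Euler rollout replaces an adaptive neural-ODE solver; all values are
# illustrative (dynamics, target, horizon, penalty weight, architecture).
import torch
from torch import nn

torch.manual_seed(0)
A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])
B = torch.tensor([[0.0], [1.0]])
x0 = torch.tensor([1.0, 0.0])
x_target = torch.tensor([0.0, 0.0])
T, steps, gamma = 2.0, 50, 1e-2
dt = T / steps

controller = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

for it in range(300):
    x, energy = x0, torch.tensor(0.0)
    for k in range(steps):                       # full (non-truncated) BPTT rollout
        t = torch.tensor([k * dt])
        u = controller(torch.cat([x, t]))
        x = x + dt * (A @ x + (B @ u))           # explicit Euler step
        energy = energy + dt * (u**2).sum()
    loss = ((x - x_target)**2).sum() + gamma * energy
    opt.zero_grad(); loss.backward(); opt.step()
    if it % 100 == 0:
        print(f"iter {it}: terminal error {((x - x_target)**2).sum().item():.4f}, "
              f"control energy {energy.item():.4f}")
```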
|
We investigate the high-scale behaviour of Higgs sectors beyond the Standard
Model, pointing out that the proper matching of the quartic couplings before
applying the renormalisation group equations (RGEs) is of crucial importance
for reliable predictions at larger energy scales. In particular, the common
practice of leading-order parameters in the RGE evolution is insufficient to
make precise statements on a given model's UV behaviour, typically resulting in
uncertainties of many orders of magnitude. We argue that, before applying
N-loop RGEs, a matching should even be performed at N-loop order in contrast to
common lore. We show both analytical and numerical results where the impact is
sizeable for three minimal extensions of the Standard Model: a singlet
extension, a second Higgs doublet and finally vector-like quarks. We highlight
that the known two-loop RGEs tend to moderate the running of their one-loop
counterparts, typically delaying the appearance of Landau poles. For the
addition of vector-like quarks we show that the complete two-loop matching and
RGE evolution hints at a stabilisation of the electroweak vacuum at high
energies, in contrast to results in the literature.
|
We develop a general theory of Hopf image of a Hopf algebra representation,
with the associated concept of inner faithful representation, modelled on the
notion of faithful representation of a discrete group. We study several
examples, including group algebras, enveloping algebras of Lie algebras,
pointed Hopf algebras, function algebras, twistings and cotwistings, and we
present a Tannaka duality formulation of the notion of Hopf image.
|
The intersection of mental health and computing education is under-examined.
In this systematic literature review, we evaluate the state of the art of
research in mental health and well-being interventions, assessments, and
concerns like anxiety and depression in computer science and computing
education. The studies evaluated occurred across the computing education
pipeline from introductory to PhD courses and found some commonalities
contributing to high reporting of anxiety and depression in those studied. In
addition, interventions that were designed to address mental health topics
often revolved around self-guidance. Based on our review of the literature, we
recommend increasing sample sizes and focusing on the design and development of
tools and interventions specifically designed for computing professionals and
students.
|
In this work, we calculate the amplitudes of the processes $c\bar c({^3P_J})
\rightarrow DD,DD^*, D^*D^* \rightarrow c\bar c({^3P_J})$ in the leading order
of the nonrelativistic expansion. The imaginary parts of the amplitudes
correspond to the partial decay widths of the charmonium $c\bar c({^3P_J})
\rightarrow DD,DD^*, D^*D^*$ and the real parts correspond to the mass
shifts of the charmonium $c\bar c({^3P_J})$ due to these decay channels. After
absorbing the polynomial contributions, which are purely real and include the UV
divergences, the ratios between the partial decay widths and the corresponding
mass shifts depend only on the center-of-mass energy. We find that the decay
widths and the mass shifts of the $^3P_2$ states are exactly zero at leading
order. The ratios between the partial decay widths and the mass shifts for the
$^3P_0, {^3P_1}$ states are larger than 5 when the center-of-mass energy is
above the $DD,DD^*, D^*D^*$ threshold. The dependence of the mass shifts on the
center-of-mass energy is nontrivial especially when the center-of-mass energy
is below the threshold. The analytic results can be extended to the $b$ quark
sector directly.
|
We present some examples of locally conformal symplectic structures of the
first kind on compact nilmanifolds which do not admit Vaisman metrics. One of
these examples does not admit locally conformal K\"ahler metrics and all the
structures come from left-invariant locally conformal symplectic structures on
the corresponding nilpotent Lie groups. Under certain topological restrictions
related to the compactness of the canonical foliation, we prove a structure
theorem for locally conformal symplectic manifolds of the first kind. In the
non-compact case, we show that they are the product of a real line with a
compact contact manifold and, in the compact case, we obtain that they are
mapping tori of compact contact manifolds by strict contactomorphisms.
Motivated by the aforementioned examples, we also study left-invariant locally
conformal symplectic structures on Lie groups. In particular, we obtain a
complete description of these structures (with non-zero Lee $1$-form) on
connected simply connected nilpotent Lie groups in terms of locally conformal
symplectic extensions and symplectic double extensions of symplectic nilpotent
Lie groups. In order to obtain this description, we study locally conformal
symplectic structures of the first kind on Lie algebras.
|
The results of a theoretical investigation of an ultracold, neutral plasma
composed of equal mass positive and negative charges are reported. In our
simulations, the plasma is created by the fast dissociation of a neutral
particle. The temperature of the plasma is controlled by the relative energy of
the dissociation. We studied the early time evolution of this system where the
initial energy was tuned so that the plasma is formed in the strongly coupled
regime. In particular, we present results on the temperature evolution and
three body recombination. In the weakly coupled regime, we studied how an
expanding plasma thermalizes and how the scattering between ions affects the
expansion. Because the expansion causes the density to drop, the velocity
distribution only evolves for a finite time with the final distribution
depending on the number of particles and initial temperature of the plasma.
|
We examine and discuss the spatial evolution of the statistical properties of
mechanically generated surface gravity wave fields, initialised with
unidirectional spectral energy distributions, uniformly distributed phases and
Rayleigh distributed amplitudes. We demonstrate that nonlinear interactions
produce an energy cascade towards high frequency modes with a directional
spread and trigger localised intermittent bursts. By analysing the probability
density function of Fourier mode amplitudes in the high frequency range of the
wave energy spectrum, we show that a heavy-tailed distribution emerges with
distance from the wave generator as a result of these intermittent bursts,
departing from the originally imposed Rayleigh distribution, even under
relatively weak nonlinear conditions.
|
The recently proposed ALFRED challenge task aims for a virtual robotic agent
to complete complex multi-step everyday tasks in a virtual home environment
from high-level natural language directives, such as "put a hot piece of bread
on a plate". Currently, the best-performing models are able to complete less
than 5% of these tasks successfully. In this work we focus on modeling the
translation problem of converting natural language directives into detailed
multi-step sequences of actions that accomplish those goals in the virtual
environment. We empirically demonstrate that it is possible to generate gold
multi-step plans from language directives alone without any visual input in 26%
of unseen cases. When a small amount of visual information is incorporated,
namely the starting location in the virtual environment, our best-performing
GPT-2 model successfully generates gold command sequences in 58% of cases. Our
results suggest that contextualized language models may provide strong visual
semantic planning modules for grounded virtual agents.
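As a rough illustration of the generation setup (not the trained model from this work), the snippet below shows how a GPT-2 language model can be prompted with a directive and decoded into a candidate action sequence; the prompt format, and the idea that the off-the-shelf checkpoint would yield valid ALFRED plans, are assumptions, since the paper fine-tunes the model on directive-to-plan pairs.

```python
# Sketch: decoding a candidate action sequence from a GPT-2 language model given
# a natural language directive.  The off-the-shelf gpt2 checkpoint is NOT the
# fine-tuned planner from the paper; the prompt format below is an assumption.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Directive: put a hot piece of bread on a plate.\nPlan:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```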
|
The Lie symmetry method is applied to find analytic solutions of
initial-boundary-value problems of transient conduction in semi-infinite solid
with constant surface temperature or constant heat flux condition. The
solutions are obtained in a manner highlighting the systematic procedure of
extending the symmetry method for a PDE to investigate BVPs of the PDE. A
comparative analysis of numerical and closed form solutions is carried out for
a physical problem of heat conduction in a semi-infinite solid bar made of AISI
304 stainless steel.
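For reference, the constant-surface-temperature case admits the classical error-function similarity solution (a standard textbook result that the symmetry reduction recovers); here $\alpha$ is the thermal diffusivity, $T_i$ the initial temperature, and $T_s$ the imposed surface temperature:
\begin{equation}
T(x,t) = T_s + (T_i - T_s)\,\mathrm{erf}\!\left(\frac{x}{2\sqrt{\alpha t}}\right), \qquad x \ge 0,\ t > 0.
\end{equation}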
|
Sagittarius A* exhibits frequent flaring activity across the electromagnetic
spectrum. Signatures of an orbiting hot spot have been identified in the
polarized millimeter wavelength light curves observed with ALMA in 2017
immediately after an X-ray flare. The nature of these hot spots remains
uncertain. We expanded existing theoretical hot-spot models created to describe
the Sgr A* polarized emission at millimeter wavelengths. We sampled the
posterior space, identifying best-fitting parameters and characterizing
uncertainties. Using the numerical radiative transfer code ipole, we defined a
semi-analytical model describing a ball of plasma orbiting Sgr A*, threaded
with a magnetic field and emitting synchrotron radiation. We then explored the
posterior space in the Bayesian framework of dynesty. We fit the static
background emission separately, using a radiatively inefficient accretion flow
model. We considered eight models with a varying level of complexity,
distinguished by choices regarding dynamically important cooling, non-Keplerian
motion, and magnetic field polarity. All models converge to realizations that
fit the data, but one model without cooling, non-Keplerian motion, and magnetic
field pointing toward us improved the fit significantly and also matched the
observed circular polarization. Our models represent observational data well
and allow testing various effects in a systematic manner. From our analysis, we
have inferred an inclination of $155-160$ deg, which corroborates previous
estimates, a preferred period of 90 minutes, and an orbital radius of $9-12$
gravitational radii. Our non-Keplerian models indicate a preference for an
orbital velocity of $0.6-0.9$ times the Keplerian value. Last, all our models
agree on a high dimensionless spin value ($a_{*}>0.8$), but the impact of spin
on the corresponding light curves is subdominant with respect to other
parameters.
|
This paper proposes that the distinctively human capacity for cumulative,
adaptive, open-ended cultural evolution came about through two
temporally-distinct cognitive transitions. First, the origin of Homo-specific
culture over two MYA was made possible by the onset of a finer-grained
associative memory that allowed episodes to be encoded in greater detail. This
in turn meant more overlap amongst the distributed representations of these
episodes, such that they could more readily evoke one another through
self-triggered recall (STR). STR enabled representational redescription, the
chaining of thoughts and actions, and the capacity for a stream of thought.
Second, fully cognitive modernity, following the appearance of anatomical
modernity after 200,000 BP, was made possible by the onset of contextual focus
(CF): the ability to shift between an explicit convergent mode conducive to
logic and refinement of ideas, and an implicit divergent mode conducive to
free-association, viewing situations from radically new perspectives, concept
combination, analogical thinking, and insight. This paved the way for an
integrated, creative internal network of understandings, and behavioral
modernity. We discuss feasible neural mechanisms for this two-stage proposal,
and outline how STR and CF differ from other proposals. We provide
computational evidence for the proposal obtained with an agent-based model of
cultural evolution in which agents invent ideas for actions and imitate the
fittest of their neighbors' actions. Mean fitness and diversity of actions
across the artificial society increased with STR, and even more so with CF, but
CF was only effective if STR was already in place. CF was most effective
following a change in task, which supports its hypothesized role in escaping
mental fixation. The proposal is discussed in the context of transition theory
in the life sciences.
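A minimal sketch of the invent-and-imitate dynamic described above (an illustrative toy, not the authors' implementation; the fitness function, ring topology, and parameters are placeholders):

```python
# Toy agent-based model: each agent holds an action (a bit string) and at each
# step either invents a variant of its action or imitates the fittest
# neighbour on a ring. Illustrative only; fitness and parameters are made up.
import random

N, L, STEPS, P_INVENT = 50, 10, 200, 0.2

def fitness(action):
    return sum(action)  # placeholder fitness: more 1s = fitter

agents = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

for _ in range(STEPS):
    new_agents = []
    for i, action in enumerate(agents):
        if random.random() < P_INVENT:
            # "invention": flip one randomly chosen bit of the current action
            j = random.randrange(L)
            candidate = action.copy()
            candidate[j] ^= 1
            new_agents.append(candidate)
        else:
            # imitation: copy the fittest action among the two ring neighbours
            neighbours = [agents[(i - 1) % N], agents[(i + 1) % N]]
            new_agents.append(max(neighbours, key=fitness).copy())
    agents = new_agents

print("mean fitness:", sum(fitness(a) for a in agents) / N)
print("distinct actions:", len({tuple(a) for a in agents}))
```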
|
With the field of two-dimensional (2D) magnetic materials expanding rapidly,
noncollinear topological magnetic textures in 2D materials have recently
attracted growing interest. As the in-plane counterparts of magnetic skyrmions,
magnetic bimerons have the same topological advantages but are rarely observed
in experiments. Employing first-principles calculations and Monte Carlo
simulations, we predict that the centrosymmetric transition metal halide CoX2
(X = Cl, Br) monolayers can be promising candidates for observing the
frustration-induced bimerons. These bimerons crystallize into a stable
triangular lattice under an appropriate magnetic field. Compared to the skyrmions driven
by the Dzyaloshinskii-Moriya interaction or the long-ranged magnetic
dipole-dipole interactions, these frustration-induced bimerons have much
smaller size and flexible tunability. Furthermore, the biaxial strain provides
an effective method to tune the frustration and thereby to tune the bimeron
lattice. In detail, for the CoCl2 monolayer, tensile strain can be applied to
generate a bimeron lattice, further shrink the bimeron size, and increase the
density of bimerons. For the CoBr2 monolayer, which has an inherent bimeron
lattice state, a unique orientation rotation of the bimeron lattice controlled
by compressive strain is predicted.
|
This paper discusses the stability to linearized radial perturbations of
spherically symmetric thin-shell wormholes with a "phantom-like" equation of
state for the exotic matter at the throat: $P=\omega\sigma$, $\omega<0$, where
$\sigma$ is the energy-density of the shell and $P$ the surface pressure. This
equation is analogous to the generalized Chaplygin-gas equation of state used
by E.F. Eiroa. The analysis, which differs from Eiroa's in its basic approach,
is carried out for wormholes constructed from the following spacetimes:
Schwarzschild, de Sitter and anti de Sitter, Reissner-Nordstrom, and regular
charged black-hole spacetimes, as well as from black holes in dilaton and
generalized dilaton-axion gravity.
|
Deep generative models parametrised by neural networks have recently started
to provide accurate results in modelling natural images. In particular,
generative adversarial networks provide an unsupervised solution to this
problem. In this work we apply this kind of technique to the simulation of
particle-detector response to hadronic jets. We show that deep neural networks
can achieve high fidelity in this task, while attaining a speed increase of
several orders of magnitude with respect to traditional algorithms.
|
The Joint United Nations Programme on HIV/AIDS (UNAIDS) has developed the
Estimation and Projection Package (EPP) for making national estimates and
short-term projections of HIV prevalence based on observed prevalence trends at
antenatal clinics. Assessing the uncertainty about its estimates and
projections is important for informed policy decision making, and we propose
the use of Bayesian melding for this purpose. Prevalence data and other
information about the EPP model's input parameters are used to derive a
probabilistic HIV prevalence projection, namely a probability distribution over
a set of future prevalence trajectories. We relate antenatal clinic prevalence
to population prevalence and account for variability between clinics using a
random effects model. Predictive intervals for clinic prevalence are derived
for checking the model. We discuss predictions given by the EPP model and the
results of the Bayesian melding procedure for Uganda, where prevalence peaked
at around 28% in 1990; the 95% prediction interval for 2010 ranges from 2% to
7%.
|
The goal of this paper is to develop novel tools for understanding the local
structure of systems of functions (e.g. time-series data points), such as the
total correlation function, the Cohen class of the data set, the data operator,
and the average lack of concentration. The Cohen class of the data operator
gives a time-frequency representation of the data set. Furthermore, we show
that the von Neumann entropy of the data operator captures local features of
the data set and that it is related to the notion of the effective
dimensionality. The accumulated Cohen class of the data operator gives us a
low-dimensional representation of the data set and we quantify this in terms of
the average lack of concentration and the von Neumann entropy of the data
operator by an application of a Berezin-Lieb inequality. The framework for our
approach is provided by quantum harmonic analysis.
|
The main theme of this paper is to use toric degeneration to produce distinct
homogeneous quasimorphisms on the group of Hamiltonian diffeomorphisms. We
focus on the (complex $n$-dimensional) quadric hypersurface and the del Pezzo
surfaces, and study two classes of distinguished Lagrangian submanifolds that
appear naturally in a toric degeneration, namely the Lagrangian torus which is
the monotone fiber of a Lagrangian torus fibration, and the Lagrangian spheres
that appear as vanishing cycles. For the quadrics, we prove that the group of
Hamiltonian diffeomorphisms admits two distinct homogeneous quasimorphisms and
derive some superheaviness results. Along the way, we show that the toric
degeneration is compatible with the Biran decomposition. This implies that for
$n=2$, the Lagrangian fiber torus (Gelfand--Zeitlin torus) is Hamiltonian
isotopic to the Chekanov torus, which answers a question of Y. Kim. We give
applications to $C^0$-symplectic topology which include the
Entov--Polterovich--Py question for the quadric hypersurface. We also prove
analogous results for the del Pezzo surfaces.
|
We consider the following nonlinear Schr\"{o}dinger equation of derivative
type: \begin{equation}i \partial_t u + \partial_x^2 u +i |u|^{2} \partial_x u
+b|u|^4u=0 , \quad (t,x) \in \mathbb{R}\times\mathbb{R}, \ b \in\mathbb{R}.
\end{equation} If $b=0$, this equation is known as a gauge equivalent form of
well-known derivative nonlinear Schr\"{o}dinger equation (DNLS), which is mass
critical and completely integrable. The equation can be considered as a
generalized equation of DNLS while preserving mass criticality and Hamiltonian
structure. For DNLS it is known that if the initial data $u_0\in
H^1(\mathbb{R})$ satisfies the mass condition $\| u_0\|_{L^2}^2 <4\pi$, the
corresponding solution is global and bounded. In this paper we first establish
the mass condition on the equation for general $b\in\mathbb{R}$, which is
exactly corresponding to $4\pi$-mass condition for DNLS, and then characterize
it from the viewpoint of potential well theory. We see that the mass threshold
value gives the turning point in the structure of potential wells generated by
solitons. In particular, our results for DNLS give a characterization of both
$4\pi$-mass condition and algebraic solitons.
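For reference, these are standard facts about DNLS recalled from the literature rather than new results of this work: the derivative nonlinear Schr\"odinger equation and its conserved mass read
\begin{equation}
i\partial_t u + \partial_x^2 u + i\,\partial_x\!\left(|u|^2 u\right) = 0, \qquad M(u) = \int_{\mathbb{R}} |u(x)|^2\, dx,
\end{equation}
and, as recalled above, the condition $M(u_0) < 4\pi$ for $u_0 \in H^1(\mathbb{R})$ guarantees a global bounded solution.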
|
Recent observations of Ultra High Energy Cosmic rays suggest a small
violation of Lorentz symmetry. Such a violation is expected in schemes with
discrete/quantized spacetime. We examine this situation and suggest tests which
could be carried out, for example by NASA's GLAST Satellite. The considerations
are extrapolated to the large scale cosmos.
|
I make a novel contact between string theory and degenerate fermion dynamics
in thin semiconductors. Utilizing AdS/CFT correspondence in string theory and
tunability of coupling parameters in condensed matter systems, I focus on the
possibilities of testing string theory with tabletop experiments. I first
discuss the observation that the stability of a Fermi surface is classifiable
according to K-theory. I then elaborate two concrete realizations of Fermi
surfaces of zero and two dimensions. Both are realized by complexes of
D3-branes and D7-branes of relative codimension 6 and 4, respectively. The
setup with a Fermi point models the gauge dynamics of multiply stacked graphene
layers at half-filling. I show that string theory predicts dynamical generation
of a mass gap and a metal-insulator
quantum phase transition at zero temperature. I emphasize that conformally
invariant gauge theory dynamics of the setup plays a crucial role, leading to
novel conformal phase transition. The setup with a Fermi surface, developed in
collaboration with Dongsu Bak, is based on a charged black hole and models a
relativistic Fermi liquid. We find positive evidence for this identification
from both equilibrium thermodynamics at or near zero temperature and
out-of-equilibrium linear response and transport properties. I argue that
fluctuation of black hole horizon provides holographic realization consistent
with Fermi liquid for thermodynamics and interesting departures therefrom in
transport properties.
|
We show that gap-acoustic solitons, i.e., optical gap solitons with
electrostrictive coupling to sound modes, can be produced with velocities down
to less than 2.5% of the speed of light using a fiber Bragg grating that is
linearly coupled to a non-Bragg fiber over a finite domain. Forward- and
backward-moving light pulses in the non-Bragg fiber that reach the coupling
region simultaneously couple into the Bragg fiber and form a moving soliton,
which then propagates beyond the coupling region.
|
There has been increasing interest in exploring the capabilities of advanced
large language models (LLMs) in the field of information extraction (IE),
specifically focusing on tasks related to named entity recognition (NER) and
relation extraction (RE). Although researchers are exploring the use of
few-shot information extraction through in-context learning with LLMs, they
tend to focus only on using correct or positive examples for demonstration,
neglecting the potential value of incorporating incorrect or negative examples
into the learning process. In this paper, we present c-ICL, a novel few-shot
technique that leverages both correct and incorrect sample constructions to
create in-context learning demonstrations. This approach enhances the ability
of LLMs to extract entities and relations by utilizing prompts that incorporate
not only the positive samples but also the reasoning behind them. This method
allows for the identification and correction of potential interface errors.
Specifically, our proposed method taps into the inherent contextual information
and valuable information in hard negative samples and the nearest positive
neighbors to the test and then applies the in-context learning demonstrations
based on LLMs. Our experiments on various datasets indicate that c-ICL
outperforms previous few-shot in-context learning methods, delivering
substantial enhancements in performance across a broad spectrum of related
tasks. These improvements are noteworthy, showcasing the versatility of our
approach in miscellaneous scenarios.
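A schematic illustration of an in-context prompt that combines correct and incorrect demonstrations for NER (the examples, labels, and template below are hypothetical and only illustrate the general idea, not the authors' exact prompt):

```python
# Hypothetical prompt construction for few-shot NER with both positive and
# negative demonstrations; examples and template are illustrative only.
positive_demos = [
    ("Barack Obama visited Berlin.",
     "Entities: Barack Obama (PER), Berlin (LOC)"),
]
negative_demos = [
    ("Apple released a new phone.",
     "Incorrect: Apple (LOC) -- Apple is an organisation here, so the "
     "correct label is Apple (ORG)"),
]

def build_prompt(test_sentence):
    parts = ["Extract the named entities and their types."]
    for text, answer in positive_demos:
        parts.append(f"Sentence: {text}\n{answer}")
    for text, answer in negative_demos:
        parts.append(f"Sentence: {text}\n{answer}")
    parts.append(f"Sentence: {test_sentence}\nEntities:")
    return "\n\n".join(parts)

print(build_prompt("Angela Merkel met leaders in Paris."))
```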
|
YY Gem is a short-period eclipsing binary system containing two nearly
identical, rapidly rotating, very active early-M dwarfs. This binary represents
an important benchmark system for calibrating empirical relations between
fundamental properties of low-mass stars and for testing theories of interior
structure and evolution of these objects. Both components of YY Gem exhibit
inflated radii, which has been attributed to poorly understood magnetic
activity effects. Despite a long history of magnetic activity studies of this
system, no direct magnetic field measurements have been made for it. Here we
present a comprehensive characterisation of the surface magnetic field in both
components of YY Gem. We reconstructed the global field topologies with the
help of a tomographic inversion technique applied to high-resolution
spectropolarimetric data. This analysis revealed moderately complex global
fields with a typical strength of 200-300 G and anti-aligned dipolar
components. A complementary Zeeman intensification analysis of the disentangled
intensity spectra showed that the total mean field strength reaches 3.2-3.4 kG
in both components of YY Gem. We used these results together with other recent
magnetic field measurements of M dwarfs to investigate the relation between the
global and small-scale fields in these stars. We also assessed predictions of
competing magnetoconvection interior structure models developed for YY Gem,
finding that only one of them anticipated the surface field strength compatible
with our observations. Results of our star spot mapping of YY Gem do not
support the alternative family of theoretical stellar models which attempts to
explain the radii inflation by postulating a large spot filling factor.
|
We investigate the charge transfer characteristics of one and two excess
charges in a DNA base-pair dimer using a model Hamiltonian approach. The
electron part comprises diagonal and off-diagonal Coulomb matrix elements such
as correlated hopping and the bond-bond interaction, which were recently
calculated by Starikov [E. B. Starikov, Phil. Mag. Lett. {\bf 83}, 699 (2003)]
for different DNA dimers. The electronic degrees of freedom are coupled to an
ohmic or a super-ohmic bath serving as dissipative environment. We employ the
numerical renormalization group method in the nuclear tunneling regime and
compare the results to Marcus theory for the thermal activation regime. For
realistic parameters, the rate at which at least one charge is transferred from the
donor to the acceptor in the subspace of two excess electrons significantly
exceeds the rate in the single charge sector. Moreover, the dynamics is
strongly influenced by the Coulomb matrix elements. We find sequential and pair
transfer as well as a regime where both charges remain self-trapped. The
transfer rate reaches its maximum when the difference of the on-site and
inter-site Coulomb matrix element is equal to the reorganization energy which
is the case in a GC-GC dimer. Charge transfer is completely suppressed for two
excess electrons in AT-AT in an ohmic bath and replaced by damped coherent
electron-pair oscillations in a super-ohmic bath. A finite bond-bond
interaction $W$ alters the transfer rate: it increases as function of $W$ when
the effective Coulomb repulsion exceeds the reorganization energy (inverted
regime) and decreases for smaller Coulomb repulsion.
|
The times of maximum brightness collected in the GEOS RR Lyr database allowed
us to trace the period variations of a sample of 123 galactic RRab variables.
These data span a time baseline exceeding 100 years. Clear evidence of period
increases or decreases at constant rates has been found, suggesting
evolutionary effects. The observed rates are slightly larger than those
predicted by theoretical models; moreover, there is an unexpected large
percentage of RRab stars showing a period decrease. The new possibilities
offered by the use of robotic telescopes (TAROTs, REM) and of satellite data
(CoRoT) are expected to speed up the project to measure stellar
evolution in real time. It is noteworthy that the outlines of this project have
been sketched during several GEOS meetings, where the different knowledge of
amateur and professional astronomers found a very profitable synthesis.
|
Finding the optimal policy for multi-period perishable inventory systems
requires solving computationally-expensive stochastic dynamic programs (DP). To
avoid the difficulty of solving DP models, we propose a framework that uses an
externality term to capture the long-term impact of ordering decisions on the
average cost over an infinite horizon. By approximating the externality term,
we obtain a tractable approximate optimality condition, which is solved through
standard marginal analysis. The resulting policy is near-optimal in long-run
average cost and ordering decisions.
|
Kaluza-Klein reductions of low energy string effective actions possess a
continuous $O(d,d) $ symmetry. The non-geometric elements of this group,
parameterized by a bi-vector $\beta$, are not inherited from the symmetries of
the higher-dimensional theory, but constitute instead a symmetry enhancement
produced by the isometries of the background. The realization of this
enhancement in the parent theory was recently defined as $\beta$ symmetry, a
powerful tool that allows one to avoid the field reparameterizations of the
Kaluza-Klein procedure. In this paper we further explore this symmetry and its
impact on the first order $\alpha'$-corrections. We derive the $\beta$
transformation rules from the frame formulation of Double Field Theory (DFT),
and connect them to the corresponding rules in the Metsaev-Tseytlin and
Bergshoeff-de Roo supergravity schemes. It follows from our results that
$\beta$ symmetry is a necessary condition for the uplift of string
$\alpha'$-expansions to DFT.
|
Engineering and optimization of wireless propagation channels will be one of
the key elements of future communication technologies. Metasurfaces may offer a
wide spectrum of functionalities for passive and tunable reflecting devices,
overcoming fundamental limits of commonly used conventional phase-gradient
reflectarrays and metasurfaces. In this paper, we develop an efficient way for
the design and implementation of metasurfaces with high-efficiency anomalous
reflector functionalities. The developed numerical method provides accurate,
fast, and simple metasurface designs, taking into account non-local near-field
interactions between array elements. The design method is validated by
manufacturing and experimental testing of highly efficient anomalous reflectors
for the millimetre-wave band.
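For orientation, the conventional phase-gradient rule that such designs go beyond is the generalized law of reflection, which relates the local reflection phase profile $\Phi(x)$ of the surface to the incident and reflected angles,
\begin{equation}
\sin\theta_r - \sin\theta_i = \frac{\lambda}{2\pi}\,\frac{d\Phi}{dx},
\end{equation}
where $\lambda$ is the operating wavelength; the efficiency of this purely local rule is known to degrade at large deflection angles, which is the limitation the non-local, near-field-aware design described above addresses.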
|
Simulating and analysing detailed observations of astrophysical sources for
very high energy (VHE) experiments, like the Cherenkov Telescope Array (CTA),
can be a demanding task, especially in terms of CPU consumption and required
storage. In this context, we propose an innovative cloud computing architecture
based on Amazon Web Services (AWS) aiming to decrease the amount of time
required to simulate and analyse a given field by distributing the workload and
exploiting the large computational power offered by AWS. We detail how the
various services offered by the Amazon online platform are jointly used in our
architecture and we report a comparison of the execution times required for
simulating observations of a test source with the CTA, by a single machine and
the cloud-based approach. We find that, by using AWS, we can run our
simulations more than 2 orders of magnitude faster than by using a general
purpose workstation for the same cost. We suggest considering this method when
observations need to be simulated, analysed, and concluded within short
timescales.
|
We study a family of polynomials introduced by Daigle and Freudenburg, which
contains the famous V\'en\'ereau polynomials and defines
$\mathbb{A}^2$-fibrations over $\mathbb{A}^2$. According to the
Dolgachev-Weisfeiler conjecture, every such fibration should have the structure
of a locally trivial $\mathbb{A}^2$-bundle over $\mathbb{A}^2$. We follow an
idea of Kaliman and Zaidenberg to show that these fibrations are locally
trivial $\mathbb{A}^2$-bundles over the punctured plane, all of the same
specific form $X_f$, depending on an element $f\in k[a^{\pm 1},b^{\pm 1}][x]$.
We then introduce the notion of bivariables and show that the set of
bivariables is in bijection with the set of locally trivial bundles $X_f$ that
are trivial. This allows us to give another proof of Lewis's result stating
that the second V\'en\'ereau polynomial is a variable and also to trivialise
other elements of the family $X_f$. We hope that the terminology and methods
developed here may lead to future study of the whole family $X_f$.
|
Automatic Speech Recognition (ASR) systems generalize poorly on accented
speech. The phonetic and linguistic variability of accents present hard
challenges for ASR systems today in both data collection and modeling
strategies. The resulting bias in ASR performance across accents comes at a
cost to both users and providers of ASR.
We present a survey of current promising approaches to accented speech
recognition and highlight the key challenges in the space. Approaches mostly
focus on single model generalization and accent feature engineering. Among the
challenges, the lack of a standard benchmark makes research and comparison
especially difficult.
|
Astrophysical measurements away from the 1 AU orbit of Earth can enable
several astrophysical science cases that are challenging or impossible to
perform from Earthbound platforms, including: building a detailed understanding
of the extragalactic background light throughout the electromagnetic spectrum;
measurements of the properties of dust and ice in the inner and outer solar
system; determinations of the mass of planets and stellar remnants far from
luminous stars using gravitational microlensing; and stable time-domain
astronomy. Though potentially transformative for astrophysics, opportunities to
fly instrumentation capable of these measurements are rare, and a mission to
the distant solar system that includes instrumentation expressly designed to
perform astrophysical science, or even one primarily for a different purpose
but capable of precise astronomical investigation, has not yet been flown. In
this White Paper, we describe the science motivations for this kind of
measurement, and advocate for future flight opportunities that permit
intersectional collaboration and cooperation to make these science
investigations a reality.
|
We discuss the theoretical interpretation of observational data concerning
the clustering of galaxies at high redshifts. Building on the theoretical
machinery developed by Matarrese et al. (1997), we make detailed quantitative
predictions of galaxy clustering statistics for a variety of cosmological
models, taking into account differences in spatial geometry and initial
fluctuation spectra and exploring the role of bias as a complicating factor in
these calculations. We demonstrate that the usual description of evolution (in
terms of the parameters $\epsilon$ and $r_0$) is not useful for realistic
galaxy clustering models. We compare the detailed predictions of the variation
of correlation functions with redshift against current observational data to
constrain available models of structure formation. Theories that fit the
present-day abundance of rich clusters are generally compatible with the
observed redshift evolution of galaxy clustering if galaxies are no more than
slightly biased at $z\sim 1$. We also discuss the interpretation of a
concentration of Lyman-break galaxies found by Steidel et al. (1998), coming to
the conclusion that such concentrations are not unexpected in `standard' models
of structure formation.
|
Style transfer methods typically generate a single stylized output of color
and texture coupling for reference styles, and color transfer schemes may
introduce distortion or artifacts when processing reference images with
duplicate textures. To solve the problem, we propose a Color and Texture Dual
Pipeline Lightweight Style Transfer (CTDP) method, which employs a dual
pipeline to simultaneously output the results of color and texture transfer.
Furthermore, we designed a masked total variation loss to suppress artifacts
and small texture representations in color transfer results without affecting
the semantic part of the content. More importantly, we are able to add texture
structures with controllable intensity to color transfer results for the first
time. Finally, we conducted feature visualization analysis on the texture
generation mechanism of the framework and found that smoothing the input image
can almost completely eliminate this texture structure. In comparative
experiments, the color and texture transfer results generated by CTDP both
achieve state-of-the-art performance. Additionally, the model size of the color
transfer branch is as low as 20k, which is 100-1500 times smaller than that of
other state-of-the-art models.
|
Most previous neural text-to-speech (TTS) methods are mainly based on
supervised learning methods, which means they depend on a large training
dataset and struggle to achieve comparable performance under low-resource
conditions. To address this issue, we propose a semi-supervised learning method
for neural TTS in which labeled target data is limited, which can also resolve
the problem of exposure bias in the previous auto-regressive models.
Specifically, we pre-train a reference model based on Fastspeech2 on a large
amount of source data and fine-tune it on a limited target dataset. Meanwhile,
pseudo labels generated by the original reference model are used to further
guide the fine-tuned model's training, achieving a regularization effect and
reducing the overfitting of the fine-tuned model during training on the limited target data.
Experimental results show that our proposed semi-supervised learning scheme
with limited target data significantly improves the voice quality for test data
to achieve naturalness and robustness in speech synthesis.
|
We introduce a rapid and precise analytical approach for analyzing cerebral
blood flow (CBF) using Diffuse Correlation Spectroscopy (DCS) with the
application of the Extreme Learning Machine (ELM). Our evaluation of ELM and
existing algorithms involves a comprehensive set of metrics. We assess these
algorithms using synthetic datasets for both semi-infinite and multi-layer
models. The results demonstrate that ELM consistently achieves higher fidelity
across various noise levels and optical parameters, showcasing robust
generalization ability and outperforming iterative fitting algorithms. Through
a comparison with a computationally efficient neural network, ELM attains
comparable accuracy with reduced training and inference times. Notably, the
absence of a back-propagation process in ELM during training results in
significantly faster training speeds compared to existing neural network
approaches. This proposed strategy holds promise for edge computing
applications with online training capabilities.
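A minimal sketch of the Extreme Learning Machine idea referenced above (generic ELM regression on toy data, not the authors' DCS-specific model): the hidden-layer weights are random and fixed, so training reduces to a single least-squares solve and needs no back-propagation.

```python
# Generic Extreme Learning Machine regressor: random fixed hidden layer,
# output weights obtained in closed form via least squares (no back-prop).
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for (measured curve features -> flow index) pairs.
X = rng.uniform(-1, 1, size=(500, 20))
y = np.sin(X.sum(axis=1, keepdims=True))

n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
b = rng.normal(size=(1, n_hidden))            # random biases (fixed)

H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights

y_pred = np.tanh(X @ W + b) @ beta
print("training RMSE:", np.sqrt(np.mean((y_pred - y) ** 2)))
```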
|
Observations of transition region emission in solar active regions represent
a powerful tool for determining the properties of hot coronal loops. In this
Letter we present the analysis of new observations of active region moss taken
with the Extreme Ultraviolet Imaging Spectrometer (EIS) on the \textit{Hinode}
mission. We find that the intensities predicted by steady, uniformly heated
loop models are too high relative to the observations, consistent with
previous work. To bring the model into agreement with the observations a
filling factor of about 16% is required. Furthermore, our analysis indicates
that the filling factor in the moss is nonuniform and varies inversely with the
loop pressure.
|
Nominal algebra includes $\alpha$-equality and freshness constraints on
nominal terms endowed with a nominal set semantics that facilitates reasoning
about languages with binders. Nominal unification is decidable and unitary,
however, its extension with equational axioms such as Commutativity (which is
finitary) is no longer finitary unless permutation fixed-point constraints are
used. In this paper, we extend the notion of nominal algebra by introducing
fixed-point constraints and provide a sound semantics using strong nominal
sets. We show, by providing a counter-example, that the class of nominal sets
is not a sound denotation for this extended nominal algebra. To recover
soundness we propose two different formulations of nominal algebra, one
obtained by restricting to a class of fixed-point contexts that are in direct
correspondence with freshness contexts and another obtained by using a
different set of derivation rules.
|
The name Oka principle, or Oka-Grauert principle, is traditionally used to
refer to the holomorphic incarnation of the homotopy principle: on a Stein
space, every problem that can be solved in the continuous category, can be
solved in the holomorphic category as well. In this note, we begin the study of
the same kind of questions on a Levi-flat manifold; more precisely, we try to
obtain a classification of CR-bundles on a semiholomorphic foliation of type
(n, 1). Our investigation should only be considered a preliminary exploration,
as it deals only with some particular cases, in terms of either the regularity
or the bidegree of the bundle, and obtains only partial results.
|
We develop a quasi-polynomial time Las Vegas algorithm for approximating Nash
equilibria in polymatrix games over trees, under a mild renormalizing
assumption. Our result, in particular, leads to an expected polynomial-time
algorithm for computing approximate Nash equilibria of tree polymatrix games in
which the number of actions per player is a fixed constant. Further, for trees
with constant degree, the running time of the algorithm matches the best known
upper bound for approximating Nash equilibria in bimatrix games (Lipton,
Markakis, and Mehta 2003).
Notably, this work closely complements the hardness result of Rubinstein
(2015), which establishes the inapproximability of Nash equilibria in
polymatrix games over constant-degree bipartite graphs with two actions per
player.
|
We provide a method for solving optimization problems in which the objective
function is a complex stochastic simulator of an urban transportation system.
To reach this goal, a Bayesian optimization framework is introduced. We show
how the choice of prior and inference algorithm affect the outcome of our optimization
procedure. We develop dimensionality reduction techniques that allow for our
optimization techniques to be applicable to real-life problems. We develop
distributed Gaussian Process Bayesian regression and active learning models
that allow parallel execution of our algorithms and enable usage of high
performance computing. We present a fully Bayesian approach that is more sample
efficient and reduces computational budget. Our framework is supported by
theoretical analysis and an empirical study. We demonstrate our framework on
the problem of calibrating a multi-modal transportation network of the city of
Bloomington, Illinois. Finally, we discuss directions for further research.
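A bare-bones sketch of a Gaussian-process Bayesian optimization loop of the kind described above (illustrative only; the real simulator, dimensionality reduction, fully Bayesian treatment, and distributed components are omitted, and the toy objective is a placeholder):

```python
# Minimal GP-based Bayesian optimization with expected improvement; the
# "simulator" here is a toy stand-in for the stochastic transport simulator.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def simulator(x):  # toy noisy objective standing in for the real simulator
    return float((x - 0.3) ** 2 + 0.05 * rng.normal())

X = rng.uniform(0, 1, size=(5, 1))
y = np.array([simulator(x[0]) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(200, 1))          # random candidate inputs
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulator(x_next[0]))

print("best input found:", X[np.argmin(y)], "value:", y.min())
```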
|
We consider the application of permutation orbifold constructions towards a
new possible understanding of the genus zero property in Monstrous and
Generalized Moonshine. We describe a theory of twisted Hecke operators in this
setting and conjecture on the form of Generalized Moonshine replication
formulas.
|
M dwarfs produce explosive flare emission in the near-UV and optical
continuum, and the mechanism responsible for this phenomenon is not
well-understood. We present a near-UV/optical flare spectrum from the rise
phase of a secondary flare, which occurred during the decay of a much larger
flare. The newly formed flare emission resembles the spectrum of an early-type
star, with the Balmer lines and continuum in absorption. We model this
observation phenomenologically as a temperature bump (hot spot) near the
photosphere of the M dwarf. The amount of heating implied by our model (\Delta
T_phot ~ 16,000K) is far more than predicted by chromospheric backwarming in
current 1D RHD flare models (\Delta T_phot ~ 1200K).
|
B\"uchi's problem asks whether there exists a positive integer $M$ such that
any sequence $(x_n)$ of at least $M$ integers, whose second difference of
squares is the constant sequence $(2)$, satisfies $x_n^2=(x+n)^2$ for some
$x\in\Z$. A positive answer to B\"uchi's problem would imply that there is no
algorithm to decide whether or not an arbitrary system of quadratic diagonal
forms over $\Z$ can represent an arbitrary given vector of integers. We give
explicitly an infinite family of polynomial parametrizations of non-trivial
length $4$ B\"uchi sequences of integers. In turn, these parametrizations give
an explicit infinite family of curves (which we suspect to be hyperelliptic)
with the following property: any integral point on one of these curves would
give a length $5$ non-trivial B\"uchi sequence of integers (it is not known
whether any such sequence exists).
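For concreteness, the trivial sequences $x_n = \pm(x+n)$ do satisfy the constant-second-difference condition, as a direct expansion shows:
\begin{equation}
(x+n+2)^2 - 2(x+n+1)^2 + (x+n)^2 = 2 \quad \text{for all } n,
\end{equation}
so B\"uchi's problem asks whether every sufficiently long integer sequence with this property must be of this trivial form.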
|
We study a system of two qubits interacting with a common environment,
described by a two-spin boson model. We demonstrate two competing roles of the
environment: inducing entanglement between the two qubits and making them
decoherent. For the environment of a single harmonic oscillator, if its
frequency is commensurate with the induced two-qubit coupling strength, the two
qubits could be maximally entangled and the environment could be separable. In
the case of the environment of a bosonic bath, the gap of its spectral density
function is essential for generating entanglement between the two qubits at
equilibrium and for the bath to serve as a quantum data bus.
|
We give quantitative bounds for the number of quasi-integral points in orbits
of semigroups of rational maps under some conditions, generalizing previous
work of L. C. Hsia and J. Silverman (2011) for orbits generated by the
iterations of one rational map.
|
The properties of the ground state of liquid $^4$He are studied using a
correlated basis function of the form $\prod_{i<j} \psi(r_{ij})$. Here,
$\psi(r)$ is chosen as the exact solution of the Schr\"{o}dinger equation for
two $^4$He atoms. A hard-sphere plus an attractive square well is used as the
interaction potential between $^4$He atoms. The pair distribution function is
calculated using approximate integral methods, namely the Percus-Yevick (PY)
equation and Hypernetted Chain (HNC) approximation. The values thus obtained
are used to calculate the ground state energy, which is found to be -4.886 K
using the PY equation. The liquid structure factor is also obtained using the
pair distribution function. The values for the pair distribution function and
liquid structure factor are compared with experimental results and earlier
theoretical calculations.
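For reference, the approximate integral methods mentioned above combine the Ornstein-Zernike relation with a closure; in their standard classical form, with $h(r) = g(r) - 1$,
\begin{equation}
h(r_{12}) = c(r_{12}) + \rho \int c(r_{13})\, h(r_{32})\, d\mathbf{r}_3, \qquad
c_{\mathrm{PY}}(r) = \left[1 - e^{\beta v(r)}\right] g(r),
\end{equation}
where in the ground-state calculation the role of the classical Boltzmann factor $e^{-\beta v(r)}$ is played by $\psi^2(r)$, since $|\Psi|^2 = \prod_{i<j}\psi^2(r_{ij})$ is treated as a classical configurational distribution.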
|
We present the results of a survey of young intermediate mass stars (age
$<$~5 Myr, 1.5 $<M_{\star} \leq $ 15 $M_{\odot}$) in the W5 massive star
forming region. We use combined optical, near-infrared and {\it Spitzer} Space
Telescope photometry and optical spectroscopy to define a sample of stars of
spectral type A and B and examine their infrared excess properties. We find
objects with infrared excesses characteristic of optically thick disks, i.e.
Herbig AeBe stars. These stars are rare: $<$1.5% of the entire spectroscopic
sample of A and B stars, and absent among stars more massive than 2.4
$M_\odot$. 7.5% of the A and B stars possess infrared excesses in a variety of
morphologies that suggest their disks are in some transitional phase between an
initial, optically thick accretion state and later evolutionary states. We
identify four morphological classes based on the wavelength dependence of the
observed excess emission above theoretical photospheric levels: (a) the
optically thick disks; (b) disks with an optically thin excess over the
wavelength range 2 to 24 $\micron$, similar to that shown by Classical Be
stars; (c) disks that are optically thin in their inner regions based on their
infrared excess at 2-8 $\micron$ and optically thick in their outer regions
based on the magnitude of the observed excess emission at 24 $\micron$; (d)
disks that exhibit empty inner regions (no excess emission at $\lambda$ $\leq$
8 $\micron$) and some measurable excess emission at 24 $\micron$. A sub-class
of disks exhibit no significant excess emission at $\lambda \leq$ 5.8
$\micron$, have excess emission only in the {\it Spitzer} 8 $\micron$ band and
no detection at 24 $\micron$. We discuss these spectral energy distribution
(SED) types, suggest physical models for disks exhibiting these emission
patterns and additional observations to test these theories.
|
With the increasing imaging and processing capabilities of today's mobile
devices, user authentication using iris biometrics has become feasible.
However, as the acquisition conditions become more unconstrained and as image
quality is typically lower than dedicated iris acquisition systems, the
accurate segmentation of iris regions is crucial for these devices. In this
work, an end-to-end Fully Convolutional Deep Neural Network (FCDNN) design is
proposed to perform the iris segmentation task for lower-quality iris images.
The network design process is explained in detail, and the resulting network is
trained and tuned using several large public iris datasets. A set of methods to
generate and augment suitable lower quality iris images from the high-quality
public databases are provided. The network is trained on Near InfraRed (NIR)
images initially and later tuned on additional datasets derived from visible
images. Comprehensive inter-database comparisons are provided together with
results from a selection of experiments detailing the effects of different
tunings of the network. Finally, the proposed model is compared with
SegNet-basic, and a near-optimal tuning of the network is compared to a
selection of other state-of-art iris segmentation algorithms. The results show
very promising performance from the optimized Deep Neural Network design when
compared with state-of-art techniques applied to the same lower quality
datasets.
|
We explore the transition to hydrodynamics in a weakly-coupled model of
quark-gluon plasma given by kinetic theory in the relaxation time approximation
with conformal symmetry. We demonstrate that the gradient expansion in this
model has a vanishing radius of convergence due to the presence of a transient
(nonhydrodynamic) mode, in a way similar to results obtained earlier in
strongly-coupled gauge theories. This suggests that the mechanism by which
hydrodynamic behaviour emerges is the same, which we further corroborate by a
novel comparison between solutions of different weakly and strongly coupled
models. However, in contrast with other known cases, we find that not all the
singularities of the analytic continuation of the Borel transform of the
gradient expansion correspond to transient excitations of the microscopic
system: some of them reflect analytic properties of the kinetic equation when
the proper time is continued to complex values.
|
Millimeter-wave (MMW) imaging is emerging as a promising technique for safe
security inspection. It achieves a delicate balance between imaging resolution,
penetrability and human safety, resulting in higher resolution compared to
low-frequency microwave, stronger penetrability compared to visible light, and
stronger safety compared to X-ray. Despite recent advances in the last decades,
the high cost of the requisite large-scale antenna arrays hinders
widespread adoption of MMW imaging in practice. To tackle this challenge, we
report a large-scale single-shot MMW imaging framework using sparse antenna
array, achieving low-cost but high-fidelity security inspection under an
interpretable learning scheme. We first collected extensive full-sampled MMW
echoes to study the statistical ranking of each element in the large-scale
array. These elements are then sampled based on the ranking, building the
experimentally optimal sparse sampling strategy that reduces the cost of
antenna array by up to one order of magnitude. Additionally, we derived an
untrained interpretable learning scheme, which realizes robust and accurate
image reconstruction from sparsely sampled echoes. Last, we developed a neural
network for automatic object detection, and experimentally demonstrated
successful detection of concealed centimeter-sized targets using a 10% sparse
array, whereas all other contemporary approaches failed at the same sampling
ratio. The reported technique outperforms existing MMW imaging schemes by more
than 50% on various metrics including precision, recall, and mAP50. With such
strong detection ability and
order-of-magnitude cost reduction, we anticipate that this technique provides a
practical way for large-scale single-shot MMW imaging, and could advocate its
further practical applications.
|
Wave localization is a ubiquitous phenomenon. It refers to situations in which
transmitted waves in scattering media are trapped in space and remain confined
in the vicinity of the initial site until dissipated. Here we report a phase
transition from acoustically extended to localized states in arrays of
identical air-filled bubbles in water. It is shown that the acoustic
localization in such media is coincident with the complete band gap of a
lattice arrangement of the air-bubbles. When the localization or the band gap
occurs, a peculiar collective behavior of the bubbles appears.
|
We perform a detailed phenomenological study of high-energy neutrino deep
inelastic scattering (DIS) focused on LHC far-forward experiments such as
FASER$\nu$ and SND@LHC. To this aim, we parametrise the neutrino fluxes
reaching these LHC far-forward experiments in terms of `neutrino PDFs' encoding
their energy and rapidity dependence by means of the LHAPDF framework. We
integrate these neutrino PDFs in the recently developed POWHEG-BOX-RES
implementation of neutrino-induced DIS to produce predictions accurate at
next-to-leading order (NLO) in the QCD coupling matched to parton showers (PS)
with Pythia8. We present NLO+PS predictions for final-state distributions
within the acceptance of FASER$\nu$ and SND@LHC as well as for two experiments
of the proposed Forward Physics Facility (FPF), FASER$\nu$2 and FLArE. We
quantify the impact of NLO QCD corrections, of the parton showering and
hadronisation settings in Pythia8, of the QED shower, and of the incoming
neutrino flavour for the description of these observables, and compare our
predictions with the GENIE neutrino event generator. Our work demonstrates the
relevance of modern higher-order event generators to achieve the key scientific
targets of the LHC neutrino experiments.
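As an illustration of the interface (the set name below is a placeholder, not a released grid), a neutrino "flux PDF" stored as an LHAPDF grid can be queried through the standard LHAPDF Python bindings just like an ordinary parton distribution, with the neutrino energy and rapidity dependence mapped onto the grid variables:

```python
# Illustrative use of the LHAPDF Python bindings to evaluate a flux grid;
# the set name "FASERv_numu_flux" is hypothetical, not a released set.
import lhapdf

flux = lhapdf.mkPDF("FASERv_numu_flux", 0)    # member 0 of a hypothetical set
pid, x, Q = 14, 0.1, 100.0                    # muon neutrino, grid variables
print(flux.xfxQ(pid, x, Q))                   # grid value for this flavour
```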
|