Using ab initio calculations based on density-functional theory and effective
model analysis, we propose that trigonal YH3 (space group P-3c1) at ambient
pressure is a node-line semimetal when spin-orbit coupling (SOC) is ignored.
This trigonal YH3 has a very clean electronic structure near the Fermi level,
and its nodal lines lie very close to the Fermi energy, which makes it a
perfect system for model analysis. Symmetry analysis shows that the nodal ring
in this compound is protected by the glide-plane symmetry, where the band
inversion of the |Y+,dxz> and |H1-,s> orbitals at the Gamma point is
responsible for the formation of the nodal lines. When SOC is included, the
line nodes are prohibited by the glide-plane symmetry and a small gap (~5 meV)
appears, turning YH3 into a strong topological insulator with Z2 indices
(1;000). Thus the glide-plane symmetry plays opposite roles in the formation of
the nodal lines in the cases without and with SOC. As the SOC-induced gap is
small enough to be neglected, this P-3c1 YH3 may be a good candidate for
experimental exploration of the fundamental physics of topological node-line
semimetals. We find that the surface states of this P-3c1 phase are rather
distinctive and may help to identify the real ground state of YH3 in
experiment.
|
Triple-layered ruthenate Sr$_4$Ru$_3$O$_{10}$ shows a first-order itinerant
metamagnetic transition for in-plane magnetic fields. Our experiments revealed
rather surprising behavior in the low-temperature transport properties near
this transition. The in-plane magnetoresistivity $\rho_{ab}(H)$ exhibits
ultrasharp steps as the magnetic field sweeps down through the transition.
Temperature sweeps of $\rho_{ab}$ for fields within the transition regime
show non-metallic behavior in the up-sweep cycle of the magnetic field, but
show a significant drop in the down-sweep cycle. These observations indicate
that the transition occurs via a novel electronic phase-separation process: a
weakly polarized state is mixed with a ferromagnetic state within the
transition regime.
|
We present Akari and Herschel data, together with data from the SCUBA2 camera
on the JCMT, of molecular clouds. We focus on pre-stellar cores within the
clouds. We present
Akari data of the L1147-1157 ring in Cepheus and show how the data indicate
that the cores are being externally heated. We present SCUBA2 and Herschel data
of the Ophiuchus region and show how the environment is also affecting core
evolution in this region. We discuss the effects of the magnetic field in the
Lupus I region, and how this lends support to a model for the formation and
evolution of cores in filamentary molecular clouds.
|
The Lagrangian formalism for Lagrangians homogeneous of degree two in the
velocities is considered. It is shown that the reduced dynamics obtained by
neglecting one generalized coordinate is, in general, described by the Herglotz
extension of the Lagrangian formalism. This result is applied to the
propagation of light in a general gravitational field, leading to an extended
Fermat principle.
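For reference, the Herglotz variational problem replaces the action integral by
an ODE for the action variable $z$; in its standard textbook form (notation
ours, not taken verbatim from the paper) it reads

$$ \dot z = L(t, q, \dot q, z), \qquad
\frac{\partial L}{\partial q}
- \frac{d}{dt}\frac{\partial L}{\partial \dot q}
+ \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot q} = 0, $$

where the second equation is the generalized Euler-Lagrange equation, which
reduces to the classical one when $L$ is independent of $z$.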
|
We apply weak-coupling perturbation theory to the Holstein molecular crystal
model in order to compute an electron-phonon correlation function
characterizing the shape and size of the polaron lattice distortion in one,
two, and three dimensions. This correlation function is computed exactly to
leading order in the electron-phonon coupling constant, permitting a complete
description of correlations in any dimension for both isotropic and arbitrarily
anisotropic cases. Using this exact result, the width of the polaron is
characterized along arbitrary directions. The width of the polaron thus
determined disagrees in every dimension with some well-known characterizations
of polarons, signalling in particular the breakdown of the adiabatic
approximation and the characterizations of self-trapping associated with it.
|
We have measured electrical transport across epitaxial, nanometer-sized
metal-semiconductor interfaces by contacting CoSi2 islands grown on Si(111)
with an STM tip. The conductance per unit area was found to increase with
decreasing diode area. Indeed, the zero-bias conductance was found to be about
10^4 times larger than expected from downscaling a conventional diode. These
observations are explained by a model that predicts a narrower barrier for
small diodes and therefore a greatly increased contribution of tunneling to the
electrical transport.
|
I review the multiphase cooling flow equations that reduce to a relatively
simple form for a wide class of self-similar density distributions described by
a single parameter, $k$. It is shown that steady-state cooling flows are
\emph{not} consistent with all possible emissivity profiles which can therefore
be used as a test of the theory. In combination, they provide strong
constraints on the temperature profile and mass distribution within the cooling
radius. The model is applied to ROSAT HRI data for three Abell clusters. At one
extreme ($k\sim1$), these show evidence for cores in the mass distribution of
size 110--140$\,h_{50}^{-1}$ kpc and temperatures that decline towards the
flow centre. At the other ($k\to\infty$), the mass density and gas
temperature both rise sharply towards the flow centre. The former are more
consistent with measured temperatures, which suggests that the density
distribution in the intracluster medium contains the minimum possible mixture
of low-density components.
|
Human activity recognition (HAR) by wearable sensor devices embedded in the
Internet of Things (IoT) can play a significant role in remote health
monitoring and emergency notification, providing healthcare of a higher
standard. The purpose of this study is to investigate a human activity
recognition method with increased decision accuracy and speed of execution, so
as to be applicable in healthcare. The method classifies wearable-sensor
acceleration time series of human movement using an efficient classifier
combination of feature-engineering-based and feature-learning-based data
representations. Leave-one-subject-out cross-validation of the method, with
data acquired from 44 subjects wearing a single waist-worn accelerometer on a
smart textile and engaged in 10 different activities, yields an average
recognition rate of 90%, performing significantly better than the individual
classifiers. The method easily accommodates functional and computational
parallelization to reduce execution time significantly.
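A minimal sketch of the leave-one-subject-out protocol mentioned above (the
record format and toy data are hypothetical, not the paper's pipeline):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits: each test set holds all
    samples from exactly one subject, so no subject leaks across the split."""
    subjects = sorted({s["subject"] for s in samples})
    for held_out in subjects:
        train = [s for s in samples if s["subject"] != held_out]
        test = [s for s in samples if s["subject"] == held_out]
        yield held_out, train, test

# Toy data: 3 subjects with 2 acceleration windows each.
data = [{"subject": i // 2, "window": [i], "label": i % 2} for i in range(6)]
splits = list(leave_one_subject_out(data))
```

Per-subject splitting is what makes the reported 90% an estimate of
generalization to unseen wearers, rather than to unseen windows of known
wearers.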
|
The industrial manufacturing of chemicals consumes a significant amount of
energy and raw materials. In principle, the development of new catalysts could
greatly improve the efficiency of chemical production. However, the discovery
of viable catalysts can be exceedingly challenging because it is difficult to
know the efficacy of a candidate without experimentally synthesizing and
characterizing it. This study explores the feasibility of using fault-tolerant
quantum computers to accelerate the discovery of homogeneous catalysts for
nitrogen fixation, an industrially important chemical process. It introduces a
set of ground-state energy estimation problems representative of calculations
needed for the discovery of homogeneous catalysts and analyzes them on three
dimensions: economic utility, classical hardness, and quantum resource
requirements. For the highest utility problem considered, two steps of a
catalytic cycle for the generation of cyanate anion from dinitrogen, the
economic utility of running these computations is estimated to be $200,000, and
the required runtime for double-factorized phase estimation on a fault-tolerant
superconducting device is estimated under conservative assumptions to be
139,000 QPU-hours. The computational cost of an equivalent DMRG calculation is
estimated to be about 400,000 CPU-hours. These results suggest that, with
continued development, it will be feasible for fault-tolerant quantum computers
to accelerate the discovery of homogeneous catalysts.
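A back-of-envelope comparison of the quoted resource estimates (the hour counts
and utility figure are from the text above; the price per CPU-hour is a purely
hypothetical assumption for illustration):

```python
# Figures quoted in the text.
qpu_hours = 139_000    # double-factorized phase estimation, fault-tolerant device
cpu_hours = 400_000    # equivalent DMRG calculation
utility_usd = 200_000  # estimated economic utility of the computation

# Hypothetical price per CPU-hour (an assumption, not from the study).
cpu_rate_usd = 0.05
dmrg_cost_usd = cpu_hours * cpu_rate_usd

# QPU-hour price at which the quantum run would match the DMRG cost.
breakeven_qpu_rate_usd = dmrg_cost_usd / qpu_hours
```

Under this assumed CPU price, the classical run costs $20,000, well below the
estimated utility; the comparison then hinges entirely on the (currently
unknown) effective price of a fault-tolerant QPU-hour.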
|
The Landau-Lifshitz equation is derived as the reduction of a geodesic flow
on the group of maps into the rotation group. Passing the symmetries of spatial
isotropy to the reduced space is an example of semidirect product reduction by
stages.
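In its simplest isotropic continuum form (standard notation; the paper's
conventions may differ), the equation in question is

$$ \partial_t \mathbf{S} = \mathbf{S} \times \Delta \mathbf{S}, \qquad
\mathbf{S}(x,t) \in S^2, $$

i.e. the spin field precesses about its own Laplacian while remaining on the
unit sphere, which is the structure the geodesic-flow reduction explains.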
|
How human brain function emerges from structure has intrigued researchers for
decades, and numerous models have been put forward, yet none of them yields a
close structure-function relation. Here we present a resonance model, based on
the neuronal spike-timing-dependent plasticity (STDP) principle, to describe
spontaneous cortical activity by incorporating the dynamic interactions between
neuronal populations into a wave equation, which is able to accurately predict
resting-brain functional connectivity (FC), including the resting-state
networks. In addition, the proposed model provides strong theoretical and
experimental evidence that the spontaneous dynamic coupling between brain
regions fluctuates at a low frequency. Crucially, it is able to account for
how negative functional correlations emerge during resonance. We test the
model with a large cohort of subjects (1038) from the Human Connectome Project
(HCP) S1200 release in both the time and frequency domains, where it exhibits
superior performance to existing eigen-decomposition models.
|
Square-wave pulse generation with a variable duty ratio can be realized using
the ideas of Talbot array illuminators formulated for binary phase gratings.
Binary temporal phase modulation of a CW laser field propagating through a
group-delay-dispersion circuit of fractional Talbot length $P/Q$ results in a
well-defined sequence of square-wave-form pulses. When $P=1$, the duty ratio
$D$ of the pulses is $1/2$ for $Q=4$ and $1/3$ for $Q=3$ and 6. The maximum
intensity of the pulses doubles and triples compared to the CW intensity for
$D=1/2$ and $1/3$, respectively. These pulses can be used for return-to-zero
laser field modulation in optical fiber communication. For $D=1/3$, extra
features between the pulses are found, originating from the finite rise and
fall times of the phase in binary phase modulation. A similar effect, as a
benefit of the time-space analogy, is predicted for binary phase gratings and
interpreted as gleams produced by imperfect edges of the components of the
rectangular phase gratings.
|
Given a Finsler space (M,F) on a manifold M, the averaging method associates
affine geometric objects living on $M$ to Finslerian geometric objects. In
particular, a Riemannian metric is associated to the fundamental tensor $g$,
and an affine, torsion-free connection is associated to the Chern-Rund
connection. As an illustration of the technique, a generalization of the
Gauss-Bonnet theorem to Berwald surfaces using the average metric is presented.
The parallel transport and curvature endomorphisms of the average connection
are obtained. The holonomy group for a Berwald space is discussed. New affine,
local isometric invariants of the original Finsler metric are obtained. The
heredity of the property of being a symmetric space from the Finsler space to
the average Riemannian metric is proved.
|
Among the numerous questions that arise concerning the exploitation of
petroleum from unconventional reservoirs lie the questions of the composition
of hydrocarbons present in deep-seated HP-HT reservoirs or produced during
in-situ upgrading of heavy oils and oil shales. Our research shows that
experimental hydrocarbon-cracking results obtained in the laboratory cannot be
extrapolated to geological reservoir conditions in a simple manner. Our
demonstration is based on two examples: 1) the role of the hydrocarbon mixture
composition on reaction kinetics (the "mixing effect"), and 2) the effects of
pressure (both in relationship to temperature and time). The extrapolation of
experimental data to geological conditions requires investigation of the
free-radical reaction mechanisms through a computed kinetic model. We propose a
model that currently takes into account 52 reactants and can be continuously
improved by the addition of new reactants as research proceeds. This model is
complete and detailed enough to be simulated over large ranges of temperature
(150-500 °C) and pressure (1-1500 bar). It is thus adapted to predict the
evolution of hydrocarbons from upgrading conditions to geological reservoirs.
|
A method for identifying statistical-equilibrium stages in dynamical
multifragmentation paths provided by transport models, already successfully
tested for the reaction ^{129}Xe+^{119}Sn at 32 MeV/u, is applied here to a
higher-energy reaction, ^{129}Xe+^{119}Sn at 50 MeV/u. The method evaluates
equilibrium from the point of view of the microcanonical multifragmentation
model (MMM), and reactions are simulated by means of the stochastic mean-field
model (SMF). A unique solution, corresponding to the maximum population of the
system phase space, was identified, suggesting that a large part of the
available phase space is occupied even in the case of the 50 MeV/u reaction, in
the presence of a considerable amount of radial collective flow. The specific
equilibration time and volume are identified, and the differences between the
two systems are discussed.
|
We show that every rationally sampled dilation-and-modulation system is
unitarily equivalent to a multi-window Gabor system. As a consequence, frame
theoretical results from Gabor analysis can be directly transferred to
dilation-and-modulation systems.
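For orientation, the objects involved are (textbook definitions; the paper's
exact normalizations may differ): a Gabor system

$$ \mathcal{G}(g, a, b) = \{\, e^{2\pi i m b x}\, g(x - n a) : m, n \in
\mathbb{Z} \,\}, $$

a multi-window Gabor system being the union of such systems over finitely many
windows $g_1, \dots, g_r$, while a dilation-and-modulation system replaces the
translations by dilations, e.g. $\{\, a^{j/2} e^{2\pi i m b x} g(a^j x) : m, j
\in \mathbb{Z} \,\}$.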
|
The HST treasury program BUFFALO provides extended wide-field imaging of the
six Hubble Frontier Fields galaxy clusters. Here we present the combined strong
and weak-lensing analysis of Abell 370, a massive cluster at z=0.375. From the
reconstructed total projected mass distribution in the 6 arcmin x 6 arcmin
BUFFALO field of view, we obtain the distribution of massive substructures
outside the cluster core and report the presence of a total of seven
candidates, each with mass $\sim 5 \times 10^{13}M_{\odot}$. Combining the
total mass distribution derived from lensing with multi-wavelength data, we
evaluate the physical significance of each candidate substructure, and conclude
that 5 out of the 7 substructure candidates seem reliable, and that the mass
distribution in Abell 370 is extended along the North-West and South-East
directions. While this finding is in general agreement with previous studies,
our detailed spatial reconstruction provides new insights into the complex mass
distribution at large cluster-centric radius. We explore the impact of the
extended mass reconstruction on the model of the cluster core and in
particular, we attempt to physically explain the presence of an important
external shear component, necessary to obtain a low root-mean-square separation
between the model-predicted and observed positions of the multiple images in
the cluster core. The substructures can only account for up to half the
amplitude of the external shear, suggesting that more effort is needed to fully
replace it by more physically motivated mass components. We provide public
access to all the lensing data used as well as the different lens models.
|
The AKARI All-Sky Survey provided the first bright point source catalog
detected at 90um. Starting from this catalog, we selected galaxies by matching
AKARI sources with those in the IRAS PSCz. Next, we have measured total GALEX
FUV and NUV flux densities. Then, we have matched this sample with SDSS and
2MASS galaxies. By this procedure, we obtained the final sample which consists
of 607 galaxies. When we sort the sample with respect to 90um luminosity, the
average SED shows a coherent trend: the more luminous at 90um, the redder the
global SED becomes. The M_r--(NUV-r) color-magnitude relation of our sample
does not show
bimodality, and the distribution is centered on the green valley between the
blue cloud and red sequence seen in optical surveys. We have established
formulae to convert FIR luminosity from AKARI bands to the total infrared (IR)
luminosity L_TIR. With these formulae, we calculated the star formation
directly visible with FUV and hidden by dust. The luminosity related to star
formation activity (L_SF) is dominated by L_TIR even if we take into account
the far-infrared (FIR) emission from dust heated by old stars. At high star
formation rate (SFR) (> 20 Msun yr^-1), the fraction of directly visible SFR,
SFR_FUV, decreases. We also estimated the FUV attenuation A_FUV from
FUV-to-total IR (TIR) luminosity ratio. We also examined the L_TIR/L_FUV-UV
slope (FUV- NUV) relation. The majority of the sample has L_TIR/L_FUV ratios 5
to 10 times lower than expected from the local starburst relation, while some
LIRGs and all the ULIRGs of this sample have higher L_TIR/L_FUV ratios. We
found that the attenuation indicator L_TIR/L_FUV is correlated with the stellar
mass of galaxies, M*, but there is no correlation between the specific SFR
(SSFR), SFR/M*, and the dust attenuation L_TIR/L_FUV. (abridged)
|
Modern electronic design automation (EDA) tools can handle the complexity of
state-of-the-art electronic systems by decomposing them into smaller blocks or
cells, introducing different levels of abstraction and staged design flows.
However, throughout each independently optimised design step, overhead and
inefficiency can accumulate in the resulting overall design.
design-specific optimisation from a more global viewpoint requires more time
due to the larger search space, but has the potential to provide solutions with
improved performance. In this work, a fully-automated, multi-objective (MO) EDA
flow is introduced to address this issue. It specifically tunes drive strength
mapping, preceding physical implementation, through multi-objective
population-based search algorithms. Designs are evaluated with respect to their
power, performance and area (PPA). The proposed approach is aimed at digital
circuit optimisation at the block level, where it is capable of expanding the
design space and offers a set of trade-off solutions for case-specific
utilisation. We have applied the proposed MOEDA framework to ISCAS-85 and EPFL
benchmark circuits using a commercial 65nm standard-cell library. The
experimental results demonstrate how the MOEDA flow enhances the solutions
initially generated by the standard digital flow, achieving a significant
simultaneous improvement in PPA metrics.
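A multi-objective search of this kind maintains the set of non-dominated
(Pareto-optimal) candidates. A minimal sketch of that filtering step, treating
PPA as a tuple of objectives to minimise (toy values, not from the
experiments):

```python
def pareto_front(points):
    """Return the non-dominated points, minimising every objective.
    q dominates p if q is no worse in all objectives and differs from p.
    (Assumes distinct objective tuples, for simplicity of the sketch.)"""
    def dominated(p):
        return any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                   for q in points)
    return [p for p in points if not dominated(p)]

# Toy (power, delay, area) evaluations of four candidate drive-strength mappings:
candidates = [(1.0, 2.0, 3.0), (2.0, 1.0, 3.0), (3.0, 3.0, 3.0), (2.0, 2.0, 2.0)]
front = pareto_front(candidates)
```

The returned front is exactly the "set of trade-off solutions" offered to the
designer: no member can be improved in one PPA objective without worsening
another.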
|
By now, tens of gravitational-wave (GW) events have been detected by the LIGO
and Virgo detectors. These GWs have all been emitted by compact binary
coalescences, for which we have excellent predictive models. However, there
might be other sources for which we do not have reliable models. Some are
expected to exist but to be very rare (e.g., supernovae), while others may be
totally unanticipated. So far, no unmodeled sources have been discovered, but
the lack of models makes the search for such sources much more difficult and
less sensitive. We present here a search for unmodeled GW signals using
semi-supervised machine learning. We apply deep learning and outlier detection
algorithms to labeled spectrograms of GW strain data, and then search for
spectrograms with anomalous patterns in public LIGO data. We searched $\sim
13\%$ of the coincident data from the first two observing runs. No candidates
of GW signals were detected in the data analyzed. We evaluate the sensitivity
of the search using simulated signals, show that the search can detect
spectrograms containing unusual or unexpected GW patterns, and report the
waveforms and amplitudes for which a $50\%$ detection rate is achieved.
|
Haisch and Rueda have recently proposed a model in which the inertia of
charged particles is a consequence of their interaction with the
electromagnetic zero-point field. This model is based on the observation that
in an accelerated frame the momentum distribution of vacuum fluctuations is not
isotropic. We analyze this issue through standard techniques of relativistic
field theory, first by regarding the field A_mu as a classical random field,
and then by making reference to the mass renormalization procedure in Quantum
Electrodynamics and scalar-QED.
|
Pressure-gradient-induced separation of swept and unswept turbulent boundary
layers, based on the DNS studies of Coleman et al. (J. Fluid Mech. 2018 &
2019), has been analyzed for various nonequilibrium effects. The goal is to
isolate the physical processes critical to near-wall flow modeling. The
decomposition of skin friction into contributing physical terms, proposed by
Renard and Deck (J. Fluid Mech. 2016) (short: RD decomposition), affords
several key insights into the near-wall physics of these flows. In the unswept
case, the spatial-growth term (encapsulating nonequilibrium effects) and TKE
production appear to be the dominant contributions to the RD decomposition
in the separated and pressure-gradient zones, but closer inspection reveals
that only the spatial-growth term dominates in the inner layer close to the
separation bubble, implying a strong need to incorporate nonequilibrium
terms in the wall modeling of this case. The comparison of the streamwise RD
decomposition of swept and unswept cases shows that a larger accumulated
Clauser-pressure-gradient parameter history in the latter energizes the outer
dynamics in the APG, leading to diminished separation bubble size in the
unswept case. The spanwise RD decomposition in the swept case indicates that
the downstream spanwise flow largely retains the upstream ZPG characteristics.
This seems to ease the near-wall modeling challenge in the separated region,
especially for basic models with an inherent log-law assumption. Wall-modeled
LES of the swept and unswept cases are then performed using three wall models,
validating many of the modeling implications from the DNS. In particular, the
extension of the RD decomposition to wall models underpins the criticality of
the spatial-growth term close to the separation bubble, and the correspondingly
superior predictions of the PDE wall model due to its accurate capture of
this term.
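For reference, the RD decomposition writes the skin-friction coefficient as a
sum of physically interpretable integrals across the boundary layer;
schematically (standard form of the first two terms; the third,
spatial-growth term collects the remaining streamwise-heterogeneity
contributions):

$$ C_f = \underbrace{\frac{2}{\rho_e U_e^3}\int_0^\infty
\mu \left(\frac{\partial \bar u}{\partial y}\right)^{\!2} dy}_{\text{direct
viscous dissipation}}
+ \underbrace{\frac{2}{\rho_e U_e^3}\int_0^\infty
\left(-\rho\,\overline{u'v'}\right) \frac{\partial \bar u}{\partial y}\,
dy}_{\text{TKE production}}
+ \; C_{f,\mathrm{growth}} . $$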
|
In this paper, the Entropically Damped Artificial Compressibility (EDAC)
formulation of Clausen (2013) is used in the context of the Smoothed Particle
Hydrodynamics (SPH) method for the simulation of incompressible fluids.
Traditionally, weakly-compressible SPH (WCSPH) formulations have employed
artificial compressibility to simulate incompressible fluids. EDAC is an
alternative to the artificial compressibility scheme wherein a pressure
evolution equation is solved in lieu of coupling the fluid density to the
pressure by an equation of state. The method is explicit and is easy to
incorporate into existing SPH solvers using the WCSPH formulation. This is
demonstrated by coupling the EDAC scheme with the recently proposed Transport
Velocity Formulation (TVF) of Adami et al. (2013). The method works for both
internal flows and for flows with a free surface (a drawback of the TVF
scheme). Several benchmark problems are considered to evaluate the proposed
scheme and it is found that the EDAC scheme gives results that are as good or
sometimes better than those produced by the TVF or standard WCSPH. The scheme
is robust, produces smooth pressure distributions, and does not require the
use of an artificial viscosity in the momentum equation, although some
artificial viscosity is beneficial.
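Schematically, the pressure evolution equation solved in EDAC takes the form
(notation ours; see Clausen (2013) for the precise coefficients):

$$ \frac{\partial p}{\partial t} + u_j \frac{\partial p}{\partial x_j}
= -\rho c_s^2\, \frac{\partial u_j}{\partial x_j}
+ \nu_{\mathrm{EDAC}}\, \frac{\partial^2 p}{\partial x_j \partial x_j}, $$

so the pressure is damped diffusively (the entropic damping) rather than tied
to the density through an equation of state.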
|
This review gives a pedagogical introduction to the eigenstate thermalization
hypothesis (ETH), its basis, and its implications for statistical mechanics and
thermodynamics. In the first part, the ETH is introduced as a natural extension
of
ideas from quantum chaos and random matrix theory (RMT). To this end, we
present a brief overview of classical and quantum chaos, as well as RMT and
some of its most important predictions. The latter include the statistics of
energy levels, eigenstate components, and matrix elements of observables.
Building on these, we introduce the ETH and show that it allows one to describe
thermalization in isolated chaotic systems without invoking the notion of an
external bath. We examine numerical evidence of eigenstate thermalization from
studies of many-body lattice systems. We also introduce the concept of a quench
as a means of taking isolated systems out of equilibrium, and discuss results
of numerical experiments on quantum quenches. The second part of the review
explores the implications of quantum chaos and the ETH for thermodynamics.
Basic
thermodynamic relations are derived, including the second law of
thermodynamics, the fundamental thermodynamic relation, fluctuation theorems,
and the Einstein and Onsager relations. In particular, it is shown that quantum
chaos allows one to prove these relations for individual Hamiltonian
eigenstates and thus extend them to arbitrary stationary statistical ensembles.
We then show how one can use these relations to obtain nontrivial universal
energy distributions in continuously driven systems. At the end of the review,
we briefly discuss the relaxation dynamics and description after relaxation of
integrable quantum systems, for which ETH is violated. We introduce the concept
of the generalized Gibbs ensemble, and discuss its connection with ideas of
prethermalization in weakly interacting systems.
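For reference, the ETH ansatz for the matrix elements of an observable $O$ in
the energy eigenbasis reads (standard form):

$$ O_{mn} = O(\bar E)\,\delta_{mn}
+ e^{-S(\bar E)/2}\, f_O(\bar E, \omega)\, R_{mn},
\qquad \bar E = \frac{E_m + E_n}{2}, \quad \omega = E_n - E_m, $$

where $S(\bar E)$ is the thermodynamic entropy, $O(\bar E)$ and $f_O(\bar E,
\omega)$ are smooth functions, and $R_{mn}$ is a random variable with zero mean
and unit variance; the exponentially small off-diagonal elements are what allow
individual eigenstates to act as thermal ensembles.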
|
We introduce a class of branching processes in which the reproduction or
lifetime distribution at a given time depends on the total cumulative number of
individuals who have been born in the population until that time. We focus on a
continuous-time version of these processes, called total-progeny-dependent
birth-and-death processes, and study some of their properties through the
analysis of their fluid (deterministic) approximation. These properties include
the maximum population size, the total progeny size at extinction, the time to
reach the maximum population size, and the time until extinction. As the fluid
approach does not allow us to approximate the time until extinction directly,
we propose several methods to complement this approach. We also use the fluid
approach to study the behaviour of the processes as we increase the magnitude
of the individual birth rate.
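A minimal stochastic sketch of such a process (the rate functions below are
hypothetical choices for illustration, not the paper's model): the per-capita
birth rate depends on the total progeny born so far, so the process starts
supercritical and eventually becomes subcritical and dies out.

```python
import random

def simulate(birth_rate_fn, death_rate, z0=1, seed=0, max_progeny=100_000):
    """Gillespie simulation of a birth-and-death process whose per-capita
    birth rate depends on the total progeny (cumulative births) so far."""
    rng = random.Random(seed)
    alive, progeny, t = z0, z0, 0.0
    max_pop = alive
    while alive > 0 and progeny < max_progeny:
        lam = alive * birth_rate_fn(progeny)  # total birth rate
        mu = alive * death_rate               # total death rate
        t += rng.expovariate(lam + mu)        # time to next event
        if rng.random() < lam / (lam + mu):
            alive += 1
            progeny += 1
        else:
            alive -= 1
        max_pop = max(max_pop, alive)
    return {"extinct": alive == 0, "extinction_time": t,
            "total_progeny": progeny, "max_population": max_pop}

# Birth rate 2/(1 + 0.01 n): supercritical at first, subcritical once the
# total progeny n exceeds 100, so extinction is (almost surely) certain.
out = simulate(lambda n: 2.0 / (1.0 + 0.01 * n), death_rate=1.0)
```

Averaging such runs over many seeds gives Monte Carlo estimates of the
quantities studied above (maximum population size, total progeny at extinction,
time until extinction) against which a fluid approximation can be checked.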
|
We investigate the Galois structure of algebraic units in cyclic extensions
of number fields and thereby obtain strong new results on the existence of
independent Minkowski $S$-units.
|
This paper investigates the influence of two graft transformations on the
distance spectral radius of connected uniform hypergraphs. Specifically, we
study $k$-uniform hypertrees with given size, maximum degree and number of
vertices of maximum degree, and give the structure of such hypergraph with
maximum distance spectral radius.
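To make the quantity concrete: the distance spectral radius is the largest
eigenvalue of the matrix of pairwise shortest-path distances. A self-contained
sketch for an ordinary graph (a path on four vertices; hypergraph distance
matrices are built analogously from their shortest-path distances):

```python
def distance_matrix(adj):
    """All-pairs shortest-path distances of an unweighted connected graph,
    given as adjacency lists, via BFS from every vertex."""
    n = len(adj)
    D = [[0] * n for _ in range(n)]
    for s in range(n):
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        for v, d in dist.items():
            D[s][v] = d
    return D

def spectral_radius(M, iters=500):
    """Largest eigenvalue of a nonnegative symmetric matrix, by power iteration."""
    n = len(M)
    x = [1.0] * n
    est = 0.0
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        est = max(abs(v) for v in y)
        x = [v / est for v in y]
    return est

# Path graph 0-1-2-3; its distance spectral radius is 2 + sqrt(10).
adj = [[1], [0, 2], [1, 3], [2]]
rho = spectral_radius(distance_matrix(adj))
```

For the path on four vertices the distance matrix is small enough to solve by
hand, giving the eigenvalue $2+\sqrt{10} \approx 5.162$, which the power
iteration reproduces.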
|
At present, artificial intelligence in the form of machine learning is making
impressive progress, especially in the field of deep learning (DL) [1]. Deep
learning algorithms have been inspired from the beginning by nature,
specifically by the human brain, in spite of our incomplete knowledge of brain
function. Learning from nature is a two-way process, as discussed in
[2][3][4]: computing learns from neuroscience, while neuroscience is quickly
adopting information-processing models. The question is what the inspiration
from computational nature can contribute to deep learning at this stage of
development, and how much models and experiments in machine learning can
motivate, justify, and lead research in neuroscience and cognitive science, as
well as practical applications of artificial intelligence.
|
Atmospheric plasma is regarded as an effective method for surface treatment
because it can shorten the process time and does not require expensive vacuum
apparatus. The performance of non-transferred plasma torches depends
significantly on the characteristics of the jet flow out of the nozzle. To
achieve high torch performance, the maximum discharge velocity near the
annular gap in the torch should be maintained. In addition, a forced swirl is
produced to obtain a flow shape that concentrates the plasma at the center of
the gas flow. A numerical analysis of two different mathematical models used
to simulate the plasma characteristics inside an atmospheric plasma torch is
carried out. A qualitative comparison is made in this study to test the
accuracy of the predictions of these two different models of an atmospheric
plasma torch. Numerical investigations examine the influence of the different
model assumptions on the resulting plasma characteristics. Significant
variations in the results, in terms of plasma velocity and temperature, are
observed. These variations will influence the subsequent particle dynamics in
the thermal spraying process. The uniformity of the plasma distribution is
investigated. For analyzing the swirl effects in the plenum chamber and the
flow distribution, the finite volume method (FVM) and the SIMPLE algorithm are
used to solve the governing equations.
|
Finite element simulations are an enticing tool to evaluate heart valve
function in healthy and diseased patients; however, patient-specific
simulations derived from 3D echocardiography are hampered by several technical
challenges. In this work, we present an open-source method to enforce matching
between finite element simulations and in vivo image-derived heart valve
geometry in the absence of patient-specific material properties, leaflet
thickness, and chordae tendineae structures. We evaluate FEBio Finite Element
Simulations with Shape Enforcement (FINESSE) using three synthetic test cases
covering a wide range of model complexity. Our results suggest that FINESSE can
be used to not only enforce finite element simulations to match an
image-derived surface, but to also estimate the first principal leaflet strains
within +/- 0.03 strain. Key FINESSE considerations include: (i) appropriately
defining the user-defined penalty, (ii) omitting the leaflet commissures to
improve simulation convergence, and (iii) emulating the chordae tendineae
behavior via prescribed leaflet free edge motion or a chordae emulating force.
We then use FINESSE to estimate the in vivo valve behavior and leaflet strains
for three pediatric patients. In all three cases, FINESSE successfully matched
the target surface with median errors similar to or less than the smallest
voxel dimension. Further analysis revealed valve-specific findings, such as the
tricuspid valve leaflet strains of a 2-day-old patient with HLHS being larger
than those of two 13-year-old patients. The development of this open-source
pipeline will enable future studies to begin linking in vivo leaflet mechanics
with patient outcomes.
|
Resonant productions of the first generation scalar and vector diquarks at
high energy hadron-hadron (pp), lepton-hadron (ep) and lepton-lepton (e+e-)
colliders are investigated. Taking into account the hadronic component of the
photon, diquarks can be produced resonantly in the lepton-hadron and
lepton-lepton collisions. Production rates, decay widths and signatures of
diquarks are discussed using the general, SU(3)_{C} x SU(2)_{W} x U(1)_{Y}
invariant, effective Lagrangian. The corresponding dijet backgrounds are
examined in the invariant mass regions of interest. The attainable mass limits
and couplings are obtained for the diquarks that can be produced in hadron
collisions and in resolved-photon processes. It is shown that a hadron
collider with center-of-mass energy sqrt(s)=14 TeV will be able to discover
scalar and vector diquarks with masses up to m_{DQ}=9 TeV for a
quark-diquark-quark coupling alpha_{DQ}=0.1. Relatively lighter diquarks can be
probed in ep and e+e- collisions in a cleaner environment.
|
We have performed a density functional study of fifteen different structural
models of the Si(557)-Au surface reconstruction. Here we present a brief
summary of the main structural trends obtained for the more favourable models,
focusing afterwards on a detailed description of the atomic structure,
electronic properties, and simulated STM images of the most stable model
predicted by our calculations. This structure is in very good agreement with
that recently proposed from X-ray diffraction measurements by Robinson et al.
[Phys. Rev. Lett. 88, 096194 (2002)].
|
A new perspective of the Green's function in a boundary value problem as the
only eigenstate in an auxiliary formulation is introduced. In this treatment,
the Green's function can be perceived as a defect state in the presence of a
$\delta$-function potential, the height of which depends on the Green's
function itself. This approach is illustrated in one-dimensional and
two-dimensional Helmholtz equation problems, with an emphasis on systems that
are open and have a non-Hermitian potential. We then draw an analogy between
the Green's function obtained this way and a chiral edge state circumventing a
defect in a topological lattice, which sheds light on the local minimum of the
Green's function at the source position.
|
This paper aims to efficiently enable large language models (LLMs) to use
external knowledge and goal guidance in conversational recommender system (CRS)
tasks. Advanced LLMs (e.g., ChatGPT) are limited in domain-specific CRS tasks:
they struggle to 1) generate grounded responses with recommendation-oriented
knowledge, or 2) proactively lead the conversations through different dialogue
goals. In
this work, we first analyze those limitations through a comprehensive
evaluation, showing the necessity of external knowledge and goal guidance which
contribute significantly to the recommendation accuracy and language quality.
In light of this finding, we propose a novel ChatCRS framework to decompose the
complex CRS task into several sub-tasks through the implementation of 1) a
knowledge retrieval agent using a tool-augmented approach to reason over
external Knowledge Bases and 2) a goal-planning agent for dialogue goal
prediction. Experimental results on two multi-goal CRS datasets reveal that
ChatCRS sets new state-of-the-art benchmarks, improving the language-quality
metrics of informativeness by 17% and proactivity by 27%, and achieving a
tenfold enhancement in recommendation accuracy.
|
In this paper we present a queueing network approach to the problem of
routing and rebalancing a fleet of self-driving vehicles providing on-demand
mobility within a capacitated road network. We refer to such systems as
autonomous mobility-on-demand systems, or AMoD. We first cast an AMoD system
into a closed, multi-class BCMP queueing network model. Second, we present
analysis tools that allow the characterization of performance metrics for a
given routing policy, in terms, e.g., of vehicle availabilities, and first and
second order moments of vehicle throughput. Third, we propose a scalable method
for the synthesis of routing policies, with performance guarantees in the limit
of large fleet sizes. Finally, we validate our theoretical results on a case
study of New York City. Collectively, this paper provides a unifying framework
for the analysis and control of AMoD systems, which subsumes earlier Jackson
and network flow models, provides a broad set of modeling options (e.g.,
the inclusion of road capacities and general travel time distributions), and
allows the analysis of second and higher-order moments for the performance
metrics.
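The analysis tools described above can be illustrated on a much simpler model. The sketch below runs exact mean value analysis (MVA) for a single-class closed queueing network, a drastically simplified stand-in for the paper's multi-class BCMP model; the station parameters are invented for illustration.

```python
# Exact mean value analysis (MVA) for a single-class closed queueing
# network -- a simplified stand-in for the paper's multi-class BCMP model.
# All station parameters below are illustrative, not taken from the paper.

def mva(visit_ratios, service_times, n_customers):
    """Return throughput and mean queue lengths at population n_customers."""
    k = len(visit_ratios)
    queue = [0.0] * k                      # Q_i(0) = 0
    throughput = 0.0
    for n in range(1, n_customers + 1):
        # Mean response time at each station (arrival theorem)
        resp = [service_times[i] * (1.0 + queue[i]) for i in range(k)]
        # System throughput at population n
        throughput = n / sum(visit_ratios[i] * resp[i] for i in range(k))
        # Updated mean queue lengths (Little's law per station)
        queue = [throughput * visit_ratios[i] * resp[i] for i in range(k)]
    return throughput, queue

# Two "road" stations and one pick-up station in a toy AMoD-like network
X, Q = mva(visit_ratios=[1.0, 0.5, 0.5], service_times=[1.0, 2.0, 0.5],
           n_customers=20)
print(X)   # approaches the bottleneck-limited throughput as n grows
```

As the population grows, the returned throughput approaches the bottleneck-limited value, mirroring the large-fleet limit in which the paper's performance guarantees hold.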
|
We study the dynamics of a star orbiting a merging black-hole binary (BHB) in
a coplanar triple configuration. During the BHB's orbital decay, the system can
be driven across the apsidal precession resonance, where the apsidal precession
rate of the stellar orbit matches that of the inner BHB. As a result, the
system gets captured into a state of resonance advection until the merger of
the BHB, leading to an extreme eccentricity growth of the stellar orbit. This
resonance advection occurs when the inner binary has a non-zero eccentricity
and unequal masses. The resonant driving of the stellar eccentricity can
significantly alter the hardening rate of the inner BHB, and produce
observational signatures to uncover the presence of nearby merging or merged
BHBs.
|
Euclidean quantum measure in Regge calculus with independent area tensors is
considered using the example of a Regge manifold of simple structure. We go
over to integrations along certain contours in the hyperplane of complex
connection variables. Discrete connection and curvature on classical solutions
of the equations of motion are not, strictly speaking, genuine connection and
curvature, but more general quantities and, therefore, these do not appear as
arguments of a function to be averaged, but are the integration (dummy)
variables. We argue that upon integrating out the latter the resulting measure
can be well-defined on the physical hypersurface (for the area tensors
corresponding to certain edge vectors, i.e. to certain metric) as positive and
having exponential cutoff at large areas on condition that we confine ourselves
to configurations which do not pass through degenerate metrics.
|
In this paper, we have worked out a pseudo-two-dimensional (2D) analytical
model for the surface potential and drain current of a long-channel p-type Dual
Material Gate (DMG) Gate All-Around (GAA) nanowire Tunneling Field Effect
Transistor (TFET). The model incorporates the effects of drain voltage, gate
metal work functions, oxide thickness, and silicon nanowire radius. The model
does not assume a fully depleted channel. With the help of this model we have
demonstrated the accumulation of charge at the interface of the two gates. The
accuracy of the model is tested using the 3D device simulator Silvaco Atlas.
|
An adjoint pair of contravariant functors between abelian categories can be
extended to the adjoint pair of their derived functors in the associated
derived categories. We describe the reflexive complexes and interpret the
achieved results in terms of objects of the initial abelian categories. In
particular we prove that, for functors of any finite cohomological dimension,
the objects of the initial abelian categories which are reflexive as stalk
complexes form the largest class where a Cotilting Theorem in the sense of
Colby and Fuller works.
|
Morphological evolution of expanding shells of fast-mode magnetohydrodynamic
(MHD) waves through an inhomogeneous ISM is investigated in order to
qualitatively understand the complicated morphology of shell-type supernova
remnants (SNR). Interstellar clouds with high Alfv\'en velocity act as concave
lenses to diverge the MHD waves, while those with slow Alfv\'en velocity act as
convex lenses to converge the waves to the focal points. By combination of
various types of clouds and fluctuations with different Alfv\'en velocities,
sizes, or wavelengths, the MHD-wave shells attain various morphological
structures, exhibiting filaments, arcs, loops, holes, and focal strings,
mimicking old and deformed SNRs.
|
In this paper, we study biharmonic hypersurfaces in a space form
$\bar{M}^{n+1}(c)$ with constant sectional curvature $c$. We show that
biharmonic hypersurfaces $M^{n}$ with at most three distinct principal
curvatures in $\bar{M}^{n+1}(c)$ have constant mean curvature. We also obtain
the full classification of biharmonic hypersurfaces with at most three distinct
principal curvatures in a space form $\bar{M}^{n+1}(c)$ of arbitrary dimension.
|
Contrastive learning has shown outstanding performances in both supervised
and unsupervised learning, and has recently been introduced to solve weakly
supervised learning problems such as semi-supervised learning and noisy label
learning. Despite the empirical evidence showing that semi-supervised labels
improve the representations of contrastive learning, it remains unknown if
noisy supervised information can be directly used in training instead of after
manual denoising. Therefore, to explore the mechanical differences between
semi-supervised and noisy-labeled information in helping contrastive learning,
we establish a unified theoretical framework of contrastive learning under weak
supervision. Specifically, we investigate the most intuitive paradigm of
jointly training supervised and unsupervised contrastive losses. By translating
the weakly supervised information into a similarity graph under the framework
of spectral clustering based on the posterior probability of weak labels, we
establish the downstream classification error bound. We prove that
semi-supervised labels improve the downstream error bound whereas noisy labels
have limited effects under such a paradigm. Our theoretical findings here
provide new insights for the community to rethink the role of weak supervision
in helping contrastive learning.
|
Mixture models trained via EM are among the simplest, most widely used and
well understood latent variable models in the machine learning literature.
Surprisingly, these models have been hardly explored in text generation
applications such as machine translation. In principle, they provide a latent
variable to control generation and produce a diverse set of hypotheses. In
practice, however, mixture models are prone to degeneracies---often only one
component gets trained or the latent variable is simply ignored. We find that
disabling dropout noise in responsibility computation is critical to successful
training. In addition, the design choices of parameterization, prior
distribution, hard versus soft EM and online versus offline assignment can
dramatically affect model performance. We develop an evaluation protocol to
assess both quality and diversity of generations against multiple references,
and provide an extensive empirical study of several mixture model variants. Our
analysis shows that certain types of mixture models are more robust and offer
the best trade-off between translation quality and diversity compared to
variational models and diverse decoding approaches.\footnote{Code to reproduce
the results in this paper is available at
\url{https://github.com/pytorch/fairseq}}
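The hard-versus-soft EM design choice mentioned above can be made concrete on a toy model. The sketch below fits a 1-D two-component Gaussian mixture, not the paper's sequence-level mixture, and only illustrates how the responsibility computation differs between the two variants.

```python
import numpy as np

# Hard vs. soft EM on a toy 1-D two-component Gaussian mixture.  This only
# illustrates the responsibility computation discussed in the paper, not
# the paper's sequence-to-sequence mixture model.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

def em(x, hard, iters=50):
    mu = np.array([-1.0, 1.0])           # crude initialisation
    for _ in range(iters):
        # E-step: responsibilities under equal-variance, equal-prior components
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
        resp = np.exp(logp - logp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        if hard:                          # hard EM: winner-takes-all assignment
            resp = (resp == resp.max(axis=1, keepdims=True)).astype(float)
        # M-step: responsibility-weighted component means
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return np.sort(mu)

print(em(x, hard=True))    # both variants recover means near (-3, 3)
print(em(x, hard=False))
```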
|
The first galaxies in the Universe are built up where cold dark matter (CDM)
forms large scale filamentary structure. Although the galaxies are expected to
emit numerous Lya photons, they are surrounded by plentiful neutral hydrogen
with a typical optical depth for Lya of ~10^5 (HI halos) before the era of
cosmological reionization. The HI halo almost follows the cosmological Hubble
expansion with some anisotropic corrections around the galaxy because of the
gravitational attraction by the underlying CDM filament. In this paper, we
investigate the detectability of the Lya emissions from the first galaxies,
examining their dependence on viewing angles. Solving the Lya line transfer
problem in an anisotropically expanding HI halo, we show that the escape
probability from the HI halo is largest in the direction along the filament
axis. If the Lya source is observed with a narrow-band filter, the difference
in apparent Lya line luminosities among viewing angles can be a factor of > 40
in an extreme case. Furthermore, we evaluate the predicted physical features of
the Lya sources and the flux magnification by the gravitational lensing effect
due to clusters of galaxies along the filament. We conclude that, by using
next-generation space telescopes like the JWST, the Lya emissions from first
galaxies whose CDM filament axes point almost toward us can be detected with
S/N > 10.
|
The topic of semantic segmentation has witnessed considerable progress due to
the powerful features learned by convolutional neural networks (CNNs). The
current leading approaches for semantic segmentation exploit shape information
by extracting CNN features from masked image regions. This strategy introduces
artificial boundaries on the images and may impact the quality of the extracted
features. Besides, operating on the raw image domain requires computing
thousands of network passes on a single image, which is time-consuming. In this
paper, we propose to exploit shape information via masking convolutional
features. The proposal segments (e.g., super-pixels) are treated as masks on
the convolutional feature maps. The CNN features of segments are directly
masked out from these maps and used to train classifiers for recognition. We
further propose a joint method to handle objects and "stuff" (e.g., grass, sky,
water) in the same framework. State-of-the-art results are demonstrated on
benchmarks of PASCAL VOC and new PASCAL-CONTEXT, with a compelling
computational speed.
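The central operation, masking shared convolutional feature maps with a proposal segment instead of masking the raw image, can be sketched in a few lines. Shapes and values are illustrative; a real system would project each segment from image coordinates onto the coarser feature-map grid.

```python
import numpy as np

# Sketch of convolutional feature masking: instead of masking the raw
# image, a proposal segment is applied as a binary mask on the shared
# feature maps, and the surviving activations are pooled into one vector.
# Shapes and values are illustrative only.

def masked_segment_feature(feature_maps, segment_mask):
    """feature_maps: (C, H, W) array from one forward pass of the CNN.
    segment_mask: (H, W) boolean mask of one proposal segment.
    Returns a C-dim descriptor by average-pooling inside the mask."""
    masked = feature_maps * segment_mask[None, :, :]
    area = max(segment_mask.sum(), 1)
    return masked.sum(axis=(1, 2)) / area

feats = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # a 2x2 "super-pixel" segment
print(masked_segment_feature(feats, mask))  # per-channel average in the segment
```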
|
Fermi's golden rule defines the transition rate between weakly coupled states
and can thus be used to describe a multitude of molecular processes including
electron-transfer reactions and light-matter interaction. However, it can only
be calculated if the wave functions of all internal states are known, which is
typically not the case in molecular systems. Marcus theory provides a
closed-form expression for the rate constant, which is a classical limit of the
golden rule, and indicates the existence of a normal regime and an inverted
regime. Semiclassical instanton theory presents a more accurate approximation
to the golden-rule rate including nuclear quantum effects such as tunnelling,
which has so far been applicable to complex anharmonic systems in the normal
regime only. In this paper we extend the instanton method to the inverted
regime and study the properties of the periodic orbit, which describes the
tunnelling mechanism via two imaginary-time trajectories, one of which now
travels in negative imaginary time. It is known that tunnelling is particularly
prevalent in the inverted regime, even at room temperature, and thus this
method is expected to be useful in studying a wide range of molecular
transitions occurring in this regime.
|
The Cooper pairs in superconducting condensates are shown to acquire a
temperature-dependent dc magnetic moment under the effect of the circularly
polarized electromagnetic radiation. The mechanisms of this inverse Faraday
effect are investigated within the simplest version of the phenomenological
dynamic theory for superfluids, namely, the time-dependent Ginzburg-Landau (GL)
model. The light-induced magnetic moment is shown to be strongly affected by
the nondissipative oscillatory contribution to the superconducting order
parameter dynamics which appears due to the nonzero imaginary part of the GL
relaxation time. The relevance of the latter quantity to the Hall effect in
superconducting state allows us to establish the connection between the direct and
inverse Faraday phenomena.
|
We discuss a special class of quantum gravity phenomena that occur on the
scale of the Universe as a whole at any stage of its evolution. These phenomena
are a direct consequence of the zero rest mass of gravitons, conformal
non-invariance of the graviton field, and one-loop finiteness of quantum
gravity. The effects are due to graviton-ghost condensates arising from the
interference of quantum coherent states. Each of coherent states is a state of
gravitons and ghosts of a wavelength of the order of the horizon scale and of
different occupation numbers. The state vector of the Universe is a coherent
superposition of vectors of different occupation numbers. To substantiate the
reliability of macroscopic quantum effects, the formalism of one-loop quantum
gravity is discussed in detail. The theory is constructed as follows:
Faddeev-Popov path integral in Hamilton gauge -> factorization of classical and
quantum variables, allowing the existence of a self-consistent system of
equations for gravitons, ghosts and macroscopic geometry -> transition to the
one-loop approximation. The ghost sector corresponding to the Hamilton gauge
ensures one-loop finiteness of the theory off the mass shell. The
Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) chain for the spectral function of
gravitons renormalized by ghosts is used to build a self-consistent theory of
gravitons in the isotropic Universe. We found three exact solutions of the
equations, consisting of BBGKY chain and macroscopic Einstein's equations. The
solutions describe virtual graviton, ghost, and instanton condensates and are
reproduced at the level of exact solutions for field operators and state
vectors. Each exact solution corresponds to a certain phase state of
graviton-ghost substratum. We establish conditions under which continuous
quantum-gravity phase transitions occur between different phases of the
graviton-ghost condensate.
|
We have derived whole-sky CMB polarization maps from the WMAP 5 year
polarization data, using the Harmonic Internal Linear Combination (HILC)
method. Our HILC method incorporates spatial variability of linear weights in a
natural way and yields continuous linear weights over the entire sky. To
estimate the power spectrum of the HILC maps, we have derived an unbiased
estimator, which is similar to the WMAP team's cross power estimator, but in a
more convenient form for HILC maps. From our CMB polarization map, we have
obtained TE correlation and E mode power spectra without applying any mask.
They are similar to the WMAP team's estimation and consistent with the WMAP
best-fit $\Lambda$CDM model. Foreground reduction by the HILC method is more
effective for high-resolution and low-noise data. Hence, our HILC method will
enable effective foreground reduction in polarization data from the Planck
surveyor.
|
Electrospinning is a modern alternative to the expanded method for producing
porous polytetrafluoroethylene membranes. High strength and relative
elongation, as well as the ability to maintain these properties for a long time
when exposed to aggressive media at high temperatures, determine the
application scope of the electrospun polytetrafluoroethylene membranes. Herein,
we report the effect of polytetrafluoroethylene suspension content in the
spinning solution, heat treatment mode (quenching and annealing) and aggressive
media at high temperatures on the tensile strength and relative elongation of
electrospun polytetrafluoroethylene membranes. Membranes fabricated from
spinning solutions with 50 to 60 wt % polytetrafluoroethylene suspension
content that underwent quenching were characterized by the highest tensile
strength and relative elongation. Electrospun polytetrafluoroethylene membranes
also demonstrated high chemical resistance to concentrated mineral acids and
alkalis, a bipolar aprotic solvent, engine oil and deionized water at 100 deg
for 48 hours.
|
Forty years ago, Wiesner pointed out that quantum mechanics raises the
striking possibility of money that cannot be counterfeited according to the
laws of physics. We propose the first quantum money scheme that is (1)
public-key, meaning that anyone can verify a banknote as genuine, not only the
bank that printed it, and (2) cryptographically secure, under a "classical"
hardness assumption that has nothing to do with quantum money. Our scheme is
based on hidden subspaces, encoded as the zero-sets of random multivariate
polynomials. A main technical advance is to show that the "black-box" version
of our scheme, where the polynomials are replaced by classical oracles, is
unconditionally secure. Previously, such a result had only been known relative
to a quantum oracle (and even there, the proof was never published). Even in
Wiesner's original setting -- quantum money that can only be verified by the
bank -- we are able to use our techniques to patch a major security hole in
Wiesner's scheme. We give the first private-key quantum money scheme that
allows unlimited verifications and that remains unconditionally secure, even if
the counterfeiter can interact adaptively with the bank. Our money scheme is
simpler than previous public-key quantum money schemes, including a knot-based
scheme of Farhi et al. The verifier needs to perform only two tests, one in the
standard basis and one in the Hadamard basis -- matching the original intuition
for quantum money, based on the existence of complementary observables. Our
security proofs use a new variant of Ambainis's quantum adversary method, and
several other tools that might be of independent interest.
|
Roof type is one of the most critical building characteristics for wind
vulnerability modeling. It is also the most frequently missing building feature
from publicly available databases. An automatic roof classification framework
is developed herein to generate high-resolution roof-type data using machine
learning. A Convolutional Neural Network (CNN) was trained to classify roof
types using building-level satellite images. The model achieved an F1 score of
0.96 on predicting roof types for 1,000 test buildings. The CNN model was then
used to predict roof types for 161,772 single-family houses in New Hanover
County, NC, and Miami-Dade County, FL. The distribution of roof type in city
and census tract scales was presented. A high variance was observed in the
dominant roof type among census tracts. To improve the completeness of the
roof-type data, imputation algorithms were developed to populate missing roof
data due to low-quality images, using critical building attributes and
neighborhood-level roof characteristics.
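A minimal sketch of the neighborhood-level imputation step described above: a building whose image was too poor to classify inherits the modal roof type of its census tract. The field names and data are hypothetical, and the actual algorithms also use critical building attributes.

```python
from collections import Counter

# Minimal sketch of neighbourhood-level imputation: a building whose
# satellite image was too poor to classify inherits the most common roof
# type in its census tract.  Field names and data are hypothetical.

def impute_roof_types(buildings):
    """buildings: list of dicts with 'tract' and 'roof' (None if missing).
    Missing entries are filled with their tract's modal roof type."""
    mode_by_tract = {}
    for b in buildings:
        if b["roof"] is not None:
            mode_by_tract.setdefault(b["tract"], Counter())[b["roof"]] += 1
    for b in buildings:
        if b["roof"] is None and b["tract"] in mode_by_tract:
            b["roof"] = mode_by_tract[b["tract"]].most_common(1)[0][0]
    return buildings

sample = [
    {"tract": "A", "roof": "gable"}, {"tract": "A", "roof": "gable"},
    {"tract": "A", "roof": "hip"},   {"tract": "A", "roof": None},
]
print(impute_roof_types(sample)[-1]["roof"])   # -> gable
```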
|
During a typical silo discharge, the material flow rate is determined by the
contact forces between the grains. Here, we report an original study concerning
the discharge of a two-dimensional silo filled with repelling magnetic grains.
This non-contact interaction leads to a different dynamics from the one
observed with conventional granular materials. We found that, although the flow
rate dependence on the aperture size follows roughly the power-law with an
exponent $3/2$ found in non-repulsive systems, the density and velocity
profiles during the discharge are totally different. New phenomena must be
taken into account. Despite the absence of contacts, clogging and intermittence
were also observed for apertures smaller than a critical size determined by the
effective radius of the repulsive grains.
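The quoted aperture-size dependence, a flow rate scaling roughly as the 3/2 power, can be checked on data with a log-log least-squares fit; the sketch below does this on synthetic data generated with the ideal exponent, purely for illustration.

```python
import math

# Checking a power law Q ~ A**(3/2) on synthetic discharge data via a
# log-log least-squares fit.  The data are synthetic; the 3/2 exponent is
# the aperture-size scaling quoted in the abstract.

apertures = [2.0, 3.0, 4.0, 6.0, 8.0]
flow = [0.7 * a ** 1.5 for a in apertures]     # ideal 3/2 scaling

lx = [math.log(a) for a in apertures]
ly = [math.log(q) for q in flow]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
den = sum((a - mx) ** 2 for a in lx)
slope = num / den
print(slope)    # -> 1.5 (the fitted exponent)
```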
|
In a class of supergravity models, the gluino and photino are massless at
tree level and receive small masses through radiative corrections. In such
models, one expects a gluino-gluon bound state, the $R_0$, to have a mass of
between 1.0 and 2.2 GeV and a lifetime between $10^{-10}$ and $10^{-6}$
seconds. Applying perturbative QCD methods (whose validity we discuss), we
calculate the production cross sections of $R_0$'s in $e-p$, $\pi-p$, $K-p$,
$\overline{p}-p$ and $p-p$ collisions. Signatures are also discussed.
|
Sterile neutrinos with masses at the $\mathrm{keV}$ scale and mixing to the
active neutrinos offer an elegant explanation of the observed dark matter (DM)
density. However, the very same mixing inevitably leads to radiative photon
emission and the non-observation of such peaked $X$-ray lines rules out this
minimal sterile neutrino DM hypothesis. We show that in the context of the
Standard Model effective field theory with sterile neutrinos ($\nu$SMEFT),
higher dimensional operators can produce sterile neutrino DM in a broad range
of parameter space. In particular, $\nu$SMEFT interactions can open the large
mixing parameter space due to their destructive interference, through operator
mixing or matching, in the $X$-ray emission. We also find that, even in the
zero mixing limit, the DM density can always be explained by $\nu$SMEFT
operators. The testability of the studied $\nu$SMEFT operators in searches for
electric dipole moments, neutrinoless double beta decay, and pion decay
measurements is discussed.
|
In this paper, some monotonicity and concavity results of several functions
involving the psi and polygamma functions are proved, and then some known
inequalities are extended and generalized.
|
Like the ordinary power spectrum, higher-order spectra (HOS) describe signal
properties that are invariant under translations in time. Unlike the power
spectrum, HOS retain phase information from which details of the signal
waveform can be recovered. Here we consider the problem of identifying multiple
unknown transient waveforms which recur within an ensemble of records at
mutually random delays. We develop a new technique for recovering filters from
HOS whose performance in waveform detection approaches that of an optimal
matched filter, requiring no prior information about the waveforms. Unlike
previous techniques of signal identification through HOS, the method applies
equally well to signals with deterministic and non-deterministic HOS. In the
non-deterministic case, it yields an additive decomposition, introducing a new
approach to the separation of component processes within non-Gaussian signals
having non-deterministic higher moments. We show a close relationship to
minimum-entropy blind deconvolution (MED), which the present technique improves
upon by avoiding the need for numerical optimization, while requiring only
numerically stable operations of time shift, element-wise multiplication and
averaging, making it particularly suited for real-time applications. The
application of HOS decomposition to real-world signals is demonstrated with
blind denoising, detection and classification of normal and abnormal heartbeats
in electrocardiograms.
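The numerically stable operations listed above (time shift, element-wise multiplication and averaging) are exactly what an estimator of the third-order moment function needs; a minimal, purely illustrative sketch:

```python
import numpy as np

# The operations the abstract highlights -- time shifting, element-wise
# multiplication and averaging -- suffice to estimate the third-order
# moment function
#     m3(t1, t2) = mean( x[n] * x[n + t1] * x[n + t2] ),
# the time-domain counterpart of the bispectrum.  Purely illustrative.

def third_moment(x, t1, t2):
    """Estimate m3(t1, t2) for non-negative integer shifts t1, t2."""
    n = len(x) - max(t1, t2)
    return float(np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n]))

x = np.array([1.0, -1.0, 2.0, 0.0, 1.0, -2.0])
print(third_moment(x, 0, 0))   # at zero lag this is the mean of x**3
```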
|
We analyze the breaking of Lorentz invariance in a 3D model of fermion fields
self-coupled through four-fermion interactions. The low-energy limit of the
theory contains various sub-models which are similar to those used in the study
of the graphene or in the description of irrational charge fractionalization.
|
The new concept of numerical smoothness is applied to RKDG methods for
scalar nonlinear conservation laws. The main result is an a posteriori error
estimate for the RKDG methods of arbitrary order in space and time, with
optimal convergence rate. In this paper, the case of smooth solutions is the
focus point. However, the error analysis framework is prepared to deal with
discontinuous solutions in the future.
|
(abridged) Hard X-ray surveys performed by the INTEGRAL satellite have
discovered a conspicuous fraction (up to 30%) of unidentified objects among the
detected sources. Here we continue our identification program by selecting
probable optical candidates using positional cross-correlation with soft X-ray,
radio, and/or optical archives, and performing optical spectroscopy on them. As
a result, we identified or more accurately characterized 44 counterparts of
INTEGRAL sources: 32 active galactic nuclei, with redshift 0.019 < z < 0.6058,
6 cataclysmic variables (CVs), 5 high-mass X-ray binaries (2 of which in the
Small Magellanic Cloud), and 1 low-mass X-ray binary. This was achieved by
using 7 telescopes of various sizes and archival data from two online
spectroscopic surveys. The main physical parameters of these hard X-ray sources
were also determined using the available multiwavelength information. AGNs are
the most abundant population among hard X-ray objects, and our results confirm
this tendency when optical spectroscopy is used as an identification tool. The
deeper sensitivity of recent INTEGRAL surveys enables one to begin detecting
hard X-ray emission above 20 keV from sources such as LINER-type AGNs and
non-magnetic CVs.
|
An attractive technique to explore for super-high-energy cosmic neutrino
fluxes, via deep underwater acoustic detection, is discussed. Acoustic signals
emitted by neutrino-induced cascades and detected at large distances (10-50 km)
from the cascades are considered. It is argued that an existing hydroacoustic array of
2400 hydrophones, which is available in the Great Ocean near Kamchatka
Peninsula, could be used as a base for an exploratory acoustic neutrino
telescope SADCO (Sea Acoustic Detector of Cosmic Objects). The detection volume
for registration of cascades with energies in the range of $10^{20}$-$10^{21}$ eV is
estimated to be hundreds of cubic kilometers. Some models of extremely high
energy elementary particle production in the Universe (for example the
topological defect model) may be examined by such a detector. Tests of this
technique are hoped for within a year.
|
The apex region of a capped (5,5) carbon nanotube (CNT) has been modelled
with the DFT package ONETEP, using boundary conditions provided by a classical
calculation with a conducting surface in place of the CNT. Results from the DFT
solution include the Fermi level and the physical distribution and energies of
individual Kohn-Sham orbitals for the CNT tip. Application of an external
electric field changes the orbital number of the highest occupied molecular
orbital (the HOMO) and consequently changes the distribution of the HOMO on the
CNT.
|
This paper presents an attempted proof of the Riemann Hypothesis (RH). Let
$T>10^{10}$ be arbitrarily large. Let the region $\Omega_T=\Big\{z=x+i y\ \Big|\
\frac{1}{2}<x<1, \ 0<y<T\Big\}.$ There is a finite number $N_T$ of roots of
$\zeta(z)$ in $\Omega_T$. The aim of the paper is to prove that $N_T=0$.
Suppose that $N_T>0$. Then there exists at least one root $\rho=\frac{1}{2}+{\bf
u}+i\gamma$ whose real part is greater than or equal to the real part of all the
other roots in $\Omega_T$. Let $v\geq \frac{3}{2}$. Let $\varepsilon>0$ be
arbitrarily small. We prove that $f(z)=\frac{\zeta'(z)}{\zeta(z)}$ is analytic
in the open disk $\Omega_\varepsilon=\Big\{z\ \Big|\
\Big|z-\Big(\rho+\frac{\varepsilon}{2}+v\Big)\Big| < v\Big\}.$ Let
$s=\rho+\varepsilon$. We prove, from the Taylor series of $\zeta(s)$, that
$f(s)\sim \frac{1}{\varepsilon}\rightarrow \infty$ when $\varepsilon\rightarrow
0$, and that, through the representation of $f(s)$ as a Taylor series,
$f(s)=f(c_0)-(v-\frac{\varepsilon}{2})f'(c_0)
+\frac{(v-\frac{\varepsilon}{2})^2}{2!}f''(c_0)-\frac{(v-\frac{\varepsilon}{2})^3}{3!}f^{(3)}(c_0)+\dots\mbox{\
for\ }c_0=\rho+\frac{\varepsilon}{2}+v,$ in $\Omega_\varepsilon$, that
$f(s)\not\rightarrow \infty$ when $\varepsilon\rightarrow 0$, a contradiction
which allows us to prove RH.
|
The spread of coronavirus and anti-vaccine conspiracies online hindered
public health responses to the pandemic. We examined the content of external
articles shared on Twitter from February to June 2020 to understand how
conspiracy theories and fake news competed with legitimate sources of
information. Examining external content--articles, rather than social media
posts--is a novel methodology that allows for non-social media specific
analysis of misinformation, tracking of changing narratives over time, and
determining which types of resources (government, news, scientific, or dubious)
dominate the pandemic vaccine conversation. We find that distinct narratives
emerge, that those narratives change over time, and that a lack of government
and scientific messaging on the coronavirus created an information vacuum
filled by both traditional news and conspiracy theories.
|
In the stochastic network model of Britton and Lindholm [Dynamic random
networks in dynamic populations. Journal of Statistical Physics, 2010], the
number of individuals evolves according to a supercritical linear birth and
death process, and a random social index is assigned to each individual at
birth, which controls the rate at which connections to other individuals are
created. We derive a rate for the convergence of the degree distribution in
this model towards the mixed Poisson distribution determined by Britton and
Lindholm based on heuristic arguments. In order to do so, we deduce the degree
distribution at finite time and derive an approximation result for mixed
Poisson distributions to compute an upper bound for the total variation
distance to the asymptotic degree distribution.
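The limiting mixed Poisson degree distribution can be simulated by first drawing the random Poisson mean and then the count given that mean; the exponential mixing law in the sketch below is purely illustrative and is not the mixing distribution derived by Britton and Lindholm.

```python
import math
import random

# Sampling a mixed Poisson degree distribution: first draw the random
# Poisson mean Lambda from a mixing distribution, then the degree given
# Lambda.  The exponential mixing law used here is purely illustrative.

def mixed_poisson_sample(rng, mixing_mean=2.0):
    lam = rng.expovariate(1.0 / mixing_mean)   # Lambda ~ Exp with mean 2
    # Poisson(lam) by Knuth's multiplication method
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
draws = [mixed_poisson_sample(rng) for _ in range(20000)]
print(sum(draws) / len(draws))   # close to E[Lambda] = 2.0
```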
|
The problem of quantum state preparation is one of the main challenges in
achieving the quantum advantage. Furthermore, classically, for multi-level
problems, our ability to solve the corresponding quantum optimal control
problems is rather limited. The ability of the latter to feed into the former
may result in significant progress in quantum computing. To address this
challenge, we propose a formulation of quantum optimal control that makes use
of artificial boundary conditions for the Schr\"odinger equation in combination
with spectral methods. The resulting formulations are well suited for
investigating periodic potentials and lend themselves to direct numerical
treatment using conventional methods for bounded domains.
|
We homogeneously analyzed the \chandra\ X-ray observations of 10
gravitational lenses, HE 0047-1756, QJ 0158-4325, SDSS 0246-0805, HE 0435-1223,
SDSS 0924+0219, SDSS 1004+4112, HE 1104-1805, PG 1115+080, Q 1355-2257, and Q
2237+0305, to measure the differential X-ray absorption between images, the
metallicity, and the dust-to-gas ratio of the lens galaxies. We detected
differential absorption in all lenses except SDSS 0924+0219 and HE 1104-1805.
This doubles the sample of dust-to-gas ratio measurements in cosmologically
distant lens galaxies. We successfully measured the gas phase metallicity of
three lenses, Q 2237+0305, SDSS 1004+4112, and B 1152+199 from the X-ray
spectra. Our results suggest a linear correlation between metallicity and
dust-to-gas ratio (i.e., a constant metal-to-dust ratio), consistent with what
is found for nearby galaxies. We obtain an average dust-to-gas ratio
$E(B-V)/N_H=1.17^{+0.41}_{-0.31} \times 10^{-22}\rm mag\,cm^2\,atom^{-1}$ in
the lens galaxies, with an intrinsic scatter of $\rm0.3\,dex$. Combining these
results with data from GRB afterglows and quasar foreground absorbers, we found
a mean dust-to-gas ratio $\mdtg,$ now significantly lower than the average
Galactic value, $1.7\,\times 10^{-22}\,\rm mag\, cm^{2}\, atoms^{-1}.$ This
suggests evolution of dust-to-gas ratios with redshift and lower average
metallicities for the higher redshift galaxies, consistent with current metal
and dust evolution models of the interstellar medium. The slow evolution of the
metal-to-dust ratio with redshift implies very rapid dust formation in high
redshift ($z>2$) galaxies.
|
Weather is a key production factor in agricultural crop production and at the
same time the most significant and least controllable source of peril in
agriculture. These effects of weather on agricultural crop production have
triggered a widespread support for weather derivatives as a means of mitigating
the risk associated with climate change on agriculture. However, these products
are faced with basis risk as a result of poor design and modelling of the
underlying weather variable (temperature). In order to circumvent these
problems, a novel time-varying mean-reversion L\'evy regime-switching model is
used to model the deseasonalized temperature dynamics. Using
plots and test statistics, it is observed that the residuals of the
deseasonalized temperature data are not normally distributed. To model the
non-normality in the residuals, we propose using the hyperbolic distribution to
capture the semi-heavy tails and skewness in the empirical distributions of the
residuals for the shifted regime. The proposed regime-switching model has a
mean-reverting heteroskedastic process in the base regime and a L\'evy process
in the shifted regime. By using the Expectation-Maximization algorithm, the
parameters of the proposed model are estimated. The proposed model is flexible
as it models the deseasonalized temperature data accurately.
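The regime-switching dynamics described above can be illustrated with a minimal simulation. This is a sketch, not the authors' model: the parameter values are arbitrary, and a Laplace distribution stands in for the hyperbolic distribution as a simple semi-heavy-tailed choice.

```python
import numpy as np

def simulate_regime_switching(n, kappa=0.2, sigma=2.0, p_stay=0.97,
                              heavy_scale=4.0, seed=0):
    # Euler discretization of a mean-reverting (OU-type) process for
    # deseasonalized temperature, with a two-state Markov chain switching
    # between a Gaussian base regime and a heavier-tailed shifted regime.
    # Illustrative only: Laplace noise stands in for the hyperbolic
    # distribution used in the abstract.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    regime = 0
    for t in range(1, n):
        if rng.random() > p_stay:                  # regime switch
            regime = 1 - regime
        if regime == 0:
            shock = sigma * rng.standard_normal()  # base regime
        else:
            shock = rng.laplace(scale=heavy_scale) # semi-heavy tails
        x[t] = x[t - 1] - kappa * x[t - 1] + shock # mean reversion
    return x
```

In a real application, kappa and sigma would be time-varying and estimated (e.g. by the EM algorithm mentioned above) rather than fixed constants.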
|
We investigate the Hawking radiation in the gauge-Higgs-Yukawa theory. The
ballistic model is proposed as an effective description of the system. We find
that a spherical domain wall around the black hole is formed by field dynamics
rather than by a thermal phase transition. Wall formation is a general property
of black holes whose Hawking temperature is equal to or greater than the energy
scale of the theory; we demonstrate the formation of both the electroweak wall
and the GUT wall. We also find that the wall spontaneously charges up the black
hole: when C and CP violation are assumed, the Hawking radiation drives a
mechanism that transports charge into the black hole. This mechanism can
strongly transport hypercharge into a black hole at the electroweak scale.
|
For the classical principal chiral model with boundary, we give the subset of
the Yangian charges which remains conserved under certain integrable boundary
conditions, and extract them from the monodromy matrix. Quantized versions of
these charges are used to deduce the structure of rational solutions of the
reflection equation, analogous to the 'tensor product graph' for solutions of
the Yang-Baxter equation. We give a variety of such solutions, including some
for reflection from non-trivial boundary states, for the SU(N) case, and
confirm these by constructing them by fusion from the basic solutions.
|
In this article, we consider the two-dimensional stochastic Navier-Stokes
equation (SNSE) on a smooth bounded domain, driven by affine-linear
multiplicative white noise and with random initial conditions and Dirichlet
boundary conditions. The random initial condition is allowed to anticipate the
forcing noise. Our main objective is to prove the existence of a solution to
the SNSE under sufficient Malliavin regularity of the initial condition. To
this end we employ anticipating calculus techniques.
|
We report the first results from an X-ray polarimeter with a micropattern gas
proportional counter using an amorphous silicon active matrix readout. With
100% polarized X-rays at 4.5 keV, we obtain a modulation factor of 0.33 +/-
0.03, confirming previous reports of the high polarization sensitivity of a
finely segmented pixel proportional counter. The detector described here has a
geometry suitable for the focal plane of an astronomical X-ray telescope.
Amorphous silicon readout technology will enable additional extensions and
improvements.
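The modulation factor quoted above is the standard figure of merit for such polarimeters: for photoelectron emission angles distributed as N(phi) proportional to 1 + mu*cos(2(phi - phi0)), mu is twice the magnitude of the second circular moment of the angles. A minimal estimator (illustrative only; the sampling setup below is an assumption, not the detector's data):

```python
import numpy as np

def modulation_factor(phi):
    # Estimate the modulation factor mu from photoelectron emission
    # angles phi (radians), assuming N(phi) ~ 1 + mu*cos(2*(phi - phi0)).
    # E[exp(2i*phi)] = (mu/2) * exp(2i*phi0), so mu is twice the
    # magnitude of the mean second circular moment.
    z = np.exp(2j * phi)
    return 2.0 * np.abs(np.mean(z))
```

The polarization angle phi0 could be recovered analogously from half the complex argument of the same moment.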
|
In this article, for the first time in the context of a TOP trap, the necessary
and sufficient conditions for the adiabatic evolution of weak-field-seeking
states have been quantitatively examined. It has been well accepted for
decades that adiabaticity has to be obeyed by the atoms for successful magnetic
trapping. However, we show, on the contrary, that atoms can also be confined
beyond the adiabatic limit. Hence, our findings open new possibilities to relax
the restrictions of atom trapping in laboratories.
|
In a class of renormalizable three-dimensional abelian gauge theories, the
Lorentz invariance is spontaneously broken by the dynamical generation of a
magnetic field $B$. The true ground state resembles that of the quantum Hall
effect. An originally topologically massive photon becomes gapless, fulfilling
the role of the Nambu-Goldstone boson associated with the spontaneous breaking
of the Lorentz invariance. We give a simple explanation and a sufficient
condition for the spontaneous breaking of the Lorentz invariance with the aid
of the Nambu-Goldstone theorem. The decrease of the energy density by $B \not=
0$ is understood to be mostly due to the shift in the zero-point energy of
photons. (Prepared for PASCOS'94.)
|
This paper describes a geometry-based technique for feature extraction
applicable to segmentation-based word recognition systems. The proposed system
extracts geometric features of the character contour, based on the basic line
types that form the character skeleton, and outputs a feature vector. The
feature vectors generated from a training set were then used to train a
pattern recognition engine based on neural networks so that the system could be
benchmarked.
|
In this paper we study twisted algebras of multiplier Hopf ($^*$-)algebras
which generalize all kinds of smash products such as generalized smash
products, twisted smash products, diagonal crossed products, L-R-smash
products, two-sided crossed products and two-sided smash products for the
ordinary Hopf algebras that appeared in [P-O].
|
F1-ATPase catalyses ATP hydrolysis and converts the cellular chemical energy
into mechanical rotation. The hydrolysis reaction in F1-ATPase does not follow
the widely believed Michaelis-Menten mechanism. Instead, the hydrolysis
mechanism is ATP-dependent. We develop a model for the enzyme kinetics and
hydrolysis cooperativity of F1-ATPase which couples binding-state changes to
the catalytic reactions. The quantitative analysis and modeling suggest the
existence of complex cooperative hydrolysis between the three different
catalytic sites of F1-ATPase. This complexity should be taken into account to
resolve the arguments about the binding-change mechanism in F1-ATPase.
|
We use group schemes to construct optimal packings of lines through the
origin. In this setting, optimal line packings are naturally characterized
using representation theory, which in turn leads to a necessary integrality
condition for the existence of equiangular central group frames. We conclude
with an infinite family of optimal line packings using the group schemes
associated with certain Suzuki 2-groups, specifically, extensions of Heisenberg
groups. Notably, this is the first known infinite family of equiangular tight
frames generated by representations of nonabelian groups.
|
The negligible intrinsic spin-orbit coupling (SOC) in graphene can be
enhanced by proximity effects in stacked heterostructures of graphene and
transition metal dichalcogenides (TMDCs). The composition of the TMDC layer
plays a key role in determining the nature and strength of the resultant SOC
induced in the graphene layer. Here, we study the evolution of the
proximity-induced SOC as the TMDC layer is deliberately defected. Alloyed ${\rm
G/W_{\chi}Mo_{1-\chi}Se_2}$ heterostructures with diverse compositions ($\chi$)
and defect distributions are simulated using density functional theory.
Comparison with continuum and tight-binding models allows both local and global
signatures of the metal-atom alloying to be clarified. Our findings show that,
despite some dramatic perturbation of local parameters for individual defects,
the low-energy spin and electronic behaviour follow a simple effective medium
model which depends only on the composition ratio of the metallic species in
the TMDC layer. Furthermore, we demonstrate that the topological state of such
alloyed systems can be feasibly tuned by controlling this ratio.
|
We investigate the supersymmetric D-brane configurations in the pp-wave
backgrounds proposed by Maldacena and Maoz. We study the surviving
supersymmetry in a D-brane configuration from the worldvolume point of view.
When we restrict ourselves to the background with N=(2,2) supersymmetry and no
holomorphic Killing vector term, there are two types of supersymmetric
D-branes: A-type and B-type. An A-type brane is wrapped on a special Lagrangian
submanifold, and the imaginary part of the superpotential should be constant on
its worldvolume. On the other hand, a B-type brane is wrapped on a complex
submanifold, and the superpotential should be constant on its worldvolume. The
results are almost consistent with the worldsheet theory in the lightcone
gauge. We also discuss the inclusion of gauge fields and find BPS D-branes
with gauge field excitations. Furthermore, we consider the backgrounds with
holomorphic Killing vector terms and N=(1,1) supersymmetric backgrounds.
|
We consider the zero-temperature fixed points controlling the critical
behavior of the $d$-dimensional random-field Ising, and more generally $O(N)$,
models. We clarify the nature of these fixed points and their stability in the
region of the $(N,d)$ plane where one passes from a critical behavior
satisfying the $d\rightarrow d-2$ dimensional reduction to one where it breaks
down due to the appearance of strong enough nonanalyticities in the functional
dependence of the cumulants of the renormalized disorder. We unveil an
intricate and unusual behavior.
|
We develop team semantics for Linear Temporal Logic (LTL) to express
hyperproperties, which have recently been identified as a key concept in the
verification of information flow properties. Conceptually, we consider an
asynchronous and a synchronous variant of team semantics. We study basic
properties of this new logic and classify the computational complexity of its
satisfiability, path, and model checking problem. Further, we examine how
extensions of these basic logics react to adding other atomic operators.
Finally, we compare its expressivity to that of HyperLTL, another recently
introduced logic for hyperproperties. Our results show that LTL under team
semantics is a viable alternative to HyperLTL, which complements the
expressivity of HyperLTL and has partially better algorithmic properties.
|
The numerical solution of eigenvalue problems is essential in various
application areas of scientific and engineering domains. In many problem
classes, only a small subset of the eigenvalues is of practical interest, so it
is unnecessary to compute them all. Notable examples are the
electronic structure problems where the $k$-th smallest eigenvalue is closely
related to the electronic properties of materials. In this paper, we consider
the $k$-th eigenvalue problems of symmetric dense matrices with low-rank
off-diagonal blocks. We present a linear time generalized LDL decomposition of
$\mathcal{H}^2$ matrices and combine it with the bisection eigenvalue algorithm
to compute the $k$-th eigenvalue with controllable accuracy. In addition, if
more than one eigenvalue is required, some of the previous computations can be
reused to compute the other eigenvalues in parallel. Numerical experiments show
that our method is more efficient than the state-of-the-art dense eigenvalue
solver in LAPACK/ScaLAPACK and ELPA. Furthermore, tests on electronic state
calculations of carbon nanomaterials demonstrate that our method outperforms
the existing HSS-based bisection eigenvalue algorithm on 3D problems.
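The bisection step rests on Sylvester's law of inertia: the number of eigenvalues of A below a shift sigma equals the number of negative pivots in an LDL^T factorization of A - sigma*I. A dense sketch of this principle (the paper's contribution is the linear-time H^2 LDL decomposition; here SciPy's dense `ldl` stands in):

```python
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(A, sigma):
    # Sylvester's law of inertia: eigenvalues of A below sigma are as
    # numerous as the negative eigenvalues of the (block-diagonal) D in
    # the LDL^T factorization of A - sigma*I.
    _, d, _ = ldl(A - sigma * np.eye(A.shape[0]))
    return int(np.sum(np.linalg.eigvalsh(d) < 0))

def kth_eigenvalue(A, k, tol=1e-10):
    # Bisection for the k-th smallest eigenvalue (k is 1-indexed) of a
    # symmetric matrix A. Gershgorin discs give initial spectrum bounds.
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    lo = np.min(np.diag(A) - radii)
    hi = np.max(np.diag(A) + radii)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(A, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each bisection step needs one factorization, so a fast LDL (as in the paper's H^2 setting) directly translates into a fast k-th eigenvalue solver; computing further eigenvalues can reuse the same bracketing counts.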
|
By means of ab-initio calculations we investigate the optical properties of
pure a-SiN$_x$ samples, with $x \in [0.4, 1.8]$, and samples embedding silicon
nanoclusters (NCs) of diameter $0.5 \leq d \leq 1.0$ nm. In the pure samples
the optical absorption gap and the radiative recombination rate vary according
to the concentration of Si-N bonds. In the presence of NCs the radiative rate
of the samples is barely affected, indicating that the intense
photoluminescence of experimental samples is mostly due to the matrix itself
rather than to the NCs. Moreover, we find evidence of an important role of
Si-N-Si bonds at the NC/matrix interface in the observed photoluminescence
trend.
|
Recent theoretical studies inspired by experiments on the Kitaev magnet
$\alpha$-RuCl$_3$ highlight the nontrivial impact of phonons on the thermal
Hall conductivity of chiral topological phases. Here we introduce mixed
mesoscopic-macroscopic devices that allow refined thermal-transport probes of
non-Abelian spin liquids with Ising topological order. These devices feature a
quantum-coherent mesoscopic region with negligible phonon conductance, flanked
by macroscopic lobes that facilitate efficient thermalization between chiral
Majorana edge modes and bulk phonons. We show that our devices enable $(i)$
accurate determination of the quantized thermal Hall conductivity, $(ii)$
identification of non-Abelian Ising anyons via the temperature dependence of
the thermal conductance, and most interestingly $(iii)$ single-anyon detection
through heat-based anyon interferometry. Analogous results apply broadly to
phonon-coupled chiral topological orders.
|
Motivated by the learned iterative soft thresholding algorithm (LISTA), we
introduce a general class of neural networks suitable for sparse reconstruction
from few linear measurements. By allowing a wide range of degrees of
weight-sharing between the layers, we enable a unified analysis for very
different neural network types, ranging from recurrent ones to networks more
similar to standard feedforward neural networks. Based on training samples, we
aim at learning, via empirical risk minimization, the optimal network parameters
and thereby the optimal network that reconstructs signals from their
low-dimensional linear measurements. We derive generalization bounds by
analyzing the Rademacher complexity of hypothesis classes consisting of such
deep networks that also take into account the thresholding parameters. We
obtain estimates of the sample complexity that essentially depend only linearly
on the number of parameters and on the depth. We apply our main result to
obtain specific generalization bounds for several practical examples, including
different algorithms for (implicit) dictionary learning, and convolutional
neural networks.
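The networks considered generalize unrolled iterative soft thresholding. A minimal sketch of classical ISTA, whose matrices W1, W2 and threshold tau are exactly the quantities that become learnable parameters in LISTA-type networks (illustrative; not the paper's trained model):

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=500):
    # Classical ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    # LISTA unrolls these iterations into network layers and learns
    # W1, W2 and tau from data instead of fixing them as below.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    W1 = A.T / L
    W2 = np.eye(A.shape[1]) - A.T @ A / L
    tau = lam / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(W1 @ y + W2 @ x, tau)
    return x
```

Weight-sharing across layers corresponds to reusing the same (W1, W2, tau) in every iteration, as above; the feedforward end of the spectrum gives each layer its own parameters.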
|
The recent success of Transformers in the language domain has motivated
adapting them to a multimodal setting, where a new visual model is trained in
tandem with an already pretrained language model. However, due to the excessive
memory requirements of Transformers, existing work typically fixes the
language model and trains only the vision module, which limits the ability to
learn cross-modal information in an end-to-end manner. In this work, we focus
on reducing the parameters of multimodal Transformers in the context of
audio-visual video representation learning. We alleviate the high memory
requirement by sharing the parameters of Transformers across layers and
modalities; we decompose the Transformer into modality-specific and
modality-shared parts so that the model learns the dynamics of each modality
both individually and together, and propose a novel parameter sharing scheme
based on low-rank approximation. We show that our approach reduces parameters
of the Transformers up to 97$\%$, allowing us to train our model end-to-end
from scratch. We also propose a negative sampling approach based on an instance
similarity measured on the CNN embedding space that our model learns together
with the Transformers. To demonstrate our approach, we pretrain our model on
30-second clips (480 frames) from Kinetics-700 and transfer it to audio-visual
classification tasks.
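One way to realize the low-rank sharing idea is to express each modality-specific weight as a shared matrix plus a low-rank correction. A sketch using truncated SVD (an illustration of the general idea, not the paper's exact parameter sharing scheme):

```python
import numpy as np

def low_rank_correction(W_shared, W_modality, rank):
    # Approximate a modality-specific weight matrix as the shared weight
    # plus a rank-r correction, via truncated SVD of the difference:
    # W_modality ~ W_shared + A @ B, where A is (m, r) and B is (r, n).
    U, s, Vt = np.linalg.svd(W_modality - W_shared, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # scale the leading left singular vectors
    B = Vt[:rank]
    return A, B
```

For a 64x64 weight and rank 4, the per-modality correction costs 2*64*4 = 512 parameters instead of 4096, which is the kind of saving that makes end-to-end training feasible.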
|
We report superconducting properties of PrO1-xFxBiS2 compounds, synthesized
by the vacuum encapsulation technique. The synthesized PrO1-xFxBiS2 (x=0.1,
0.3, 0.5, 0.7 and 0.9) samples crystallize in a tetragonal P4/nmm space
group. Both transport and DC magnetic susceptibility measurements showed bulk
superconductivity below 4 K. The maximum Tc is obtained for the x = 0.7 sample.
Under an applied magnetic field, both the Tc onset and Tc (R = 0) shift to
lower temperatures. We estimated the highest upper critical field [Hc2(0)],
for the PrO0.3F0.7BiS2 sample, to be above 4 T (Tesla). The thermally
activated flux flow (TAFF) activation energy (U0) is estimated to be 54.63 meV
in a 0.05 Tesla field for the PrO0.3F0.7BiS2 sample. Hall measurements showed
that electron charge carriers are dominant in these compounds. Thermoelectric
data (thermal conductivity and Seebeck coefficient) suggest strong
electron-electron correlations in this material.
|
We present numerical simulations of binary neutron star mergers, comparing
irrotational binaries to binaries of NSs rotating aligned to the orbital
angular momentum. For the first time, we study spinning BNSs employing nuclear
physics equations of state, namely the ones of Lattimer and Swesty as well as
Shen, Horowitz, and Teige. We study mainly equal mass systems leading to a
hypermassive neutron star (HMNS), and analyze in detail its structure and
dynamics. In order to exclude gauge artifacts, we introduce a novel coordinate
system used for post-processing. The results for our equal mass models show
that the strong radial oscillations of the HMNS modulate the instantaneous
frequency of the gravitational wave (GW) signal to an extent that leads to
separate peaks in the corresponding Fourier spectrum. In particular, the high
frequency peaks which are often attributed to combination frequencies can also
be caused by the modulation of the m=2 mode frequency in the merger phase. As a
consequence for GW data analysis, the offset of the high frequency peak does
not necessarily carry information about the radial oscillation frequency.
Further, the low frequency peak in our simulations is dominated by the
contribution of the plunge and the first 1-2 bounces. The amplitude of the
radial oscillations depends on the initial NS spin, which therefore has a
complicated influence on the spectrum. Another important result is that HMNSs
can consist of a slowly rotating core with an extended, massive envelope
rotating close to Keplerian velocity, contrary to the common notion that a
rapidly rotating core is necessary to prevent a prompt collapse. Finally, our
estimates on the amount of unbound matter show a dependency on the initial NS
spin, explained by the influence of the latter on the amplitude of radial
oscillations, which in turn cause shock waves.
|
We present measurements of microwave-induced Shapiro steps in a
superconducting nanobridge weak link in the dissipative branch of a hysteretic
current-voltage characteristic. We demonstrate that Shapiro steps can be used
to infer a reduced critical current and associated effective local temperature.
Our observation of Shapiro steps in the dissipative branch shows that a finite
Josephson coupling exists in the dissipative state and thus can be used to put
an upper limit on the effective temperature and on the size of the region that
can be heated above the critical temperature. This work provides evidence that
Josephson behaviour can still exist in thermally-hysteretic weak link devices
and will allow extension of the temperature ranges over which nanobridge-based
single flux quantum circuits, nanoSQUIDs and Josephson voltage standards can be
used.
|
Recent research has successfully adapted vision-based convolutional neural
network (CNN) architectures for audio recognition tasks using Mel-Spectrograms.
However, these CNNs have high computational costs and memory requirements,
limiting their deployment on low-end edge devices. Motivated by the success of
efficient vision models like InceptionNeXt and ConvNeXt, we propose
AudioRepInceptionNeXt, a single-stream architecture. Its basic building block
breaks down the parallel multi-branch depth-wise convolutions with descending
scales of k x k kernels into a cascade of two multi-branch depth-wise
convolutions. The first multi-branch consists of parallel multi-scale 1 x k
depth-wise convolutional layers followed by a similar multi-branch employing
parallel multi-scale k x 1 depth-wise convolutional layers. This reduces
computational and memory footprint while separating time and frequency
processing of Mel-Spectrograms. The large kernels capture global frequencies
and long activities, while small kernels get local frequencies and short
activities. We also reparameterize the multi-branch design during inference to
further boost speed without losing accuracy. Experiments show that
AudioRepInceptionNeXt reduces parameters and computations by more than 50% and
improves inference speed by 1.28x over state-of-the-art CNNs like Slow-Fast while
maintaining comparable accuracy. It also learns robustly across a variety of
audio recognition tasks. Codes are available at
https://github.com/StevenLauHKHK/AudioRepInceptionNeXt.
|
We have analysed the Rhodes/HartRAO survey at 2326 MHz and derived the global
angular power spectrum of Galactic continuum emission. In order to measure the
angular power spectrum of the diffuse component, point sources were removed
from the map by median filtering. A least-square fit to the angular power
spectrum of the entire survey with a power law spectrum C_l proportional to
l^{-alpha}, gives alpha = 2.43 +/- 0.01 for l = 2-100. The angular power
spectrum of radio emission appears to steepen at high Galactic latitudes and
for observed regions with |b| > 20 deg, the fitted spectral index is alpha =
2.92 +/- 0.07. We have extrapolated this result to 30 GHz (the lowest frequency
channel of Planck) and estimate that no significant contribution to the sky
temperature fluctuation is likely to come from synchrotron at degree-angular
scales.
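A power-law fit of the form C_l proportional to l^(-alpha) is linear in log-log space, so the spectral index can be obtained by ordinary least squares on the logarithms. A minimal sketch (illustrative; not the survey's actual fitting pipeline):

```python
import numpy as np

def fit_power_law(ell, C_ell):
    # Least-squares fit of C_l = C0 * l^{-alpha} in log-log space:
    # log C_l = log C0 - alpha * log l is linear in log l.
    slope, intercept = np.polyfit(np.log(ell), np.log(C_ell), 1)
    return -slope, np.exp(intercept)  # (alpha, C0)
```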
|
Long sentences have been a persistent issue in written communication for many
years since they make it challenging for readers to grasp the main points or
follow the initial intention of the writer. This survey, conducted using the
PRISMA guidelines, systematically reviews two main strategies for addressing
the issue of long sentences: a) sentence compression and b) sentence splitting.
An increased trend of interest in this area has been observed since 2005, with
significant growth after 2017. Current research is dominated by supervised
approaches for both sentence compression and splitting. Yet, there is a
considerable gap in weakly and self-supervised techniques, suggesting an
opportunity for further research, especially in domains with limited data. In
this survey, we categorize and group the most representative methods into a
comprehensive taxonomy. We also conduct a comparative evaluation analysis of
these methods on common sentence compression and splitting datasets. Finally,
we discuss the challenges and limitations of current methods, providing
valuable insights for future research directions. This survey is meant to serve
as a comprehensive resource for addressing the complexities of long sentences.
We aim to enable researchers to make further advancements in the field until
long sentences are no longer a barrier to effective communication.
|
This paper presents exact formulas for the regularity and depth of powers of
edge ideals of an edge-weighted star graph. Additionally, we provide exact
formulas for the regularity of powers of the edge ideal of an edge-weighted
integrally closed path, as well as lower bounds on the depth of powers of such
an edge ideal.
|
It is proposed that $T$ violation in physics, as well as the masses of
electron and $u, d$ quarks, arise from a pseudoscalar interaction with a new
spin 0 field $\tau(x)$, odd in $P$ and $T$, but even in $C$. This interaction
contains a factor $i\gamma_5$ in the quark and lepton Dirac algebra, so that
the full Hamiltonian is $P$, $T$ conserving; but by spontaneous symmetry
breaking, the new field $\tau(x)$ has a nonzero expectation value $<\tau>\neq
0$ that breaks $P$ and $T$ symmetry. Oscillations of $\tau(x)$ about its
expectation value produce a new particle, the "timeon". The mass of the timeon
is expected to be high because of its flavor-changing properties.
The main body of the paper is on the low energy phenomenology of the timeon
model. As we shall show, for the quark system the model gives a compact
3-dimensional geometric picture consisting of two elliptic plates and one
needle, which embodies the ten observables: six quark masses, three Eulerian
angles $\theta_{12}, \theta_{23}, \theta_{31}$ and the Jarlskog invariant of
the CKM matrix.
For leptons, we assume that the neutrinos do not have a direct timeon
interaction; therefore, the lowest neutrino mass is zero. The timeon
interaction with charged leptons yields the observed nonzero electron mass,
analogous to the up and down quark masses. Furthermore, the timeon model for
leptons contains two fewer theoretical parameters than observables. Thus, there
are two testable relations between the three angles $\theta_{12}, \theta_{23},
\theta_{31}$ and the Jarlskog invariant of the neutrino mapping matrix.
|
We consider the following evolutionary Hamilton-Jacobi equation with initial
condition: \begin{equation*} \begin{cases}
\partial_tu(x,t)+H(x,u(x,t),\partial_xu(x,t))=0,\\ u(x,0)=\phi(x), \end{cases}
\end{equation*} where $\phi(x)\in C(M,\mathbb{R})$. Under some assumptions on
the convexity of $H(x,u,p)$ with respect to $p$ and the Osgood growth of
$H(x,u,p)$ with respect to $u$, we establish an implicit variational
principle and provide an intrinsic relation between viscosity solutions and
certain minimal characteristics. Moreover, we obtain a representation formula
of the viscosity solution of the evolutionary Hamilton-Jacobi equation.
|
We analyze the effect of nonlinear boundary conditions on an
advection-diffusion equation on the half-line. Our model is inspired by models
for crystal growth where diffusion models diffusive relaxation of a
displacement field, advection is induced by apical growth, and boundary
conditions incorporate non-adiabatic effects on displacement at the boundary.
The equation, in particular the boundary fluxes, possesses a discrete gauge
symmetry, and we study the role of simple, entire solutions, here periodic,
homoclinic, or heteroclinic relative to this gauge symmetry, in the global
dynamics.
|
We propose a computationally efficient algorithm for the device activity
detection problem in the multi-cell massive multi-input multi-output (MIMO)
system, where the active devices transmit their signature sequences to multiple
base stations (BSs) in multiple cells and all the BSs cooperate to detect the
active devices.
The device activity detection problem has been formulated as a maximum
likelihood estimation (MLE) problem in the literature. The state-of-the-art
algorithm for solving the problem is the (random) coordinate descent (CD)
algorithm. However, the CD algorithm fails to exploit the special sparsity
structure of the solution of the device activity detection problem, i.e., most
of the devices are not active in each time slot. In this paper, we propose a
novel
active set selection strategy to accelerate the CD algorithm and propose an
efficient active set CD algorithm for solving the considered problem.
Specifically, at each iteration, the proposed active set CD algorithm first
selects a small subset of all devices, namely the active set, which contains a
few devices that contribute the most to the deviation from the first-order
optimality condition of the MLE problem, and thus can potentially provide the
most improvement to the objective function, and then applies the CD algorithm
to perform
the detection for the devices in the active set. Simulation results show that
the proposed active set CD algorithm significantly outperforms the
state-of-the-art CD algorithm in terms of computational efficiency.
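The active-set strategy can be illustrated on a simpler surrogate. The sketch below applies the same idea (select the coordinates with the largest first-order (KKT) violation, then run coordinate descent only on them) to nonnegative least squares rather than the MLE problem of the paper:

```python
import numpy as np

def active_set_cd_nnls(A, b, n_outer=50, set_size=5):
    # Minimize 0.5*||A x - b||^2 subject to x >= 0 with an active-set
    # coordinate descent: each outer iteration picks the few coordinates
    # that most violate the first-order (KKT) conditions and runs exact
    # CD sweeps over that small subset only.
    m, n = A.shape
    x = np.zeros(n)
    col_sq = np.sum(A * A, axis=0)          # squared column norms
    for _ in range(n_outer):
        grad = A.T @ (A @ x - b)
        # KKT violation: |grad_i| where x_i > 0, max(-grad_i, 0) at x_i = 0.
        viol = np.where(x > 0, np.abs(grad), np.maximum(-grad, 0.0))
        active = np.argsort(viol)[-set_size:]
        for _ in range(10):                  # inner CD sweeps
            for i in active:
                g_i = A[:, i] @ (A @ x - b)
                x[i] = max(0.0, x[i] - g_i / col_sq[i])
    return x
```

Because most coordinates stay at zero (as most devices are inactive), each outer iteration touches only `set_size` coordinates, which is the source of the speedup over sweeping all coordinates.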
|
$CaWO_4$ and $Al_2O_3$ are well-established target materials used by
experiments searching for rare events like elastic scattering off a
hypothetical Dark Matter particle. In recent years, experiments have reached
detection thresholds for nuclear recoils at the 10 eV-scale. At this energy
scale, a reliable Monte Carlo simulation of the expected background is crucial.
However, none of the publicly available general-purpose simulation packages are
validated at this energy scale and for these targets. The recently started
ELOISE project aims to provide reliable simulations of electromagnetic particle
interactions for this use case by obtaining experimental reference data,
validating the simulation code against them, and, if needed, calibrating the
code to the reference data.
|
Characterizing multipartite quantum systems is crucial for quantum computing
and many-body physics. The problem, however, becomes challenging when the
system size is large and the properties of interest involve correlations among
a large number of particles. Here we introduce a neural network model that can
predict various quantum properties of many-body quantum states with constant
correlation length, using only measurement data from a small number of
neighboring sites. The model is based on the technique of multi-task learning,
which we show to offer several advantages over traditional single-task
approaches. Through numerical experiments, we show that multi-task learning can
be applied to sufficiently regular states to predict global properties, like
string order parameters, from the observation of short-range correlations, and
to distinguish between quantum phases that cannot be distinguished by
single-task networks. Remarkably, our model appears to be able to transfer
information learnt from lower dimensional quantum systems to higher dimensional
ones, and to make accurate predictions for Hamiltonians that were not seen
during training.
|