The interpretation of Higgs data is typically based on different assumptions
about whether there can be additional decay modes of the Higgs or whether any
couplings can be bounded by theoretical arguments. Going beyond these
assumptions requires either a precision measurement of the Higgs width or an
absolute measurement of a coupling to eliminate a flat direction in precision
fits that occurs when $|g_{hVV}/g_{hVV}^{SM}|>1$, where $V=W^\pm, Z$. In this
paper we explore how well a high energy muon collider can test Higgs physics
without having to make assumptions about the total width of the Higgs. In
particular, we investigate off-shell methods for Higgs production used at the
LHC and searches for invisible decays of the Higgs to see how powerful they are
at a muon collider. We then investigate the theoretical requirements on a model
which can exist in such a flat direction. Combining expected Higgs precision
with other constraints, the most dangerous flat direction is described by
generalized Georgi-Machacek models. We find that by combining direct searches
with Higgs precision, a high energy muon collider can robustly test single
Higgs precision down to the $\mathcal{O}(0.1\%)$ level without having to assume
SM Higgs decays. Furthermore, it allows one to bound new contributions to the
width at the sub-percent level as well. Finally, we comment on how even in this
difficult flat direction for Higgs precision, a muon collider can robustly test
or discover new physics in multiple ways. Expanding beyond simple coupling
modifiers/EFTs, we show that muon colliders can explore a large region of EWSB
parameter space that is not probed by standard Higgs precision observables
alone.
|
In the recent literature there has been considerable interest in the phenomenon
of noise-induced transport, in the absence of an average bias, occurring in
spatially periodic systems far from equilibrium. One of the main motivations in
this area is to understand the mechanism behind the operation of biological
motors at the molecular scale. These molecular motors convert chemical energy
available during the hydrolysis of ATP into mechanical motion to transport
cargo and vesicles in living cells with very high reliability, adaptability and
efficiency in a very noisy environment. The basic principle behind such a
motion, namely the Brownian ratchet principle, has applications in
nanotechnology as novel nanoparticle separation devices. Also, the mechanism of
ratchet operation finds applications in game theory. Here, we briefly focus on
the physical concepts underlying the constructive role of noise in assisting
transport at a molecular level. The nature of particle currents, the energetic
efficiency of these motors, the entropy production in these systems and the
phenomenon of resonance/coherence are discussed.
|
Often machine learning methods are applied and results reported in cases
where there is little to no information concerning accuracy of the output.
Simply because a computer program returns a result does not ensure its
validity. If decisions are to be made based on such results it is important to
have some notion of their veracity. Contrast trees represent a new approach for
assessing the accuracy of many types of machine learning estimates that are not
amenable to standard (cross) validation methods. In situations where
inaccuracies are detected boosted contrast trees can often improve performance.
A special case, distribution boosting, provides an assumption-free method for
estimating the full probability distribution of an outcome variable given any
set of joint input predictor variable values.
|
The ground state of the Hubbard model is studied within the single-site
approximation (SSA) and beyond the SSA. Within the SSA, the ground state is a
typical Mott insulator at the critical point n=1 and U/W=+infty, with n being
the electron density per unit cell, W the bandwidth of electrons, and U the
on-site repulsion, and is a normal Fermi liquid except at the critical point.
Beyond the SSA, the normal Fermi liquid is unstable against a non-normal
Fermi-liquid state, such as a magnetic or superconducting state, in two and
higher dimensions, except in the trivial case of U=0. In order to explain actual
observed metal-insulator transitions, one or several effects among the
electron-phonon interaction, multi-band or multi-orbital effects, and effects
of disorder should be considered beyond the Hubbard model.
|
We construct global-in-time singular dynamics for the (renormalized) cubic
fourth order nonlinear Schr\"odinger equation on the circle, having the white
noise measure as an invariant measure. For this purpose, we introduce the
"random-resonant / nonlinear decomposition", which allows us to single out the
singular component of the solution. Unlike the classical McKean, Bourgain, Da
Prato-Debussche type argument, this singular component is nonlinear, consisting
of arbitrarily high powers of the random initial data. We also employ a random
gauge transform, leading to random Fourier restriction norm spaces. For this
problem, a contraction argument does not work and we instead establish
convergence of smooth approximating solutions by studying the partially
iterated Duhamel formulation under the random gauge transform. We reduce the
crucial nonlinear estimates to boundedness properties of certain random
multilinear functionals of the white noise.
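For reference, the equation in question can be written as follows (a sketch of
the standard formulation of the cubic fourth-order NLS on the circle; the sign
and normalization conventions below are our assumptions):

```latex
% Cubic fourth-order NLS on the circle with white-noise data
% (conventions assumed):
\begin{equation}
  i\,\partial_t u = \partial_x^4 u \pm |u|^2 u,
  \qquad (t,x) \in \mathbb{R} \times \mathbb{T},
  \qquad u|_{t=0} = u_0,
\end{equation}
% where u_0 is distributed according to the white noise measure on
% \mathbb{T}, and the nonlinearity is suitably renormalized.
```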
|
In this paper we present a solution to the task of "unsupervised domain
adaptation (UDA) of a given pre-trained semantic segmentation model without
relying on any source domain representations". Previous UDA approaches for
semantic segmentation either employed simultaneous training of the model in the
source and target domains, or they relied on an additional network, replaying
source domain knowledge to the model during adaptation. In contrast, we present
our novel Unsupervised BatchNorm Adaptation (UBNA) method, which adapts a given
pre-trained model to an unseen target domain without using -- beyond the
existing model parameters from pre-training -- any source domain
representations (neither data, nor networks) and which can also be applied in
an online setting or using just a few unlabeled images from the target domain
in a few-shot manner. Specifically, we partially adapt the normalization layer
statistics to the target domain using an exponentially decaying momentum
factor, thereby mixing the statistics from both domains. By evaluation on
standard UDA benchmarks for semantic segmentation we show that this is superior
to a model without adaptation and to baseline approaches using statistics from
the target domain only. Compared to standard UDA approaches we report a
trade-off between performance and usage of source domain representations.
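To make the normalization-statistics adaptation concrete, here is a minimal
PyTorch-style sketch of the idea as we read it from the abstract: only the
BatchNorm running statistics are re-estimated on unlabeled target batches, with
an exponentially decaying momentum mixing source and target statistics. The
function name and hyperparameter values are illustrative assumptions, not the
authors' implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def ubna_style_adapt(model, target_loader, alpha0=0.1, decay=0.95, steps=50):
    """Re-estimate BatchNorm statistics on unlabeled target-domain batches
    with an exponentially decaying momentum (hypothetical sketch)."""
    model.eval()  # freeze everything except the BN running statistics
    bn_layers = [m for m in model.modules()
                 if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]
    for step, batch in enumerate(target_loader):
        if step >= steps:
            break
        images = batch[0] if isinstance(batch, (list, tuple)) else batch
        momentum = alpha0 * decay ** step       # exponentially decaying
        for bn in bn_layers:
            bn.momentum = momentum
            bn.train()                          # enable running-stat updates
        model(images)                           # forward pass updates mean/var
        for bn in bn_layers:
            bn.eval()
    return model
```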
|
Sponsored Search Auctions (SSAs) arguably represent the problem at the
intersection of computer science and economics with the deepest real-life
applications. Within the realm of SSAs, the study of the effects that showing one
ad has on the other ads, a.k.a. externalities in economics, is of utmost
importance and has so far attracted much research attention. However,
even the basic question of modeling the problem has so far escaped a definitive
answer. The popular cascade model is arguably too idealized to really describe
the phenomenon yet it allows a good comprehension of the problem. Other models,
instead, describe the setting more adequately but are too complex to permit a
satisfactory theoretical analysis. In this work, we attempt to get the best of
both approaches: firstly, we define a number of general mathematical
formulations for the problem in the attempt to have a rich description of
externalities in SSAs and, secondly, prove a host of results drawing a nearly
complete picture about the computational complexity of the problem. We
complement these approximability results with some considerations about
mechanism design in our context.
|
We present a new approach to three-dimensional electromagnetic scattering
problems via fast isogeometric boundary element methods. Starting with an
investigation of the theoretical setting around the electric field integral
equation within the isogeometric framework, we show existence, uniqueness, and
quasi-optimality of the isogeometric approach. For a fast and efficient
computation, we then introduce and analyze an interpolation-based fast
multipole method tailored to the isogeometric setting, which admits competitive
algorithmic and complexity properties. This is followed by a series of
numerical examples of industrial scope, together with a detailed presentation
and interpretation of the results.
|
We analyze third-harmonic generation from high-index dielectric nanoparticles
and discuss the basic features and multipolar nature of the parametrically
generated electromagnetic fields near the Mie-type optical resonances. By
combining both analytical and numerical methods, we study the nonlinear
scattering from simple nanoparticle geometries such as spheres and disks in the
vicinity of the magnetic dipole resonance. We reveal approaches for
manipulating and directing the resonantly enhanced nonlinear emission with
subwavelength all-dielectric structures that can be of particular interest
for novel designs of nonlinear optical antennas and for engineering the magnetic
optical nonlinear response at the nanoscale.
|
We study the influence of the spin-dependent phase shifts (SDPS) associated
with the electronic reflection and transmission amplitudes, acquired by
electrons upon scattering on the potential barrier, on the Andreev reflection
probability of electron and hole excitations and on the charge conductance of a
ferromagnet/insulator/d-wave superconductor (FIS) contact. Various
superconductor orientations are considered. It is found that SDPS can suppress
the zero-potential peak and restore finite-potential peaks in the charge
conductance of the FIS contact for the (110) orientation
of the d-wave superconductor and, on the contrary, can restore the
zero-potential peak and suppress finite-potential peaks for the (100)
orientation of the d-wave superconductor.
|
In this paper, we report a new scheme to amplify a microwave signal carried
on laser light at $\lambda$=852nm. The amplification is performed with a
semiconductor tapered amplifier, and this scheme is used to drive stimulated
Raman transitions in an atom interferometer. Sideband generation in the
amplifier, due to self-phase and amplitude modulation, is investigated and
characterized. We also demonstrate that the amplifier does not induce any
significant phase noise on the beat signal. Finally, the degradation of the
performance of the interferometer due to the amplification process is shown to
be negligible.
|
In this paper, a novel approach is proposed to automatically construct a
parallel discourse corpus for dialogue machine translation. Firstly, parallel
subtitle data and the corresponding monolingual movie script data are
crawled and collected from the Internet. Then tags such as speaker and discourse
boundary from the script data are projected to its subtitle data via an
information retrieval approach in order to map monolingual discourse to
bilingual texts. We not only evaluate the mapping results, but also integrate
speaker information into the translation. Experiments show our proposed method
can achieve 81.79% and 98.64% accuracy on speaker and dialogue boundary
annotation, respectively, and speaker-based language model adaptation can obtain
around 0.5 BLEU points of improvement in translation quality. Finally, we publicly release
around 100K parallel discourse data with manual speaker and dialogue boundary
annotation.
|
Cosmic internal symmetry (COINS) relates cosmic vacuum, dark matter, baryons
and radiation, in a finite universe. Evidence for COINS comes from the
concordance data, including the WMAP data. COINS underlies 1) the cosmic
coincidence, 2) the spatial flatness, 3) the cosmic entropy, and 4) the initial
amplitude of cosmic perturbations. COINS also suggests a solution to the
naturalness problem. COINS is due to electroweak-scale physics or
multi-dimensional physics, if macroscopic extra dimensions really exist.
|
Young stars are associated with prominent outflows of molecular gas. The
ejection of gas via these outflows is believed to remove angular momentum from
the protostellar system, thus permitting young stars to grow by accretion of
material from the protostellar disk. The underlying mechanism for outflow
ejection is not yet understood, but is believed to be closely linked to the
protostellar disk. Assorted scenarios have been proposed to explain
protostellar outflows; the main difference between these models is the region
where acceleration of material takes place: close to the protostar itself
('X-wind', or stellar wind), in a larger region throughout the protostellar
disk (disk wind), or at the interface between the two. Because of the limits of
observational studies, outflow launching regions have so far only been probed
by indirect extrapolation. Here we report observations of carbon monoxide
toward the outflow associated with the TMC1A protostellar system. These data
show that gas is ejected from a region extending up to a radial distance of 25
astronomical units from the central protostar, and that angular momentum is
removed from an extended region of the disk. This demonstrates that the
outflowing gas is launched by an extended disk wind from a Keplerian disk.
Hence, we rule out X-wind and stellar wind launching scenarios as the source of
the emission on the scales we observe.
|
Federated learning (FL) is rapidly gaining popularity and enables multiple
data owners ({\em a.k.a.} FL participants) to collaboratively train machine
learning models in a privacy-preserving way. A key unaddressed scenario is that
these FL participants are in a competitive market, where market shares
represent their competitiveness. Although they are interested in enhancing the
performance of their respective models through FL, market leaders (who are
often data owners who can contribute significantly to building high performance
FL models) want to avoid losing their market shares by enhancing their
competitors' models. Currently, there is no modeling tool to analyze such
scenarios and support informed decision-making. In this paper, we bridge this
gap by proposing the \underline{mar}ket \underline{s}hare-based decision
support framework for participation in \underline{FL} (MarS-FL). We introduce
{\em two notions of $\delta$-stable market} and {\em friendliness} to measure
the viability of FL and the market acceptability of FL. The FL participants'
behaviours can then be predicted using game theoretic tools (i.e., their
optimal strategies concerning participation in FL). If the market
$\delta$-stability is achievable, the final model performance improvement of
each FL participant shall be bounded, which relates to the market conditions of FL
applications. We provide tight bounds and quantify the friendliness, $\kappa$,
of given market conditions to FL. Experimental results show the viability of FL
in a wide range of market conditions. Our results are useful for identifying
the market conditions under which collaborative FL model training is viable
among competitors, and the requirements that have to be imposed while applying
FL under these conditions.
|
We study the charmless two-body $\Lambda_b\to \Lambda (\phi,\eta^{(\prime)})$
and three-body $\Lambda_b\to \Lambda K^+K^- $ decays. We obtain ${\cal
B}(\Lambda_b\to \Lambda\phi)=(3.53\pm 0.24)\times 10^{-6}$, in agreement with the
recent LHCb measurement. However, we find that ${\cal B}(\Lambda_b\to
\Lambda(\phi\to)K^+ K^-)=(1.71\pm 0.12)\times 10^{-6}$ is unable to explain the
LHCb observation of ${\cal B}(\Lambda_b\to\Lambda K^+ K^-)=(15.9\pm 1.2\pm
1.2\pm 2.0)\times 10^{-6}$, which implies the possibility of other
contributions, such as that from the resonant $\Lambda_b\to K^-
N^*,\,N^*\to\Lambda K^+$ decay with $N^*$ as a higher-wave baryon state. For
$\Lambda_b\to \Lambda \eta^{(\prime)}$, we show that ${\cal B}(\Lambda_b\to
\Lambda\eta,\,\Lambda\eta^\prime)= (1.47\pm 0.35,1.83\pm 0.58)\times 10^{-6}$,
which are consistent with the current data of $(9.3^{+7.3}_{-5.3},<3.1)\times
10^{-6}$, respectively. Our results also support the relation of ${\cal
B}(\Lambda_b\to \Lambda\eta) \simeq {\cal B}(\Lambda_b\to\Lambda\eta^\prime)$,
given by the previous study.
|
The self-organized growth of Co nanoparticles with 10 nm periodicity was
achieved at room temperature on a Ag(001) surface patterned by an underlying
dislocation network, as shown by real-time, in situ Grazing Incidence Small-
and Wide-Angle X-ray Scattering. The misfit dislocation network, buried at the
interface between a 5nm-thick Ag thin film and a MgO(001) substrate, induces a
periodic strain field on top of the surface. Tensile areas are found to be the
most favorable sites for Co nucleation and growth, as highlighted by Molecular
Dynamics simulations.
|
Enabling cellular connectivity for drones introduces a wide set of challenges
and opportunities. Communication of cellular-connected drones is influenced by
3-dimensional mobility and line-of-sight channel characteristics, which result
in a higher number of handovers with increasing altitude. Our cell planning
simulations with coexisting aerial and terrestrial users indicate that the
severe interference from drones to base stations is a major challenge for
uplink communications of terrestrial users. Here, we first present the major
challenges in the coexistence of terrestrial and drone communications by
considering real geographical network data for Stockholm. Then, we derive
analytical models for the key performance indicators (KPIs), including
communications delay and interference over cellular networks, and formulate the
handover and radio resource management (H-RRM) optimization problem.
Afterwards, we transform this problem into a machine learning problem, and
propose a deep reinforcement learning solution to the H-RRM problem. Finally,
using simulation results, we present how the speed and altitude of drones, and
the tolerable level of interference, shape the optimal H-RRM policy in the
network. In particular, heat-maps of handover decisions at different drone
altitudes/speeds are presented, which motivate a revision of the legacy
handover schemes and a redefinition of the boundaries of cells in the sky.
|
Photometry is presented of the Dec. 25, 2007 transit of HD 17156b, which has
the longest orbital period and highest orbital eccentricity of all the known
transiting exoplanets. New measurements of the stellar radial velocity are also
presented. All the data are combined and integrated with stellar-evolutionary
modeling to derive refined system parameters. The planet's mass and radius are
found to be 3.212_{-0.082}^{+0.069} Jupiter masses and 1.023_{-0.055}^{+0.070}
Jupiter radii. The corresponding stellar properties are 1.263_{-0.047}^{+0.035}
solar masses and 1.446_{-0.067}^{+0.099} solar radii. The planet is smaller by
1 sigma than a theoretical solar-composition gas giant with the same mass and
equilibrium temperature, a possible indication of heavy-element enrichment. The
midtransit time is measured to within 1 min, and shows no deviation from a
linear ephemeris (and therefore no evidence for orbital perturbations from
other planets). We provide ephemerides for future transits and superior
conjunctions. There is an 18% chance that the orbital plane is oriented close
enough to edge-on for secondary eclipses to occur at superior conjunction.
Observations of secondary eclipses would reveal the thermal emission spectrum
of a planet that experiences unusually large tidal heating and insolation
variations.
|
This study presents an innovative computer vision framework designed to
analyze human movements in industrial settings, aiming to enhance biomechanical
analysis by integrating seamlessly with existing software. Through a
combination of advanced imaging and modeling techniques, the framework allows
for comprehensive scrutiny of human motion, providing valuable insights into
kinematic patterns and kinetic data. Utilizing Convolutional Neural Networks
(CNNs), Direct Linear Transform (DLT), and Long Short-Term Memory (LSTM)
networks, the methodology accurately detects key body points, reconstructs 3D
landmarks, and generates detailed 3D body meshes. Extensive evaluations across
various movements validate the framework's effectiveness, demonstrating
comparable results to traditional marker-based models with minor differences in
joint angle estimations and precise estimations of weight and height.
Statistical analyses consistently support the framework's reliability, with
joint angle estimations showing less than a 5-degree difference for hip
flexion, elbow flexion, and knee angle methods. Additionally, weight and height
estimation exhibit average errors of less than 6 % and less than 2 %,
respectively, when compared to ground-truth values from 10 subjects. The integration
of the Biomech-57 landmark skeleton template further enhances the robustness
and reinforces the framework's credibility. This framework shows significant
promise for meticulous biomechanical analysis in industrial contexts,
eliminating the need for cumbersome markers and extending its utility to
diverse research domains, including the study of specific exoskeleton devices'
impact on facilitating the prompt return of injured workers to their tasks.
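As an illustration of the DLT step mentioned above, the following is a generic
sketch (our own, not the paper's code) of triangulating one 3D landmark from
two calibrated views; the projection matrices and pixel coordinates would come
from the camera calibration and the CNN keypoint detector, respectively.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Direct Linear Transform triangulation of a single 3D point.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    coordinates of the same landmark in the two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * p3^T - p1^T
        x1[1] * P1[2] - P1[1],   # v1 * p3^T - p2^T
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize to (x, y, z)
```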
|
We revisit the problem of perturbing a large, i.i.d. random matrix by a
finite rank error. It is known that when elements of the i.i.d. matrix have
finite fourth moment, then the outlier eigenvalues of the perturbed matrix are
close to the outlier eigenvalues of the error, as long as the perturbation is
relatively small. We first prove that, under a mere second moment condition,
for a large class of perturbation matrices with bounded rank and bounded operator
norm, the outlier eigenvalues of the perturbed matrix still converge to those of
the perturbation. We then prove that for a matrix with i.i.d. Bernoulli $(d/n)$
entries or Bernoulli $(d_n/n)$ entries with $d_n=n^{o(1)}$, the same result
holds for perturbation matrices with a bounded number of nonzero elements.
|
We derive a new set of field equations within the framework of the Palatini
formalism. These equations are a natural generalization of the Einstein-Maxwell
equations which arise by adding a function $\mathcal{F}(\mathcal{Q})$, with
$\mathcal{Q}\equiv F^{\alpha\beta}F_{\alpha\beta}$, to the Palatini Lagrangian
$f(R,Q)$. The result we obtain can be viewed as the coupling of gravity with a
nonlinear extension of the electromagnetic field. In addition, a new method is
introduced to solve the algebraic equation associated with the Ricci tensor.
|
This paper presents a complex systems overview of a power grid network. In
recent years, concerns about the robustness of the power grid have grown
because of several cascading outages in different parts of the world. In this
paper, the cascading effect is simulated on three different networks, the
IEEE 300-bus test system, the IEEE 118-bus test system, and the WSCC 179-bus
equivalent model, using the DC Power Flow Model. Power Degradation has been
discussed as a measure to estimate the damage to the network, in terms of load
loss and node loss. A network generator has been developed to generate graphs
with characteristics similar to the IEEE standard networks and the generated
graphs are then compared with the standard networks to show the effect of
topology in determining the robustness of a power grid. Three mitigation
strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction,
and Use of Distributed Renewable Sources in combination with Islanding, have
been suggested. The Homogeneous Load Reduction is the simplest to implement but
the Targeted Range-Based Load Reduction is the most effective strategy.
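For readers unfamiliar with the DC Power Flow Model used in these cascade
simulations, here is a minimal textbook-style sketch (our own illustration, not
the paper's code): solve for bus voltage angles with the slack bus fixed, then
read off line flows, which a cascade simulation compares against line
capacities.

```python
import numpy as np

def dc_power_flow(B, P, slack=0):
    """Solve the DC power-flow equations B' theta = P with the slack bus
    angle fixed at zero. B: bus susceptance (Laplacian) matrix, P: net
    real-power injections per bus."""
    n = B.shape[0]
    keep = [i for i in range(n) if i != slack]
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])
    return theta

def line_flows(branches, theta):
    """Flow on each branch (i, j) with susceptance b_ij; a cascade step
    would trip branches whose |flow| exceeds capacity and re-solve."""
    return {(i, j): b * (theta[i] - theta[j]) for i, j, b in branches}
```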
|
We theoretically study single-electron loss from helium-like highly charged
ions, involving the excitation and decay of autoionizing states of the ion.
Electron loss is caused either by photoabsorption or by the interaction with a
fast atomic particle (a bare nucleus, a neutral atom, an electron). The
interactions with
the photon field and the fast particles are taken into account in the first
order of perturbation theory. Two initial states of the ion are considered:
$1s^2$ and $(1s2s)_{J=0}$. We analyze in detail how the shape of the emission
pattern depends on the atomic number $Z_{I}$ of the ion, discussing, in
particular, the interrelation between electron loss via photoabsorption and
that due to the impact of atomic particles in collisions at modest relativistic and
extreme relativistic energies. According to our results, in electron loss from
the $1s^2$ state autoionization may substantially influence the shape of the
emission spectra only up to $Z_{I} \approx 35-40$. A much more prominent role
is played by autoionization in electron loss from $(1s2s)_{J=0}$, where it not
only strongly affects the shape of the emission pattern but also may
substantially increase the total loss cross section.
|
Our paper offers an analysis of how Dante describes the tre giri ("three
rings") of the Holy Trinity in Paradiso 33 of the Divine Comedy. We point to
the myriad possibilities Dante may have been envisioning when he describes his
vision of God at this final stage in his journey. Saiber focuses on the
features of shape, motion, size, color, and orientation that Dante details in
describing the Trinity. Mbirika uses mathematical tools from topology
(specifically, knot theory) and combinatorics to analyze all the possible
configurations that have a specific layout of three intertwining circles which
we find particularly compelling given Dante's description of the Trinity: the
round figures arranged in a triangular format with rotational and reflective
symmetry. Of the many possible link patterns, we isolate two particularly
suggestive arrangements for the giri: the Brunnian link and the (3,3)-torus
link. These two patterns lend themselves readily to a Trinitarian model.
|
To satisfy the growing throughput demand of data-intensive applications, the
performance of optical communication systems increased dramatically in recent
years. With higher throughput, more advanced equalizers are crucial, to
compensate for impairments caused by inter-symbol interference (ISI). The
latest research shows that artificial neural network (ANN)-based equalizers are
promising candidates to replace traditional algorithms for high-throughput
communications. On the other hand, not only throughput but also flexibility is
a main objective of beyond-5G and 6G communication systems. A platform that is
able to satisfy the strict throughput and flexibility requirements of modern
communication systems is the field-programmable gate array (FPGA). Thus, in this
work, we present a high-performance FPGA implementation of an ANN-based
equalizer, which meets the throughput requirements of modern optical
communication systems. Further, our architecture is highly flexible since it
includes a variable degree of parallelism (DOP) and can therefore also be
applied to low-cost or low-power applications, as demonstrated for a
magnetic recording channel. The implementation is based on a cross-layer design
approach featuring optimizations from the algorithm down to the hardware
architecture, including a detailed quantization analysis. Moreover, we present
a framework to reduce the latency of the ANN-based equalizer under given
throughput constraints. As a result, the bit error ratio (BER) of our equalizer
for the optical fiber channel is around four times lower than that of a
conventional one, while the corresponding FPGA implementation achieves a
throughput of more than 40 GBd, outperforming a high-performance graphics
processing unit (GPU) by three orders of magnitude for a similar batch size.
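As a toy illustration of what such an ANN-based equalizer computes (a
deliberately simplified sketch; the channel taps, window length, and network
size are our assumptions and are unrelated to the paper's optimized FPGA
design):

```python
import numpy as np
import torch

# Toy ISI channel: BPSK symbols filtered by assumed channel taps plus noise.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 20_000)
x = 2.0 * bits - 1.0
h = np.array([1.0, 0.6, 0.3])                       # illustrative ISI taps
y = np.convolve(x, h, mode="same") + 0.1 * rng.normal(size=x.size)

W = 11                                              # equalizer input window
windows = np.lib.stride_tricks.sliding_window_view(y, W)
targets = x[W // 2 : W // 2 + windows.shape[0]]     # symbol at window center

net = torch.nn.Sequential(torch.nn.Linear(W, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
Xb = torch.tensor(windows, dtype=torch.float32)
tb = torch.tensor(targets, dtype=torch.float32).unsqueeze(1)
for _ in range(300):                                # full-batch training
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(Xb), tb)
    loss.backward()
    opt.step()

ber = ((net(Xb).detach().numpy().ravel() > 0) != (targets > 0)).mean()
print("toy BER after equalization:", ber)
```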
|
We are interested in the dynamics of a structured branching population where
the trait of each individual moves according to a Markov process. The rate of
division of each individual is a function of its trait and when a branching
event occurs, the trait of a descendant at birth depends on the trait of the
mother. We prove a law of large numbers for the empirical distribution of
ancestral trajectories. It ensures that the empirical measure converges to the
mean value of the spine which is a time-inhomogeneous Markov process describing
the trait of a typical individual along its ancestral lineage. Our approach
relies on ergodicity arguments for this time-inhomogeneous Markov process. We
apply this technique on the example of a size-structured population with
exponential growth in varying environment.
|
A search for high-energy neutrinos coming from the direction of the Sun has
been performed using the data recorded by the ANTARES neutrino telescope during
2007 and 2008. The neutrino selection criteria have been chosen to maximize the
selection of possible signals produced by the self-annihilation of weakly
interacting massive particles accumulated in the centre of the Sun with respect
to the atmospheric background. After data unblinding, the number of neutrinos
observed towards the Sun was found to be compatible with background
expectations. The $90\%$ CL upper limits in terms of spin-dependent and
spin-independent WIMP-proton cross-sections are derived and compared to
predictions of two supersymmetric models, CMSSM and MSSM-7. The ANTARES limits
are competitive with those obtained by other neutrino observatories and are
more stringent than those obtained by direct search experiments for the
spin-dependent WIMP-proton cross-section.
|
The probability density function of stochastic differential equations is
governed by the Fokker-Planck (FP) equation. A novel machine learning method is
developed to solve the general FP equations based on deep neural networks. The
proposed algorithm does not require any interpolation or coordinate
transformation, which distinguishes it from traditional numerical methods. The
main novelty of this paper is that penalty factors are introduced to overcome
local optima in the deep learning approach, and the corresponding setting rules
are given. Meanwhile, we impose a normalization condition as a supervision
condition to prevent the trial solution from collapsing to zero.
Several numerical examples are presented to illustrate the performance of the
proposed algorithm, including one- and two-dimensional systems. All the results
suggest that deep learning is quite feasible and effective for solving the
FP equation. Further, the influences of the number of hidden layers, the penalty
factors, and the optimization algorithm are discussed in detail. These results
indicate that the performances of the machine learning technique can be
improved through constructing the neural networks appropriately.
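A minimal sketch of the approach, as we understand it from the abstract, for a
stationary one-dimensional FP equation; the drift, diffusion coefficient,
architecture, and penalty weight below are illustrative assumptions, not the
paper's setup:

```python
import torch

# Stationary 1D Fokker-Planck: d/dx [ f(x) p - (D/2) dp/dx ] = 0, with a
# normalization penalty excluding the trivial solution p = 0.
f = lambda x: -x                      # assumed drift (OU-type process)
D = 1.0                               # assumed diffusion coefficient

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Softplus(),   # keep p(x) >= 0
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(-5.0, 5.0, 401).reshape(-1, 1).requires_grad_(True)
dx = 10.0 / 400

for step in range(2000):
    p = net(x)
    dp = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    flux = f(x) * p - 0.5 * D * dp                  # probability flux
    residual = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    pde_loss = (residual ** 2).mean()
    norm_loss = (p.sum() * dx - 1.0) ** 2           # normalization supervision
    loss = pde_loss + 100.0 * norm_loss             # penalty factor (assumed)
    opt.zero_grad()
    loss.backward()
    opt.step()
```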
|
We tentatively develop a model for $J/\Psi$ production in p+p, d+Au, Cu+Cu and
Au+Au collisions at RHIC energies, on the basic ansatz that the results of
nucleus-nucleus collisions can be obtained from nucleon-nucleon (p+p)
interactions by incorporating some additional specific features of high energy
nuclear collisions. Based on the proposed new
and somewhat unfamiliar model, we have tried (i) to capture the properties of
invariant $p_T$ -spectra for $J/\Psi$ meson production; (ii) to study the
nature of centrality dependence of the $p_T$ -spectra; (iii) to understand the
rapidity distributions; (iv) to obtain the characteristics of the average
transverse momentum $< p_T >$ and the values of $< p_T^2 >$ as well and (v) to
trace the nature of nuclear modification factor. The alternative approach
adopted here describes the data-sets on the above-mentioned various observables
in a fairly satisfactory manner. Finally, the nature of $J/\Psi$ production at
Large Hadron Collider (LHC) energies, deduced on the basis of our chosen model,
is presented in a predictive way against the RHIC yields, both calculated for
the most central collisions and within the same model.
|
We consider cosmological consequences of string theory tachyon condensation.
We show that it is very difficult to obtain inflation in the simplest versions
of this theory. Typically, inflation in these theories could occur only at
super-Planckian densities, where the effective 4D field theory is inapplicable.
Reheating and creation of matter in models where the tachyon potential V(T) has
a minimum at infinitely large T is problematic because the tachyon field in
such theories does not oscillate. If the universe after inflation is dominated
by the energy density of the tachyon condensate, it will always remain
dominated by the tachyons. It might happen that string condensation is
responsible for a short stage of inflation at a nearly Planckian density, but
one would need to have a second stage of inflation after that. This would imply
that the tachyon played no role in the post-inflationary universe until the
very late stages of its evolution. These problems do not appear in the recently
proposed models of hybrid inflation where the complex tachyon field has a
minimum at T << M_p.
|
Although interference alignment (IA) can theoretically achieve the optimal
degrees of freedom (DoFs) in the $K$-user Gaussian interference channel, its
direct application comes at the prohibitive cost of precoding over
exponentially-many signaling dimensions. On the other hand, it is known that
practical "one-shot" IA precoding (i.e., linear schemes without symbol
expansion) provides a vanishing DoFs gain in large fully-connected networks
with generic channel coefficients. In our previous work, we introduced the
concept of "Cellular IA" for a network topology induced by hexagonal cells with
sectors and nearest-neighbor interference. Assuming that neighboring sectors
can exchange decoded messages (and not received signal samples) in the uplink,
we showed that linear one-shot IA precoding over $M$ transmit/receive antennas
can achieve the optimal $M/2$ DoFs per user. In this paper we extend this
framework to networks with omni-directional (non-sectorized) cells and consider
the practical scenario where users have $2$ antennas, and base-stations have
$2$, $3$ or $4$ antennas. In particular, we provide linear one-shot IA schemes
for the $2\times 2$, $2\times3$ and $2\times 4$ cases, and show the
achievability of $3/4$, $1$ and $7/6$ DoFs per user, respectively. DoFs
converses for one-shot schemes require the solution of a discrete optimization
problem over a number of variables that grows with the network size. We develop
a new approach to transform such a challenging optimization problem into a
tractable linear program (LP) with significantly fewer variables. This approach
is used to show that the achievable $3/4$ DoFs per user are indeed optimal for
a large (extended) cellular network with $2\times 2$ links.
|
We estimate photospheric velocities of Type II-P supernovae using model
spectra created with SYNOW, and compare the results with those obtained by more
conventional techniques, such as cross-correlation, or measuring the absorption
minimum of P Cygni features. Based on a sample of 81 observed spectra of 5 SNe,
we show that SYNOW provides velocities that are similar to ones obtained by
more sophisticated NLTE modeling codes, but they can be derived in a less
computation-intensive way. The estimated photospheric velocities (v_model) are
compared to ones measured from Doppler-shifts of the absorption minima of the
Hbeta and the FeII \lambda5169 features.
Our results confirm that the FeII velocities (v_Fe) have tighter and more
homogeneous correlation with the estimated photospheric velocities than the
ones measured from Hbeta, but both suffer from phase-dependent systematic
deviations from those. The same is true for comparison with the
cross-correlation velocities. We verify and improve the relations between v_Fe,
v_Hbeta and v_model in order to provide useful formulae for
interpolating/extrapolating the velocity curves of Type II-P SNe to phases not
covered by observations. We also discuss the implications of our results for
the distance measurements of Type II-P SNe, and show that the application of
the model velocities is preferred in the Expanding Photosphere Method.
|
We have studied the \phi(1020)f_0(980) S-wave scattering at energies around
threshold employing chiral Lagrangians coupled to vector mesons through minimal
coupling. The interaction kernel is obtained by considering the f_0(980) as a
K\bar{K} bound state. The Y(2175) resonance is generated in this approach by
the self-interactions between the \phi(1020) and the f_0(980) resonances. We
are able to describe the recent e^+e^-\to \phi(1020)f_0(980) scattering data,
thereby testing our scattering amplitudes experimentally, and conclude that the
Y(2175) resonance has a large \phi(1020)f_0(980) meson-meson component.
|
The most luminous quasars (with bolometric luminosities of order 1E47 erg/s)
show a high prevalence of CIV {\lambda}1549 and [OIII]{\lambda}{\lambda}4959,5007
emission line profiles with strong blueshifts. Blueshifts are interpreted as
due to the Doppler effect and selective obscuration, and indicate outflows
occurring over a wide range of spatial scales. We found evidence in favor of
the nuclear origin of the outflows diagnosed by [OIII]{\lambda}{\lambda}
4959,5007. The ionized gas mass, kinetic power, and mechanical thrust are
extremely high, and suggest widespread feedback effects on the host galaxies of
very luminous quasars, at cosmic epochs between 2 and 6 Gyr from the Big Bang.
In this mini-review we summarize results obtained by our group and reported in
several major papers in the last few years with an eye on challenging aspects
of quantifying feedback effects in large samples of quasars.
|
We propose a novel time window-based analysis technique to investigate the
convergence properties of the stochastic gradient descent method with momentum
(SGDM) in nonconvex settings. Despite its popularity, the convergence behavior
of SGDM remains less understood in nonconvex scenarios. This is primarily due
to the absence of a sufficient descent property and challenges in
simultaneously controlling the momentum and stochastic errors in an almost sure
sense. To address these challenges, we investigate the behavior of SGDM over
specific time windows, rather than examining the descent of consecutive
iterates as in traditional studies. This time window-based approach simplifies
the convergence analysis and enables us to establish the first iterate
convergence result for SGDM under the Kurdyka-Lojasiewicz (KL) property. We
further provide local convergence rates which depend on the underlying KL
exponent and the utilized step size schemes.
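For orientation, the iteration under study can be written in one standard
normalization (our assumption; specific works differ in how the momentum term
is scaled):

```latex
% Stochastic gradient descent with momentum (heavy-ball form assumed):
\begin{align}
  m^{k+1} &= \beta\, m^{k} + g^{k}, \qquad
            \mathbb{E}\big[g^{k}\,\big|\,x^{k}\big] = \nabla f(x^{k}),\\
  x^{k+1} &= x^{k} - \alpha_k\, m^{k+1},
\end{align}
% with momentum parameter beta in [0, 1) and step sizes alpha_k; the
% time-window analysis tracks descent over blocks of iterations rather
% than between consecutive iterates.
```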
|
We perform error analyses explaining some previously mysterious phenomena
arising in numerical computation of the Evans function, in particular (i) the
advantage of centered coordinates for exterior product and related methods, and
(ii) the unexpected stability of the (notoriously unstable) continuous
orthogonalization method of Drury in the context of Evans function
applications. The analysis in both cases centers around a numerical version of
the gap lemma of Gardner--Zumbrun and Kapitula--Sandstede, giving uniform error
estimates for apparently ill-posed projective boundary-value problems with
asymptotically constant coefficients, so long as the rate of convergence of
coefficients is greater than the "badness" of the boundary projections as
measured by negative spectral gap. In the second case, we use also the simple
but apparently previously unremarked observation that the Drury method is in
fact (neutrally) stable when used to approximate an unstable subspace, so that
continuous orthogonalization and the centered exterior product method are
roughly equally well-conditioned as methods for Evans function approximation.
The latter observation makes possible an extremely simple nonlinear
boundary-value method for possible use in large-scale systems, extending ideas
suggested by Sandstede. We suggest also a related linear method based on the
conjugation lemma of M\'etivier--Zumbrun, an extension of the gap lemma
mentioned above.
|
Photoelectron emission from excited states of laser-dressed atomic helium is
analyzed with respect to laser intensity-dependent excitation energy shifts and
angular distributions. In the two-color XUV (extreme ultraviolet) -- IR
(infrared) measurement, the XUV photon energy is scanned between
\SI{20.4}{\electronvolt} and the ionization threshold at
\SI{24.6}{\electronvolt}, revealing electric dipole-forbidden transitions for a
temporally overlapping IR pulse ($\sim\!\SI{e12}{\watt\per
\centi\meter\squared}$). The interpretation of the experimental results is
supported by numerically solving the time-dependent Schr\"odinger equation in a
single-active-electron approximation.
|
The sign coherence phenomenon is an important feature of c-vectors in cluster
algebras with principal coefficients. In this note, we consider a more general
version of c-vectors defined for arbitrary cluster algebras of geometric type
and formulate a conjecture describing their asymptotic behavior. This
conjecture, which is called the asymptotic sign coherence conjecture, states
that for any infinite sequence of matrix mutations that satisfies certain
natural conditions, the corresponding c-vectors eventually become sign
coherent. We prove this conjecture for rank 2 cluster algebras of infinite type
and for a particular sequence of mutations in a cluster algebra associated with
the Markov quiver.
|
Warm dark matter is consistent with the observations of the large-scale
structure, and it can also explain the cored density profiles on smaller
scales. However, it has been argued that warm dark matter could delay the star
formation. This does not happen if warm dark matter is made up of keV sterile
neutrinos, which can decay into X-ray photons and active neutrinos. The X-ray
photons have a catalytic effect on the formation of molecular hydrogen, the
essential cooling ingredient in the primordial gas. In all the cases we have
examined, the overall effect of sterile dark matter is to facilitate the
cooling of the gas and to reduce the minimal mass of the halo prone to
collapse. We find that the X-rays from the decay of keV sterile neutrinos
facilitate the collapse of the gas clouds and the subsequent star formation at
high redshift.
|
Multi-modal neuroimaging projects are advancing our understanding of human
brain architecture, function, and connectivity using high-quality non-invasive
data from many subjects. However, ground-truth validation of connectivity using
invasive tracers is not feasible in humans. Our NonHuman Primate Neuroimaging &
Neuroanatomy Project (NHP_NNP) is an international effort (6 laboratories in 5
countries) to: (i) acquire and analyze high-quality multi-modal brain imaging
data of macaque and marmoset monkeys using protocols and methods adapted from
the HCP; (ii) acquire quantitative invasive tract-tracing data for cortical and
subcortical projections to cortical areas; and (iii) map the distributions of
different brain cell types with immunocytochemical stains to better define
brain areal boundaries. We are acquiring high-resolution structural,
functional, and diffusion MRI data together with behavioral measures from over
100 individual macaques and marmosets in order to generate non-invasive
measures of brain architecture such as myelin and cortical thickness maps, as
well as functional and diffusion tractography-based connectomes. We are using
classical and next-generation anatomical tracers to generate quantitative
connectivity maps based on brain-wide counting of labeled cortical and
subcortical neurons, providing ground truth measures of connectivity. Advanced
statistical modeling techniques address the consistency of both kinds of data
across individuals, allowing comparison of tracer-based and non-invasive
MRI-based connectivity measures. We aim to develop improved cortical and
subcortical areal atlases by combining histological and imaging methods.
Finally, we are collecting genetic and sociality-associated behavioral data in
all animals in an effort to understand how genetic variation shapes the
connectome and behavior.
|
We study the structure and properties of vortices in a recently proposed
Abelian Maxwell-Chern-Simons model in $2+1$ dimensions. The model, which is
described by a gauge field interacting with a complex scalar field, includes two
parity and time violating terms: the Chern-Simons and the anomalous magnetic
terms. Self-dual relativistic vortices are discussed in detail. We also find
one dimensional soliton solutions of the domain wall type. The vortices are
correctly described by the domain wall solutions in the large flux limit.
|
In this work we present a modeling tool designed to estimate the hysteretic
losses in the coated-conductor coils of an electric generator during transient
operation. The model is based on a two-stage
segregated model approach that allows simulating the electric generator and the
current distribution in the superconducting coils using a one-way coupling from
the generator to the HTS coils model. The model has two inputs: the rotational
speed and the electric load signal. A homogeneous anisotropic bulk model for
the coils allows computing the current distribution in the coils. From this
distribution, the hysteretic losses are estimated. Beyond the interest on
providing an estimate on the global energy dissipation in the machine, in this
work we present a more detailed local analysis that allows addressing issues
such as coil design, critical current ratting, electric load change rate
limits, cryocooler design, identification of quench-prone regions and overall
transient performance.
|
We investigate whether the many-body ground states of bosons in a generalized
two-mode model with localized inhomogeneous single-particle orbitals and
anisotropic long-range interactions (e.g. dipole-dipole interactions), are
coherent or fragmented. It is demonstrated that fragmentation can take place in
a single trap for positive values of the interaction couplings, implying that
the system is potentially stable. Furthermore, the degree of fragmentation is
shown to be insensitive to small perturbations on the single-particle level.
|
The rise of deep learning algorithms has led many researchers to withdraw
from using classic signal processing methods for sound generation. Deep
learning models have achieved expressive voice synthesis, realistic sound
textures, and musical notes from virtual instruments. However, the most
suitable deep learning architecture is still under investigation. The choice of
architecture is tightly coupled to the audio representations. A sound's
original waveform can be too dense and rich for deep learning models to deal
with efficiently - and complexity increases training time and computational
cost. Also, it does not represent sound in the manner in which it is perceived.
Therefore, in many cases, the raw audio has been transformed into a compressed
and more meaningful form using down-sampling, feature extraction, or even by
adopting a higher-level representation of the waveform. Furthermore, depending
on the chosen form, additional conditioning representations, different model
architectures, and numerous metrics for evaluating the reconstructed sound have
been investigated. This paper provides an overview of audio representations
applied to sound synthesis using deep learning. Additionally, it presents the
most significant methods for developing and evaluating a sound synthesis
architecture using deep learning models, always depending on the audio
representation.
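As a small, hedged example of the kind of transformation surveyed here, moving
from a dense raw waveform to a compact time-frequency representation, one might
compute a log-mel spectrogram (parameter values are common defaults, not
recommendations from this overview):

```python
import librosa

# Load a bundled demo recording and compute a log-mel spectrogram.
y, sr = librosa.load(librosa.ex("trumpet"))
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel)       # perceptually motivated compression
print(y.shape, "->", log_mel.shape)      # dense waveform -> compact features
```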
|
The Galois lattice is a graphic method of representing knowledge structures.
Our first aim in this paper is to introduce a new class of Galois
lattices, called graded Galois lattices. As a direct result, one obtains the
notion of graded closed itemsets (sets of items), extending the definition of
closed itemsets. Our second goal in this paper is to establish a constructive
method for computing the graded formal concepts and graded closed itemsets. By
a constructive method, we mean a method that builds up a complete solution from
scratch by sequentially adding components to a partial solution until the
solution is complete. Besides the computational aspects, our methods in this
paper are based on strong results obtained via special mappings in the realm of
domain theory. To reach these consequences and constructive algorithms, we need
to push the study to the structures of Banach lattices.
|
By quantising the gravitational dynamics, space and time are usually forced
to play fundamentally different roles. This raises the question of whether
physically relevant configurations could also exist which would not admit a
space-time splitting. This has led to the investigation of an approach not
based on quantum dynamical assumptions. The assumptions are mainly restricted
to a constrained statistical concept of ordered partitions (NDA). For the time
being, the continuum description is restricted in order to allow the
application of the rules of differential geometry. It is verified that NDA
yields equations of the same form as general relativity and quantum field
theory for 3+1 dimensions and within the limits of experimental evidence. The
derivations are shown in detail. First results are compared to the path
integral approach to quantum gravity.
|
We studied the cosmological constraints on the Galileon gravity obtained from
observational data of the growth rate of matter density perturbations, the
supernovae Ia (SN Ia), the cosmic microwave background (CMB), and baryon
acoustic oscillations (BAO). For the same value of the energy density parameter
of matter $\Omega_{m,0}$, the growth rate $f$ in Galileon models is enhanced,
relative to the $\Lambda$CDM case, because of an increase in Newton's constant.
The smaller $\Omega_{m,0}$ is, the more the growth rate is suppressed. Therefore,
the best fit value of $\Omega_{m,0}$ in the Galileon model, based only on the
growth rate data, is quite small. This is incompatible with the value of
$\Omega_{m,0}$ obtained from the combination of SN Ia, CMB, and BAO data. On
the other hand, in the $\Lambda$CDM model, the values of $\Omega_{m,0}$
obtained from different observational data sets are consistent. In the analysis
of this paper, we found that the Galileon model is less compatible with
observations than the $\Lambda$CDM model. This result seems to be qualitatively
the same in most of the generalized Galileon models in which Newton's constant
is enhanced.
|
Solving evolutionary equations in a parallel-in-time manner is an attractive
topic, and many algorithms have been proposed in the last two decades. The
algorithm based on the block $\alpha$-circulant preconditioning technique has
shown promising advantages, especially for wave propagation problems. Using the
fast Fourier transform to factorize the involved circulant matrices, the preconditioned
iteration can be computed efficiently via the so-called diagonalization
technique, which yields a direct parallel implementation across all time
levels. In recent years, considerable efforts have been devoted to exploring
the convergence of the preconditioned iteration by studying the spectral radius
of the iteration matrix, and this leads to many case-by-case studies depending
on the used time-integrator. In this paper, we propose a unified convergence
analysis for the algorithm applied to $u'+Au=f$, where
$\sigma(A)\subset\mathbb{C}^+$ with $\sigma(A)$ being the spectrum of
$A\in\mathbb{C}^{m\times m}$. For any one-step method (such as the Runge-Kutta
methods) with stability function $\mathcal{R}(z)$, we prove that the decay rate
of the global error is bounded by $\alpha/(1-\alpha)$, provided the method is
stable, i.e., $\max_{\lambda\in\sigma(A)}|\mathcal{R}(\Delta t\lambda)|\leq1$.
For any linear multistep method, such a bound becomes $c\alpha/(1-c\alpha)$,
where $c\geq1$ is a constant specified by the multistep method itself. Our
proof only relies on the stability of the time-integrator and the estimate is
independent of the step size $\Delta t$ and the spectrum $\sigma(A)$.
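To make the diagonalization technique concrete, here is a serial NumPy sketch
of solving a system with an alpha-circulant matrix via FFTs (our own
illustration; in an actual parallel-in-time code the block version of the
diagonal-solve step runs concurrently across all time levels):

```python
import numpy as np

def alpha_circulant_solve(c, alpha, b):
    """Solve C_alpha x = b, where C_alpha is the alpha-circulant matrix
    with first column c, using the factorization
        C_alpha = D^{-1} F^{-1} diag(fft(D c)) F D,
    with D = diag(alpha^(k/n)), k = 0,...,n-1, and F the DFT."""
    n = len(c)
    d = alpha ** (np.arange(n) / n)            # scaling matrix D (as a vector)
    lam = np.fft.fft(d * c)                    # eigenvalues of D C_alpha D^{-1}
    y = np.fft.fft(d * b) / lam                # forward FFT + diagonal solve
    return np.real_if_close(np.fft.ifft(y) / d)  # inverse FFT + unscale
```

Each of the three steps has a block counterpart in the time-parallel algorithm,
which is why the preconditioned iteration parallelizes across all time levels.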
|
We consider the mixed problem on the exterior of the unit ball in
$\mathbb{R}^{n}$, $n\ge2$, for a defocusing Schr\"{o}dinger equation with a
power nonlinearity $|u|^{p-1}u$, with zero boundary data. Assuming that the
initial data are non radial, sufficiently small perturbations of \emph{large}
radial initial data, we prove that for all powers $p>n+6$ the solution exists
for all times, its Sobolev norms do not inflate, and the solution is unique in
the energy class.
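Explicitly, the mixed problem described above reads (defocusing sign convention
assumed):

```latex
\begin{equation}
  i\,\partial_t u + \Delta u = |u|^{p-1} u
  \quad \text{on } \mathbb{R} \times \big(\mathbb{R}^n \setminus \overline{B_1}\big),
  \qquad u\big|_{\partial B_1} = 0, \qquad u\big|_{t=0} = u_0,
\end{equation}
% with u_0 a small non-radial perturbation of large radial data, p > n + 6.
```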
|
Stimulated emission in small-molecule organic films at a high dye
concentration is generally hindered by fluorescence quenching, especially in
the red region of the spectrum. Here we demonstrate the achievement of high net
gains (up to 50 cm-1) around 640 nm in thermally evaporated non-doped films of
4-di(4'-tert-butylbiphenyl-4-yl)amino-4'-dicyanovinylbenzene, which makes this
material suitable for green-light pumped single-mode organic lasers with low
threshold and superior stability. The lasing effect is demonstrated in a DBR
resonator configuration, as well as in the form of random lasing at high
pump intensities.
|
High order cumulant tensors carry information about statistics of
non-normally distributed multivariate data. In this work we present a new
efficient algorithm for the calculation of cumulants of arbitrary order in a
sliding window for data streams. We show that this algorithm enables speedups
of cumulant updates compared to current algorithms. This algorithm can be used
for processing on-line high-frequency multivariate data and can find
applications in, e.g., on-line signal filtering and classification of data
streams.
To present an application of this algorithm, we propose an estimator of
non-Gaussianity of a data stream based on the norms of high-order cumulant
tensors.
We show how to detect the transition from Gaussian-distributed data to
non-Gaussian data in a data stream. In order to achieve high implementation
efficiency of operations on super-symmetric tensors, such as cumulant tensors,
we employ the block structure to store and calculate only one hyper-pyramid
part of such tensors.
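To illustrate the sliding-window idea at the lowest nontrivial order (our own
second-order sketch; the algorithm in this work handles tensors of arbitrary
order with the block storage scheme described above):

```python
import numpy as np

class SlidingCovariance:
    """Sliding-window update of the second-order cumulant (covariance):
    add the incoming observation, subtract the outgoing one, instead of
    recomputing the moment sums from scratch."""
    def __init__(self, window):
        self.window = np.asarray(window, dtype=float)  # shape (n, d)
        self.s1 = self.window.sum(axis=0)              # running sum of x
        self.s2 = self.window.T @ self.window          # running sum of x x^T

    def update(self, x_new):
        x_old, self.window = self.window[0].copy(), \
            np.vstack([self.window[1:], x_new])
        self.s1 += x_new - x_old
        self.s2 += np.outer(x_new, x_new) - np.outer(x_old, x_old)
        n = self.window.shape[0]
        m = self.s1 / n
        return self.s2 / n - np.outer(m, m)            # covariance cumulant
```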
|
We use a large data-set of realistic synthetic observations (Paper I) to
assess how observational techniques affect the measurement of physical
properties of star-forming regions. In this paper (Paper II), we explore the
reliability of the measured total gas mass, dust surface density and dust
temperature maps derived from modified blackbody fitting of synthetic Herschel
observations. We found from our pixel-by-pixel analysis of the measured dust
surface density and dust temperature a worrisome error spread especially close
to star-formation sites and low-density regions, where for those "contaminated"
pixels the surface densities can be under/overestimated by up to three orders
of magnitude. In light of this, we recommend to treat the pixel-based results
from this technique with caution in regions with active star formation. In
regions of high background typical in the inner Galactic plane, we are not able
to recover reliable surface density maps of individual synthetic regions, since
low-mass regions are lost in the FIR background. When measuring the total gas
mass of regions in moderate background, we find that modified blackbody fitting
works well (absolute error: +9%; -13%) up to 10 kpc distance (errors increase
with distance). Commonly, the initial images are convolved to the largest common
beam-size, which smears contaminated pixels over large areas. The resulting
information loss makes this commonly-used technique less verifiable as now
chi^2-values cannot be used as a quality indicator of a fitted pixel. Our
control measurements of the total gas mass (without the step of convolution to
the largest common beam size) produce similar results (absolute error: +20%;
-7%) while having much lower median errors, especially for the high-mass stellar
feedback phase. In upcoming papers (III & IV) we test the reliability of the
measured star-formation rate with direct and indirect techniques.
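For reference, the per-pixel model being fitted is the standard optically thin
modified blackbody (the opacity normalization and emissivity index are
survey-dependent choices, assumed here):

```latex
\begin{equation}
  S_\nu = \kappa_\nu\, \Sigma\, B_\nu(T_{\mathrm{dust}}),
  \qquad \kappa_\nu = \kappa_0 \left(\nu/\nu_0\right)^{\beta},
\end{equation}
% S_nu: surface brightness; Sigma: dust surface density; B_nu: Planck
% function. Sigma and T_dust are the fitted maps discussed above.
```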
|
The Fermi/LAT collaboration recently reported the detection of starburst
galaxies in the high energy gamma-ray domain, as well as of radio-loud narrow-line
Seyfert 1 objects. Motivated by the presence of sources close to the location
of composite starburst/Seyfert 2 galaxies in the first year Fermi/LAT
catalogue, we aim at studying high energy gamma-ray emission from such objects,
and at disentangling the emission of starburst and Seyfert activity. We
analysed 1.6 years of Fermi/LAT data from NGC 1068 and NGC 4945, which count
among the brightest Seyfert 2 galaxies. We search for potential variability of
the high energy signal, and derive a spectrum of these sources. We also analyse
public INTEGRAL IBIS/ISGRI data over the last seven years to derive their hard
X-ray spectrum. We find an excess of high energy gamma-rays of 8.3 sigma and
9.2 sigma for 1FGL J0242.7+0007 and 1FGL J1305.4-4928, which are found to be
consistent with the position of the Seyfert 2 galaxies NGC 1068 and NGC 4945,
respectively. The energy spectrum of the sources can be described by a power
law with a photon index of Gamma=2.31 \pm 0.13 for NGC 1068, while for NGC
4945, we obtain a photon index of Gamma=2.31 \pm 0.10. For both sources, we
detect neither significant variability nor any indication of curvature in the
spectrum. We discuss the origin of the high energy emission of these objects in
the context of Seyfert or starburst activity. While the emission of NGC 4945 is
consistent with starburst activity, that of NGC 1068 is an order of magnitude
above expectations, suggesting dominant emission from the active nucleus. We
show that a leptonic scenario can account for the multi-wavelength spectral
energy distribution of NGC 1068.
|
We have carried out an exercise in the classification of W+W- and ttbar
events as produced in a high-energy proton-proton collider, motivated in part
by the current tension between the measured and predicted values of the WW
cross section. The performance of the random forest classifier surpasses that
of a standard cut-based analysis. Furthermore, the distortion of the
distributions of key kinematic event features is relatively slight, suggesting
that systematic uncertainties due to modeling might be reduced. Finally, our
random forest can tolerate missing features such as missing transverse energy
without a severe degradation of its performance.
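
A hedged sketch of this kind of event classification (toy kinematic features
and distributions assumed, not the authors' data or code) could look as
follows; median imputation stands in for the forest's tolerance of missing
features such as missing transverse energy.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10000
# Toy kinematic features (lepton pT, |eta|, missing ET) for the two classes
X_ww = rng.normal([40.0, 1.0, 30.0], [15.0, 0.8, 20.0], size=(n, 3))
X_tt = rng.normal([60.0, 1.6, 60.0], [20.0, 0.9, 30.0], size=(n, 3))
X = np.vstack([X_ww, X_tt])
y = np.repeat([0, 1], n)                      # 0 = WW, 1 = ttbar

X[rng.random(len(X)) < 0.1, 2] = np.nan       # drop missing ET for 10% of events
X = SimpleImputer(strategy="median").fit_transform(X)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))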
|
Let $A$ be a finite subset of $L^2(\mathbb{R})$ and $p,q\in\mathbb{N}$. We
characterize the Schauder basis properties in $L^2(\mathbb{R})$ of the Gabor
system $$G(1,p/q,A)=\{e^{2\pi i m x}g(x-np/q) : m,n\in \mathbb{Z}, g\in A\},$$
with a specific ordering on $\mathbb{Z}\times \mathbb{Z}\times A$. The
characterization is given in terms of a Muckenhoupt matrix $A_2$ condition on
an associated Zibulski-Zeevi type matrix.
|
3350 objects from the Sixth catalog of orbits of visual binary stars (ORB6)
are investigated to validate Gaia EDR3 parallaxes and provide mass estimates
for the systems. We show that 2/3 of binaries with 0.2 - 0.5 arcsec separation
are left without a parallax solution in EDR3. Special attention is paid to
521 pairs with the parallax known separately for both components. We find 16
entries that are deemed to be chance alignments of unrelated stars. At the
same time, we show examples of high-confidence binary systems with
significant differences in
the reported parallaxes of their components. Next we conclude that the reported
Gaia EDR3 parallax errors are underestimated, at least by a factor of 3 for
sources with large RUWE. Parallaxes are needed to estimate stellar masses.
Since nearly 30\% of ORB6 entries lack a 5- or 6-parameter solution in EDR3, we
attempt to enrich the astrometric data. Distant companions of ORB6 entries are
revealed in EDR3 by analysis of stellar proper motions and Hipparcos
parallaxes. Notably, in 28 cases intrinsic EDR3 parallaxes of the binary
components appear to be less reliable than the parallax of the outer
companions. Gaia DR2, TGAS and Hipparcos parallaxes are used when EDR3 data
are unavailable. A synthetic mass-luminosity relation in the G band for main
sequence stars is obtained to provide mass estimates along with dynamical
masses
calculated via Kepler's Third Law.
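
The dynamical mass estimate mentioned last is a one-line application of
Kepler's Third Law; the sketch below (with made-up orbital elements) shows the
unit bookkeeping.

def total_mass_msun(a_arcsec, plx_arcsec, period_yr):
    # Kepler's Third Law in solar units: M1 + M2 = a_AU**3 / P_yr**2
    a_au = a_arcsec / plx_arcsec        # angular -> linear semimajor axis (AU)
    return a_au ** 3 / period_yr ** 2

# Made-up orbital elements: a = 0.5", parallax = 50 mas, P = 30 yr
print(total_mass_msun(0.5, 0.050, 30.0))   # ~1.11 solar masses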
|
Model-based quantum optimal control promises to solve a wide range of
critical quantum technology problems within a single, flexible framework. The
catch is that highly-accurate models are needed if the optimized controls are
to meet the exacting demands set by quantum engineers. A practical alternative
is to directly calibrate control parameters by taking device data and tuning
until success is achieved. In quantum computing, gate errors due to inaccurate
models can be efficiently polished if the control is limited to a few (usually
hand-designed) parameters; however, an alternative tool set is required to
enable efficient calibration of the complicated waveforms potentially returned
by optimal control. We propose an automated model-based framework for
calibrating quantum optimal controls called Learning Iteratively for Feasible
Tracking (LIFT). LIFT achieves high-fidelity controls despite parasitic model
discrepancies by precisely tracking feasible trajectories of quantum
observables. Feasible trajectories are set by combining black-box optimal
control and the bilinear dynamic mode decomposition, a physics-informed
regression framework for discovering effective Hamiltonian models directly from
rollout data. Any remaining tracking errors are eliminated in a non-causal way
by applying model-based, norm-optimal iterative learning control to subsequent
rollout data. We use numerical experiments of qubit gate synthesis to
demonstrate how LIFT enables calibration of high-fidelity optimal control
waveforms in spite of model discrepancies.
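
As a toy illustration of the iterative-learning-control ingredient of LIFT
(not the authors' implementation; a generic lifted linear model y = G u with
assumed dimensions stands in for the quantum dynamics), a norm-optimal ILC
update can be sketched as:

import numpy as np

rng = np.random.default_rng(1)
T = 50
G_model = np.tril(0.1 * rng.normal(size=(T, T))) + np.eye(T)   # nominal model
G_true = G_model + 0.02 * np.tril(rng.normal(size=(T, T)))     # "real" system
y_ref = np.sin(np.linspace(0.0, np.pi, T))                     # target trajectory

w = 1e-2                                       # control-change penalty
L = np.linalg.solve(G_model.T @ G_model + w * np.eye(T), G_model.T)

u = np.zeros(T)
for k in range(20):
    e = y_ref - G_true @ u                     # error from a rollout
    u = u + L @ e                              # norm-optimal ILC update
print("final tracking error:", np.linalg.norm(y_ref - G_true @ u))

The point mirrors the abstract: the update uses only the nominal model, yet
repeated rollouts on the discrepant "true" system drive the tracking error
down.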
|
Quantum key distribution (QKD), which enables information-theoretic security,
is now heading towards quantum secure networks. It requires high-performance
and cost-effective protocols while increasing the number of users.
Unfortunately, qubit-implemented protocols only allow one receiver to respond
to the prepared signal at a time, and thus cannot natively support multiple
users or fully satisfy network demands. Here, we show a 'protocol solution'
using continuous-variable quantum information. A coherent-state
point-to-multipoint protocol is proposed to simultaneously support multiple
independent QKD links between a single transmitter and massive receivers. Every
prepared coherent state is measured by all receivers to generate raw keys, then
processed with a secure and highly efficient key distillation method to remove
the correlations between different QKD links. It can achieve remarkably high
key rates even with a hundred access points, and shows a potential
improvement of two orders of magnitude. This scheme is a promising step towards
a high-rate multi-user solution in a scalable quantum secure network.
|
The article is devoted to the integration order replacement technique for
iterated Ito stochastic integrals and iterated stochastic integrals with
respect to martingales. We consider the class of iterated Ito stochastic
integrals for which, with probability 1, the formulas of integration order
replacement corresponding to the rules of classical integral calculus hold.
The theorems on integration order replacement for this class of iterated Ito
stochastic integrals are proven. Many examples of the application of these
theorems are considered. These results are generalized to the class of
iterated stochastic integrals with respect to martingales.
|
Compressed sensing is a central topic in signal processing with myriad
applications, where the goal is to recover a signal from as few observations as
possible. Iterative re-weighting is one of the fundamental tools to achieve
this goal. This paper re-examines the iteratively reweighted least squares
(IRLS) algorithm for sparse recovery proposed by Daubechies, DeVore, Fornasier,
and G\"unt\"urk in \emph{Iteratively reweighted least squares minimization for
sparse recovery}, {\sf Communications on Pure and Applied Mathematics}, {\bf
63}(2010) 1--38. Under the null space property of order $K$, the authors show
that their algorithm converges to the unique $k$-sparse solution for $k$
strictly bounded above by a value strictly less than $K$, and this $k$-sparse
solution coincides with the unique $\ell_1$ solution. On the other hand, it is
known that, for $k$ less than or equal to $K$, the $k$-sparse and $\ell_1$
solutions are unique and coincide. The authors emphasize that their proof
method does not apply for $k$ sufficiently close to $K$, and remark that they
were unsuccessful in finding an example where the algorithm fails for these
values of $k$.
In this note we construct a family of examples where the
Daubechies-DeVore-Fornasier-G\"unt\"urk IRLS algorithm fails for $k=K$, and
provide a modification to their algorithm that provably converges to the unique
$k$-sparse solution for $k$ less than or equal to $K$ while preserving the
local linear rate. The paper includes numerical studies of this family as
well as of the modified IRLS algorithm, testing their robustness to
perturbations and to parameter selection.
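
For reference, the basic IRLS iteration discussed here has the following
shape; the epsilon-decay rule below is a simplified stand-in for the update
used by Daubechies et al., so this sketch illustrates the scheme rather than
reproducing the paper's algorithm or the failure family constructed above.

import numpy as np

def irls(A, y, n_iter=100, eps=1.0):
    # Weighted least squares step: x = D A^T (A D A^T)^{-1} y,
    # with D = diag(sqrt(x_i^2 + eps^2)) smoothing the l1 weights.
    x = A.T @ np.linalg.solve(A @ A.T, y)      # least-norm initialization
    for _ in range(n_iter):
        D = np.sqrt(x ** 2 + eps ** 2)
        AD = A * D                             # A @ diag(D), column scaling
        x = D * (A.T @ np.linalg.solve(AD @ A.T, y))
        eps = max(eps * 0.9, 1e-9)             # simplified decay (assumed)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x0 = np.zeros(100)
x0[[3, 17, 42]] = [1.0, -2.0, 0.5]             # 3-sparse ground truth
x_hat = irls(A, A @ x0)
print(np.round(x_hat[[3, 17, 42]], 3))         # recovers the sparse support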
|
Tree level unitarity violations of extra dimensional extensions of the
Standard Model may become much stronger when the scalar sector is included in
the bulk. This effect occurs when the couplings are not suppressed for larger
Kaluza-Klein levels, and could have relevant consequences for the phenomenology
of the next generation of colliders. We briefly review our formalism to obtain
more stringent unitarity bounds when KK modes are present, as well as the
generalization to extra dimensions of the Equivalence Theorem between Goldstone
bosons and longitudinal gauge bosons.
|
We study the category of KM fans - a "stacky" generalization of the category
of fans considered in toric geometry - and its various realization functors to
"geometric" categories. The "purest" such realization takes the form of a
functor from KM fans to the 2-category of stacks over the category of fine
fans, in the "characteristic-zero-\'etale" topology. In the algebraic setting,
over a field of characteristic zero, we have a realization functor from KM fans
to (log) Deligne-Mumford stacks. We prove that this realization functor gives
rise to an equivalence of categories between (lattice) KM fans and an
appropriate category of toric DM stacks. Finally, we have a differential
realization functor to the category of (positive) log differentiable spaces.
Unlike the other realizations, the differential realization of a stacky fan is
an "actual" log differentiable space, not a stack. Our main results are
generalizations of "classical" toric geometry, as well as a characterization of
"when a map of KM fans is a torsor". The latter is used to explain the
relationship between our theory and the "stacky fans" of Geraschenko and
Satriano.
|
The scanning Kelvin probe is a tool that allows for the contactless
evaluation of contact potential differences in a range of materials, permitting
the indirect determination of surface properties such as work function or Fermi
levels. In this paper, we derive the equations governing the operation of a
Kelvin probe and describe the implementation of the off-null method for
contact potential difference determination. We conclude with a short
discussion of design considerations.
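
A toy numerical model of the vibrating-capacitor signal and the off-null
extrapolation (all geometry and drive values below are assumed for
illustration) can make the method concrete:

import numpy as np

eps0, A = 8.854e-12, 1e-4            # vacuum permittivity; tip area (assumed)
d0, d1 = 1e-3, 2e-4                  # mean gap, vibration amplitude (assumed)
w = 2 * np.pi * 80.0                 # 80 Hz drive (assumed)
V_cpd = 0.35                         # contact potential difference to recover

t = np.linspace(0.0, 0.1, 20000)     # 8 full vibration periods
# i(t) = (V_cpd + V_b) dC/dt with C(t) = eps0 A / (d0 + d1 sin(w t))
dCdt = -eps0 * A * d1 * w * np.cos(w * t) / (d0 + d1 * np.sin(w * t)) ** 2

Vb = np.linspace(-1.0, 1.0, 5)       # backing voltages
# Lock-in style in-phase amplitude: signed, hence linear in V_b
amps = [2 * np.mean((V_cpd + vb) * dCdt * np.cos(w * t)) for vb in Vb]
slope, intercept = np.polyfit(Vb, amps, 1)
print("recovered V_cpd:", intercept / slope)  # signal nulls at V_b = -V_cpd

Working off the null point, as here, avoids having to servo the backing
voltage exactly onto the null before reading out the contact potential.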
|
Motivated by the need for estimating the 3D pose of arbitrary objects, we
consider the challenging problem of class-agnostic object viewpoint estimation
from images only, without CAD model knowledge. The idea is to leverage features
learned on seen classes to estimate the pose for classes that are unseen, yet
that share similar geometries and canonical frames with seen classes. We train
a direct pose estimator in a class-agnostic way by sharing weights across all
object classes, and we introduce a contrastive learning method that has three
main ingredients: (i) the use of pre-trained, self-supervised, contrast-based
features; (ii) pose-aware data augmentations; (iii) a pose-aware contrastive
loss. We experiment on Pascal3D+, ObjectNet3D and Pix3D in a cross-dataset
fashion, with both seen and unseen classes. We report state-of-the-art results,
including against methods that additionally use CAD models as input.
|
A criterion to locate tricritical points in phase diagrams is proposed. The
criterion is formulated in the framework of the Elementary Catastrophe Theory
and encompasses all the existing criteria in that it applies to systems
described by a generally non-symmetric free energy which can depend on one or
more order parameters. We show that a tricritical point is given whenever the
free energy is not 4-determined. An application to smectic-C liquid crystals is
briefly discussed.
|
When modeling laser wakefield acceleration (LWFA) using the particle-in-cell
(PIC) algorithm in a Lorentz boosted frame, the plasma is drifting
relativistically at $\beta_b c$ towards the laser, which can lead to a
computational speedup of $\sim \gamma_b^2=(1-\beta_b^2)^{-1}$. Meanwhile, when
LWFA is modeled in the quasi-3D geometry in which the electromagnetic fields
and current are decomposed into a limited number of azimuthal harmonics,
speedups are achieved by modeling three dimensional problems with the
computation load on the order of two dimensional $r-z$ simulations. Here, we
describe how to combine the speedups from the Lorentz boosted frame and
quasi-3D algorithms. The key to the combination is the use of a hybrid Yee-FFT
solver in the quasi-3D geometry that can be used to effectively eliminate the
Numerical Cerenkov Instability (NCI) that inevitably arises in a Lorentz
boosted frame due to the unphysical coupling of Langmuir modes and EM modes of
the relativistically drifting plasma in these simulations. In addition, based
on the space-time distribution of the LWFA data in the lab and boosted frame,
we propose to use a moving window to follow the drifting plasma to further
reduce the computational load. We describe the details of how the NCI is
eliminated for the quasi-3D geometry, the setups for simulations which combine
the Lorentz boosted frame and quasi-3D geometry, the use of a moving window,
and compare the results from these simulations against their corresponding lab
frame cases. Good agreement is obtained, particularly when there is no
self-trapping, which demonstrates that it is possible to combine the Lorentz boosted
frame and the quasi-3D algorithms when modeling LWFA to achieve unprecedented
speedups.
|
We study the ordered phases and the phase transitions in the stacked
triangular antiferromagnetic Ising (STAFI) model with strong interplane
coupling modeling CsCoCl$_3$ and CsCoBr$_3$. We find that there exists an
intermediate phase which consists of a single phase of the so-called partially
disordered (PD) type, and confirm the stability of this phase. The low
temperature phase of this model is the so-called two-sublattice ferrimagnetic
phase. The phase transition between the PD phase and the two-sublattice
ferrimagnetic phase is of first order. This sequence of phases is the same as
that in the three-dimensional generalized six-state clock model, which has the
same symmetry as the STAFI model. By studying distributions of domain walls in
one-dimensional chains connecting layered triangular lattices, we clarify the
nature of the phase transition and give an interpretation of the weak anomaly
in the specific heat.
|
We analyze the basic properties of Brightest Cluster Galaxies (BCGs) produced
by state-of-the-art cosmological zoom-in hydrodynamical simulations. These
simulations have been run with different sub-grid physics included. Here we
focus on the results obtained with and without the inclusion of the
prescriptions for supermassive black hole (SMBH) growth and of the ensuing
Active Galactic Nuclei (AGN) feedback. The latter process acts in the right
direction, significantly decreasing the overall formation of stars. However,
BCGs still end up containing too much stellar mass, a problem that increases
with halo mass, and having an unsatisfactory structure, in the sense
that their effective radii are too large, and that their density profiles
feature a flattening on scales much larger than observed. We also find that our
model of thermal AGN feedback has very little effect on the stellar velocity
dispersions, which turn out to be very large. Taken together, these problems,
which to some extent can be recognized also in other numerical studies
typically dealing with smaller halo masses, indicate that on one hand present
day sub-resolution models of AGN feedback are not effective enough in
diminishing the global formation of stars in the most massive galaxies, but on
the other hand they are relatively too effective in their centers. It is likely
that a form of feedback generating large scale gas outflows from BCGs
precursors, and a more widespread effect over the galaxy volume, can alleviate
these difficulties.
|
We present ZTF18abvkwla (the "Koala"), a fast blue optical transient
discovered in the Zwicky Transient Facility (ZTF) One-Day Cadence (1DC) Survey.
ZTF18abvkwla has a number of features in common with the groundbreaking
transient AT2018cow: blue colors at peak ($g-r\approx-0.5$ mag), a short rise
time from half-max of under two days, a decay time to half-max of only three
days, a high optical luminosity ($M_{g,\mathrm{peak}}\approx-20.6$mag), a hot
($\gtrsim 40,000$K) featureless spectrum at peak light, and a luminous radio
counterpart. At late times ($\Delta t>80$d) the radio luminosity of
ZTF18abvkwla ($\nu L_\nu \gtrsim 10^{40}$erg/s at 10 GHz, observer-frame) is
most similar to that of long-duration gamma-ray bursts (GRBs). The host galaxy
is a dwarf starburst galaxy ($M\approx5\times10^{8}M_\odot$,
$\mathrm{SFR}\approx7 M_\odot$/yr) that is moderately metal-enriched
($\log\mathrm{[O/H]} \approx 8.5$), similar to the hosts of GRBs and
superluminous supernovae. As in AT2018cow, the radio and optical emission in
ZTF18abvkwla likely arise from two separate components: the radio from
fast-moving ejecta ($\Gamma \beta c >0.38c$) and the optical from
shock-interaction with confined dense material ($<0.07M_\odot$ in $\sim
10^{15}$cm). Compiling transients in the literature with $t_\mathrm{rise} <5$d
and $M_\mathrm{peak}<-20$mag, we find that a significant number are
engine-powered, and suggest that the high peak optical luminosity is directly
related to the presence of this engine. From 18 months of the 1DC survey, we
find that transients in this rise-luminosity phase space are at least two to
three orders of magnitude less common than CC SNe. Finally, we discuss
strategies for identifying such events with future facilities like the Large
Synoptic Survey Telescope, and prospects for detecting accompanying X-ray and
radio emission.
|
In online shopping, ever-changing fashion trends require merchants to prepare
more differentiated products to meet diversified demands, and e-commerce
platforms to capture the market trend with a prophetic vision.
For the trend prediction, the attribute tags, as the essential description of
items, can genuinely reflect the decision basis of consumers. However, few
existing works explore the attribute trend in the specific community for
e-commerce. In this paper, we focus on the community trend prediction on the
item attribute and propose a unified framework that combines the dynamic
evolution of two graph patterns to predict the attribute trend in a specific
community. Specifically, we first design a community-attribute bipartite graph
at each time step to learn the collaboration of different communities. Next, we
transform the bipartite graph into a hypergraph to exploit the associations of
different attribute tags in one community. Lastly, we introduce a dynamic
evolution component based on the recurrent neural networks to capture the
fashion trend of attribute tags. Extensive experiments on three real-world
datasets in a large e-commerce platform show the superiority of the proposed
approach over several strong alternatives and demonstrate the ability to
discover the community trend in advance.
|
Zipf-like distributions characterize a wide set of phenomena in physics,
biology, economics and social sciences. In human activities, Zipf-laws describe
for example the frequency of words appearance in a text or the purchases types
in shopping patterns. In the latter, the uneven distribution of transaction
types is bound up with the temporal sequences of individual purchase choices.
In this work, we define a framework using a text compression technique on the
sequences of credit card purchases to detect ubiquitous patterns of collective
behavior. Clustering the consumers by their similarity in purchases sequences,
we detect five consumer groups. Remarkably, post checking, individuals in each
group are also similar in their age, total expenditure, gender, and the
diversity of their social and mobility networks extracted by their mobile phone
records. By properly deconstructing transaction data with Zipf-like
distributions, this method uncovers sets of significant sequences that reveal
insights on collective human behavior.
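
One standard text-compression similarity that fits this description is the
normalized compression distance (NCD); the sketch below, with invented
transaction-type encodings, shows how the compressibility of concatenated
sequences exposes shared purchase patterns, though the paper's exact
technique may differ.

import zlib

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: small when the sequences share structure
    ca = len(zlib.compress(a.encode()))
    cb = len(zlib.compress(b.encode()))
    cab = len(zlib.compress((a + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

# Each letter encodes a transaction type (e.g. G = grocery, F = fuel, ...)
u1 = "GGFRGGFRGGFR" * 8
u2 = "GFRGGFRGGFRG" * 8    # similar purchase rhythm  -> small distance
u3 = "TTTHHMMTTHHM" * 8    # different behaviour      -> larger distance
print(ncd(u1, u2), ncd(u1, u3))

Clustering a pairwise NCD matrix of such sequences is one way to arrive at
consumer groups of the kind described above.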
|
We investigate the phase transition of the four-dimensional Ising model with
two types of tensor network scheme: one is the higher-order tensor
renormalization group (HOTRG) and the other is the anisotropic tensor
renormalization group (ATRG). The results for the internal energy and
magnetization obtained by the former algorithm with the impure tensor method,
enlarging the lattice volume up to $1024^4$, are consistent with a weak
first-order phase transition. For the latter algorithm, our implementation
successfully reduces the execution time thanks to parallel computation, and
the results provided by ATRG seem comparable to those with HOTRG.
|
Automatic log file analysis enables early detection of relevant incidents
such as system failures. In particular, self-learning anomaly detection
techniques capture patterns in log data and subsequently report unexpected log
event occurrences to system operators without the need to provide or manually
model anomalous scenarios in advance. Recently, an increasing number of
approaches leveraging deep learning neural networks for this purpose have been
presented. These approaches have demonstrated superior detection performance in
comparison to conventional machine learning techniques and simultaneously
resolve issues with unstable data formats. However, there exist many different
architectures for deep learning and it is non-trivial to encode raw and
unstructured log data to be analyzed by neural networks. We therefore carry out
a systematic literature review that provides an overview of deployed models,
data pre-processing mechanisms, anomaly detection techniques, and evaluations.
The survey does not quantitatively compare existing approaches but instead aims
to help readers understand relevant aspects of different model architectures
and emphasizes open issues for future work.
|
In this paper, we propose a novel wireless scheme that integrates satellite,
airborne, and terrestrial networks aiming to support ground users. More
specifically, we study the enhancement of the achievable users' throughput
assisted by terrestrial base stations, high altitude platforms (HAPs), and a
satellite station. The goal is to optimize the resource allocations and the
HAPs locations in order to maximize the users' throughput. In this context, we
propose to solve the optimization problem in two stages: a short-term stage
followed by a long-term stage. In the short-term stage, we start by proposing
a near-optimal solution and a low-complexity solution to solve the
associations and power allocations. In the first solution, we formulate and
solve a binary linear optimization problem to find the best associations and
then use a Taylor expansion approximation to optimally determine the power
allocations. In the second solution, we propose a low-complexity approach
based on a frequency partitioning technique to solve the associations and
power allocations. On the other hand, in the long-term stage, we optimize the
locations of the HAPs by proposing an efficient algorithm based on a recursive
shrink-and-realign process. Finally, selected numerical results show the
advantages provided by our proposed optimization scheme.
|
We numerically investigate low-energy stationary states of pseudospin-1
Bose-Einstein condensates in the presence of Rashba-Dresselhaus-type spin-orbit
coupling. We show that for experimentally feasible parameters and strong
spin-orbit coupling, the ground state is a square vortex lattice irrespective
of the nature of the spin-dependent interactions. For weak spin-orbit coupling,
the lowest-energy state may host a single vortex. Furthermore, we analytically
derive constraints that explain why certain stationary states do not emerge as
ground states. Importantly, we show that the distinct stationary states can be
observed experimentally by standard time-of-flight spin-independent absorption
imaging.
|
This paper introduces a novel algorithmic solution for the approximation of a
given multivariate function by a nomographic function that is composed of a
one-dimensional continuous and monotone outer function and a sum of univariate
continuous inner functions. We show that a suitable approximation can be
obtained by solving a cone-constrained Rayleigh-Quotient optimization problem.
The proposed approach is based on a combination of a dimensionwise function
decomposition known as Analysis of Variance (ANOVA) and optimization over a
class of monotone polynomials. An example is given to show that the proposed
algorithm can be applied to solve problems in distributed function computation
over multiple-access channels.
|
We use a recently-developed analytic model for the ISM structure from scales
of GMCs through star-forming cores to explore how the pre-stellar core mass
function (CMF) and, by extrapolation, stellar initial mass function (IMF)
should depend on both local and galactic properties. If the ISM is
supersonically turbulent, the statistical properties of the density field
follow from the turbulent velocity spectrum, and the excursion set formalism
can be applied to analytically calculate the mass function of collapsing cores
on the smallest scales on which they are self-gravitating (non-fragmenting).
Two parameters determine the model: the disk-scale Mach number M_h (which sets
the shape of the CMF), and the absolute velocity (to assign an absolute scale).
For 'normal' variation in disk properties and core gas temperatures in the MW
and local galaxies, there is almost no variation in the predicted high-mass
behavior of the CMF/IMF. The slope is always close to Salpeter down to <1
M_sun. We predict modest variation in the sub-solar regime, mostly from
variation in M_h, but within the observed scatter in sub-solar IMFs in local
regions. For fixed galaxy properties, there is little variation in shape or
'upper mass limit' with parent GMC mass. However, in extreme starbursts (e.g.
ULIRGs) we predict a bottom-heavy CMF. This agrees with the IMF inferred for
the centers of Virgo ellipticals, believed to form in such a nuclear starburst.
The CMF is bottom-heavy despite the gas temperature being an order of magnitude
larger, because M_h is also much larger. Larger M_h values make the 'parent'
cloud mass (turbulent Jeans mass) larger, but promote fragmentation to smaller
scales; this steepens the slope of the low-mass CMF and shifts the turnover
mass. The model may predict a top-heavy CMF for the sub-pc disks around Sgr A*,
but the relevant input parameters are uncertain.
|
The field of topological materials science has recently been focussing on
three-dimensional Dirac semimetals, which exhibit robust Dirac phases in the
bulk. However, the absence of characteristic surface states in accidental Dirac
semimetals (DSM) makes it difficult to experimentally verify claims about the
topological nature using commonly used surface-sensitive techniques. The chiral
magnetic effect (CME), which originates from the Weyl nodes, causes an
$\textbf{E}\cdot\textbf{B}$-dependent chiral charge polarization, which
manifests itself as negative magnetoresistance. We exploit the extended
lifetime of the chirally polarized charge and study the CME through both local
and non-local measurements in Hall bar structures fabricated from single
crystalline flakes of the DSM Bi$_{0.97}$Sb$_{0.03}$. From the non-local
measurement results we find a chiral charge relaxation time which is over one
order of magnitude larger than the Drude transport lifetime, underlining the
topological nature of Bi$_{0.97}$Sb$_{0.03}$.
|
A new design of a cryogenic germanium detector for dark matter search is
presented, taking advantage of the coplanar grid technique of event
localisation for improved background discrimination. Experiments performed with
prototype devices in the EDELWEISS II setup at the Modane underground facility
demonstrate the remarkably high efficiency of these devices for the rejection
of low-energy $\beta$ particles, approaching 10$^5$. This opens the road to
investigating the range beyond 10$^{-8}$ pb in the WIMP-nucleon collision
cross-sections, as proposed in the EURECA project of a one-ton cryogenic
detector mass.
|
In this paper we study the jet response (particularly azimuthal anisotropy)
as a hard probe of the harmonic fluctuations in the initial condition of
central heavy ion collisions. By implementing the fluctuations via cumulant
expansion for various harmonics quantified by $\epsilon_n$ and using the
geometric model for jet energy loss, we compute the response
$\chi^h_n=v_n/\epsilon_n$. Combining these results with the known hydrodynamic
response of the bulk matter expansion in the literature, we show that the
hard-soft azimuthal correlation arising from their respective responses to the
common geometric fluctuations reveals a robust and narrow near-side peak that
may provide the dominant contribution to the "hard-ridge" observed in
experimental data.
|
First-principles investigations of the structural, electronic and magnetic
properties of Cr-doped AlN/GaN (0001) heterostructures reveal that Cr
segregates into the GaN region and that these interfaces retain their
important half-metallic character. They thus yield efficient (100%)
spin-polarized injection from a ferromagnetic GaN:Cr electrode through an AlN
tunnel barrier, whose height and width can be controlled by adjusting the Al
concentration in the graded-bandgap-engineered Al(1-x)Ga(x)N (0001) layers.
|
Analytical expressions for the Transverse Momentum Dependent (TMD, or
unintegrated) gluon and sea quark densities in nuclei are derived at leading
order of QCD running coupling. The calculations are performed in the framework
of the rescaling model and Kimber-Martin-Ryskin (KMR) prescription, where the
Bessel-inspired behavior of parton densities at small Bjorken $x$ values,
obtained in the case of flat initial conditions in the double scaling QCD
approximation, is applied. The derived expressions are used to evaluate the
inclusive heavy flavor production in proton-lead collisions at the LHC. We find
good agreement between our results and the latest experimental data collected
by the CMS and ALICE Collaborations at $\sqrt s = 5.02$ TeV.
|
Every day, thousands of customers post questions on Amazon product pages.
After some time, if they are fortunate, a knowledgeable customer might answer
their question. Observing that many questions can be answered based upon the
available product reviews, we propose the task of review-based QA. Given a
corpus of reviews and a question, the QA system synthesizes an answer. To this
end, we introduce a new dataset and propose a method that combines information
retrieval techniques for selecting relevant reviews (given a question) and
"reading comprehension" models for synthesizing an answer (given a question and
review). Our dataset consists of 923k questions, 3.6M answers and 14M reviews
across 156k products. Building on the well-known Amazon dataset, we collect
additional annotations, marking each question as either answerable or
unanswerable based on the available reviews. A deployed system could first
classify a question as answerable and then attempt to generate an answer.
Notably, unlike many popular QA datasets, here, the questions, passages, and
answers are all extracted from real human interactions. We evaluate numerous
models for answer generation and propose strong baselines, demonstrating the
challenging nature of this new task.
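
A minimal sketch of the retrieval stage of such a pipeline (toy reviews and
question, TF-IDF standing in for the paper's IR component) is:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Rank a product's reviews by similarity to the question, then hand the top
# reviews to a reading-comprehension model (not shown; examples are invented).
reviews = [
    "Battery lasts about ten hours with the screen on.",
    "The case feels cheap but the speakers are surprisingly loud.",
    "Charging takes two hours with the included adapter.",
]
question = "How long does the battery last?"

vec = TfidfVectorizer().fit(reviews + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(reviews))
best = scores.argmax()
print(reviews[best])   # review most relevant to the question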
|
The afterglow of the binary neutron star merger GW170817 gave evidence for a
structured relativistic jet and a link between such mergers and short gamma-ray
bursts. Superluminal motion, found using radio very long baseline
interferometry (VLBI), together with the afterglow light curve provided
constraints on the viewing angle (14-28 degrees), the opening angle of the jet
core (less than about 5 degrees), and a modest limit on the initial Lorentz
factor of the jet core (more than 4). Here we report on another superluminal
motion measurement, at seven times the speed of light, leveraging Hubble Space
Telescope precision astrometry and previous radio VLBI data of GW170817. We
thereby obtain a unique measurement of the Lorentz factor of the wing of the
structured jet, as well as substantially improved constraints on the viewing
angle (19-25 degrees) and the initial Lorentz factor of the jet core (more than
40).
|
Quintessential inflation utilises a single scalar field to account for the
observations of both cosmic inflation and dark energy. The requirements for
modelling quintessential inflation are described and two explicit successful
models are presented in the context of $\alpha$-attractors and Palatini
modified gravity.
|
We prove that the KP-I initial value problem is globally well-posed in the
natural energy space of the equation.
|
In this paper we prove that if k is a cardinal in L[0^#], then there is an
inner model M such that M |= (V_k,E) has no elementary end extension. In
particular if 0^# exists then weak compactness is never downwards absolute. We
complement the result with a lemma stating that any cardinal greater than
aleph_1 of uncountable cofinality in L[0^#] is Mahlo in every strict inner
model of L[0^#].
|
In this article, mixed finite element methods are discussed for a class of
hyperbolic integro-differential equations (HIDEs). Based on a modification of
the nonstandard energy formulation of Baker, both semidiscrete and completely
discrete implicit schemes for an extended mixed method are analyzed and optimal
L^{\infty}(L^2)-error estimates are derived under minimal smoothness
assumptions on the initial data. Further, quasi-optimal estimates are shown to
hold in L^{\infty}(L^{\infty})-norm. Finally, the analysis is extended to the
standard mixed method for HIDEs and optimal error estimates in
L^{\infty}(L^2)-norm are derived, again under minimal smoothness assumptions
on the initial data.
|
Recent financial disasters emphasised the need to investigate the consequences
associated with tail co-movements among institutions; episodes of contagion
are frequently observed and increase the probability of large losses affecting
market participants' risk capital. Commonly used risk management tools fail to
account for potential spillover effects among institutions because they provide
individual risk assessments. We contribute to the analysis of the
interdependence effects of extreme events by providing an estimation tool for
evaluating the Conditional Value-at-Risk (CoVaR), defined as the
Value-at-Risk of an institution conditioned on another institution being
under distress. In particular, our approach relies on a Bayesian quantile
regression framework. We
propose a Markov chain Monte Carlo algorithm exploiting the Asymmetric Laplace
distribution and its representation as a location-scale mixture of Normals.
Moreover, since risk measures are usually evaluated on time series data and
returns typically change over time, we extend the CoVaR model to account for
the dynamics of the tail behaviour. An application to U.S. companies belonging
to different sectors of the Standard and Poor's Composite Index (S&P500) is
considered to evaluate the marginal contribution of each individual
institution to the overall systemic risk.
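
The mixture representation underlying the proposed MCMC can be checked
numerically; the sketch below draws from the Asymmetric Laplace distribution
at quantile level p via the exponential-scale Normal mixture and verifies the
defining quantile property (the value of p and the sample size are
illustrative).

import numpy as np

rng = np.random.default_rng(0)
p = 0.05                                  # tail quantile level (illustrative)
theta = (1 - 2 * p) / (p * (1 - p))       # AL mixture location coefficient
tau = np.sqrt(2 / (p * (1 - p)))          # AL mixture scale coefficient

v = rng.exponential(1.0, size=200_000)    # v ~ Exp(1)
z = rng.normal(size=200_000)              # z ~ N(0, 1)
y = theta * v + tau * np.sqrt(v) * z      # y ~ AL(0, 1, p) via the mixture

print(np.mean(y <= 0))                    # ~= p: the defining quantile property

Conditioning on v turns the AL likelihood into a Normal one, which is what
makes Gibbs-style updates of the quantile regression coefficients tractable.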
|
We introduce classes of measures in the half-space $\mathbf{R}^{n+1}_+,$
generated by Riesz, or Bessel, or Besov capacities in $\mathbf{R}^n$, and give
a geometric characterization as Carleson-type measures.
|
A systematic investigation of La0.67Ca0.33MnO3 manganites has been
undertaken, mainly to understand the influence of varying crystallite size
(nanometer range) on electrical resistivity, magnetic susceptibility and
thermoelectric power. The materials were prepared by the sol-gel method, with
sintering at four different temperatures between 800 and 1100 degrees C. The
samples were characterized by X-ray diffraction and data were analyzed using
Rietveld refinement. The metal-insulator transition temperatures (TP) are found
to increase with increasing sintering temperatures, while the magnetic
transition temperatures (TC) decrease. The electrical resistivity and
thermoelectric power data at low temperatures (T < TP) have been analyzed by
considering various scattering phenomena, while the high temperature (T > TP)
data were analyzed with Mott's small polaron hopping conduction mechanism.
PACS Codes: 73.50.Lw, 75.47.Gk, 75.47.Lx
|
Autoionizing resonances that arise from the interaction of a bound
single-excitation with the continuum can be accurately captured with the
presently used approximations in time-dependent density functional theory
(TDDFT), but those arising from a bound double excitation cannot. In the former
case, we explain how an adiabatic kernel, which has no frequency-dependence,
can yet generate the strongly frequency-dependent resonant structures in the
interacting response function, not present in the Kohn-Sham response function.
In the case of the bound double-excitation, we explain that a strongly
frequency-dependent kernel is needed, and derive one for the vicinity of a
resonance of the latter type, as an {\it a posteriori} correction to the usual
adiabatic approximations in TDDFT. Our approximation becomes exact for an
isolated resonance in the limit of weak interaction, where one discrete state
interacts with one continuum. We derive a "Fano TDDFT kernel" that reproduces
the Fano lineshape within the TDDFT formalism, and also a dressed kernel, that
operates on top of an adiabatic approximation. We illustrate our results on a
simple model system.
|
The question of determining the spatial geometry of the Universe is of
greater relevance than ever, as precision cosmology promises to verify
inflationary predictions about the curvature of the Universe. We revisit the
question of what can be learnt about the spatial geometry of the Universe from
the perspective of a three-way Bayesian model comparison. We show that, given
current data, the probability that the Universe is spatially infinite lies
between 67% and 98%, depending on the choice of priors. For the strongest prior
choice, we find odds of order 50:1 (200:1) in favour of a flat Universe when
compared with a closed (open) model. We also report a robust, prior-independent
lower limit to the number of Hubble spheres in the Universe, N_U > 5 (at 99%
confidence). We forecast the accuracy with which future CMB and BAO
observations will be able to constrain curvature, finding that a cosmic
variance limited CMB experiment together with an SKA-like BAO observation will
constrain curvature with a precision of about sigma ~ 4.5x10^{-4}. We
demonstrate that the risk of 'model confusion' (i.e., wrongly favouring a flat
Universe in the presence of curvature) is much larger than might be assumed
from parameter error forecasts for future probes. We argue that a 5-sigma
detection threshold guarantees a confusion- and ambiguity-free model selection.
Together with inflationary arguments, this implies that the geometry of the
Universe is not knowable if the value of the curvature parameter is below
|Omega_curvature| ~ 10^{-4}, a bound one order of magnitude larger than the
size of curvature perturbations, ~ 10^{-5}. [abridged]
|
Embedded devices are becoming popular. Meanwhile, researchers are actively
working on improving the security of embedded devices. However, previous work
ignores the insecurity caused by a special category of devices, i.e., the
End-of-Life (EoL) devices. Once a product becomes End-of-Life, vendors
tend to no longer maintain its firmware or software, including providing bug
fixes and security patches. This makes EoL devices susceptible to attacks. For
instance, a report showed that an EoL model with thousands of active devices
was exploited to redirect web traffic for malicious purposes. In this paper, we
conduct the first measurement study to shed light on the (in)security of EoL
devices. To this end, our study performs two types of analysis, including the
aliveness analysis and the vulnerability analysis. The first one aims to detect
the scale of EoL devices that are still alive. The second one is to evaluate
the vulnerabilities existing in (active) EoL devices. We have applied our
approach to a large number of EoL models from three vendors (i.e., D-Link,
Tp-Link, and Netgear) and detect the alive devices in a time period of ten
months. Our study reveals some worrisome facts that were unknown by the
community. For instance, there exist more than 2 million active EoL devices.
Nearly 300,000 of them are still alive even after five years since they became
EoL. Although vendors may release security patches after the EoL date, the
process is ad hoc and incomplete. As a result, more than 1 million active
EoL devices are vulnerable, and nearly half of them are threatened by high-risk
vulnerabilities. Attackers can achieve a minimum of 2.79 Tbps DDoS attack by
compromising a large number of active EoL devices. We believe these facts pose
a clear call for more attention to deal with the security issues of EoL
devices.
|
We present a novel approach to reconstruct RGB-D indoor scene with plane
primitives. Our approach takes as input an RGB-D sequence and a dense coarse
mesh reconstructed by some 3D reconstruction method on the sequence, and
generates a lightweight, low-polygon mesh with clear face textures and sharp
features without losing geometry details from the original scene. To achieve
this, we first partition the input mesh with plane primitives, then simplify
it into a lightweight mesh, next optimize plane parameters, camera poses and
texture colors to maximize the photometric consistency across frames, and
finally optimize the mesh geometry to maximize consistency between geometry
and
planes. Compared to existing planar reconstruction methods which only cover
large planar regions in the scene, our method builds the entire scene by
adaptive planes without losing geometry details and preserves sharp features in
the final mesh. We demonstrate the effectiveness of our approach by applying it
onto several RGB-D scans and comparing it to other state-of-the-art
reconstruction methods.
|
Two-dimensional (2D) Dirac cone materials exhibit linear energy dispersion at
the Fermi level, where the effective masses of carriers are very close to zero
and the Fermi velocity is ultrahigh, only 2-3 orders of magnitude lower than
the speed of light. Such Dirac cone materials hold great promise for
high-performance electronic devices. Herein, we have employed genetic
algorithm methods combined with first-principles calculations to propose a
new 2D anisotropic Dirac cone material: an orthorhombic boron phosphide (BP)
monolayer named borophosphene. Molecular dynamics simulations and phonon
dispersion calculations have been used to evaluate the dynamic and thermal
stability of borophosphene. Because of the unique arrangements of B-B and P-P
dimers, the mechanical and electronic properties are highly anisotropic. Of
great interest is that the Dirac cone of borophosphene is robust, independent
of in-plane biaxial and uniaxial strains, and can also be observed in its
one-dimensional (1D) zigzag nanoribbons and armchair nanotubes. The Fermi
velocities are ~ 10^5 m/s, of the same order of magnitude as that of graphene.
By using a tight-binding model, the origin of the Dirac cone of borophosphene
is analyzed. Moreover, a unique feature of self-doping can be induced by
in-plane biaxial and uniaxial strains of borophosphene and by the curvature
effect of nanotubes, which is greatly beneficial for realizing high-speed
carriers (holes). Our results suggest that borophosphene holds great promise
for high-performance electronic devices, and could promote experimental and
theoretical studies to further explore the potential applications of other 2D
Dirac cone sheets.
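
To illustrate how a Dirac cone emerges from a tight-binding model (a generic
two-band honeycomb example with an assumed hopping of 2.7 eV, not the actual
borophosphene Hamiltonian with its B-B and P-P dimers):

import numpy as np

t_hop = 2.7                                        # hopping in eV (assumed)
a1 = np.array([1.5,  np.sqrt(3) / 2])              # lattice vectors, bond length 1
a2 = np.array([1.5, -np.sqrt(3) / 2])

def bands(k):
    # Two-band nearest-neighbour Bloch Hamiltonian: E(k) = +/- t |f(k)|
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return -t_hop * abs(f), t_hop * abs(f)

K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3))])  # Dirac point; f(K) = 0
for dk in (0.0, 0.01, 0.02):
    print(dk, bands(K + np.array([dk, 0.0])))      # bands disperse linearly from K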
|
We present measurements of the cross-correlation of the triply-ionized carbon
(CIV) forest with quasars using Sloan Digital Sky Survey Data Release 14. The
study exploits a large sample of new quasars from the first two years of
observations by the Extended Baryon Oscillation Spectroscopic Survey (eBOSS).
The CIV forest is a weaker tracer of large-scale structure than the Ly$\alpha$
forest, but benefits from being accessible at redshifts $z<2$ where the quasar
number density from eBOSS is high. Our data sample consists of 287,651 CIV
forest quasars in the redshift range $1.4<z<3.5$ and 387,315 tracer quasars
with $1.2<z<3.5$. We measure large-scale correlations from CIV absorption
occurring in three distinct quasar rest-frame wavelength bands of the spectra
referred to as the CIV forest, the SiIV forest and the Ly$\alpha$ forest. From
the combined fit to the quasar-CIV cross-correlations for the CIV forest and
the SiIV forest, the CIV redshift-space distortion parameter is $\beta_{\rm
CIV}=0.27_{\ -0.14}^{\ +0.16}$ and its combination with the CIV linear
transmission bias parameter is $b_{\rm CIV}(1+\beta_{\rm CIV})=-0.0183_{\
-0.0014}^{\ +0.0013}$ ($1\sigma$ statistical error) at the mean redshift
$z=2.00$. Splitting the sample at $z=2.2$ to constrain the bias evolution with
redshift yields the power-law exponent $\gamma=0.60\pm0.63$, indicating a
significantly weaker redshift-evolution than for the Ly$\alpha$ forest linear
transmission bias. We demonstrate that CIV absorption has the potential to be
used as a probe of baryon acoustic oscillations (BAO). While the current data
set is insufficient for a detection of the BAO peak feature, the final quasar
samples for redshifts $1.4<z<2.2$ from eBOSS and the Dark Energy Spectroscopic
Instrument (DESI) are expected to provide measurements of the isotropic BAO
scale to $\sim7\%$ and $\sim3\%$ precision, respectively, at $z\simeq1.6$.
|
Vortices in type-II superconductors have attracted enormous attention as
ideal systems in which to study nonequilibrium collective phenomena, since the
self-ordering of the vortices competes with quenched disorder and thermal
effects. Dynamic effects found in vortex systems include depinning,
nonequilibrium phase transitions, creep, structural order-disorder transitions,
and melting. Understanding vortex dynamics is also important for applications
of superconductors which require the vortices either to remain pinned or to
move in a controlled fashion. Recently, topological defects called skyrmions
have been realized experimentally in chiral magnets. Here we highlight
similarities and differences between skyrmion dynamics and vortex dynamics.
Many of the previous ideas and experimental setups that have been applied to
superconducting vortices can also be used to study skyrmions. We also discuss
some of the differences between the two systems, such as the potentially large
contribution of the Magnus force in the skyrmion system that can dramatically
alter the dynamics and transport properties.
|
This paper is concerned with the study of linear geometric rigidity of
shallow thin domains under zero Dirichlet boundary conditions on the
displacement field on the thin edge of the domain. A shallow thin domain is a
thin domain that has in-plane dimensions of order $O(1)$ and $\epsilon,$ where
$\epsilon\in (h,1)$ is a parameter (here $h$ is the thickness of the shell).
The problem has been solved in [8,10] for the case $\epsilon=1,$ yielding the
optimal constants $C\sim h^{-3/2},$ $C\sim h^{-4/3},$ and $C\sim
h^{-1}$ for parabolic, hyperbolic and elliptic thin domains respectively. We
prove in the present work that in fact there are two distinctive scaling
regimes $\epsilon\in (h,\sqrt h]$ and $\epsilon\in (\sqrt h,1),$ such that in
each of which the thin domain rigidity is given by a certain formula in $h$ and
$\epsilon.$ An interesting new phenomenon is that in the first (small
parameter) regime $\epsilon\in (h,\sqrt h]$, the rigidity does not depend on
the curvature of the thin domain mid-surface.
|