A method of embedding partially ordered sets into linear spaces is presented.
The problem of finding all orthocomplementations in a finite lattice is reduced
to a linear programming problem.
|
We consider models of inflation with U(1) gauge fields and charged scalar
fields including symmetry breaking potential, chaotic inflation and hybrid
inflation. We show that there exist attractor solutions where the anisotropies
produced during inflation become comparable to the slow-roll parameters. In
the models where the inflaton field is a charged scalar field, the gauge field
becomes highly oscillatory near the end of inflation and brings inflation to a
rapid end.
Furthermore, in charged hybrid inflation the onset of waterfall phase
transition at the end of inflation is affected significantly by the evolution
of the background gauge field. Rapid oscillations of the gauge field and its
coupling to inflaton can have interesting effects on preheating and
non-Gaussianities.
|
Vision-based stair perception can help autonomous mobile robots deal with the
challenge of climbing stairs, especially in unfamiliar environments. To address
the difficulty that current monocular vision methods have in modeling stairs
accurately without depth information, this paper proposes a depth-aware stair
modeling method for monocular vision. Specifically, we treat the extraction of
stair geometric features and the prediction of depth images as joint tasks in a
convolutional neural network (CNN); with the designed information propagation
architecture, depth information can effectively supervise the learning of stair
geometric features. In addition, to complete the stair modeling, we
take the convex lines, concave lines, tread surfaces and riser surfaces as
stair geometric features and apply Gaussian kernels to enable the network to
predict contextual information within the stair lines. Combined with the depth
information obtained by depth sensors, we propose a stair point cloud
reconstruction method that can quickly get point clouds belonging to the stair
step surfaces. Experiments on our dataset show that our method achieves a
significant improvement over the previous best monocular vision method, with an
intersection over union (IOU) increase of 3.4%, while the lightweight version
offers fast detection speed and can meet the requirements of most real-time
applications. Our dataset is available at
https://data.mendeley.com/datasets/6kffmjt7g2/1.
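
As a rough, illustrative sketch of how a Gaussian kernel can spread supervision
around an annotated stair line (not the paper's actual implementation; the
function name, image size and sigma are invented), one could rasterize a line
segment into a heatmap whose values decay with distance to the segment:

```python
import numpy as np

def line_heatmap(h, w, p0, p1, sigma=2.0):
    """Gaussian heatmap around the segment p0 -> p1 (points given as (x, y))."""
    ys, xs = np.mgrid[0:h, 0:w]
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # parameter of the closest point on the segment for every pixel, clamped to [0, 1]
    t = np.clip(((xs - p0[0]) * d[0] + (ys - p0[1]) * d[1]) / (d @ d), 0.0, 1.0)
    dist2 = (xs - (p0[0] + t * d[0])) ** 2 + (ys - (p0[1] + t * d[1])) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))

heatmap = line_heatmap(64, 64, (5, 40), (60, 20))  # one hypothetical stair line
```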
|
This paper presents the energy resolution study in the JUNO experiment,
incorporating the latest knowledge acquired during the detector construction
phase. The determination of neutrino mass ordering in JUNO requires an
exceptional energy resolution better than 3\% at 1 MeV. To achieve this
ambitious goal, significant efforts have been undertaken in the design and
production of the key components of the JUNO detector. Beyond the statistical
fluctuations of the detected number of photons, various factors affecting the
detection of inverse beta decay signals have an impact on the energy
resolution, such as the properties of the liquid scintillator, the performance
of the photomultiplier tubes, and the energy reconstruction algorithm.
To account for these effects, a full JUNO simulation and reconstruction
approach is employed. This enables the modeling of all relevant effects and the
evaluation of associated inputs to accurately estimate the energy resolution.
The study reveals an energy resolution of 2.95\% at 1 MeV. Furthermore, the
study assesses the contribution of major effects to the overall energy
resolution budget. This analysis serves as a reference for interpreting future
measurements of energy resolution during JUNO data taking. Moreover, it
provides a guideline in comprehending the energy resolution characteristics of
liquid scintillator-based detectors.
|
We quantify the degree of fine tuning required to achieve an observationally
viable period of inflation in the strongly dissipative regime of warm
inflation. The ``fine-tuning'' parameter $\lambda$ is taken to be the ratio of
the change in the height of the potential, $\Delta V$, to the fourth power of
the change in the scalar field, $(\Delta \phi)^{4}$, i.e. the width of the
potential, and therefore measures the requisite degree of flatness in the
potential. The best
motivated warm inflationary scenarios involve a dissipation rate of the kind
$\Gamma\propto T^c$ with $c\geq 0$, and for all such cases, the bounds on
$\lambda$ are tighter than those for standard cold inflation by at least 3
orders of magnitude. In other words, these models require an even flatter
potential than standard inflation. On the other hand for the case of warm
inflation with $c< 0$, we find that in a strongly dissipative regime the bound
on $\lambda$ can significantly weaken with respect to cold inflation. Thus, if
a warm inflation model can be constructed in a strongly dissipative, negatively
temperature-dependent regime, it accommodates steeper potentials otherwise
ruled out in standard inflation.
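
Stated as a single formula, the fine-tuning parameter used above is
$$ \lambda \;\equiv\; \frac{\Delta V}{(\Delta \phi)^{4}}, $$
so that a smaller $\lambda$ corresponds to a flatter potential.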
|
Photon emission is the hallmark of light-matter interaction and the
foundation of photonic quantum science, enabling advanced sources for quantum
communication and computing. Although single-emitter radiation can be tailored
by the photonic environment, the introduction of multiple emitters extends this
picture. A fundamental challenge, however, is that the radiative dipole-dipole
coupling rapidly decays with spatial separation, typically within a fraction of
the optical wavelength. We realize distant dipole-dipole radiative coupling
with pairs of solid-state optical quantum emitters embedded in a nanophotonic
waveguide. We dynamically probe the collective response and identify both
super- and subradiant emission as well as means to control the dynamics by
proper excitation techniques. Our work constitutes a foundational step toward
multiemitter applications for scalable quantum-information processing.
|
Code sharing and reuse is a widespread practice in software engineering.
Although a vast amount of open-source Python code is accessible on many online
platforms, programmers often find it difficult to restore a successful runtime
environment. Previous studies validated automatic inference of Python
dependencies using pre-built knowledge bases. However, these studies do not
cover sufficient knowledge to accurately match the Python code and also ignore
the potential conflicts between their inferred dependencies, thus resulting in
a low success rate of inference. In this paper, we propose PyCRE, a new
approach to automatically inferring Python-compatible runtime environments with
a domain knowledge graph (KG). Specifically, we design a domain-specific ontology
for Python third-party packages and construct KGs for over 10,000 popular
packages in Python 2 and Python 3. PyCRE discovers candidate libraries by
measuring the matching degree between the known libraries and the third-party
resources used in target code. For the NP-complete problem of dependency
solving, we propose a heuristic graph traversal algorithm to efficiently
guarantee the compatibility between packages. PyCRE achieves superior
performance on a real-world dataset and efficiently resolves nearly half again
as many import errors as previous methods.
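
A toy illustration of the matching idea described above (hypothetical names and
data; PyCRE's actual scoring and KG schema are not specified here): candidate
packages are scored by how many of the target code's imported top-level modules
they are known to provide.

```python
def match_degree(imported, provided):
    """Fraction of imported top-level modules covered by a candidate package."""
    imported, provided = set(imported), set(provided)
    return len(imported & provided) / len(imported) if imported else 0.0

# toy stand-in for the knowledge graph: package -> modules it provides
kg = {"Pillow": {"PIL"}, "opencv-python": {"cv2"}, "scikit-learn": {"sklearn"}}
imports_in_code = {"cv2", "sklearn"}

scores = {pkg: match_degree(imports_in_code, mods) for pkg, mods in kg.items()}
candidates = [pkg for pkg, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]
```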
|
In this chapter, the statistics of the standard and truncated Pareto
distributions are first derived and used to fit empirical values of asteroid
diameters from different families, namely Koronis, Eos and Themis, and from
the Astorb database. A theoretical analysis is then carried out and two
possible physical mechanisms are suggested that account for Pareto tails in
distributions of asteroid diameters.
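
For the standard (untruncated) Pareto case, the tail-index estimate has a
simple closed form; the snippet below is a generic illustration on synthetic
data and is not the chapter's actual fitting procedure (the truncated case
requires a numerical maximization not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min = 2.5, 1.0
x = x_min * (1.0 - rng.random(10_000)) ** (-1.0 / alpha_true)  # inverse-CDF sampling

# maximum-likelihood estimate of the Pareto index for the standard distribution
alpha_hat = x.size / np.sum(np.log(x / x_min))
print(alpha_hat)
```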
|
We carried out wind tunnel experiments on parabolic flights with 100 $\mu$m
Mojave Mars simulant sand. The experiments result in shear stress thresholds
and erosion rates for varying g-levels at 600 Pa pressure. Our data confirm
former results on JSC Mars 1A simulant where the threshold shear stress is
lower under Martian gravity than extrapolated from earlier ground-based
studies, which fits observations of Martian sand activity. The data are consistent with
a model by Shao and Lu (2000) and can also be applied to other small
terrestrial (exo)-planets with low pressure atmospheres.
|
Even in times of deep learning, low-rank approximations by factorizing a
matrix into user and item latent factors continue to be a method of choice for
collaborative filtering tasks due to their strong performance. While deep
learning based approaches excel in hybrid recommender tasks where additional
features for items, users or even context are available, their flexibility
seems rather to impair performance compared to low-rank approximations for
pure collaborative filtering tasks where no additional features are used.
Recent works propose hybrid models combining low-rank approximations and
traditional deep neural architectures with promising results but fail to
explain why neural networks alone are unsuitable for this task. In this work,
we revisit the model and intuition behind low-rank approximation to point out
its suitability for collaborative filtering tasks. In several experiments we
compare the performance and behavior of models based on a deep neural network
and low-rank approximation to examine the reasons for the low effectiveness of
traditional deep neural networks. We conclude that the universal approximation
capabilities of traditional deep neural networks severely impair the
determination of suitable latent vectors, leading to a worse performance
compared to low-rank approximations.
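
For concreteness, here is a minimal sketch of the kind of low-rank model
referred to above (plain SGD matrix factorization with bias terms omitted;
hyperparameters are arbitrary, and this is not the exact model compared in the
paper): ratings are approximated as inner products of user and item latent
vectors.

```python
import numpy as np

def factorize(ratings, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user, item, value) triples; returns latent factor matrices."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                 # prediction error for this rating
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```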
|
We apply information-based complexity analysis to support vector machine
(SVM) algorithms, with the goal of a comprehensive continuous algorithmic
analysis of such algorithms. This involves complexity measures in which some
higher order operations (e.g., certain optimizations) are considered primitive
for the purposes of measuring complexity. We consider classes of information
operators and algorithms made up of scaled families, and investigate the
utility of scaling the complexities to minimize error. We look at the division
of statistical learning into information and algorithmic components, at the
complexities of each, and at applications to support vector machine (SVM) and
more general machine learning algorithms. We give applications to SVM
algorithms graded into linear and higher order components, and give an example
in biomedical informatics.
|
The analysis of the LHCb data on $X(6900)$ found in the di-$J/\psi$ system is
performed using a momentum-dependent Flatt\'{e}-like parameterization. The use
of the pole counting rule and the spectral density function sum rule gives the
consistent conclusion that both confining states and molecular states are
possible; that is, the nature of $X(6900)$ cannot be distinguished if only the
di-$J/\psi$ experimental data with current statistics are available.
Nevertheless, we find that the lowest state in the di-$J/\psi$ system very
likely has the same quantum numbers as $X(6900)$, and $X(6900)$ probably should
not be interpreted as a $J/\psi$-$\psi(2S)$ molecular state.
|
The classical linear search problem is studied from the view point of
Hamiltonian dynamics. For the specific, yet representative case of
exponentially distributed position of the hidden object, we show that the
optimal plan follows an unstable separatrix which is present in the associated
Hamiltonian system.
|
We perform a careful investigation of the problem of physically realistic
gravitational collapse of massive stars in f(R)-gravity. We show that the extra
matching conditions that arise in the modified gravity impose strong
constraints on the stellar structure and thermodynamic properties. In our
opinion these constraints are unphysical. We prove that no homogeneous stars
with non-constant Ricci scalar can be matched smoothly with a static exterior
for any nonlinear function f(R). Therefore, these extra constraints render
classes of physically realistic collapse scenarios in general relativity
non-admissible in these theories. We also find an exact solution for an
inhomogeneous collapsing star in the Starobinsky model that obeys all the
energy and matching conditions. However, we argue that such solutions are
fine-tuned and unstable to matter perturbations. Possible consequences on black
hole physics and the cosmic censorship conjecture are also discussed.
|
We revise the structure-preserving finite element method in [K. Hu, Y. Ma and
J. Xu (2017), Stable finite element methods preserving $\nabla \cdot
\mathbf{B}=0$ exactly for MHD models, Numer. Math., 135, 371-396]. The revised
method is semi-implicit in the time discretization. We prove that the linearized
scheme preserves the divergence-free property of the magnetic field exactly at
each time step. Further, we show that the linearized scheme is unconditionally
stable, and we obtain optimal convergence in the energy norm of the revised
method even for solutions with low regularity.
|
Doping a topological insulator (TI) film with transition metal ions can break
its time-reversal symmetry and lead to the realization of the quantum anomalous
Hall (QAH) effect. Prior studies have shown that the longitudinal resistance of
the QAH samples usually does not vanish when the Hall resistance shows a good
quantization. This has been interpreted as a result of the presence of possible
dissipative conducting channels in magnetic TI samples. By studying the
temperature- and magnetic field-dependence of the magnetoresistance of a
magnetic TI sandwich heterostructure device, we demonstrate that the
predominant dissipation mechanism in thick QAH insulators can switch between
non-chiral edge states and residual bulk states in different magnetic field
regimes. The interactions between bulk states, chiral edge states, and
non-chiral edge states are also investigated. Our study provides a way to
distinguish between the dissipation arising from the residual bulk states and
non-chiral edge states, which is crucial for achieving true dissipationless
transport in QAH insulators and for providing deeper insights into QAH-related
phenomena.
|
Deep generative networks can simulate from a complex target distribution, by
minimizing a loss with respect to samples from that distribution. However,
often we do not have direct access to our target distribution - our data may be
subject to sample selection bias, or may be from a different but related
distribution. We present methods based on importance weighting that can
estimate the loss with respect to a target distribution, even if we cannot
access that distribution directly, in a variety of settings. These estimators,
which differentially weight the contribution of data to the loss function,
offer both theoretical guarantees and impressive empirical performance.
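
A generic sketch of the importance-weighting idea (not the paper's specific
estimators; function and argument names are invented): when samples come from
an accessible proposal distribution q but the loss should be evaluated under
the target p, each sample's loss is reweighted by p(x)/q(x), optionally
self-normalizing the weights.

```python
import numpy as np

def importance_weighted_loss(samples, loss_fn, target_pdf, proposal_pdf,
                             self_normalize=True):
    """Estimate E_p[loss] using samples drawn from the proposal distribution q."""
    losses = np.array([loss_fn(x) for x in samples])
    w = np.array([target_pdf(x) / proposal_pdf(x) for x in samples])
    if self_normalize:                      # robust when densities are unnormalized
        return np.sum(w * losses) / np.sum(w)
    return np.mean(w * losses)
```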
|
The radiative baryonic decay
$\mathcal{B}^{*}\left(\frac{3}{2}\right)\to\mathcal{B}\left(\frac{1}{2}\right)+\gamma$
is a magnetic dipole $(M1)$ transition. It requires the transition magnetic
moment $\mu_{\mathcal{B}\left(3/2\right)\to\mathcal{B}\left(1/2\right)}$. The
transition magnetic moments for the helicities $1/2$ and $3/2$ are evaluated in
the framework of the constituent quark model, in which the intrinsic spin and
the magnetic moments of the quarks $u,d$ and $s$ play a key role. Within this
framework, the radiative decays $\Delta^{+}\to p +\gamma$, $\Sigma^{*0}\to
\Lambda+\gamma$, $\Sigma^{*+}\to \Sigma^{+}+\gamma$ and $\Xi^{*0}\to
\Xi^{0}+\gamma$ are analyzed in detail. The branching ratios for these decays
are found to be in good agreement with the corresponding experimental values.
|
These notes are based on a seminar which took place in the autumn of 2022 at
the Mathematical Institute of the University of Leiden.
Its goal was to understand the recent work of J. Evans and Y. Lekili on the
symplectic cohomology of the Milnor fiber for specific classes of isolated
singularities. This work uses inputs from several fields, notably from
algebraic geometry, in particular singularity theory, and from symplectic
geometry. The main aim of the notes is to make the work of J. Evans and Y.
Lekili more accessible by explaining the main ideas from these fields and
indicating how they play a role in this work.
|
A number of concurrent, relaxed priority queues have recently been proposed
and implemented. Results are commonly reported for a throughput benchmark that
uses a uniform distribution of keys drawn from a large integer range, and
mostly for single systems. We have conducted more extensive benchmarking of
three recent, relaxed priority queues on four different types of systems with
different key ranges and distributions. While we can show superior throughput
and scalability for our own k-LSM priority queue for the uniform key
distribution, the picture changes drastically for other distributions, both
with respect to achieved throughput and relative merit of the priority queues.
The throughput benchmark alone is thus not sufficient to characterize the
performance of concurrent priority queues. Our benchmark code and k-LSM
priority queue are publicly available to foster future comparison.
|
We present new families of weighted homogeneous and Newton non-degenerate
line singularities that satisfy the Zariski multiplicity conjecture.
|
The paper describes two Borel-measurable functions from a measure space into
a locally convex space such that the image measure for each function is Radon
but their sum is not Borel-measurable.
|
Gait recognition is a promising video-based biometric for identifying
individual walking patterns from a long distance. At present, most gait
recognition methods use silhouette images to represent a person in each frame.
However, silhouette images can lose fine-grained spatial information, and most
papers do not address how to obtain these silhouettes in complex scenes.
Furthermore, silhouette images contain not only gait features but also other
visual cues that can be recognized. Hence these approaches cannot be
considered strict gait recognition.
We leverage recent advances in human pose estimation to estimate robust
skeleton poses directly from RGB images to bring back model-based gait
recognition with a cleaner representation of gait. Thus, we propose GaitGraph
that combines skeleton poses with Graph Convolutional Network (GCN) to obtain a
modern model-based approach for gait recognition. The main advantages are a
cleaner, more elegant extraction of the gait features and the ability to
incorporate powerful spatio-temporal modeling using GCN. Experiments on the
popular CASIA-B gait dataset show that our method achieves state-of-the-art
performance in model-based gait recognition.
The code and models are publicly available.
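
As a minimal illustration of the graph-convolution building block mentioned
above (the standard normalized-adjacency propagation rule, not GaitGraph's
actual architecture; array shapes are hypothetical), a single layer mixes joint
features along the bone connectivity and applies a linear map:

```python
import numpy as np

def gcn_layer(X, A, W):
    """X: (joints, feat) joint features, A: (joints, joints) adjacency, W: (feat, out)."""
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# X could hold 2D joint coordinates of one frame, A the skeleton's bone adjacency.
```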
|
We discuss in detail the properties of gravity with a negative cosmological
constant as viewed in Chern-Simons theory on a line times a disc. We reanalyze
the problem of computing the BTZ entropy, and show how the demand of unitarity
and modular invariance of the boundary conformal field theory severely
constrain proposals in this framework.
|
Causal viscous hydrodynamic fits to experimental data for pion and kaon
transverse momentum spectra from central Au+Au collisions at $\sqrt{s_{NN}}=200$
GeV are presented. Starting the hydrodynamic evolution at 1 fm/c and using
small values for the relaxation time, reasonable fits up to moderate ratios
$\eta/s\simeq 0.4$ can be obtained. It is found that a percentage of roughly
$50\,\eta/s$ to $75\,\eta/s$ of the final meson multiplicity is due to viscous
entropy production. Finally, it is shown that with increasing viscosity, the
ratio of HBT radii $R_{out}/R_{side}$ approaches and eventually matches the
experimental
data.
|
A self-adaptive software system modifies its behavior at runtime in response
to changes within the system or in its execution environment. The fulfillment
of the system requirements needs to be guaranteed even in the presence of
adverse conditions and adaptations. Thus, a key challenge for self-adaptive
software systems is assurance. Traditionally, confidence in the correctness of
a system is gained through a variety of activities and processes performed at
development time, such as design analysis and testing. In the presence of
self-adaptation, however, some of the assurance tasks may need to be performed
at runtime. This need calls for the development of techniques that enable
continuous assurance throughout the software life cycle. Fundamental to the
development of runtime assurance techniques is research into the use of models
at runtime (M@RT). This chapter explores the state of the art for using M@RT to
address the assurance of self-adaptive software systems. It defines what
information can be captured by M@RT, specifically for the purpose of assurance,
and puts this definition into the context of existing work. We then outline key
research challenges for assurance at runtime and characterize assurance
methods. The chapter concludes with an exploration of selected application
areas where M@RT could provide significant benefits beyond existing assurance
techniques for adaptive systems.
|
Charge transfer along the base-pair stack in DNA is modeled in terms of
thermally-assisted tunneling between adjacent base pairs. Central to our
approach is the notion that tunneling between fluctuating pairs is rate-limited
by the requirement of their optimal alignment. We focus on this aspect of the
process by modeling two adjacent base pairs in terms of a classical damped
oscillator subject to thermal fluctuations as described by a Fokker-Planck
equation. We find that the process is characterized by two time scales, a
result that is in accord with experimental findings.
|
We introduce hardness in relative entropy, a new notion of hardness for
search problems which on the one hand is satisfied by all one-way functions and
on the other hand implies both next-block pseudoentropy and inaccessible
entropy, two forms of computational entropy used in recent constructions of
pseudorandom generators and statistically hiding commitment schemes,
respectively. Thus, hardness in relative entropy unifies the latter two notions
of computational entropy and sheds light on the apparent "duality" between
them. Additionally, it yields a more modular and illuminating proof that
one-way functions imply next-block inaccessible entropy, similar in structure
to the proof that one-way functions imply next-block pseudoentropy (Vadhan and
Zheng, STOC '12).
|
We identify an abundant population of extreme emission line galaxies (EELGs)
at redshift z~1.7 in the Cosmic Assembly Near-IR Deep Extragalactic Legacy
Survey (CANDELS) imaging from Hubble Space Telescope/Wide Field Camera 3
(HST/WFC3). 69 EELG candidates are selected by the large contribution of
exceptionally bright emission lines to their near-infrared broad-band
magnitudes. Supported by spectroscopic confirmation of strong [OIII] emission
lines -- with rest-frame equivalent widths ~1000\AA -- in the four candidates
that have HST/WFC3 grism observations, we conclude that these objects are
galaxies with 10^8 Msol in stellar mass, undergoing an enormous starburst phase
with M_*/(dM_*/dt) of only ~15 Myr. These bursts may cause outflows that are
strong enough to produce cored dark matter profiles in low-mass galaxies. The
individual star formation rates and the co-moving number density (3.7x10^-4
Mpc^-3) can produce in ~4 Gyr much of the stellar mass density that is
presently contained in 10^8-10^9 Msol dwarf galaxies. Therefore, our
observations provide a strong indication that many or even most of the stars in
present-day dwarf galaxies formed in strong, short-lived bursts, mostly at z>1.
|
The accurate sampling of protein dynamics is an ongoing challenge despite the
utilization of High-Performance Computers (HPC) systems. Utilizing only "brute
force" MD simulations requires an unacceptably long time to solution. Adaptive
sampling methods allow a more effective sampling of protein dynamics than
standard MD simulations. Depending on the restarting strategy the speed up can
be more than one order of magnitude. One challenge limiting the utilization of
adaptive sampling by domain experts is the relatively high complexity of
efficiently running adaptive sampling on HPC systems. We discuss how the ExTASY
framework can set up new adaptive sampling strategies, and reliably execute
resulting workflows at scale on HPC platforms. Here the folding dynamics of
four proteins are predicted with no a priori information.
|
In the paper the thermal energy transfer for elementary particles is
described. The quantum heat transport equation is obtained. It is shown that
for thermal excitation on the order of the relaxation time, the excited matter
response is quantized at the different levels (atomic, nuclear, quark) with
quantum thermal energies equal to E^{atomic}=9 eV, E^{nuclear}=7 MeV and
E^{quark}=139 MeV. As a result, the quantum for the heating process of
nucleons is the pi-meson (consisting of two quarks).
Keywords: Heat quanta; Quantum heat transport; Quantum diffusion coefficient.
|
It is shown that there exists a charge five monopole with octahedral symmetry
and a charge seven monopole with icosahedral symmetry. A numerical
implementation of the ADHMN construction is used to calculate the energy
density of these monopoles and surfaces of constant energy density are
displayed. The charge five and charge seven monopoles look like an octahedron
and a dodecahedron respectively. A scattering geodesic for each of these
monopoles is presented and discussed using rational maps. This is done with the
aid of a new formula for the cluster decomposition of monopoles when the poles
of the rational map are close together.
|
Spectropolarimetry of distant sources of electromagnetic radiation at
wavelengths ranging from infrared to ultraviolet is used to constrain Lorentz
violation. A bound of 3x10^{-32} is placed on coefficients for Lorentz
violation.
|
For light harvesters with a reaction center complex (LH1-RC complex) of three
types, we propose an experiment to verify our analysis based upon antenna
theories that automatically include the required structural information. Our
analysis conforms to the current understanding of light-harvesting antennae in
that we can explain known properties of these complexes. We provide an
explanation for the functional roles of the notch at the light harvester, a
functional role of the polypeptide called PufX or W at the opening, a
functional role of the special pair, a reason that the cross section of the
light harvester must not be circular, a reason that the light harvester must
not be spherical, reasons for the use of dielectric bacteriochlorophylls
instead of conductors to make the light harvester, a mechanism to prevent
damage from excess sunlight, an advantage of the dimeric form, and reasons for
the modular design of nature. Based upon our analysis we provide a mechanism
for dimerization. We predict the dimeric form of light-harvesting complexes is
favoured under intense sunlight. We further comment upon the classification of
the dimeric or S-shape complexes. The S-shape complexes should not be
considered as the third type of light harvester but simply as a composite form.
|
We have observed period-tripling subharmonic oscillations, in a
superconducting coplanar waveguide resonator operated in the quantum regime,
$k_B T \ll \hbar\omega$. The resonator is terminated by a tunable inductance
that provides a Kerr-type nonlinearity. We detected the output field
quadratures at frequencies near the fundamental mode, $\omega/2\pi \sim
5\,$GHz, when the resonator was driven by a current at $3\omega$ with an
amplitude exceeding an instability threshold. The output radiation was
red-detuned from the fundamental mode. We observed three stable radiative
states with equal amplitudes and phase-shifted by $120^\circ$. The
downconversion from $3\omega$ to $\omega$ is strongly enhanced by resonant
excitation of the second mode of the resonator, and the cross-Kerr effect. Our
experimental results are in quantitative agreement with a model for the driven
dynamics of two coupled modes.
|
Finding representative reaction pathways is necessary for understanding
mechanisms of molecular processes, but is considered to be extremely
challenging. We propose a new method to construct reaction paths based on mean
first-passage times. This approach incorporates information of all possible
reaction events as well as the effect of temperature. The method is applied to
exemplary reactions in a continuous and in a discrete setting. The suggested
approach holds great promise for large reaction networks that are completely
characterized by the method through a pathway graph.
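
In the discrete setting, mean first-passage times can be obtained from a linear
system; the sketch below illustrates this for a toy three-state Markov chain
(the transition matrix is arbitrary and not taken from the paper): for a target
state $t$, the MFPTs satisfy $m_i = 1 + \sum_j P_{ij} m_j$ for $i \neq t$ and
$m_t = 0$.

```python
import numpy as np

def mean_first_passage_times(P, target):
    """MFPT to `target` for a discrete-time Markov chain with transition matrix P."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]          # transitions among non-target states
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    full = np.zeros(n)
    full[others] = m
    return full                            # full[target] = 0 by construction

P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
print(mean_first_passage_times(P, target=2))
```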
|
Compiler backends should be automatically generated from hardware design
language (HDL) models of the hardware they target. Generating compiler
components directly from HDL can provide stronger correctness guarantees, ease
development effort, and encourage hardware exploration. Past work has already
championed this idea; here we argue that advances in program synthesis make the
approach more feasible. We present a concrete example by demonstrating how FPGA
technology mappers can be automatically generated from SystemVerilog models of
an FPGA's primitives using program synthesis.
|
It was previously noted that SU(5) unification can be achieved via the simple
addition of light scalar leptoquarks from two split $\bf10$ multiplets. We
explore the parameter space of this model in detail and find that unification
requires at least one leptoquark to have mass below $\approx16\,$TeV. We point
out that introducing splitting of the $\bf24$ allows the unification scale to
be raised beyond $10^{16}$ GeV, while a U(1)$_{PQ}$ symmetry can be imposed to
forbid dangerous proton decay mediated by the light leptoquarks. The latest
bounds from LHC searches are combined and we find that a leptoquark as light as
400 GeV is still permitted. Finally, we discuss the interesting possibility
that the leptoquarks required for unification could also be responsible for the
$2.6\sigma$ deviation observed in the ratio $R_K$ at LHCb.
|
Resonant absorption of a photon by bound electrons in a solid can promote an
electron to another orbital state or transfer it to a neighboring atomic site.
Such a transition in a magnetically ordered material could affect the magnetic
order. While this process is an obvious road map for optical control of
magnetization, experimental demonstration of such a process remains
challenging. Exciting a significant fraction of magnetic ions requires a very
intense incoming light beam, as orbital resonances are often weak compared to
above-band-gap excitations. In the latter case, a sizeable reduction of the
magnetization occurs as the absorbed energy increases the spin temperature,
masking the non-thermal optical effects. Here, using ultrafast x-ray
spectroscopy, we were able to resolve changes in the magnetization state
induced by resonant absorption of infrared photons in Co-doped yttrium iron
garnet, with negligible thermal effects. We found that the optical excitation
of the Co ions affects the two distinct magnetic Fe sublattices differently,
resulting in a transient non-collinear magnetic state. The present results
indicate that the all-optical magnetization switching most likely occurs due to
the creation of a transient, non-collinear magnetic state followed by coherent
spin rotations of the Fe moments.
|
Pulsatile flows are common in nature and in applications, but their stability
and transition to turbulence are still poorly understood. Even in the simple
case of pipe flow subject to harmonic pulsation, there is no consensus among
experimental studies on whether pulsation delays or enhances transition. We
here report direct numerical simulations of pulsatile pipe flow at low
pulsation amplitude A<0.4. We use a spatially localized impulsive disturbance
to generate a single turbulent puff and track its dynamics as it travels
downstream. The computed relaminarization statistics are in quantitative
agreement with the experiments of Xu et al. (J. Fluid Mech., vol. 831, 2017,
pp. 418-432) and support the conclusion that increasing the pulsation amplitude
and lowering the frequency enhance the stability of the flow. In the
high-frequency regime, the behaviour of steady pipe flow is recovered. In
addition, we show that when the pipe length does not permit the observation of
a full cycle, a reduction of the transition threshold is observed. We obtain an
equation quantifying this effect and compare it favourably with the
measurements of Stettler & Hussain (J. Fluid Mech., vol. 170, 1986, pp.
169-197). Our results resolve previous discrepancies, which are due to
different pipe lengths, perturbation methods and criteria chosen to quantify
transition in experiments.
|
Let $\alpha: G\curvearrowright X$ be a continuous action of an infinite
countable group on a compact Hausdorff space. We show that, under the
hypothesis that the action $\alpha$ is topologically free and has no
$G$-invariant regular Borel probability measure on $X$, dynamical comparison
implies that the reduced crossed product of $\alpha$ is purely infinite and
simple. This result, as an application, shows a dichotomy between stable
finiteness and pure infiniteness for reduced crossed products arising from
actions satisfying dynamical comparison. We also introduce the concepts of
paradoxical comparison and the uniform tower property. Under the hypothesis
that the action $\alpha$ is exact and essentially free, we show that
paradoxical comparison together with the uniform tower property implies that
the reduced crossed product of $\alpha$ is purely infinite. As applications, we
provide new results on pure infiniteness of reduced crossed products in which
the underlying spaces are not necessarily zero-dimensional. Finally, we study
the type semigroups of actions on the Cantor set in order to establish the
equivalence of almost unperforation of the type semigroup and comparison. This
sheds light on a question arising in the paper of R{\o}rdam and Sierakowski.
|
When implementing secure software, developers must ensure certain
requirements, such as the erasure of secret data after its use and execution in
real time. Such requirements are not explicitly captured by the C language and
could potentially be violated by compiler optimizations. As a result,
developers typically use indirect methods to hide their code's semantics from
the compiler and avoid unwanted optimizations. However, such workarounds are
not permanent solutions, as increasingly efficient compiler optimizations render
code that was considered secure in the past vulnerable. This paper is a
literature review of (1) the security complications caused by compiler
optimizations, (2) approaches used by developers to mitigate optimization
problems, and (3) recent academic efforts towards enabling security engineers
to communicate implicit security requirements to the compiler. In addition, we
present a short study of six cryptographic libraries and how they approach the
issue of ensuring security requirements. With this paper, we highlight the need
for software developers and compiler designers to work together in order to
design efficient systems for writing secure software.
|
We consider two nonparametric estimators for the risk measure of the sum of
$n$ i.i.d. individual insurance risks where the number of historical single
claims that are used for the statistical estimation is of order $n$. This
framework matches the situation that nonlife insurance companies are faced with
in the scope of premium calculation. Indeed, the risk measure of the
aggregate risk divided by $n$ can be seen as a suitable premium for each of the
individual risks. For both estimators divided by $n$ we derive a sort of
Marcinkiewicz--Zygmund strong law as well as a weak limit theorem. The behavior
of the estimators for small to moderate $n$ is studied by means of Monte-Carlo
simulations.
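
As a generic point of reference (this is a plain Monte-Carlo bootstrap, not
either of the paper's two estimators; names and the choice of Value-at-Risk are
illustrative), a per-risk premium could be approximated from historical single
claims as follows:

```python
import numpy as np

def premium_from_claims(claims, n, level=0.99, n_sim=100_000, seed=0):
    """Empirical VaR of the sum of n i.i.d. risks, per risk, bootstrapped from claims."""
    rng = np.random.default_rng(seed)
    sums = rng.choice(claims, size=(n_sim, n), replace=True).sum(axis=1)
    return np.quantile(sums, level) / n
```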
|
We formalize the theory of forcing in the set theory framework of
Isabelle/ZF. Under the assumption of the existence of a countable transitive
model of ZFC, we construct a proper generic extension and show that the latter
also satisfies ZFC. In doing so, we remodularized Paulson's ZF-Constructibility
library.
|
Recent experimental advances probing coherent phonon and electron transport
in nanoscale devices at contact have motivated theoretical channel-based
analyses of conduction based on the nonequilibrium Green's function formalism.
The transmission through each channel has been known to be bounded above by
unity, yet actual transmissions in typical systems often fall far below these
limits. Building upon recently derived radiative heat transfer limits and a
unified formalism characterizing heat transport for arbitrary bosonic systems
in the linear regime, we propose new bounds on conductive heat transfer. In
particular, we demonstrate that our limits are typically far tighter than the
Landauer limits per channel and are close to actual transmission eigenvalues by
examining a model of phonon conduction in a 1-dimensional chain. Our limits
have ramifications for designing molecular junctions to optimize conduction.
|
The first direct observation of a binary neutron star (BNS) merger was a
watershed moment in multi-messenger astronomy. However, gravitational waves
from GW170817 have only been observed prior to the BNS merger, whereas the
electromagnetic observations all follow the merger event. While the post-merger
gravitational wave signal in general relativity is too faint (given current
detector sensitivities), here we present the first tentative detection of
post-merger gravitational wave "echoes" from a highly spinning "black hole"
remnant. The echoes may be expected in different models of quantum black holes
that replace event horizons by exotic Planck-scale structure and tentative
evidence for them has been found in binary black hole merger events. The fact
that the echo frequency is suppressed by $\log M$ (in Planck units) puts it
squarely in the LIGO sensitivity window, allowing us to build an optimal
model-agnostic search strategy via cross-correlating the two detectors in
frequency/time. We find a tentative detection of echoes at $f_{\rm echo} \simeq
72$ Hz, around 1.0 sec after the BNS merger, consistent with a 2.6-2.7
$M_\odot$ "black hole" remnant with dimensionless spin $0.84-0.87$. Accounting
for all the "look-elsewhere" effects, we find a significance of $4.2 \sigma$,
or a false alarm probability of $1.6\times 10^{-5}$, i.e. a similar
cross-correlation within the expected frequency/time window after the merger
cannot be found more than 4 times in 3 days. If confirmed, this finding will
have significant consequences for both physics of quantum black holes and
astrophysics of binary neutron star mergers [Note added: This result is
independently confirmed by arXiv:1901.04138, who use the electromagnetic
observations to infer $t_{\rm coll}=0.98^{+0.31}_{-0.26}$ sec for black hole
formation].
|
The possible spectra of one-particle reduced density matrices that are
compatible with a pure multipartite quantum system of finite dimension form a
convex polytope. We introduce a new construction of inner- and outer-bounding
polytopes that constrain the polytope for the entire quantum system. The outer
bound is sharp. The inner polytope stems only from doubly excited states. We
find all quantum systems, where the bounds coincide giving the entire polytope.
We show, that those systems are: i) any system of two particles ii) $L$ qubits,
iii) three fermions on $N\leq 7$ levels, iv) any number of bosons on any number
of levels and v) fermionic Fock space on $N\leq 5$ levels. The methods we use
come from symplectic geometry and representation theory of compact Lie groups.
In particular, we study the images of proper momentum maps, where our method
describes momentum images for all representations that are spherical.
|
Establishing entanglement between distant parties is one of the most
important problems of quantum technology, since long-distance entanglement is
an essential part of such fundamental tasks as quantum cryptography or quantum
teleportation. In this lecture we review basic properties of entanglement and
quantum discord, and discuss recent results on entanglement distribution and
the role of quantum discord therein. We also review entanglement distribution
with separable states, and discuss important problems which still remain open.
One such open problem is a possible advantage of indirect entanglement
distribution, when compared to direct distribution protocols.
|
Recently, noticeable progress has been achieved in the area of high-temperature
superconductors. Maximum temperatures of 250 K for LaH(10) and 288 K for CSH(8)
were reported at megabar pressures. The highest possible temperatures were
achieved by employing hydrides of chemical elements. Empirically, many of these
are made of Madelung-exceptional atoms. Here the theoretical background
explaining this observation is provided. The thus far empirical Madelung rule
controls Mendeleev's law of periodicity. Although the majority of elements do
obey this rule, there are some exceptions. Thus, it is of interest to derive the
rule and its exceptions theoretically in view of the experimental findings. As a
by-product, such a study yields a plausible explanation of the role of
Madelung-exceptional atoms in the design of high-temperature superconductors.
Thus far the atoms obeying the Madelung rule and its exceptions have been
studied with the help of relativistic Hartree-Fock calculations. In this work we
reobtain both the rule and the exceptions analytically. The newly developed
methods are expected to be of value in quantum many-body theory and, in
particular, in the theory of high-temperature superconductivity. Ultimately, the
new methods involve some uses of Seiberg-Witten (S-W) theory, known as the
extended Ginzburg-Landau theory of superconductivity. Using results of the S-W
theory, the difference between Madelung-regular and Madelung-exceptional atoms
is explained in terms of a topological transition. Extension of this
single-atom result to solids of the respective elements is also discussed.
|
Polarization of $\Lambda$ hyperons and their antiparticles is calculated in a
3+1 dimensional viscous hydrodynamic model with initial state from UrQMD
hadron/string cascade. We find that, along with recent results from STAR, the
mean polarization at fixed centrality decreases as a function of collision
energy from 1.5% at $\sqrt{s_{\rm NN}}=7.7$ GeV to 0.2% at $\sqrt{s_{\rm
NN}}=200$ GeV. We explore the effects which lead to such collision energy
dependence, feed-down corrections and a difference between $\Lambda$ and
$\bar\Lambda$.
|
This paper presents a method for incorporating risk aversion into existing
decision tree models used in economic evaluations. The method involves applying
a probability weighting function based on rank dependent utility theory to
reduced lotteries in the decision tree model. This adaptation embodies the fact
that different decision makers can observe the same decision tree model
structure but come to different conclusions about the optimal treatment. The
proposed solution to this problem is to compensate risk-averse decision makers
to use the efficient technology that they are reluctant to adopt.
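
A small sketch of the rank-dependent weighting step (the Tversky-Kahneman
weighting function is used here only as a common stand-in; the paper does not
prescribe this particular form, and the outcomes, probabilities and utility
below are illustrative): outcome probabilities of a reduced lottery are replaced
by differences of weighted decumulative probabilities.

```python
import numpy as np

def weight(p, gamma=0.65):
    """One-parameter probability weighting function (illustrative choice)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rank_dependent_value(outcomes, probs, utility=lambda x: x, gamma=0.65):
    """Evaluate a reduced lottery under rank-dependent utility."""
    order = np.argsort(outcomes)[::-1]              # best outcome first
    p = np.asarray(probs, float)[order]
    x = np.asarray(outcomes, float)[order]
    cum = np.cumsum(p)
    pi = weight(cum, gamma) - weight(np.concatenate(([0.0], cum[:-1])), gamma)
    return float(np.sum(pi * utility(x)))

# toy reduced lottery: 0.7 chance of a QALY gain of 2.0, 0.3 chance of 0.5
value = rank_dependent_value([2.0, 0.5], [0.7, 0.3])
```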
|
The tau neutrino is the least studied lepton of the Standard Model (SM). The
NA65/DsTau experiment aims to investigate $D_s$, the parent particle of the
$\nu_\tau$, using a nuclear emulsion-based detector, and to decrease the
systematic uncertainty of the $\nu_\tau$ flux prediction from over 50% to 10%
for future beam dump experiments. In the experiment, the emulsion detectors are
exposed to the CERN SPS 400 GeV proton beam. To provide optimal conditions for
the reconstruction of interactions, the protons are required to be uniformly
distributed over the detector's surface with an average density of
$10^5~\rm{cm^{-2}}$ and a fluctuation of less than 10%. To address this
issue, we developed a new proton irradiation system called the target mover.
The new target mover provided irradiation with a proton density of
$0.98\times10^5~\rm{cm^{-2}}$ and a density fluctuation of $2.0\pm 0.3$% in the
DsTau 2021 run.
|
The nucleon-nucleon J-matrix Inverse Scattering Potential JISP16 is applied
to elastic nucleon-deuteron (Nd) scattering and the deuteron breakup process at
laboratory nucleon energies up to 135 MeV. The formalism of the Faddeev equations
is used to obtain 3N scattering states. We compare predictions based on the
JISP16 force with data and with results based on various NN interactions: the
CD Bonn, the AV18, the chiral force with the semi-local regularization at the
5th order of the chiral expansion and with low-momentum interactions obtained
from the CD Bonn force as well as with the predictions from the combination of
the AV18 NN interaction and the Urbana IX 3N force. JISP16 provides a
satisfactory description of some observables at low energies but strong
deviations from data as well as from standard and chiral potential predictions
with increasing energy. However, there are also polarization observables at low
energies for which the JISP16 predictions differ from those based on the other
forces by a factor of two. The reason for such behavior can be traced back to
the P-wave components of the JISP16 force. At higher energies the deviations
can be enhanced by an interference with higher partial waves and by the
properties of the JISP16 deuteron wave function. In addition, we compare the
energy and angular dependence of predictions based on the JISP16 force with the
results of the low-momentum forces obtained with different values of the
momentum cutoff parameter. We found that such low-momentum forces can be
employed to interpret the Nd elastic scattering data only below some specific
energy which depends on the cutoff parameter. Since JISP16 is defined in a
finite oscillator basis, it has properties similar to low momentum interactions
and its application to the description of Nd scattering data is limited to a
low momentum transfer region.
|
A cactus graph is a connected graph in which every block is either an edge or
a cycle. In this paper, we consider several problems of graph theory and
develop optimal algorithms to solve such problems on cactus graphs. The
running time of these algorithms is O(n), where n is the total number of
vertices of the graph. The cactus graph has many applications in real-life
problems, especially in radio communication systems.
|
We study resonant x-ray scattering (RXS) at Np M_{4,5} edges in the
triple-\textbf{k} multipole ordering phase in NpO_{2}, on the basis of a
localized electron model. We derive an expression for RXS amplitudes to
characterize the spectra under the assumption that a rotational invariance is
preserved in the intermediate state of scattering process. This assumption is
justified by the fact that the energies of the crystal electric field and the
intersite interaction are smaller than the energy of the multiplet structures.
This expression is found useful for calculating energy profiles taking account
of the intra-Coulomb and spin-orbit interactions. Assuming the \Gamma_{8}-quartet
ground state, we construct the triple-\textbf{k} ground state, and analyze the
RXS spectra. The energy profiles are calculated in good agreement with the
experiment, providing a sound basis to previous phenomenological analyses.
|
It is a well-known and elementary fact that a holomorphic function on a
compact complex manifold without boundary is necessarily constant. The purpose
of the present article is to investigate whether, or to what extent, a similar
property holds in the setting of holomorphically foliated spaces.
|
Large-scale image retrieval benchmarks invariably consist of images from the
Web. Many of these benchmarks are derived from online photo sharing networks,
like Flickr, which in addition to hosting images also provide a highly
interactive social community. Such communities generate rich metadata that can
naturally be harnessed for image classification and retrieval. Here we study
four popular benchmark datasets, extending them with social-network metadata,
such as the groups to which each image belongs, the comment thread associated
with the image, who uploaded it, their location, and their network of friends.
Since these types of data are inherently relational, we propose a model that
explicitly accounts for the interdependencies between images sharing common
properties. We model the task as a binary labeling problem on a network, and
use structured learning techniques to learn model parameters. We find that
social-network metadata are useful in a variety of classification tasks, in
many cases outperforming methods based on image content.
|
We present a review of atmospheric muon flux and energy spectrum measurements
over almost six decades of muon momentum. Sea-level and underground/water/ice
experiments are considered. Possible sources of systematic errors in the
measurements are examined. The characteristics of underground/water muons
(muons in bundles, lateral distribution, energy spectrum) are discussed. The
connection between the atmospheric muon and neutrino measurements is also
reported.
|
The low-lying states in 106Zr and 108Zr have been investigated by means of
$\beta$-$\gamma$ and isomer spectroscopy at the RI beam factory, respectively.
A new isomer with a half-life of $620\pm150$ ns has been identified in 108Zr. For
the sequence of even-even Zr isotopes, the excitation energies of the first 2+
states reach a minimum at N = 64 and gradually increase as the neutron number
increases up to N = 68, suggesting a deformed sub-shell closure at N = 64. The
deformed ground state of 108Zr indicates that a spherical sub-shell gap
predicted at N = 70 is not large enough to change the ground state of 108Zr to
the spherical shape. The possibility of a tetrahedral shape isomer in 108Zr is
also discussed.
|
Serious searches for the weakly interacting massive particle (WIMP) have now
begun. In this context, the most important questions that need to be addressed
are: "To what extent can we constrain the WIMP models in the future?" and "What
will then be the remaining unexplored regions in the WIMP parameter space for
each of these models?" In our quest to answer these questions, we classify WIMP
in terms of quantum number and study each case adopting minimality as a guiding
principle. As a first step, we study one of the simple cases of the minimal
composition in the well-tempered fermionic WIMP regime, namely the
singlet-doublets WIMP model. We consider all available constraints from direct
and indirect searches and also the predicted constraints coming from the near
future and the future experiments. We thus obtain the current status, the near
future prospects and the future prospects of this model in all its generality.
We find that in the future, this model will be constrained almost solely by the
future direct dark matter detection experiments (as compared to the weaker
indirect and collider constraints) and the cosmological (relic density)
constraints and will hence be gradually pushed to the corner of the
coannihilation region, if no WIMP signal is detected. Future lepton colliders
will then be useful in exploring this region not constrained by any other
experiments.
|
The third homology group of GL_n(R) is studied, where R is a `ring with many
units' with center Z(R). The main theorem states that if K_1(Z(R))_Q \simeq
K_1(R)_Q, (e.g. R a commutative ring or a central simple algebra), then
H_3(GL_2(R), Q) --> H_3(GL_3(R), Q) is injective. If R is commutative, Q can be
replaced by a field k such that 1/2 is in k. For an infinite field R (resp. an
infinite field R such that R*=R*^2), we get a better result that H_3(GL_2(R),
Z[1/2]) --> H_3(GL_3(R), Z[1/2]) (resp. H_3(GL_2(R), Z) --> H_3(GL_3(R), Z)) is
injective. As an application we study the third homology group of SL_2(R) and
the indecomposable part of K_3(R).
|
There has been an increasing use of master protocols in oncology clinical
trials because of their efficiency and flexibility in accelerating cancer drug
development. Depending on the study objective and design, a master protocol
trial can be a basket trial, an umbrella trial, a platform trial, or any other
form of trials in which multiple investigational products and/or subpopulations
are studied under a single protocol. Master protocols can use external data and
evidence (e.g., external controls) for treatment effect estimation, which can
further improve efficiency of master protocol trials. This paper provides an
overview of different types of external controls and their unique features when
used in master protocols. Some key considerations in master protocols with
external controls are discussed including construction of estimands, assessment
of fit-for-use real-world data, and considerations for different types of
master protocols. Similarities and differences between regular randomized
controlled trials and master protocols when using external controls are
discussed. A targeted learning-based causal roadmap is presented which
constitutes three key steps: (1) define a target statistical estimand that
aligns with the causal estimand for the study objective, (2) use an efficient
estimator to estimate the target statistical estimand and its uncertainty, and
(3) evaluate the impact of causal assumptions on the study conclusion by
performing sensitivity analyses. Two illustrative examples for master protocols
using external controls are discussed for their merits and possible improvement
in causal effect estimation.
|
We study resolvent estimates and maximal regularity of the Stokes operator in
$L^q$-spaces with exponential weights in the axial directions of unbounded
cylinders of ${\mathbb R}^n, n\geq 3$. For straight cylinders we obtain these
results in Lebesgue spaces with exponential weights in the axial direction and
Muckenhoupt weights in the cross-section. Next, for general cylinders with
several exits to infinity we prove that the Stokes operator in $L^q$-spaces
with exponential weight along the axial directions generates an exponentially
decaying analytic semigroup and has maximal regularity.
The proofs for straight cylinders use an operator-valued Fourier multiplier
theorem and techniques of unconditional Schauder decompositions based on the
${\mathcal R}$-boundedness of the family of solution operators for a system in
the cross-section of the cylinder parametrized by the phase variable of the
one-dimensional partial Fourier transform. For general cylinders we use cut-off
techniques based on the result for straight cylinders and the result for the
case without exponential weight.
|
When applied to statistical systems showing an arctic curve phenomenon, the
tangent method assumes that a modification of the most external path does not
affect the arctic curve. We strengthen this statement and also make it more
concrete by observing a factorization property: if $Z^{}_{n+k}$ denotes a
refined partition function of a system of $n+k$ non-crossing paths, with the
endpoints of the $k$ most external paths possibly displaced, then at dominant
order in $n$, it factorizes as $Z^{}_{n+k} \simeq Z^{}_{n} Z_k^{\rm out}$ where
$Z_k^{\rm out}$ is the contribution of the $k$ most external paths. Moreover if
the shape of the arctic curve is known, we find that the asymptotic value of
$Z_k^{\rm out}$ is fully computable in terms of the large deviation function
$L$ introduced in \cite{DGR19} (also called Lagrangean function). We present
detailed verifications of the factorization in the Aztec diamond and for
alternating sign matrices by using exact lattice results. Reversing the
argument, we reformulate the tangent method in a way that no longer requires an
extension of the domain, and which reveals the hidden role of the $L$ function.
As a by-product, the factorization property provides an efficient way to
conjecture the asymptotics of multirefined partition functions.
|
Through international regulations (most prominently the latest UNECE
regulation) and standards, the already widely perceived need for stronger
cybersecurity in automotive systems has been formally recognized and will
mandate greater efforts in cybersecurity engineering. The UNECE also demands
that the effectiveness of this engineering be verified and validated through
testing. This requires both a significantly higher rate and greater
comprehensiveness of cybersecurity testing, which current, predominantly
manual, automotive cybersecurity testing techniques cannot cope with
effectively. To allow for comprehensive and efficient testing at all stages of
the automotive life cycle, including supply chain parts not at hand, and to
facilitate efficient third-party testing, as well as to test under real-world
conditions, methodologies for testing the cybersecurity of vehicular systems as
a black box are also necessary. This paper therefore presents a model- and
attack tree-based approach to (semi-)automate automotive cybersecurity testing,
as well as considerations for automatically deriving black-box models for use
in attack modeling.
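To make the attack-tree formalism concrete, the following minimal sketch models
an attack goal as AND/OR-gated sub-goals and evaluates whether the goal is
reachable from leaf-level test results; the node names and the hypothetical ECU
scenario are illustrative assumptions, not part of the presented approach.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AttackNode:
        # A goal decomposed into AND/OR sub-goals; leaves carry test outcomes.
        name: str
        gate: str = "OR"                 # "AND" or "OR"; ignored on leaves
        feasible: bool = False           # set on leaves from test results
        children: List["AttackNode"] = field(default_factory=list)

        def evaluate(self) -> bool:
            if not self.children:
                return self.feasible
            results = [child.evaluate() for child in self.children]
            return all(results) if self.gate == "AND" else any(results)

    # Hypothetical example: compromise an ECU via OBD-II or via a telematics unit.
    root = AttackNode("compromise ECU", "OR", children=[
        AttackNode("via OBD-II", "AND", children=[
            AttackNode("gain physical access", feasible=True),
            AttackNode("bypass diagnostic authentication", feasible=False),
        ]),
        AttackNode("via telematics unit", "AND", children=[
            AttackNode("reach cellular interface", feasible=True),
            AttackNode("exploit known vulnerability", feasible=True),
        ]),
    ])
    print("attack goal reachable:", root.evaluate())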
|
We present a sample of luminous red-sequence galaxies to study the
large-scale structure in the fourth data release of the Kilo-Degree Survey. The
selected galaxies are defined by a red-sequence template, in the form of a
data-driven model of the colour-magnitude relation conditioned on redshift. In
this work, the red-sequence template is built using the broad-band optical+near
infrared photometry of KiDS-VIKING and the overlapping spectroscopic data sets.
The selection process involves estimating the red-sequence redshifts, assessing
the purity of the sample, and estimating the underlying redshift distributions
of redshift bins. After performing the selection, we mitigate the impact of
survey properties on the observed number density of galaxies by assigning
photometric weights to the galaxies. We measure the angular two-point
correlation function of the red galaxies in four redshift bins, and constrain
the large scale bias of our red-sequence sample assuming a fixed $\Lambda$CDM
cosmology. We find consistent linear biases for two luminosity-threshold
samples (dense and luminous). We find that our constraints are well
characterized by the passive evolution model.
|
Nonlocal potential models have been used in place of the Coulomb potential in
the Schrodinger equation as an efficient means of exploring high-field
laser-atom interaction in previous works. Although these models have found use
in modeling phenomena including photo-ionization and ejected electron momentum
spectra, they are known to break electromagnetic gauge invariance. This paper
examines whether there is a preferred gauge for the linear field response and
photoionization characteristics of nonlocal atomic binding potentials in the
length and velocity gauges. It is found that the length gauge is preferable for
a wide range of parameters.
|
In this paper, we address the problem of recovering images degraded by
Poisson noise, where the image is known to belong to a specific class. In the
proposed method, a dataset of clean patches from images of the class of
interest is clustered using multivariate Gaussian distributions. In order to
recover the noisy image, each noisy patch is assigned to one of these
distributions, and the corresponding minimum mean squared error (MMSE) estimate
is obtained. We propose to use a self-normalized importance sampling approach,
which is a method of the Monte-Carlo family, for both determining the most
likely distribution and approximating the MMSE estimate of the clean patch.
Experimental results show that our proposed method outperforms other methods
for Poisson denoising in the low-SNR regime.
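A minimal sketch of this idea, assuming a known Gaussian-mixture patch prior:
for each component we draw samples from the prior, weight them by the Poisson
likelihood, pick the component with the highest estimated evidence, and return
the self-normalized MMSE estimate. Function names, the proposal choice, and the
sample counts are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def snis_poisson_mmse(y, means, covs, weights, n_samples=2000, seed=0):
        # Self-normalized importance sampling with the prior component as proposal.
        rng = np.random.default_rng(seed)
        best_log_evidence, best_estimate = -np.inf, None
        for mu, cov, w in zip(means, covs, weights):
            x = rng.multivariate_normal(mu, cov, size=n_samples)
            x = np.clip(x, 1e-6, None)                         # Poisson rates must be positive
            log_lik = (y[None, :] * np.log(x) - x).sum(axis=1) # log p(y|x) up to a constant in y
            m = log_lik.max()
            lik = np.exp(log_lik - m)
            log_evidence = np.log(w) + m + np.log(lik.mean())  # log of w * (1/N) sum p(y|x_i)
            estimate = (lik / lik.sum()) @ x                   # self-normalized MMSE estimate
            if log_evidence > best_log_evidence:
                best_log_evidence, best_estimate = log_evidence, estimate
        return best_estimate

    # Toy example: two 4-pixel patch clusters, one Poisson-noisy observation.
    means = [np.full(4, 5.0), np.full(4, 50.0)]
    covs = [np.eye(4) * 2.0, np.eye(4) * 20.0]
    y = np.random.default_rng(1).poisson(np.full(4, 48.0))
    print(snis_poisson_mmse(y, means, covs, [0.5, 0.5]))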
|
Recent developments in domains such as non-local games, quantum interactive
proofs, and quantum generative adversarial networks have renewed interest in
quantum game theory and, specifically, quantum zero-sum games. Central to
classical game theory is the efficient algorithmic computation of Nash
equilibria, which represent optimal strategies for both players. In 2008, Jain
and Watrous proposed the first classical algorithm for computing equilibria in
quantum zero-sum games using the Matrix Multiplicative Weight Updates (MMWU)
method to achieve a convergence rate of $\mathcal{O}(d/\epsilon^2)$ iterations
to $\epsilon$-Nash equilibria in the $4^d$-dimensional spectraplex. In this
work, we propose a hierarchy of quantum optimization algorithms that generalize
MMWU via an extra-gradient mechanism. Notably, within this proposed hierarchy,
we introduce the Optimistic Matrix Multiplicative Weights Update (OMMWU)
algorithm and establish its average-iterate convergence complexity as
$\mathcal{O}(d/\epsilon)$ iterations to $\epsilon$-Nash equilibria. This
quadratic speed-up relative to Jain and Watrous' original algorithm sets a new
benchmark for computing $\epsilon$-Nash equilibria in quantum zero-sum games.
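For context, the sketch below shows the core Matrix Multiplicative Weights
Update rule (a density matrix proportional to the matrix exponential of the
accumulated losses) on a toy sequence of Hermitian loss matrices; the function,
step size, and random losses are illustrative assumptions, not the Jain-Watrous
or OMMWU algorithms themselves.

    import numpy as np
    from scipy.linalg import expm

    def mmwu_average_iterate(loss_matrices, eta=0.1):
        # X_t is proportional to exp(-eta * sum of losses seen so far),
        # normalized to unit trace so that it stays a density matrix.
        d = loss_matrices[0].shape[0]
        cumulative = np.zeros((d, d))
        iterates = []
        for loss in loss_matrices:
            X = expm(-eta * cumulative)
            X /= np.trace(X)
            iterates.append(X)
            cumulative = cumulative + loss
        return np.mean(iterates, axis=0)   # average iterate, as in convergence statements

    rng = np.random.default_rng(0)
    losses = []
    for _ in range(100):
        a = rng.normal(size=(4, 4))
        losses.append((a + a.T) / 2)       # random symmetric loss matrices
    X_bar = mmwu_average_iterate(losses)
    print(np.trace(X_bar), np.linalg.eigvalsh(X_bar).min())  # trace 1, eigenvalues >= 0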
|
The characteristic function for heat fluctuations in a non-equilibrium system
is characterised by a large deviation function whose symmetry gives rise to a
fluctuation theorem. In equilibrium the large deviation function vanishes and
the heat fluctuations are bounded. Here we consider the characteristic function
for heat fluctuations in equilibrium, constituting a sub-leading correction to
the large deviation behaviour. Modelling the system by an oscillator coupled to
an explicit multi-oscillator heat reservoir we evaluate the characteristic
function.
|
We compute the 5PM order contributions to the scattering angle and impulse of
classical black hole scattering in the conservative sector at first self-force
order (1SF) using the worldline quantum field theory formalism. This
challenging four-loop computation required the use of advanced
integration-by-parts and differential equation technology implemented on
high-performance computing systems. Use of partial fraction identities allowed
us to render the complete integrand in a fully planar form. The resulting
function space is simpler than expected: in the scattering angle we see only
multiple polylogarithms up to weight three, and a total absence of the elliptic
integrals that appeared at 4PM order. All checks on our result are passed, both
internal (cancellation of dimensional regularization poles, preservation of the
on-shell condition) and external (matching the slow-velocity limit with the
post-Newtonian (PN) literature up to 5PN order and matching the tail terms to
the 4PM loss of energy).
|
We exhibit an infinite family of discrete subgroups of ${Sp}_4(\mathbb R)$
which have a number of remarkable properties. Our results are established by
showing that each group plays ping-pong on an appropriate set of cones. The
groups arise as the monodromy of hypergeometric differential equations with
parameters $\left(\tfrac{N-3}{2N},\tfrac{N-1}{2N}, \tfrac{N+1}{2N},
\tfrac{N+3}{2N}\right)$ at infinity and maximal unipotent monodromy at zero,
for any integer $N\geq 4$. Additionally, we relate the cones used for ping-pong
in $\mathbb R^4$ with crooked surfaces, which we then use to exhibit domains of
discontinuity for the monodromy groups in the Lagrangian Grassmannian.
|
The Polyakov-quark-meson (PQM) model, which combines chiral as well as
deconfinement aspects of strongly interacting matter is introduced for three
light quark flavors. An analysis of the chiral and deconfinement phase
transition of the model and its thermodynamics at finite temperatures is given.
Three different forms of the effective Polyakov loop potential are considered.
The findings of the (2+1)-flavor model investigations are confronted with
corresponding recent QCD lattice simulations of the RBC-Bielefeld, HotQCD and
Wuppertal-Budapest collaborations. The influence of the heavier quark masses,
which are used in the lattice calculations, is taken into account. In the
transition region the bulk thermodynamics of the PQM model agrees well with the
lattice data.
|
As recommender systems become increasingly sophisticated and complex, they
often suffer from lack of fairness and transparency. Providing robust and
unbiased explanations for recommendations has been drawing more and more
attention as it can help address these issues and improve trustworthiness and
informativeness of recommender systems. However, despite the fact that such
explanations are generated for humans who respond more strongly to messages
with appropriate emotions, there is a lack of consideration for emotions when
generating explanations for recommendations. Current explanation generation
models are found to exaggerate certain emotions without accurately capturing
the underlying tone or the meaning. In this paper, we propose a novel method
based on a multi-head transformer, called Emotion-aware Transformer for
Explainable Recommendation (EmoTER), to generate more robust, fair, and
emotion-enhanced explanations. To measure the linguistic quality and emotion
fairness of the generated explanations, we adopt both automatic text metrics
and human perceptions for evaluation. Experiments on three widely-used
benchmark datasets with multiple evaluation metrics demonstrate that EmoTER
consistently outperforms the existing state-of-the-art explanation generation
models in terms of text quality, explainability, and consideration for fairness
to emotion distribution. Implementation of EmoTER will be released as an
open-source toolkit to support further research.
|
We perturb the SC, BCC, and FCC crystal structures with a spatial Gaussian
noise whose adimensional strength is controlled by the parameter a, and analyze
the topological and metrical properties of the resulting Voronoi Tessellations
(VT). The topological properties of the VT of the SC and FCC crystals are
unstable with respect to the introduction of noise, because the corresponding
polyhedra are geometrically degenerate, whereas the tessellation of the BCC
crystal is topologically stable even against noise of small but finite
intensity. For weak noise, the mean area of the perturbed BCC and FCC crystals
VT increases quadratically with a. In the case of perturbed SC crystals, there
is an optimal amount of noise that minimizes the mean area of the cells.
Already for a moderate noise (a>0.5), the properties of the three perturbed VT
are indistinguishable, and for intense noise (a>2), results converge to the
Poisson-VT limit. Notably, 2-parameter gamma distributions are an excellent
model for the empirical distributions of all considered properties. The VT of
the perturbed BCC and FCC structures are local maxima for the isoperimetric
quotient, which measures the degree of sphericity of the cells, among space
filling VT. In the BCC case, this suggests a weaker form of the recently
disproved Kelvin conjecture. Due to the fluctuations of the shape of the cells,
anomalous scalings with exponents >3/2 are observed between the area and the
volumes of the cells, and, except for the FCC case, also for a->0. In the
Poisson-VT limit, the exponent is about 1.67. As the number of faces is
positively correlated with the sphericity of the cells, the anomalous scaling
is heavily reduced when we perform power-law fits separately on cells with a
specific number of faces.
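As a rough illustration of the setup, the following sketch perturbs a
simple-cubic lattice with Gaussian noise of strength a and measures the volumes
of the bounded Voronoi cells; the lattice size, noise strength, and use of
scipy are illustrative assumptions, not the paper's computational pipeline.

    import numpy as np
    from scipy.spatial import ConvexHull, Voronoi

    def perturbed_sc_lattice(n=8, a=0.5, seed=0):
        # n**3 simple-cubic lattice points displaced by Gaussian noise of strength a.
        rng = np.random.default_rng(seed)
        grid = np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"), axis=-1)
        return grid.reshape(-1, 3).astype(float) + a * rng.standard_normal((n**3, 3))

    points = perturbed_sc_lattice()
    vor = Voronoi(points)

    volumes = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:   # skip unbounded cells at the sample boundary
            continue
        volumes.append(ConvexHull(vor.vertices[region]).volume)

    print(f"{len(volumes)} bounded cells, mean volume {np.mean(volumes):.3f}")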
|
As is well known, the search for and eventual identification of dark matter
in supersymmetry requires a simultaneous, multi-pronged approach with important
roles played by the LHC as well as both direct and indirect dark matter
detection experiments. We examine the capabilities of these approaches in the
19-parameter p(henomenological)MSSM which provides a general framework for
complementarity studies of neutralino dark matter. We summarize the sensitivity
of dark matter searches at the 7, 8 (and eventually 14) TeV LHC, combined with
those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and
weaknesses of each of these techniques are examined and contrasted and their
interdependent roles in covering the model parameter space are discussed in
detail. We find that these approaches explore orthogonal territory and that
advances in each are necessary to cover the Supersymmetric WIMP parameter
space. We also find that different experiments have widely varying
sensitivities to the various dark matter annihilation mechanisms, some of which
would be completely excluded by null results from these experiments.
|
Metamaterials offer a powerful way to manipulate a variety of physical fields
ranging from wave fields (electromagnetic field, acoustic field, elastic wave,
etc.), static fields (static magnetic field, static electric field) to
diffusive fields (thermal field, diffusive mass). However, the relevant reports
and studies are usually conducted on a single physical field or functionality.
In this study, we proposed and experimentally demonstrated a bifunctional
metamaterial which can manipulate thermal and electric fields simultaneously
and independently. Specifically, a composite with independently controllable
thermal and electric conductivity was introduced, on the basis of which a
bifunctional device capable of shielding thermal flux and concentrating
electric current simultaneously was designed, fabricated and characterized.
This work provides an encouraging example of metamaterials transcending their
natural limitations, which offers a promising future in building a broad
platform for manipulation of multi-physics field.
|
Complex interactions between genes or proteins contribute a substantial part
to phenotypic evolution. Here we develop an evolutionarily grounded method for
the cross-species analysis of interaction networks by {\em alignment}, which
maps bona fide functional relationships between genes in different organisms.
Network alignment is based on a scoring function measuring mutual similarities
between networks taking into account their interaction patterns as well as
sequence similarities between their nodes. High-scoring alignments and optimal
alignment parameters are inferred by a systematic Bayesian analysis. We apply
this method to analyze the evolution of co-expression networks between human
and mouse. We find evidence for significant conservation of gene expression
clusters and give network-based predictions of gene function. We discuss
examples where cross-species functional relationships between genes do not
concur with sequence similarity.
|
We study the interplay of confining potential, electron-electron interaction,
and Zeeman splitting at the edges of fractional quantum Hall liquids, using
numerical diagonalization of finite-size systems. The filling factors studied
include 1/3, 5/2, 2/5, and 2/3. In the absence of Zeeman splitting and an edge,
the first two have spin fully polarized ground states, while the latter two
have singlet ground states. We find that with few exceptions, edge
instabilities of these systems are triggered by softening of edge spin waves
for Abelian fractional quantum Hall liquids (1/3, 2/5 and 2/3 liquids), and are
triggered by softening of edge magnetoplasmon excitations for non-Abelian 5/2
liquid at the smoother confinement side. Phase diagrams are obtained in the
accessible parameter spaces.
|
In the field of phononics, periodic patterning controls vibrations and
thereby the flow of heat and sound in matter. Bandgaps arising in such phononic
crystals realize low-dissipation vibrational modes and enable applications
towards mechanical qubits, efficient waveguides, and state-of-the-art sensing.
Here, we combine phononics and two-dimensional materials and explore the
possibility of manipulating phononic crystals via applied mechanical pressure.
To this end, we fabricate the thinnest possible phononic crystal from monolayer
graphene and simulate its vibrational properties. We find a bandgap in the MHz
regime, within which we localize a defect mode with a small effective mass of
0.72 ag = 0.002 $m_{physical}$. Finally, we take advantage of graphene's
flexibility and mechanically tune a finite size phononic crystal. Under
electrostatic pressure up to 30 kPa, we observe an upshift in frequency of the
entire phononic system by more than 350%. At the same time, the defect mode
stays within the bandgap and remains localized, suggesting a high-quality,
dynamically tunable mechanical system.
|
Context: There is considerable diversity in the range and design of
computational experiments to assess classifiers for software defect prediction.
This is particularly so regarding the choice of classifier performance
metrics. Unfortunately, some widely used metrics are known to be biased, in
particular F1. Objective: We want to understand the extent to which the
widespread use of F1 renders empirical results in software defect
prediction unreliable. Method: We searched for defect prediction studies that
report both F1 and the Matthews correlation coefficient (MCC). This enabled us
to determine the proportion of results that are consistent between both metrics
and the proportion that change. Results: Our systematic review identifies 8
studies comprising 4017 pairwise results. Of these results, the direction of
the comparison changes in 23% of the cases when the unbiased MCC metric is
employed. Conclusion: We find compelling reasons why the choice of
classification performance metric matters, specifically the biased and
misleading F1 metric should be deprecated.
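To illustrate how a comparison direction can flip, here is a minimal sketch
with two hypothetical classifiers on the same balanced test set: one has the
higher F1, the other the higher MCC. The labels and predictions are made-up
data, not results from the reviewed studies.

    from sklearn.metrics import f1_score, matthews_corrcoef

    y_true = [1] * 50 + [0] * 50
    clf_a = [1] * 50 + [1] * 25 + [0] * 25              # catches every defect, 25 false alarms
    clf_b = [1] * 35 + [0] * 15 + [1] * 5 + [0] * 45    # misses 15 defects, 5 false alarms

    for name, y_pred in (("A", clf_a), ("B", clf_b)):
        print(name,
              f"F1={f1_score(y_true, y_pred):.3f}",
              f"MCC={matthews_corrcoef(y_true, y_pred):.3f}")
    # A wins on F1 (0.800 vs 0.778) while B wins on MCC (0.612 vs 0.577).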
|
Estimating climate effects on future ocean storm severity is plagued by large
uncertainties, yet for safe design and operation of offshore structures, best
possible estimates of climate effects are required given available data. We
explore the variability in estimates of 100-year return value of significant
wave height (Hs) over time, for output of WAVEWATCHIII models from 7
representative CMIP5 GCMs, and the FIO-ESM v2.0 CMIP6 GCM, for neighbourhoods
of locations east of Madagascar and south of Australia. Non-stationary extreme
value analysis of peaks-over-threshold and block maxima using Bayesian
inference provide posterior estimates of return values as a function of time;
MATLAB software is provided. There is large variation between return value
estimates from different GCMs, and with longitude and latitude within each
neighbourhood. These sources of uncertainty tend to be larger than that due to
typical modelling choices (such as choice of threshold for POT, or block length
for BM). However, careful selection of threshold and block length is critical east of
Madagascar, because of the presence of a mixed population of storms there. The
long 700-year pre-industrial control (piControl) output of the CMIP6 GCM allows
quantification of the apparent inherent variability in return value as a
function of time.
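For reference, a minimal stationary sketch of a peaks-over-threshold return
value estimate is given below (a generalized Pareto fit to exceedances,
inverted at the 100-year level); the threshold choice, synthetic data, and
stationarity are illustrative assumptions, whereas the paper's analysis is
non-stationary and Bayesian, with MATLAB software provided.

    import numpy as np
    from scipy.stats import genpareto

    def return_value_pot(hs, threshold, return_period_years, peaks_per_year):
        excesses = hs[hs > threshold] - threshold
        shape, _, scale = genpareto.fit(excesses, floc=0)   # fit GPD to exceedances
        years_of_data = len(hs) / peaks_per_year
        exceedance_rate = len(excesses) / years_of_data     # exceedances per year
        p = 1.0 / (return_period_years * exceedance_rate)   # per-exceedance probability
        return threshold + genpareto.ppf(1 - p, shape, loc=0, scale=scale)

    rng = np.random.default_rng(0)
    hs = 3.0 * rng.weibull(1.5, size=50 * 365)              # 50 years of synthetic daily peak Hs
    print(f"100-year Hs: {return_value_pot(hs, np.quantile(hs, 0.95), 100, 365):.2f} m")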
|
In these proceedings we present the latest developments in our effort to
include vector boson scattering (VBS) measurements into global SMEFT fits of
LHC data. We present some updates to our initial study of arXiv:2101.03180 as
well as comment on a possible road map for the inclusion of higher orders
beyond dimension 6 in the SMEFT and on the interpretation of VBS data in other
EFT frameworks.
|
This article is a short and elementary introduction to the monstrous
moonshine aiming to be as accessible as possible. I first review the
classification of finite simple groups out of which the monster naturally
arises, and features of the latter that are needed in order to state the
moonshine conjecture of Conway and Norton. Then I motivate modular functions
and modular forms from the classification of complex tori, with the definitions
of the J-invariant and its q-expansion as a goal. I eventually provide evidence
for the monstrous moonshine correspondence, state the conjecture, and then
present the ideas that led to its proof. Lastly I give a brief account of some
recent developments and current research directions in the field.
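The numerical coincidence that started the subject can be stated in one line
(standard facts about the J-function and the monster, recalled here only as a
pointer for the reader):

    \[
      J(q) = q^{-1} + 196884\,q + 21493760\,q^2 + \cdots, \qquad
      196884 = 1 + 196883, \qquad
      21493760 = 1 + 196883 + 21296876,
    \]

where 1, 196883 and 21296876 are the dimensions of the three smallest
irreducible representations of the monster group.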
|
In this paper, we study the arithmetics of skew polynomial rings over finite
fields, mostly from an algorithmic point of view. We give various algorithms
for fast multiplication, division and extended Euclidean division. We give a
precise description of quotients of skew polynomial rings by a left principal
ideal, using results relating skew polynomial rings to Azumaya algebras. We use
this description to give a new factorization algorithm for skew polynomials,
and to give other algorithms related to factorizations of skew polynomials,
like counting the number of factorizations as a product of irreducibles.
|
A previous paper [2] showed how to generate a linear discriminant network
(LDN) that computes likely faults for a noisy fault detection problem by using
a modification of the perceptron learning algorithm called the pocket
algorithm. Here we compare the performance of this connectionist model with
performance of the optimal Bayesian decision rule for the example that was
previously described. We find that for this particular problem the
connectionist model performs about 97% as well as the optimal Bayesian
procedure. We then define a more general class of noisy single-pattern boolean
(NSB) fault detection problems where each fault corresponds to a single
pattern of boolean instrument readings and instruments are independently
noisy. This is equivalent to specifying that instrument readings are
probabilistic but conditionally independent given any particular fault. We
prove:
1. The optimal Bayesian decision rule for every NSB fault detection problem is
representable by an LDN containing no intermediate nodes. (This slightly
extends a result first published by Minsky & Selfridge.)
2. Given an NSB fault detection problem, then with arbitrarily high probability
after sufficient iterations the pocket algorithm will generate an LDN that
computes an optimal Bayesian decision rule for that problem.
In practice we find that a reasonable number of iterations of the pocket
algorithm produces a network with good, but not optimal, performance.
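A minimal sketch of the pocket algorithm on noisy boolean readings follows: the
usual perceptron correction is applied, but the weight vector with the best
training accuracy seen so far is kept in the "pocket". The data generator,
iteration counts, and tie-breaking are illustrative assumptions, not the
paper's fault-detection setup.

    import numpy as np

    def pocket_algorithm(X, y, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((len(X), 1))])        # absorb the bias term
        w = np.zeros(Xb.shape[1])
        pocket_w, pocket_acc = w.copy(), 0.0
        for _ in range(epochs):
            for i in rng.permutation(len(Xb)):
                if np.sign(Xb[i] @ w) != y[i]:
                    w = w + y[i] * Xb[i]                 # perceptron correction
                    acc = np.mean(np.sign(Xb @ w) == y)
                    if acc > pocket_acc:                 # keep the best weights seen so far
                        pocket_w, pocket_acc = w.copy(), acc
        return pocket_w, pocket_acc

    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(200, 8)).astype(float)  # boolean instrument readings
    true_w = rng.normal(size=8)
    y = np.where(X @ true_w > np.median(X @ true_w), 1, -1)
    y[rng.random(200) < 0.05] *= -1                      # 5% noisy readings
    w, acc = pocket_algorithm(X, y)
    print(f"pocket training accuracy: {acc:.2f}")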
|
We present an algorithm for enumerating exactly the number of Hamiltonian
chains on regular lattices in low dimensions. By definition, these are sets of
k disjoint paths whose union visits each lattice vertex exactly once. The
well-known Hamiltonian circuits and walks appear as the special cases k=0 and
k=1 respectively. In two dimensions, we enumerate chains on L x L square
lattices up to L=12, walks up to L=17, and circuits up to L=20. Some results
for three dimensions are also given. Using our data we extract several
quantities of physical interest.
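For orientation, a brute-force counter of Hamiltonian walks on tiny L x L
lattices is sketched below (each open path is counted once per direction); it
only illustrates the objects being enumerated and is nowhere near efficient
enough for the lattice sizes reported in the paper.

    def count_hamiltonian_walks(L):
        # Directed open paths visiting every vertex of the L x L grid exactly once.
        n = L * L

        def neighbours(v):
            x, y = divmod(v, L)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    yield nx * L + ny

        def extend(v, visited):
            if len(visited) == n:
                return 1
            return sum(extend(w, visited | {w})
                       for w in neighbours(v) if w not in visited)

        return sum(extend(start, {start}) for start in range(n))

    for L in (2, 3, 4):
        print(L, count_hamiltonian_walks(L))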
|
A new deep learning-based electroencephalography (EEG) signal analysis
framework is proposed. While deep neural networks, specifically convolutional
neural networks (CNNs), have gained remarkable attention recently, they still
suffer from the high dimensionality of the training data. Two-dimensional input
images of CNNs are more prone to redundancy than the one-dimensional input
time-series of conventional neural networks. In this study, we propose a new
dimensionality reduction framework for reducing the dimension of CNN inputs
based on the tensor decomposition of the time-frequency representation of EEG
signals. The proposed tensor decomposition-based dimensionality reduction
algorithm transforms a large set of slices of the input tensor to a concise set
of slices which are called super-slices. Employing super-slices not only
handles the artifacts and redundancies of the EEG data but also reduces the
dimension of the CNNs training inputs. We also consider different
time-frequency representation methods for EEG image generation and provide a
comprehensive comparison among them. We test our proposed framework on HCB-MIT
data, and the results show that our approach outperforms previous studies.
|
Recent advancements in speech synthesis have leveraged GAN-based networks
like HiFi-GAN and BigVGAN to produce high-fidelity waveforms from
mel-spectrograms. However, these networks are computationally expensive and
parameter-heavy. iSTFTNet addresses these limitations by integrating inverse
short-time Fourier transform (iSTFT) into the network, achieving both speed and
parameter efficiency. In this paper, we introduce an extension to iSTFTNet,
termed HiFTNet, which incorporates a harmonic-plus-noise source filter in the
time-frequency domain that uses a sinusoidal source from the fundamental
frequency (F0) inferred via a pre-trained F0 estimation network for fast
inference speed. Subjective evaluations on LJSpeech show that our model
significantly outperforms both iSTFTNet and HiFi-GAN, achieving
ground-truth-level performance. HiFTNet also outperforms BigVGAN-base on
LibriTTS for unseen speakers and achieves comparable performance to BigVGAN
while being four times faster with only $1/6$ of the parameters. Our work sets
a new benchmark for efficient, high-quality neural vocoding, paving the way for
real-time applications that demand high quality speech synthesis.
|
We propose a class of displacement- and laser-noise free
gravitational-wave-interferometer configurations, which does not sense
non-geodesic mirror motions and laser noises, but provides non-vanishing
gravitational-wave signal. Our interferometer consists of 4 mirrors and 2
beamsplitters, which form 4 Mach-Zehnder interferometers. By contrast to
previous works, no composite mirrors are required. Each mirror in our
configuration is sensed redundantly, by at least two pairs of incident and
reflected beams. Displacement- and laser-noise free detection is achieved when
output signals from these 4 interferometers are combined appropriately. Our
3-dimensional interferometer configuration has a low-frequency response
proportional to f^2, which is better than the f^3 achievable by previous
2-dimensional configurations.
|
Quantization has proven effective in high-resolution and large-scale
simulations, which benefit from bit-level memory saving. However, identifying a
quantization scheme that meets the requirements of both precision and memory
efficiency requires trial and error. In this paper, we propose a novel
framework to allow users to obtain a quantization scheme by simply specifying
either an error bound or a memory compression rate. Based on the error
propagation theory, our method takes advantage of auto-diff to estimate the
contributions of each quantization operation to the total error. We formulate
the task as a constrained optimization problem, which can be efficiently solved
with analytical formulas derived for the linearized objective function. Our
workflow extends the Taichi compiler and introduces dithering to improve the
precision of quantized simulations. We demonstrate the generality and
efficiency of our method via several challenging examples of physics-based
simulation, which achieves up to 2.5x memory compression without noticeable
degradation of visual quality in the results. Our code and data are available
at https://github.com/Hanke98/AutoQuantizer.
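As a toy illustration of the kind of constrained problem involved, the sketch
below allocates fraction bits analytically when the total squared error is
modeled as a weighted sum of per-operation quantization errors; the error
model, the 4**(-b) term, and the clipping are illustrative assumptions, not the
formulation used by the framework.

    import numpy as np

    def allocate_fraction_bits(error_contrib, error_bound):
        # Minimize sum_i b_i subject to sum_i c_i * 4**(-b_i) <= E.
        # Lagrangian stationarity makes every c_i * 4**(-b_i) equal to E / n,
        # giving the closed form b_i = log4(n * c_i / E).
        c = np.asarray(error_contrib, dtype=float)
        b = np.log(len(c) * c / error_bound) / np.log(4.0)
        return np.clip(b, 0.0, None)       # bit counts cannot be negative

    print(np.round(allocate_fraction_bits([1.0, 0.1, 0.01], error_bound=1e-3), 2))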
|
A computationally efficient method for solving three-dimensional, viscous,
incompressible flows on unbounded domains is presented. The method formally
discretizes the incompressible Navier-Stokes equations on an unbounded
staggered Cartesian grid. Operations are limited to a finite computational
domain through a lattice Green's function technique. This technique obtains
solutions to inhomogeneous difference equations through the discrete
convolution of source terms with the fundamental solutions of the discrete
operators. The differential algebraic equations describing the temporal
evolution of the discrete momentum equation and incompressibility constraint
are numerically solved by combining an integrating factor technique for the
viscous term and a half-explicit Runge-Kutta scheme for the convective term. A
projection method that exploits the mimetic and commutativity properties of the
discrete operators is used to efficiently solve the system of equations that
arises in each stage of the time integration scheme. Linear complexity, fast
computation rates, and parallel scalability are achieved using recently
developed fast multipole methods for difference equations. The accuracy and
physical fidelity of solutions is verified through numerical simulations of
vortex rings.
|
Owing to their past evolution during the main sequence phase, supergiant massive
stars develop a convective shell around the helium core. This intermediate
convective zone (ICZ) plays an essential role in governing which g-modes are
excited. Indeed a strong radiative damping occurs in the high density radiative
core but the ICZ acts as a barrier preventing the propagation of some g-modes
into the core. These g-modes can thus be excited in supergiant stars by the
kappa-mechanism in the superficial layers due to the opacity bump of iron, at
log T=5.2. However, massive stars are subject to various complex phenomena
such as rotation, magnetic fields, semiconvection, mass loss, and overshooting.
Each of these phenomena exerts a significant effect on the evolution and some
of them could prevent the onset of the convective zone. We develop a numerical
method which allows us to select the reflected, thus the potentially excited,
modes only. We study different cases in order to show that mass loss and
overshooting, in sufficiently large amounts, reduce the extent of the ICZ and
are unfavourable to the excitation of g-modes.
|
We study an order relation on the fibers of a continuous map and its
application to the study of the structure of compact spaces of uncountable
weight.
|
Understanding the internals of Integrated Circuits (ICs), referred to as
Hardware Reverse Engineering (HRE), is of interest to both legitimate and
malicious parties. HRE is a complex process in which semi-automated steps are
interwoven with human sense-making processes. Currently, little is known about
the technical and cognitive processes which determine the success of HRE.
This paper performs an initial investigation on how reverse engineers solve
problems, how manual and automated analysis methods interact, and which
cognitive factors play a role. We present the results of an exploratory
behavioral study with eight participants that was conducted after they had
completed a 14-week training. We explored the validity of our findings by
comparing them with the behavior (strategies applied and solution time) of an
HRE expert. The participants were observed while solving a realistic HRE task.
We tested cognitive abilities of our participants and collected large sets of
behavioral data from log files. By comparing the least and most efficient
reverse engineers, we were able to observe successful strategies. Moreover, our
analyses suggest a phase model for reverse engineering, consisting of three
phases. Our descriptive results further indicate that the cognitive factor
Working Memory (WM) might play a role in efficiently solving HRE problems. Our
exploratory study builds the foundation for future research on this topic and
outlines ideas for designing cognitively difficult countermeasures ("cognitive
obfuscation") against HRE.
|
We present results of VLBI observations of the water masers associated with
IRAS 4A and IRAS 4B in the NGC 1333 star-forming region taken in four epochs
over a two month period. Both objects have been classified as extremely young
sources and each source is known to be a multiple system. Using the Very Long
Baseline Array, we detected 35 masers in Epoch I, 40 masers in Epoch II, 35 in
Epoch III, and 24 in Epoch IV. Only one identified source in each system
is associated with these masers. These data are used to calculate proper motions
for the masers and trace the jet outflows within 100 AU of IRAS 4A2 and IRAS
4BW. In IRAS 4A2 there are two groups of masers, one near the systemic cloud
velocity and one red-shifted. They expand linearly away from each other at
velocities of 53 km/s. In IRAS 4BW, masers are observed in two groups that are
blue-shifted and red-shifted relative to the cloud velocity. They form complex
linear structures with a thickness of 3 mas (1 AU at a distance of 320 pc) that
expand linearly away from each other at velocities of 78 km/s. Neither of the
jet outflows traced by the maser groups aligns with the larger-scale outflows.
We suggest the presence of unresolved companions to both IRAS 4A2 and 4BW.
|
Secret sharing allows distributing a secret among several parties such that
only authorized subsets, specified by an access structure, can reconstruct the
secret. Sehrawat and Desmedt (COCOON 2020) introduced hidden access structures,
that remain secret until some authorized subset of parties collaborate.
However, their scheme assumes semi-honest parties and supports only restricted
access structures. We address these shortcomings by constructing an access
structure hiding verifiable secret sharing scheme that supports all monotone
access structures. It is the first secret sharing scheme to support cheater
identification and share verifiability in malicious-majority settings. The
verification procedure of our scheme incurs no communication overhead. As the
building blocks of our scheme, we introduce and construct: (i) a set-system
with $> \exp\left(c\frac{2(\log h)^2}{(\log\log
h)}\right)+2\exp\left(c\frac{(\log h)^2}{(\log\log h)}\right)$ subsets of a set
of $h$ elements. Our set-system, $\mathcal{H}$, is defined over $\mathbb{Z}_m$,
where $m$ is a non-prime-power. The size of each set in $\mathcal{H}$ is
divisible by $m$ but the sizes of their pairwise intersections are not, unless
one set is a subset of another, (ii) a new variant of the learning with errors
(LWE) problem, called PRIM-LWE, wherein the secret matrix is sampled such that
its determinant is a generator of $\mathbb{Z}_q^*$, where $q$ is the LWE
modulus. The security of our scheme relies on the hardness of the LWE problem,
and its share size is $$(1+ o(1)) \dfrac{2^{\ell}}{\sqrt{\pi \ell/2}}(2
q^{\varrho + 0.5} + \sqrt{q} + \mathrm{\Theta}(h)),$$ where $\varrho \leq 1$ is
a constant and $\ell$ is the total number of parties. We also provide
directions for future work to reduce the share size to
\[\leq \dfrac{1}{3} \left( (1+ o(1)) \dfrac{2^{\ell}}{\sqrt{\pi \ell/2}}(2
q^{\varrho + 0.5} + 2\sqrt{q}) \right).\]
|
Smart devices, considered an integral part of the Internet of Things (IoT),
aim to form a dynamic network that exchanges information, collects data,
performs analysis, and makes optimal decisions autonomously to achieve more
efficient, automatic, and economical services. Message dissemination among
these smart devices allows adding new features, sending updated instructions,
alerts or safety messages, informing the pricing information or billing amount,
incentives, and installing security patches. On one hand, such message
disseminations are directly beneficial to all parties involved in the IoT
system. On the other hand, owing to the remote nature of the procedure, smart
devices, vendors, and other involved authorities might have to address a number
of security, privacy,
and performance related concerns while disseminating messages among targeted
devices. To this end, in this paper, we design STarEdgeChain, a security and
privacy aware targeted message dissemination in IoT to show how blockchain
along with advanced cryptographic techniques are devoted to address such
concerns. In fact, the STarEdgeChain employs a permissioned blockchain assisted
edge computing in order to expedite a single signcrypted message dissemination
among targeted groups of devices, at the same time avoiding the dependency of
utilizing multiple unicasting approaches. Finally, we develop a software
prototype of STarEdgeChain and show its practicability for smart devices. The
codes are publicly available at https://github.com/mbaqer/Blockchain-IoT
|
Starting from the many-body Bethe-Salpeter equation we derive an
exchange-correlation kernel $f_{xc}$ that reproduces excitonic effects in bulk
materials within time-dependent density functional theory. The resulting
$f_{xc}$ accounts for both self-energy corrections and the electron-hole
interaction. It is {\em static}, {\em non-local} and has a long-range Coulomb
tail. Taking the example of bulk silicon, we show that the $- \alpha / q^2$
divergency is crucial and can, in the case of continuum excitons, even be
sufficient for reproducing the excitonic effects and yielding excellent
agreement between the calculated and the experimental absorption spectrum.
|
We present a novel Fourier camera, an in-hardware optical compression of
high-speed frames employing pixel-level sign-coded exposure, where pixel
intensities, temporally modulated as positive and negative exposures, are
combined to yield Hadamard coefficients. The orthogonality of Walsh functions ensures
that the noise is not amplified during high-speed frame reconstruction, making
it a much more attractive option for coded exposure systems aimed at very high
frame rate operation. Frame reconstruction is carried out by a single-pass
demosaicking of the spatially multiplexed Walsh functions in a lattice
arrangement, significantly reducing the computational complexity. The
simulation prototype confirms the improved robustness to noise compared to the
binary-coded exposure patterns, such as one-hot encoding and pseudo-random
encoding. Our hardware prototype demonstrated the reconstruction of 4kHz frames
of a moving scene lit by ambient light only.
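A minimal sketch of the underlying sensing model, assuming ideal sign codes and
no sensor noise: each coded exposure is a +/-1 weighted sum of the high-speed
frames, and the orthogonality of the Hadamard rows recovers the frames with a
single matrix transform. The frame count, toy data, and use of scipy are
illustrative assumptions, not the hardware pipeline.

    import numpy as np
    from scipy.linalg import hadamard

    def code_and_reconstruct(frames):
        # frames has shape (T, H, W); T must be a power of two for hadamard().
        T = frames.shape[0]
        H = hadamard(T)                                           # rows are +/-1 Walsh-Hadamard codes
        measurements = np.tensordot(H, frames, axes=(1, 0))       # sign-coded exposures
        reconstructed = np.tensordot(H.T, measurements, axes=(1, 0)) / T
        return measurements, reconstructed

    frames = np.random.default_rng(0).random((8, 16, 16))         # 8 toy high-speed frames
    _, rec = code_and_reconstruct(frames)
    print("max reconstruction error:", np.abs(rec - frames).max())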
|