We present results on simplifying an acting group while preserving properties
of the actions: transitivity, being a coset space, and preserving a fixed
equiuniformity in the case of a $G$-Tychonoff space.
|
We present a dual-channel optical transmitter (MTx+)/transceiver (MTRx+) for
the front-end readout electronics of high-energy physics experiments. MTx+
utilizes two Transmitter Optical Sub-Assemblies (TOSAs) and MTRx+ utilizes a
TOSA and a Receiver Optical Sub-Assembly (ROSA). Both MTx+ and MTRx+ receive
multimode fibers with standard Lucent Connectors (LCs) as the optical interface
and can be panel or board mounted to a motherboard with a standard Enhanced
Small Form-factor Pluggable (SFP+) connector as the electrical interface. MTx+
and MTRx+ employ a dual-channel Vertical-Cavity Surface-Emitting Laser (VCSEL)
driver ASIC called LOCld65, which brings the transmitting data rate up to 14
Gbps per channel. MTx+ and MTRx+ have been tested to survive 4.9 kGy(SiO2).
|
We present Berkeley Illinois Maryland Association (BIMA) millimeter
interferometer observations of giant molecular clouds (GMCs) along a spiral arm
in M31. The observations consist of a survey using the compact configuration of
the interferometer and follow-up, higher-resolution observations on a subset of
the detections in the survey. The data are processed using an analysis
algorithm designed to extract GMCs and correct their derived properties for
observational biases, thereby facilitating comparison with Milky Way data. The
algorithm identifies 67 GMCs, of which 19 have sufficient signal-to-noise to
accurately measure their properties. The GMCs in this portion of M31 are
indistinguishable from those found in the Milky Way, having a similar size-line
width relationship and distribution of virial parameters, confirming the
results of previous, smaller studies. The velocity gradients and angular
momenta of the GMCs are comparable to the values measured in M33 and the Milky
Way and, in all cases, are below expected values based on the local galactic
shear. The studied region of M31 has a similar interstellar radiation field,
metallicity, Toomre Q parameter, and midplane volume density as the inner Milky
Way, so the similarity of GMC populations between the two systems is not
surprising.
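The virial-parameter comparison invoked above reduces to a one-line calculation, $\alpha_{\rm vir} = 5\sigma^2 R/(GM)$. A minimal sketch with typical Galactic GMC numbers (assumed for illustration, not the M31 measurements):

```python
# Virial parameter alpha_vir = 5 * sigma^2 * R / (G * M), the diagnostic
# behind the "distribution of virial parameters" comparison.
# Example numbers are typical Galactic GMC values (assumed), not M31 fits.
G = 4.30e-3          # gravitational constant in pc (km/s)^2 / Msun
sigma = 5.0          # km/s, velocity dispersion
R = 50.0             # pc, cloud radius
M = 5.0e5            # Msun, cloud mass

alpha_vir = 5 * sigma**2 * R / (G * M)
print(round(alpha_vir, 2))   # ~2.91: of order unity, roughly virialized
```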
|
We study the problem of selecting a user equipment (UE) and a beam for each
access point (AP) for concurrent transmissions in a millimeter wave (mmWave)
network, such that the sum of weighted rates of UEs is maximized. We prove that
this problem is NP-complete. We propose two algorithms -- Markov Chain Monte
Carlo (MCMC) based and local interaction game (LIG) based UE and beam selection
-- and prove that both of them asymptotically achieve the optimal solution.
Also, we propose two fast greedy algorithms -- NGUB1 and NGUB2 -- for UE and
beam selection. Through extensive simulations, we show that our proposed greedy
algorithms outperform the most relevant algorithms proposed in prior work and
perform close to the asymptotically optimal algorithms.
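The greedy idea can be sketched as a one-pass heuristic in which each AP grabs its best remaining weighted UE-beam pair. This is an illustrative stand-in, not the paper's NGUB1/NGUB2 (whose details are not given here), and the rate tensor below is a toy placeholder for an actual mmWave rate model:

```python
import itertools

def greedy_selection(rate, weight):
    """One-pass greedy UE/beam selection (illustrative sketch only; the
    paper's NGUB1/NGUB2 are not specified in this abstract).

    rate[a][u][b] : achievable rate if AP a serves UE u on beam b (toy numbers)
    weight[u]     : scheduling weight of UE u
    Each AP picks one (UE, beam) pair; each UE is served by at most one AP.
    """
    assigned = set()
    schedule = {}
    total = 0.0
    for a in range(len(rate)):
        best = None
        for u, b in itertools.product(range(len(rate[a])), range(len(rate[a][0]))):
            if u in assigned:
                continue
            val = weight[u] * rate[a][u][b]
            if best is None or val > best[0]:
                best = (val, u, b)
        if best is not None:
            total += best[0]
            schedule[a] = (best[1], best[2])
            assigned.add(best[1])
    return schedule, total

# 2 APs, 2 UEs, 2 beams: AP 0 grabs UE 1 on beam 1, AP 1 falls back to UE 0.
rate = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(greedy_selection(rate, [1.0, 1.0]))   # ({0: (1, 1), 1: (0, 1)}, 10.0)
```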
|
String theory provides some of the deepest insights into quantum gravity.
Its single most central and profound result is the gauge/gravity duality, i.e.
the emergence of gravity from gauge theory. The two examples of M(atrix)-theory
and the AdS/CFT correspondence, together with the fundamental phenomena of
quantum entanglement, are of paramount importance to many fundamental problems
including the physics of black holes (in particular to the information loss
paradox), the emergence of spacetime geometry and to the problem of the
reconciliation of general relativity and quantum mechanics. This article puts
forward an account of the AdS/CFT correspondence, and of the role of quantum
entanglement in the emergence of spacetime geometry, strictly in the language
of quantum field theory.
|
We investigate theoretically the possibility for robust and fast cooling of a
trapped atomic ion by transient interaction with a pre-cooled ion. The
transient coupling is achieved through dynamical control of the ions'
equilibrium positions. To achieve short cooling times we make use of shortcuts
to adiabaticity by applying invariant-based engineering. We design these to
take account of imperfections such as stray fields and trap frequency offsets.
For settings appropriate to a currently operational trap in our laboratory, we
find that robust performance could be achieved down to $6.3$ motional cycles,
comprising $14.2\ \mathrm{\mu s}$ for ions with a $0.44\ \mathrm{MHz}$ trap
frequency. This is considerably faster than can be achieved using laser cooling
in the weak coupling regime, which makes this an attractive scheme in the
context of quantum computing.
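A common ingredient of invariant-based shortcuts to adiabaticity is a transport ramp whose value, slope, and curvature all match the boundary conditions, so the controlled equilibrium positions start and stop smoothly. Below is the minimal polynomial with this property, a generic textbook ramp (assumed for illustration), not the robust trajectory actually designed in the paper:

```python
# Minimal-order polynomial transport ramp: value, slope, and curvature all
# match the boundary conditions, so the motion starts and ends at rest.
# This is a generic textbook ingredient of invariant-based shortcuts to
# adiabaticity, not the robust design of the paper.
def ramp(s):        # s = t / t_total in [0, 1]
    return 10*s**3 - 15*s**4 + 6*s**5

def dramp(s):       # d(ramp)/ds
    return 30*s**2 - 60*s**3 + 30*s**4

def d2ramp(s):      # d^2(ramp)/ds^2
    return 60*s - 180*s**2 + 120*s**3

print(ramp(0), ramp(1), dramp(0), dramp(1), d2ramp(0), d2ramp(1))
# -> 0 1 0 0 0 0: smooth start and stop of the equilibrium-position control
```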
|
Music recommender systems have become a key technology supporting the access
to increasingly larger music catalogs in on-line music streaming services,
on-line music shops, and private collections. The interaction of users with
large music catalogs is a complex phenomenon researched from different
disciplines. We survey our works investigating the machine learning and data
mining aspects of hybrid music recommender systems (i.e., systems that
integrate different recommendation techniques). We proposed hybrid music
recommender systems based solely on data and robust to the so-called
"cold-start problem" for new music items, favoring the discovery of relevant
but non-popular music. We thoroughly studied the specific task of music
playlist continuation, by analyzing fundamental playlist characteristics, song
feature representations, and the relationship between playlists and the songs
therein.
|
Evaluating grounded neural language model performance with respect to
pragmatic qualities like the trade-off between truthfulness, contrastivity, and
overinformativity of generated utterances remains a challenge in the absence of
data collected from humans. To enable such evaluation, we present a novel open
source image-text dataset "Annotated 3D Shapes" (A3DS) comprising over nine
million exhaustive natural language annotations and over 12 million
variable-granularity captions for the 480,000 images provided by Burgess & Kim
(2018). We showcase the evaluation of pragmatic abilities developed by a
task-neutral image captioner fine-tuned in a multi-agent communication setting
to produce contrastive captions. The evaluation is enabled by the dataset
because the exhaustive annotations allow one to quantify the presence of
contrastive features in the model's generations. We show that the model
develops human-like patterns (informativity, brevity, over-informativity for
specific features such as shape and color biases).
|
Black and white holes play remarkably contrasting roles in general relativity
versus observational astrophysics. While there is overwhelming observational
evidence for the existence of compact objects that are "cold, dark, and heavy",
which thereby are natural candidates for black holes, the theoretically viable
time-reversed variants -- the "white holes" -- have nowhere near the same level
of observational support. Herein we shall explore the possibility that the
connection between black and white holes is much more intimate than commonly
appreciated. We shall first construct "horizon penetrating" coordinate systems
that differ from the standard curvature coordinates only in a small
near-horizon region, thereby emphasizing that ultimately the distinction
between black and white horizons depends only on near-horizon physics. We shall
then construct an explicit model for a "black-to-white transition" where all of
the nontrivial physics is confined to a compact region of spacetime -- a
finite-duration, finite-thickness (in principle arbitrarily small) region
straddling the naive horizon. Moreover, we shall show that it is possible to
arrange the "black-to-white transition" to have zero action -- so that it will
not be subject to destructive interference in the Feynman path integral. This
then raises the very intriguing possibility that astrophysical black holes
might be interpretable in terms of a quantum superposition of black and white
horizons.
|
In this paper we provide a characterization of second order fully nonlinear
CR invariant equations on the Heisenberg group, which is the analogue in the CR
setting of the result proved in the Euclidean setting by A. Li and the first
author (2003). We also prove a comparison principle for solutions of second
order fully nonlinear CR invariant equations defined on bounded domains of the
Heisenberg group and a comparison principle for solutions of a family of second
order fully nonlinear equations on a punctured ball.
|
We show how the continuity equation can be used to determine pattern speeds
in the Milky Way Galaxy (MWG). This method, first discussed by Tremaine &
Weinberg in the context of external galaxies, requires projected positions,
$(l,b)$, and line-of-sight velocities for a spatially complete sample of
relaxed tracers. If the local standard of rest (LSR) has a zero velocity in the
radial direction ($u_{\rm LSR}$), then the quantity that is measured is $\Delta
V \equiv \Omega_p R_0 - V_{\rm LSR}$, where $\Omega_p$ is the pattern speed of
the non-axisymmetric feature, $R_0$ is the distance of the Sun from the
Galactic centre and $V_{\rm LSR}$ is the tangential motion of the LSR,
including the circular velocity. We use simple models to assess the reliability
of the method for measuring a single, constant pattern speed of either a bar or
spiral in the inner MWG. We then apply the method to the OH/IR stars in the
ATCA/VLA OH 1612 MHz survey of Sevenster et al., finding $\Delta V = 252 \pm 41$
km/s, if $u_{\rm LSR} = 0$. Assuming further that $R_0 = 8$ kpc and $V_{\rm
LSR} = 220$ km/s, this gives $\Omega_p = 59\pm 5$ km/s/kpc with a possible
systematic error of perhaps 10 km/s/kpc. The non-axisymmetric feature for which
we measure this pattern speed must be in the disc of the MWG.
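The quoted pattern speed follows directly from the numbers in the abstract, since $\Omega_p = (\Delta V + V_{\rm LSR})/R_0$:

```python
# Recovering the quoted pattern speed from the abstract's numbers.
dV, dV_err = 252.0, 41.0   # km/s, measured Delta V = Omega_p * R0 - V_LSR
V_LSR = 220.0              # km/s (assumed value from the abstract)
R0 = 8.0                   # kpc (assumed value from the abstract)

Omega_p = (dV + V_LSR) / R0    # km/s/kpc
Omega_p_err = dV_err / R0
print(Omega_p, Omega_p_err)    # 59.0 5.125  -> quoted as 59 +/- 5 km/s/kpc
```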
|
Ultra-cold atomic systems are among the most promising platforms that have
the potential to shed light on the complex behavior of many-body quantum
systems. One prominent example is the case of a dense ensemble illuminated by a
strong coherent drive while interacting via dipole-dipole interactions. Despite
being subjected to intense investigations, this system retains many open
questions. A recent experiment carried out in a pencil-shaped geometry reported
measurements that seemed consistent with the emergence of strong collective
effects in the form of a ``superradiant'' phase transition in free space, when
looking at the light emission properties in the forward direction. Motivated by
the experimental observations, we carry out a systematic theoretical analysis
of the system's steady-state properties as a function of the driving strength
and atom number, $N$. We observe signatures of collective effects in the weak
drive regime, which disappear with increasing drive strength as the system
evolves into a single-particle-like mixed state comprised of randomly aligned
dipoles. Although the steady-state features some similarities to the reported
superradiant to normal non-equilibrium transition, also known as cooperative
resonance fluorescence, we observe significant qualitative and quantitative
differences, including a different scaling of the critical drive parameter
(from $N$ to $\sqrt{N}$). We validate the applicability of a mean-field
treatment to capture the steady-state dynamics under currently accessible
conditions. Furthermore, we develop a simple theoretical model that explains
the scaling properties by accounting for interaction-induced inhomogeneous
effects and spontaneous emission, which are intrinsic features of interacting
disordered arrays in free space.
|
A popular paradigm for 3D point cloud registration is by extracting 3D
keypoint correspondences, then estimating the registration function from the
correspondences using a robust algorithm. However, many existing 3D keypoint
techniques tend to produce large proportions of erroneous correspondences or
outliers, which significantly increases the cost of robust estimation. An
alternative approach is to directly search for the subset of correspondences
that are pairwise consistent, without optimising the registration function.
This gives rise to the combinatorial problem of matching with pairwise
constraints. In this paper, we propose a very efficient maximum clique
algorithm to solve matching with pairwise constraints. Our technique combines
tree searching with efficient bounding and pruning based on graph colouring. We
demonstrate that, despite the theoretical intractability, many real problem
instances can be solved exactly and quickly (seconds to minutes) with our
algorithm, which makes our approach an excellent alternative to standard robust
techniques for 3D registration.
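The tree-search-plus-colouring idea can be sketched as a compact branch-and-bound: a greedy colouring of the candidate set bounds the largest clique it can still contain, which prunes the search. This is a generic maximum-clique routine in the spirit described, not the paper's exact algorithm:

```python
def max_clique(adj):
    """Branch-and-bound maximum clique with a greedy-colouring bound
    (a generic sketch of the tree-search-plus-colouring idea, not the
    paper's exact algorithm). adj: dict vertex -> set of neighbours."""
    best = set()

    def colour(cand):
        # Greedy colouring of the candidate set: the colour number of a
        # vertex bounds the size of any clique containing it within cand.
        classes = []                     # independent sets ("colours")
        coloured = []
        for v in sorted(cand, key=lambda v: -len(adj[v] & cand)):
            for i, cls in enumerate(classes):
                if not (adj[v] & cls):   # v is independent of this class
                    cls.add(v)
                    coloured.append((v, i + 1))
                    break
            else:
                classes.append({v})
                coloured.append((v, len(classes)))
        coloured.sort(key=lambda vc: vc[1])
        return coloured                  # vertices in increasing colour order

    def expand(clique, cand):
        nonlocal best
        for v, c in reversed(colour(cand)):     # highest colour first
            if len(clique) + c <= len(best):
                return                   # prune: bound cannot beat incumbent
            new_cand = cand & adj[v]
            if new_cand:
                expand(clique | {v}, new_cand)
            elif len(clique) + 1 > len(best):
                best = clique | {v}      # leaf: record improved clique
            cand = cand - {v}

    expand(set(), set(adj))
    return best

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
print(max_clique(adj))   # {0, 1, 2}
```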
|
We study a positivity condition for the curvature of oriented Riemannian
4-manifolds: The half-$PIC$ condition. It is a slight weakening of the positive
isotropic curvature ($PIC$) condition introduced by M. Micallef and J. Moore.
We observe that the half-$PIC$ condition is preserved by the Ricci flow and
satisfies a maximality property among all Ricci flow invariant positivity
conditions on the curvature of oriented 4-manifolds.
We also study some geometric and topological aspects of half-$PIC$ manifolds.
|
We investigate the Meissner currents of interacting bosons subjected to a
staggered artificial gauge field in a three-leg ribbon geometry, realized by
spin-tensor--momentum coupled spin-1 atoms in a 1D optical lattice. By
calculating the current distributions using the state-of-the-art density-matrix
renormalization-group method, we find a rich phase diagram containing
interesting Meissner and vortex phases, where the currents are mirror symmetric
with respect to the middle leg (i.e., they flow in the same
direction on the two boundary legs opposite to that on the middle leg), leading
to the spin-tensor type Meissner currents, which is very different from
previously observed chiral edge currents under uniform gauge field. The
currents are uniform along each leg in the Meissner phase and form
vortex-antivortex pairs in the vortex phase. Besides, the system also supports
a polarized phase that spontaneously breaks the mirror symmetry, whose ground
states are degenerate with currents either uniform or forming vortex-antivortex
pairs. We also discuss the experimental schemes for probing these phases. Our
work provides useful guidance to ongoing experimental research on synthetic
flux ribbons and paves the way for exploring novel many-body phenomena therein.
|
We study symmetric simple exclusion processes (SSEP) on a ring in the
presence of uniformly moving multiple defects or disorders - a generalization
of the model proposed earlier [Phys. Rev. E 89, 022138 (2014)]. The defects
move with uniform velocity and change the particle hopping rates locally. We
explore the collective effects of the defects on the spatial structure and
transport properties of the system. We also introduce an SSEP with ordered
sequential (sitewise) update and elucidate the close connection with our model.
|
In this paper, it is shown that quantum electrodynamic vacuum particle
production by a vaporization laser is negligible and is not a significant
energy sink, even at electric field strengths beyond the Schwinger limit.
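For scale, the Schwinger limit referenced above is the critical field E_S = m_e^2 c^3 / (e hbar) at which vacuum pair production becomes unsuppressed; it follows directly from standard constants:

```python
# The "Schwinger limit" is the critical field E_S = m_e^2 c^3 / (e * hbar)
# at which vacuum pair production becomes unsuppressed.
m_e  = 9.1093837e-31    # electron mass, kg
c    = 2.99792458e8     # speed of light, m/s
e    = 1.60217663e-19   # elementary charge, C
hbar = 1.05457182e-34   # reduced Planck constant, J s

E_S = m_e**2 * c**3 / (e * hbar)
print(f"{E_S:.2e} V/m")    # ~1.32e18 V/m
```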
|
In this short note, the author shows that the gap problem of certain $k$-CSPs,
whose predicate is supported on the ground set of a balanced pairwise
independent distribution, can be solved in polynomial time by a modified
version of Hast's Algorithm BiLin that calls Charikar & Wirth's SDP algorithm
for two rounds, when $k$ is sufficiently large, the support of the predicate is
the union of the ground sets of three biased homogeneous distributions, and the
three biases satisfy certain conditions. In conclusion, the author refutes the
Unique Games Conjecture, assuming $P\ne NP$.
|
PG 1159-035, a pre-white dwarf with T=140000 K, is the prototype of the
PG1159 spectroscopic class and the DOV pulsating class. Changes in the star
cause variations in its oscillation periods. The measurement of temporal change
in the oscillation periods, dP/dt, allows us to estimate directly rates of
stellar evolutionary changes, such as the cooling rate and the envelope
contraction rate, providing a way to test and refine evolutionary models for
pre-white dwarf pulsating stars.
We measured period changes for 27 pulsation modes. The periods varied at rates
between 1 and 100 ms/yr, and several could be measured directly with a relative
standard uncertainty below 10%. For the 516.0 s mode (the highest in amplitude)
in particular, not only the value of dP/dt can be measured directly with a
relative standard uncertainty of 2%, but the second order period change,
d(dP/dt)/dt, can also be calculated reliably. By using the (O-C) method we
refined the dP/dt and estimated the d(dP/dt)/dt for six other pulsation
periods. As a first application, we calculated the change in the PG 1159-035
rotation period, dP_rot/dt = -2.13*10^{-6} s/s, the envelope contraction rate
dR/dt = -2.2*10^{-13} solar radius/s, and the cooling rate dT/dt =
-1.42*10^{-3} K/s.
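The (O-C) determination of dP/dt has a simple toy illustration: for a linearly drifting period, times of maxima follow t(E) = P0*E + (dP/dt)*P0*E^2/2, so the (O-C) diagram against a constant-period ephemeris is a parabola whose quadratic coefficient encodes dP/dt. Values below are synthetic, not the paper's measurements:

```python
import numpy as np

# Toy (O-C) demonstration: synthesize times of maxima for a mode whose
# period drifts linearly, then recover dP/dt from a quadratic fit.
P0 = 516.0                      # s, pulsation period
dPdt_in = 1e-10                 # s/s, injected secular period change
E = np.arange(0.0, 2e5, 500.0)  # cycle numbers spanning a few years

t_obs = P0 * E + 0.5 * dPdt_in * P0 * E**2
OC = t_obs - P0 * E             # (O-C) vs the constant-period prediction

a2 = np.polyfit(E, OC, 2)[0]    # fitted quadratic coefficient = dPdt*P0/2
dPdt_out = 2 * a2 / P0
print(dPdt_out)                 # recovers ~1e-10 s/s
```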
|
Mehring et al. have recently described an elegant nuclear magnetic resonance
(NMR) experiment implementing an algorithm to factor numbers based on the
properties of Gauss sums. Similar experiments have also been described by
Mahesh et al. In fact these algorithms do not factor numbers directly, but
rather check whether a trial integer $\ell$ is a factor of a given integer $N$.
Here I show that these NMR schemes cannot be used for factor checking without
first implicitly determining whether or not $\ell$ is a factor of $N$.
|
This paper compares the advantages, limitations, and computational
considerations of using Finite-Time Lyapunov Exponents (FTLEs) and Lagrangian
Descriptors (LDs) as tools for identifying barriers and mechanisms of fluid
transport in two-dimensional time-periodic flows. These barriers and mechanisms
of transport are often referred to as "Lagrangian Coherent Structures," though
this term often changes meaning depending on the author or context. This paper
will specifically focus on using FTLEs and LDs to identify stable and unstable
manifolds of hyperbolic stagnation points, and the Kolmogorov-Arnold-Moser
(KAM) tori associated with elliptic stagnation points. The background and
theory behind both methods and their associated phase space structures will be
presented, and then examples of FTLEs and LDs will be shown based on a simple,
periodic, time-dependent double-gyre toy model with varying parameters.
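The FTLE side of the comparison can be computed compactly for the double-gyre model: advect a grid of tracers, differentiate the flow map, and take the log of the largest Cauchy-Green eigenvalue. The parameter values below are the common textbook choices, assumed rather than taken from the paper:

```python
import numpy as np

# Minimal FTLE field for the time-periodic double gyre.
A, eps, om = 0.1, 0.25, 2 * np.pi / 10   # textbook parameters (assumed)

def velocity(t, x, y):
    s = eps * np.sin(om * t)
    f = s * x**2 + (1 - 2 * s) * x
    dfdx = 2 * s * x + (1 - 2 * s)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, T, n=200):
    dt = T / n                           # fixed-step RK4 trajectory integration
    t = t0
    for _ in range(n):
        k1u, k1v = velocity(t, x, y)
        k2u, k2v = velocity(t + dt/2, x + dt/2*k1u, y + dt/2*k1v)
        k3u, k3v = velocity(t + dt/2, x + dt/2*k2u, y + dt/2*k2v)
        k4u, k4v = velocity(t + dt, x + dt*k3u, y + dt*k3v)
        x = x + dt/6*(k1u + 2*k2u + 2*k3u + k4u)
        y = y + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return x, y

X, Y = np.meshgrid(np.linspace(0, 2, 101), np.linspace(0, 1, 51))
T = 15.0
fx, fy = advect(X, Y, 0.0, T)

# Flow-map gradient by finite differences; FTLE from the largest
# eigenvalue of the right Cauchy-Green tensor C.
dxdX, dxdY = np.gradient(fx, X[0], Y[:, 0], axis=(1, 0))
dydX, dydY = np.gradient(fy, X[0], Y[:, 0], axis=(1, 0))
C11 = dxdX**2 + dydX**2
C12 = dxdX * dxdY + dydX * dydY
C22 = dxdY**2 + dydY**2
lam_max = 0.5 * (C11 + C22 + np.sqrt((C11 - C22)**2 + 4 * C12**2))
ftle = np.log(np.maximum(lam_max, 1e-12)) / (2 * abs(T))
print(ftle.shape, float(ftle.max()))
```

Ridges of the resulting field approximate the stable manifolds discussed in the text.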
|
The magnetic properties of the two-site Hubbard cluster (dimer or pair),
embedded in external electric and magnetic fields and treated as an open
system, are studied by means of the exact diagonalization of the Hamiltonian.
The formalism of the grand canonical ensemble is adopted. The phase diagrams,
on-site magnetization, spin-spin correlations, mean occupation numbers and
hopping energy are investigated and illustrated in figures. The influence of
temperature, mean electron concentration, Coulomb $U$ parameter and external
fields on the quantities of interest is presented and discussed. In particular,
the anomalous behaviour of the magnetization and correlation function vs.
temperature near the critical magnetic field is found. Also, the effect of
magnetization switching by the external fields is demonstrated.
|
Prediction of future states of the environment and interacting agents is a
key competence required for autonomous agents to operate successfully in the
real world. Prior work for structured sequence prediction based on latent
variable models imposes a uni-modal standard Gaussian prior on the latent
variables. This induces a strong model bias which makes it challenging to fully
capture the multi-modality of the distribution of the future states. In this
work, we introduce Conditional Flow Variational Autoencoders (CF-VAE) using our
novel conditional normalizing flow based prior to capture complex multi-modal
conditional distributions for effective structured sequence prediction.
Moreover, we propose two novel regularization schemes which stabilize training,
mitigate posterior collapse, and yield a better fit to the target data
distribution. Our experiments on three multi-modal structured
sequence prediction datasets -- MNIST Sequences, Stanford Drone and HighD --
show that the proposed method obtains state-of-the-art results across different
evaluation metrics.
|
We prove that the $\cal P$ norm estimate between a Hardy martingale and its
cosine part is stable under dyadic perturbations, and show how dyadic
stability of the $\cal P$ norm estimate is used in the proof that $L^1$ embeds
into $L^1/H^1$.
|
The dynamical transition occurring in spin-glass models with one step of
Replica-Symmetry-Breaking is a mean-field artifact that disappears in finite
systems and/or in finite dimensions. The critical fluctuations that smooth the
transition are described in the $\beta$ regime by dynamical stochastic
equations. The quantitative parameters of the dynamical stochastic equations
have been computed analytically on the 3-spin Bethe lattice Spin-Glass by means
of the (static) cavity method and the equations have been solved numerically.
The resulting parameter-free dynamical predictions are shown here to be in
excellent agreement with numerical simulation data for the correlation and its
fluctuations.
|
We study reliable communication in uncoordinated vehicular communication from
the perspective of Shannon theory. Our system model for the information
transmission is that of an Arbitrarily Varying Channel (AVC): One
sender-receiver pair wants to communicate reliably, no matter what the input of
a second sender is. The second sender is assumed to be uncoordinated and
interfering, but is supposed to follow the rational goal of transmitting
information otherwise. We prove that repetition coding can increase the
capacity of such a system by relating the notion of symmetrizability of an
arbitrarily varying channel to invertibility of the corresponding channel
matrix. Explicit upper bounds on the number of repetitions needed to prevent
system breakdown through diversity are provided. Further we introduce the
notion of block-restricted jamming and present a lower and an upper bound on
the maximum error capacity of the corresponding restricted AVC.
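The repetition-coding claim has a simple deterministic caricature: if the interfering sender can corrupt at most t symbols per block, then n = 2t + 1 repetitions with majority voting always recover the data. This is a toy illustration of why repetition restores decodability, not the paper's AVC symmetrizability argument:

```python
# Toy illustration: majority voting over n = 2t + 1 repetitions defeats
# any adversary limited to t corrupted symbols per block.
def encode(bits, n):
    return [b for b in bits for _ in range(n)]

def majority_decode(symbols, n):
    return [1 if sum(symbols[i:i + n]) > n // 2 else 0
            for i in range(0, len(symbols), n)]

def jam(symbols, n, t):
    out = list(symbols)
    for i in range(0, len(out), n):   # adversary flips t symbols per block
        for j in range(t):
            out[i + j] ^= 1
    return out

msg = [1, 0, 1, 1, 0]
t = 2
n = 2 * t + 1
print(majority_decode(jam(encode(msg, n), n, t), n) == msg)   # True
```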
|
Let $\mathfrak{g}$ be a semisimple Lie algebra over $\mathbb{C}$. Let $\nu
\in \text{Aut}\, \mathfrak{g}$ be a diagram automorphism whose order divides $T
\in \mathbb{Z}_{\geq 1}$. We define cyclotomic $\mathfrak{g}$-opers over the
Riemann sphere $\mathbb{P}^1$ as gauge equivalence classes of
$\mathfrak{g}$-valued connections of a certain form, equivariant under actions
of the cyclic group $\mathbb{Z}/ T\mathbb{Z}$ on $\mathfrak{g}$ and
$\mathbb{P}^1$. It reduces to the usual notion of $\mathfrak{g}$-opers when $T
= 1$.
We also extend the notion of Miura $\mathfrak{g}$-opers to the cyclotomic
setting. To any cyclotomic Miura $\mathfrak{g}$-oper $\nabla$ we associate a
corresponding cyclotomic $\mathfrak{g}$-oper. Let $\nabla$ have residue at the
origin given by a $\nu$-invariant rational dominant coweight
$\check{\lambda}_0$ and be monodromy-free on a cover of $\mathbb{P}^1$. We
prove that the subset of all cyclotomic Miura $\mathfrak{g}$-opers associated
with the same cyclotomic $\mathfrak{g}$-oper as $\nabla$ is isomorphic to the
$\vartheta$-invariant subset of the full flag variety of the adjoint group $G$
of $\mathfrak{g}$, where the automorphism $\vartheta$ depends on $\nu$, $T$ and
$\check{\lambda}_0$. The big cell of the latter is isomorphic to $N^\vartheta$,
the $\vartheta$-invariant subgroup of the unipotent subgroup $N \subset G$,
which we identify with those cyclotomic Miura $\mathfrak{g}$-opers whose
residue at the origin is the same as that of $\nabla$. In particular, the
cyclotomic generation procedure recently introduced in [arXiv:1505.07582] is
interpreted as taking $\nabla$ to other cyclotomic Miura $\mathfrak{g}$-opers
corresponding to elements of $N^\vartheta$ associated with simple root
generators.
We motivate the introduction of cyclotomic $\mathfrak{g}$-opers by
formulating two conjectures which relate them to the cyclotomic Gaudin model of
[arXiv:1409.6937].
|
Time series are ubiquitous and therefore inherently hard to analyze and
ultimately to label or cluster. With the rise of the Internet of Things (IoT)
and its smart devices, data is collected in large amounts every second. The
collected data is rich in information, as one can detect accidents (e.g. cars)
in real time, or assess injury/sickness over a given time span (e.g. health
devices). Due to its chaotic nature and massive amounts of datapoints,
timeseries are hard to label manually. Furthermore new classes within the data
could emerge over time (contrary to e.g. handwritten digits), which would
require relabeling the data. In this paper we present SuSL4TS, a deep
generative Gaussian mixture model for semi-unsupervised learning, to classify
time series data. With our approach we can alleviate manual labeling steps,
since we can detect sparsely labeled classes (semi-supervised) and identify
emerging classes hidden in the data (unsupervised). We demonstrate the efficacy
of our approach with established time series classification datasets from
different domains.
|
We construct countably infinitely many nonradial singular solutions of the
problem \[ \Delta u+e^u=0\ \ \textrm{in}\ \ \mathbb{R}^N\backslash\{0\},\ \
4\le N\le 10 \] of the form \[ u(r,\sigma)=-2\log r+\log 2(N-2)+v(\sigma), \]
where $v(\sigma)$ depends only on $\sigma\in \mathbb{S}^{N-1}$. To this end we
construct countably infinitely many solutions of \[
\Delta_{\mathbb{S}^{N-1}}v+2(N-2)(e^v-1)=0,\ \ 4\le N\le 10, \] using ODE
techniques.
|
The organization of live cells into tissues is associated with the mechanical
interaction between cells, which is mediated through their elastic environment.
We model cells as spherical active force dipoles surrounded by an infinite
elastic matrix, and analytically evaluate the interaction energy for different
scenarios of their regulatory behavior. We obtain attraction for homeostatic
(set point) forces and repulsion for homeostatic displacements. When the
translational motion of the cells is regulated, the interaction energy decays
with distance as $1/d^4$, while when it is not regulated the energy decays as
$1/d^6$. This arises from the same reasons as the van der Waals interaction
between induced electric dipoles.
|
We compute the error threshold of color codes, a class of topological quantum
codes that allow a direct implementation of quantum Clifford gates, when both
qubit and measurement errors are present. By mapping the problem onto a
statistical-mechanical three-dimensional disordered Ising lattice gauge theory,
we estimate via large-scale Monte Carlo simulations that color codes are stable
against 4.5(2)% errors. Furthermore, by evaluating the skewness of the Wilson
loop distributions, we introduce a very sensitive probe to locate first-order
phase transitions in lattice gauge theories.
|
We prove results relating the theory of optimal transport and generalized
Ricci flow. We define an adapted cost functional for measures using a solution
of the associated dilaton flow. This determines a formal notion of geodesics in
the space of measures, and we show geodesic convexity of an associated entropy
functional. Finally, we show monotonicity of the cost along the backwards heat
flow, and use this to give a new proof of the monotonicity of the energy
functional along generalized Ricci flow.
|
Valence quark distributions of pion at very low resolution scale $Q^{2}_0
\sim 0.1~GeV^2$ are deduced from a maximum entropy method, under the assumption
that pion consists of only a valence quark and a valence anti-quark at such a
low scale. Taking the obtained initial quark distributions as the
nonperturbative input in the modified
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (with the GLR-MQ-ZRS corrections)
evolution, the generated valence quark distribution functions at high $Q^2$ are
consistent with the measured ones from a Drell-Yan experiment. The maximum
entropy method is also applied to estimate the valence quark distributions at
relatively higher $Q^2$ = 0.26 GeV$^{2}$. At this higher scale, other
components (sea quarks and gluons) should be considered in order to match the
experimental data. The first three moments of pion quark distributions at high
$Q^2$ are calculated and compared with the other theoretical predictions.
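For the common valence parametrization q_v(x) = N x^a (1-x)^b, normalized to one valence quark, the moments <x^n> reduce to Beta-function ratios; a toy sketch (the exponents here are assumed for illustration, not the deduced distributions):

```python
from math import gamma

# Moments <x^n> of q_v(x) = N * x**a * (1 - x)**b normalized to one
# valence quark; illustrative exponents, not the paper's distributions.
def beta(p, q):
    return gamma(p) * gamma(q) / gamma(p + q)

a, b = 0.5, 1.0
moment = lambda n: beta(a + n + 1, b + 1) / beta(a + 1, b + 1)   # <x^n>
print([round(moment(n), 4) for n in (1, 2, 3)])
```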
|
In this article, we introduce a modular hybrid analysis and modeling (HAM)
approach to account for hidden physics in reduced order modeling (ROM) of
parameterized systems relevant to fluid dynamics. The hybrid ROM framework is
based on using first principles to model the known physics, in conjunction with
data-driven machine learning tools to model the remaining residual hidden in
the data. This framework employs proper orthogonal
decomposition as a compression tool to construct orthonormal bases and Galerkin
projection (GP) as a model to build the dynamical core of the system. Our
proposed methodology hence compensates for structural or epistemic
uncertainties in models and utilizes the observed data snapshots to compute true modal
coefficients spanned by these bases. The GP model is then corrected at every
time step with a data-driven rectification using a long short-term memory
(LSTM) neural network architecture to incorporate hidden physics. A
Grassmannian manifold approach is also adapted for interpolating basis
functions to unseen parametric conditions. The control parameter governing the
system's behavior is thus implicitly considered through true modal coefficients
as input features to the LSTM network. The effectiveness of the HAM approach is
discussed through illustrative examples that are generated synthetically to
take hidden physics into account. Our approach thus provides insights
addressing a fundamental limitation of the physics-based models when the
governing equations are incomplete to represent underlying physical processes.
|
We study the approximation properties of a harmonic function $u \in
H\sp{1-k}(\Omega)$, $k > 0$, on a relatively compact sub-domain $A$ of $\Omega$,
using the Generalized Finite Element Method. For smooth, bounded domains
$\Omega$, we obtain that the GFEM--approximation $u_S$ satisfies $\|u -
u_S\|_{H\sp{1}(A)} \le C h^{\gamma}\|u\|_{H\sp{1-k}(\Omega)}$, where $h$ is the
typical size of the ``elements'' defining the GFEM--space $S$ and $\gamma \ge 0
$ is such that the local approximation spaces contain all polynomials of degree
$k + \gamma + 1$. The main technical result is an extension of the classical
super-approximation results of Nitsche and Schatz \cite{NitscheSchatz72} and,
especially, \cite{NitscheSchatz74}. It turns out that, in addition to the usual
``energy'' Sobolev spaces $H^1$, one must also use the negative-order Sobolev
spaces $H\sp{-l}$, $l \ge 0$, which are defined by duality and contain the
distributional boundary data.
|
In this paper theoretical and statistical/experimental criteria for
determining the nanoscale strength of materials are proposed. In particular,
quantized criteria in fracture mechanics, dynamic fracture mechanics and
fatigue, as well as an experimental indirect observation of the nanoscale
strength, are proposed. The increase of the dynamic resistance and the role
of fractal crack surface formation are also rationalized. The analysis shows
that materials can be sensitive to flaws also at the nanoscale (as demonstrated for
carbon nanotubes), in contrast to the conclusion of a recently published paper,
and that the surfaces are weaker than the inner parts of a solid by a factor of
about 10%. In addition, the proposed statistical/experimental procedure is
applied for predicting the nanoscale strength of the ultrananocrystalline
diamond (UNCD), an innovative material only recently developed.
|
We present "interoperability" as a guiding framework for statistical
modelling to assist policy makers asking multiple questions using diverse
datasets in the face of an evolving pandemic response. Interoperability
provides an important set of principles for future pandemic preparedness,
through the joint design and deployment of adaptable systems of statistical
models for disease surveillance using probabilistic reasoning. We illustrate
this through case studies for inferring spatial-temporal coronavirus disease
2019 (COVID-19) prevalence and reproduction numbers in England.
|
This paper reviews the status of molecular dynamics as a method in describing
solid-solid phase transitions, and its relationship to continuum approaches.
Simulation work done in NiTi and Zr using first principles and semi-empirical
potentials is presented. This shows failures of extending equilibrium
thermodynamics to the nanoscale, and the crucial importance of system-specific
details to the dynamics of martensite formation. The inconsistency between
experimental and theoretical crystal structures in NiTi is described, together
with its possible resolution in terms of nanoscale effects.
|
Integrated optomechanical systems are one of the leading platforms for
manipulating, sensing, and distributing quantum information. The temperature
increase due to residual optical absorption sets the ultimate limit on
performance for these applications. In this work, we demonstrate a
two-dimensional optomechanical crystal geometry, named \textbf{b-dagger}, that
alleviates this problem through increased thermal anchoring to the surrounding
material. Our mechanical mode operates at 7.4 GHz, well within the operation
range of standard cryogenic microwave hardware and piezoelectric transducers.
The enhanced thermalization combined with the large optomechanical coupling
rates, $g_0/2\pi \approx 880~\mathrm{kHz}$, and high optical quality factors,
$Q_\text{opt} = 2.4 \times 10^5$, enables the ground-state cooling of the
acoustic mode to phononic occupancies as low as $n_\text{m} = 0.35$ from an
initial temperature of 3 kelvin, as well as entering the optomechanical
strong-coupling regime. Finally, we perform pulsed sideband asymmetry
measurements on our devices at a temperature below 10 millikelvin and
demonstrate ground-state operation ($n_\text{m} < 0.45$) for repetition
rates as high as 3 MHz. Our
results extend the boundaries of optomechanical system capabilities and
establish a robust foundation for the next generation of microwave-to-optical
transducers with entanglement rates overcoming the decoherence rates of
state-of-the-art superconducting qubits.
|
Nitrogen heteroatom doping into a triangulene molecule allows tuning its
magnetic state. However, the synthesis of the nitrogen-doped triangulene
(aza-triangulene) has been challenging. Herein, we report the successful
synthesis of aza-triangulene on the Au(111) and Ag(111) surfaces, along with
their characterizations by scanning tunneling microscopy and spectroscopy in
combination with density functional theory (DFT) calculations. Aza-triangulenes
were obtained by reducing ketone-substituted precursors. Exposure to atomic
hydrogen followed by thermal annealing and, when necessary, manipulations with
the scanning probe, afforded the target product. We demonstrate that on Au(111)
aza-triangulene donates an electron to the substrate and exhibits an open-shell
triplet ground state. This is derived from the different Kondo resonances of
the final aza-triangulene product and a series of intermediates on Au(111).
Experimentally mapped molecular orbitals perfectly match the DFT-calculated
counterparts for a positively charged aza-triangulene. In contrast,
aza-triangulene on Ag(111) receives an extra electron from the substrate and
displays a closed-shell character. Our study reveals the electronic properties
of aza-triangulene on different metal surfaces and offers an efficient approach
for the fabrication of new hydrocarbon structures, including reactive
open-shell molecules.
|
We deduce $q$-continued fractions $S_{1}(q)$, $S_{2}(q)$ and $S_{3}(q)$ of
order fourteen, and continued fractions $V_{1}(q)$, $V_{2}(q)$ and $V_{3}(q)$
of order twenty-eight from a general continued fraction identity of Ramanujan.
We establish some theta-function identities for the continued fractions and
derive some colour partition identities as applications. Some vanishing
coefficients results arising from the continued fractions are also offered.
|
The approach to ordinary differential equations with distributions in the
classical space $\mathcal D'$ of distributions with continuous test functions
has certain shortcomings: the notation is incorrect from the point of view of
distribution theory, and the right-hand side has to satisfy restrictive
equality-type conditions. In the present
paper we consider an initial value problem for the ordinary differential
equation with distributions in the space of distributions with dynamic test
functions $\mathcal T'$, where the continuous operation of multiplication of
distributions by discontinuous functions is defined, and show that this
approach does not have the aforementioned shortcomings.
We provide sufficient conditions for the viability of solutions of the
ordinary differential equations with distributions (a generalization of the
Nagumo Theorem), and show that the consideration of the distributional
(impulse) controls in the problem for avoidance of encounters with the set (the
maximal viability time problem) allows us to provide the existence of solution,
which may not exist for the ordinary controls.
|
We study the problem of selling identical goods to n unit-demand bidders in a
setting in which the total supply of goods is unknown to the mechanism. Items
arrive dynamically, and the seller must make the allocation and payment
decisions online with the goal of maximizing social welfare. We consider two
models of unknown supply: the adversarial supply model, in which the mechanism
must produce a welfare guarantee for any arbitrary supply, and the stochastic
supply model, in which supply is drawn from a distribution known to the
mechanism, and the mechanism need only provide a welfare guarantee in
expectation.
Our main result is a separation between these two models. We show that all
truthful mechanisms, even randomized ones, achieve a diminishing fraction of
the optimal social welfare (namely, no better than an Omega(log log n) approximation)
in the adversarial setting. In sharp contrast, in the stochastic model, under a
standard monotone hazard-rate condition, we present a truthful mechanism that
achieves a constant approximation. We show that the monotone hazard rate
condition is necessary, and also characterize a natural subclass of truthful
mechanisms in our setting, the set of online-envy-free mechanisms. All of the
mechanisms we present fall into this class, and we prove almost optimal lower
bounds for such mechanisms. Since auctions with unknown supply are regularly
run in many online-advertising settings, our main results emphasize the
importance of considering distributional information in the design of auctions
in such environments.
|
The drivers of compositionality in artificial languages that emerge when two
(or more) agents play a non-visual referential game have been previously
investigated using approaches based on the REINFORCE algorithm and the (Neural)
Iterated Learning Model. Following the more recent introduction of the
\textit{Straight-Through Gumbel-Softmax} (ST-GS) approach, this paper
investigates to what extent the drivers of compositionality identified so far
in the field apply in the ST-GS context and to what extent they translate
into (emergent) systematic generalisation abilities, when playing a visual
referential game. Compositionality and the generalisation abilities of the
emergent languages are assessed using topographic similarity and zero-shot
compositional tests. Firstly, we provide evidence that the test-train split
strategy significantly impacts the zero-shot compositional tests when dealing
with visual stimuli, whilst it does not when dealing with symbolic ones.
Secondly, empirical evidence shows that using the ST-GS approach with small
batch sizes and an overcomplete communication channel improves compositionality
in the emerging languages. Nevertheless, while shown robust with symbolic
stimuli, the effect of the batch size is not so clear-cut when dealing with
visual stimuli. Our results also show that not all overcomplete communication
channels are created equal. Indeed, while increasing the maximum sentence
length is found to be beneficial to further both compositionality and
generalisation abilities, increasing the vocabulary size is found detrimental.
Finally, a lack of correlation between the language compositionality at
training-time and the agents' generalisation abilities is observed in the
context of discriminative referential games with visual stimuli. This is
similar to previous observations in the field using the generative variant with
symbolic stimuli.
|
We study perfect state transfer on quantum networks represented by weighted
graphs. Our focus is on graphs constructed from the join and related graph
operators. Some specific results we prove include: (1) The join of a weighted
two-vertex graph with any regular graph has perfect state transfer. This
generalizes a result of Casaccino et al. [clms09] where the regular graph is a
complete graph or a complete graph with a missing link. In contrast, the
half-join of a weighted two-vertex graph with any weighted regular graph has no
perfect state transfer. This implies that adding weights in a complete
bipartite graph does not help in achieving perfect state transfer. (2) A Hamming
graph has perfect state transfer between each pair of its vertices. This is
obtained using a closure property on weighted Cartesian products of perfect
state transfer graphs. Moreover, on the hypercube, we show that perfect state
transfer occurs between uniform superpositions on pairs of arbitrary subcubes.
This generalizes results of Bernasconi et al. [bgs08] and Moore and Russell
[mr02]. Our techniques rely heavily on the spectral properties of graphs built
using the join and Cartesian product operators.
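The claimed perfect state transfer between antipodal hypercube vertices can be checked numerically. The sketch below (an illustrative check, not the authors' code) builds the $n$-cube adjacency matrix and evolves a vertex state under the continuous-time quantum walk $e^{-iAt}$; at $t = \pi/2$ the transfer amplitude to the antipodal vertex has magnitude one.

```python
import numpy as np

# Numerical check: on the n-cube, exp(-iAt) transfers a state perfectly
# between antipodal vertices at time t = pi/2.
def hypercube_adjacency(n):
    N = 2 ** n
    A = np.zeros((N, N))
    for v in range(N):
        for k in range(n):
            A[v, v ^ (1 << k)] = 1.0  # flip bit k -> neighboring vertex
    return A

def transfer_amplitude(n, t):
    A = hypercube_adjacency(n)
    w, V = np.linalg.eigh(A)                      # A is real symmetric
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U[0, 2 ** n - 1]                       # vertex 0 -> its antipode

amp = transfer_amplitude(3, np.pi / 2)
print(abs(amp))  # ~1.0: perfect state transfer
```

The factorization of the walk over the Cartesian product structure is what makes the amplitude modulus exactly one for every $n$.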
|
Local diffusivity of a protein depends crucially on the conformation, and the
conformational fluctuations are often non-Markovian. Here, we investigate the
Langevin equation with non-Markovian fluctuating diffusivity, where the
fluctuating diffusivity is modeled by a generalized Langevin equation under a
double-well potential. We find that non-Markovian fluctuating diffusivity
affects the global diffusivity, i.e., the diffusion coefficient obtained from the
long-time trajectories when the memory kernel in the generalized Langevin
equation is a power-law form. On the other hand, the diffusion coefficient does
not change when the memory kernel is exponential. We show that these
non-Markovian effects are the consequences of an everlasting effect of the
initial condition on the stationary distribution in the generalized Langevin
equation under a double-well potential due to long-term memory.
|
Physicists and physics students have been studied with respect to the
variation in ways they expound on their topic of research and a physics
problem, respectively. A phenomenographic approach has been employed; six
fourth-year physics students and ten teacher/researcher physicists at various
stages of their careers have been interviewed. Four qualitatively distinct ways
of expounding on physics have been identified, constituting an outcome space
where there is a successive shift towards coherent structure and multiple
referent domains. The interviewed person is characterised as expressing an
'object of knowledge' and the interviewer is characterised as a willing and
active listener who is trying to make sense of it, constituting a 'knowledge
object' out of the ideas, data and personal experience. Pedagogical situations
of analogous character to the interviewer-interviewee discussions are
considered in the light of the analysis, focusing on the affordances for
learning offered by the different forms of exposition.
|
A theoretical and experimental study of the spin-over mode induced by the
elliptical instability of a flow contained in a slightly deformed rotating
spherical shell is presented. This geometrical configuration mimics the liquid
rotating cores of planets when deformed by tides coming from neighboring
gravitational bodies. Theoretical estimations for the growth rates and for the
nonlinear amplitude saturations of the unstable mode are obtained and compared
to experimental data from laser Doppler anemometry measurements.
Visualizations and descriptions of the various characteristics of the
instability are given as functions of the flow parameter.
|
In this paper, we propose the Redundancy Reduction Twins Network (RRTN), a
redundancy reduction training framework that minimizes redundancy by measuring
the cross-correlation matrix between the outputs of the same network fed with
distorted versions of a sample and bringing it as close to the identity matrix
as possible. RRTN also applies a new loss function, the Barlow Twins loss
function, to help maximize the similarity of representations obtained from
different distorted versions of a sample. However, as the distribution of
losses can cause performance fluctuations in the network, we also propose the
use of a Restrained Uncertainty Weight Loss (RUWL) or joint training to
identify the best weights for the loss function. Our best approach on CNN14
with the proposed methodology obtains a CCC over emotion regression of 0.678 on
the ExVo Multi-task dev set, a 4.8% increase over a vanilla CNN14 CCC of
0.647, which achieves a significant difference at the 95% confidence interval
(2-tailed).
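The Barlow Twins objective described above can be sketched in a few lines of numpy (an illustrative implementation, not RRTN's training code; the weighting `lam` is an assumed hyperparameter): the batch-normalized cross-correlation matrix between two distorted views is pushed toward the identity.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (batch, dim) embeddings of two distorted views of the same samples."""
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # batch-normalize each dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / z1.shape[0]          # cross-correlation matrix (dim, dim)
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # pull diagonal to 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # push off-diagonal to 0
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 8))
loss_same = barlow_twins_loss(z, z)
print(loss_same < 0.1)  # on-diagonal term vanishes for identical views
```

Driving the diagonal to one maximizes view similarity, while penalizing off-diagonal entries reduces redundancy between embedding dimensions.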
|
Context. Solar Orbiter and PSP jointly observed the solar wind for the first
time in June 2020, capturing data from very different solar wind streams, calm
and Alfv\'enic wind as well as many dynamic structures. Aims. The aim here is
to understand the origin and characteristics of the highly dynamic solar wind
observed by the two probes, in particular in the vicinity of the heliospheric
current sheet (HCS). Methods. We analyse the plasma data obtained by PSP and
Solar Orbiter in situ during the month of June 2020. We use the Alfv\'en-wave
turbulence MHD solar wind model WindPredict-AW, and perform two 3D simulations
based on ADAPT solar magnetograms for this period. Results. We show that the
dynamic regions measured by both spacecraft are pervaded with flux ropes close
to the HCS. These flux ropes are also present in the simulations, forming at
the tip of helmet streamers, i.e. at the base of the heliospheric current
sheet. The formation mechanism involves a pressure driven instability followed
by a fast tearing reconnection process, consistent with the picture of
R\'eville et al. (2020a). We further characterize the 3D spatial structure of
helmet-streamer-born flux ropes, which seems, in the simulations, to be related
to the network of quasi-separatrices.
|
In this paper we investigate the balanced condition (in the sense of
Donaldson) and the existence of an Englis expansion for the LeBrun's metrics on
$C^2$. Our first result shows that a LeBrun's metric on $C^2$ is never balanced
unless it is the flat metric. The second one shows that an Englis expansion of
the Rawnsley's function associated to a LeBrun's metric always exists, while
the coefficient $a_3$ of the expansion vanishes if and only if the LeBrun's
metric is indeed the flat one.
|
We first review some invariant theoretic results about the finite subgroups
of SU(2) in a quick algebraic way by using the McKay correspondence and quantum
affine Cartan matrices. Along the way, it turns out that some parameters
(a,b,h;p,q,r) that one usually associates with such a group and hence with a
simply-laced Coxeter-Dynkin diagram have a meaningful definition for the
non-simply-laced diagrams, too, and as a byproduct we extend Saito's formula
for the determinant of the Cartan matrix to all cases. Returning to invariant
theory we show that for each irreducible representation i of a binary
tetrahedral, octahedral, or icosahedral group one can find a homomorphism into
a finite complex reflection group whose defining reflection representation
restricts to i.
|
Text generation rarely considers the control of lexical complexity, which
limits its wider practical application. We introduce a novel task of lexical
complexity controlled sentence generation, which aims at generating sentences
from keywords with desired complexity levels. It has enormous potential in
domains such as graded reading, language teaching and acquisition. The
challenge of this task is to generate fluent sentences only using the words of
given complexity levels. We propose a simple but effective approach for this
task based on complexity embedding. Compared with potential solutions, our
approach fuses the representations of the word complexity levels into the model
to get better control of lexical complexity. And we demonstrate the feasibility
of the approach for both training models from scratch and fine-tuning the
pre-trained models. To facilitate the research, we develop two datasets in
English and Chinese respectively, on which extensive experiments are conducted.
Results show that our approach better controls lexical complexity and generates
higher quality sentences than baseline methods.
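The complexity-embedding idea can be sketched as follows (names, shapes, and the additive fusion are illustrative assumptions; the paper's model and parameters are not reproduced here): a trainable embedding for the target complexity level is fused into the token embeddings that feed the generator.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_levels, dim = 100, 5, 16

token_emb = rng.normal(size=(vocab_size, dim))   # ordinary token embeddings
level_emb = rng.normal(size=(n_levels, dim))     # one vector per complexity level

def fuse(token_ids, level):
    """Token embeddings conditioned on the desired complexity level (additive fusion)."""
    return token_emb[token_ids] + level_emb[level]

x = fuse(np.array([3, 17, 42]), level=2)
print(x.shape)  # (3, 16)
```

Conditioning every input position on the level vector gives the decoder a persistent signal about the target lexical complexity.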
|
In recent years, large-scale auto-regressive models have made significant
progress in various tasks, such as text or video generation. However, the
environmental impact of these models has been largely overlooked, with a lack
of assessment and analysis of their carbon footprint. To address this gap, we
introduce OpenCarbonEval, a unified framework for integrating large-scale
models across diverse modalities to predict carbon emissions, which could
provide AI service providers and users with a means to estimate emissions
beforehand and help mitigate the environmental pressure associated with these
models. In OpenCarbonEval, we propose a dynamic throughput modeling approach
that could capture workload and hardware fluctuations in the training process
for more precise emissions estimates. Our evaluation results demonstrate that
OpenCarbonEval can more accurately predict training emissions than previous
methods, and can be seamlessly applied to different modal tasks. Specifically,
we show that OpenCarbonEval achieves superior performance in predicting carbon
emissions for both visual models and language models. By promoting sustainable
AI development and deployment, OpenCarbonEval can help reduce the environmental
impact of large-scale models and contribute to a more environmentally
responsible future for the AI community.
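For orientation, a static back-of-the-envelope training-emissions estimate looks like the following (all figures are illustrative assumptions; OpenCarbonEval's dynamic throughput modeling is precisely a refinement of this kind of static formula):

```python
# Rough training-emissions estimate: compute time from total FLOPs and
# sustained throughput, then energy and grid carbon intensity.
def training_emissions_kg(total_flops, flops_per_sec, gpu_power_w,
                          n_gpus, pue=1.2, grid_kgco2_per_kwh=0.4):
    seconds = total_flops / (flops_per_sec * n_gpus)
    energy_kwh = n_gpus * gpu_power_w * seconds / 3.6e6  # W*s -> kWh
    return energy_kwh * pue * grid_kgco2_per_kwh

# e.g. 1e21 FLOPs on 8 GPUs at 1e14 FLOP/s and 300 W each (assumed numbers)
est = training_emissions_kg(1e21, 1e14, 300, 8)
print(round(est, 1))  # 400.0 kg CO2e
```

The static formula assumes constant throughput and power draw, which is exactly what fluctuating workloads break in practice.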
|
We propose a new theory framework to study the electroweak radiative
corrections in $K_{l3}$ decays by combining the classic current algebra
approach with the modern effective field theory. Under this framework, the most
important $\mathcal{O}(G_F\alpha)$ radiative corrections are described by a
single tensor $T^{\mu\nu}$ involving the time-ordered product between the
charged weak current and the electromagnetic current, and all remaining pieces
are calculable order-by-order in Chiral Perturbation Theory. We further point
out a special advantage in the $K_{l3}^{0}$ channel that it suffers the least
impact from the poorly-constrained low-energy constants. This finding may serve
as a basis for a more precise extraction of the matrix element $V_{us}$ in the
future.
|
Quantum machine learning has emerged as a promising utilization of near-term
quantum computation devices. However, algorithmic classes such as variational
quantum algorithms have been shown to suffer from barren plateaus due to
vanishing gradients in their parameter spaces. We present an approach to
quantum algorithm optimization that is based on trainable Fourier coefficients
of Hamiltonian system parameters. Our ansatz is exclusive to the extension of
discrete quantum variational algorithms to analog quantum optimal control
schemes and is non-local in time. We demonstrate the viability of our ansatz on
the objectives of compiling the quantum Fourier transform and preparing ground
states of random problem Hamiltonians. In comparison to the temporally local
discretization ans\"atze in quantum optimal control and parameterized circuits,
our ansatz exhibits faster and more consistent convergence. We uniformly sample
objective gradients across the parameter space and find that in our ansatz the
variance decays at a non-exponential rate with the number of qubits, while it
decays at an exponential rate in the temporally local benchmark ansatz. This
indicates the mitigation of barren plateaus in our ansatz. We propose our
ansatz as a viable candidate for near-term quantum machine learning.
|
We present results for QCD with 2 degenerate flavours of quark using a
non-perturbatively improved action on a lattice volume of $16^3\times32$ where
the bare gauge coupling and bare dynamical quark mass have been chosen to
maintain a fixed physical lattice spacing and volume (1.71 fm). By comparing
measurements from these matched ensembles, including quenched ones, we find
evidence of dynamical quark effects on the short distance static potential, the
scalar glueball mass and the topological susceptibility. There is little
evidence of effects on the light hadron spectrum over the range of quark masses
studied ($m_{\pi}/m_{\rho}\geq 0.60$).
|
We report the spin texture formation resulting from the magnetic
dipole-dipole interaction in a spin-2 $^{87}$Rb Bose-Einstein condensate. The
spinor condensate is prepared in the transversely polarized spin state and the
time evolution is observed under a magnetic field of 90 mG with a gradient of 3
mG/cm using Stern-Gerlach imaging. The experimental results are compared with
numerical simulations of the Gross-Pitaevskii equation, which reveals that the
observed spatial modulation of the longitudinal magnetization is due to the
spin precession in an effective magnetic field produced by the dipole-dipole
interaction. These results show that the dipole-dipole interaction has
considerable effects even on spinor condensates of alkali metal atoms.
|
Several domination results have been obtained for maximal outerplanar graphs
(mops). The classical domination problem is to minimize the size of a set $S$
of vertices of an $n$-vertex graph $G$ such that $G - N[S]$, the graph obtained
by deleting the closed neighborhood of $S$, is null. A classical result of
Chv\'{a}tal is that the minimum size is at most $n/3$ if $G$ is a mop. Here we
consider a modification by allowing $G - N[S]$ to have isolated vertices and
isolated edges only. Let $\iota_1(G)$ denote the size of a smallest set $S$ for
which this is achieved. We show that if $G$ is a mop on $n \geq 5$ vertices,
then $\iota_{1}(G) \leq n/5$. We also show that if $n_2$ is the number of
vertices of degree $2$, then $\iota_{1}(G) \leq \frac{n+n_2}{6}$ if $n_2 \leq
\frac{n}{3}$, and $\iota_1(G) \leq \frac{n-n_2}{3}$ otherwise. We show that
these bounds are best possible.
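The parameter $\iota_1$ can be verified by brute force on a small example (an illustrative check, not from the paper): $\iota_1(G)$ is the smallest $|S|$ such that $G - N[S]$ has maximum degree at most 1, i.e., only isolated vertices and isolated edges remain.

```python
from itertools import combinations

def iota1(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # try sets S of increasing size; the first success is the minimum
    for k in range(n + 1):
        for S in combinations(range(n), k):
            closed = set(S).union(*(adj[v] for v in S)) if S else set()
            rest = set(range(n)) - closed
            if all(len(adj[v] & rest) <= 1 for v in rest):
                return k

# The fan on 5 vertices (path 1-2-3-4 plus apex 0) is a mop with n = 5,
# so the theorem gives iota_1 <= n/5 = 1.
fan = [(1, 2), (2, 3), (3, 4), (0, 1), (0, 2), (0, 3), (0, 4)]
print(iota1(5, fan))  # 1
```

Here the single apex vertex dominates everything, matching the $n/5$ bound exactly.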
|
In this paper, the linear complexity over $\mathbf{GF}(r)$ of generalized
cyclotomic quaternary sequences with period $2pq$ is determined, where $ r $ is
an odd prime such that $r \ge 5$ and $r\notin \lbrace p,q\rbrace$. The minimal
value of the linear complexity is equal to $\tfrac{5pq+p+q+1}{4}$, which is
greater than half of the period $2pq$; by the Berlekamp-Massey algorithm,
these sequences are therefore good enough for use in cryptography. We also
show that if the characteristic $r$ of the extension field
$\mathbf{GF}(r^{m})$ is chosen so that $\bigl(\tfrac{r}{p}\bigr) =
\bigl(\tfrac{r}{q}\bigr) = -1$, $r\nmid 3pq-1$, and $r\nmid 2pq-4$, then the
linear complexity can reach the maximal value equal to the length of the
sequences.
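The Berlekamp-Massey algorithm invoked above computes the linear complexity of a sequence over $\mathbf{GF}(p)$; a standard implementation (an illustrative sketch, not tied to the paper's quaternary sequences) is:

```python
# Berlekamp-Massey over GF(p), p an odd prime: returns the linear
# complexity, i.e. the length of the shortest LFSR generating s.
def linear_complexity(s, p):
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        # discrepancy between s[n] and the current recurrence's prediction
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p   # d / b in GF(p) via Fermat inverse
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L

# Fibonacci mod 5 satisfies s_n = s_{n-1} + s_{n-2}: linear complexity 2.
fib = [1, 1, 2, 3, 0, 3, 3, 1, 4, 0]
print(linear_complexity(fib, 5))  # 2
```

A linear complexity exceeding half the period, as in the abstract, means no short LFSR reproduces the sequence, which is the relevant cryptographic criterion.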
|
This paper presents an alternative approach to the dynamic modeling of a
rotational inverted pendulum using classical Euler-Lagrange mechanics, which
yields the equations of motion that describe the model. A basic model of the
system was also designed in the SolidWorks software, which, based on the
material and dimensions of the model, provides some of the physical variables
needed for the modeling. To verify the theoretical results, the solutions
obtained by SimMechanics-Matlab simulation were contrasted with those of the
Euler-Lagrange system of equations, solved with the ODE23tb method included
in the Matlab libraries for solving equation systems of the obtained type and
order. The article also includes an analysis of the pendulum trajectory via a
phase-space diagram that allows the identification of the stable and unstable
regions of the system.
|
Supersymmetry implies that stable non-topological solitons, Q-balls, could
form in the early universe and could make up all or part of dark matter. We
show that the relic Q-balls passing through Earth can produce a detectable
neutrino flux. The peculiar zenith angle dependence and a small annual
modulation of this flux can be used as signatures of dark-matter Q-balls.
|
Time series with large discontinuities are common in many scenarios. However,
existing distance-based algorithms (e.g., DTW and its derivative algorithms)
may perform poorly in measuring distances between these time series pairs. In
this paper, we propose the segmented pairwise distance (SPD) algorithm to
measure distances between time series with large discontinuities. SPD is
orthogonal to distance-based algorithms and can be embedded in them. We
validate the advantages of SPD-embedded algorithms over the corresponding
distance-based ones on both open datasets and a proprietary dataset of surgical
time series (of surgeons performing a temporal bone surgery in a virtual
reality surgery simulator). Experimental results demonstrate that SPD-embedded
algorithms outperform corresponding distance-based ones in distance measurement
between time series with large discontinuities, measured by the Silhouette
index (SI).
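The segment-then-match idea behind SPD can be sketched as follows (illustrative only; the paper's exact segmentation and pairing rules are not reproduced): split each series at large discontinuities, then accumulate a DTW distance over the segment pairs instead of warping across the jumps.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1D sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[len(a), len(b)]

def segments(x, jump):
    """Split x wherever consecutive samples differ by more than `jump`."""
    cuts = [0] + [i + 1 for i in range(len(x) - 1)
                  if abs(x[i + 1] - x[i]) > jump] + [len(x)]
    return [x[s:e] for s, e in zip(cuts, cuts[1:])]

def spd(x, y, jump):
    # pair segments in order -- a hedged simplification of the pairing step
    return sum(dtw(a, b) for a, b in zip(segments(x, jump), segments(y, jump)))

x = [0.0, 0.1, 0.2, 5.0, 5.2]
print(spd(x, x, jump=1.0))  # 0.0 -- identical series, two segments each
```

Segmenting first prevents the warping path from absorbing the discontinuity as if it were ordinary signal variation.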
|
We develop an inexact primal-dual first-order smoothing framework to solve a
class of non-bilinear saddle point problems with primal strong convexity.
Compared with existing methods, our framework yields a significant improvement
over the primal oracle complexity, while it has competitive dual oracle
complexity. In addition, we consider the situation where the primal-dual
coupling term has a large number of component functions. To efficiently handle
this situation, we develop a randomized version of our smoothing framework,
which allows the primal and dual sub-problems in each iteration to be inexactly
solved by randomized algorithms in expectation. The convergence of this
framework is analyzed both in expectation and with high probability. In terms
of the primal and dual oracle complexities, this framework significantly
improves over its deterministic counterpart. As an important application, we
adapt both frameworks for solving convex optimization problems with many
functional constraints. To obtain an $\varepsilon$-optimal and
$\varepsilon$-feasible solution, both frameworks achieve the best-known oracle
complexities.
|
Many diseases cause significant changes to the concentrations of small
molecules (aka metabolites) that appear in a person's biofluids, which means
such diseases can often be readily detected from a person's "metabolic
profile". This information can be extracted from a biofluid's NMR spectrum.
Today, this is often done manually by trained human experts, which means this
process is relatively slow, expensive and error-prone. This paper presents a
tool, Bayesil, that can quickly, accurately and autonomously produce a complex
biofluid's (e.g., serum or CSF) metabolic profile from a 1D 1H NMR spectrum.
This requires first performing several spectral processing steps then matching
the resulting spectrum against a reference compound library, which contains the
"signatures" of each relevant metabolite. Many of these steps are novel
algorithms and our matching step views spectral matching as an inference
problem within a probabilistic graphical model that rapidly approximates the
most probable metabolic profile. Our extensive studies on a diverse set of
complex mixtures, show that Bayesil can autonomously find the concentration of
all NMR-detectable metabolites accurately (~90% correct identification and ~10%
quantification error), in under 5 minutes on a single CPU. These results demonstrate
that Bayesil is the first fully-automatic publicly-accessible system that
provides quantitative NMR spectral profiling effectively -- with an accuracy
that meets or exceeds the performance of trained experts. We anticipate this
tool will usher in high-throughput metabolomics and enable a wealth of new
applications of NMR in clinical settings. Available at http://www.bayesil.ca.
|
Despite the successes in capturing continuous distributions, the application
of generative adversarial networks (GANs) to discrete settings, like natural
language tasks, is rather restricted. The fundamental reason is the difficulty
of back-propagation through discrete random variables combined with the
inherent instability of the GAN training objective. To address these problems,
we propose Maximum-Likelihood Augmented Discrete Generative Adversarial
Networks. Instead of directly optimizing the GAN objective, we derive a novel
and low-variance objective using the discriminator's output that corresponds
to the log-likelihood. Compared with the original, the new
objective is proved to be consistent in theory and beneficial in practice. The
experimental results on various discrete datasets demonstrate the effectiveness
of the proposed approach.
|
A Lorentz-noninvariant modification of the kinematic dispersion law was
proposed in [hep-th/0211237], claimed to be derivable from q-deformed
noncommutative theory, and argued to evade ultrahigh-energy threshold anomalies
(trans-GZK-cutoff cosmic rays and TeV photons) by raising the respective
thresholds. It is pointed out that such dispersion laws do not follow from
deformed oscillator systems, and the proposed dispersion law is invalidated by
tachyonic propagation, as well as photon instability, in addition to the
process considered.
|
Nested graphs have been used in different applications, for example to
represent knowledge in semantic networks. On the other hand, graphs with cycles
are important in surface reconstruction, periodic scheduling and network
analysis. Also of particular interest are cycle bases, which arise in
mathematical and algorithmic problems. In this work we develop the concept of
perfectly nested eulerian circuits, exploring some of their properties. The
main result establishes an order isomorphism between some sets of perfectly
nested circuits and equivalence classes over finite binary sequences.
|
We propose a dimensionality reduction method for infinite-dimensional
measure-valued evolution equations such as the Fokker-Planck partial
differential equation or the Kushner-Stratonovich resp. Duncan-Mortensen-Zakai
stochastic partial differential equations of nonlinear filtering, with
potential applications to signal processing, quantitative finance, heat flows
and quantum theory among many other areas. Our method is based on the
projection coming from a duality argument built in the exponential statistical
manifold structure developed by G. Pistone and co-authors. The choice of the
finite dimensional manifold on which one should project the infinite
dimensional equation is crucial, and we propose finite dimensional exponential
and mixture families. The same problem had been studied, especially in the
context of nonlinear filtering, by D. Brigo and co-authors, but there the $L^2$
structure on the space of square roots of densities, or of densities
themselves, was used, without an infinite dimensional manifold ambient space
for the equation to be projected. Here we re-examine such works from the
exponential statistical manifold point of view, which allows for a deeper
geometric understanding of the manifold structures at play. We also show that
the projection in the exponential manifold structure is consistent with the
Fisher-Rao metric and, in the case of finite dimensional exponential families, with
the assumed density approximation. Further, we show that if the sufficient
statistics of the finite dimensional exponential family are chosen among the
eigenfunctions of the backward diffusion operator then the statistical-manifold
or Fisher-Rao projection provides the maximum likelihood estimator for the
Fokker-Planck equation solution. Finally, we clarify how the finite
dimensional and infinite dimensional terminologies for exponential and mixture
spaces are related.
|
Federated learning (FL) typically faces data heterogeneity, i.e.,
distribution shift among clients. Sharing clients' information has shown
great potential in mitigating data heterogeneity, yet it incurs a dilemma
between preserving privacy and promoting model performance. To alleviate the dilemma,
we raise a fundamental question: \textit{Is it possible to share partial
features in the data to tackle data heterogeneity?} In this work, we give an
affirmative answer to this question by proposing a novel approach called
{\textbf{Fed}erated \textbf{Fe}ature \textbf{d}istillation} (FedFed).
Specifically, FedFed partitions data into performance-sensitive features (i.e.,
greatly contributing to model performance) and performance-robust features
(i.e., limitedly contributing to model performance). The performance-sensitive
features are globally shared to mitigate data heterogeneity, while the
performance-robust features are kept locally. FedFed enables clients to train
models over local and shared data. Comprehensive experiments demonstrate the
efficacy of FedFed in promoting model performance.
|
We investigate the transformed Hopf algebras in Hopf Galois extensions. The
final goal of this paper is to introduce certain triangular Hopf algebras
associated with restricted Frobenius Lie algebras over a field of
characteristic $p>0$.
|
Frege's definition of the real numbers, as envisaged in the second volume of
\textit{Grundgesetze der Arithmetik}, is fatally flawed by the inconsistency of
Frege's ill-fated \textit{Basic Law V}. We restate Frege's definition in a
consistent logical framework and investigate whether it can provide a logical
foundation of real analysis. Our conclusion will deem it doubtful that such a
foundation along the lines of Frege's own indications is possible at all.
|
Location data can be extremely useful to study commuting patterns and
disruptions, as well as to predict real-time traffic volumes. At the same time,
however, the fine-grained collection of user locations raises serious privacy
concerns, as this can reveal sensitive information about the users, such as
lifestyle, political and religious inclinations, or even identities. In this
paper, we study the feasibility of crowd-sourced mobility analytics over
aggregate location information: users periodically report their location, using
a privacy-preserving aggregation protocol, so that the server can only recover
aggregates -- i.e., how many, but not which, users are in a region at a given
time. We experiment with real-world mobility datasets obtained from the
Transport For London authority and the San Francisco Cabs network, and present
a novel methodology based on time series modeling that is geared to forecast
traffic volumes in regions of interest and to detect mobility anomalies in
them. In the presence of anomalies, we also make enhanced traffic volume
predictions by feeding our model with additional information from correlated
regions. Finally, we present and evaluate a mobile app prototype, called
Mobility Data Donors (MDD), in terms of computation, communication, and energy
overhead, demonstrating the real-world deployability of our techniques.
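The paper's own time series model is not reproduced here, but the idea of flagging mobility anomalies against a seasonal baseline of aggregate counts can be sketched as follows; the robust z-score rule, the `period` parameter, and the synthetic data are illustrative assumptions, not the authors' methodology:

```python
import numpy as np

def seasonal_anomalies(counts, period, threshold=3.0):
    """Flag time slots whose aggregate count deviates from the seasonal
    baseline (the same slot in earlier cycles) by more than `threshold`
    robust z-scores. A simple stand-in for the paper's forecasting model."""
    counts = np.asarray(counts, dtype=float)
    flags = np.zeros(counts.size, dtype=bool)
    for t in range(period, counts.size):
        history = counts[t % period:t:period]  # same slot, earlier cycles
        med = np.median(history)
        mad = np.median(np.abs(history - med)) or 1.0  # guard against MAD = 0
        flags[t] = abs(counts[t] - med) / (1.4826 * mad) > threshold
    return flags

# Synthetic 4-slot traffic pattern with a spike injected at t = 50
rng = np.random.default_rng(1)
base = np.tile([20, 35, 80, 60], 25) + rng.poisson(3, 100)
base[50] += 200
print(np.flatnonzero(seasonal_anomalies(base, period=4)))
```

The flagged indices include the injected spike at t = 50; noisy early slots with little history may also be flagged, which is why the paper's model enriches predictions with correlated regions.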
|
Over the past few decades, a consensus picture has emerged in which roughly a
quarter of the universe consists of dark matter. I begin with a review of the
observational evidence for the existence of dark matter: rotation curves of
galaxies, gravitational lensing measurements, hot gas in clusters, galaxy
formation, primordial nucleosynthesis and cosmic microwave background
observations. Then I discuss a number of anomalous signals in a variety of data
sets that may point to discovery, though all of them are controversial. The
annual modulation in the DAMA detector and/or the gamma-ray excess seen in the
Fermi Gamma Ray Space Telescope from the Galactic Center could be due to WIMPs;
a 3.5 keV X-ray line from multiple sources could be due to sterile neutrinos;
or the 511 keV line in INTEGRAL data could be due to MeV dark matter. All of
these would require further confirmation in other experiments or data sets to
be proven correct. In addition, a new line of research on dark stars is
presented, which suggests that the first stars to exist in the universe were
powered by dark matter heating rather than by fusion: the observational
possibility of discovering dark matter in this way is discussed.
|
Markov diagrams provide a way to understand the structures of topological
dynamical systems. We examine the construction of such diagrams for subshifts,
including some which do not have any nontrivial Markovian part, in particular
Sturmian systems and some substitution systems.
|
Sparseness is a useful regularizer for learning in a wide range of
applications, in particular in neural networks. This paper proposes a model
targeted at classification tasks, where sparse activity and sparse connectivity
are used to enhance classification capabilities. The tool for achieving this is
a sparseness-enforcing projection operator which finds the closest vector with
a pre-defined sparseness for any given vector. In the theoretical part of this
paper, a comprehensive theory for such a projection is developed. In
conclusion, it is shown that the projection is differentiable almost everywhere
and can thus be implemented as a smooth neuronal transfer function. The entire
model can hence be tuned end-to-end using gradient-based methods. Experiments
on the MNIST database of handwritten digits show that classification
performance can be boosted by sparse activity or sparse connectivity. With a
combination of both, performance can be significantly better compared to
classical non-sparse approaches.
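The exact projection operator is developed in the paper's theoretical part; as a rough illustration of the idea, the sketch below measures Hoyer-style sparseness (based on the L1/L2 norm ratio) and approximates a projection to a target sparseness by bisecting a soft-threshold level. The bisection scheme is an assumption made for illustration, not the paper's operator:

```python
import numpy as np

def hoyer_sparseness(x):
    """Hoyer-style sparseness in [0, 1]: 0 for a uniform vector,
    1 for a vector with a single nonzero entry."""
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def project_to_sparseness(x, target, iters=60):
    """Approximate projection: bisect a soft-threshold level until the
    shrunk vector reaches the target sparseness (sign pattern preserved).
    A simplification, not the paper's exact projection operator."""
    lo, hi = 0.0, np.abs(x).max()
    y = x.copy()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        y = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
        if hoyer_sparseness(y) < target:
            lo = t  # not sparse enough: shrink more
        else:
            hi = t  # too sparse: shrink less
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=256)
y = project_to_sparseness(x, 0.8)
print(round(float(hoyer_sparseness(x)), 2), round(float(hoyer_sparseness(y)), 2))
```

Because soft thresholding is continuous in the threshold, the bisection lands on a vector whose sparseness is essentially the target, while a dense Gaussian input starts near 0.2.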
|
The quantum orthogonal arrays define remarkable classes of multipartite
entangled states called $k$-uniform states whose every reductions to $k$
parties are maximally mixed. We present constructions of quantum orthogonal
arrays of strength 2 with levels of prime power, as well as some constructions
of strength 3. As a consequence, we give infinite classes of 2-uniform states
of $N$ systems with dimension of prime power $d\geq 2$ for arbitrary $N\geq 5$;
3-uniform states of $N$-qubit systems for arbitrary $N\geq 6$ and $N\neq
7,8,9,11$; 3-uniform states of $N$ systems with dimension of prime power $d\geq
7$ for arbitrary $N\geq 7$.
|
On September 10, 2017, Hurricane Irma made landfall in the Florida Keys and
caused significant damage. Informed by hydrodynamic storm surge and wave
modeling and post-storm satellite imagery, a rapid damage survey was soon
conducted for 1600+ residential buildings in Big Pine Key and Marathon. Damage
categorizations and statistical analysis reveal distinct factors governing
damage at these two locations. The distance from the coast is significant for
the damage in Big Pine Key, as severely damaged buildings were located near
narrow waterways connected to the ocean. Building type and size are critical in
Marathon, highlighted by the near-complete destruction of trailer communities
there. These observations raise issues of affordability and equity that need
consideration in damage recovery and rebuilding for resilience.
|
We propose a new look on triangulated categories, which is based on the
second Hochschild cohomology.
|
The so-called configurational entropy (CE) framework, since its proposal by
Gleiser and Stamatopoulos, has proved to be an efficient instrument for
studying nonlinear scalar field models featuring solutions with spatially
localized energy. In this work, we apply this physical quantity to
investigate the properties of degenerate Bloch branes. We show that it is
possible to construct a configurational entropy measure in functional space
from the field configurations, where a complete set of exact solutions for the
model studied displays both double and single-kink configurations. Our study
shows a rich internal structure of the configurations: the field configurations
undergo a quick phase transition, which is confirmed by the information
entropy. Furthermore, the Bloch configurational entropy is employed to
demonstrate a high degree of organisation in the structure of the
configurations of the system, indicating that there is a preferred ordering for
the solutions.
|
We present the results of an extensive observational study of the active
star-forming complex W51 that was observed in the J=2-1 transition of the 12CO
and 13CO molecules over a 1.25 deg x 1.00 deg region with the University of
Arizona Heinrich Hertz Submillimeter Telescope. We use a statistical
equilibrium code to estimate physical properties of the molecular gas. We
compare the molecular cloud morphology with the distribution of infrared (IR)
and radio continuum sources, and find associations between molecular clouds and
young stellar objects (YSOs) listed in Spitzer IR catalogs. The ratios of CO
lines associated with HII regions are different from the ratios outside the
active star-forming regions. We present evidence of star formation triggered by
the expansion of the HII regions and by cloud-cloud collisions. We estimate
that about 1% of the cloud mass is currently in YSOs.
|
IOTA is a distributed ledger technology that uses a Directed Acyclic Graph
(DAG) structure called the Tangle. It is known for its efficiency and is widely
used in the Internet of Things (IoT) environment. Tangle can be configured by
utilizing the tip selection process. Because light nodes have limited
resources, full nodes are asked to perform tip selection on their behalf.
However, in this paper, we demonstrate that tip selection can be exploited to
compromise users' privacy. An adversary full node can associate a transaction
with the identity of a light node by comparing the light node's request with
its ledger. We show that these types of attacks are not only viable in the
current IOTA environment but also in IOTA 2.0 and the privacy improvement being
studied. We also provide solutions to mitigate these attacks and propose ways
to enhance anonymity in the IOTA network while maintaining efficiency and
scalability.
|
We provide the quantum mechanical description of the excitation of long-range
surface plasmon polaritons (LRSPPs) on thin metallic strips. The excitation
process consists of an attenuated-reflection setup, where efficient
photon-to-LRSPP wavepacket-transfer is shown to be achievable. For calculating
the coupling, we derive the first quantization of LRSPPs in the polaritonic
regime. We study quantum statistics during propagation and characterize the
performance of photon-to-LRSPP quantum state transfer for single-photons,
photon-number states and photonic coherent superposition states.
|
We consider the Bayesian multiple hypothesis testing problem with independent
and identically distributed observations. The classical, Sanov's theorem-based
analysis of the error probability allows one to characterize the best
achievable error exponent. However, this analysis does not generalize to the
case where the true distributions of the hypotheses are not exactly known, but
only partially known via some nominal distributions. This problem has practical significance,
because the nominal distributions may be quantized versions of the true
distributions in a hardware implementation, or they may be estimates of the
true distributions obtained from labeled training sequences as in statistical
classification. In this paper, we develop a type-based analysis to investigate
the Bayesian multiple hypothesis testing problem. Our analysis allows one to
explicitly calculate the error exponent of a given type and extends the
classical analysis. As a generalization of the proposed method, we derive a
robust test and obtain its error exponent for the case where the hypothesis
distributions are not known but there exist nominal distributions that are
close to the true distributions in variational distance.
|
A new differential test for series of positive terms is proved. Let f(x) be a
positive continuous function corresponding to a series of positive terms f(k),
and let g(x) be the derivative of the reciprocal of f(x). Then the convergence
or divergence of the series may be determined from the value of f(x)g(x) for
sufficiently large x. The remaining cases admit a limit form of the test,
which is universal and complete.
|
The marriage of Quantum Physics and Information Technology, originally
motivated by the need for miniaturization, has recently opened the way to the
realization of radically new information-processing devices, with the
possibility of guaranteed secure cryptographic communications, and tremendous
speedups of some complex computational tasks. Among the many problems posed by
the new information technology there is the need of characterizing the new
quantum devices, making a complete identification and characterization of their
functioning. As we will see, quantum mechanics provides us with a powerful tool
to achieve this task easily and efficiently: this tool is the so-called quantum
entanglement, the basis of the quantum parallelism of future computers. We
present here the first full experimental quantum characterization of a
single-qubit device. The new method, which we may refer to as "quantum
radiography", uses a Pauli quantum tomography at the output of the device, and
needs only a single entangled state at the input, which acts on the test
channel as all possible input states in quantum parallel. The method can be
easily extended to any n-qubit device.
|
Let $G$ be a finite connected simple graph with $n$ vertices and $m$ edges.
We show that, when $G$ is not bipartite, the number of $4$-cycles contained in
$G$ is at most $\binom{m-n+1}{2}$. We further provide a short combinatorial
proof of the bound $\binom{m-n+2}{2}$ which holds for bipartite graphs.
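The bound can be checked numerically on small graphs. The sketch below counts 4-cycles via common-neighbour pairs (summing $\binom{\mathrm{codeg}(u,v)}{2}$ over vertex pairs counts each 4-cycle once per diagonal, i.e. twice) and compares against $\binom{m-n+1}{2}$, using $K_5$ as a non-bipartite example where the bound is attained:

```python
from itertools import combinations
from math import comb

def count_4_cycles(n, edges):
    """Count 4-cycles: summing C(codeg(u, v), 2) over vertex pairs counts
    each 4-cycle once per diagonal pair, i.e. exactly twice."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    twice = sum(comb(len(adj[u] & adj[v]), 2)
                for u, v in combinations(range(n), 2))
    return twice // 2

# K5 is non-bipartite with n = 5, m = 10, so the bound is C(6, 2) = 15
n = 5
edges = list(combinations(range(n), 2))
c4 = count_4_cycles(n, edges)
bound = comb(len(edges) - n + 1, 2)
print(c4, bound)  # 15 15: K5 attains the bound
```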
|
In symbolic computing, a major bottleneck is middle expression swell.
Symbolic geometric computing based on invariant algebras can alleviate this
difficulty. For example, the size of projective geometric computing based on
bracket algebra can often be restrained to two terms, using final polynomials,
area method, Cayley expansion, etc. This is the "binomial" feature of
projective geometric computing in the language of bracket algebra.
In this paper we report a stunning discovery in Euclidean geometric
computing: the term preservation phenomenon. Input an expression in the
language of Null Bracket Algebra (NBA), by the recipe we are to propose in this
paper, the computing procedure can often be controlled to within the same
number of terms as the input, through to the end. In particular, the
conclusions of most Euclidean geometric theorems can be expressed by monomials
in NBA, and the expression size in the proving procedure can often be
controlled to within one term! Euclidean geometric computing can now be
announced as having a "monomial" feature in the language of NBA.
The recipe is composed of three parts: use long geometric product to
represent and compute multiplicatively, use "BREEFS" to control the expression
size locally, and use Clifford factorization for term reduction and transition
from algebra to geometry.
At the time of writing, the recipe has been tested on 70+ examples from
\cite{chou}, among which 30+ have monomial proofs. Among those outside this
scope, the famous Miquel five-circle theorem \cite{chou2}, whose analytic proof
is conceptually straightforward but whose symbolic computation is very
difficult, is found to have an elegant 3-term proof with the recipe.
|
We have measured branching fractions of hadronic $\tau$ decays involving an
$\eta$ meson using 485 fb^{-1} of data collected with the Belle detector at the
KEKB asymmetric-energy e^+e^- collider. We obtain the following branching
fractions: ${\cal B}(\tau^-\to K^- \eta \nu_{\tau})=(1.62\pm 0.05 \pm
0.09)\times 10^{-4}$, ${\cal B}(\tau^-\to K^- \pi^0 \eta \nu_{\tau}) =(4.7\pm
1.1 \pm 0.4)\times 10^{-5}$, ${\cal B}(\tau^-\to \pi^- \pi^0 \eta
\nu_{\tau})=(1.39 \pm 0.03 \pm 0.07) \times 10^{-3}$, and ${\cal B}(\tau^-\to
K^{*-} \eta \nu_{\tau})=(1.13\pm 0.19 \pm 0.07)\times 10^{-4}$ improving the
accuracy compared to the best previous measurements by factors of six, eight,
four and four, respectively.
|
We construct toroidal partial compactifications of the moduli spaces of mixed
Hodge structures with polarized graded quotients. They are moduli spaces of log
mixed Hodge structures with polarized graded quotients. We construct them as
the spaces of nilpotent orbits.
|
A rather simple method is used to detect at once both optical absorption
spectra and excited-state nonradiative transitions in 0.03 at.% and 0.16 at.%
ruby at a temperature of 2 K. The technique utilizes time-resolved bolometer
detection of phonons generated by excited-state nonradiative relaxation
following optical excitation with a pulsed tunable dye laser. The observed
phonon excitation spectra coincide well with the already known absorption
spectra of both single chromium ions and pairs of nearest neighbors. For the
first time, the fast ($\leqslant 0.5\,\mu$s) resonant energy transfer from
single chromium ions to the fourth-nearest neighbors is observed directly.
New, strongly perturbed $Cr$ single-ion sites are also observed.
|
We study holographic RG flows in a 3d supergravity model from the side of the
dynamical system theory. The gravity equations of motion are reduced to an
autonomous dynamical system. Then we find equilibrium points of the system and
analyze them for stability. We also restore asymptotic solutions near the
critical points. We find two types of solutions: with asymptotically AdS
metrics and hyperscaling violating metrics. We write down possible RG flows
between an unstable (saddle) UV fixed point and a stable (stable node) IR fixed
point. We also analyze bifurcations in the model.
|
We study a Bayesian approach to estimating a smooth function in the context
of regression or classification problems on large graphs. We derive theoretical
results that show how asymptotically optimal Bayesian regularization can be
achieved under an asymptotic shape assumption on the underlying graph and a
smoothness condition on the target function, both formulated in terms of the
graph Laplacian. The priors we study are randomly scaled Gaussians with
precision operators involving the Laplacian of the graph.
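As a toy sketch of priors of this kind, one can draw from a Gaussian whose precision operator is a power of the graph Laplacian. The path graph, the fixed scaling, and the parameters $\tau$ and $\alpha$ below are illustrative assumptions; the priors studied in the paper are randomly scaled:

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial graph Laplacian L = D - A of the path graph on n vertices."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def laplacian_prior_sample(n, alpha=2.0, tau=1e-2, seed=0):
    """Draw f ~ N(0, (L + tau*I)^(-alpha)); larger alpha penalizes rough
    eigendirections more strongly, so samples vary more smoothly over the graph."""
    w, U = np.linalg.eigh(path_laplacian(n) + tau * np.eye(n))
    rng = np.random.default_rng(seed)
    return U @ (w ** (-alpha / 2) * rng.normal(size=n))

f_smooth = laplacian_prior_sample(200, alpha=3.0)
f_rough = laplacian_prior_sample(200, alpha=1.0)
# Compare normalized increment sizes of the two draws
print(np.std(np.diff(f_smooth)) / np.std(f_smooth),
      np.std(np.diff(f_rough)) / np.std(f_rough))
```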
|
We study the one parameter family of Fredholm determinants $\det(I-\gamma
K_{\textnormal{csin}}),\gamma\in\mathbb{R}$ of an integrable Fredholm operator
$K_{\textnormal{csin}}$ acting on the interval $(-s,s)$ whose kernel is a cubic
generalization of the sine kernel which appears in random matrix theory. This
Fredholm determinant appears in the description of the Fermi distribution of
semiclassical non-equilibrium Fermi states in condensed matter physics as well
as in random matrix theory. Using the Riemann-Hilbert method, we calculate the
large $s$-asymptotics of $\det(I-\gamma K_{\textnormal{csin}})$ for all values
of the real parameter $\gamma$.
|
Next generation galaxy surveys demand the development of massive ensembles of
galaxy mocks to model the observables and their covariances, which is
computationally prohibitive using $N$-body simulations. COLA is a novel method
designed to make this feasible by following an approximate dynamics, with up
to 3 orders of magnitude speed-ups when compared to an exact $N$-body. In this
paper we investigate the optimization of the code parameters in the compromise
between computational cost and recovered accuracy in observables such as
two-point clustering and halo abundance. We benchmark those observables with a
state-of-the-art $N$-body run, the MICE Grand Challenge simulation (MICE-GC).
We find that using 40 time steps linearly spaced since $z_i \sim 20$, and a
force mesh resolution three times finer than that of the number of particles,
yields a matter power spectrum within $1\%$ for $k \lesssim 1\,h {\rm
Mpc}^{-1}$ and a halo mass function within $5\%$ of those in the $N$-body. In
turn the halo bias is accurate within $2\%$ for $k \lesssim 0.7\,h {\rm
Mpc}^{-1}$ whereas, in redshift space, the halo monopole and quadrupole are
within $4\%$ for $k \lesssim 0.4\,h {\rm Mpc}^{-1}$. These results hold for a
broad range in redshift ($0 < z < 1$) and for all halo mass bins investigated
($M > 10^{12.5} \, h^{-1} \, {\rm M_{\odot}}$). To bring accuracy in clustering
to one percent level we study various methods that re-calibrate halo masses
and/or velocities. We thus propose an optimized choice of COLA code parameters
as a powerful tool to optimally exploit future galaxy surveys.
|
Due to the growing concerns for sustainable development, supply chains seek
to invest in social sustainability issues to seize more market share in today's
competitive business environment. This study aims to develop a coordination
scheme for a manufacturer-retailer supply chain (SC) contributing to social
donation (SD) activity under a cause-related marketing (CRM) campaign. In the
presence of consumer social awareness (CSA), the manufacturer informs
consumers, through activities such as labelling, that it participates in a CRM
campaign by donating a proportion of the retail price to a cause whenever a
consumer makes a purchase. In this study, the market demand depends on the retail price,
the retailer's stock level and donation size. The proposed problem is designed
under three decision-making systems. Firstly, a decentralized decision-making
system (traditional structure), where the SC's members aim to optimize their
profits regardless of the other member's profitability, is investigated. Then,
the problem is designed under a centralized decision-making system to obtain
the best values of the retail price and replenishment decisions from the entire
SC perspective. Afterwards, an incentive mechanism based on a revenue and
cost-sharing (RCS) factor is developed in the coordination system to
persuade the SC members to accept the optimal results of the centralized system
without suffering any profit loss. Moreover, the surplus profit obtained in the
centralized system is divided between the members based on their bargaining
power. The numerical investigations and the blocked decision-making on SD
activity are presented to evaluate the proposed model. Not only does the
proposed coordination model increase the SC members' profit, but it is also
desirable in achieving a more socially responsible SC.
|
We present a measurement of the ratio of t-tbar production cross section via
gluon-gluon fusion to the total t-tbar production cross section in p-pbar
collisions at sqrt{s}=1.96 TeV at the Tevatron. Using a data sample with an
integrated luminosity of 955/pb recorded by the CDF II detector at Fermilab, we
select events based on the t-tbar decay to lepton+jets. Using an artificial
neural network technique we discriminate between t-tbar events produced via
q-qbar annihilation and gluon-gluon fusion, and find
Cf=(gg->ttbar)/(pp->ttbar)<0.33 at the 68% confidence level. This result is
combined with a previous measurement to obtain the most precise measurement of
this quantity, Cf=0.07+0.15-0.07.
|
In this paper, we present a stochastic queuing model for the road traffic,
which captures the stationary density-flow relationships in both uncongested
and congestion conditions. The proposed model is based on the $M/g/c/c$ state
dependent queuing model of Jain and Smith, and is inspired from the
deterministic Godunov scheme for the road traffic simulation. We first propose
a reformulation of the $M/g/c/c$ state dependent model that works with
density-flow fundamental diagrams rather than density-speed relationships. We
then extend this model in order to consider upstream traffic demand as well as
downstream traffic supply. Finally, we calculate the speed and travel time
distributions for the $M/g/c/c$ state dependent queuing model and for the
proposed model, and derive stationary performance measures (expected number of
cars, blocking probability, expected travel time, and throughput). A comparison
with results predicted by the $M/g/c/c$ state dependent queuing model shows
that the proposed model correctly represents the dynamics of traffic and
yields accurate performance measures.
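A minimal sketch of the stationary analysis behind such models, assuming the linear speed-occupancy law of the Jain-Smith $M/G/c/c$ state-dependent queue (the parameter values below are illustrative, not taken from the paper):

```python
from math import factorial

def mgcc_performance(lam, c, length, v1):
    """M/G/c/c state-dependent queue with linearly decreasing speeds
    V_n = v1 * (1 - (n - 1) / c).  Returns (blocking probability,
    throughput, expected number of cars, expected travel time)."""
    t1 = length / v1  # free-flow travel time
    # Unnormalized p_n proportional to (lam*t1)^n / (n! * prod_{i<=n} V_i/v1)
    weights, prod_f = [1.0], 1.0
    for n in range(1, c + 1):
        prod_f *= 1.0 - (n - 1) / c
        weights.append((lam * t1) ** n / (factorial(n) * prod_f))
    z = sum(weights)
    p = [w / z for w in weights]
    blocking = p[c]                  # an arriving car finds the link full
    throughput = lam * (1.0 - blocking)
    mean_n = sum(n * pn for n, pn in enumerate(p))
    mean_t = mean_n / throughput     # Little's law
    return blocking, throughput, mean_n, mean_t

b, th, en, et = mgcc_performance(lam=0.5, c=40, length=200.0, v1=15.0)
print(f"blocking={b:.3g} throughput={th:.3f} E[N]={en:.2f} E[T]={et:.2f}")
```

Raising the arrival rate `lam` toward and beyond capacity increases the blocking probability and the expected travel time, reproducing the congested branch of the density-flow relationship.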
|
Using Wilf-Zeilberger algorithmic proof theory, we continue pioneering work
of Meni Rosenfeld (followed up by interesting work by Cyril Grunspan and
Ricardo Perez-Marco) and study the probability and duration of successful
bitcoin attacks, but using an equivalent, and much more congenial, formulation
as a certain two-phase soccer match.
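For reference, the classical Poisson-approximation formula for the attacker's success probability, from the Bitcoin whitepaper, can be computed directly; Rosenfeld's analysis refines it by replacing the Poisson count with the exact negative-binomial block race:

```python
from math import exp, factorial

def attacker_success(q, z):
    """Probability that a double-spender controlling a fraction q of the
    hash rate ever overtakes the honest chain after z confirmations
    (Poisson approximation from the Bitcoin whitepaper)."""
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

for q in (0.1, 0.3):
    print(q, [round(attacker_success(q, z), 7) for z in (1, 2, 4, 6)])
```

With q = 0.1 this reproduces the whitepaper's table, e.g. a success probability of about 0.0002428 at z = 6 confirmations.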
|
Quantitative phase imaging (QPI) is a label-free technique that provides
optical path length information for transparent specimens, finding utility in
biology, materials science, and engineering. Here, we present quantitative
phase imaging of a 3D stack of phase-only objects using a
wavelength-multiplexed diffractive optical processor. Utilizing multiple
spatially engineered diffractive layers trained through deep learning, this
diffractive processor can transform the phase distributions of multiple 2D
objects at various axial positions into intensity patterns, each encoded at a
unique wavelength channel. These wavelength-multiplexed patterns are projected
onto a single field-of-view (FOV) at the output plane of the diffractive
processor, enabling the capture of quantitative phase distributions of input
objects located at different axial planes using an intensity-only image sensor.
Based on numerical simulations, we show that our diffractive processor could
simultaneously achieve all-optical quantitative phase imaging across several
distinct axial planes at the input by scanning the illumination wavelength. A
proof-of-concept experiment with a 3D-fabricated diffractive processor further
validated our approach, showcasing successful imaging of two distinct phase
objects at different axial positions by scanning the illumination wavelength in
the terahertz spectrum. Diffractive network-based multiplane QPI designs can
open up new avenues for compact on-chip phase imaging and sensing devices.
|