We elaborate a theory for the modeling of concepts using the mathematical
structure of quantum mechanics. Concepts are represented by vectors in the
complex Hilbert space of quantum mechanics and membership weights of items are
modeled by quantum weights calculated following the quantum rules. We apply
this theory to model the disjunction of concepts and show that experimental
data of membership weights of items with respect to the disjunction of concepts
can be modeled accurately. It is the quantum effects of interference and
superposition, combined with an effect of context, that are at the origin of
the effects of overextension and underextension observed as deviations from a
classical use of the disjunction. We put forward a graphical explanation of the
effects of overextension and underextension by interpreting the quantum model
applied to the modeling of the disjunction of concepts.
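To make the interference mechanism concrete, the following is a minimal sketch of a commonly used form of such quantum membership models; the specific formula below is an assumption for illustration and not necessarily the paper's exact model.

```python
import math

def disjunction_weight(mu_a, mu_b, theta):
    """Membership weight of "A or B": the classical average of the two
    membership weights plus a quantum interference term. The form of the
    interference term is an assumption for illustration."""
    return (mu_a + mu_b) / 2 + math.sqrt(mu_a * mu_b) * math.cos(theta)

# cos(theta) > 0 raises the weight above the classical average,
# cos(theta) < 0 lowers it below both members (underextension-like behaviour).
```

Varying the interference angle `theta` reproduces deviations from the classical disjunction rule in either direction.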
|
We consider the problem of stochastic monotone submodular function
maximization, subject to constraints. We give results on adaptivity gaps, and
on the gap between the optimal offline and online solutions. We present a
procedure that transforms a decision tree (adaptive algorithm) into a
non-adaptive chain. We prove that this chain achieves at least ${\tau}$ times
the utility of the decision tree, over a product distribution and binary state
space, where ${\tau} = \min_{i,j} \Pr[x_i=j]$. This proves an adaptivity gap of
$1/{\tau}$ (which is $2$ in the case of a uniform distribution) for the problem
of stochastic monotone submodular maximization subject to state-independent
constraints. For a cardinality constraint, we prove that a simple adaptive
greedy algorithm achieves an approximation factor of $(1-1/e^{\tau})$ with
respect to the optimal offline solution; previously, it has been proven that
the algorithm achieves an approximation factor of $(1-1/e)$ with respect to the
optimal adaptive online solution. Finally, we show that there exists a
non-adaptive solution for the stochastic max coverage problem that is within a
factor $(1-1/e)$ of the optimal adaptive solution and within a factor of
${\tau}(1-1/e)$ of the optimal offline solution.
|
We propose a formalism to obtain the electroweak sphaleron, which is one of
the static classical solutions, using the gradient flow method. By adding a
modification term to the gradient flow equation, we can obtain the sphaleron
configuration as a stable fixed point of the flow at large flow time.
Applying the method to the $SU(2)$-Higgs model (the Weinberg angle $\theta_W$
is $0$) in four dimensions, we obtain a sphaleron configuration whose energy
numerically coincides with that found in previous studies. We also show that the
Chern-Simons number of the solution takes a half-integer value.
|
We prove that any reductive group G over a non-Archimedean local field has a
cuspidal complex representation.
|
Let $\Omega$ be a pseudoconvex domain in $\mathbb C^n$ satisfying an
$f$-property for some function $f$. We show that the Bergman metric associated
to $\Omega$ has the lower bound $\tilde g(\delta_\Omega(z)^{-1})$ where
$\delta_\Omega(z)$ is the distance from $z$ to the boundary $\partial\Omega$
and $\tilde g$ is a specific function defined by $f$. This refines
Khanh-Zampieri's work in \cite{KZ12} by reducing the smoothness assumption on
the boundary.
|
A systematic investigation is presented of grain boundaries and grain
boundary networks in two dimensional flexible membranes with crystalline order.
An isolated grain boundary undergoes a buckling transition at a critical value
of the elastic constants, but, contrary to previous findings, the shape of the
buckled membrane is shown to be asymptotically flat. This is in general not
true in the case of a network of intersecting grain boundaries, where each
intersection behaves as the source of a long range stress field. Unlike the
case of isolated dislocations or disclinations, the energy associated with
these stresses is finite; they can, however, destabilize the flat phase. The
buckled phase can be modeled by an Ising spin-glass with long range
antiferromagnetic interactions. These findings may be relevant to the
understanding of the wrinkling transition in partially polymerized vesicles
reported in recent experiments.
|
This note is devoted to a small, but essential, extension of Theorem 2.1 of
our recent paper [6]. The improvement is explained in Section 1 and proved in
Section 2. The importance of the extension is demonstrated in Section 3 with an
application to the Navier-Stokes system in critical $L_q$-spaces.
|
The identification of states and parameters from noisy measurements of a
dynamical system is of great practical significance and has received a lot of
attention. Classically, this problem is expressed as optimization over a class
of models. This work presents such a method, where we augment the system in
such a way that there is no distinction between parameter and state
reconstruction. We pose the resulting problem as a batch problem: given the
model, reconstruct the state from a finite sequence of output measurements. In
the case that the model is linear, we derive an analytical expression for the state
reconstruction given the model and the output measurements. Importantly, we
estimate the state trajectory in its entirety and do not aim to estimate just
an initial condition: that is, we use more degrees of freedom than strictly
necessary in the optimization step. This particular approach can be
reinterpreted as training of a neural network that estimates the state
trajectory from available measurements. The technology associated with neural
network optimization/training allows an easy extension to nonlinear models. The
proposed framework is relatively easy to implement, does not depend on an
informed initial guess, and provides an estimate for the state trajectory
(which incorporates an estimate for the unknown parameters) over a given finite
time horizon.
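As a minimal sketch of the linear batch reconstruction described above (assuming dynamics $x_{t+1} = A x_t$ with outputs $y_t = C x_t$; the abstract does not fix a specific model, so this formulation is illustrative), one can stack the entire trajectory as decision variables and solve a single least-squares problem:

```python
import numpy as np

def reconstruct_trajectory(A, C, ys):
    """Least-squares estimate of the full state trajectory x_0..x_{T-1}
    from outputs y_t = C x_t under dynamics x_{t+1} = A x_t. The whole
    trajectory is estimated at once, i.e. with more degrees of freedom
    than estimating only an initial condition."""
    n, T, p = A.shape[0], len(ys), C.shape[0]
    rows, rhs = [], []
    for t, y in enumerate(ys):            # measurement equations: C x_t = y_t
        M = np.zeros((p, T * n))
        M[:, t * n:(t + 1) * n] = C
        rows.append(M)
        rhs.append(np.atleast_1d(y))
    for t in range(T - 1):                # dynamics equations: x_{t+1} - A x_t = 0
        M = np.zeros((n, T * n))
        M[:, t * n:(t + 1) * n] = -A
        M[:, (t + 1) * n:(t + 2) * n] = np.eye(n)
        rows.append(M)
        rhs.append(np.zeros(n))
    X, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return X.reshape(T, n)

# Demo on an observable two-state system with exact measurements.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x, ys = np.array([0.0, 1.0]), []
for _ in range(4):
    ys.append(C @ x)
    x = A @ x
X = reconstruct_trajectory(A, C, ys)
```

With noisy measurements the same least-squares problem trades off fidelity to the outputs against consistency with the dynamics.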
|
We have investigated the long-term X-ray variability, defined as the
root-mean-square (rms) of the ASM RXTE light curves, of a set of galactic
Be/X-ray binaries and searched for correlations with system parameters, such as
the spin period of the neutron star and the orbital period and eccentricity of
the binary. We find that the systems with larger rms are those harbouring
fast-rotating neutron stars in narrow, low-eccentricity orbits. These relationships
can be explained as the result of the truncation of the circumstellar disc. We
also present an updated version of the H$\alpha$ equivalent width versus
orbital period diagram, including sources in the SMC. This diagram provides
strong observational evidence of the interaction of the neutron star with the
circumstellar envelope of its massive companion.
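The rms variability statistic used above can be sketched as follows (this uses one common normalisation, the fractional rms; the paper's exact convention may differ):

```python
import numpy as np

def fractional_rms(flux):
    """Long-term variability of a light curve expressed as the
    root-mean-square deviation as a fraction of the mean flux."""
    flux = np.asarray(flux, dtype=float)
    return np.std(flux) / np.mean(flux)
```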
|
It is argued that the traditional "realist" methodology of physics, according
to which human concepts, laws and theories can grasp the essence of reality, is
incompatible with the most fruitful interpretation of quantum formalism. The
proof rests on the violation by quantum mechanics of the foundational
principles of that methodology. An alternative methodology, in which the
construction of sciences finishes at the level of human experience, as standard
quantum theory strongly suggests, is then conjectured.
|
Algorithmic decision-making in societal contexts, such as retail pricing,
loan administration, recommendations on online platforms, etc., often involves
experimentation with decisions for the sake of learning, which results in
perceptions of unfairness among people impacted by these decisions. It is hence
necessary to embed appropriate notions of fairness in such decision-making
processes. The goal of this paper is to highlight the rich interface between
temporal notions of fairness and online decision-making through a novel
meta-objective of ensuring fairness at the time of decision. Given some
arbitrary comparative fairness notion for static decision-making (e.g.,
students should pay at most 90% of the general adult price), a corresponding
online decision-making algorithm satisfies fairness at the time of decision if
the said notion of fairness is satisfied for any entity receiving a decision in
comparison to all the past decisions. We show that this basic requirement
introduces new methodological challenges in online decision-making. We
illustrate the novel approaches necessary to address these challenges in the
context of stochastic convex optimization with bandit feedback under a
comparative fairness constraint that imposes lower bounds on the decisions
received by entities depending on the decisions received by everyone in the
past. The paper showcases novel research opportunities in online
decision-making stemming from temporal fairness concerns.
|
Surface assays, such as ELISA and immunofluorescence, are nothing short of
ubiquitous in biotechnology and medical diagnostics today. The development and
optimization of these assays generally focuses on three aspects: immobilization
chemistry, ligand-receptor interaction and concentrations of ligands, buffers
and sample. A fourth aspect, the transport of the analyte to the surface, is
more rarely delved into during assay design and analysis. Improving transport
is generally limited to the agitation of reagents, a mode of flow generation
inherently difficult to control, often resulting in inconsistent reaction
kinetics. However, with assay optimization reaching theoretical limits, the
role of transport becomes decisive. This perspective develops an intuitive and
practical understanding of transport in conventional agitation systems and in
microfluidics, the latter underpinning many new life science technologies. We
give rules of thumb to guide the user on system behavior, such as advection
regimes and shear stress, and derive estimates for relevant quantities that
delimit assay parameters. Illustrative cases with examples of experimental
results are used to clarify the role of fundamental concepts such as boundary
and depletion layers, mass diffusivity or surface tension.
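Back-of-the-envelope estimates of the kind discussed above can be sketched as follows (the numerical values in the comments are illustrative assumptions, not taken from the text):

```python
def diffusion_time(L, D):
    """Characteristic time (s) to diffuse over a length scale L (m)
    for a species with mass diffusivity D (m^2/s)."""
    return L ** 2 / D

def peclet(U, L, D):
    """Peclet number comparing advective to diffusive transport:
    Pe >> 1 means advection-dominated."""
    return U * L / D

# E.g. an antibody-sized molecule (D ~ 4e-11 m^2/s) crosses a 1 um depletion
# layer in ~0.025 s, but needs ~250 s to diffuse across a 100 um channel;
# a 100 um/s flow over 100 um gives Pe ~ 250, i.e. advection-dominated.
```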
|
Although high-resolution N-body simulations make robust empirical predictions
for the density distribution within cold dark matter halos, these studies have
yielded little physical insight into the origins of the distribution. We
investigate the problem using analytic and semi-analytic approaches. Simple
analytic considerations suggest that the inner slope of dark matter halos
cannot be steeper than alpha=2 (rho ~ r^-alpha), with alpha=1.5-1.7 being a
more realistic upper limit. Our analysis suggests that any number of effects,
both real (angular momentum from tidal torques, secondary perturbations) and
artificial (two-body interactions, the accuracy of the numerical integrator,
round-off errors), will result in shallower slopes. We also find that the halos
should exhibit a well-defined relation between r_peri/r_apo and j_theta/j_r. We
derive this relation analytically and speculate that it may be "universal".
Using a semi-analytic scheme based on Ryden & Gunn (1987), we further explore
the relationship between the specific angular momentum distribution in a halo
and its density profile. For now we restrict ourselves to halos that form
primarily via nearly-smooth accretion of matter, and only consider the specific
angular momentum generated by secondary perturbations associated with the cold
dark matter spectrum of density fluctuations. Compared to those formed in
N-body simulations, our ``semi-analytic'' halos are more extended, have flatter
rotation curves and have higher specific angular momentum, even though we have
not yet taken into account the effects of tidal torques. Whether the density
profiles of numerical halos are indeed the result of a loss of angular momentum
outside the central region, and whether this loss is a feature of hierarchical
merging, and of major mergers in particular, is under investigation.
|
Complications in preparing and preserving quantum correlations stimulate
recycling of a single quantum resource in information processing and
communication tasks multiple times. Here, we consider a scenario involving
multiple independent pairs of observers acting with unbiased inputs on a single
pair of spatially separated qubits sequentially. In this scenario, we address
whether more than one pair of observers can demonstrate quantum advantage in
some specific $2 \rightarrow 1$ and $3 \rightarrow 1$ random access codes.
Interestingly, we not only answer these questions in the affirmative, but also
illustrate that unbounded pairs can exhibit quantum advantage. Furthermore,
these results remain valid even when all observers perform suitable projective
measurements and an appropriate separable state is initially shared.
|
We study the VC-dimension of the set system on the vertex set of some graph
which is induced by the family of its $k$-connected subgraphs. In particular,
we give tight upper and lower bounds for the VC-dimension. Moreover, we show
that computing the VC-dimension is $\mathsf{NP}$-complete and that it remains
$\mathsf{NP}$-complete for split graphs and for some subclasses of planar
bipartite graphs in the cases $k = 1$ and $k = 2$. On the positive side, we
observe that it can be decided in linear time for graphs of bounded clique-width.
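For tiny instances, the set system and its VC-dimension can be computed by brute force; the sketch below handles the $k = 1$ case (connected subgraphs) and is purely illustrative, far from the efficient algorithms the abstract concerns:

```python
from itertools import combinations

def connected_sets(adj):
    """All vertex sets inducing a connected subgraph of the graph `adj`
    (adjacency-list dict). This is the k = 1 set system."""
    verts, family = list(adj), []
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            S = set(S)
            seen, stack = set(), [next(iter(S))]
            while stack:                      # BFS/DFS restricted to S
                v = stack.pop()
                if v in seen:
                    continue
                seen.add(v)
                stack.extend(w for w in adj[v] if w in S)
            if seen == S:
                family.append(frozenset(S))
    return family

def vc_dimension(family, universe):
    """Largest r such that some r-subset of the universe is shattered,
    i.e. all 2^r traces appear among intersections with the family."""
    best = 0
    for r in range(1, len(universe) + 1):
        for X in combinations(universe, r):
            traces = {frozenset(set(X) & S) for S in family}
            if len(traces) == 2 ** r:
                best = r
    return best
```

On the path 0-1-2, the connected sets are the six intervals, and the pair $\{0, 2\}$ is shattered while the whole vertex set is not, giving VC-dimension 2.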
|
It has long been known that the combinatorial properties of a graph $\Gamma$
are closely related to the group-theoretic properties of its right-angled Artin
group (RAAG). It is natural to ask whether graph homomorphisms are similarly
related to the group homomorphisms between two RAAGs. The main result of this
paper shows that there is a purely algebraic way to characterize the RAAGs
amongst groups, and the graph homomorphisms amongst the group homomorphisms. As
a corollary we present a new algorithm for recovering $\Gamma$ from its RAAG.
|
Numerous Machine Learning (ML) bias-related failures in recent years have led
to scrutiny of how companies incorporate aspects of transparency and
accountability in their ML lifecycles. Companies have a responsibility to
monitor ML processes for bias and mitigate any bias detected, ensure business
product integrity, preserve customer loyalty, and protect brand image.
Challenges specific to industry ML projects can be broadly categorized into
principled documentation, human oversight, and need for mechanisms that enable
information reuse and improve cost efficiency. We highlight specific roadblocks
and propose conceptual solutions on a per-category basis for ML practitioners
and organizational subject matter experts. Our systematic approach tackles
these challenges by integrating mechanized and human-in-the-loop components in
bias detection, mitigation, and documentation of projects at various stages of
the ML lifecycle. To motivate the implementation of our system -- SIFT (System
to Integrate Fairness Transparently) -- we present its structural primitives
with an example real-world use case on how it can be used to identify potential
biases and determine appropriate mitigation strategies in a participatory
manner.
|
We aim to devise feasible, efficient verification schemes for bosonic
channels. To this end, we construct an average-fidelity witness that yields a
tight lower bound for average fidelity plus a general framework for verifying
optimal quantum channels. For both multi-mode unitary Gaussian channels and
single-mode amplification channels, we present experimentally feasible
average-fidelity witnesses and reliable verification schemes, for which sample
complexity scales polynomially with respect to all channel specification
parameters. Our verification scheme provides an approach to benchmark the
performance of bosonic channels on a set of Gaussian-distributed coherent
states by employing only two-mode squeezed vacuum states and local homodyne
detections. Our results demonstrate how to perform feasible tests of quantum
components designed for continuous-variable quantum information processing.
|
We present the first study of parametric amplification in hydrogenated
amorphous silicon waveguides. Broadband on/off amplification up to 26.5~dB at
telecom wavelengths is reported. The measured nonlinear parameter is
770~$\textrm{W}^{-1} \textrm{m}^{-1}$, the nonlinear absorption is
28~$\textrm{W}^{-1} \textrm{m}^{-1}$, and the bandgap is $1.61$~eV.
|
We study ordinal embedding relaxations in the realm of parameterized
complexity. We prove the existence of a quadratic kernel for the {\sc
Betweenness} problem parameterized above its tight lower bound, which is stated
as follows. For a set $V$ of variables and set $\mathcal C$ of constraints
"$v_i$ \mbox{is between} $v_j$ \mbox{and} $v_k$", decide whether there is a
bijection from $V$ to the set $\{1,\ldots,|V|\}$ satisfying at least $|\mathcal
C|/3 + \kappa$ of the constraints in $\mathcal C$. Our result solves an open
problem attributed to Benny Chor in Niedermeier's monograph "Invitation to
Fixed-Parameter Algorithms." The betweenness problem is of interest in
molecular biology. An approach developed in this paper can be used to determine
parameterized complexity of a number of other optimization problems on
permutations parameterized above or below tight bounds.
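The {\sc Betweenness} objective can be made concrete with a small brute-force sketch (variable and instance names are hypothetical); a uniformly random bijection satisfies each constraint with probability $1/3$, which is why $|\mathcal C|/3$ is the tight lower bound being parameterized above:

```python
from itertools import permutations

def satisfied(pi, constraints):
    """Count constraints (vi, vj, vk), read "vi is between vj and vk",
    satisfied by the bijection pi: V -> {1, ..., |V|}."""
    return sum(1 for (i, j, k) in constraints
               if pi[j] < pi[i] < pi[k] or pi[k] < pi[i] < pi[j])

def best_bijection(V, constraints):
    """Maximum number of satisfiable constraints; brute force,
    only sensible for tiny instances."""
    return max(satisfied(dict(zip(V, perm)), constraints)
               for perm in permutations(range(1, len(V) + 1)))

V = ["a", "b", "c", "d"]
C = [("b", "a", "c"), ("c", "b", "d"), ("b", "a", "d")]
# Some bijection always meets the tight lower bound |C| / 3;
# this instance happens to be fully satisfiable.
```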
|
It has recently been shown that finding the optimal measurement on the
environment for stationary Linear Quadratic Gaussian control problems is a
semi-definite program. We apply this technique to the control of the
EPR-correlations between two bosonic modes interacting via a parametric
Hamiltonian at steady state. The optimal measurement turns out to be nonlocal
homodyne measurement -- the outputs of the two modes must be combined before
measurement. We also find the optimal local measurement and control technique.
This gives the same degree of entanglement but a higher degree of purity than
the local technique previously considered [S. Mancini, Phys. Rev. A {\bf 73},
010304(R) (2006)].
|
We present the results obtained in the development of scintillating Double
Beta Decay bolometers. Several Mo- and Cd-based crystals were tested with the
bolometric technique. The scintillation light was measured through a second
independent bolometer. A 140 g CdWO_4 crystal was run in a 417 h live time
measurement. Thanks to the scintillation light, the alpha background is easily
discriminated, resulting in zero counts above the 2615 keV gamma line of
Thallium 208. These results, combined with an extremely easy light detector
operation, represent the first tangible proof demonstrating the feasibility of
this kind of technique.
|
The electric resistivity is examined in the constrained Hilbert space of a
doped Mott insulator, which is dictated by a non-Ioffe-Larkin composition rule
due to the underlying mutual Chern-Simons topological gauge structure. In the
low-temperature pseudogap phase, where holons remain condensed while spinons
proliferate, the charge transport is governed by a chiral spinon excitation,
comprising a bosonic spin-$1/2$ at the core of a supercurrent vortex. It leads
to a vanishing resistivity with the "confinement" of the spinons in the
superconducting phase but a low-$T$ divergence of the resistivity once the
spinon confinement is disrupted by external magnetic fields. In the latter, the
chiral spinons will generate a Hall number $n_H =$ doping concentration
$\delta$ and a Nernst effect to signal an underlying long-range entanglement
between the charge and spin degrees of freedom. Their presence is further
reflected in thermodynamic quantities such as specific heat and spin
susceptibility. Finally, in the high-temperature spin-disordered phase, it is
shown that the holons exhibit a linear-$T$ resistivity by scattering with the
spinons acting as free local moments, which generate randomized gauge fluxes as
perceived by the charge degree of freedom.
|
A new method to study the long-range correlations in multiparticle production
is developed. It is proposed to study the joint factorial moments or cumulants
of multiplicity distributions in several (more than two) bins. It is shown that
this step dramatically increases the discriminating power of the data.
|
In this paper, we study the optimal multiple stopping problem under the
filtration consistent nonlinear expectations. The reward is given by a set of
random variables satisfying some appropriate assumptions rather than an RCLL
process. We first construct the optimal stopping time for the single stopping
problem, which is no longer given by the first hitting time of processes. We
then prove by induction that the value function of the multiple stopping
problem can be interpreted as the one for the single stopping problem
associated with a new reward family, which allows us to construct the optimal
multiple stopping times. If the reward family satisfies some strong regularity
conditions, we show that the reward family and the value functions can be
aggregated by some progressive processes. Hence, the optimal stopping times can
be represented as hitting times.
|
The goal of this paper is twofold: first, to show the equivalence between
certain problems in geometry, such as view-obstruction and billiard ball
motions, and the estimation of covering radii of lattice zonotopes; second, to
estimate upper bounds on said radii by virtue of the Flatness Theorem.
These problems are similar in nature to the famous lonely runner conjecture.
|
We theoretically investigate the spin-charge transport in a two-terminal device
of graphene nanoribbons in the presence of a uniform uniaxial strain,
spin-orbit coupling, exchange field and smooth staggered potential. We show
that the direction of applied strain can efficiently tune strain-strength
induced oscillation of band-gap of armchair graphene nanoribbon (AGNR). It is
also found that electronic conductance in both AGNR and zigzag graphene
nanoribbons (ZGNRs) oscillates with Rashba spin-orbit coupling akin to the
Datta-Das field effect transistor. Two distinct strain response regimes of
electronic conductance as function of spin-orbit couplings (SOC) magnitude are
found. In the regime of small strain, the conductance of ZGNRs shows a stronger
strain dependence along the longitudinal direction of the strain, whereas for
high values of strain the effect is larger in the transverse direction.
Furthermore, the local density of states (LDOS) shows that depending on the
smoothness of the staggered potential, the edge state of AGNR can either emerge
or be suppressed. These emerging states can be probed experimentally by
spatially resolved scanning tunneling microscopy or by scanning tunneling
spectroscopy. Our findings open up new paradigms for the manipulation and
control of strained graphene-based nanostructures for applications in novel
topological quantum devices.
|
In recent years, massive protostars have been suggested to be high-energy
emitters. Among the best candidates is IRAS 16547-4247, a protostar that
presents a powerful outflow with clear signatures of interaction with its
environment. This source has been revealed to be a potential high-energy source
because it displays non-thermal radio emission of synchrotron origin, which is
evidence of relativistic particles. To improve our understanding of IRAS
16547-4247 as a high-energy source, we analyzed XMM-Newton archival data and
found that IRAS 16547-4247 is a hard X-ray source. We discuss these results in
the context of a refined one-zone model and previous radio observations. From
our study we find that it may be difficult to explain the X-ray emission as
non-thermal radiation coming from the interaction region, but it might be
produced by thermal Bremsstrahlung (plus photo-electric absorption) by a fast
shock at the jet end. In the high-energy range, the source might be detectable
by the present generation of Cherenkov telescopes, and may eventually be
detected by Fermi in the GeV range.
|
We study the asymptotics of iterates of the transfer operator for
non-uniformly hyperbolic $\alpha$-Farey maps. We provide a family of
observables which are Riemann integrable, locally constant and of bounded
variation, and for which the iterates of the transfer operator, when applied to
one of these observables, are not asymptotic to a constant times the wandering
rate on the first element of the partition $\alpha$. Subsequently, sufficient
conditions on observables are given under which this expected asymptotic holds.
In particular, we obtain an extension theorem which establishes that, if the
asymptotic behaviour of iterates of the transfer operator is known on the first
element of the partition $\alpha$, then the same asymptotic holds on any
compact set bounded away from the indifferent fixed point.
|
We present the model of a quantum dot (QD) consisting of a spherical
core-bulk heterostructure made of 3D topological insulator (TI) materials, such
as PbTe/Pb$_{0.31}$Sn$_{0.69}$Te, with bound massless and helical Weyl states
existing at the interface and being confined in all three dimensions. The
number of bound states can be controlled by tuning the size of the QD and the
magnitude of the core and bulk energy gaps, which determine the confining
potential. We demonstrate that such bound Weyl states can be realized for QD
sizes of few nanometers. We identify the spin locking and the Kramers pairs,
both hallmarks of 3D TIs. In contrast to topologically trivial semiconductor
QDs, the confined massless Weyl states in 3D TI QDs are localized at the
interface of the QD and exhibit a mirror symmetry in the energy spectrum. We
find strict optical selection rules satisfied by both interband and intraband
transitions that depend on the polarization of electron-hole pairs and
therefore give rise to the Faraday effect due to Pauli exclusion principle. We
show that the semi-classical Faraday effect can be used to read out spin
quantum memory. When a 3D TI QD is embedded inside a cavity, the single-photon
Faraday rotation provides the possibility to implement optically mediated
quantum teleportation and quantum information processing with 3D TI QDs, where
the qubit is defined by either an electron-hole pair, a single electron spin,
or a single hole spin in a 3D TI QD. Remarkably, the combination of inter- and
intraband transition gives rise to a large dipole moment of up to 450 Debye.
Therefore, the strong-coupling regime can be reached for a cavity quality
factor of $Q\approx10^{4}$ in the infrared wavelength regime of around
$10\:\mu$m.
|
This article proposes a non-parametric classifier for tail classification
among discrete heavy-tailed distributions. Theoretical results in
this article show that certain distributional tail types can be identified
among a sequence of plots based on the tail profile. Such distributional tails
include power, sub-exponential, near-exponential, and exponential or thinner
decaying tails. The proposed method does not require hypothesized values of
the distribution parameters for classification. The method can be used practically as a
preliminary tool to narrow down possible parametric models for refined
statistical analysis with the unbiased estimators of the tail profile. In
addition, simulation studies suggest that the proposed classification method performs
well under various situations.
|
The Landau-Zener (LZ) transition of a two-level system coupled to spin chains
near their critical points is studied in this paper. Two kinds of spin chains,
the Ising spin chain and XY spin chain, are considered. We calculate and
analyze the effects of system-chain coupling on the LZ transition. A relation
between the LZ transition and the critical points of the spin chain is
established. These results suggest that LZ transitions may serve as
witnesses of the criticality of the spin chain. This may provide a new way to study
quantum phase transitions as well as LZ transitions.
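For reference, the textbook LZ formula for an isolated two-level system (without the chain coupling studied above; a baseline assumption, not the paper's result) can be computed directly:

```python
import math

def lz_diabatic_transition_probability(gap, sweep_rate, hbar=1.0):
    """Standard Landau-Zener formula: for H(t) = (v t / 2) sigma_z
    + (gap / 2) sigma_x with sweep rate v, the probability of a diabatic
    transition is exp(-pi * gap^2 / (2 * hbar * v))."""
    return math.exp(-math.pi * gap ** 2 / (2 * hbar * sweep_rate))

# Slower sweeps are more adiabatic: the diabatic transition is suppressed.
```

Coupling to a critical spin chain modifies this probability, which is what allows the transition to witness criticality.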
|
We prove that the group of area-preserving diffeomorphisms of the 2-sphere
admits a non-trivial homogeneous quasimorphism to the real numbers with the
following property. Its value on any diffeomorphism supported in a sufficiently
small open subset of the sphere equals the Calabi invariant of the
diffeomorphism. This result extends to more general symplectic manifolds: If
the symplectic manifold is monotone and its quantum homology algebra is
semi-simple we construct a similar quasimorphism on the universal cover of the
group of Hamiltonian diffeomorphisms.
|
Alzheimer's Disease (AD) is a severe brain disorder, destroying memories and
brain functions. AD causes chronic, progressive, and irreversible cognitive
decline and brain damage. The reliable and effective evaluation of early
dementia has become an essential research topic involving medical imaging
technologies and computer-aided algorithms. This trend has moved toward modern
Artificial Intelligence (AI) technologies, motivated by the success of deep learning in
image classification and natural language processing. The purpose of this
review is to provide an overview of the latest research involving deep-learning
algorithms in evaluating the process of dementia, diagnosing the early stage of
AD, and discussing an outlook for this research. This review introduces various
applications of modern AI algorithms in AD diagnosis, including Convolutional
Neural Network (CNN), Recurrent Neural Network (RNN), Automatic Image
Segmentation, Autoencoder, Graph CNN (GCN), Ensemble Learning, and Transfer
Learning. The advantages and disadvantages of the proposed methods and their
performance are discussed. The conclusion section summarizes the primary
contributions and medical imaging preprocessing techniques applied in the
reviewed research. Finally, we discuss the limitations and future outlooks.
|
We prove a generalized vanishing theorem for certain quasi-coherent sheaves
along the derived blow-ups of quasi-smooth derived Artin stacks. We give four
applications of the generalized vanishing theorem: we prove a $K$-theoretic
version of the generalized vanishing theorem, which verifies a conjecture of the
author, and give a new proof of the $K$-theoretic virtual localization theorem
for quasi-smooth derived schemes through the intrinsic blow-up theory of
Kiem-Li-Savvas; we prove a desingularization theorem for quasi-smooth derived
schemes and give an approximation formula for the virtual fundamental classes;
we give a resolution of the diagonal along the projection map of blow-ups of
smooth varieties, which strengthens the semi-orthogonal decomposition theorem
of Orlov; we illustrate the relation between the generalized vanishing theorem
and weak categorifications of quantum loop and toroidal algebra actions on the
derived category of Nakajima quiver varieties. We also propose several
conjectures related to birational geometry and the $L_{\infty}$-algebroid of
Calaque-C\u{a}ld\u{a}raru-Tu.
|
We consider a two-level system coupled to a highly non-Markovian environment
when the coupling axis rotates with time. The environment may be quantum (for
example a bosonic bath or a spin bath) or classical (such as classical noise).
We show that an Anderson orthogonality catastrophe suppresses transitions, so
that the system's instantaneous eigenstates (parallel and anti-parallel to the
coupling axis) can adiabatically follow the rotation. These states thereby
acquire Berry phases: geometric phases given by the area enclosed by the
coupling axis. Unlike in earlier proposals for environment-induced Berry
phases, here there is little decoherence, so one does not need a
decoherence-free subspace. Indeed we show that this Berry phase should be much
easier to observe than a conventional one, because it is not masked by either
the dynamic phase or the leading non-adiabatic phase. The effects that we
discuss should be observable in any qubit device where one can drive three
parameters in the Hamiltonian with strong man-made noise.
|
The flexibility level allowed in nursing care delivery and uncertainty in
infusion durations are very important factors to be considered during the
chemotherapy schedule generation task. The nursing care delivery scheme
employed in an outpatient chemotherapy clinic (OCC) determines the strictness
of the patient-to-nurse assignment policies, while the estimation of infusion
durations affects the trade-off between patient waiting time and nurse
overtime. We study the problem of daily scheduling of patients, assignment of
patients to nurses and chairs under uncertainty in infusion durations for an
OCC that functions according to any of the three commonly used nursing care
delivery models representing fully flexible, partially flexible, and inflexible
care models, respectively. We develop a two-stage stochastic mixed-integer
programming model that is valid for the three care delivery models to minimize
expected weighted cost of patient waiting time and nurse overtime. We propose
multiple variants of a scenario grouping-based decomposition algorithm to solve
the model using data of a major university oncology hospital. The variants of
the algorithm differ from each other according to the method used to group
scenarios. We compare input-based, solution-based and random scenario grouping
methods within the decomposition algorithm. We obtain near-optimal schedules
that are also significantly better than the schedules generated based on the
policy used in the clinic. We analyze the impact of nursing care flexibility to
determine whether a partially or a fully flexible delivery system is necessary
to adequately improve waiting time and overtime. We examine the sensitivity of the
performance measures to the cost coefficients and the number of nurses and
chairs. Finally, we provide an estimation of the value of stochastic solution.
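The scenario-grouping step of the decomposition can be sketched in a few lines. The following is a minimal illustration of one input-based grouping rule (placing scenarios with similar total infusion durations in the same subproblem); the function name and the specific rule are ours, not the paper's exact algorithm.

```python
def group_scenarios(scenarios, n_groups):
    """Input-based grouping sketch: order scenarios (lists of infusion
    durations) by their total duration and split them into contiguous
    groups, so similar scenarios are solved together in the
    decomposition. One simple rule; the paper compares several."""
    order = sorted(range(len(scenarios)), key=lambda i: sum(scenarios[i]))
    size = -(-len(order) // n_groups)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

Solution-based and random grouping would only change the sort key (or shuffle), which is why the variants are easy to compare within one framework.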
|
This paper is to design and optimize a non-orthogonal and noncoherent massive
multiple-input multiple-output (MIMO) framework towards enabling scalable
ultra-reliable low-latency communications (sURLLC) in wireless systems beyond
5G. In this framework, the huge diversity gain associated with the large-scale
antenna array in massive MIMO systems is leveraged to ensure ultrahigh
reliability. To reduce the overhead and latency induced by the channel
estimation process, we advocate the noncoherent communication technique which
does not need the knowledge of instantaneous channel state information (CSI)
but only depends on the large-scale fading coefficients for information
decoding. To boost the scalability of the system considered, we enable the
non-orthogonal channel access of multiple users by devising a new differential
modulation scheme to assure that each transmitted signal matrix can be uniquely
determined in the noise-free case and be reliably estimated in noisy cases when
the antenna array size is scaled up. The key idea is to make the transmitted
signals from multiple users be superimposed properly over the air such that
when the sum-signal is correctly detected, the signals sent by all users can be
uniquely determined. To further improve the average error performance when the
array antenna number is large, we propose a max-min Kullback-Leibler (KL)
divergence-based design by jointly optimizing the transmitted powers of all
users and the sub-constellation assignment among them. Simulation results show
that the proposed design significantly outperforms the existing max-min
Euclidean distance-based counterpart in terms of error performance. Moreover,
our proposed approach also has a better error performance than the conventional
coherent zero-forcing (ZF) receiver with orthogonal channel training,
particularly for cell-edge users.
|
A weighted recursive tree is an evolving tree in which vertices are assigned
random vertex-weights and new vertices connect to a predecessor with a
probability proportional to its weight. Here, we study the maximum degree and
near-maximum degrees in weighted recursive trees when the vertex-weights are
almost surely bounded and their distribution function satisfies a mild
regularity condition near zero. We are able to specify higher-order corrections
to the first order growth of the maximum degree established in prior work. The
accuracy of the results depends on the behaviour of the weight distribution
near the largest possible value and in certain cases we manage to find the
corrections up to random order. Additionally, we describe the tail distribution
of the maximum degree, the distribution of the number of vertices attaining the
maximum degree, and establish asymptotic normality of the number of vertices
with near-maximum degree. Our analysis extends the results proved for random
recursive trees (where the weights are constant) to the case of random weights.
The main technical result shows that the degrees of several uniformly chosen
vertices are asymptotically independent with explicit error corrections.
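The attachment rule described above is straightforward to simulate. The sketch below grows a weighted recursive tree with almost surely bounded vertex-weights and returns the degree sequence; the function name and the quadratic-time implementation are ours and purely illustrative.

```python
import random

def weighted_recursive_tree_degrees(n, weight_fn, rng):
    """Grow a weighted recursive tree: each new vertex i >= 1 attaches
    to a predecessor j < i chosen with probability proportional to j's
    random weight. Returns the child counts (degrees toward successors)."""
    weights = [weight_fn(rng)]
    deg = [0]
    for _ in range(1, n):
        total = sum(weights)           # O(n^2) overall; fine for a sketch
        r = rng.random() * total
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                deg[j] += 1
                break
        weights.append(weight_fn(rng))
        deg.append(0)
    return deg
```

With constant weights this reduces to the uniform random recursive tree, the special case whose maximum-degree results are extended here.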
|
We show that interacting bosons in a periodically-driven two dimensional (2D)
optical lattice may effectively exhibit fermionic statistics. The phenomenon is
similar to the celebrated Tonks-Girardeau regime in 1D. The Floquet band of a
driven lattice develops the moat shape, i.e. a minimum along a closed contour
in the Brillouin zone. Such degeneracy of the kinetic energy favors fermionic
quasiparticles. The statistical transmutation is achieved by the Chern-Simons
flux attachment similar to the fractional quantum Hall case. We show that the
velocity distribution of the released bosons is a sensitive probe of the
fermionic nature of their stationary Floquet state.
|
The bias-variance tradeoff tells us that as model complexity increases, bias
falls and variance increases, leading to a U-shaped test error curve. However,
recent empirical results with over-parameterized neural networks are marked by
a striking absence of the classic U-shaped test error curve: test error keeps
decreasing in wider networks. This suggests that there might not be a
bias-variance tradeoff in neural networks with respect to network width,
contrary to what was originally claimed by, e.g., Geman et al. (1992).
Motivated by the shaky
evidence used to support this claim in neural networks, we measure bias and
variance in the modern setting. We find that both bias and variance can
decrease as the number of parameters grows. To better understand this, we
introduce a new decomposition of the variance to disentangle the effects of
optimization and data sampling. We also provide theoretical analysis in a
simplified setting that is consistent with our empirical findings.
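Measuring bias and variance as described above amounts to averaging a fixed-complexity estimator over many independent training sets. The sketch below uses the classical squared-error decomposition; the polynomial model and data-generating process are illustrative choices of ours, not the networks studied in the paper.

```python
import numpy as np

def bias_variance(predictions, true_values):
    # Classical squared-error decomposition: average predictions over
    # training sets, then measure squared bias and variance per test point.
    mean_pred = predictions.mean(axis=0)
    bias_sq = float(np.mean((mean_pred - true_values) ** 2))
    variance = float(np.mean(predictions.var(axis=0)))
    return bias_sq, variance

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * x)
x_test = np.linspace(0.0, 3.0, 50)

preds = []
for _ in range(200):                      # 200 independent training sets
    x = rng.uniform(0.0, 3.0, 30)
    y = f(x) + rng.normal(0.0, 0.3, 30)
    coeffs = np.polyfit(x, y, deg=5)      # estimator of fixed complexity
    preds.append(np.polyval(coeffs, x_test))
bias_sq, variance = bias_variance(np.array(preds), f(x_test))
```

Sweeping the model complexity (here, the polynomial degree) and plotting both terms is exactly the experiment whose neural-network analogue motivates the paper.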
|
Carbon clusters have been generated by a novel technique of energetic heavy
ion bombardment of amorphous graphite. The evolution of clusters and their
subsequent fragmentation under continuing ion bombardment is revealed by
detecting various clusters in the energy spectra of the direct recoils emitted
as a result of collision between ions and the surface constituents.
|
We classify Freund-Rubin backgrounds of eleven-dimensional supergravity of
the form AdS_4 x X^7 which are at least half BPS; equivalently, smooth
quotients of the round 7-sphere by finite subgroups of SO(8) which admit an
(N>3)-dimensional subspace of Killing spinors. The classification is given in
terms of pairs consisting of an ADE subgroup of SU(2) and an automorphism
defining its embedding in SO(8). In particular we find novel half-BPS quotients
associated with the subgroups of type D_n (for n>5), E_7 and E_8 and their
outer automorphisms.
|
When can a unimodular random planar graph be drawn in the Euclidean or the
hyperbolic plane in a way that the distribution of the random drawing is
isometry-invariant? This question was answered for one-ended unimodular graphs
in \cite{benjamini2019invariant}, using the fact that such graphs automatically
have locally finite (simply connected) drawings into the plane. For the case of
graphs with multiple ends the question was left open. We revisit Halin's graph
theoretic characterization of graphs that have a locally finite embedding into
the plane. Then we prove that such unimodular random graphs do have a locally
finite invariant embedding into the Euclidean or the hyperbolic plane,
depending on whether the graph is amenable or not.
|
A theorem on subwavelength imaging with arrays of discrete sources is
formulated. This theorem is analogous to the Kotelnikov (also named
Nyquist-Shannon) sampling theorem as it represents the field at an arbitrary
point of space in terms of the same field taken at discrete points and imposes
similar limitations on the accuracy of the image. A physical realization of an
imaging system operating exactly on the resolution limit enforced by the
theorem is outlined.
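The discrete-source representation in the theorem parallels classical sinc interpolation. A one-dimensional sketch of reconstructing a field value at an arbitrary point from samples at discrete points follows; it is purely schematic, since the actual theorem concerns subwavelength imaging with source arrays.

```python
import math

def sinc(u):
    # Normalized sinc, the interpolation kernel of the sampling theorem.
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(samples, dx, x):
    """Kotelnikov/Nyquist-Shannon style reconstruction: the field at an
    arbitrary point x expressed through its values at discrete points
    n*dx (illustrative 1D analogue of the representation above)."""
    return sum(s * sinc(x / dx - n) for n, s in enumerate(samples))
```

At the sample points themselves the kernel reduces to a Kronecker delta, so the representation is exact there, mirroring the accuracy limitations the theorem imposes between samples.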
|
We are devoted to the study of a nonhomogeneous time-fractional Timoshenko
system with frictional and viscoelastic damping terms. We are concerned with
the well-posedness of the given problem. The approach relies on some
functional-analysis tools, operator theory, a priori estimates, and density
arguments.
|
In recent years, diffusion models have achieved remarkable success in various
domains of artificial intelligence, such as image synthesis, super-resolution,
and 3D molecule generation. However, the application of diffusion models in
graph learning has received relatively little attention. In this paper, we
address this gap by investigating the use of diffusion models for unsupervised
graph representation learning. We begin by identifying the anisotropic
structures of graphs and a crucial limitation of the vanilla forward diffusion
process in learning anisotropic structures. This process relies on continuously
adding isotropic Gaussian noise to the data, which may convert the
anisotropic signals to noise too quickly. This rapid conversion hampers the
training of denoising neural networks and impedes the acquisition of
semantically meaningful representations in the reverse process. To address this
challenge, we propose a new class of models called {\it directional diffusion
models}. These models incorporate data-dependent, anisotropic, and directional
noises in the forward diffusion process. To assess the efficacy of our proposed
models, we conduct extensive experiments on 12 publicly available datasets,
focusing on two distinct graph representation learning tasks. The experimental
results demonstrate the superiority of our models over state-of-the-art
baselines, indicating their effectiveness in capturing meaningful graph
representations. Our studies not only provide valuable insights into the
forward process of diffusion models but also highlight the wide-ranging
potential of these models for various graph-related tasks.
|
We study the phase diagram of two different mixed-dimensional
$t-J_z-J_{\perp}$-models on the square lattice, in which the hopping amplitude
$t$ is only nonzero along the $x$-direction. In the first, bosonic, model, the
spin exchange amplitude $J_{\perp}$ is negative and isotropic along the $x$ and
$y$ directions of the lattice, and $J_z$ is isotropic and positive. The
low-energy physics is characterized by spin-charge separation: the holes hop as
free fermions in an easy-plane ferromagnetic background. In the second model,
$J_{\perp}$ is restricted to the $x$-axis while $J_z$ remains isotropic and
positive. The model is agnostic to particle statistics, and shows stripe
patterns with anti-ferromagnetic N{\'e}el order at low temperature and high
hole concentrations, resembling the mixed-dimensional $t-J_z$ and $t-J$
models. At lower hole concentrations, a very strong first-order transition with
a hysteresis loop is seen, extending to a remarkably high hole doping of 14(1)%.
|
Different from existing MOT (Multi-Object Tracking) techniques that usually
aim at improving tracking accuracy and average FPS, real-time systems such as
autonomous vehicles necessitate new requirements of MOT under limited computing
resources: (R1) guarantee of timely execution and (R2) high tracking accuracy.
In this paper, we propose RT-MOT, a novel system design for multiple MOT tasks,
which addresses R1 and R2. Focusing on multiple choices of a workload pair of
detection and association, which are two main components of the
tracking-by-detection approach for MOT, we tailor a measure of object
confidence for RT-MOT and develop a method to estimate this measure for the next
frame of each MOT task. By utilizing the estimation, we make it possible to
predict tracking accuracy variation according to different workload pairs to be
applied to the next frame of an MOT task. Next, we develop a novel
confidence-aware real-time scheduling framework, which offers an offline timing
guarantee for a set of MOT tasks based on non-preemptive fixed-priority
scheduling with the smallest workload pair. At run-time, the framework checks
the feasibility of a priority inversion associated with a larger workload pair,
which does not compromise the timing guarantee of every task, and then chooses
a feasible scenario that yields the largest tracking accuracy improvement based
on the proposed prediction. Our experiment results demonstrate that RT-MOT
significantly improves overall tracking accuracy by up to 1.5x, compared to
existing popular tracking-by-detection approaches, while guaranteeing timely
execution of all MOT tasks.
|
We consider the winding number of planar stationary Gaussian processes
defined on the line. Under mild conditions, we obtain the asymptotic variance
and the Central Limit Theorem for the winding number as the time horizon tends
to infinity. In the asymptotic regime, our discrete approach is equivalent to
the continuous one studied previously in the literature and our main result
extends the existing ones. Our model allows for a general dependence of the
coordinates of the process and non-differentiability of one of them.
Furthermore, beyond our general framework, we consider as examples an
approximation to the winding number of a process whose coordinates are both
non-differentiable and the winding number of a process which is not exactly
stationary.
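For a discrete sample path, the winding number around the origin can be accumulated from signed turning angles between consecutive position vectors, in the spirit of the discrete approach referred to above; the implementation details here are ours.

```python
import math

def winding_number(path):
    """Signed number of turns of a discrete planar path around the
    origin: sum the signed angles between successive position vectors
    (atan2 of cross product vs. dot product), divided by 2*pi."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return total / (2.0 * math.pi)

# A closed loop traversed once counter-clockwise winds exactly once.
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(101)]
```

For a stationary Gaussian process one would feed in the sampled coordinate pairs and study this statistic as the time horizon grows, which is the regime of the central limit theorem above.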
|
We consider random subgroups of Thompson's group $F$ with respect to two
natural stratifications of the set of all $k$ generator subgroups. We find that
the isomorphism classes of subgroups which occur with positive density are not
the same for the two stratifications.
We give the first known examples of {\em persistent} subgroups, whose
isomorphism classes occur with positive density within the set of $k$-generator
subgroups, for all sufficiently large $k$. Additionally, Thompson's group
provides the first example of a group without a generic isomorphism class of
subgroup. Elements of $F$ are represented uniquely by reduced pairs of finite
rooted binary trees.
We compute the asymptotic growth rate and a generating function for the
number of reduced pairs of trees, which we show is D-finite and not algebraic.
We then use the asymptotic growth to prove our density results.
|
We investigate the semantic intricacies of conditioning, a main feature in
probabilistic programming. We provide a weakest (liberal) pre-condition (w(l)p)
semantics for the elementary probabilistic programming language pGCL extended
with conditioning. We prove that quantitative weakest (liberal) pre-conditions
coincide with conditional (liberal) expected rewards in Markov chains and show
that semantically conditioning is a truly conservative extension. We present
two program transformations which entirely eliminate conditioning from any
program and prove their correctness using the w(l)p-semantics. Finally, we show
how the w(l)p-semantics can be used to determine conditional probabilities in a
parametric anonymity protocol and show that an inductive w(l)p-semantics for
conditioning in non-deterministic probabilistic programs cannot exist.
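The quantitative weakest pre-condition transformer can be sketched as an expectation transformer on a toy abstract syntax. The fragment below (skip, assignment, sequencing, probabilistic choice) illustrates the w(l)p idea only; it omits loops, conditioning, and non-determinism, which are where the paper's real content lies.

```python
def wp(program, post):
    """Weakest pre-expectation for a toy pGCL-like fragment. Programs
    are nested tuples; 'post' maps states (dicts) to expectations."""
    kind = program[0]
    if kind == 'skip':
        return post
    if kind == 'assign':                 # ('assign', var, expr_fn)
        _, var, expr = program
        return lambda s: post({**s, var: expr(s)})
    if kind == 'seq':                    # ('seq', P1, P2)
        _, p1, p2 = program
        return wp(p1, wp(p2, post))
    if kind == 'choice':                 # ('choice', p, P1, P2): prob. choice
        _, p, p1, p2 = program
        return lambda s: p * wp(p1, post)(s) + (1 - p) * wp(p2, post)(s)
    raise ValueError(kind)
```

For instance, setting x to 0 and then incrementing it with probability 1/2 has pre-expectation 1/2 with respect to the post-expectation x, matching the expected-reward reading in the underlying Markov chain.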
|
In this paper we report on a study conducted with a group of older adults in
which they engaged in participatory design workshops to create a VR ATM
training simulation. Based on observation, recordings and the developed VR
application we present the results of the workshops and offer considerations
and recommendations for organizing opportunities for end users, in this case
older adults, to directly engage in co-creation of cutting-edge ICT solutions.
These include co-designing interfaces and interaction schemes for emerging
technologies like VR and AR. We discuss such aspects as user engagement and
hardware and software tools suitable for participatory prototyping of VR
applications. Finally, we present ideas for further research in the area of VR
participatory prototyping with users of various proficiency levels, taking
steps towards developing a unified framework for co-design in AR and VR.
|
We study the geometric interpretation of two dimensional rational conformal
field theories, corresponding to sigma models on Calabi-Yau manifolds. We
perform a detailed study of RCFT's corresponding to T^2 target and identify the
Cardy branes with geometric branes. The T^2's leading to RCFT's admit ``complex
multiplication'' which characterizes Cardy branes as specific D0-branes. We
propose a condition for the conformal sigma model to be RCFT for arbitrary
Calabi-Yau n-folds, which agrees with the known cases. Together with recent
conjectures by mathematicians it appears that rational conformal theories are
not dense in the space of all conformal theories, and sometimes appear to be
finite in number for Calabi-Yau n-folds for n>2. RCFT's on K3 may be dense. We
speculate about the meaning of these special points in the moduli spaces of
Calabi-Yau n-folds in connection with freezing geometric moduli.
|
As science and technology advance, medical applications are becoming too
complex to solve physiological problems within the expected time. Workflow
management systems (WMS) in Grid computing are a promising solution to such
sophisticated problems as genomic analysis, drug discovery, and disease
identification. Although existing WMS provide basic management functionality in
Grid environments, they do not consider user requirements such as performance,
reliability, and interaction with users. In this paper, we propose a hybrid
workflow management system for heart disease identification and discuss how to
guarantee different user requirements according to user SLAs. The proposed
system is applied to the Physio-Grid e-health platform to identify
human heart disease with ECG analysis and Virtual Heart Simulation (VHS)
workflow applications.
|
Principal Component Analysis (PCA) and other multi-variate models are often
used in the analysis of "omics" data. These models contain much information
which is currently neither easily accessible nor interpretable. Here we present
an algorithmic method which has been developed to integrate this information
with existing databases of background knowledge, stored in the form of known
sets (for instance genesets or pathways). To make this accessible we have
produced a Graphical User Interface (GUI) in Matlab which allows the overlay of
known set information onto the loadings plot and thus improves the
interpretability of the multi-variate model. For each known set the optimal
convex hull, covering a subset of elements from the known set, is found through
a search algorithm and displayed. In this paper we discuss two main topics: the
details of the search algorithm for the optimal convex hull for this problem,
and the GUI interface, which is freely available for download for academic use.
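The hull-drawing step — computing a convex hull over the loadings of elements from a known set — can be illustrated with a standard 2D routine (Andrew's monotone chain). This is a building block only; the paper's search for the optimal hull over subsets of a known set is more involved.

```python
def convex_hull(points):
    """Andrew's monotone chain: returns the hull vertices of a set of
    2D points (tuples) in counter-clockwise order, excluding collinear
    boundary points. A minimal stand-in for the hull step above."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Overlaying such hulls on a PCA loadings plot is what lets the known-set structure be read off the multi-variate model.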
|
We study holographic renormalization group flows from four-dimensional
$\mathcal{N}=2$ SCFTs to either $\mathcal{N}=2$ or $\mathcal{N}=1$ SCFTs. Our
approach is based on the framework of five-dimensional half-maximal
supergravity with general gauging, which we use to study domain wall solutions
interpolating between different supersymmetric AdS$_5$ vacua. We show that a
holographic RG flow connecting two $\mathcal{N}=2$ SCFTs is only possible if
the flavor symmetry of the UV theory admits an ${\rm SO}(3)$ subgroup. In this
case the ratio of the IR and UV central charges satisfies a universal relation
which we also establish in field theory. In addition we provide several general
examples of holographic flows from $\mathcal{N}=2$ to $\mathcal{N}=1$ SCFTs and
relate the ratio of the UV and IR central charges to the conformal dimension of
the operator triggering the flow. Instrumental to our analysis is a derivation
of the general conditions for AdS vacua preserving eight supercharges as well
as for domain wall solutions preserving eight Poincare supercharges in
half-maximal supergravity.
|
In a focused ion beam (FIB) microscope, source particles interact with a
small volume of a sample to generate secondary electrons that are detected,
pixel by pixel, to produce a micrograph. Randomness of the number of incident
particles causes excess variation in the micrograph, beyond the variation in
the underlying particle-sample interaction. We recently demonstrated that joint
processing of multiple time-resolved measurements from a single pixel can
mitigate this effect of source shot noise in helium ion microscopy. This paper
is focused on establishing a rigorous framework for understanding the potential
for this approach. It introduces idealized continuous- and discrete-time
abstractions of FIB microscopy with direct electron detection and
estimation-theoretic limits of imaging performance under these measurement
models. Novel estimators for use with continuous-time measurements are
introduced and analyzed, and estimators for use with discrete-time measurements
are analyzed and shown to approach their continuous-time counterparts as time
resolution is increased. Simulated FIB microscopy results are consistent with
theoretical analyses and demonstrate that substantial improvements over
conventional FIB microscopy image formation are made possible by time-resolved
measurement.
|
In this paper, we study the behavior of discrete dynamical systems (automata)
with respect to transitivity; that is, loosely speaking, we consider how
diverse the behavior of the system can be with respect to the variety of word
transformations it performs: We call a system completely transitive if, given
an arbitrary pair $a,b$
of finite words that have equal lengths, the system $\mathfrak A$, while
evolution during (discrete) time, at a certain moment transforms $a$ into $b$.
To every system $\mathfrak A$, we put into a correspondence a family $\mathcal
F_{\mathfrak A}$ of continuous maps of a suitable non-Archimedean metric space
and show that the system is completely transitive if and only if the family
$\mathcal F_{\mathfrak A}$ is ergodic w.r.t. the Haar measure; then we find
easy-to-verify conditions the system must satisfy to be completely transitive.
The theory can be applied to analyze behavior of straight-line computer
programs (in particular, pseudo-random number generators that are used in
cryptography and simulations) since basic CPU instructions (both numerical and
logical) can be considered as continuous maps of a (non-Archimedean) metric
space $\mathbb Z_2$ of 2-adic integers.
|
We propose a Bayesian approach to estimating parameters in multiclass
functional models. Unordered multinomial probit, ordered multinomial probit and
multinomial logistic models are considered. We use finite random series priors
based on a suitable basis such as B-splines in these three multinomial models,
and classify the functional data using the Bayes rule. We average over models
based on the marginal likelihood estimated from Markov Chain Monte Carlo (MCMC)
output. Posterior contraction rates for the three multinomial models are
computed. We also consider Bayesian linear and quadratic discriminant analyses
on the multivariate data obtained by applying a functional principal component
technique on the original functional data. A simulation study is conducted to
compare these methods on different types of data. We also apply these methods
to a phoneme dataset.
|
Network embedding aims to represent a network in a low-dimensional space where
the network structural information and inherent properties are maximally
preserved. Random walk based network embedding methods such as DeepWalk and
node2vec have shown outstanding performance in preserving the network
topological structure. However, these approaches either predict the
distribution of a node's neighbors in both directions together, which makes
them unable to capture any asymmetric relationship in a network, or preserve
the asymmetric relationship in only one direction and hence lose it in the
other. To address these limitations, we propose a bidirectional group random
walk based network embedding method (BiGRW), which treats the distributions of
a node's neighbors in the forward and backward directions in random walks as
two different kinds of asymmetric network structural information. The basic
idea of BiGRW is
to learn a representation for each node that is useful to predict its
distribution of neighbors in the forward and backward directions separately.
Apart from that, a novel random walk sampling strategy is proposed with a
parameter {\alpha} to flexibly control the trade-off between breadth-first
sampling (BFS) and depth-first sampling (DFS). To learn representations from
node attributes, we design an attributed version of BiGRW (BiGRW-AT).
Experimental results on several benchmark datasets demonstrate that the
proposed methods significantly outperform the state-of-the-art plain and
attributed network embedding methods on tasks of node classification and
clustering.
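The trade-off between breadth-first and depth-first exploration controlled by {\alpha} can be sketched with a simple biased walk. This is an illustration of the idea only, not the exact BiGRW sampling strategy; the function name and restart rule are ours.

```python
import random

def biased_walk(adj, start, length, alpha=0.5, rng=random):
    """Illustrative walk on an adjacency dict: with probability alpha,
    jump back to the source before stepping (breadth-first flavour,
    exploring the source's neighborhood); otherwise continue from the
    current node (depth-first flavour)."""
    walk = [start]
    cur = start
    for _ in range(length - 1):
        if rng.random() < alpha:
            cur = start            # restart: sample near the source again
        cur = rng.choice(adj[cur])
        walk.append(cur)
    return walk
```

Setting alpha near 1 concentrates context windows around the source (BFS-like), while alpha near 0 yields plain forward walks (DFS-like), which is the dial the abstract describes.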
|
We introduce new models of very weakly coupled logistic and tent maps for
which orbits of very long period are found. The length of these periods is far
greater than one billion. The properties of these models relative to the
distribution of the iterated points (invariant measure) are described.
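A minimal version of weakly coupled logistic maps, together with a naive period detector, can be written as follows. The coupling form, parameters, and the short-period test regime are illustrative choices of ours; the billion-length periods reported above arise at other parameter values.

```python
def coupled_step(x, y, r=3.2, eps=1e-3):
    # One step of two weakly coupled logistic maps (illustrative coupling).
    fx, fy = r * x * (1 - x), r * y * (1 - y)
    return (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx

def orbit_period(x, y, transient=2000, max_period=64, tol=1e-9):
    """Naive period detector: discard a transient, store the state, then
    look for a return within tol. At r = 3.2 the attractor is a short
    cycle, so this toy check terminates quickly."""
    for _ in range(transient):
        x, y = coupled_step(x, y)
    x0, y0 = x, y
    for p in range(1, max_period + 1):
        x, y = coupled_step(x, y)
        if abs(x - x0) < tol and abs(y - y0) < tol:
            return p
    return None
```

Replacing the logistic update with a tent map and pushing the parameters into the chaotic regime is where cycle detection becomes the hard problem the models above address.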
|
This work discusses the boundedness of solutions for the impulsive Duffing
equation with time-dependent polynomial potentials. Via the KAM theorem, we prove
that all solutions of the Duffing equation with low regularity in time
undergoing suitable impulses are bounded for all time and that there are many
(positive Lebesgue measure) quasi-periodic solutions clustering at infinity.
This result extends some well-known results on Duffing equations to impulsive
Duffing equations.
|
Deep neural networks continue to awe the world with their remarkable
performance. Their predictions, however, are prone to be corrupted by
adversarial examples that are imperceptible to humans. Current efforts to
improve the robustness of neural networks against adversarial examples are
focused on developing robust training methods, which update the weights of a
neural network in a more robust direction. In this work, we take a step beyond
training of the weight parameters and consider the problem of designing an
adversarially robust neural architecture with high intrinsic robustness. We
propose AdvRush, a novel adversarial robustness-aware neural architecture
search algorithm, based upon a finding that independent of the training method,
the intrinsic robustness of a neural network can be represented with the
smoothness of its input loss landscape. Through a regularizer that favors a
candidate architecture with a smoother input loss landscape, AdvRush
successfully discovers an adversarially robust neural architecture. Along with
a comprehensive theoretical motivation for AdvRush, we conduct an extensive
amount of experiments to demonstrate the efficacy of AdvRush on various
benchmark datasets. Notably, on CIFAR-10, AdvRush achieves 55.91% robust
accuracy under FGSM attack after standard training and 50.04% robust accuracy
under AutoAttack after 7-step PGD adversarial training.
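The input-loss-landscape smoothness that AdvRush's regularizer favors can be proxied crudely by the average loss change under small random input perturbations. The following is an assumption-laden stand-in of ours, not the actual AdvRush regularizer, which operates during architecture search.

```python
import numpy as np

def input_smoothness(loss_fn, x, eps=1e-2, n=16, seed=0):
    """Crude proxy for input-loss-landscape smoothness: mean absolute
    loss change under n small Gaussian input perturbations. Smaller
    values indicate a flatter (smoother) landscape around x."""
    rng = np.random.default_rng(seed)
    base = loss_fn(x)
    deltas = rng.normal(0.0, eps, size=(n,) + x.shape)
    return float(np.mean([abs(loss_fn(x + d) - base) for d in deltas]))

# Two toy loss surfaces: one flat, one steep in input space.
flat = lambda x: float(np.sum(0.01 * x))
steep = lambda x: float(np.sum((5.0 * x) ** 2))
x0 = np.ones(4)
s_flat = input_smoothness(flat, x0)
s_steep = input_smoothness(steep, x0)
```

Penalizing a candidate architecture in proportion to such a quantity is the spirit of the regularizer that steers the search toward intrinsically robust networks.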
|
An agnostic PAC learning algorithm finds a predictor that is competitive with
the best predictor in a benchmark hypothesis class, where competitiveness is
measured with respect to a given loss function. However, its predictions might
be quite sub-optimal for structured subgroups of individuals, such as protected
demographic groups. Motivated by such fairness concerns, we study "multi-group
agnostic PAC learnability": fixing a measure of loss, a benchmark class
$\mathcal{H}$ and a (potentially) rich collection of subgroups $\mathcal{G}$,
the objective is to learn a single predictor such that the loss experienced by
every group $g \in \mathcal{G}$ is not much larger than the best possible loss
for this group within $\mathcal{H}$.
Under natural conditions, we provide a characterization of the loss functions
for which such a predictor is guaranteed to exist. For any such loss function
we construct a learning algorithm whose sample complexity is logarithmic in the
size of the collection $\mathcal{G}$. Our results unify and extend previous positive and
negative results from the multi-group fairness literature, which applied for
specific loss functions.
|
We prove that any ergodic $SL_2(R)$-invariant probability measure on a
stratum of translation surfaces satisfies strong regularity: the measure of the
set of surfaces with two non-parallel saddle connections of length at most
$\epsilon_1, \epsilon_2$ is $O(\epsilon_1^2 \epsilon_2^2)$. We prove a more
general theorem which works for any number of short saddle connections. The
proof uses the multi-scale compactification of strata recently introduced by
Bainbridge-Chen-Gendron-Grushevsky-M\"oller and the algebraicity result of
Filip.
|
We complete the realisation by braided subfactors, announced by Ocneanu, of
all SU(3)-modular invariant partition functions previously classified by
Gannon.
|
Mixtures of ultracold gases with long-range interactions are expected to open
new avenues in the study of quantum matter. Natural candidates for this
research are spin mixtures of atomic species with large magnetic moments.
However, the lifetime of such assemblies can be strongly affected by the
dipolar relaxation that occurs in spin-flip collisions. Here we present
experimental results for a mixture composed of the two lowest Zeeman states of
$^{162}$Dy atoms, that act as dark states with respect to a light-induced
quadratic Zeeman effect. We show that, due to an interference phenomenon, the
rate for such inelastic processes is dramatically reduced with respect to the
Wigner threshold law. Additionally, we determine the scattering lengths
characterizing the s-wave interaction between these states, providing all
necessary data to predict the miscibility range of the mixture, depending on
its dimensionality.
|
We present a combined theoretical and experimental study of X-ray optical
wave mixing. This class of nonlinear phenomena combines the strengths of
spectroscopic techniques from the optical domain, with the high-resolution
capabilities of X-rays. In particular, the spectroscopic sensitivity of these
phenomena can be exploited to selectively probe valence dynamics. Specifically,
we focus on the effect of X-ray parametric down-conversion. We present a
theoretical description of the process, from which we deduce the observable
nonlinear response of valence charges. Subsequently, we simulate scattering
patterns for realistic conditions and identify characteristic signatures of the
nonlinear conversion. For the observation of this signature, we present a
dedicated experimental setup and results of a detailed investigation. However,
we do not find evidence of the nonlinear effect. This finding stands in strong
contradiction to previous claims of proof-of-principle demonstrations.
Nevertheless, we are optimistic to employ related X-ray optical wave mixing
processes on the basis of the methods presented here for probing valence
dynamics in the future.
|
Recent discovery of several overluminous type Ia supernovae (SNe Ia)
indicates that the explosive masses of white dwarfs may significantly exceed
the canonical Chandrasekhar mass limit. Rapid differential rotation may support
these massive white dwarfs. Based on the single-degenerate scenario, and
assuming that the white dwarfs would differentially rotate when the accretion
rate $\dot{M}>3\times 10^{-7}M_{\odot}\rm yr^{-1}$, employing Eggleton's
stellar evolution code we have performed the numerical calculations for $\sim$
1000 binary systems consisting of a He star and a CO white dwarf (WD). We
present the initial parameters in the orbital period - helium star mass plane
(for WD masses of $1.0 M_{\odot}$ and $1.2 M_{\odot}$, respectively), which
lead to super-Chandrasekhar mass SNe Ia. Our results indicate that, for an
initial massive WD of $1.2 M_{\odot}$, a large number of SNe Ia may result from
super-Chandrasekhar mass WDs, and the highest mass of the WD at the moment of
SNe Ia explosion is 1.81 $M_\odot$, but very massive ($>1.85M_{\odot}$) WDs
cannot be formed. However, when the initial mass of WDs is $1.0 M_{\odot}$, the
explosive masses of SNe Ia are nearly uniform, which is consistent with the
rareness of super-Chandrasekhar mass SNe Ia in observations.
|
Motivated by the recent work of Kachru-Vafa in string theory, we study in
Part A of this paper, certain identities involving modular forms,
hypergeometric series, and more generally series solutions to Fuchsian
equations. The identity which arises in string theory is the simplest of its
kind. There are nontrivial generalizations of the identity which appear to be
new. We
give many such examples -- all of which arise in mirror symmetry for algebraic
K3 surfaces.
In Part B, we study the integrality property of certain $q$-series, known as
mirror maps, which arise in mirror symmetry.
|
A characterization is given of the subsets of a group that extend to the
positive cone of a right order on the group and used to relate validity of
equations in lattice-ordered groups (l-groups) to subsets of free groups that
extend to positive cones of right orders. This correspondence is used to obtain
new proofs of the decidability of the word problem for free l-groups and
generation of the variety of l-groups by the l-group of automorphisms of the
real number line. A characterization of the subsets of a group that extend to
the positive cone of an order on the group is also used to establish a
correspondence between the validity of equations in varieties of representable
l-groups (equivalently, classes of ordered groups) and subsets of relatively
free groups that extend to positive cones of orders.
|
Clustering algorithms have become a popular tool in computer security to
analyze the behavior of malware variants, identify novel malware families, and
generate signatures for antivirus systems. However, the suitability of
clustering algorithms for security-sensitive settings has been recently
questioned by showing that they can be significantly compromised if an attacker
can exercise some control over the input data. In this paper, we revisit this
problem by focusing on behavioral malware clustering approaches, and
investigate whether and to what extent an attacker may be able to subvert these
approaches through a careful injection of samples with poisoning behavior. To
this end, we present a case study on Malheur, an open-source tool for
behavioral malware clustering. Our experiments not only demonstrate that this
tool is vulnerable to poisoning attacks, but also that it can be significantly
compromised even if the attacker can only inject a very small percentage of
attacks into the input data. As a remedy, we discuss possible countermeasures
and highlight the need for more secure clustering algorithms.
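The bridging idea behind such poisoning attacks can be illustrated on a toy problem. The sketch below is not Malheur or the paper's attack; it is a minimal stdlib demo in which a few injected "bridge" samples between two well-separated 1-D clusters pull the centroids of a plain k-means (k=2) together. All names (`kmeans_1d`, the cluster parameters) are hypothetical.

```python
import random
import statistics

def kmeans_1d(xs, iters=50):
    """Plain 1-D Lloyd's algorithm with k=2 and deterministic extreme-point
    initialization (a toy stand-in for a behavioral clusterer)."""
    cents = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            groups[0 if abs(x - cents[0]) <= abs(x - cents[1]) else 1].append(x)
        cents = [statistics.fmean(g) if g else c for g, c in zip(groups, cents)]
    return sorted(cents)

rnd = random.Random(1)
clean = [rnd.gauss(0.0, 0.5) for _ in range(50)] + \
        [rnd.gauss(10.0, 0.5) for _ in range(50)]
poisoned = clean + [4.0] * 5 + [6.0] * 5   # a few injected "bridge" samples

gap_clean = kmeans_1d(clean)[1] - kmeans_1d(clean)[0]
gap_poisoned = kmeans_1d(poisoned)[1] - kmeans_1d(poisoned)[0]
# The injected points pull the two centroids toward each other, distorting
# the clustering even though they are only ~9% of the data.
```

Real behavioral malware clustering operates on high-dimensional feature vectors, but the attack surface is the same: a small fraction of adversarial inputs can shift the learned cluster structure.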
|
Given a symbol $\varphi,$ i.e., a holomorphic endomorphism of the unit disc,
we consider the composition operator $C_{\varphi}(f)=f\circ\varphi$ defined on
the Banach spaces of holomorphic functions $A(\mathbb{D})$ and
$H^{\infty}(\mathbb{D})$. We obtain different conditions on the symbol
$\varphi$ which characterize when the composition operator is mean ergodic and
uniformly mean ergodic in the corresponding spaces. These conditions are
related to the asymptotic behaviour of the iterates of the symbol. As an
appendix, we deal with a particular case in the setting of weighted Banach
spaces of holomorphic functions.
|
Fractional revival is a quantum transport phenomenon important for
entanglement generation in spin networks. This takes place whenever a
continuous-time quantum walk maps the characteristic vector of a vertex to a
superposition of the characteristic vectors of a subset of vertices containing
the initial vertex. A main focus will be on the case when the subset has two
vertices. We explore necessary and sufficient spectral conditions for graphs to
exhibit fractional revival. This provides a characterization of fractional
revival in paths and cycles. Our work builds upon the algebraic machinery
developed for related quantum transport phenomena such as state transfer and
mixing, and it reveals a fundamental connection between them.
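As a minimal illustration (the single edge $K_2$, not an example taken from the paper): for adjacency matrix $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ the walk $U(t)=e^{-\mathrm{i}At}$ has closed form $U(t)=\begin{pmatrix}\cos t&-\mathrm{i}\sin t\\-\mathrm{i}\sin t&\cos t\end{pmatrix}$, and at $t=\pi/4$ a vertex state evolves to a balanced superposition of the two vertices, the simplest instance of fractional revival.

```python
import cmath
import math

# Continuous-time quantum walk on the single edge K2: A = [[0,1],[1,0]], so
# U(t) = exp(-iAt) = [[cos t, -i sin t], [-i sin t, cos t]].
def k2_walk(t):
    """Amplitudes on (initial vertex, other vertex) after time t."""
    return cmath.cos(t), -1j * cmath.sin(t)

a1, a2 = k2_walk(math.pi / 4)
# Balanced fractional revival: the initial vertex evolves to a superposition
# supported on both vertices with probability 1/2 each.
```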
|
Motivated by the physics of spin-orbital liquids, we study a model of
interacting Dirac fermions on a bilayer honeycomb lattice at half filling,
featuring an explicit global SO(3)$\times$U(1) symmetry. Using large-scale
auxiliary-field quantum Monte Carlo (QMC) simulations, we locate two
zero-temperature phase transitions as a function of increasing interaction
strength. First, we observe a continuous transition from the weakly-interacting
semimetal to a different semimetallic phase in which the SO(3) symmetry is
spontaneously broken and where two out of three Dirac cones acquire a mass gap.
The associated quantum critical point can be understood in terms of a
Gross-Neveu-SO(3) theory. Second, we subsequently observe a transition towards
an insulating phase in which the SO(3) symmetry is restored and the U(1)
symmetry is spontaneously broken. While strongly first order at the mean-field
level, the QMC data is consistent with a direct and continuous transition. It
is thus a candidate for a new type of deconfined quantum critical point that
features gapless fermionic degrees of freedom.
|
In this paper, for the first time, we analytically prove that the uplink (UL)
inter-cell interference in frequency division multiple access (FDMA) small cell
networks (SCNs) can be well approximated by a lognormal distribution under a
certain condition. The lognormal approximation is vital because it allows
tractable network performance analysis with closed-form expressions. The
derived condition, under which the lognormal approximation applies, does not
pose particular requirements on the shapes/sizes of user equipment (UE)
distribution areas as in previous works. Instead, our results show that if a
path-loss-related random variable (RV) associated with the UE distribution
area has a low ratio of the third absolute moment to the variance, the lognormal
approximation will hold. Analytical and simulation results show that the
derived condition can be readily satisfied in future dense/ultra-dense SCNs,
indicating that our conclusions are very useful for network performance
analysis of the 5th generation (5G) systems with more general cell deployment
beyond the widely used Poisson deployment.
|
We recently developed a scheme to use low-cost calculations to find a single
twist angle where the coupled cluster doubles energy of a single calculation
matches the twist-averaged coupled cluster doubles energy in a finite unit
cell. We used initiator full configuration interaction quantum Monte Carlo
($i$-FCIQMC) as an example of an exact method beyond coupled cluster doubles
theory to show that this selected twist angle approach had comparable accuracy
in methods beyond coupled cluster. Further, at least for small system sizes, we
show that the same twist angle can also be found by comparing the energy
directly (at the level of second-order Møller-Plesset theory), suggesting a
route toward twist-angle selection that requires minimal modification to
existing codes that can perform twist averaging.
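The selection step itself is simple: evaluate a cheap energy on a twist grid, twist-average it, and keep the twist whose cheap energy is closest to the average. The sketch below uses a made-up analytic stand-in for the per-twist energy (in practice this would be an MP2-level call into an electronic-structure code); the grid size and `cheap_energy` are illustrative assumptions.

```python
import math
import statistics

# Hypothetical stand-in for a cheap per-twist energy (e.g. an MP2-level
# calculation); in practice this would call an electronic-structure code.
def cheap_energy(theta):
    return -1.0 + 0.05 * math.cos(3 * theta)

twists = [2 * math.pi * k / 32 for k in range(32)]
ta_energy = statistics.fmean(cheap_energy(t) for t in twists)  # twist average

# Selected twist: the one whose cheap energy best matches the twist average.
best = min(twists, key=lambda t: abs(cheap_energy(t) - ta_energy))
# A single expensive calculation at `best` then stands in for full twist
# averaging at the higher level of theory.
```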
|
Based on the Large Sky Area Multi-Object Fiber Spectroscopic Telescope
(LAMOST) medium-resolution spectroscopic survey (MRS), we report the discovery
of nine super Li-rich unevolved stars with A(Li) $>$ 3.8 dex. These objects
show unusually high levels of lithium abundances up to three times higher than
the meteoritic value of 3.3 dex, which indicates that they must have
experienced a history of lithium enrichment. It is found that seven of our
program stars are fast rotators with $v\sin i>9$ km s$^{-1}$, which suggests
that the accretion of circumstellar matter may be the main contributor to the
lithium enhancement of these unevolved stars; however, other sources cannot be
excluded.
|
Observations of multiple or lensed QSOs at high redshift can be used to probe
transverse structure in intervening absorption systems. Here we present results
obtained from STIS spectroscopy of the unique triply imaged QSO APM08279+5255
at z = 3.9 and study the coherence scales of intervening low and high
ionization absorbers.
|
Christ and Kiselev have established that the generalized eigenfunctions of
one-dimensional Dirac operators with $L^p$ potential $F$ are bounded for almost
all energies for $p < 2$. Roughly speaking, the proof involved writing these
eigenfunctions as a multilinear series $\sum_n T_n(F, ..., F)$ and carefully
bounding each term $T_n(F, ..., F)$. It is conjectured that the results of
Christ and Kiselev also hold for $L^2$ potentials $F$. However in this note we
show that the bilinear term $T_2(F,F)$ and the trilinear term $T_3(F,F,F)$ are
badly behaved on $L^2$, which seems to indicate that multilinear expansions are
not the right tool for tackling this endpoint case.
|
Large pre-trained neural networks are ubiquitous and critical to the success
of many downstream tasks in natural language processing and computer vision.
However, within the field of web information retrieval, there is a stark
contrast in the lack of similarly flexible and powerful pre-trained models that
can properly parse webpages. Consequently, we believe that common machine
learning tasks like content extraction and information mining from webpages
offer low-hanging gains that remain untapped.
We aim to close the gap by introducing an agnostic deep graph neural network
feature extractor that can ingest webpage structures, pre-train self-supervised
on massive unlabeled data, and fine-tune effectively to arbitrary tasks on
webpages.
Finally, we show that our pre-trained model achieves state-of-the-art results
using multiple datasets on two very different benchmarks: webpage boilerplate
removal and genre classification, thus lending support to its potential
application in diverse downstream tasks.
|
We propose a method for reduction of quantum systems with arbitrary first
class constraints. An appropriate mathematical setting for the problem is the
homology of associative algebras. For every such algebra $A$ and its
subalgebra $B$ with an augmentation $e$ there exists a cohomological complex
which is a generalization of the BRST one. Its cohomology is an associative
graded algebra $Hk^{*}(A,B)$ which we call the Hecke algebra of the triple
$(A,B,e)$. It acts on the cohomology space $H^{*}(B,V)$ for every left
$A$-module $V$. In particular, the zeroth graded component $Hk^{0}(A,B)$ acts
on the space of $B$-invariants of $V$ and provides the reduction of the
quantum system.
|
Until recently, the field of speaker diarization was dominated by cascaded
systems. Due to their limitations, mainly regarding overlapped speech and
cumbersome pipelines, end-to-end models have gained great popularity lately.
One of the most successful models is end-to-end neural diarization with
encoder-decoder based attractors (EEND-EDA). In this work, we replace the EDA
module with a Perceiver-based one and show its advantages over EEND-EDA; namely
obtaining better performance on the largely studied Callhome dataset, finding
the quantity of speakers in a conversation more accurately, and faster
inference time. Furthermore, when exhaustively compared with other methods, our
model, DiaPer, reaches remarkable performance with a very lightweight design.
Besides, we perform comparisons with other works and a cascaded baseline across
more than ten public wide-band datasets. Together with this publication, we
release the code of DiaPer as well as models trained on public and free data.
|
Recent work has characterized the sum capacity of
time-varying/frequency-selective wireless interference networks and $X$
networks within $o(\log({SNR}))$, i.e., with an accuracy approaching 100% at
high SNR (signal to noise power ratio). In this paper, we seek similar capacity
characterizations for wireless networks with relays, feedback, full duplex
operation, and transmitter/receiver cooperation through noisy channels. First,
we consider a network with $S$ source nodes, $R$ relay nodes and $D$
destination nodes with random time-varying/frequency-selective channel
coefficients and global channel knowledge at all nodes. We allow full-duplex
operation at all nodes, as well as causal noise-free feedback of all received
signals to all source and relay nodes. The sum capacity of this network is
characterized as $\frac{SD}{S+D-1}\log({SNR})+o(\log({SNR}))$. The implication
of the result is that the capacity benefits of relays, causal feedback,
transmitter/receiver cooperation through physical channels and full duplex
operation become a negligible fraction of the network capacity at high SNR.
Some exceptions to this result are also pointed out in the paper. Second, we
consider a network with $K$ full duplex nodes with an independent message from
every node to every other node in the network. We find that the sum capacity of
this network is bounded below by
$\frac{K(K-1)}{2K-2}\log({SNR})+o(\log({SNR}))$ and bounded above by
$\frac{K(K-1)}{2K-3}\log({SNR})+o(\log({SNR}))$.
|
We consider convection-diffusion problems in time-dependent domains and
present a space-time finite element method based on quadrature in time which is
simple to implement and avoids remeshing procedures as the domain is moving.
The evolving domain is embedded in a domain with fixed mesh and a cut finite
element method with continuous elements in space and discontinuous elements in
time is proposed. The method allows the evolving geometry to cut through the
fixed background mesh arbitrarily and thus avoids remeshing procedures.
However, the arbitrary cuts may lead to ill-conditioned algebraic systems. A
stabilization term is added to the weak form which guarantees well-conditioned
linear systems independently of the position of the geometry relative to the
fixed mesh and in addition makes it possible to use quadrature rules in time to
approximate the space-time integrals. We review here the space-time cut finite
element method presented in [13] where linear elements are used in both space
and time and extend the method to higher order elements for problems on
evolving surfaces (or interfaces). We present a new stabilization term which,
also when higher order elements are used, controls the condition number of the
linear systems arising from cut finite element methods on evolving surfaces.
The new
stabilization combines the consistent ghost penalty stabilization [1] with a
term controlling normal derivatives at the interface.
|
We present a numerical investigation of three-dimensional, short-wavelength
linear instabilities in Kelvin-Helmholtz (KH) vortices in homogeneous and
stratified environments. The base flow, generated using two-dimensional
numerical simulations, is characterized by the Reynolds number and the
Richardson number defined based on the initial one-dimensional velocity and
buoyancy profiles. The local stability equations are then solved on closed
streamlines in the vortical base flow, which is assumed quasi-steady. For the
unstratified case, the elliptic instability at the vortex core dominates at
early times, before being taken over by the hyperbolic instability at the
vortex edge. For the stratified case, the early time instabilities comprise a
dominant elliptic instability at the core and a hyperbolic instability strongly
influenced by stratification at the vortex edge. At intermediate times, the
local approach shows a new branch of instability (convective branch) that
emerges at the vortex core and subsequently moves towards the vortex edge. A
few more convective instability branches appear at the vortex core and move
away, before coalescing to form the most unstable region inside the vortex
periphery at large times. The dominant instability characteristics from the
local approach are shown to be in good qualitative agreement with results from
global instability studies for both homogeneous and stratified cases.
Compartmentalized analyses are then used to elucidate the role of shear and
stratification on the identified instabilities. The role of buoyancy is shown
to be critical after the primary KH instability saturates, with the dominant
convective instability shown to occur in regions with the strongest statically
unstable layering. We conclude by highlighting the potentially insightful role
that the local approach may offer in understanding the secondary instabilities
in other flows.
|
Structural changes in giant DNA induced by the addition of the flexible
polymer PEG were examined by the method of single-DNA observation. In dilute
DNA conditions, individual DNA assumes a compact state via a discrete
coil-globule transition, whereas in concentrated solution, DNA molecules
exhibit an extended conformation via macroscopic phase segregation. The long
axis length of the stretched state in DNA is about 1000 times larger than that
of the compact state. Phase segregation at high DNA concentrations occurs at
lower PEG concentrations than the compaction at low DNA concentrations. These
opposite changes in the conformation of DNA molecule are interpreted in terms
of the free energy, including depletion interaction.
|
Our two principle goals are generalizations of the commutant lifting theorem
and the Nevanlinna-Pick interpolation theorem to the context of Hardy algebras
built from $W^*$-correspondences endowed with a sequence of weights. These
theorems generalize theorems of Muhly and Solel from 1998 and 2004,
respectively, which were proved in settings without weights. Of special
interest is the fact that commutant lifting in our setting is a consequence of
Parrott's Lemma; it is inspired by work of Arias.
|
In two-tier networks -- comprising a conventional cellular network overlaid
with shorter range hotspots (e.g. femtocells, distributed antennas, or wired
relays) -- with universal frequency reuse, the near-far effect from cross-tier
interference creates dead spots where reliable coverage cannot be guaranteed to
users in either tier. Equipping the macrocell and femtocells with multiple
antennas enhances robustness against the near-far problem. This work derives
the maximum number of simultaneously transmitting multiple antenna femtocells
meeting a per-tier outage probability constraint. Coverage dead zones are
presented wherein cross-tier interference bottlenecks cellular and hotspot
coverage. Two operating regimes are identified, namely 1) a cellular-limited
regime in which femtocell users experience unacceptable cross-tier
interference, and 2) a hotspot-limited regime wherein both femtocell users and
cellular users are
limited by hotspot interference. Our analysis accounts for the per-tier
transmit powers, the number of transmit antennas (single antenna transmission
being a special case) and terrestrial propagation such as the Rayleigh fading
and the path loss exponents. Single-user (SU) multiple antenna transmission at
each tier is shown to provide significantly superior coverage and spatial reuse
relative to multiuser (MU) transmission. We propose a decentralized
carrier-sensing approach to regulate femtocell transmission powers based on
their location. Considering a worst-case cell-edge location, simulations using
typical path loss scenarios show that our interference management strategy
provides reliable cellular coverage with about 60 femtocells per cellsite.
|
The multi-scale interaction of self-consistently driven magnetic islands with
electromagnetic turbulence is studied within the three dimensional, toroidal
gyro-kinetic framework. It can be seen that, even in the presence of
electromagnetic turbulence, the linear structure of the mode is retained.
Turbulent fluctuations do not destroy the growing island early in its
development, which then maintains a coherent form as it grows.
The island is seeded by the electromagnetic turbulence fluctuations, which
provide an initial island structure through nonlinear interactions and which
grows at a rate significantly faster than the linear tearing growth rate. These
island structures saturate at a width that is approximately $\rho_{i}$ in size.
In the presence of turbulence the island then grows at the linear rate even
though the island is significantly wider than the resonant layer width, a
regime where the island is expected to grow at a significantly reduced
non-linear rate.
A large degree of stochastisation around the separatrix, and an almost
complete breakdown of the X-point, is seen. This significantly reduces the
effective island width.
|
I discuss outstanding questions about the formation of the ring nebula around
SN1987A and some implications of similar ring nebulae around Galactic B
supergiants. There are notable obstacles for the formation of SN1987A's bipolar
nebula through interacting winds in a transition from a red supergiant to a
blue supergiant. Instead, several clues hint that the nebula may have been
ejected in an LBV-like event. In addition to the previously known example of
Sher25, there are two newly-discovered Galactic analogs of SN1987A's ringed
nebula. Of these three Galactic analogs around blue supergiants, two (Sher25
and SBW1) have chemical abundances indicating that they have not been through a
red supergiant phase, and the remaining ringed bipolar nebula surrounds a
luminous blue variable (HD168625). Although Sk-69 202's initial mass of 20 Msun
is lower than those attributed to most LBVs, it is not far off, and the
low-luminosity end of the LBV phenomenon is not well defined. Furthermore,
HD168625's luminosity indicates an initial mass of only 25 Msun, that of SBW1
is consistent with 20 Msun, and there is a B[e] star in the SMC with an initial
mass of 20 Msun that experienced an LBV outburst in the 1990s. These
similarities may be giving us important clues about Sk-69 202's pre-SN
evolution and the formation mechanism of its nebula.
|
We propose a new mechanism for baryogenesis at the 1-200 MeV scale.
Enhancement of CP violation takes place via interference between oscillations
and decays of mesinos--bound states of a scalar quark and antiquark and their
CP conjugates. We present the mechanism in a simplified model with four new
fundamental particles, with masses between 300 GeV and 10 TeV, and show that
some of the experimentally allowed parameter space can give the observed
baryon-to-entropy ratio.
|
In this paper, we address the issue of hyperspectral pan-sharpening, which
consists in fusing a (low spatial resolution) hyperspectral image HX and a
(high spatial resolution) panchromatic image P to obtain a high spatial
resolution hyperspectral image. The problem is addressed under a variational
convex constrained formulation. The objective favors high resolution spectral
bands with level lines parallel to those of the panchromatic image. This term
is balanced with a total variation term as regularizer. Fit-to-P data and
fit-to-HX data constraints are effectively considered as mathematical
constraints, which depend on the statistics of the data noise measurements. The
developed Alternating Direction Method of Multipliers (ADMM) optimization
scheme enables us to solve this problem efficiently despite the
nondifferentiabilities and the huge number of unknowns.
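The ADMM template itself is generic. As a self-contained toy (a scalar $\ell_1$ proximal problem, not the paper's TV-and-constraints model), the three-step iteration for $\min_x \tfrac12\|x-b\|^2 + \lambda\|x\|_1$ alternates an x-minimization, a soft-thresholding z-update, and a dual ascent; the names `soft` and `admm_l1` are hypothetical.

```python
import math

def soft(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return [math.copysign(max(abs(a) - t, 0.0), a) for a in v]

def admm_l1(b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||x||_1 (split x = z)."""
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: exact minimizer of the augmented Lagrangian in x.
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal step (soft-thresholding).
        z = soft([x[i] + u[i] for i in range(n)], lam / rho)
        # dual update.
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return x

x_hat = admm_l1([3.0, -0.2, 1.0], 0.5)
# Converges to the elementwise soft-threshold of b: [2.5, 0.0, 0.5].
```

In the pan-sharpening problem the x-update involves the geometric/TV terms and the z-updates enforce the fit-to-P and fit-to-HX constraints, but the alternating structure is the same.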
|
We are in the process of building complex, highly autonomous systems that have
built-in beliefs, perceive their environment, and exchange information. These
systems construct their respective world view and based on it they plan their
future manoeuvres, i.e., they choose their actions in order to establish their
goals based on their prediction of the possible futures. Usually these systems
face an overwhelming flood of information provided by a variety of sources
where by far not everything is relevant. The goal of our work is to develop a
formal approach to determine what is relevant for a safety critical autonomous
system at its current mission, i.e., what information suffices to build an
appropriate world view to accomplish its mission goals.
|
The expectation that scientific productivity follows regular patterns over a
career underpins many scholarly evaluations, including hiring, promotion and
tenure, awards, and grant funding. However, recent studies of individual
productivity patterns reveal a puzzle: on the one hand, the average number of
papers published per year robustly follows the "canonical trajectory" of a
rapid rise to an early peak followed by a gradual decline, but on the other
hand, only about 20% of individual researchers' productivity follows this
pattern. We resolve this puzzle by modeling scientific productivity as a
parameterized random walk, showing that the canonical pattern can be explained
as a decrease in the variance in changes to productivity in the early-to-mid
career. By empirically characterizing the variable structure of 2,085
productivity trajectories of computer science faculty at 205 PhD-granting
institutions, spanning 29,119 publications over 1980--2016, we (i) discover
remarkably simple patterns in both early-career and year-to-year changes to
productivity, and (ii) show that a random walk model of productivity both
reproduces the canonical trajectory in the average productivity and captures
much of the diversity of individual-level trajectories. These results highlight
the fundamental role of a panoply of contingent factors in shaping individual
scientific productivity, opening up new avenues for characterizing how systemic
incentives and opportunities can be directed for aggregate effect.
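The mechanism can be sketched in a few lines: a random walk for yearly productivity whose step variance shrinks over the career, floored at zero. The parameters below are purely illustrative (a mild negative drift plus an exponentially decaying step variance), not the paper's fitted values; with them, the average over many simulated careers rises early (the zero floor rectifies the large early steps upward) and then declines, reproducing the canonical shape even though individual trajectories vary widely.

```python
import math
import random
import statistics

random.seed(7)

def trajectory(T=30, y0=1.0, drift=-0.15, sigma0=2.5, decay=0.1):
    """One career: a random walk with step variance shrinking over time and
    productivity floored at zero (illustrative parameterization)."""
    y, out = y0, []
    for t in range(T):
        y = max(0.0, y + random.gauss(drift, sigma0 * math.exp(-decay * t)))
        out.append(y)
    return out

runs = [trajectory() for _ in range(3000)]
avg = [statistics.fmean(r[t] for r in runs) for t in range(30)]
peak = avg.index(max(avg))
# avg rises to an interior peak and then declines: the canonical trajectory
# emerges in the average despite heterogeneous individual trajectories.
```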
|
The dynamics of a three-level atom in a cascade configuration with both
transitions coupled to a single structured reservoir of quantized field modes
is treated using Laplace transform methods applied to the coupled amplitude
equations. Results are also obtained from master equations by two different
approaches, that is, involving either pseudomodes or quasimodes. Two different
types of reservoir are considered, namely a high-Q cavity and a photonic
band-gap system, in which the respective reservoir structure functions involve
Lorentzians. Non-resonant transitions are included in the model. In all cases
non-Markovian behaviour for the atomic system can be found, such as oscillatory
decay for the high-Q cavity case and population trapping for the photonic
band-gap case. In the master equation approaches, the atomic system is
augmented by a small number of pseudomodes or quasimodes, which in the
quasimode approach themselves undergo Markovian relaxation into a flat
reservoir of continuum quasimodes. Results from these methods are found to be
identical to those from the Laplace transform method including two-photon
excitation of the reservoir with both emitting sequences. This shows that
complicated non-Markovian decays of an atomic system into structured EM field
reservoirs can be described by Markovian models for the atomic system coupled
to a small number of pseudomodes or quasimodes.
|
Bell nonlocality, entanglement and nonclassical correlations are different
aspects of quantum correlations for a given state. There are many methods to
measure nonclassical correlations. In this paper, nonclassical correlations in
two-qubit spin models are measured by use of measurement-induced disturbance
(MID) [Phys. Rev. A, 77, 022301 (2008)] and geometric measure of quantum
discord (GQD) [Phys. Rev. Lett. 105, 190502 (2010)]. Their dependencies on
external magnetic field, spin-spin coupling, and Dzyaloshinski-Moriya (DM)
interaction are presented in detail. We also compare Bell nonlocality,
entanglement measured by concurrence, MID and GQD and illustrate their
different characteristics.
|
Achieving electrical conductance in amorphous non-doped polymers is a
challenging task. Here, we show that vibrational strong coupling of the
aromatic C-H(D) out-of-plane bending modes of polystyrene, deuterated
polystyrene, and poly (benzyl methacrylate) to the vacuum electromagnetic field
of the cavity enhances the electrical conductivity by at least six orders of
magnitude compared to the uncoupled polymers. The conductance is thermally
activated at the onset of strong coupling. It becomes temperature and cavity
path length independent at the highest coupling strengths, giving rise to the
extraordinary electrical conductance in these polymers. The electrical
characterizations are performed without external light excitation,
demonstrating the role of quantum light in enhancing the long-range coherent
transport even in amorphous non-conducting polymers.
|
This paper establishes a purely syntactic representation for the category of
algebraic L-domains with Scott-continuous functions as morphisms. The central
tool used here is the notion of logical states, which builds a bridge between
disjunctive sequent calculi and algebraic L-domains. To capture
Scott-continuous functions between algebraic L-domains, the notion of
consequence relations between disjunctive sequent calculi is also introduced.
It is shown that the category of disjunctive sequent calculi with consequence
relations as morphisms is categorically equivalent to that of algebraic L-domains
with Scott-continuous functions as morphisms.
|