We introduce an efficient cellular automaton for the coagulation-fission
process with diffusion, 2A->3A and 2A->A, in arbitrary dimensions. Like the
well-known Domany-Kinzel model, it is defined on a tilted hypercubic lattice and evolves
by parallel updates. The model exhibits a non-equilibrium phase transition from
an active into an absorbing phase and its critical properties are expected to
be of the same type as in the pair contact process with diffusion.
High-precision simulations on a parallel computer suggest that various
quantities of interest do not show the expected power-law scaling, calling for
new approaches to understand this unusual type of critical behavior.
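As a rough illustration of the reaction scheme 2A->3A, 2A->A with diffusion, the
following Python sketch runs a random-sequential Monte Carlo on a 1D periodic
lattice. It is not the parallel-update, tilted-lattice cellular automaton of the
abstract; the rates, lattice size, and update order are illustrative assumptions only.

```python
import numpy as np

def sweep(lattice, p_coag=0.1, p_fission=0.1, rng=np.random):
    """One random-sequential sweep of 2A->A (coagulation), 2A->3A (fission) and
    nearest-neighbour diffusion on a 1D periodic lattice (1 = particle, 0 = empty).
    Illustrative rates and update rule; not the parallel-update automaton above."""
    L = len(lattice)
    for _ in range(L):
        i = rng.randint(L)
        if lattice[i] == 0:
            continue
        j = (i + rng.choice([-1, 1])) % L            # pick a random neighbour
        if lattice[j] == 0:                          # diffusion: hop onto empty site
            lattice[j], lattice[i] = 1, 0
        else:                                        # occupied pair: try a reaction
            r = rng.random()
            if r < p_coag:                           # 2A -> A
                lattice[i] = 0
            elif r < p_coag + p_fission:             # 2A -> 3A (offspring next to j)
                lattice[(j + rng.choice([-1, 1])) % L] = 1
    return lattice

# usage: follow the particle density towards the active or absorbing phase
lat = (np.random.rand(1000) < 0.5).astype(int)
for _ in range(200):
    sweep(lat)
print("density:", lat.mean())
```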
|
A structure of a left-symmetric algebra on the set of all derivations of a
free algebra is introduced such that its commutator algebra becomes the usual
Lie algebra of derivations. Left and right nilpotent elements of left-symmetric
algebras of derivations are studied. Simple left-symmetric algebras of
derivations and Novikov algebras of derivations are described. It is also
proved that the positive part of the left-symmetric algebra of derivations of a
free nonassociative symmetric $m$-ary algebra in one free variable is generated
by a single derivation, and some right nilpotent derivations are described.
|
An expanding polariton condensate is investigated under pulsed nonresonant
excitation with a small laser pump spot. Far above the condensation threshold
we observe a pronounced increase in the dispersion curvature with a subsequent
linearization of the spectrum and strong luminescence from a ghost branch
orthogonally polarized with respect to the linearly polarized condensate
emission. The presence of the ghost branch has been confirmed in time-resolved
measurements. The dissipative and nonequilibrium effects in the
photoluminescence of polariton condensates and their excitations are discussed.
|
We develop a generalized method of moments (GMM) approach for fast parameter
estimation in a new class of Dirichlet latent variable models with mixed data
types. Parameter estimation via GMM has been demonstrated to have computational
and statistical advantages over alternative methods, such as expectation
maximization, variational inference, and Markov chain Monte Carlo. The key
computational advantage of our method (MELD) is that parameter estimation
does not require instantiation of the latent variables. Moreover, a
representational advantage of the GMM approach is that the behavior of the
model is agnostic to distributional assumptions of the observations. We derive
population moment conditions after marginalizing out the sample-specific
Dirichlet latent variables. The moment conditions only depend on component mean
parameters. We illustrate the utility of our approach on simulated data,
comparing results from MELD to alternative methods, and we show the promise of
our approach through the application of MELD to several data sets.
|
In a clinical setting, epilepsy patients are monitored via video
electroencephalogram (EEG) tests. A video EEG records what the patient
experiences on videotape while an EEG device records their brainwaves.
Currently, there are no existing automated methods for tracking the patient's
location during a seizure, and video recordings of hospital patients are
substantially different from publicly available video benchmark datasets. For
example, the camera angle can be unusual, and patients can be partially covered
with bedding sheets and electrode sets. Being able to track a patient in
real-time with video EEG would be a promising innovation towards improving the
quality of healthcare. Specifically, an automated patient detection system
could supplement clinical oversight and reduce the resource-intensive efforts
of nurses and doctors who need to continuously monitor patients. We evaluate an
ImageNet pre-trained Mask R-CNN, a standard deep learning model for object
detection, on the task of patient detection using our own dataset of 45 videos
of hospital patients, aggregated and curated for this work. We show that
without fine-tuning, ImageNet pre-trained Mask R-CNN models
perform poorly on such data. By fine-tuning the models with a subset of our
dataset, we observe a substantial improvement in patient detection performance,
with a mean average precision of 0.64. We show that the results vary
substantially depending on the video clip.
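A minimal sketch of the fine-tuning step described above, using the torchvision
Mask R-CNN with its detection and mask heads replaced for a single "patient"
class. The data loader, target format, and hyperparameters are placeholders and
not the authors' pipeline.

```python
import torch, torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained Mask R-CNN; replace its heads for 2 classes: background + "patient".
# (On older torchvision, use pretrained=True instead of weights="DEFAULT".)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes=2)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes=2)
model.to(device)

# Illustrative optimizer and fine-tuning loop; `loader` must yield lists of images
# and per-image target dicts with "boxes", "labels", and "masks" (placeholder).
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(loader):
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)      # dict of losses in training mode
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```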
|
A Green's function approach to the inclusive quasielastic ($e,e'$) scattering
is presented. The components of the nuclear response are written in terms of
the single-particle optical model Green's function. The explicit calculation of
the Green's function can be avoided by its spectral representation, which is
based on a biorthogonal expansion in terms of the eigenfunctions of the
non-Hermitian optical potential and of its Hermitian conjugate. This allows one
to treat final state interactions consistently in the inclusive ($e,e'$) and in
the exclusive ($e,e'N$) reactions. Numerical results for the longitudinal and
transverse response functions obtained in a nonrelativistic and in a
relativistic framework are presented and discussed also in comparison with
data.
|
We discuss the effects of fluctuations of the local density of charged
dopants near a first order phase transition in electronic systems that is
driven by a change of the charge carrier density, controlled by the doping level. Using a
generalization of the Imry-Ma argument, we find that the first order transition
is rounded by disorder at or below the lower critical dimension d_c=3, when at
least one of the two phases has no screening ability. The increase of d_c from
2 (as in the random field Ising model) to 3 is due to the long-range nature of
the Coulomb interaction. This result suggests that large clusters of both
phases will appear near such transitions due to disorder, in both two and three
dimensions. Possible implications of our results for manganites and underdoped
cuprates will be discussed.
|
The detection of gravitational waves by the LIGO-Virgo-KAGRA
collaboration has inaugurated a new era in gravitational physics, providing an
opportunity to test general relativity and its modifications in the strong
gravity regime. One such test involves the study of the ringdown phase of
gravitational waves from binary black-hole coalescence, which can be decomposed
into a superposition of quasinormal modes. In general relativity, the spectra
of quasinormal modes depend on the mass, spin, and charge of the final black
hole, but they can be influenced by additional properties of the black hole, as
well as corrections to general relativity. In this work, we employ the modified
Teukolsky formalism developed in a previous study to investigate perturbations
of slowly rotating black holes in a modified theory known as dynamical
Chern-Simons gravity. Specifically, we derive the master equations for the
$\Psi_0$ and $\Psi_4$ Weyl scalar perturbations that characterize the radiative
part of gravitational perturbations, as well as for the scalar field
perturbations. We employ metric reconstruction techniques to obtain explicit
expressions for all relevant quantities. Finally, by leveraging the properties
of spin-weighted spheroidal harmonics to eliminate the angular dependence from
the evolution equations, we derive two radial, second-order, ordinary
differential equations for $\Psi_0$ and $\Psi_4$, respectively. These equations
are coupled to another radial, second-order, ordinary differential equation for
the scalar field perturbations. This work is the first attempt to derive a
master equation for black holes in dynamical Chern-Simons gravity using
curvature perturbations. The master equations can be numerically integrated to
obtain the quasinormal mode spectrum of slowly rotating black holes in this
theory, making progress in the study of ringdown in dynamical Chern-Simons
gravity.
|
We extend the Achlioptas model for the delay of criticality in the
percolation problem. Instead of having a completely random connectivity
pattern, we generalize the idea of the two-site probe in the Achlioptas model
for connecting smaller clusters, by introducing two models: the first one by
allowing any number k of probe sites to be investigated, k being a parameter,
and the second one independent of any specific number of probe sites, but with
a probabilistic character which depends on the size of the resulting clusters.
We numerically determine the complete spectrum of critical points, and our results
indicate that the critical point varies linearly with k beyond k = 3, while in
the range k = 2-3 the dependence is parabolic rather than linear. The more general
model of generating clusters with probability inversely proportional to the
size of the resulting cluster produces a critical point which is equivalent to
the value of k being in the range k = 5-7.
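A hedged sketch of a generalized Achlioptas-style step with a union-find
structure: k candidate edges are probed and the one producing the smallest
merged cluster is added. The abstract's rule probes k sites rather than edges,
so this is an illustrative analogue, not the exact model.

```python
import random

class UnionFind:
    """Union-find with cluster sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def merged_size(self, a, b):
        ra, rb = self.find(a), self.find(b)
        return self.size[ra] if ra == rb else self.size[ra] + self.size[rb]
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def add_edge_k_probe(uf, n, k=3, rng=random):
    """Probe k random candidate edges and add the one whose insertion would
    create the smallest merged cluster (a generalized product-rule step)."""
    candidates = [(rng.randrange(n), rng.randrange(n)) for _ in range(k)]
    a, b = min(candidates, key=lambda e: uf.merged_size(*e))
    uf.union(a, b)

# usage: grow the graph and report the largest cluster fraction
n = 10_000
uf = UnionFind(n)
for _ in range(2 * n):
    add_edge_k_probe(uf, n, k=3)
print(max(uf.size[uf.find(i)] for i in range(n)) / n)
```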
|
With the increasing utilization of deep learning in outdoor settings, its
robustness needs to be enhanced to preserve accuracy in the face of
distribution shifts, such as compression artifacts. Data augmentation is a
widely used technique to improve robustness, thanks to its ease of use and
numerous benefits. However, it requires more training epochs, making it
difficult to train large models with limited computational resources. To
address this problem, we treat data augmentation as supervised domain
generalization (SDG) and adopt an SDG method, the contrastive semantic
alignment (CSA) loss, to improve the robustness and training efficiency of data
augmentation. The proposed method only adds loss during model training and can
be used as a plug-in for existing data augmentation methods. Experiments on the
CIFAR-100 and CUB datasets show that the proposed method improves the
robustness and training efficiency of typical data augmentations.
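A rough PyTorch sketch of a contrastive-semantic-alignment-style loss between
features of clean and augmented views: same-class pairs are pulled together,
different-class pairs pushed apart up to a margin. The margin, weighting, and
feature extraction are assumptions and may differ from the paper's exact CSA
formulation.

```python
import torch
import torch.nn.functional as F

def csa_loss(feat_clean, feat_aug, labels, margin=1.0):
    """Contrastive semantic alignment between clean and augmented features:
    same-class pairs are pulled together, different-class pairs are pushed
    apart up to `margin`.  Assumes the batch contains at least two classes.
    Illustrative sketch only; not necessarily the paper's exact CSA loss."""
    d = torch.cdist(feat_clean, feat_aug)               # pairwise distances (B x B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # class-agreement mask
    pull = (d[same] ** 2).mean()                        # semantic alignment term
    push = (F.relu(margin - d[~same]) ** 2).mean()      # class separation term
    return pull + push

# plug-in usage inside a training step (model.features and augment are placeholders):
#   feats_c = model.features(x); feats_a = model.features(augment(x))
#   loss = cross_entropy + lambda_csa * csa_loss(feats_c, feats_a, y)
```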
|
A Blink Tree latch method and protocol supports synchronous node deletion in
a high concurrency environment. Full source code is available.
|
Given an indeterminate string pattern $p$ and an indeterminate string text
$t$, the problem of order-preserving pattern matching with character
uncertainties ($\mu$OPPM) is to find all substrings of $t$ that satisfy one of
the possible orderings defined by $p$. When the text and pattern are
determinate strings, we are in the presence of the well-studied exact
order-preserving pattern matching (OPPM) problem, with diverse applications in
time series analysis. Despite its relevance, the exact OPPM problem suffers
from two major drawbacks: 1) the inability to deal with indetermination in the
text, thus preventing the analysis of noisy time series; and 2) the inability
to deal with indetermination in the pattern, thus imposing the strict
satisfaction of the orders among all pattern positions. This paper provides the
first polynomial algorithm to answer the $\mu$OPPM problem when indetermination
is observed on the pattern or text. Given two strings with length $m$ and
$O(r)$ uncertain characters per string position, we show that the $\mu$OPPM
problem can be solved in $O(mr\lg r)$ time when one string is indeterminate and
$r\in\mathbb{N}^+$. Mappings into satisfiability problems are provided when
indetermination is observed on both the pattern and the text, and results
concerning the general problem complexity are presented as well, with the
$\mu$OPPM problem proved to be NP-hard in general.
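For context, a naive sketch of the exact (determinate) OPPM problem mentioned
above: a window of the text matches if its values are in the same relative
order as the pattern. This O(nm log m) brute force is not the polynomial
$\mu$OPPM algorithm of the paper.

```python
def order_signature(s):
    """Positions of s sorted by value (ties broken by position): the relative order."""
    return sorted(range(len(s)), key=lambda i: (s[i], i))

def oppm_naive(pattern, text):
    """Exact order-preserving matching for determinate strings: report every
    window of `text` whose relative order equals that of `pattern`.
    Naive O(n m log m) sketch, not the mu-OPPM algorithm for indeterminate strings."""
    m, sig = len(pattern), order_signature(pattern)
    return [i for i in range(len(text) - m + 1)
            if order_signature(text[i:i + m]) == sig]

# example: all windows shaped like (low, high, middle)
print(oppm_naive([10, 30, 20], [5, 1, 4, 2, 9, 3]))   # -> [1, 3]
```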
|
We theoretically investigate the looping dynamics of a linear polymer
immersed in a viscoelastic fluid. The dynamics of the chain is governed by a
Rouse model with a fractional memory kernel recently proposed by Weber et al.
(S. C. Weber, J. A. Theriot, and A. J. Spakowitz, Phys. Rev. E 82, 011913
(2010)). Using the Wilemski-Fixman (G. Wilemski and M. Fixman, J. Chem. Phys.
60, 866 (1974)) formalism we calculate the looping time for a chain in a
viscoelastic fluid where the mean square displacement of the center of mass of
the chain scales as t^(1/2). We observe that looping is faster for the chain in
the viscoelastic fluid than for a Rouse chain in a Newtonian fluid up to a
certain chain length, above which the trend is reversed. Moreover, the looping
time shows no apparent scaling with the chain length for the chain in the
viscoelastic fluid.
|
In Defective Coloring we are given a graph $G = (V, E)$ and two integers
$\chi_d, \Delta^*$ and are asked if we can partition $V$ into $\chi_d$ color
classes, so that each class induces a graph of maximum degree $\Delta^*$. We
investigate the complexity of this generalization of Coloring with respect to
several well-studied graph parameters, and show that the problem is W-hard
parameterized by treewidth, pathwidth, tree-depth, or feedback vertex set, if
$\chi_d = 2$. As expected, this hardness can be extended to larger values of
$\chi_d$ for most of these parameters, with one surprising exception: we show
that the problem is FPT parameterized by feedback vertex set for any $\chi_d
\ge 2$, and hence 2-coloring is the only hard case for this parameter. In
addition to the above, we give an ETH-based lower bound for treewidth and
pathwidth, showing that no algorithm can solve the problem in time $n^{o(pw)}$,
essentially matching the complexity of an algorithm obtained with standard
techniques.
We complement these results by considering the problem's approximability and
show that, with respect to $\Delta^*$, the problem admits an algorithm which
for any $\epsilon > 0$ runs in time $(tw/\epsilon)^{O(tw)}$ and returns a
solution with exactly the desired number of colors that approximates the
optimal $\Delta^*$ within $(1 + \epsilon)$. We also give a $(tw)^{O(tw)}$
algorithm which achieves the desired $\Delta^*$ exactly while 2-approximating
the minimum value of $\chi_d$. We show that this is close to optimal, by
establishing that no FPT algorithm can (under standard assumptions) achieve a
better than $3/2$-approximation to $\chi_d$, even when an extra constant
additive error is also allowed.
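A small verifier sketch of the Defective Coloring condition itself (each color
class induces maximum degree at most $\Delta^*$), useful for fixing ideas; it is
a brute-force check, not one of the parameterized or approximation algorithms
above.

```python
def is_defective_coloring(adj, coloring, chi_d, delta_star):
    """Check that `coloring` uses at most chi_d colors and that every vertex has
    at most delta_star neighbours of its own color, i.e. each color class
    induces a graph of maximum degree <= delta_star.  Brute-force verifier."""
    if len(set(coloring.values())) > chi_d:
        return False
    return all(sum(coloring[u] == coloring[v] for u in adj[v]) <= delta_star
               for v in adj)

# example: a 4-cycle split into 2 color classes, each inducing max degree 1
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_defective_coloring(adj, {0: 0, 1: 0, 2: 1, 3: 1}, chi_d=2, delta_star=1))  # True
```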
|
A theoretical description of a class of unidirectional axisymmetric localized
pulses is given. The equivalence of their representations in the form of
relatively undistorted quasi-spherical waves, in the form of Fourier-Bessel
integrals and in the form of a superposition of plane waves with wave vectors
having positive projections on a given direction is established.
|
In this paper, we present algorithms for designing networks that are robust
to node failures using a minimal or limited number of links. We present
algorithms for both the static network setting and the dynamic network setting,
in which new nodes can arrive in the future. For the static setting, we present
algorithms for constructing the network that is optimal in terms of the number
of links used, for a given number of nodes and a given number of nodes that can
fail. We then
consider the dynamic setting where it is disruptive to remove any of the older
links. For this setting, we present online algorithms for two cases: (i) when
the number of nodes that can fail remains constant and (ii) when only the
proportion of the nodes that can fail remains constant. We show that the
proposed algorithm for the first case saves nearly $3/4$ of the total
possible links at any point in time. We then present algorithms for various
levels of the fraction of nodes that can fail and characterize their link
usage. We show that when $1/2$ of the nodes can fail at any point in
time, the proposed algorithm saves nearly $1/2$ of the total possible links at
any point in time. We show that when the fraction of nodes that can fail is
limited to $1/(2m)$ ($m \in \mathbb{N}$), the proposed algorithm
saves nearly $(1-1/(2m))$ of the total possible links at any point in
time. We also show that when the number of nodes that can fail at any point in
time is $1/2$ of the number of nodes plus $n$, $n \in \mathbb{N}$, the number
of links saved by the proposed algorithm decreases only linearly in $n$. We
conjecture that the saving ratio achieved by the algorithms we present is
optimal for the dynamic setting.
|
We present two-photon photoassociation to the least-bound vibrational level
of the X$^1\Sigma_g^+$ electronic ground state of the $^{86}$Sr$_2$ dimer and
measure a binding energy of $E_b=-83.00(7)(20)$\,kHz. Because of the very small
binding energy, this is a halo state corresponding to the scattering resonance
for two $^{86}$Sr atoms at low temperature. The measured binding energy,
combined with universal theory for a very weakly bound state on a potential
that asymptotes to a van der Waals form, is used to determine an $s$-wave
scattering length $a=810.6(12)$\,$a_0$, which is consistent with, but
substantially more accurate than, the previously determined $a=798(12)\,a_0$
found from mass scaling and precision spectroscopy of other Sr isotopes. For
the intermediate state, we use a bound level on the metastable $^1S_0-{^3P_1}$
potential. Large sensitivity of the dimer binding energy to light near-resonant
with the bound-bound transition to the intermediate state suggests that
$^{86}$Sr has great promise for manipulating atom interactions optically and
probing naturally occurring Efimov states.
|
In smooth strongly convex optimization, knowledge of the strong convexity
parameter is critical for obtaining simple methods with accelerated rates. In
this work, we study a class of methods, based on Polyak steps, where this
knowledge is substituted by that of the optimal value, $f_*$. We first show
slightly improved convergence bounds compared to those previously known for the
classical case of simple gradient descent with Polyak steps; we then derive an
accelerated gradient method with Polyak steps and momentum, along with
convergence guarantees.
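A minimal sketch of the baseline discussed above: gradient descent with the
classical Polyak step size $\gamma_k = (f(x_k) - f_*)/\|\nabla f(x_k)\|^2$,
which uses $f_*$ in place of the strong convexity parameter. The accelerated
momentum variant of the paper is not reproduced here.

```python
import numpy as np

def polyak_gd(f, grad, x0, f_star, n_iter=200):
    """Gradient descent with the classical Polyak step size
    gamma_k = (f(x_k) - f_star) / ||grad f(x_k)||^2.
    Baseline sketch; the accelerated Polyak-momentum method is not shown here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        gnorm2 = g @ g
        if gnorm2 == 0.0:          # already at a stationary point
            break
        x = x - (f(x) - f_star) / gnorm2 * g
    return x

# usage on a simple strongly convex quadratic with known optimal value f_* = 0
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(polyak_gd(f, grad, x0=[1.0, 1.0, 1.0], f_star=0.0))
```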
|
Scene graph representations, which form a graph of visual object nodes
together with their attributes and relations, have proved useful across a
variety of vision and language applications. Recent work in the area has used
Natural Language Processing dependency tree methods to automatically build
scene graphs.
In this work, we present an 'Attention Graph' mechanism that can be trained
end-to-end, and produces a scene graph structure that can be lifted directly
from the top layer of a standard Transformer model.
The scene graphs generated by our model achieve an F-score similarity of
52.21% to ground-truth graphs on the evaluation set using the SPICE metric,
surpassing the best previous approaches by 2.5%.
|
Many musical instruments, such as woodwinds, flutes, or violins, are
self-sustained oscillating systems, i.e. the musician acts as a continuous
energy source that drives an oscillation in the passive resonator, the body of
the instrument, by means of a nonlinear coupling. For single-reed instruments
like the clarinet, there exists a minimal mouth pressure above which sound can
appear. This paper deals with the analysis of this
oscillation threshold, calculated using a modal decomposition of the resonator,
in order to have a better comprehension of how reed characteristics, such as
its strength and its damping, may influence the attack transient of notes.
|
In this talk, the new spacetime-supersymmetric description of the superstring
is reviewed and some of its applications are described. These applications
include the manifestly spacetime-supersymmetric calculation of scattering
amplitudes, the construction of a super-Poincar\'e invariant open superstring
field theory, and the beta-function calculation of low-energy equations of
motion in superspace. Parts of this work have been done in collaboration with
deBoer, van Nieuwenhuizen, Ro\v{c}ek, Sezgin, Skenderis, Stelle, and
especially, Siegel and Vafa.
|
We present here a reciprocal space formulation of the Augmented space
recursion (ASR) which uses the lattice translation symmetry in the full
augmented space to produce configuration averaged quantities, such as spectral
functions and complex band structures. Since the real space part is taken into
account {\sl exactly} and there is no truncation of this in the recursion, the
results are more accurate than recursions in real space. We have also described
the Brillouin zone integration procedure to obtain the configuration averaged
density of states. We apply the technique to Ni$_{50}$Pt$_{50}$ alloy in
conjunction with the tight-binding linearized muffin-tin orbital basis. These
developments in the theoretical basis were necessitated by our future
application to obtain optical conductivity in random systems.
|
In these lectures, we present the behavior of conventional $\bar{q}q$ mesons,
glueballs, and hybrids in the large-$N_{c}$ limit of QCD. To this end, we use
an approach based on rather simple NJL-like bound-state equations. The obtained
large-$N_{c}$ scaling laws are general and coincide with the known results. A
series of consequences, such as the narrowness of certain mesons and the
smallness of some interaction types, the behavior of chiral and dilaton models
at large-$N_{c},$ and the relation to the compositeness condition and the
standard derivation of large-$N_{c}$ results, are explained. The bound-state
formalism also shows that mesonic molecular and dynamically generated states do
not form in the large-$N_{c}$ limit. The same fate seems to apply to
tetraquark states, but here further studies are needed. Next, following the
same approach, baryons are studied as bound states of a generalized diquark
($N_{c}-1$ antisymmetric object) and a quark. Similarities and differences with
regular mesons are discussed. All the standard scaling laws for baryons and
their interaction with mesons are correctly reproduced. The behavior of chiral
models involving baryons and describing chirally invariant mass generation is
investigated. Finally, properties of QCD in the medium at large-$N_{c}$ are
studied: the deconfinement phase transition is investigated along the
temperature and the chemical potential directions, respectively. Within the QCD
phase diagrams, the features of different models at large-$N_{c}$ are reviewed
and the location of the critical endpoint is discussed. In the end, the very
existence of nuclei and the implications of large-$N_{c}$ arguments for neutron
stars are outlined.
|
We prove that the mapping class group of a closed oriented surface of genus
at least two does not have Kazhdan's property (T).
|
Advancements in unsupervised machine translation have enabled the development
of machine translation systems that can translate between languages for which
there is not an abundance of parallel data available. We explored unsupervised
machine translation between Mandarin Chinese and Cantonese. Despite the vast
number of native speakers of Cantonese, there is still no large-scale corpus
for the language, due to the fact that Cantonese is primarily used for oral
communication. The key contributions of our project include: 1. The creation of
a new corpus containing approximately 1 million Cantonese sentences, and 2. A
large-scale comparison across different model architectures, tokenization
schemes, and embedding structures. Our best model trained with character-based
tokenization and a Transformer architecture achieved a character-level BLEU of
25.1 when translating from Mandarin to Cantonese and of 24.4 when translating
from Cantonese to Mandarin. In this paper we discuss our research process,
experiments, and results.
|
We consider a fibration with compact fiber together with a unitarily flat
complex vector bundle over the total space. Under the assumption that the
fiberwise cohomology admits a filtration with unitary factors, we construct
Bismut-Lott analytic torsion classes. The analytic torsion classes obtained
satisfy Igusa's and Ohrt's axiomatization of higher torsion invariants. As a
consequence, we obtain a higher version of the Cheeger-M\"uller/Bismut-Zhang
theorem: for trivial flat line bundles, the Bismut-Lott analytic torsion
classes coincide with the Igusa-Klein higher topological torsions up to a
normalization.
|
For each odd $n \geq 3$, we construct a closed convex hypersurface of
$\mathbb{R}^{n+1}$ that contains a non-degenerate closed geodesic with Morse
index zero. A classical theorem of J. L. Synge would forbid such constructions
for even $n$, so in a sense we prove that Synge's theorem is "sharp." We also
construct stable figure-eights: that is, for each $n \geq 3$ we embed the
figure-eight graph in a closed convex hypersurface of $\mathbb{R}^{n+1}$, such
that sufficiently small variations of the embedding either preserve its image
or must increase its length. These index-zero geodesics and stable
figure-eights are mainly derived by constructing explicit billiard trajectories
with "controlled parallel transport" in convex polytopes.
|
We study the persistent current and the Drude weight of a system of spinless
fermions, with repulsive interactions and a hopping impurity, on a mesoscopic
ring pierced by a magnetic flux, using a Density Matrix Renormalization Group
algorithm for complex fields. Both the Luttinger Liquid (LL) and the Charge
Density Wave (CDW) phases of the system are considered. Under a Jordan-Wigner
transformation, the system is equivalent to a spin-1/2 XXZ chain with a
weakened exchange coupling. We find that the persistent current changes from an
algebraic to an exponential decay with the system size, as the system crosses
from the LL to the CDW phase with increasing interaction $U$. We also find that
in the interacting system the persistent current is invariant under the
impurity transformation $\rho\to 1/\rho $, for large system sizes, where $\rho
$ is the defect strength. The persistent current exhibits a decay that is in
agreement with the behavior obtained for the Drude weight. We find that in the
LL phase the Drude weight decreases algebraically with the number of lattice
sites $N$, due to the interplay of the electron interaction with the impurity,
while in the CDW phase it decreases exponentially, defining a localization
length which decreases with increasing interaction and impurity strength. Our
results show that the impurity and the interactions always decrease the
persistent current, and imply that the Drude weight vanishes in the limit $N\to
\infty $, in both phases.
|
We numerically examine driven skyrmions interacting with a periodic quasi-one
dimensional substrate where the driving force is applied either parallel or
perpendicular to the substrate periodicity direction. For perpendicular
driving, the particles in a purely overdamped system simply slide along the
substrate minima; however, for skyrmions where the Magnus force is relevant, we
find that a rich variety of dynamics can arise. In the single skyrmion limit,
the skyrmion motion is locked along the driving or longitudinal direction for
low drives, while at higher drives a transition occurs to a state in which the
skyrmion moves both transverse and longitudinal to the driving direction.
Within the longitudinally locked phase we find a pronounced speed up effect
that occurs when the Magnus force aligns with the external driving force, while
at the transition to transverse and longitudinal motion, the skyrmion velocity
drops, producing negative differential conductivity. For collectively
interacting skyrmion assemblies, the speed up effect is still present and we
observe a number of distinct dynamical phases, including a sliding smectic
phase, a disordered or moving liquid phase, a moving hexatic phase, and a
moving crystal phase. The transitions between the dynamic phases produce
distinct features in the structure of the skyrmion lattice and in the
velocity-force curves. We map these different phases as a function of the ratio
of the Magnus term to the dissipative term, the substrate strength, the
commensurability ratio, and the magnitude of the driving force.
|
Open set recognition (OSR) assumes that unknown instances appear out of the blue
at inference time. The main challenge of OSR is that the response of models
to unknowns is totally unpredictable. Furthermore, the diversity of the open set
makes the problem harder, since instances have different difficulty levels. Therefore, we
present a novel framework, DIfficulty-Aware Simulator (DIAS), that generates
fakes with diverse difficulty levels to simulate the real world. We first
investigate fakes from a generative adversarial network (GAN) from the classifier's
viewpoint and observe that these are not severely challenging. This leads us to
define the criteria for difficulty by regarding samples generated with GANs
as having moderate difficulty. To produce hard-difficulty examples, we introduce
Copycat, imitating the behavior of the classifier. Furthermore, moderate- and
easy-difficulty samples are also yielded by our modified GAN and Copycat,
respectively. As a result, DIAS outperforms state-of-the-art methods with both
metrics of AUROC and F-score. Our code is available at
https://github.com/wjun0830/Difficulty-Aware-Simulator.
|
It is of extreme importance to monitor and manage the battery health to
enhance the performance and decrease the maintenance cost of operating electric
vehicles. This paper concerns the machine-learning-enabled state-of-health
(SoH) prognosis for Li-ion batteries in electric trucks, where they are used as
energy sources. The paper proposes methods to calculate SoH and cycle life for
the battery packs. We propose autoregressive integrated modeling average
(ARIMA) and supervised learning (bagging with decision tree as the base
estimator; BAG) for forecasting the battery SoH in order to maximize the
battery availability for forklift operations. As the use of data-driven methods
for battery prognostics is increasing, we demonstrate the capabilities of ARIMA
and BAG under circumstances in which there is little prior information available
about the batteries. For this work, we had a unique data set of 31 lithium-ion
battery packs from forklifts in commercial operations. On the one hand, results
indicate that the developed ARIMA model provided relevant tools to analyze the
data from several batteries. On the other hand, BAG model results suggest that
the developed supervised learning model using decision trees as base estimator
yields better forecast accuracy in the presence of large variation in data for
one battery.
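A toy sketch of the two model families named above on a synthetic SoH series: a
statsmodels ARIMA forecaster and a scikit-learn bagging ensemble with
decision-tree base estimators (BAG). The series, ARIMA order, and features are
placeholders, not the paper's forklift data or tuning.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic SoH trajectory (percent) versus cycle count; stands in for one pack's data.
rng = np.random.default_rng(0)
cycles = np.arange(500)
soh = 100.0 - 0.02 * cycles + rng.normal(0.0, 0.1, size=cycles.size)

# 1) ARIMA forecast of the next 50 cycles (order chosen for illustration only).
arima = ARIMA(soh, order=(2, 1, 1)).fit()
soh_arima = arima.forecast(steps=50)

# 2) Bagged decision trees (BAG) regressing SoH on the cycle count.
#    (Older scikit-learn uses base_estimator= instead of estimator=.)
bag = BaggingRegressor(estimator=DecisionTreeRegressor(max_depth=5), n_estimators=100)
bag.fit(cycles.reshape(-1, 1), soh)
soh_bag = bag.predict(np.arange(500, 550).reshape(-1, 1))

print(soh_arima[:5], soh_bag[:5])
```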
|
We generalize the AKSZ construction of topological field theories to allow
the target manifolds to have possibly-degenerate (homotopy) Poisson structures.
Classical AKSZ theories, which exist for all oriented spacetimes, are described
in terms of dioperads. The quantization problem is posed in terms of extending
from dioperads to properads. We conclude by relating the quantization problem
for AKSZ theories on R^d to the formality of the E_d operad, and conjecture a
properadic description of the space of E_d formality quasiisomorphisms.
|
At present, there are several measurements of $B$ decays that exhibit
discrepancies with the predictions of the SM, and suggest the presence of new
physics (NP) in $b \to s \mu^+ \mu^-$ transitions. Many NP models have been
proposed as explanations. These involve the tree-level exchange of a leptoquark
(LQ) or a flavor-changing $Z'$ boson. In this paper we examine whether it is
possible to distinguish the various models via CP-violating effects in $B \to
K^{(*)} \mu^+ \mu^-$. Using fits to the data, we find the following results. Of
all possible LQ models, only three can explain the data, and these are all
equivalent as far as $b \to s \mu^+ \mu^-$ processes are concerned. In this
single LQ model, the weak phase of the coupling can be large, leading to some
sizeable CP asymmetries in $B \to K^{(*)} \mu^+ \mu^-$. There is a spectrum of
$Z'$ models; the key parameter is $g_L^{\mu\mu}$, which describes the strength
of the $Z'$ coupling to $\mu^+\mu^-$. If $g_L^{\mu\mu}$ is small (large), the
constraints from $B^0_s$-${\bar B}^0_s$ mixing are stringent (weak), leading to
a small (large) value of the NP weak phase, and corresponding small (large) CP
asymmetries. We therefore find that the measurement of CP-violating asymmetries
in $B \to K^{(*)} \mu^+ \mu^-$ can indeed distinguish among NP $b \to s \mu^+
\mu^-$ models.
|
The Ultrarelativistic Quantum Molecular Dynamics [UrQMD] model is widely used
to simulate heavy ion collisions in broad energy ranges. It consists of various
components to implement the different physical processes underlying the
transport approach. A major building block are the shared tables of constants,
implementing the baryon masses and widths. Unfortunately, many of these input
parameters are not well known experimentally. In view of the upcoming physics
program at FAIR, it is therefore of fundamental interest to explore the
stability of the model results when these parameters are varied. We perform a
systematic variation of particle masses and widths within the limits proposed
by the particle data group (or up to 10%). We find that the model results do
only weakly depend on the variation of these input parameters. Thus, we
conclude that the present implementation is stable with respect to the
modification of not yet well specified particle parameters.
|
The inclusive reactions $pp \rightarrow e^+ e^- X$ and $np \rightarrow e^+
e^- X$ at the laboratory kinetic energy of 1.25 GeV are investigated in a model
of dominance of nucleon and $\Delta$ resonances. Experimental data for these
reactions have recently been reported by the HADES Collaboration. In the
original model, the dileptons are produced either from the decays of nucleon
and $\Delta$ resonances $R \rightarrow N e^+ e^-$ or from the Dalitz decays of
$\pi^0$- and $\eta$-mesons created in the $R \to N\pi^0$ and $R \to N\eta$
decays. We found that the distribution of dilepton invariant masses in the $pp
\rightarrow e^+ e^- X$ reaction is well reproduced by the contributions of $R
\rightarrow N e^+ e^-$ decays and $R \rightarrow N \pi^0$, $\pi^0 \to \gamma
e^+e^-$ decays. Among the resonances, the predominant contribution comes from
the $\Delta(1232)$, which determines both the direct decay channel $R
\rightarrow N e^+ e^-$ and the pion decay channel. In the collisions $np
\rightarrow e^+ e^- X$, additional significant contributions arise from the
$\eta$-meson Dalitz decays, produced in the $np \rightarrow np\eta$ and $np
\rightarrow d\eta$ reactions, the radiative capture $np \rightarrow d e^+ e^-$,
and the $np \rightarrow np e^+ e^-$ bremsstrahlung. These mechanisms may partly
explain the strong excess of dileptons in the cross section for collisions of
$np$ versus $pp$, which ranges from 7 to 100 times for the dilepton invariant
masses of 0.2 to 0.5 GeV.
|
In this short Comment, the difference between the treatment of the gauge function
presented in~\cite{VH1} and that in the work of this author is analyzed. It is
shown why a certain transformation of the gauge function made by Hnizdo and Vaman
gives an incorrect result.
|
We present several competitive ratios for the online busy time scheduling
problem with flexible jobs. The busy time scheduling problem is a fundamental
scheduling problem motivated by energy efficiency with the goal of minimizing
the total time that machines with multiple processors are enabled. In the busy
time scheduling problem, an unbounded number of machines is given, where each
machine has $g$ processors. No more than $g$ jobs can be scheduled
simultaneously on each machine. A machine consumes energy whenever at least one
job is scheduled at any time on the machine. Scheduling a single job at some
time $t$ consumes the same amount of energy as scheduling $g$ jobs at time $t$.
In the online setting, jobs are revealed when they are released.
We consider the cases where $g$ is unbounded and bounded. In this paper, we
revisit the bounds for the unbounded general setting from the literature and
tighten them significantly. We also consider agreeable jobs. For the bounded
setting, we show a tightened upper bound. Furthermore, we show the first
constant competitive ratio in the bounded setting that does not require
lookahead.
|
We present various Lattice Boltzmann Models which reproduce the effects of
rough walls, shear thinning and granular flow. We examine the boundary layers
generated by the roughness of the walls. Shear thinning produces plug flow with
a sharp density contrast at the boundaries. Density waves are spontaneously
generated when the viscosity has a nonlinear dependence on density which
characterizes granular flow.
|
We used low-energy, momentum-resolved inelastic electron scattering to study
surface collective modes of the three-dimensional topological insulators
Bi$_2$Se$_3$ and Bi$_{0.5}$Sb$_{1.5}$Te$_{3-x}$Se$_{x}$. Our goal was to
identify the "spin plasmon" predicted by Raghu and co-workers [S. Raghu, et
al., Phys. Rev. Lett. 104, 116401 (2010)]. Instead, we found that the primary
collective mode is a surface plasmon arising from the bulk free carriers in
these materials. This excitation dominates the spectral weight in the bosonic
function of the surface, $\chi''(\mathbf{q},\omega)$, at THz energy scales, and
is the most likely origin of a quasiparticle dispersion kink observed in
previous photoemission experiments. Our study suggests that the spin plasmon
may mix with this other surface mode, calling for a more nuanced understanding
of optical experiments in which the spin plasmon is reported to play a role.
|
In this paper, combined bimodal microscopic visualization of biological
objects based on reflection-mode optical and photoacoustic measurements is
presented. A gas-microphone configuration was chosen for the registration of the
photothermal response. Precise positioning of the intensity-modulated scanning
laser beam was performed using acousto-optic deflectors.
Photoacoustic images are shown to give details complementary to the optical
images obtained in reflected light. Specifically, the photoacoustic imaging
mode allows much better visualization of features with enhanced heat
localization due to reduced heat outflow. For example, the photoacoustic
imaging mode was especially successful in visualizing the micro-hairs of
Drosophila flies. Furthermore, the photoacoustic image quality was shown to be
adjustable through the modulation frequency.
|
An analytic solution to a generalized Grad-Shafranov equation with flow of
arbitrary direction is obtained upon adopting the generic linearizing ansatz
for the free functions related to the poloidal current, the static pressure and
the electric field. Subsequently, a tokamak-relevant D-shaped equilibrium with
sheared flow is constructed using the aforementioned solution.
|
An improved version has been submitted with the title: Recessional velocities
and Hubble's law in Schwarzschild-de Sitter space. arXiv:1001.1875
|
A novel image analysis algorithm applied to images of Nuclear Track
Detectors (NTDs) is presented. The process, involving sequential application of
deconvolution and convolution techniques followed by the application of an
Artificial Neural Network (ANN), identifies the etch-pit openings in NTD
images with a higher degree of success than other conventional image
analysis techniques.
|
In this article we consider the three-parameter family of elliptic curves
$E_t: y^2-4(x-t_1)^3+t_2(x-t_1)+t_3=0$, $t\in\mathbb{C}^3$, and study the modular
holomorphic foliation $\mathcal{F}_{\omega}$ in $\mathbb{C}^3$ whose leaves are the
constant loci of the integration of a 1-form $\omega$ over topological cycles of
$E_t$. Using the Gauss-Manin connection of the family $E_t$, we show that
$\mathcal{F}_{\omega}$ is an algebraic foliation. In the case
$\omega=\frac{xdx}{y}$, we prove that a transcendent leaf of
$\mathcal{F}_{\omega}$ contains at most one point with algebraic coordinates and
that the leaves of $\mathcal{F}_{\omega}$ corresponding to the zeros of
integrals never cross such a point. Using the generalized period map
associated to the family $E_t$, we find a uniformization of $\mathcal{F}_{\omega}$
in $T$, where $T\subset \mathbb{C}^3$ is the locus of parameters $t$ for which
$E_t$ is smooth. We also find a real first integral of $\mathcal{F}_\omega$
restricted to $T$ and show that $\mathcal{F}_{\omega}$ is given by the Ramanujan
relations between the Eisenstein series.
|
Vision-Language (VL) models have gained significant research focus, enabling
remarkable advances in multimodal reasoning. These architectures typically
comprise a vision encoder, a Large Language Model (LLM), and a projection
module that aligns visual features with the LLM's representation space. Despite
their success, a critical limitation persists: the vision encoding process
remains decoupled from user queries, often in the form of image-related
questions. Consequently, the resulting visual features may not be optimally
attuned to the query-specific elements of the image. To address this, we
introduce QA-ViT, a Question Aware Vision Transformer approach for multimodal
reasoning, which embeds question awareness directly within the vision encoder.
This integration results in dynamic visual features that focus on the image
aspects relevant to the posed question. QA-ViT is model-agnostic and can be incorporated
efficiently into any VL architecture. Extensive experiments demonstrate the
effectiveness of applying our method to various multimodal architectures,
leading to consistent improvement across diverse tasks and showcasing its
potential for enhancing visual and scene-text understanding.
|
We examine the properties of a dc-biased quantum dot in the Coulomb blockade
regime. For voltages V large compared to the Kondo temperature T_K, the physics
is governed by the scales V and gamma, where gamma ~ V/ln^2(V/T_K) is the
non-equilibrium decoherence rate induced by the voltage-driven current. Based
on scaling arguments, self-consistent perturbation theory and perturbative
renormalization group, we argue that due to the large gamma, the system can be
described by renormalized perturbation theory for ln(V/T_K) >> 1. However, in
certain variants of the Kondo problem, two-channel Kondo physics is induced by
a large voltage V.
|
We propose an image encryption scheme based on quasi-resonant Rossby/drift
wave triads (related to elliptic surfaces) and Mordell elliptic curves (MECs).
By defining a total order on quasi-resonant triads, at a first stage we
construct quasi-resonant triads using auxiliary parameters of elliptic surfaces
in order to generate pseudo-random numbers. At a second stage, we employ an MEC
to construct a dynamic substitution box (S-box) for the plain image. The
generated pseudo-random numbers and S-box are used to provide diffusion and
confusion, respectively, in the tested image. We test the proposed scheme
against well-known attacks by encrypting all gray images taken from the
USC-SIPI image database. Our experimental results indicate the high security of
the newly developed scheme. Finally, via extensive comparisons we show that the
new scheme outperforms other popular schemes.
|
We present a method of locally inverting the sign of the coupling term in
tight-binding systems, by means of inserting a judiciously designed ancillary
site and eigenmode matching of the resulting vertex triplet. Our technique can
be universally applied to all lattice configurations, as long as the individual
sites can be detuned. We experimentally verify this method in laser-written
photonic lattices and confirm both the magnitude and the sign of the coupling
by interferometric measurements. Based on these findings, we demonstrate how
such universal sign-flipped coupling links can be embedded into extended
lattice structures to impose a $\mathbb{Z}_2$-gauge transformation. This opens
a new avenue for investigations on topological effects arising from magnetic
fields with aperiodic flux patterns or in disordered systems.
|
SOFIA/HAWC+ 154 $\mu$m Far-Infrared polarimetry observations of the
well-studied edge-on galaxy NGC 891 are analyzed and compared to simple disk
models with ordered (planar) and turbulent magnetic fields. The overall low
magnitude and the narrow dispersion of fractional polarization observed in the
disk require significant turbulence and a large number of turbulent
decorrelation cells along the line-of-sight through the plane. Higher surface
brightness regions along the major axis to either side of the nucleus show a
further reduction in polarization and are consistent with a view tangent to a
spiral feature in our disk models. The nucleus also has a similar low
polarization, and this is inconsistent with our model spiral galaxy where the
ordered magnetic field component would be nearly perpendicular to the
line-of-sight through the nucleus on an edge-on view. A model with a barred
spiral morphology with a magnetic field geometry derived from radio synchrotron
observations of face-on barred spirals fits the data much better. There is
clear evidence for a vertical field extending into the halo from one location
in the disk coincident with a polarization null point seen in near-infrared
polarimetry, probably due to a blowout caused by star formation. Although our
observations were capable of detecting a vertical magnetic field geometry
elsewhere in the halo, no clear signature was found. A reduced polarization due
to a mix of planar and vertical fields in the dusty regions of the halo best
explains our observations, but unusually significant turbulence cannot be ruled
out.
|
We introduce and study the random non-compact metric space called the
Brownian plane, which is obtained as the scaling limit of the uniform infinite
planar quadrangulation. Alternatively, the Brownian plane is identified as the
Gromov-Hausdorff tangent cone in distribution of the Brownian map at its root
vertex, and it also arises as the scaling limit of uniformly distributed
(finite) planar quadrangulations with n faces when the scaling factor tends to
0 less fast than n^{-1/4}. We discuss various properties of the Brownian plane.
In particular, we prove that the Brownian plane is homeomorphic to the plane,
and we get detailed information about geodesic rays to infinity.
|
This paper proposes a method to construct an adaptive agent that is universal
with respect to a given class of experts, where each expert is an agent that
has been designed specifically for a particular environment. This adaptive
control problem is formalized as the problem of minimizing the relative entropy
of the adaptive agent from the expert that is most suitable for the unknown
environment. If the agent is a passive observer, then the optimal solution is
the well-known Bayesian predictor. However, if the agent is active, then its
past actions need to be treated as causal interventions on the I/O stream
rather than as ordinary probabilistic conditioning. Here it is shown that the solution
to this new variational problem is given by a stochastic controller called the
Bayesian control rule, which implements adaptive behavior as a mixture of
experts. Furthermore, it is shown that under mild assumptions, the Bayesian
control rule converges to the control law of the most suitable expert.
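A schematic sketch of the Bayesian control rule as a mixture of experts: sample
an expert from the posterior, act with it, then update the posterior on the
observation given the action, treating past actions as interventions that carry
no likelihood of their own. The expert and environment interfaces below are
assumed placeholders, not the paper's formalism.

```python
import numpy as np

def bayesian_control_rule(experts, env, n_steps=1000, rng=np.random.default_rng(0)):
    """Bayesian control rule as a mixture of experts: sample an expert from the
    posterior, act with it, then update the posterior with the likelihood of the
    observation given the chosen action.  Past actions are treated as interventions
    and contribute no likelihood themselves.  `experts` is a list of objects with
    .act(rng) and .likelihood(action, obs); `env(action, rng)` returns an observation.
    All interfaces here are illustrative placeholders."""
    log_post = np.zeros(len(experts))                 # uniform prior over experts
    for _ in range(n_steps):
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        k = rng.choice(len(experts), p=post)          # sample an expert index
        action = experts[k].act(rng)                  # act as that expert would
        obs = env(action, rng)                        # environment responds
        for i, e in enumerate(experts):               # condition on obs given action
            log_post[i] += np.log(e.likelihood(action, obs) + 1e-12)
    return log_post
```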
|
In our previous publications [1-3] it has been proven that the general
iteration solution of the Schwinger-Dyson equation for the full gluon propagator
(i.e., when the skeleton loop integrals contributing to the gluon
self-energy have to be iterated, which means that no
truncations/approximations have been made) can be algebraically (i.e., exactly)
decomposed as the sum of two principally different terms. The first term is
the Laurent expansion in integer powers of severe (i.e., more singular than
$1/q^2$) infrared singularities, accompanied by the corresponding powers of the
mass gap and multiplied by the corresponding residues. The second, standard term
is always as singular as $1/q^2$ and otherwise remains undetermined.
Here it is explicitly shown that the infrared renormalization of the mass gap
alone is needed to render the theory free of all severe infrared singularities in
the gluon sector. Moreover, this leads to the gluon confinement criterion in a
gauge-invariant way. As a result of the infrared renormalization of the mass
gap in the initial Laurent expansion, which is dimensionally regularized, only the
simplest severe infrared singularity $(q^2)^{-2}$ survives. It is
multiplied by the mass gap squared, which is the scale responsible for the
large scale structure of the true QCD vacuum. The $\delta$-type regularization
of the simplest severe infrared singularity (and its generalization for the
multi-loop skeleton integrals) is provided by the dimensional regularization
method correctly implemented into the theory of distributions. This makes it
possible to formulate exactly and explicitly the full gluon propagator (up to
its unimportant perturbative part).
|
This paper introduces a novel technique to decide the satisfiability of
formulae written in the language of Linear Temporal Logic with Both future and
past operators and atomic formulae belonging to constraint system D (CLTLB(D)
for short). The technique is based on the concept of bounded satisfiability,
and hinges on an encoding of CLTLB(D) formulae into QF-EUD, the theory of
quantifier-free equality and uninterpreted functions combined with D. Similarly
to standard LTL, where bounded model-checking and SAT-solvers can be used as an
alternative to automata-theoretic approaches to model-checking, our approach
allows users to solve the satisfiability problem for CLTLB(D) formulae through
SMT-solving techniques, rather than by checking the emptiness of the language
of a suitable automaton A_{\phi}. The technique is effective, and it has been
implemented in our Zot formal verification tool.
|
We compute the exact density of states and 2-point function of the
$\mathcal{N} =2$ super-symmetric SYK model in the large $N$ double-scaled
limit, by using combinatorial tools that relate the moments of the distribution
to sums over oriented chord diagrams. In particular we show how SUSY is
realized on the (highly degenerate) Hilbert space of chords. We further
calculate analytically the number of ground states of the model in each charge
sector at finite $N$, and compare it to the results from the double-scaled
limit. Our results reduce to the super-Schwarzian action in the low energy
short interaction length limit. They imply that the conformal ansatz of the
2-point function is inconsistent due to the large number of ground states, and
we show how to add this contribution. We also discuss the relation of the model
to $SL_q(2|1)$. For completeness we present an overview of the $\mathcal{N}=1$
super-symmetric SYK model in the large $N$ double-scaled limit.
|
Using GAP4, we determine the number of ad-nilpotent and abelian ideals of a
parabolic subalgebra of a simple Lie algebra of exceptional types E, F or G.
|
Fractal mesh refinement enables mesh refinement regimes to the Gibson scale
and beyond. Conventional multifractal subgrid models have cost considerations
that also lead to local mesh refinement. More importantly, the application
of these models to compressible turbulent deflagration fronts is a topic for
future research, while subgrid models tuned to this complex physics lack a
multifractal search focus and, for cost reasons, do not allow the large search
volumes required to find the sought-for rare event that triggers a DDT.
Here we propose methods to resolve fine scales and locate rare possible DDT
trigger events within large volumes while addressing multifractal issues at a
feasible computational cost. We are motivated by the goal of confirming or
assessing the hypothesis of a delayed detonation in type Ia supernovae and of
assessing the delay in this event if it is found to occur.
|
We propose a model for growing networks based on a finite memory of the
nodes. The model shows stylized features of real-world networks: power law
distribution of degree, linear preferential attachment of new links and a
negative correlation between the age of a node and its link attachment rate.
Notably, the degree distribution is conserved even though only the most
recently grown part of the network is considered. This feature is relevant
because real-world networks truncated in the same way exhibit a power-law
distribution in the degree. As the network grows, the clustering reaches an
asymptotic value larger than for regular lattices of the same average
connectivity. These high-clustering scale-free networks indicate that memory
effects could be crucial for a correct description of the dynamics of growing
networks.
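An illustrative sketch of a growth step with finite memory: new links attach
preferentially by degree, but only to nodes added within the last M steps. The
memory window and attachment details are assumptions and may differ from the
abstract's exact model.

```python
import random
from collections import Counter

def grow_memory_network(n_nodes, m_links=2, memory=200, rng=random):
    """Grow a network where each new node attaches m_links edges, preferentially
    by degree, but only to nodes created within the last `memory` steps.
    Illustrative aging/memory rule; may differ from the abstract's exact model."""
    degree = Counter({0: 0, 1: 0})
    edges = [(0, 1)]
    degree[0] += 1
    degree[1] += 1
    for new in range(2, n_nodes):
        window = range(max(0, new - memory), new)      # only "remembered" nodes
        weights = [degree[v] + 1 for v in window]       # +1 keeps new nodes reachable
        targets = set()
        while len(targets) < min(m_links, len(window)):
            targets.add(rng.choices(list(window), weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, degree = grow_memory_network(5000)
print("max degree:", max(degree.values()))
```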
|
We focus on the problem of finding a non-linear classification function that
lies in a Reproducing Kernel Hilbert Space (RKHS) both from the primal point of
view (finding a perfect separator when one exists) and the dual point of view
(giving a certificate of non-existence), with special focus on generalizations
of two classical schemes - the Perceptron (primal) and Von-Neumann (dual)
algorithms.
We cast our problem as one of maximizing the regularized normalized
hard-margin ($\rho$) in an RKHS and use the Representer Theorem to rephrase it
in terms of a Mahalanobis dot-product/semi-norm associated with the kernel's
(normalized and signed) Gram matrix. We derive an accelerated smoothed
algorithm with a convergence rate of $\tfrac{\sqrt {\log n}}{\rho}$ given $n$
separable points, which is strikingly similar to the classical kernelized
Perceptron algorithm whose rate is $\tfrac1{\rho^2}$. When no such classifier
exists, we prove a version of Gordan's separation theorem for RKHSs, and give a
reinterpretation of negative margins. This allows us to give guarantees for a
primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|},
\tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if
the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.
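For reference, a sketch of the classical kernelized Perceptron in dual form,
the baseline with the $1/\rho^2$ rate mentioned above; the accelerated smoothed
method and the primal-dual certificate algorithm of the paper are not
reproduced here. The RBF kernel and toy data are illustrative assumptions.

```python
import numpy as np

def kernel_perceptron(K, y, max_epochs=100):
    """Classical kernelized Perceptron in dual form: `K` is the n x n Gram matrix,
    `y` in {-1, +1}.  The decision value of point i is sum_j alpha_j * y_j * K[j, i]."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
                alpha[i] += 1.0          # standard Perceptron update on a mistake
                mistakes += 1
        if mistakes == 0:                # found a perfect separator in the RKHS
            break
    return alpha

# usage with an RBF kernel on toy separable data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
alpha = kernel_perceptron(K, y)
print("mistakes on final pass:", int((y * ((alpha * y) @ K) <= 0).sum()))
```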
|
We investigate a model of two Kondo impurities coupled via an Ising
interaction. Exploiting the mapping to a generalized single-impurity Anderson
model, we establish that the model has a singlet and a (pseudospin) doublet
phase separated by a Kosterlitz-Thouless quantum phase transition. Based on a
strong-coupling analysis and renormalization group arguments, we show that at
this transition the conductance G through the system either displays a
zero-bias anomaly, G ~ |V|^{-2(\sqrt{2}-1)}, or takes a universal value, G =
e^2/(\pi\hbar) cos^2[\pi/(2\sqrt{2})], depending on the experimental setup.
Close to the Toulouse point of the individual Kondo impurities, the
strong-coupling analysis allows us to obtain the location of the phase boundary
analytically. For general model parameters, we determine the phase diagram and
investigate the thermodynamics using numerical renormalization group
calculations. In the singlet phase close to the quantum phase transition, the
entropy is quenched in two steps: first the two Ising-coupled spins form a
magnetic mini-domain which is, in a second step, screened by a Kondoesque
collective resonance in an effective solitonic Fermi sea. In addition, we
present a flow equation analysis which provides a different mapping of the
two-impurity model to a generalized single-impurity Anderson model in terms of
fully renormalized couplings, which is applicable for the whole range of model
parameters.
|
Recent numerical simulations have shown the strong impact of helicity on
homogeneous rotating hydrodynamic turbulence. The main effect can be
summarized through the following law, $n+\tilde n = -4$, where $n$ and $\tilde
n$ are respectively the power law indices of the one-dimensional energy and
helicity spectra. We investigate this rotating turbulence problem in the small
Rossby number limit by using the asymptotic weak turbulence theory derived
previously. We show that the empirical law is an exact solution of the helicity
equation where the power law indices correspond to perpendicular (to the
rotation axis) wave number spectra. It is proposed that when the cascade
towards small-scales tends to be dominated by the helicity flux the solution
tends to $\tilde n = -2$, whereas it is $\tilde n = -3/2$ when the energy flux
dominates. The latter solution is compatible with the so-called maximal
helicity state previously observed numerically and derived theoretically in the
weak turbulence regime when only the energy equation is used, whereas the
former solution is constrained by a locality condition.
|
Initial reactions involved in the bacterial degradation of polycyclic
aromatic hydrocarbons (PAHs) include a ring-dihydroxylation catalyzed by a
dioxygenase and a subsequent oxidation of the dihydrodiol products by a
dehydrogenase. In this study, the dihydrodiol dehydrogenase from the
PAH-degrading Sphingomonas strain CHY-1 has been characterized. The bphB gene
encoding PAH dihydrodiol dehydrogenase (PDDH) was cloned and overexpressed as a
His-tagged protein. The recombinant protein was purified as a homotetramer with
an apparent Mr of 110,000. PDDH oxidized the cis-dihydrodiols derived from
biphenyl and eight polycyclic hydrocarbons, including chrysene,
benz[a]anthracene, and benzo[a]pyrene, to the corresponding catechols. Remarkably,
the enzyme oxidized pyrene 4,5-dihydrodiol, whereas pyrene is not metabolized
by strain CHY-1. The PAH catechols produced by PDDH rapidly auto-oxidized in
air but were regenerated upon reaction of the o-quinones formed with NADH.
Kinetic analyses performed under anoxic conditions revealed that the enzyme
efficiently utilized two- to four-ring dihydrodiols, with Km values in the
range of 1.4 to 7.1 $\mu$M, and exhibited a much higher Michaelis constant for
NAD+ (Km of 160 $\mu$M). At pH 7.0, the specificity constant ranged from (1.3
$\pm$ 0.1) $\times$ 10$^6$ M$^{-1}$ s$^{-1}$ with benz[a]anthracene 1,2-dihydrodiol to (20.0 $\pm$
0.8) $\times$ 10$^6$ M$^{-1}$ s$^{-1}$ with naphthalene 1,2-dihydrodiol. The catalytic activity of
the enzyme was 13-fold higher at pH 9.5. PDDH was subjected to inhibition by
NADH and by 3,4-dihydroxyphenanthrene, and the inhibition patterns suggested
that the mechanism of the reaction was ordered Bi Bi. The regulation of PDDH
activity appears as a means to prevent the accumulation of PAH catechols in
bacterial cells.
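To make the quoted kinetic quantities concrete, the sketch below evaluates a Michaelis-Menten rate and shows how the specificity constant kcat/Km governs the low-substrate regime; the numerical values are hypothetical placeholders within the quoted ranges, not measured parameters from this study.

```python
def michaelis_menten_rate(kcat, Km, E, S):
    """v = kcat*[E]*[S]/(Km + [S]); at [S] << Km this reduces to
    (kcat/Km)*[E]*[S], so the specificity constant kcat/Km (in M^-1 s^-1)
    is the effective second-order rate constant at low substrate."""
    return kcat * E * S / (Km + S)

# Hypothetical placeholder values (not measured parameters from the study):
Km = 5.0e-6         # 5 uM, inside the quoted 1.4-7.1 uM range
spec = 10.0e6       # kcat/Km in M^-1 s^-1, inside the quoted range
kcat = spec * Km    # implied turnover number, s^-1
print(kcat, michaelis_menten_rate(kcat, Km, E=1e-9, S=1e-6))
```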
|
A variant of a phenomenological theory of humankind's existence after the
demographic transition is proposed, which treats the time of the demographic
transition as a point of phase transition and takes into account the appearance
of a new phase of mankind. The theory is based on physical phenomenological
theories of phase transitions and on classical predator-prey equations for the
two phases of mankind; it incorporates the assumption of a multifractal nature
of the population count along the temporal axis and contains control
parameters. The theory includes a scenario in which the new phase of humanity
destroys the currently existing human population, as well as a scenario of
coexistence of the old and new phases. In the particular case where the new
phase of mankind is absent, the equations of the theory reduce to those of the
phenomenological growth theories of Kapitza, Foerster, Hoerner, Kobelev and
Nugaeva, and Johansen and Sornette.
|
It is shown that dominant trapping sets of regular LDPC codes, so-called
absorption sets, undergo a two-phased dynamic behavior in the iterative
message-passing decoding algorithm. Using a linear dynamic model for the
iteration behavior of these sets, it is shown that they undergo an initial
geometric growth phase which stabilizes in a final bit-flipping behavior where
the algorithm reaches a fixed point. This analysis is shown to lead to very
accurate numerical calculations of the error floor bit error rates down to
error rates that are inaccessible by simulation. The topology of the dominant
absorption sets of an example code, the IEEE 802.3an (2048,1723) regular LDPC
code, is identified and tabulated using topological relationships in
combination with search algorithms.
|
We study the convective and absolute forms of azimuthal magnetorotational
instability (AMRI) in a Taylor-Couette (TC) flow with an imposed azimuthal
magnetic field. We show that the domain of the convective AMRI is wider than
that of the absolute AMRI. Actually, it is the absolute instability which is
the most relevant and important for magnetic TC flow experiments. The absolute
AMRI, unlike the convective one, stays in the device, displaying a sustained
growth that can be experimentally detected. We also study the global AMRI in a
TC flow of finite height using DNS and find that its emerging butterfly-type
structure -- a spatio-temporal variation in the form of upward and downward
traveling waves -- is in a very good agreement with the linear stability
analysis, which indicates the presence of two dominant absolute AMRI modes in
the flow giving rise to this global butterfly pattern.
|
In the first part of this paper we show that a set $E$ has locally finite
$s$-perimeter if and only if it can be approximated in an appropriate sense by
smooth open sets. In the second part we prove some elementary properties of
local and global $s$-minimal sets, such as existence and compactness. We also
compare the two notions of minimizer (i.e. local and global), showing that in
bounded open sets with Lipschitz boundary they coincide. However, in general
this is not true in unbounded open sets, where a global $s$-minimal set may
fail to exist (we provide an example in the case of a cylinder
$\Omega\times\mathbb{R}$).
|
The two dimensional BTW model of SOC, with probabilistically nonuniform
distribution of particles among the nearest neighbouring sites, is studied by
computer simulation. When the value of height variable of a particular site
reaches the critical value (z c = 4), the value of height variable of that site
is reduced by four units by distributing four particles among the four nearest
neighbouring sites. In this paper, two particles are distributed equally between
the two nearest neighbouring sites along the X-axis, whereas the distribution of
the other two particles along the Y-axis is probabilistically nonuniform. The
variation of the spatial average of the height
variable with time is studied. In the SOC state, the distribution of avalanche
sizes and durations is obtained. The total number of topplings that occurred during
the stipulated time of evolution is also calculated.
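A minimal simulation sketch of the model described above is given below; since the abstract does not spell out the exact nonuniform rule along the Y-axis, the rule used here (each of the two Y-grains goes up with probability p and down otherwise) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(z, p=0.7, zc=4):
    """Relax an LxL sandpile with open boundaries (grains leaving the edge are
    lost). Each unstable site sheds zc = 4 grains: two go deterministically to
    the +/-x neighbours; each of the two y-grains goes to +y with probability p
    and to -y otherwise (assumed reading of the nonuniform rule).
    Returns the number of topplings, i.e. the avalanche size."""
    L = z.shape[0]
    topplings = 0
    unstable = np.argwhere(z >= zc)
    while len(unstable):
        for i, j in unstable:
            z[i, j] -= zc
            topplings += 1
            if i + 1 < L: z[i + 1, j] += 1      # +x neighbour
            if i - 1 >= 0: z[i - 1, j] += 1     # -x neighbour
            for _ in range(2):                  # the two y-grains
                if rng.random() < p:
                    if j + 1 < L: z[i, j + 1] += 1
                else:
                    if j - 1 >= 0: z[i, j - 1] += 1
        unstable = np.argwhere(z >= zc)
    return topplings

# Slow drive: add one grain at a random site, relax, record avalanche sizes.
L = 32
z = np.zeros((L, L), dtype=int)
sizes = []
for _ in range(2000):
    i, j = rng.integers(L, size=2)
    z[i, j] += 1                               # add one grain
    sizes.append(relax(z))                     # data for the avalanche size distribution
```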
|
Chuang and Rouquier describe an action by perverse equivalences on the set of
bases of a triangulated category of Calabi-Yau dimension $-1$. We develop an
analogue of their theory for Calabi-Yau categories of dimension $w<0$ and show
it is equivalent to the mutation theory of $w$-simple-minded systems.
Given a non-positively graded, finite-dimensional symmetric algebra $A$, we
show that the differential graded stable category of $A$ has negative
Calabi-Yau dimension. When $A$ is a Brauer tree algebra, we construct a
combinatorial model of the dg-stable category and show that perverse
equivalences act transitively on the set of $|w|$-bases.
|
In this paper, p-dispersion problems are studied to select $p\geqslant 2$
representative points from a large 2D Pareto Front (PF), solution of
bi-objective optimization. Four standard p-dispersion variants are considered.
A novel variant, Max-Sum-Neighbor p-dispersion, is introduced for the specific
case of a 2D PF. Firstly, $2$-dispersion and $3$-dispersion problems are proven
solvable in $O(n)$ time in a 2D PF. Secondly, dynamic programming algorithms
are designed for three p-dispersion variants, proving polynomial complexities
in a 2D PF. Max-min p-dispersion is solvable in $O(pn\log n)$ time and $O(n)$
memory space. Max-Sum-Neighbor p-dispersion is proven solvable in $O(pn^2)$
time and $O(n)$ space. Max-Sum-min p-dispersion is solvable in $O(pn^3)$ time
and $O(pn^2)$ space, this complexity holds also in 1D, proving for the first
time that Max-Sum-min p-dispersion is polynomial in 1D. Furthermore, properties
of these algorithms are discussed for an efficient implementation and for a
practical application inside bi-objective meta-heuristics.
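For illustration only, the sketch below tackles Max-min p-dispersion on a 2D Pareto front with a threshold binary search plus a greedy feasibility check; it relies on the front being sorted by the first objective and is not the paper's $O(pn\log n)$ dynamic programming algorithm.

```python
import numpy as np

def feasible(pts, p, d_min):
    """Greedy check along the ordered front: can p points be chosen so that
    consecutive (hence, by monotonicity, all) pairwise distances are >= d_min?"""
    chosen = pts[0]
    count = 1
    for q in pts[1:]:
        if np.linalg.norm(q - chosen) >= d_min:
            chosen = q
            count += 1
            if count == p:
                return True
    return False

def maxmin_dispersion_threshold(pts, p, iters=60):
    """Approximate Max-min p-dispersion value on a 2D Pareto front whose points
    are sorted by the first objective (so distance grows with index separation),
    via binary search on the threshold. Illustrative only; NOT the paper's
    O(p n log n) dynamic program."""
    pts = np.asarray(pts, dtype=float)
    lo, hi = 0.0, float(np.linalg.norm(pts[-1] - pts[0]))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(pts, p, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy Pareto front: first objective increasing, second decreasing.
x = np.linspace(0.0, 1.0, 200)
front = np.column_stack([x, 1.0 - np.sqrt(x)])
print(maxmin_dispersion_threshold(front, p=5))
```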
|
Using the excellent observed correlations among various infrared wavebands
with 12 and 60 micron luminosities, we calculate the 2-300 micron spectra of
galaxies as a function of luminosity. We then use 12 micron and 60 micron
galaxy luminosity functions derived from IRAS data, together with recent data
on the redshift evolution of galaxy emissivity, to derive a new, empirically
based IR background spectrum from stellar and dust emission in galaxies. Our
best estimate for the IR background is of order 2-3 nW/m^2/sr with a peak
around 200 microns reaching 6-8 nW/m^2/sr. Our empirically derived background
spectrum is fairly flat in the mid-IR, as opposed to spectra based on modeling
with discrete temperatures which exhibit a "valley" in the mid-IR. We also
derive a conservative lower limit to the IR background which is more than a
factor of 2 lower than our derived flux.
|
Cryogenic single-crystal optical cavities have the potential to provide
highest dimensional stability. We have investigated the long-term performance
of an ultra-stable laser system which is stabilized to a single-crystal silicon
cavity operated at 124 K. Utilizing a frequency comb, the laser is compared to
a hydrogen maser that is referenced to a primary caesium fountain standard and
to the $^{87}\mathrm{Sr}$ optical lattice clock at PTB. With fractional
frequency instabilities of $\sigma_y(\tau)\leq2\times10^{-16}$ for averaging
times of $\tau=60\mathrm{~s}$ to $1000\mathrm{~s}$ and $\sigma_y(1
\mathrm{d})\leq 2\times10^{-15}$ the stability of this laser, without any aid
from an atomic reference, surpasses the best microwave standards for short
averaging times and is competitive with the best hydrogen masers for longer
times of one day. The comparison of modeled thermal response of the cavity with
measured data indicates a fractional frequency drift below $5\times
10^{-19}/\mathrm{s}$, which we do not expect to be a fundamental limit.
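The fractional frequency instabilities $\sigma_y(\tau)$ quoted above are Allan deviations; a minimal non-overlapping estimator is sketched below on synthetic white-frequency noise, purely to illustrate the quantity, not to reproduce the measurement chain.

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional frequency data y
    (samples averaged over tau0) at averaging factor m (tau = m*tau0):
    sigma_y(tau) = sqrt( <(ybar_{k+1} - ybar_k)^2> / 2 )."""
    K = len(y) // m
    ybar = y[:K * m].reshape(K, m).mean(axis=1)   # block averages over tau
    d = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(d ** 2))

# Synthetic white-frequency noise with sigma_y(1 s) = 1e-15 (illustrative only):
rng = np.random.default_rng(1)
y = 1e-15 * rng.standard_normal(100_000)          # tau0 = 1 s samples
for m in (1, 10, 100, 1000):
    print(m, allan_deviation(y, m))               # ~1e-15 * m**-0.5 for white FM
```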
|
A multichain non-fungible token (NFT) marketplace is a decentralized platform
where users can buy, sell, and trade NFTs across multiple blockchain networks by
using a cross-communication bridge. In the past, most NFT marketplaces were
based on a single chain, in which NFTs were bought, sold, and traded on the same
blockchain network without the need for any external platform. Single-chain
marketplaces have faced a number of issues, such as performance, scalability,
flexibility, and limited transaction throughput, and consequently long
confirmation times and high transaction fees during periods of high network
usage. Firstly, this paper provides a comprehensive overview of the multichain
NFT architecture and explores the challenges and opportunities in the design and
implementation of a multichain NFT marketplace that overcomes the issues of
single-chain architectures. A multichain NFT marketplace architecture includes
different blockchain networks that communicate with each other. Secondly, this
paper discusses the concept of a mainchain interacting with sidechains, a
multi-blockchain architecture in which multiple blockchain networks are
connected in a hierarchical structure, and identifies key challenges related to
interoperability, security, scalability, and user adoption. Finally, we propose
a novel architecture for a multichain NFT marketplace that leverages the
benefits of multiple blockchain networks and marketplaces to overcome these key
challenges. The proposed architecture is evaluated through a case study,
demonstrating its ability to support efficient and secure transactions across
multiple blockchain networks, and we highlight future trends in NFTs and
marketplaces together with a comprehensive discussion of the technology.
|
In this note we show that the locally stationary wavelet process can be
decomposed into a sum of signals, each of which follows a moving average
process with time-varying parameters. We then show that such moving average
processes are equivalent to state space models with stochastic design
components. Using a simple simulation step, we propose a heuristic method of
estimating the above state space models and then we apply the methodology to
foreign exchange rates data.
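A toy instance of such a moving average process with a time-varying coefficient can be simulated directly; the MA(1) form and the coefficient function below are illustrative assumptions, not the decomposition derived in the note.

```python
import numpy as np

def tv_ma1(n, theta_fn, sigma=1.0, seed=0):
    """Simulate an MA(1) process with a time-varying coefficient,
    X_t = eps_t + theta(t/n) * eps_{t-1}, as a toy instance of a moving
    average process with time-varying parameters."""
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal(n + 1)
    t = np.arange(1, n + 1) / n                 # rescaled time in (0, 1]
    return eps[1:] + theta_fn(t) * eps[:-1]

x = tv_ma1(1000, theta_fn=lambda u: 0.9 * np.sin(np.pi * u))
```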
|
The Long Wavelength Array (LWA) will be a new multi-purpose radio telescope
operating in the frequency range 10-88 MHz. Upon completion, LWA will consist
of 53 phased array "stations" distributed over a region about 400 km in
diameter in the state of New Mexico. Each station will consist of 256 pairs of
dipole-type antennas whose signals are formed into beams, with outputs
transported to a central location for high-resolution aperture synthesis
imaging. The resulting image sensitivity is estimated to be a few mJy (5 sigma,
8 MHz, 2 polarizations, 1 hr, zenith) in 20-80 MHz; with resolution and field
of view of (8", 8 deg) and (2",2 deg) at 20 MHz and 80 MHz, respectively. All
256 dipole antennas are in place for the first station of the LWA (called
LWA-1), and commissioning activities are well underway. The station is located
near the core of the EVLA, and is expected to be fully operational in early
2011.
|
The nonequilibrium dynamic phase transition, in the kinetic Ising model in
presence of an oscillating magnetic field, has been studied both by Monte Carlo
simulation and by solving numerically the mean field dynamic equation of motion
for the average magnetisation. In both cases, the Debye 'relaxation'
behaviour of the dynamic order parameter has been observed and the 'relaxation
time' is found to diverge near the dynamic transition point. The Debye
relaxation of the dynamic order parameter and the power law divergence of the
relaxation time have been obtained from a very approximate solution of the mean
field dynamic equation. The temperature variation of an appropriately defined
'specific-heat' is studied by Monte Carlo simulation near the transition point.
The specific-heat has been observed to diverge near the dynamic transition
point.
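A minimal Monte Carlo sketch of the setup is given below: single-spin-flip Metropolis dynamics of the 2D Ising model in an oscillating field, with the dynamic order parameter computed as the cycle-averaged magnetisation. Lattice size, temperature, field amplitude and period are illustrative placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_sweep(s, T, h):
    """One Metropolis sweep of the 2D Ising model in field h (J = 1, k_B = 1)."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * (nb + h)          # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

def dynamic_order_parameter(L=16, T=1.8, h0=0.3, period=50, n_cycles=30, discard=10):
    """Q = m(t) averaged over a full cycle of h(t) = h0*cos(2*pi*t/period),
    then averaged over the retained cycles; Q != 0 signals the dynamically
    ordered phase. All parameter values are illustrative."""
    s = np.ones((L, L), dtype=int)
    Q = []
    for c in range(n_cycles):
        m_cycle = []
        for t in range(period):
            h = h0 * np.cos(2 * np.pi * t / period)
            metropolis_sweep(s, T, h)
            m_cycle.append(s.mean())
        if c >= discard:                        # drop transient cycles
            Q.append(np.mean(m_cycle))
    return float(np.mean(Q))

print(dynamic_order_parameter())
```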
|
Dual structures on causal sets called timelets are introduced, being discrete
analogs of global time coordinates. Algebraic and geometrical features of the
set of timelets on a causal set are studied. A characterization of timelets in
terms of the incidence matrix of the causal set is given. The connection between
timelets and preclusive coevents is established, and it is shown that any timelet
has a unique decomposition over preclusive coevents. The equivalence classes of
timelets with respect to rescaling are shown to form a simplicial complex.
|
Object detection utilizing Frequency Modulated Continuous Wave radar is
becoming increasingly popular in the field of autonomous systems. Radar does
not possess the same drawbacks seen by other emission-based sensors such as
LiDAR, primarily the degradation or loss of return signals due to weather
conditions such as rain or snow. However, radar does possess traits that make
it unsuitable for standard emission-based deep learning representations such as
point clouds. Radar point clouds tend to be sparse and therefore information
extraction is not efficient. To overcome this, more traditional digital signal
processing pipelines were adapted to form inputs residing directly in the
frequency domain via Fast Fourier Transforms. Commonly, three transformations
were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms
could perform object detection. This too has drawbacks, namely the
pre-processing costs associated with performing multiple Fourier Transforms and
normalization. We explore the possibility of operating on raw radar inputs from
analog to digital converters via the utilization of complex transformation
layers. Moreover, we introduce hierarchical Swin Vision transformers to the
field of radar object detection and show their capability to operate on inputs
varying in pre-processing, along with different radar configurations, i.e.
relatively low and high numbers of transmitters and receivers, while obtaining
on par or better results than the state-of-the-art.
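The conventional pre-processing that this work seeks to bypass can be sketched as three FFTs over fast time, slow time and the receive channels; the array shapes and windowing below are generic assumptions, not the configurations used in the paper.

```python
import numpy as np

def range_azimuth_doppler_cube(adc):
    """Conventional pipeline referred to above: turn raw ADC samples of shape
    (n_rx, n_chirps, n_samples) into a Range-Azimuth-Doppler magnitude cube via
    three FFTs (range over fast time, Doppler over slow time, azimuth over the
    receive channels)."""
    x = adc.astype(np.complex64) * np.hanning(adc.shape[-1])          # range window
    rng_fft = np.fft.fft(x, axis=-1)                                  # range bins
    dop_fft = np.fft.fftshift(np.fft.fft(rng_fft, axis=1), axes=1)    # Doppler bins
    azi_fft = np.fft.fftshift(np.fft.fft(dop_fft, axis=0), axes=0)    # azimuth bins
    return np.abs(azi_fft)

# Example: 8 RX channels, 128 chirps, 256 fast-time samples of synthetic data.
adc = np.random.randn(8, 128, 256) + 1j * np.random.randn(8, 128, 256)
print(range_azimuth_doppler_cube(adc).shape)    # (8, 128, 256)
```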
|
With the recently rapid development in deep learning, deep neural networks
have been widely adopted in many real-life applications. However, deep neural
networks are also known to have very little control over their uncertainty for
unseen examples, which potentially causes very harmful and annoying
consequences in practical scenarios. In this paper, we are particularly
interested in designing a higher-order uncertainty metric for deep neural
networks and investigate its effectiveness under the out-of-distribution
detection task proposed by~\cite{hendrycks2016baseline}. Our method first
assumes there exists an underlying higher-order distribution $\mathbb{P}(z)$,
which controls label-wise categorical distribution $\mathbb{P}(y)$ over classes
on the K-dimensional simplex, then approximates this higher-order distribution
via a parameterized posterior function $p_{\theta}(z|x)$ within a variational
inference framework, and finally uses the entropy of the learned posterior
distribution $p_{\theta}(z|x)$ as an uncertainty measure to detect
out-of-distribution examples. Further, we propose an auxiliary objective
function to discriminate against synthesized adversarial examples to further
increase the robustness of the proposed uncertainty measure. Through
comprehensive experiments on various datasets, our proposed framework is
demonstrated to consistently outperform competing algorithms.
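If the higher-order posterior $p_{\theta}(z|x)$ is parameterized as a Dirichlet distribution over the simplex (an assumption made only for this sketch), its differential entropy can serve directly as the uncertainty score, as illustrated below.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_entropy(alpha):
    """Differential entropy of Dirichlet(alpha). A flatter (higher-entropy)
    posterior over the simplex indicates higher higher-order uncertainty,
    i.e. a more likely out-of-distribution input."""
    alpha = np.asarray(alpha, dtype=float)
    a0, K = alpha.sum(), alpha.size
    log_B = gammaln(alpha).sum() - gammaln(a0)          # log multivariate Beta
    return log_B + (a0 - K) * digamma(a0) - ((alpha - 1.0) * digamma(alpha)).sum()

print(dirichlet_entropy([50.0, 1.0, 1.0]))   # sharply peaked posterior: strongly negative
print(dirichlet_entropy([1.0, 1.0, 1.0]))    # flat posterior: log(1/2), the larger value
```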
|
A high-quality 3D reconstruction of a scene from a collection of 2D images
can be achieved through offline/online mapping methods. In this paper, we
explore active mapping from the perspective of implicit representations, which
have recently produced compelling results in a variety of applications. One of
the most popular implicit representations - Neural Radiance Field (NeRF), first
demonstrated photorealistic rendering results using multi-layer perceptrons,
with promising offline 3D reconstruction as a by-product of the radiance field.
More recently, researchers also applied this implicit representation for online
reconstruction and localization (i.e. implicit SLAM systems). However, the
study on using implicit representation for active vision tasks is still very
limited. In this paper, we are particularly interested in applying the neural
radiance field for active mapping and planning problems, which are closely
coupled tasks in an active system. We, for the first time, present an RGB-only
active vision framework using radiance field representation for active 3D
reconstruction and planning in an online manner. Specifically, we formulate
this joint task as an iterative dual-stage optimization problem, where we
alternatively optimize for the radiance field representation and path planning.
Experimental results suggest that the proposed method achieves competitive
results compared to other offline methods and outperforms active reconstruction
methods using NeRFs.
|
Sentence embeddings encode natural language sentences as low-dimensional
dense vectors. A great deal of effort has been put into using sentence
embeddings to improve several important natural language processing tasks.
Relation extraction is such an NLP task that aims at identifying structured
relations defined in a knowledge base from unstructured text. A promising and
more efficient approach would be to embed both the text and structured
knowledge in low-dimensional spaces and discover semantic alignments or
mappings between them. Although a number of techniques have been proposed in
the literature for embedding both sentences and knowledge graphs, little is
known about the structural and semantic properties of these embedding spaces in
terms of relation extraction. In this paper, we investigate the aforementioned
properties by evaluating the extent to which sentences carrying similar senses
are embedded in close proximity sub-spaces, and if we can exploit that
structure to align sentences to a knowledge graph. We propose a set of
experiments using a widely-used large-scale data set for relation extraction
and focusing on a set of key sentence embedding methods. We additionally
provide the code for reproducing these experiments at
https://github.com/akalino/semantic-structural-sentences. These embedding
methods cover a wide variety of techniques ranging from simple word embedding
combination to transformer-based BERT-style models. Our experimental results
show that different embedding spaces have different degrees of strength for the
structural and semantic properties. These results provide useful information
for developing embedding-based relation extraction methods.
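A toy version of the alignment question studied here is sketched below: each sentence embedding is matched to its nearest relation embedding by cosine similarity. The embedding dimension and the direct nearest-neighbour matching are simplifying assumptions, not the paper's experimental protocol.

```python
import numpy as np

def cosine_sim(A, B):
    """Row-wise cosine similarity between two embedding matrices."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def align_sentences_to_relations(sent_emb, rel_emb):
    """Match each sentence embedding to its nearest knowledge-graph relation
    embedding by cosine similarity (any projection into a shared space is
    omitted here). Returns the index of the best relation and its score."""
    S = cosine_sim(sent_emb, rel_emb)
    return S.argmax(axis=1), S.max(axis=1)

# Hypothetical 5 sentence and 3 relation embeddings in a 300-d space:
sent = np.random.randn(5, 300)
rel = np.random.randn(3, 300)
print(align_sentences_to_relations(sent, rel))
```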
|
In interstellar environment, fullerene species readily react with large
molecules (e.g., PAHs and their derivatives) in the gas phase, which may be the
formation route of carbon dust grains in space. In this work, the gas-phase
ion-molecule collision reaction between fullerene cations (Cn+, n=32, 34, ...,
60) and functionalized PAH molecules (9-hydroxyfluorene, C13H10O) is
investigated both experimentally and theoretically. The experimental results
show that fullerene/9-hydroxyfluorene cluster cations are efficiently formed,
leading to a series of large fullerene/9-hydroxyfluorene cluster cations (e.g.,
[(C13H10O)C60]+, [(C13H10O)3C58]+, and [(C26H18O)(C13H10O)2C48]+). The binding
energies and optimized structures of typical fullerene/9-hydroxyfluorene
cluster cations were calculated. The bonding ability plays a decisive role in
the cluster formation processes. The reaction surfaces, modes and combination
reaction sites can result in different binding energies, which represent the
relative chemical reactivity. Therefore, the geometry and composition of
fullerene/9-hydroxyfluorene cluster cations are complicated. In addition, there
is an enhanced chemical reactivity for smaller fullerene cations, which is
mainly attributed to the newly formed deformed carbon rings (e.g., 7 C-ring).
As part of the coevolution network of interstellar fullerene chemistry, our
results suggest that ion-molecule collision reactions contribute to the
formation of various fullerene/9-hydroxyfluorene cluster cations in the ISM,
providing insights into different chemical reactivity caused by oxygenated
functional groups (e.g., hydroxyl, OH, or ether, C-O-C) on the cluster
formations.
|
A new closure of the BBGKY hierarchy is developed, which results in a
convergent kinetic equation that provides a rigorous extension of plasma
kinetic theory into the regime of strong Coulomb coupling. The approach is
based on a single expansion parameter which enforces that the exact equilibrium
limit is maintained at all orders. Because the expansion parameter does not
explicitly depend on the range or the strength of the interaction potential,
the resulting kinetic theory does not suffer from the typical divergences at
short and long length scales encountered when applying the standard kinetic
equations to Coulomb interactions. The approach demonstrates that particles
effectively interact via the potential of mean force and that the range of this
force determines the size of the collision volume. When applied to a plasma,
the collision operator is shown to be related to the effective potential theory
[Baalrud and Daligault, Phys. Rev. Lett 110, 235001 (2013)]. The relationship
between this and previous kinetic theories is discussed.
|
In this paper we study noise-induced bistability in a specific circuit with
many biological implications, namely a single-step enzymatic cycle described by
Michaelis Menten equations with quasi-steady state assumption. We study the
system both with a Master Equation formalism, and with the Fokker-Planck
continuous approximation, characterizing the conditions in which the continuous
approach is a good approximation of the exact discrete model. An analysis of
the stationary distribution in both cases shows that bimodality can not occur
in such a system. We discuss which additional requirements can generate
stochastic bimodality, by coupling the system with a chemical reaction
involving enzyme production and turnover. This extended system shows a bistable
behaviour only in specific parameter windows depending on the number of
molecules involved, providing hints about which should be a feasible system
size in order that such a phenomenon could be exploited in real biological
systems.
|
We consider a gas of Newtonian self-gravitating particles in two-dimensional
space, finding a phase transition, with a high temperature homogeneous phase
and a low temperature clumped one. We argue that the system is described in
terms of a gas with fractal behaviour.
|
Wick polynomials and Wick products are studied in the context of
non-commutative probability theory. It is shown that free, boolean and
conditionally free Wick polynomials can be defined and related through the
action of the group of characters over a particular Hopf algebra. These results
generalize our previous developments of a Hopf algebraic approach to cumulants
and Wick products in classical probability theory.
|
3D point cloud interpretation is a challenging task due to the randomness and
sparsity of the component points. Many of the recently proposed methods like
PointNet and PointCNN have been focusing on learning shape descriptions from
point coordinates as point-wise input features, which usually involves
complicated network architectures. In this work, we draw attention back to the
standard 3D convolutions towards an efficient 3D point cloud interpretation.
Instead of converting the entire point cloud into voxel representations like
the other volumetric methods, we voxelize the sub-portions of the point cloud
only at necessary locations within each convolution layer on-the-fly, using our
dynamic voxelization operation with self-adaptive voxelization resolution. In
addition, we incorporate 3D group convolution into our dense convolution kernel
implementation to further exploit the rotation invariant features of point
cloud. Benefiting from its simple fully-convolutional architecture, our network
is able to run and converge at a considerably fast speed, while yielding on-par
or even better performance compared with the state-of-the-art methods on
several benchmark datasets.
|
The Lifshitz theory of van der Waals forces is utilized for the systematic
calculation of the non-retarded room temperature Hamaker constants between 26
identical isotropic elemental metals that are embedded in vacuum or in pure
water. The full spectral method, complemented with a Drude-like low frequency
extrapolation, is employed for the elemental metals benefitting from the
availability of extended-in-frequency reliable dielectric data. The simple
spectral method is employed for pure water and three dielectric representations
are explored. Numerical truncation and low frequency extrapolation effects are
shown to be negligible. The accuracy of common Lifshitz approximations is
quantified. The Hamaker constants for 100 metal combinations are reported; the
geometric mixing rule is revealed to be highly accurate in vacuum and water.
|
This thesis is devoted to the study of dynamical symmetry enhancement of
black hole horizons in string theory. In particular, we consider supersymmetric
horizons in the low energy limit of string theory known as supergravity and we
prove the $\textit{horizon conjecture}$ for a number of supergravity theories.
We first give important examples of symmetry enhancement in $D=4$ and the
mathematical preliminaries required for the analysis. Type IIA supergravity is
the low energy limit of $D=10$ IIA string theory, but also the dimensional
reduction of $D=11$ supergravity, which is itself the low energy limit of M-theory.
We prove that Killing horizons in IIA supergravity with compact spatial
sections preserve an even number of supersymmetries. By analyzing the global
properties of the Killing spinors, we prove that the near-horizon geometries
undergo a supersymmetry enhancement. This follows from a set of generalized
Lichnerowicz-type theorems we establish, together with an index theory
argument. We also show that the symmetry algebra of horizons with non-trivial
fluxes includes an $\mathfrak{sl}(2, \mathbb{R})$ subalgebra. As an
intermediate step in the proof, we also demonstrate new Lichnerowicz type
theorems for spin bundle connections whose holonomy is contained in a general
linear group. We prove the same result for Romans' massive IIA supergravity. We
also consider the near-horizon geometry of supersymmetric extremal black holes
in un-gauged and gauged 5-dimensional supergravity, coupled to abelian vector
multiplets. We consider important examples in $D=5$ such as the BMPV and
supersymmetric black ring solution, and investigate the near-horizon geometry
to show the enhancement of the symmetry algebra of the Killing vectors. We
repeat a similar analysis as above to prove the horizon conjecture. We also
investigate the conditions on the geometry of the spatial horizon section
$\cal{S}$.
|
A new class of 3-manifold invariants is constructed from representations of
the category of framed tangles.
|
We show that the edge of a two-dimensional topological insulator can be used
to construct a solid state Stern-Gerlach spin-splitter. By threading such a
Stern-Gerlach apparatus with a magnetic flux, Aharonov-Bohm-like interference
effects are introduced. Using ferromagnetic leads, the setup can be used to
both measure magnetic flux and as a spintronics switch. With normal metallic
leads a switchable spintronics NOT-gate can be implemented. Furthermore, we
show that a sequence of such devices can be used to construct a single-qubit
$SU(2)$-gate, one of the two gates required for a universal quantum computer.
The field sensitivity, or switching field, $b$ is related to the device
characteristic size $r$ through $b = \frac{\hbar}{qr^2}$, with $q$ the unit of
electric charge.
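The quoted relation between switching field and device size is easy to evaluate; the device sizes below are arbitrary examples.

```python
hbar = 1.054571817e-34   # J s
q = 1.602176634e-19      # C (unit of electric charge)

def switching_field(r):
    """Switching field b = hbar/(q r^2), in tesla, for characteristic size r in metres."""
    return hbar / (q * r ** 2)

for r in (100e-9, 1e-6, 10e-6):     # arbitrary example sizes
    print(f"r = {r:.0e} m  ->  b = {switching_field(r):.3g} T")
# A 1-micron device switches at roughly 0.66 mT.
```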
|
The detailed analysis of the problem of possible magnetic behavior of the
carbon-based structures was carried out to elucidate and resolve (at least
partially) some unclear issues. It was the purpose of the present paper to look
somewhat more critically into some conjectures which have been made and to the
peculiar and contradictory experimental results in this rather indistinct and
disputable field. Firstly the basic physics of magnetism was briefly addressed.
Then a few basic questions were thoroughly analyzed and critically reconsidered
to elucidate the possible relevant mechanism (if any) which may be responsible
for observed peculiarities of the "magnetic" behavior in these systems. The
arguments supporting the existence of the intrinsic magnetism in carbon-based
materials, including pure graphene were analyzed critically. It was concluded
that recently published works have shown clearly that the results of the
previous studies, where the "ferromagnetism" was detected in pure graphene,
were incorrect. Rather, graphene is strongly diamagnetic, similar to graphite.
Thus the possible traces of a quasi-magnetic behavior which some authors
observed in their samples may be attributed rather to induced magnetism due to
the impurities, defects, etc. On the basis of the present analysis the
conclusion was made that the thorough and detailed experimental studies of
these problems only may shed light on the very complicated problem of the
magnetism of carbon-based materials. Lastly the peculiarities of the magnetic
behavior of some related materials and the trends for future developments were
mentioned.
|
We characterize a class of topological Ramsey spaces such that each element
$\mathcal R$ of the class induces a collection $\{\mathcal R_k\}_{k<\omega}$ of
projected spaces which have the property that every Baire set is Ramsey. Every
projected space $\mathcal R_k$ is a subspace of the corresponding space of
length-$k$ approximation sequences with the Tychonoff, equivalently metric,
topology.
This answers a question of S. Todorcevic and generalizes the results of
Carlson \cite{Carlson}, Carlson-Simpson \cite{CarSim2}, Pr\"omel-Voigt
\cite{PromVoi}, and Voigt \cite{Voigt}. We also present a new family of
topological Ramsey spaces contained in the aforementioned class which
generalize the spaces of ascending parameter words of Carlson-Simpson
\cite{CarSim2} and Pr\"omel-Voigt \cite{PromVoi} and the spaces
$\FIN_m^{[\infty]}$, $0<m<\omega$, of block sequences defined by Todorcevic
\cite{Todo}.
|
This paper tackles the challenges of implementing few-shot learning on
embedded systems, specifically FPGA SoCs, a vital approach for adapting to
diverse classification tasks, especially when the costs of data acquisition or
labeling prove to be prohibitively high. Our contributions encompass the
development of an end-to-end open-source pipeline for a few-shot learning
platform for object classification on FPGA SoCs. The pipeline is built on top
of the Tensil open-source framework, facilitating the design, training,
evaluation, and deployment of DNN backbones tailored for few-shot learning.
Additionally, we showcase our work's potential by building and deploying a
low-power, low-latency demonstrator trained on the MiniImageNet dataset with a
dataflow architecture. The proposed system has a latency of 30 ms while
consuming 6.2 W on the PYNQ-Z1 board.
|
Building on previous work by Lambert, Plagne and the third author, we study
various aspects of the behavior of additive bases in infinite abelian groups.
We show that, for every such group $T$, the number of essential subsets of any
additive basis is finite, and also that the number of essential subsets of
cardinality $k$ contained in an additive basis of order at most $h$ can be
bounded in terms of $h$ and $k$ alone. These results extend the reach of two
theorems, one due to Deschamps and Farhi and the other to Hegarty, bearing upon
$\mathbf{N}$. Also, using invariant means, we address a classical problem,
initiated by Erd\H{o}s and Graham and then generalized by Nash and Nathanson
both in the case of $\mathbf{N}$, of estimating the maximal order $X_T(h,k)$
that a basis of cocardinality $k$ contained in an additive basis of order at
most $h$ can have. Among other results, we prove that $X_T(h,k)=O(h^{2k+1})$
for every integer $k \ge 1$. This result is new even in the case where $k=1$.
Besides the maximal order $X_T(h,k)$, the typical order $S_T(h,k)$ is also
studied. Our methods actually apply to a wider class of infinite abelian
semigroups, thus unifying in a single axiomatic frame the theory of additive
bases in $\mathbf{N}$ and in abelian groups.
|
We analyze the effect of spin degree of freedom on fidelity decay and entropy
production of a many-particle fermionic (bosonic) system in a mean field,
quenched by a random two-body interaction preserving many-particle spin $S$.
The system Hamiltonian is represented by embedded Gaussian orthogonal ensemble
(EGOE) of random matrices (for time-reversal and rotationally invariant
systems) with one plus two-body interactions preserving $S$ for
fermions/bosons. EGOE are paradigmatic models to study the dynamical transition
from integrability to chaos in interacting many-body quantum systems. A simple
general picture, in which the variances of the eigenvalue density play a
central role, is obtained for describing the short-time dynamics of fidelity
decay and entropy production. Using some approximations, an EGOE formula for
the time ($t_{sat}$) for the onset of saturation of entropy, is also derived.
These analytical EGOE results are in good agreement with numerical
calculations. Moreover, both fermion and boson systems show significant spin
dependence on the relaxation dynamics of the fidelity and entropy.
|
We propose a new scaling theory for general quantum breakdown phenomena. We
show, taking Landau-Zener type breakdown as a particular example, that the
breakdown phenomena can be viewed as a quantum phase transition for which the
scaling theory is developed. The application of this new scaling theory to
Zener type breakdown in Anderson insulators, and quantum quenching has been
discussed.
|
Classification performances of the supervised machine learning techniques
such as support vector machines, neural networks and logistic regression are
compared for modulation recognition purposes. The simple and robust features
are used to distinguish continuous-phase FSK from QAM-PSK signals. Signals
having root-raised-cosine shaped pulses are simulated in extreme noisy
conditions having joint impurities of block fading, lack of symbol and sampling
synchronization, carrier offset, and additive white Gaussian noise. The
features are based on sample mean and sample variance of the imaginary part of
the product of two consecutive complex signal values.
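The feature extraction described above can be sketched as follows; whether the second factor in the product is conjugated is not stated in the abstract, so the conjugate product (which extracts phase increments) is an assumption here, as are the toy signals.

```python
import numpy as np

def fsk_qam_features(r):
    """Sample mean and sample variance of the imaginary part of the product of
    two consecutive complex samples. The conjugate product is used here
    (an assumption): it extracts the phase increment between samples."""
    z = np.imag(r[1:] * np.conj(r[:-1]))
    return np.array([z.mean(), z.var()])

# Toy signals: a continuous-phase tone vs. QPSK symbols at one sample per symbol.
n = np.arange(4096)
cpfsk_like = np.exp(1j * 2 * np.pi * 0.05 * n)                        # smooth phase
qpsk = np.exp(1j * (np.pi / 2) * np.random.randint(4, size=n.size))   # abrupt jumps
print(fsk_qam_features(cpfsk_like))   # near-zero variance
print(fsk_qam_features(qpsk))         # much larger variance
```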
|
An optimization problem considering AC power flow constraints and integer
decision variables can usually be posed as a mixed-integer quadratically
constrained quadratic program (MIQCQP) problem. In this paper, first, a set of
valid linear equalities are applied to strengthen the semidefinite program
(SDP) relaxation of the MIQCQP problem without significantly increasing the
problem dimension so that an enhanced mixed-integer SDP (MISDP) relaxation,
which is a mixed-integer convex problem, is obtained. Then, the enhanced MISDP
relaxation is reformulated as a disjunctive programming (DP) problem which is
tighter than the former one, since the disjunctions are designed to capture the
disjunctive nature of the terms in the rank-1 constraint about the integral
variables. The DP relaxation is then equivalently converted back into a MISDP
problem, the feasible set of whose continuous relaxation is the convex hull of
the feasible region of the DP problem. Finally, the globally optimal solution of
the DP problem, which is the tightest relaxation of the MIQCQP proposed in this
paper, is obtained by solving the resulting MISDP problem using a
branch-and-bound (B&B) algorithm. Computational efficiency of the B&B algorithm
is expected to be high since the feasible set of the continuous relaxation of a
MISDP sub-problem is the convex hull of that of the corresponding DP
sub-problem. To further reduce the dimension of the resulting MISDP problem, a
compact formulation of this problem is proposed considering the sparsity. An
optimal placement problem of smart PV inverter in distribution systems
integrated with high penetration of PV, which is an MIQCQP problem, is studied
as an example. The proposed approach is tested on an IEEE distribution system.
The results show that it can effectively improve the tightness and feasibility
of the SDP relaxation.
|
It is expected that the implementation of minimal length in quantum models
leads to a consequent lowering of Planck's scale. In this paper, using the
quantum model with minimal length of Kempf et al. \cite{kempf0}, we examine the
effect of the minimal length on the Casimir force between parallel plates.
|
The Coxeter lattices, which we denote $A_{n/m}$, are a family of lattices
containing many of the important lattices in low dimensions. This includes
$A_n$, $E_7$, $E_8$ and their duals $A_n^*$, $E_7^*$ and $E_8^*$. We consider
the problem of finding a nearest point in a Coxeter lattice. We describe two
new algorithms, one with worst case arithmetic complexity $O(n\log{n})$ and the
other with worst case complexity $O(n)$ where $n$ is the dimension of the
lattice. We show that for the particular lattices $A_n$ and $A_n^*$ the
algorithms reduce to simple nearest point algorithms that already exist in the
literature.
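For the lattice $A_n$, the simple nearest point procedure alluded to above is essentially coordinate-wise rounding followed by a correction of the rounding deficiency (a Conway-Sloane style baseline); the sketch below implements that baseline and is not the paper's new algorithms.

```python
import numpy as np

def nearest_point_An(y):
    """Nearest point of A_n = {x in Z^{n+1} : sum(x) = 0} to y, for y lying in
    the hyperplane sum(y) = 0. Round each coordinate, then fix the rounding
    deficiency on the coordinates where the change is cheapest; the cost is
    dominated by the sort, O(n log n)."""
    f = np.rint(y).astype(int)            # coordinate-wise rounding
    delta = y - f                         # remainders in [-1/2, 1/2]
    Delta = int(f.sum())                  # how far sum(f) is from 0
    order = np.argsort(delta)             # ascending remainders
    if Delta > 0:
        f[order[:Delta]] -= 1             # round these down instead
    elif Delta < 0:
        f[order[len(y) + Delta:]] += 1    # round these up instead
    return f

y = np.random.randn(4)
y -= y.mean()                             # project onto sum = 0
x = nearest_point_An(y)
print(x, x.sum())                         # sum is 0 by construction
```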
|
We construct a cover of the non-incident point-hyperplane graph of projective
dimension 3 for fields of characteristic 2. If the cardinality of the field is
larger than 2, we obtain an elementary construction of the non-split extension
of SL_4 (F) by F^6.
|