In recent years, there has been a lot of research activity focused on
carrying out non-asymptotic convergence analyses for actor-critic algorithms.
Recently, a two-timescale critic-actor algorithm was presented for the
discounted cost setting in the look-up table case, where the timescales of the
actor and the critic are reversed and only asymptotic convergence was shown. In our
work, we present the first two-timescale critic-actor algorithm with function
approximation in the long-run average reward setting and present the first
finite-time non-asymptotic as well as asymptotic convergence analysis for such
a scheme. We obtain optimal learning rates and prove that our algorithm
achieves a sample complexity of $\mathcal{\tilde{O}}(\epsilon^{-2.08})$ for the
mean squared error of the critic to be upper bounded by $\epsilon$, which is
better than the one obtained for two-timescale actor-critic in a similar
setting. A notable feature of our analysis is that unlike recent
single-timescale actor-critic algorithms, we present a complete asymptotic
convergence analysis of our scheme in addition to the finite-time bounds that
we obtain and show that the (slower) critic recursion converges asymptotically
to the attractor of an associated differential inclusion with actor parameters
corresponding to local maxima of a perturbed average reward objective. We also
show the results of numerical experiments on three benchmark settings and
observe that our critic-actor algorithm performs on par with, and is in fact
better than, the other algorithms considered.
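For intuition, the timescale reversal can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the step-size exponents, the feature map phi and the score function are placeholders; only the defining feature matters, namely that the critic runs on the slower timescale while the actor runs on the faster one.

```python
import numpy as np

# Minimal sketch of a two-timescale critic-actor update in the average-reward
# setting with linear function approximation. Step-size exponents are
# illustrative, not the optimal rates derived in the paper. The defining
# feature is the reversed timescales: the critic step size decays faster,
# making the critic the slower recursion, while the actor moves on the
# faster timescale.

def critic_actor_step(t, w, theta, eta, s, a, r, s_next, phi, score):
    """phi(s): critic feature vector; score(s, a, theta): log-policy gradient;
    eta: running average-reward estimate."""
    alpha = 1.0 / (t + 1) ** 0.9   # critic step size (slower recursion)
    beta = 1.0 / (t + 1) ** 0.6    # actor step size (faster recursion)
    gamma = 1.0 / (t + 1)          # average-reward tracking step

    eta = eta + gamma * (r - eta)                      # average-reward estimate
    delta = r - eta + phi(s_next) @ w - phi(s) @ w     # average-reward TD error
    w = w + alpha * delta * phi(s)                     # critic (slow) update
    theta = theta + beta * delta * score(s, a, theta)  # actor (fast) update
    return w, theta, eta
```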
|
A model is presented for the gravity-driven flow of rainwater descending
through the soil layer of a green roof, treated as a porous medium on a flat
permeable surface representing an efficient drainage layer. A fully saturated
zone is shown to occur. It is typically a thin layer, relative to the total
soil thickness, and lies at the bottom of the soil layer. This provides a
bottom boundary condition for the partially saturated upper zone. It is shown
that after the onset of rainfall, well-defined fronts of water can descend
through the soil layer. Also the rainwater flow is relatively quick compared
with the moisture uptake by the roots of the plants in the roof. In separate
models the exchanges of water are described between the (smaller-scale) porous
granules of soil, the roots and the rainwater in the inter-granule pores.
|
We prove an extension of the Stein-Weiss weighted estimates for fractional
integrals, in the context of $L^{p}$ spaces with different integrability
properties in the radial and the angular direction. In this way, the classical
estimates can be unified with their improved radial versions. A number of
consequences are obtained: in particular we deduce refined versions of
weighted Sobolev embeddings, Caffarelli-Kohn-Nirenberg estimates, and
Strichartz estimates for the wave equation, which extend the radial
improvements to the case of arbitrary functions.
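For reference, the classical Stein-Weiss inequality being extended here reads, in one standard formulation (the conditions below are the usual ones, not quoted from this paper):
$$\left\| |x|^{-\beta} I_{\lambda} f \right\|_{L^{q}(\mathbb{R}^{n})} \leq C \left\| |x|^{\alpha} f \right\|_{L^{p}(\mathbb{R}^{n})}, \qquad I_{\lambda}f(x) = \int_{\mathbb{R}^{n}} \frac{f(y)}{|x-y|^{n-\lambda}}\, dy,$$
valid for $0 < \lambda < n$, $1 < p \leq q < \infty$, $\alpha < n/p'$, $\beta < n/q$, $\alpha + \beta \geq 0$, together with the scaling condition $\frac{1}{q} = \frac{1}{p} + \frac{\alpha + \beta - \lambda}{n}$.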
|
Molecular devices, as future electronics, require low-resistivity contacts for
energy saving. At the same time, the contacts should intensify the desired
properties of tailored electronic elements. In this work, we focus our
attention on two classes of organic switches connected to carbon-nanotube leads
and operating due to photo- or field-induced proton transfer (PT) process. By
means of the first-principles atomistic simulations of the ballistic
conductance, we search for atomic contacts that enhance the contrast between
the two swapped I-V characteristics of the two tautomers of a given molecular
system. We emphasize that the low-resistance character of the contacts is not
necessarily in accordance with the switching properties. Very often, a
higher current flow makes it more difficult to distinguish between the logic
states of the molecular device. Instead, resistive contacts amplify the
current ratio at the tautomeric transition to a larger extent. The low- and
high-bias work regimes set additional conditions, which are fulfilled by
different contacts. In some cases, the peroxide contacts or the direct
connection to the tube perform better than the popular sulfur contact.
Additionally, we find that the switching-bias value is not an inherent property
of the conducting molecule, but it strongly depends on the chosen contacts.
|
We study the problem of designing dynamic intervention policies for
minimizing networked defaults in financial networks. Formally, we consider a
dynamic version of the celebrated Eisenberg-Noe model of financial network
liabilities and use this to study the design of external intervention policies.
Our controller has a fixed resource budget in each round and can use this to
minimize the effect of demand/supply shocks in the network. We formulate the
optimal intervention problem as a Markov Decision Process and show how we can
leverage the problem structure to efficiently compute optimal intervention
policies with continuous interventions and provide approximation algorithms for
discrete interventions. Going beyond financial networks, we argue that our
model captures dynamic network intervention in a much broader class of dynamic
demand/supply settings with networked inter-dependencies. To demonstrate this,
we apply our intervention algorithms to various application domains, including
ridesharing, online transaction platforms, and financial networks with agent
mobility. In each case, we study the relationship between node centrality and
intervention strength, as well as the fairness properties of the optimal
interventions.
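For background, the static clearing mechanism underlying the dynamic model can be computed by a simple fixed-point iteration; the sketch below is a minimal illustration of the Eisenberg-Noe clearing vector, not the paper's dynamic controller.

```python
import numpy as np

# Eisenberg-Noe clearing payments: L[i, j] is the nominal liability of bank i
# to bank j, e[i] is bank i's external (possibly shocked) cash inflow.
# The clearing vector is the largest fixed point of p = min(p_bar, e + Pi^T p),
# computed here by Picard iteration, which converges monotonically.

def clearing_vector(L, e, tol=1e-10, max_iter=10_000):
    p_bar = L.sum(axis=1)                     # total nominal obligations
    with np.errstate(divide="ignore", invalid="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, np.maximum(0.0, e + Pi.T @ p))
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p                                  # p[i] < p_bar[i] means default

# Toy example: bank 0 owes 10 to bank 1 but a shock leaves it only 6 in cash.
L = np.array([[0.0, 10.0], [0.0, 0.0]])
print(clearing_vector(L, e=np.array([6.0, 1.0])))  # -> [6., 0.]
```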
|
Considering accretion onto a charged dilaton black hole, the fundamental
equations governing accretion, general analytic expressions for critical
points, critical velocity, critical speed of sound, and ultimately the mass
accretion rate are obtained. A new constraint on the dilaton parameter coming
from string theory is found, and the case of a polytropic gas is discussed in
detail. It is found that the dilaton and the adiabatic index of the accreted
material have deep effects on the accretion process.
|
We consider a homogeneous fibration $G/L \to G/K$, with symmetric fiber and
base, where $G$ is a compact connected semisimple Lie group and $L$ has maximal
rank in $G$. We suppose the base space $G/K$ is isotropy irreducible and the
fiber $K/L$ is simply connected. We investigate the existence of $G$-invariant
Einstein metrics on $G/L$ such that the natural projection onto $G/K$ is a
Riemannian submersion with totally geodesic fibers. These spaces are divided into
two types: the fiber $K/L$ is isotropy irreducible or is the product of two
irreducible symmetric spaces. We classify all the $G$-invariant Einstein
metrics with totally geodesic fibers for the first type. For the second type,
we classify all these metrics when $G$ is an exceptional Lie group. If $G$ is a
classical Lie group we classify all such metrics which are the orthogonal sum
of the normal metrics on the fiber and on the base or such that the restriction
to the fiber is also Einstein.
|
We revisit the problem of finding the entanglement entropy of a scalar field
on a lattice by tracing over its degrees of freedom inside a sphere. It is
known that this entropy satisfies the area law -- entropy proportional to the
area of the sphere -- when the field is assumed to be in its ground state. We
show that the area law continues to hold when the scalar field degrees of
freedom are in generic coherent states and a class of squeezed states. However,
when excited states are considered, the entropy scales as a lower power of the
area. This suggests that for large horizons, the ground state entropy
dominates, whereas entropy due to excited states gives power law corrections.
We discuss possible implications of this result for black hole entropy.
|
We study two non-Markovian gene-expression models in which protein production
is a stochastic process with a fat-tailed non-exponential waiting time
distribution (WTD). For both models, we find two distinct scaling regimes
separated by an exponentially long time, proportional to the mean first passage
time (MFPT) to a ground state (with zero proteins) of the dynamics, from which
the system can only exit via a non-exponential reaction. At times shorter than
the MFPT the dynamics are stationary and ergodic, entailing similarity across
different realizations of the same process, with an increased Fano factor of
the protein distribution, even when the WTD has a finite cutoff. Notably, at
times longer than the MFPT the dynamics are nonstationary and nonergodic,
entailing significant variability across different realizations. The MFPT to
the ground state is shown to directly affect the average population sizes and
we postulate that the transition to nonergodicity is universal in such
non-Markovian models.
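A minimal simulation sketch of this kind of non-Markovian birth-death dynamics is given below; the Pareto production clock and all parameter values are illustrative stand-ins for the fat-tailed WTD of the models above.

```python
import numpy as np

# Protein production with a fat-tailed (Pareto) waiting time distribution and
# Markovian (exponential) degradation. Because degradation is memoryless it can
# be redrawn at each event, while the production clock keeps its memory. Note
# that the ground state n = 0 can only be left via the non-exponential
# production reaction, which is what makes its mean first passage time long.

rng = np.random.default_rng(1)

def simulate(t_max, alpha=1.5, burst=1, gamma=1.0):
    t, n, traj = 0.0, 0, []
    next_prod = rng.pareto(alpha) + 1.0           # fat-tailed production WTD
    while t < t_max:
        t_deg = rng.exponential(1.0 / (gamma * n)) if n > 0 else np.inf
        if next_prod <= t_deg:                    # production fires first
            t += next_prod
            n += burst
            next_prod = rng.pareto(alpha) + 1.0   # fresh production clock
        else:                                     # degradation fires first
            t += t_deg
            n -= 1
            next_prod -= t_deg                    # production clock keeps running
        traj.append((t, n))
    return traj
```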
|
Clustering is a widely deployed unsupervised learning tool. Model-based
clustering is a flexible framework to tackle data heterogeneity when the
clusters have different shapes. Likelihood-based inference for mixture
distributions often involves non-convex and high-dimensional objective
functions, posing difficult computational and statistical challenges. The
classic expectation-maximization (EM) algorithm is a computationally thrifty
iterative method that maximizes a surrogate function minorizing the
log-likelihood of observed data in each iteration, which however suffers from
bad local maxima even in the special case of the standard Gaussian mixture
model with common isotropic covariance matrices. On the other hand, recent
studies reveal that the unique global solution of a semidefinite programming
(SDP) relaxed $K$-means achieves the information-theoretically sharp threshold
for perfectly recovering the cluster labels under the standard Gaussian mixture
model. In this paper, we extend the SDP approach to a general setting by
integrating cluster labels as model parameters and propose an iterative
likelihood adjusted SDP (iLA-SDP) method that directly maximizes the exact
observed likelihood in the presence of data heterogeneity. By lifting the
cluster assignment to group-specific membership matrices, iLA-SDP avoids
centroid estimation -- a key feature that allows exact recovery under
well-separation of centroids without being trapped by their adversarial
configurations. Thus iLA-SDP is less sensitive than EM to initialization and
more stable on high-dimensional data. Our numerical experiments demonstrate that
iLA-SDP achieves lower mis-clustering errors than several widely used
clustering methods, including $K$-means, SDP and EM algorithms.
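For orientation, the SDP relaxation of $K$-means referred to above can be written down in a few lines; the sketch below is the plain SDP baseline (in the spirit of Peng and Wei), not the iLA-SDP iteration itself.

```python
import cvxpy as cp
import numpy as np

# SDP-relaxed K-means: lift cluster assignments to a membership matrix Z and
# solve a convex program over the Gram matrix of the data.

def sdp_kmeans(X, K):
    A = X @ X.T                          # n x n Gram (affinity) matrix
    n = A.shape[0]
    Z = cp.Variable((n, n), PSD=True)    # relaxed membership matrix
    constraints = [
        Z >= 0,                          # entrywise nonnegative
        cp.sum(Z, axis=1) == 1,          # rows sum to one
        cp.trace(Z) == K,                # K clusters
    ]
    cp.Problem(cp.Maximize(cp.trace(A @ Z)), constraints).solve()
    return Z.value   # labels can be recovered from Z, e.g. by spectral rounding
```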
|
Segmented, or slow-wave, electrodes have emerged as an index-matching solution
to improve the bandwidth of traveling-wave Mach-Zehnder and phase modulators on
the thin-film lithium-niobate-on-insulator platform. However, these devices require
the use of a quartz handle or substrate removal, adding cost and additional
processing. In this work, a high-speed dual-output electro-optic intensity
modulator in the thin-film silicon nitride and lithium niobate material system
that uses segmented electrodes for RF and optical index matching is presented.
The device uses a silicon handle and does not require substrate removal. A
silicon handle allows the use of larger wafer sizes to increase yield, and
lends itself to processing in established silicon foundries that may not have
the capability to process a quartz or fused silica wafer. The modulator has an
interaction region of 10 mm, shows a DC half wave voltage of 3.75 V, an
ultra-high extinction ratio of roughly 45 dB consistent with previous work, and
a fiber-to-fiber insertion loss of 7.47 dB with a 95 GHz 3 dB bandwidth.
|
Results of DC and frequency-dependent conductivity in the quantum limit, i.e.
$\hbar\omega > kT$, for a broad range of dopant concentrations in nominally
uncompensated, crystalline phosphorus-doped silicon and amorphous niobium-silicon
alloys are reported. These materials fall under the general category of disordered
insulating systems, which are referred to as electron glasses. Using microwave
resonant cavities and quasi-optical millimeter wave spectroscopy we are able to
study the frequency dependent response on the insulating side of the
metal-insulator transition. We identify a quantum critical regime, a Fermi
glass regime and a Coulomb glass regime. Our phenomenological results lead to a
phase diagram description, or taxonomy, of the electrodynamic response of
electron glass systems.
|
Investigations of CP violation in the hadron sector may be carried out using
measurements in the ThO molecule. Recent measurements in this molecule improved
the limit on electron EDM by an order of magnitude. Another time reversal (T)
and parity (P) violating effect in $^{229}$ThO is induced by the nuclear
magnetic quadrupole moment. We have performed nuclear and molecular
calculations to express this effect in terms of the strength constants of
T,P-odd nuclear forces, neutron EDM, QCD vacuum angle $\theta$, quark EDM and
chromo-EDM.
|
We consider the problem of real-time remote monitoring of a two-state Markov
process, where a sensor observes the state of the source and makes a decision
on whether to transmit the status updates over an unreliable channel or not. We
introduce a modified randomized stationary sampling and transmission policy
where the decision to perform sampling occurs probabilistically depending on
the current state of the source and whether the system was in a sync state
during the previous time slot or not. We then propose two new performance
metrics, coined the Version Innovation Age (VIA) and the Age of Incorrect
Version (AoIV) and analyze their performance under the modified randomized
stationary and other state-of-the-art sampling and transmission policies.
Specifically, we derive closed-form expressions for the distribution and the
average of VIA, AoIV, and Age of Incorrect Information (AoII) under these
policies. Furthermore, we formulate and solve three constrained optimization
problems. The first optimization problem aims to minimize the average VIA
subject to constraints on the time-averaged sampling cost and time-averaged
reconstruction error. In the second and third problems, the objective is to
minimize the average AoIV and AoII, respectively, while considering a
constraint on the time-averaged sampling cost. Finally, we compare the
performance of various sampling and transmission policies and identify the
conditions under which each policy outperforms the others in optimizing the
proposed metrics.
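The AoII part of such a setup is easy to simulate; the sketch below is a minimal illustration in which, for simplicity, the sampling probability depends only on the sync state of the previous slot, and all parameter values are placeholders rather than the optimized policies of the paper.

```python
import numpy as np

# Age of Incorrect Information for a two-state Markov source, a randomized
# stationary sampling policy, and an unreliable (erasure) channel.

rng = np.random.default_rng(0)

def average_aoii(T, p_flip=0.1, p_success=0.8, p_sample=(0.3, 0.9)):
    """p_sample = (sampling prob. when synced, when not synced)."""
    src, est, aoii, total = 0, 0, 0, 0
    for _ in range(T):
        if rng.random() < p_flip:                        # source transition
            src = 1 - src
        synced = (src == est)
        if rng.random() < p_sample[0 if synced else 1]:  # sample and transmit
            if rng.random() < p_success:                 # channel delivers
                est = src
        aoii = 0 if src == est else aoii + 1             # AoII recursion
        total += aoii
    return total / T

print(average_aoii(100_000))
```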
|
We propose a new method for discretizing the time variable in integrable
lattice systems while maintaining the locality of the equations of motion. The
method is based on the zero-curvature (Lax pair) representation and the
lowest-order "conservation laws". In contrast to the pioneering work of
Ablowitz and Ladik, our method allows the auxiliary dependent variables
appearing in the stage of time discretization to be expressed locally in terms
of the original dependent variables. The time-discretized lattice systems have
the same set of conserved quantities and the same structures of the solutions
as the continuous-time lattice systems; only the time evolution of the
parameters in the solutions that correspond to the angle variables is
discretized. The effectiveness of our method is illustrated using examples such
as the Toda lattice, the Volterra lattice, the modified Volterra lattice, the
Ablowitz-Ladik lattice (an integrable semi-discrete nonlinear Schroedinger
system), and the lattice Heisenberg ferromagnet model. For the Volterra lattice
and modified Volterra lattice, we also present their ultradiscrete analogues.
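For concreteness, in standard conventions (possibly differing from the paper's normalization) the Volterra lattice and its zero-curvature setting take the form
$$\frac{\mathrm{d}u_{n}}{\mathrm{d}t} = u_{n}\left(u_{n+1} - u_{n-1}\right), \qquad \Psi_{n+1} = L_{n}\Psi_{n}, \quad \frac{\mathrm{d}\Psi_{n}}{\mathrm{d}t} = M_{n}\Psi_{n},$$
with the compatibility (zero-curvature) condition $\frac{\mathrm{d}L_{n}}{\mathrm{d}t} = M_{n+1}L_{n} - L_{n}M_{n}$; a time discretization replaces the second linear problem by $\widetilde{\Psi}_{n} = V_{n}\Psi_{n}$, turning the condition into $\widetilde{L}_{n}V_{n} = V_{n+1}L_{n}$, and the issue addressed above is whether $V_{n}$ can be expressed locally in the original variables.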
|
We show in random matrix theory, microwave measurements, and computer
simulations that the mean free path of a random medium and the strength and
position of an embedded reflector can be determined from radiation scattered by
the system. The mean free path and strength of the reflector are determined
from the statistics of transmission. The statistics of transmission are
independent of the position of the reflector. The reflector's position can be
found, however, from the average dwell time for waves incident from one side of
the sample.
|
In this talk we study beyond-Standard-Model scenarios where the Higgs is
non-linearly realized. The one-loop ultraviolet divergences of the low-energy
effective theory at next-to-leading order, O(p^4), are computed by means of the
background-field method and the heat-kernel expansion. The power counting in
non-linear theories shows that these divergences are determined by the
leading-order effective Lagrangian L_2. We focus our attention on the most
important O(p^4) divergences, which are provided by the loops of Higgs and
electroweak Goldstones, as these particles are the only ones that couple through
derivatives in L_2. The one-loop divergences are renormalized by O(p^4)
effective operators, and this sets their running. This implies the presence of chiral
logarithms in the amplitudes along with the O(p^4) low-energy couplings, which
are of a similar importance and should not be neglected in next-to-leading
order effective theory calculations, e.g. in composite scenarios.
|
Biomedical language understanding benchmarks are the driving forces for
artificial intelligence applications with large language model (LLM) back-ends.
However, most current benchmarks: (a) are limited to English which makes it
challenging to replicate many of the successes in English for other languages,
or (b) focus on knowledge probing of LLMs and neglect to evaluate how LLMs
apply this knowledge to perform a wide range of bio-medical tasks, or (c)
have become a publicly available corpus and are leaked to LLMs during
pre-training. To facilitate the research in medical LLMs, we re-build the
Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark into a
large scale prompt-tuning benchmark, PromptCBLUE. Our benchmark is a suitable
test-bed and an online platform for evaluating Chinese LLMs' multi-task
capabilities on a wide range of bio-medical tasks including medical entity
recognition, medical text classification, medical natural language inference,
medical dialogue understanding and medical content/dialogue generation. To
establish evaluation on these tasks, we have experimented with and report the
results of 9 current Chinese LLMs fine-tuned with different fine-tuning
techniques.
|
This paper proposes a simple technical approach for the analytical derivation
of Point-in-Time PD (probability of default) forecasts, with minimal data
requirements. The inputs required are the current and future Through-the-Cycle
PDs of the obligors, their last known default rates, and a measurement of the
systematic dependence of the obligors. Technically, the forecasts are made from
within a classical asset-based credit portfolio model, with the additional
assumption of a simple (first/second order) autoregressive process for the
systematic factor. This paper elaborates in detail on the practical issues of
implementation, especially on the parametrization alternatives. We also show
how the approach can be naturally extended to low-default portfolios with
volatile default rates, using Bayesian methodology. Furthermore, expert
judgments on the current macroeconomic state, although not necessary for the
forecasts, can be embedded into the model using the Bayesian technique. The
resulting PD forecasts can be used for the derivation of expected lifetime
credit losses as required by the newly adopted accounting standard IFRS 9.
Notably, the presented approach is endogenous, as it does not require any
exogenous macroeconomic forecasts, which are notoriously unreliable and often
subjective. Also, it does not require any dependency modeling between PDs and
macroeconomic variables, which often proves to be cumbersome and unstable.
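A minimal sketch of this type of forecast, under a one-factor Vasicek-style model with an AR(1) systematic factor, is given below; the parametrization is illustrative and not necessarily the paper's exact choice.

```python
import numpy as np
from scipy.stats import norm

# Point-in-Time PD forecast: invert the one-factor (Vasicek) formula to back
# out the current systematic factor from the last default rate, propagate it
# with an AR(1), and integrate the conditional PD over the factor forecast.

def pit_pd_forecast(pd_ttc, last_default_rate, rho, phi, horizon):
    c = norm.ppf(pd_ttc)                                    # default threshold
    z = (c - np.sqrt(1 - rho) * norm.ppf(last_default_rate)) / np.sqrt(rho)
    forecasts = []
    for h in range(1, horizon + 1):
        mean_z = phi**h * z                                 # AR(1) mean forecast
        var_z = 1 - phi**(2 * h)                            # forecast variance
        pd_pit = norm.cdf((c - np.sqrt(rho) * mean_z)
                          / np.sqrt(1 - rho + rho * var_z))
        forecasts.append(pd_pit)
    return forecasts

print(pit_pd_forecast(pd_ttc=0.02, last_default_rate=0.035,
                      rho=0.12, phi=0.7, horizon=3))
```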
|
The conformal Galilei algebra (CGA) is a non-semisimple Lie algebra labelled
by two parameters $d$ and $\ell$. The aim of the present work is to investigate
the lowest weight representations of CGA with $d = 1$ for any integer value of
$\ell$. First we focus on the reducibility of the Verma modules. We give a
formula for the Shapovalov determinant and it follows that the Verma module is
irreducible if $\ell = 1$ and the lowest weight is nonvanishing. We prove that
the Verma modules contain many singular vectors, i.e., they are reducible when
$\ell \neq 1$. Using the singular vectors, hierarchies of partial differential
equations defined on the group manifold are derived. The differential equations
are invariant under the kinematical transformation generated by CGA. Finally we
construct irreducible lowest weight modules obtained from the reducible Verma
modules.
|
For the description of the Universe expansion, compatible with observational
data, a model of modified gravity - Lovelock gravity with dilaton - is
investigated. D-dimensional space with 3- and (D-4)-dimensional maximally
symmetric subspaces is considered. Spaces without matter and with a perfect
fluid are examined. For various forms of the theory (third order without
dilaton, and second order - Einstein-Gauss-Bonnet gravity - with and without
dilaton), stationary, power-law, exponential and exponent-of-exponent
cosmological solutions are obtained. The last two forms include solutions that
clearly describe the accelerating expansion of the 3-dimensional subspace.
There is also a set of solutions describing cosmological expansion that does
not tend to isotropization in the presence of matter.
|
We prove that the random simple cubic planar graph $\mathsf{C}_n$ with an
even number $n$ of vertices admits a novel uniform infinite cubic planar graph
(UICPG) as quenched local limit. We describe how the limit may be constructed
by a series of random blow-up operations applied to the dual map of the
type~III Uniform Infinite Planar Triangulation established by Angel and Schramm
(Comm. Math. Phys., 2003). Our main technical lemma is a contiguity relation
between $\mathsf{C}_n$ and a model where the networks inserted at the links of
the largest $3$-connected component of $\mathsf{C}_n$ are replaced by
independent copies of a specific Boltzmann network. We prove that the number of
vertices of the largest $3$-connected component concentrates at $\kappa n$ for
$\kappa \approx 0.85085$, with Airy-type fluctuations of order $n^{2/3}$. The
second-largest component is shown to have significantly smaller size
$O_p(n^{2/3})$.
|
We investigate the possible effects of short-baseline antinu_e disappearance
implied by the reactor antineutrino anomaly on the Double-Chooz determination
of theta_{13} through the normalization of the initial antineutrino flux with
the Bugey-4 measurement. We show that the effects are negligible and the value
of theta_{13} obtained by the Double-Chooz collaboration is accurate only if
Delta m^2_{41} is larger than about 3 eV^2. For smaller values of Delta
m^2_{41} the short-baseline oscillations are not fully averaged at Bugey-4 and
the uncertainties due to the reactor antineutrino anomaly can be of the same
order of magnitude as the intrinsic Double-Chooz uncertainties.
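In the standard effective two-neutrino framework behind this discussion, the short-baseline survival probability is
$$P_{\bar{\nu}_{e}\to\bar{\nu}_{e}} = 1 - \sin^{2}2\theta_{14}\, \sin^{2}\!\left(\frac{\Delta m^{2}_{41} L}{4E}\right),$$
which averages to $1 - \frac{1}{2}\sin^{2}2\theta_{14}$ only when $\Delta m^{2}_{41} L/E \gg 1$; for smaller $\Delta m^{2}_{41}$ the oscillation is not fully developed at the Bugey-4 baseline, which is the origin of the effect described above.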
|
Primordial black holes formed in an early post-inflation matter-dominated
epoch during preheating provide a novel pathway for a source of the dark matter
that utilizes known physics in combination with plausible speculations about
the role of quantum gravity. Two cases are considered here: survival of
Planck-scale relics and an early universe accretion scenario for formation of
primordial black holes of asteroid-scale masses.
|
Using large language models (LLMs) for source code has recently gained
attention. LLMs, such as Transformer-based models like Codex and ChatGPT, have
been shown to be highly capable of solving a wide range of programming
problems. However, the extent to which LLMs understand problem descriptions and
generate programs accordingly, or simply retrieve source code from the most
relevant problem in the training data based on superficial cues, has not been
determined yet. To explore this research question, we conduct experiments to
understand the robustness of several popular LLMs, the CodeGen and GPT-3.5
series models, which are capable of tackling code generation tasks in
introductory programming problems. Our experimental results show that CodeGen
and Codex are sensitive to superficial modifications of problem descriptions,
which significantly impacts code generation performance. Furthermore, we observe that Codex relies on
variable names, as randomized variables decrease the solved rate significantly.
However, the state-of-the-art (SOTA) models, such as InstructGPT and ChatGPT,
show higher robustness to superficial modifications and have an outstanding
capability for solving programming problems. This highlights the fact that
slight modifications to the prompts given to the LLMs can greatly affect code
generation performance, and careful formatting of prompts is essential for
high-quality code generation, while the SOTA models are becoming more robust to
perturbations.
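The kind of superficial perturbation probed in such experiments can be sketched as follows; the identifier extraction is deliberately naive and the `model`/`problems` interfaces are hypothetical placeholders, not an actual evaluation harness.

```python
import random
import re

# Consistently rename variables in a problem's example code, then re-measure
# the solved rate; a model relying on superficial cues degrades sharply.

def randomize_variables(code: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    keywords = {"def", "return", "for", "in", "if", "else", "while",
                "print", "range", "len", "int", "str", "input"}
    names = sorted(set(re.findall(r"\b[a-z_][a-z0-9_]*\b", code)) - keywords)
    mapping = {n: f"var_{rng.randrange(10**6)}" for n in names}
    return re.sub(r"\b[a-z_][a-z0-9_]*\b",
                  lambda m: mapping.get(m.group(0), m.group(0)), code)

def solved_rate(model, problems, n_samples=10):
    """model(prompt) -> code; each problem carries hidden unit tests."""
    solved = 0
    for prob in problems:
        prompt = randomize_variables(prob.description_with_examples)
        if any(prob.passes_tests(model(prompt)) for _ in range(n_samples)):
            solved += 1
    return solved / len(problems)
```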
|
We classify the canonical threefold singularities that allow an effective
two-torus action. This extends classification results of Mori on terminal
threefold singularities and of Ishida and Iwashita on toric canonical threefold
singularities. Our classification relies on lattice point emptiness of certain
polytopal complexes with rational vertices. Scaling the polytopes by the least
common multiple $k$ of the respective denominators, we investigate
$k$-emptiness of polytopes with integer vertices. We show that two dimensional
$k$-empty polytopes either are sporadic or come in series given by Farey
sequences. We finally present the Cox ring iteration tree of the classified
singularities, where all roots, i.e. all spectra of factorial Cox rings, are
generalized compound du Val singularities.
|
We construct a supersymmetric version of the triplet Higgs model for neutrino
masses, which can generate a baryon asymmetry of the Universe through
lepton-number violation and is consistent with the gravitino constraints.
|
We show how to create maximal entanglement between spectrally distinct
solid-state emitters embedded in a waveguide interferometer. By revealing the
rich underlying structure of multi-photon scattering in emitters, we show that
a two-photon input state can generate deterministic maximal entanglement even
for emitters with significantly different transition energies and line-widths.
The optimal frequency of the input is determined by two competing processes:
which-path erasure and interaction strength. We find that smaller spectral
overlap can be overcome with higher photon numbers, and quasi-monochromatic
photons are optimal for entanglement generation. Our work provides a new
methodology for solid-state entanglement generation, where the requirement for
perfectly matched emitters can be relaxed in favour of optical state
optimisation.
|
A very general class of axially-symmetric metrics in general relativity (GR)
that includes rotations is used to discuss the dynamics of
rotationally-supported galaxies. The exact vacuum solutions of the Einstein
equations for this extended Weyl class of metrics allow us to deduce rigorously
the following: (i) GR rotational velocity always exceeds the Newtonian velocity
(thanks to Lenz's law in GR); (ii) A non-vanishing intrinsic angular momentum
($J$) for a galaxy demands the asymptotic constancy of the Weyl (vectorial)
length parameter ($a$) - a behavior identical to that found for the Kerr metric;
(iii) Asymptotic constancy of the same parameter $a$ also demands a plateau in
the rotational velocity. Unlike the Kerr metric, the extended Weyl metric can
be, and has been, continued within the galaxy, and it has been shown under what
conditions Gau\ss\ \&\ Amp\'ere laws emerge along with Ludwig's extended GEM
theory with its attendant non-linear rate equations for the velocity field.
Better estimates (than those from the Newtonian theory) for the escape velocity
of the Sun and a reasonable rotation curve \&\ $J$ for our own galaxy have been
presented.
|
In this work a robust clustering algorithm for stationary time series is
proposed. The algorithm is based on the use of estimated spectral densities,
which are considered as functional data, as the basic characteristic of
stationary time series for clustering purposes. A robust algorithm for
functional data is then applied to the set of spectral densities. Trimming
techniques and restrictions on the scatter within groups reduce the effect of
noise in the data and help to prevent the identification of spurious clusters.
The procedure is tested in a simulation study, and is also applied to a real
data set.
|
In this paper, association results from genome-wide association studies
(GWAS) are combined with a deep learning framework to test the predictive
capacity of statistically significant single nucleotide polymorphisms (SNPs)
associated with the obesity phenotype. Our approach demonstrates the potential of
deep learning as a powerful framework for GWAS analysis that can capture
information about SNPs and the important interactions between them. Basic
statistical methods and techniques for the analysis of genetic SNP data from
population-based genome-wide studies have been considered. Statistical
association testing between individual SNPs and obesity was conducted under an
additive model using logistic regression. Four subsets of loci after
quality-control (QC) and association analysis were selected, with P-values lower
than 1x10^-5 (5 SNPs), 1x10^-4 (32 SNPs), 1x10^-3 (248 SNPs) and 1x10^-2 (2465
SNPs). A deep learning classifier is initialised using these sets of SNPs and
fine-tuned to classify obese and non-obese observations. Using a deep learning
classifier model and genetic variants with P-value < 1x10^-2 (2465 SNPs), it was
possible to obtain strong results (SE=0.9604, SP=0.9712, Gini=0.9817, LogLoss=0.1150,
AUC=0.9908 and MSE=0.0300). As the P-value threshold became more stringent
(selecting fewer SNPs), an evident deterioration
in performance was observed. Results demonstrate that single SNP analysis fails
to capture the cumulative effect of less significant variants and their overall
contribution to the outcome in disease prediction, which is captured using a
deep learning framework.
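The two-stage pipeline described above can be sketched as follows; the per-SNP test and network shape are illustrative stand-ins, not the paper's exact configuration.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.neural_network import MLPClassifier

# Stage 1: per-SNP additive logistic regression tests with P-value
# thresholding. Stage 2: a neural-network classifier on the selected SNPs.

def select_snps(G, y, threshold=1e-2):
    """G: (n_samples, n_snps) genotypes coded 0/1/2; y: binary phenotype."""
    keep = []
    for j in range(G.shape[1]):
        X = sm.add_constant(G[:, j].astype(float))
        try:
            p = sm.Logit(y, X).fit(disp=0).pvalues[1]
        except Exception:                 # degenerate SNP / perfect separation
            continue
        if p < threshold:
            keep.append(j)
    return np.array(keep, dtype=int)

def fit_classifier(G, y, snp_idx):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(G[:, snp_idx], y)
    return clf
```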
|
We study a (1+1)-dimensional directed polymer in a random environment on the
integer lattice with log-gamma distributed weights. Among directed polymers,
this model is special in the same way as the last-passage percolation model
with exponential or geometric weights is special among growth models, namely,
both permit explicit calculations. With appropriate boundary conditions, the
polymer with log-gamma weights satisfies an analogue of Burke's theorem for
queues. Building on this, we prove the conjectured values for the fluctuation
exponents of the free energy and the polymer path, in the case where the
boundary conditions are present and both endpoints of the polymer path are
fixed. For the polymer without boundary conditions and with either fixed or
free endpoint, we get the expected upper bounds on the exponents.
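In the usual formulation of this model (conventions may differ slightly from the paper's), the point-to-point partition function is
$$Z_{m,n} = \sum_{x_{\cdot} \in \Pi_{m,n}} \prod_{(i,j) \in x_{\cdot}} Y_{i,j}, \qquad Y_{i,j}^{-1} \sim \mathrm{Gamma}(\mu),$$
where $\Pi_{m,n}$ is the set of up-right lattice paths from $(1,1)$ to $(m,n)$; the conjectured KPZ values in question are the exponent $1/3$ for the fluctuations of $\log Z_{m,n}$ and $2/3$ for the transversal fluctuations of the path.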
|
We give a bijection between permutations of length 2n and certain pairs of
Dyck paths with labels on the down steps. The bijection arises from a game in
which two players alternate selecting from a set of 2n items: the permutation
encodes the players' preference ordering of the items, and the Dyck paths
encode the order in which items are selected under optimal play. We enumerate
permutations by certain statistics, AA inversions and BB inversions, which have
natural interpretations in terms of the game. We give new proofs of classical
identities such as $\sum_p \prod_{i=1}^n q^{h_i - 1} [h_i]_q = [1]_q [3]_q
\cdots [2n-1]_q$, where the sum is over all Dyck paths $p$ of length $2n$, and
the $h_i$ are the heights of the down steps of $p$.
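As a quick sanity check, the identity can be verified by brute force for small $n$, reading $h_i$ as the height at which the $i$-th down step starts:

```python
from itertools import product
from sympy import symbols, expand

# Verify: sum over Dyck paths p of prod_i q^(h_i - 1) [h_i]_q
#         = [1]_q [3]_q ... [2n-1]_q, with [k]_q = 1 + q + ... + q^(k-1).

q = symbols("q")

def qint(k):
    return sum(q**i for i in range(k))            # the q-integer [k]_q

def dyck_paths(n):
    """All Dyck paths of length 2n as +1/-1 step sequences."""
    for steps in product((1, -1), repeat=2 * n):
        partial, ok = 0, True
        for s in steps:
            partial += s
            if partial < 0:
                ok = False
                break
        if ok and partial == 0:
            yield steps

def lhs(n):
    total = 0
    for path in dyck_paths(n):
        h, term = 0, 1
        for s in path:
            if s == 1:
                h += 1
            else:                                 # down step starting at height h
                term *= q**(h - 1) * qint(h)
                h -= 1
        total += term
    return expand(total)

for n in range(1, 5):
    rhs = 1
    for k in range(1, n + 1):
        rhs *= qint(2 * k - 1)
    assert expand(lhs(n) - rhs) == 0
print("identity verified for n = 1, ..., 4")
```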
|
The nature of the exchange coupling variation in an antiferromagnetic
spin-1/2 system can be used to tailor its ground-state properties. In
particular, dimerized Heisenberg rings containing domain walls have localized
states which can serve as "flying spin qubits" when the domain walls are moved.
We show theoretically that, when two of these rings are coupled, the movement
of the domain walls leads to modulation of the effective exchange interaction
between the qubits. Appropriately chosen configurations of domain walls can
give rise to ferromagnetic effective exchange. We describe how these spin rings
may be used as basic building blocks to construct quantum spin systems whose
properties are tunable by virtue of the exchange variation within the rings.
|
We introduce Noise Injection Node Regularization (NINR), a method of
injecting structured noise into Deep Neural Networks (DNN) during the training
stage, resulting in an emergent regularizing effect. We present theoretical and
empirical evidence for substantial improvement in robustness against various
test data perturbations for feed-forward DNNs when trained under NINR. The
novelty in our approach comes from the interplay of adaptive noise injection
and initialization conditions such that noise is the dominant driver of
dynamics at the start of training. As it simply requires the addition of
external nodes without altering the existing network structure or optimization
algorithms, this method can be easily incorporated into many standard problem
specifications. We find improved stability against a number of data
perturbations, including domain shifts, with the most dramatic improvement
obtained for unstructured noise, where our technique outperforms other existing
methods such as Dropout or $L_2$ regularization, in some cases. We further show
that desirable generalization properties on clean data are generally
maintained.
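A minimal sketch of the idea, as we read it, is shown below; the architecture, noise width, and scale are illustrative, not the paper's exact prescription.

```python
import torch
import torch.nn as nn

# Noise-injection nodes: extra input units that carry pure noise during
# training, with a scale chosen large enough that noise dominates the early
# training dynamics; at test time the noise nodes are silenced.

class NoiseInjectedMLP(nn.Module):
    def __init__(self, in_dim, hidden, out_dim, n_noise=8, noise_scale=1.0):
        super().__init__()
        self.n_noise, self.noise_scale = n_noise, noise_scale
        self.net = nn.Sequential(
            nn.Linear(in_dim + n_noise, hidden),   # widened input layer
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        noise = self.noise_scale * torch.randn(
            x.shape[0], self.n_noise, device=x.device)
        if not self.training:
            noise = torch.zeros_like(noise)        # deterministic at test time
        return self.net(torch.cat([x, noise], dim=1))
```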
|
This paper introduces an Enhanced Boolean version of the Correlation Matrix
Memory (CMM), which is useful to work with binary memories. A novel Boolean
Orthonormalization Process (BOP) is presented to convert a non-orthonormal
Boolean basis, i.e., a set of non-orthonormal binary vectors (in a Boolean
sense) to an orthonormal Boolean basis, i.e., a set of orthonormal binary
vectors (in a Boolean sense). This work shows that it is possible to improve
the performance of the Boolean CMM thanks to the BOP algorithm. Besides, the BOP
algorithm has many additional fields of application, e.g., steganography,
Hopfield networks, and bi-level image processing. Finally, it is important to
mention that the BOP is an extremely stable and fast algorithm.
|
Absolute total electron-ion recombination rate coefficients of argonlike
Sc3+(3s2 3p6) ions have been measured for relative energies between electrons
and ions ranging from 0 to 45 eV. This energy range comprises all dielectronic
recombination resonances attached to 3p -> 3d and 3p -> 4s excitations. A broad
resonance with an experimental width of 0.89 +- 0.07 eV due to the 3p5 3d2 2F
intermediate state is found at 12.31 +- 0.03 eV with slight experimental
evidence for an asymmetric line shape. From R-Matrix and perturbative
calculations we infer that the asymmetric line shape may not only be due to
quantum mechanical interference between direct and resonant recombination
channels as predicted by Gorczyca et al. [Phys. Rev. A 56, 4742 (1997)], but
may partly also be due to the interaction with an adjacent overlapping DR
resonance of the same symmetry. The overall agreement between theory and
experiment is poor. Differences between our experimental and our theoretical
resonance positions are as large as 1.4 eV. This illustrates the difficulty of
accurately describing the structure of an atomic system with an open 3d shell
using state-of-the-art theoretical methods. Furthermore, we find that a
relativistic theoretical treatment of the system under study is mandatory since
the existence of experimentally observed strong 3p5 3d2 2D and 3p5 3d 4s 2D
resonances can only be explained when calculations beyond LS-coupling are
carried out.
|
The surface pattern formation on a gelation surface is analyzed using an
effective surface roughness. The spontaneous surface deformation on
DiMethylAcrylAmide (DMAA) gelation surface is controlled by temperature,
initiator concentration, and ambient oxygen. The effective surface roughness is
defined using 2-dimensional photo data to characterize the surface deformation.
Parameter dependence of the effective surface roughness is systematically
investigated. We find that decreased ambient oxygen, increased initiator
concentration, and high temperature tend to suppress the surface deformation in
a similar manner. That trend allows us to collapse all the data onto a
unified master curve. As a result, we finally obtain an empirical scaling form
of the effective surface roughness. This scaling is useful for controlling the
degree of surface patterning. However, the actual dynamics of this pattern
formation remains unresolved.
|
We study 2d Ising Field Theory (IFT) in the low-temperature phase in
lightcone quantization, and show that integrating out zero modes generates a
very compact form for the effective lightcone interaction that depends on the
finite volume vacuum expectation value of the $\sigma$ operator. This form is
most naturally understood in a conformal basis for the lightcone Hilbert space.
We further verify that this simple form reproduces to high accuracy results for
the spectra, the $c$-function, and the form-factors from integrability methods
for the magnetic deformation of IFT. For generic non-integrable values of
parameters we also compute the above observables and compare our numeric
results to those of equal-time truncation. In particular, we report on new
measurements of various bound-state form-factors as well as the stress-tensor
spectral density. We find that the stress tensor spectral density provides
additional evidence that certain resonances of IFT are surprisingly narrow,
even at generic strong coupling. Explicit example code for constructing the
effective Hamiltonian is included in an appendix.
|
We present results of three SLD analyses: our final determination of the rate
of gluon splitting into b-bbar, an improved measurement of the inclusive b
quark fragmentation function in Z0 decays, and a preliminary first measurement
of the energy correlation between the two leading B hadrons in Z0 decays. Our
results are obtained using hadronic Z0 decays produced in e+e- annihilations at
the Stanford Linear Collider (SLC) between 1996 and 1998 and collected in the
SLC Large Detector (SLD). In this period, we used an upgraded vertex detector
with wide acceptance and excellent impact parameter resolution, thus improving
considerably our tagging capability for low-energy B hadrons.
|
We perform a direct search for an isotropic stochastic gravitational-wave
background (SGWB) produced by cosmic strings in the Parkes Pulsar Timing Array
second data release. We find no evidence for such an SGWB, and therefore place
$95\%$ confidence level upper limits on the cosmic string tension, $G\mu$, as a
function of the reconnection probability, $p$, which can be less than 1 in the
string-theory-inspired models. The upper bound on the cosmic string tension is
$G\mu \lesssim 5.1 \times 10^{-10}$ for $p = 1$, which is about five orders of
magnitude tighter than the bound derived from the null search for individual
gravitational wave bursts from cosmic string cusps in the PPTA DR2.
|
We consider the torsional rigidity and the principal eigenvalue related to
the $p$-Laplace operator. The goal is to find upper and lower bounds to
products of suitable powers of the quantities above in various classes of
domains. The limit cases $p=1$ and $p=\infty$ are also analyzed; these amount
to considering the Cheeger constant of a domain and functionals involving the
distance function from the boundary.
|
We present a study of higher order QCD corrections beyond NLO to processes
with an electroweak vector boson, W or Z, in association with jets. We focus on
the regions of high transverse momenta of commonly used differential
distributions. We employ the LoopSim method to merge NLO samples of different
multiplicity obtained from MCFM and from BLACKHAT+SHERPA in order to compute
the dominant part of the NNLO corrections for high-pT observables. We find that
these corrections are indeed substantial for a number of experimentally
relevant observables. For other observables, they lead to significant reduction
of scale uncertainties.
|
We investigate the phase diagram of a two-species Bose-Hubbard model
including a conversion term, by which two particles from the first species can
be converted into one particle of the second species, and vice-versa. The model
can be related to ultra-cold atom experiments in which a Feshbach resonance
produces long-lived bound states viewed as diatomic molecules. The model is
solved exactly by means of Quantum Monte Carlo simulations. We show that an
"inversion of population" occurs, depending on the parameters, where the second
species becomes more numerous than the first species. The model also exhibits
an exotic incompressible "Super-Mott" phase where the particles from both
species can flow with signs of superfluidity, but without global supercurrent.
We present two phase diagrams, one in the (chemical potential, conversion)
plane, the other in the (chemical potential, detuning) plane.
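Schematically, in our notation (not necessarily the paper's), the Hamiltonian of such an atom-molecule Bose-Hubbard model has the form
$$H = -t_{a}\sum_{\langle i,j\rangle} a_{i}^{\dagger}a_{j} - t_{b}\sum_{\langle i,j\rangle} b_{i}^{\dagger}b_{j} + (\text{on-site interactions}) + D\sum_{i} n_{i}^{b} - \mu\sum_{i}\left(n_{i}^{a} + 2 n_{i}^{b}\right) + g\sum_{i}\left(b_{i}^{\dagger}a_{i}a_{i} + a_{i}^{\dagger}a_{i}^{\dagger}b_{i}\right),$$
where $a$ ($b$) annihilates a particle of the first (second) species, $D$ is the detuning, and the conversion term $g$ turns two particles of the first species into one of the second and back; the chemical potential couples to the conserved combination $n^{a} + 2n^{b}$.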
|
We report the first observation of color-suppressed $\overline{B}^0\to D^0
\pi^0$ and $D^{(*)0} \omega$ decays and evidence of $\overline{B}^0 \to D^{*0}
\pi^0$ and $D^{(*)0} \eta$. The branching fractions are found to be ${\cal B}
(\overline{B}^0\to D^0 \pi^0) = (2.9 ^{+0.4}_{-0.3} \pm 0.6) \times 10^{-4}$,
${\cal B} (\overline{B}^0 \to D^0 \omega) = (1.7 ^{+0.5 +0.3}_{-0.4 -0.4})
\times 10^{-4}$, and ${\cal B} (\overline{B}^0 \to D^{*0} \omega) = (3.4
^{+1.3}_{-1.1}\pm 0.8) \times 10^{-4}$. The analysis is based on a data sample
of 21.3 fb$^{-1}$ collected at the $\Upsilon(4S)$ resonance by the Belle
detector at the KEKB $e^{+} e^{-}$ collider.
|
The off-shell anomalous chromomagnetic dipole moment of the standard model
quarks ($u$, $d$, $s$, $c$ and $b$), at the $Z$ gauge boson mass scale, is
computed by using the $\overline{\textrm{MS}}$ scheme. The numerical results
disagree with all the previous predictions reported in the literature and show
a discrepancy of up to two orders of magnitude in certain situations.
|
Integrated Kerr micro-combs, a powerful source of many wavelengths for
photonic RF and microwave signal processing, are particularly useful for
transversal filter systems. They have many advantages including a compact
footprint, high versatility, large numbers of wavelengths, and wide bandwidths.
We review recent progress on photonic RF and microwave high bandwidth temporal
signal processing based on Kerr micro-combs with spacings from 49 to 200 GHz. We
cover integral and fractional Hilbert transforms, differentiators as well as
integrators. The potential of optical micro-combs for RF photonic applications
in functionality and ability to realize integrated solutions is also discussed.
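Recall the principle behind these systems: each comb line provides one weighted, delayed tap, so the synthesized response is, schematically,
$$y(t) = \sum_{n=0}^{N-1} a_{n}\, x(t - n\,\Delta t) \quad\Longleftrightarrow\quad H(\omega) = \sum_{n=0}^{N-1} a_{n}\, e^{-i\omega n \Delta t},$$
where $N$ is the number of comb lines used, $\Delta t$ is the per-line delay (set, for example, by a dispersive fibre), and the tap weights $a_{n}$ are programmed by spectral shaping to realize Hilbert transformers, differentiators, or integrators.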
|
Tests and studies concerning the design and performance of the GlueX Central
Drift
Chamber (CDC) are presented. A full-scale prototype was built to test and
steer the mechanical and electronic design. Small scale prototypes were
constructed to test for sagging and to do timing and resolution studies of the
detector. These studies were used to choose the gas mixture and to program a
Monte Carlo simulation that can predict the detector response in an external
magnetic field. Particle identification and charge division possibilities were
also investigated.
|
Machine Reading Comprehension (MRC) has recently become enormously popular
and has attracted a lot of attention. However, the existing reading
comprehension datasets are mostly in English. In this paper, we introduce a
Span-Extraction dataset for Chinese machine reading comprehension to add
language diversity in this area. The dataset is composed of nearly 20,000 real
questions annotated on Wikipedia paragraphs by human experts. We also annotated
a challenge set which contains questions that require comprehensive
understanding and multi-sentence inference throughout the context. We present
several baseline systems as well as anonymous submissions to demonstrate the
difficulties in this dataset. With the release of the dataset, we hosted the
Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC
2018). We hope the release of the dataset can further accelerate Chinese
machine reading comprehension research. Resources are available:
https://github.com/ymcui/cmrc2018
|
We solve Klein-Gordon equation for massless scalars on d+1 dimensional
Minkowski (Euclidean) space in terms of the Cauchy data on the hypersurface
t=0. By inserting the solution into the action of massless scalars in Minkowski
(Euclidean) space, we obtain the action of the dual theory on the boundary t=0, which
is exactly the holographic dual of conformally coupled scalars on d+1
dimensional (Euclidean anti) de Sitter space obtained in (A)dS/CFT
correspondence. The observed equivalence of dual theories is explained using
the one-to-one map between conformally coupled scalar fields on Minkowski
(Euclidean) space and (Euclidean anti) de Sitter space which is an isomorphism
between the hypersurface t=0 of Minkowski (Euclidean) space and the boundary of
(A)dS space.
|
We consider an extension of the conventional quantum Heisenberg algebra,
assuming that coordinates as well as momenta fulfil nontrivial commutation
relations. As a consequence, a minimal length and a minimal mass scale are
implemented. Our commutators do not depend on positions and momenta and we
provide an extension of the coordinate coherent state approach to
Noncommutative Geometry. We explore, as toy model, the corresponding quantum
field theory in a (2+1)-dimensional spacetime. Then we investigate the more
realistic case of a (3+1)-dimensional spacetime, foliated into noncommutative
planes. As a result, we obtain propagators, which are finite in the ultraviolet
as well as the infrared regime.
|
We examine Hawking radiation for a (2+1)-dimensional spinning black hole and
study the interesting possibility of tunneling through the event horizon which
acts as a classically forbidden barrier. Our finding shows it to be much lower
than its nonrotating counterpart. We further explore the associated
thermodynamics in terms of Hawking temperature and give estimates of black hole
parameters like the surface gravity and entropy.
|
So far the study of black hole perturbations has been mostly focussed upon
classical black holes with singularities at the origin hidden by an event
horizon. Compared to those, regular black holes are a completely new class
of solutions, arising out of a modification of the general theory of relativity
by coupling gravity to an external form of matter. It is therefore extremely
important to study the behaviour of such regular black holes under different
types of perturbations. Recently a new regular Bardeen black hole solution with
types of perturbations. Recently a new regular Bardeen black hole solution with
a de Sitter branch has been proposed by Fernando. We compute the quasi-normal
(QN) frequencies for the regular Bardeen de Sitter (BdS) black hole due to
massless and massive scalar field perturbations as well as the massless Dirac
perturbations. We analyze the behaviour of both real and imaginary parts of
quasinormal frequencies by varying different parameters of the theory.
|
Generative retrieval (GR) has emerged as a transformative paradigm in search
and recommender systems, leveraging numeric-based identifier representations to
enhance efficiency and generalization. Notably, methods like TIGER, employing
Residual Quantization-based Semantic Identifiers (RQ-SID), have shown
significant promise in e-commerce scenarios by effectively managing item IDs.
However, a critical issue, termed the "\textbf{Hourglass}" phenomenon, occurs in
RQ-SID, where intermediate codebook tokens become overly concentrated,
hindering the full utilization of generative retrieval methods. This paper
analyses and addresses this problem by identifying data sparsity and
long-tailed distribution as the primary causes. Through comprehensive
experiments and detailed ablation studies, we analyze the impact of these
factors on codebook utilization and data distribution. Our findings reveal that
the "Hourglass" phenomenon substantially impacts the performance of RQ-SID in
generative retrieval. We propose effective solutions to mitigate this issue,
thereby significantly enhancing the effectiveness of generative retrieval in
real-world E-commerce applications.
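One simple way to diagnose the phenomenon is to measure how concentrated token usage is at each RQ level; the sketch below is an illustrative diagnostic, not the paper's method, with a sharp entropy dip at the middle level signalling the "Hourglass" shape.

```python
import numpy as np

# Per-level codebook usage entropy for residual-quantization semantic IDs.
# `codes` is an (n_items, n_levels) array of codebook indices.

def level_entropies(codes, codebook_size):
    ents = []
    for level in range(codes.shape[1]):
        counts = np.bincount(codes[:, level], minlength=codebook_size)
        p = counts / counts.sum()
        p = p[p > 0]
        ents.append(float(-(p * np.log2(p)).sum()))   # bits of entropy per level
    return ents
```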
|
On the basis of recent precise measurements of the electric form factor of
the proton, the Zemach moments, needed as input parameters for the
determination of the proton rms radius from the measurement of the Lamb shift
in muonic hydrogen, are calculated. It turns out that the new moments give an
uncertainty as large as the presently stated error of the recent Lamb shift
measurement of Pohl et al.. De Rujula's idea of a large Zemach moment in order
to reconcile the five standard deviation discrepancy between the muonic Lamb
shift determination and the result of electronic experiments is shown to be in
clear contradiction with experiment. Alternative explanations are touched upon.
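For orientation, the moment most relevant to the muonic-hydrogen Lamb shift is, in the standard definition, the third Zemach moment
$$\langle r^{3} \rangle_{(2)} = \int \mathrm{d}^{3}r_{1}\, \mathrm{d}^{3}r_{2}\; \rho_{E}(r_{1})\, \rho_{E}(r_{2})\, \left|\mathbf{r}_{1} - \mathbf{r}_{2}\right|^{3},$$
i.e. the third moment of the charge distribution convolved with itself, which is what the precise form factor measurements constrain here.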
|
This is the first of a series of papers presenting the Modules for
Experiments in Stellar Astrophysics (MESA) Isochrones and Stellar Tracks (MIST)
project, a new comprehensive set of stellar evolutionary tracks and isochrones
computed using MESA, a state-of-the-art open-source 1D stellar evolution
package. In this work, we present models with solar-scaled abundance ratios
covering a wide range of ages ($5 \leq \rm \log(Age)\;[yr] \leq 10.3$), masses
($0.1 \leq M/M_{\odot} \leq 300$), and metallicities ($-2.0 \leq \rm [Z/H] \leq
0.5$). The models are self-consistently and continuously evolved from the
pre-main sequence to the end of hydrogen burning, the white dwarf cooling
sequence, or the end of carbon burning, depending on the initial mass. We also
provide a grid of models evolved from the pre-main sequence to the end of core
helium burning for $-4.0 \leq \rm [Z/H] < -2.0$. We showcase extensive
comparisons with observational constraints as well as with some of the most
widely used existing models in the literature. The evolutionary tracks and
isochrones can be downloaded from the project website at
http://waps.cfa.harvard.edu/MIST/.
|
When two smooth manifold bundles over the same base are fiberwise
tangentially homeomorphic, the difference is measured by a homology class in
the total space of the bundle. We call this the relative smooth structure
class. Rationally and stably, this is a complete invariant. We give a more or
less complete and self-contained exposition of this theory which is a
reformulation of some of the results of [7].
An important application is the computation of the Igusa-Klein higher
Reidemeister torsion invariants of these exotic smooth structures. Namely, the
higher torsion invariant is equal to the Poincar\'e dual of the image of the
smooth structure class in the homology of the base. This is proved in the
companion paper [11] written by the first two authors.
|
Let $Oct_{1}^{+}$ and $Oct_{2}^{+}$ be the planar and non-planar graphs
obtained from the octahedron by 3-splitting a vertex, respectively. For
$Oct_{1}^{+}$, we prove that a 4-connected graph is $Oct_{1}^{+}$-free if and
only if it is $C_{6}^{2}$, $C_{2k+1}^{2}$ $(k \geq 2)$ or it is obtained from
$C_{5}^{2}$ by repeatedly 4-splitting vertices. We also show that a planar
graph is $Oct_{1}^{+}$-free if and only if it is constructed by repeatedly
taking 0-, 1-, 2-sums starting from $\{K_{1}, K_{2} ,K_{3}\} \cup \mathscr{K}
\cup \{Oct,L_{5} \}$, where $\mathscr{K}$ is the set of graphs obtained by
repeatedly taking the special 3-sums of $K_{4}$. For $Oct_{2}^{+}$, we prove
that a 4-connected graph is $Oct_{2}^{+}$-free if and only if it is planar,
$C_{2k+1}^{2}$ $(k \geq 2)$, $L(K_{3,3})$ or it is obtained from $C_{5}^{2}$ by
repeatedly 4-splitting vertices.
|
In this paper we study $\varphi$-minimal surfaces in $\mathbb{R}^3$ when the
function $\varphi$ is invariant under a two-parametric group of translations,
particularly those which are complete graphs over domains in $\mathbb{R}^2$. We
describe a full classification of complete flat embedded $\varphi$-minimal
surfaces if $\varphi$ is strictly monotone and characterize rotational
$\varphi$-minimal surfaces by their behavior at infinity when $\varphi$ has
quadratic growth.
|
This study proposes an algorithm for modeling compressible flows in spherical
shells in nearly incompressible and weakly compressible regimes based on an
implicit direction splitting approach. The method retains theoretically
expected convergence rates and remains stable for extremely small values of the
characteristic Mach number. The staggered spatial discretization on the MAC
stencil, commonly used in numerical methods for incompressible Navier-Stokes
equations, was found to be convenient for the discretization of the
compressible Navier-Stokes equations written in the non-conservative form in
terms of the primitive variables. This approach helped to avoid the
high-frequency oscillations without any artificial stabilization terms.
Nonlinear Picard iterations with the splitting error reduction were also
implemented to allow one to obtain a solution of the fully nonlinear system of
equations. These results, alongside excellent parallel performance, prove the
viability of the direction splitting approach in large-scale high-resolution
high-performance simulations of atmospheric and oceanic flows.
|
We study the magnetic phase diagram of the $J_1$--$J_2$ Heisenberg
antiferromagnet on a honeycomb lattice at the strongly frustrated point
$J_2/J_1=1/2$ using large-scale Monte Carlo simulations. At low temperatures we
find three different field regimes, each characterized by different broken
discrete symmetries. In low magnetic fields up to $h_{c1}/J_1\approx 2.9$ the
$Z_3$ rotational lattice symmetry is spontaneously broken while a
1/2-magnetization plateau is stabilized around $h_{c2}/J_1=4$. The collinear
plateau state and the coplanar state in higher fields break the $Z_4$
translational symmetry and correspond to triple-$q$ magnetic structures. The
intermediate phase $h_{c1}<h<h_{c2}$ has an interesting symmetry structure,
breaking simultaneously the $Z_3$ and $Z_4$ symmetries. At much lower
temperatures the spatial broken discrete symmetries coexist with the quasi
long-range order of the transverse spin components.
|
Global deep-learning weather prediction models have recently been shown to
produce forecasts that rival those from physics-based models run at operational
centers. It is unclear whether these models have encoded atmospheric dynamics
or simply perform pattern matching that produces the smallest forecast error. Answering
this question is crucial to establishing the utility of these models as tools
for basic science. Here we subject one such model, Pangu-weather, to a set of
four classical dynamical experiments that do not resemble the model training
data. Localized perturbations to the model output and the initial conditions
are added to steady time-averaged conditions, to assess the propagation speed
and structural evolution of signals away from the local source. Perturbing the
model physics by adding a steady tropical heat source results in a classical
Matsuno--Gill response near the heating, and planetary waves that radiate into
the extratropics. A localized disturbance on the winter-averaged North Pacific
jet stream produces realistic extratropical cyclones and fronts, including the
spontaneous emergence of polar lows. Perturbing the 500hPa height field alone
yields adjustment from a state of rest to one of wind--pressure balance over ~6
hours. Localized subtropical low pressure systems produce Atlantic hurricanes,
provided the initial amplitude exceeds about 5 hPa, and setting the initial
humidity to zero eliminates hurricane development. We conclude that the model
encodes realistic physics in all experiments, and suggest it can be used as a
tool for rapidly testing ideas before using expensive physics-based models.
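The localized perturbations used in such experiments amount to adding a smooth bump to one field of the initial state; the sketch below is an illustrative stand-in (grid shape, amplitude, and width are placeholders), independent of any particular model interface.

```python
import numpy as np

# Add a Gaussian bump to a 500 hPa geopotential-height field on a lat-lon grid
# before handing the state to the forecast model; the perturbed run is then
# compared against an unperturbed control to track signal propagation.

def add_local_perturbation(z500, lats, lons, lat0, lon0,
                           amplitude=50.0, width_deg=5.0):
    """z500: (n_lat, n_lon) height field [m]; bump amplitude in meters."""
    lat_g, lon_g = np.meshgrid(lats, lons, indexing="ij")
    dist2 = (lat_g - lat0) ** 2 + (lon_g - lon0) ** 2    # ignores lon wraparound
    return z500 + amplitude * np.exp(-dist2 / (2 * width_deg**2))
```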
|
In this paper we introduce the notion of orbit equivalence for semigroup
actions and the concept of generalized linear control system on smooth
manifold. The main goal is to prove that, under certain conditions, the
semigroup system of a generalized linear control system on a smooth manifold is
orbit equivalent to the semigroup system of a linear control system on a
homogeneous space.
|
This paper presents results of three-dimensional direct numerical simulations
(DNS) and global linear stability analyses (LSA) of a viscous incompressible
flow past a finite-length cylinder with two free flat ends. The cylindrical
axis is normal to the streamwise direction. The work focuses on the effects of
aspect ratios (in the range of $0.5\leq \rm{\small AR} \leq2$, cylinder length
over diameter) and Reynolds numbers ($Re\leq1000$ based on cylinder diameter
and uniform incoming velocity) on the onset of vortex shedding in this flow.
All important flow patterns have been identified and studied, especially as
$\rm{\small AR}$ changes. The appearance of a steady wake pattern when
$\rm{\small AR}\leq1.75$ has not been discussed earlier in the literature for
this flow. LSA based on the time-mean flow has been applied to understand the
Hopf bifurcation past which vortex shedding happens. The nonlinear DNS results
indicate that there are two vortex shedding patterns at different $Re$: one is
transient and the other is nonlinearly saturated. The vortex-shedding
frequencies of these two flow patterns correspond to the eigenfrequencies of
the two global modes in the stability analysis of the time-mean flow. Wherever
possible, we compare the results of our analyses to those of the flows past
other short-$\rm{\small AR}$ bluff bodies so that our discussion carries more
general significance.
|
Motivated by the need to study the molecular mechanism underlying Type 1
Diabetes (T1D) with the gene expression data collected from both the patients
and healthy controls at multiple time points, we propose an innovative method
for jointly estimating multiple dependent Gaussian graphical models. Compared
to the existing methods, the proposed method has a few significant advantages.
First, it includes a meta-analysis procedure to explicitly integrate
information across distinct conditions. In contrast, the existing methods often
integrate information through prior distributions or penalty functions, which
is usually less efficient. Second, instead of working on the original data, the
Bayesian step of the proposed method works on edge-wise scores, through which
the proposed method avoids inverting high-dimensional covariance matrices and
is therefore very fast. The edge-wise score forms an equivalent measure of
the partial correlation coefficient and thus provides a good summary for the
graph structure information contained in the data under each condition. Third,
the proposed method can provide an overall uncertainty measure for the edges
detected in multiple graphical models, while the existing methods only produce
a point estimate or are feasible only for problems of very small size. We prove
consistency of the proposed method under mild conditions and illustrate its
performance using simulated and real data examples. The numerical results
indicate the superiority of the proposed method over the existing ones in both
estimation accuracy and computational efficiency. Extension of the proposed
method to joint estimation of multiple mixed graphical models is
straightforward.
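For intuition, the quantity that an edge-wise score summarizes is the partial correlation; the naive computation below inverts the covariance matrix, which is exactly what the proposed method avoids in high dimensions (a toy sketch, not the paper's algorithm):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from data X (n samples x p variables).

    Feasible only for small p; the paper's edge-wise scores provide an
    equivalent measure without this explicit matrix inversion.
    """
    prec = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

X = np.random.randn(200, 10)     # toy data: 200 samples, 10 variables
print(partial_correlations(X)[0, 1])
```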
|
It is demonstrated how the right hand sides of the Lorentz Transformation
equations may be written, in a Lorentz invariant manner, as 4--vector scalar
products. This implies the existence of invariant length intervals analogous to
invariant proper time intervals. This formalism, making essential use of the
4-vector electromagnetic potential concept, provides a short derivation of the
Lorentz force law of classical electrodynamics, the conventional definition of
the magnetic field, in terms of spatial derivatives of the 4--vector potential
and the Faraday-Lenz Law. An important distinction between the physical
meanings of the space-time and energy-momentum 4--vectors is pointed out.
|
Instead of formulating the state space of a quantum field theory over one big
Hilbert space, it has been proposed by Kijowski [Kijowski 1977] to represent
quantum states as projective families of density matrices over a collection of
smaller, simpler Hilbert spaces. One can thus bypass the need to select a
vacuum state for the theory, and still be provided with an explicit and
constructive description of the quantum state space, at least as long as the
label set indexing the projective structure is countable. Because uncountable
label sets are much less practical in this context, we develop in the present
article a general procedure to trim an originally uncountable label set down to
countable cardinality. In particular, we investigate how to perform this
tightening of the label set in a way that preserves both the physical content
of the algebra of observables and its symmetries. This work is notably
motivated by applications to the holonomy-flux algebra underlying Loop Quantum
Gravity. Building on earlier work by Okolow [arXiv:1304.6330], a projective
state space was introduced for this algebra in [arXiv:1411.3592]. However, the
non-trivial structure of the holonomy-flux algebra prevents the construction of
satisfactory semi-classical states. Implementing the general procedure just
mentioned in the case of a one-dimensional version of this algebra, we show how
a discrete subalgebra can be extracted without destroying either universality or
diffeomorphism invariance. On this subalgebra, states can then be constructed
whose semi-classicality is enforced step by step, starting from collective,
macroscopic degrees of freedom and going down progressively toward smaller and
smaller scales.
|
We present a novel approach to accelerate astrophysical hydrodynamical
simulations. In astrophysical many-body simulations, GRAPE (GRAvity piPE)
system has been widely used by many researchers. However, the function of a
GRAPE system is completely fixed because a specially developed LSI serves as
its computing engine. Instead of using such an LSI, we are developing a special
purpose computing system using Field Programmable Gate Array (FPGA) chips as
the computing engine. Together with our developed programming system, we have
implemented computing pipelines for the Smoothed Particle Hydrodynamics (SPH)
method on our PROGRAPE-3 system. The SPH pipelines running on PROGRAPE-3 system
have the peak speed of 85 GFLOPS and in a realistic setup, the SPH calculation
using one PROGRAPE-3 board is 5-10 times faster than the calculation on the
host computer. Our results clearly show, for the first time, that we can
accelerate SPH simulations of simple astrophysical phenomena using the
considerable computing power offered by the hardware.
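To illustrate the kind of per-particle-pair arithmetic such a pipeline accelerates, here is a brute-force SPH density summation with the standard cubic-spline kernel (illustrative Python, not the PROGRAPE-3 pipeline code; positions, masses, and the smoothing length are toy values):

```python
import numpy as np

def w_cubic_spline(r, h):
    """Standard 3D cubic-spline SPH kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h ** 3)
    return sigma * np.where(q <= 0.5, 1 - 6 * q**2 + 6 * q**3,
                            np.where(q <= 1.0, 2 * (1 - q) ** 3, 0.0))

def density(pos, mass, h):
    """Brute-force SPH density: rho_i = sum_j m_j W(|r_i - r_j|, h).

    This O(N^2) pairwise loop is the computation offloaded to hardware
    pipelines in systems like the one described above.
    """
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (mass[None, :] * w_cubic_spline(r, h)).sum(axis=1)

pos = np.random.rand(100, 3); mass = np.full(100, 1e-3)
rho = density(pos, mass, h=0.1)
```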
|
We study spaces of modelled distributions with singular behaviour near the
boundary of a domain that, in the context of the theory of regularity
structures, allow one to give robust solution theories for singular stochastic
PDEs with boundary conditions. The calculus of modelled distributions
established in Hairer (Invent. Math. 198, 2014) is extended to this setting. We
formulate and solve fixed point problems in these spaces with a class of
kernels that is sufficiently large to cover in particular the Dirichlet and
Neumann heat kernels. These results are then used to provide solution theories
for the KPZ equation with Dirichlet and Neumann boundary conditions and for the
2D generalised parabolic Anderson model with Dirichlet boundary conditions.
In the case of the KPZ equation with Neumann boundary conditions, we show
that, depending on the class of mollifiers one considers, a "boundary
renormalisation" takes place. In other words, there are situations in which a
certain boundary condition is applied to an approximation to the KPZ equation,
but the limiting process is the Hopf-Cole solution to the KPZ equation with a
different boundary condition.
|
We report an experimental study of particle kinematics in a 3-dimensional
system of inelastic spheres fluidized by intense vibration. The motion of
particles in the interior of the medium is tracked by high speed video imaging,
yielding a spatially-resolved measurement of the velocity distribution. The
distribution is wider than a Gaussian and broadens continuously with increasing
volume fraction. The deviations from a Gaussian distribution for this
boundary-driven system are different in sign and larger in magnitude than
predictions for homogeneously driven systems. We also find correlations between
velocity components which grow with increasing volume fraction.
|
Minimal coupling leads to problems such as loss of causality if one wants to
describe charged particles of spin greater than one propagating in a constant
electromagnetic background. Regge trajectories in string theory contain such
states, so their study may allow us to investigate possible avenues to remedy
the pathologies. We present here two explicit forms, related by field
redefinitions, of the Lagrangian describing the bosonic states in the first
massive level of open superstrings in four dimensions. The first one reduces,
when the electromagnetic field is set to zero, to the Fierz-Pauli Lagrangian
for the spin-2 mode. The second one is a more compact form which simplifies the
derivation of a Fierz-Pauli system of equations of motion and constraints.
|
In this paper, we deal with the problem of object detection on remote sensing
images. Previous works have developed numerous deep CNN-based methods for
object detection on remote sensing images and report remarkable
achievements in detection performance and efficiency. However, current
CNN-based methods mostly require a large number of annotated samples to train
deep neural networks and tend to have limited generalization abilities for
unseen object categories. In this paper, we introduce a few-shot learning-based
method for object detection on remote sensing images where only a few annotated
samples are provided for the unseen object categories. More specifically, our
model contains three main components: a meta feature extractor that learns to
extract feature representations from input images, a reweighting module that
learns to adaptively assign different weights to each feature representation
from the support images, and a bounding box prediction module that carries out
object detection on the reweighted feature maps. We build our few-shot object
detection model upon YOLOv3 architecture and develop a multi-scale object
detection framework. Experiments on two benchmark datasets demonstrate that
with only a few annotated samples our model can still achieve a satisfying
detection performance on remote sensing images and the performance of our model
is significantly better than the well-established baseline models.
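A minimal PyTorch sketch of the channel-wise reweighting idea follows; the module name, network, and shapes are our assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ReweightingModule(nn.Module):
    """Maps a support image to a channel-wise reweighting vector.

    Sketch of the idea only: a small CNN pools the support image into one
    vector per class, which then rescales the meta feature map channels.
    """
    def __init__(self, in_ch=3, feat_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, support_img, query_feats):
        # support_img: (1, 3, H, W); query_feats: (B, feat_ch, h, w)
        w = self.net(support_img)           # (1, feat_ch, 1, 1)
        return query_feats * w              # class-specific feature map

feats = torch.randn(2, 256, 13, 13)
support = torch.randn(1, 3, 128, 128)
out = ReweightingModule()(support, feats)   # fed to the detection head
```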
|
In robotics, many control and planning schemes have been developed to ensure
human physical safety in human-robot interaction (HRI). The human psychological state
and the expectation towards the robot, however, are typically neglected. Even
if the robot behaviour is regarded as biomechanically safe, humans may still
react with a rapid involuntary motion (IM) caused by a startle or surprise.
Such sudden, uncontrolled motions can jeopardize safety and should be prevented
by any means. In this letter, we propose the Expectable Motion Unit (EMU),
which ensures that a certain probability of IM occurrence is not exceeded in a
typical HRI setting. Based on a model of IM occurrence generated through an
experiment with 29 participants, we establish the mapping between robot
velocity, robot-human distance, and the relative frequency of IM occurrence.
This mapping is processed towards a real-time capable robot motion generator
that limits the robot velocity during task execution if necessary. The EMU is
embedded in a holistic safety framework that integrates both physical and
psychological safety knowledge. A validation experiment showed that the EMU
successfully avoids human IM in five out of six cases.
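A minimal sketch of how such a mapping can be inverted into a real-time velocity limiter, assuming a hypothetical logistic model of IM probability; the coefficients below are illustrative, not the values fitted from the 29-participant experiment:

```python
import numpy as np

def im_probability(v, d, a=1.5, b=-3.0, c=0.0):
    """Hypothetical logistic model of involuntary-motion probability as a
    function of robot velocity v (m/s) and robot-human distance d (m).
    Coefficients are illustrative placeholders."""
    return 1.0 / (1.0 + np.exp(-(a * v + b * d + c)))

def velocity_limit(d, p_max=0.1, v_grid=np.linspace(0.0, 2.0, 201)):
    """Largest velocity whose predicted IM probability stays below p_max."""
    ok = v_grid[im_probability(v_grid, d) <= p_max]
    return ok[-1] if ok.size else 0.0

# Usage inside a motion generator: clip the commanded speed in real time.
print(velocity_limit(d=0.8))
```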
|
Robust controllers that stabilize dynamical systems even under disturbances
and noise are often formulated as solutions of nonsmooth, nonconvex
optimization problems. While methods such as gradient sampling can handle the
nonconvexity and nonsmoothness, the costs of evaluating the objective function
may be substantial, making robust control challenging for dynamical systems
with high-dimensional state spaces. In this work, we introduce multi-fidelity
variants of gradient sampling that leverage low-cost, low-fidelity models with
low-dimensional state spaces for speeding up the optimization process while
nonetheless providing convergence guarantees for a high-fidelity model of the
system of interest, which is primarily accessed in the last phase of the
optimization process. Our first multi-fidelity method initiates gradient
sampling on higher fidelity models with starting points obtained from cheaper,
lower fidelity models. Our second multi-fidelity method relies on ensembles of
gradients that are computed from low- and high-fidelity models. Numerical
experiments with controlling the cooling of a steel rail profile and laminar
flow in a cylinder wake demonstrate that our new multi-fidelity gradient
sampling methods achieve up to two orders of magnitude speedup compared to the
single-fidelity gradient sampling method that relies on the high-fidelity model
alone.
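For reference, a single-fidelity gradient-sampling step looks roughly as follows; this is a sketch of the generic technique, and the multi-fidelity variants described above warm-start or ensemble this loop across models:

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_direction(grad, x, eps=1e-2, m=10, rng=None):
    """One gradient-sampling step for a nonsmooth objective.

    Samples gradients at m random points within an eps-ball of x and
    returns the negated minimum-norm element of their convex hull,
    which serves as a descent direction.
    """
    rng = rng or np.random.default_rng(0)
    pts = x + eps * rng.standard_normal((m, x.size))
    G = np.array([grad(p) for p in pts])        # m x n gradient matrix
    w0 = np.full(m, 1.0 / m)
    res = minimize(lambda w: np.sum((w @ G) ** 2), w0,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
                   bounds=[(0, 1)] * m, method="SLSQP")
    return -(res.x @ G)                          # descent direction

# Toy nonsmooth objective f(x) = |x_0| + |x_1|, with subgradient np.sign.
d = gradient_sampling_direction(np.sign, np.array([0.3, -0.2]))
```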
|
Pairwise compatibility measure (CM) is a key component in solving the jigsaw
puzzle problem (JPP) and many of its recently proposed variants. With the rapid
rise of deep neural networks (DNNs), a trade-off between performance (i.e.,
accuracy) and computational efficiency has become a very significant issue.
Whereas an end-to-end DNN-based CM model exhibits high performance, it becomes
virtually infeasible on very large puzzles, due to its highly intensive
computation. On the other hand, exploiting the concept of embeddings to
significantly reduce the computational burden has resulted in degraded
performance, according to recent studies. This paper derives an advanced CM
model (based on modified embeddings and a new loss function, called hard batch
triplet loss) for closing the above gap between speed and accuracy; namely a CM
model that achieves SOTA results in terms of performance and efficiency
combined. We evaluated our newly derived CM on three commonly used datasets,
and obtained a reconstruction improvement of 5.8% and 19.5% for so-called
Type-1 and Type-2 problem variants, respectively, compared to the best known
results from previous CMs.
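A sketch of a batch-hard triplet loss in PyTorch is shown below; the exact mining rule of the paper's "hard batch triplet loss" may differ, and the embeddings here are random stand-ins:

```python
import torch

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    """Batch-hard triplet loss: for each anchor, mine the farthest positive
    and the closest negative inside the batch."""
    dist = torch.cdist(emb, emb)                     # pairwise L2 distances
    same = labels[:, None] == labels[None, :]
    same.fill_diagonal_(False)                       # exclude anchor itself
    pos = dist.masked_fill(~same, float("-inf"))     # positives only
    neg = dist.masked_fill(same | torch.eye(len(emb), dtype=torch.bool),
                           float("inf"))             # negatives only
    loss = (pos.max(dim=1).values - neg.min(dim=1).values + margin)
    return loss.clamp(min=0).mean()

emb = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 4, (16,))
batch_hard_triplet_loss(emb, labels).backward()
```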
|
We introduce an iterative method for computing the first eigenpair
$(\lambda_{p},e_{p})$ for the $p$-Laplacian operator with homogeneous Dirichlet
data as the limit of $(\mu_{q},u_{q})$ as $q\rightarrow p^{-}$, where $u_{q}$
is the positive solution of the sublinear Lane-Emden equation
$-\Delta_{p}u_{q}=\mu_{q}u_{q}^{q-1}$ with same boundary data. The method is
shown to work for any smooth, bounded domain. Solutions to the Lane-Emden
problem are obtained through inverse iteration of a super-solution which is
derived from the solution to the torsional creep problem. Convergence of
$u_{q}$ to $e_{p}$ is in the $C^{1}$-norm and the rate of convergence of
$\mu_{q}$ to $\lambda_{p}$ is at least $O(p-q)$. Numerical evidence is
presented.
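Schematically, and under our reading of the abstract (the precise normalization is specified in the paper), the inverse iteration solves
$$ -\Delta_{p}u^{(k+1)}=\mu^{(k)}\bigl(u^{(k)}\bigr)^{q-1}\ \mbox{in}\ \Omega,\qquad u^{(k+1)}=0\ \mbox{on}\ \partial\Omega, $$
with $q<p$, starting from the super-solution $u^{(0)}$ built from the torsion function; the limits $u_{q}\to e_{p}$ in $C^{1}$ and $\mu_{q}\to\lambda_{p}$ are then taken as $q\to p^{-}$.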
|
With the growing use of underwater acoustic communications (UWAC) for both
industrial and military operations, there is a need to ensure communication
security. A particular challenge is represented by underwater acoustic networks
(UWANs), which are often left unattended over long periods of time. Currently,
due to physical and performance limitations, UWAC packets rarely include
encryption, leaving the UWAN exposed to external attacks faking legitimate
messages. In this paper, we propose a new algorithm for message authentication
in a UWAN setting. We begin by observing that, due to the strong spatial
dependency of the underwater acoustic channel, an attacker can attempt to mimic
the channel associated with the legitimate transmitter only for a small set of
receivers, typically just for a single one. Taking this into account, our
scheme relies on trusted nodes that independently help a sink node in the
authentication process. For each incoming packet, the sink fuses beliefs
evaluated by the trusted nodes to reach an authentication decision. These
beliefs are based on estimated statistical channel parameters, chosen to be the
most sensitive to the transmitter-receiver displacement. Our simulation results
show accurate identification of an attacker's packet. We also report results
from a sea experiment demonstrating the effectiveness of our approach.
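A minimal sketch of the fusion step at the sink, assuming each trusted node reports a log-likelihood ratio for its channel-parameter estimates; the sum-and-threshold rule below is one standard fusion choice, not necessarily the paper's exact rule:

```python
import numpy as np

def fuse_beliefs(log_likelihood_ratios, threshold=0.0):
    """Fuse per-node authentication beliefs at the sink node.

    Each trusted node reports a log-likelihood ratio comparing its
    estimated channel statistics against those learned for the
    legitimate transmitter; summing LLRs (independence assumption)
    and thresholding yields the authentication decision.
    """
    return np.sum(log_likelihood_ratios) > threshold   # True = authentic

# Three trusted nodes: two consistent with the legitimate channel, one not.
print(fuse_beliefs(np.array([1.2, 0.8, -0.4])))
```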
|
Alfven turbulence caused by a statistically isotropic and homogeneous
primordial magnetic field induces correlations in the cosmic microwave
background anisotropies. The correlations are specifically between spherical
harmonic modes a_{l-1,m} and a_{l+1,m}. In this paper we approach this issue
from phase analysis of the CMB maps derived from the WMAP data sets. Using
circular statistics and return phase mapping we examine phase correlation of
\Delta l=2 for the primordial non-Gaussianity caused by the Alfven turbulence
at the epoch of recombination. Our analyses show that such specific features
from the power-law Alfven turbulence do not contribute significantly to the
phases of the maps and cannot be a source of primordial non-Gaussianity of
the CMB.
|
In this note, we derive a stability and weak-strong uniqueness principle for
volume-preserving mean curvature flow. The proof is based on a new notion of
volume-preserving gradient flow calibrations, which is a natural extension of
the concept in the case without volume preservation recently introduced by
Fischer et al. [arXiv:2003.05478]. The first main result shows that any strong
solution with certain regularity is calibrated. The second main result consists
of a stability estimate in terms of a relative entropy, which is valid in the
class of distributional solutions to volume-preserving mean curvature flow.
|
A cubic partition consists of partition pairs $(\lambda,\mu)$ such that
$\vert\lambda\vert+\vert\mu\vert=n$ where $\mu$ involves only even integers but
no restriction is placed on $\lambda$. This paper introduces the notion of
generalized cubic partitions and proves a number of new congruences akin to
the classical Ramanujan-type. The proofs emphasize three different methods. The
paper concludes with a conjecture on the rarity of the aforementioned
Ramanujan-type congruences.
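For reference, in the classical (non-generalized) case the definition above yields the generating function
$$ \sum_{n\geq 0}a(n)q^{n}=\frac{1}{(q;q)_{\infty}(q^{2};q^{2})_{\infty}}, $$
since $\lambda$ is unrestricted and $\mu$ runs over partitions into even parts only.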
|
We introduce a simple, efficient and precise polynomial heuristic for a key
NP-complete problem, minimum vertex cover. Our method is iterative and operates
in probability space. Once a stable probability solution is found we find the
true combinatorial solution from the probabilities. For system sizes which are
amenable to exact solution by conventional means, we find a correct minimum
vertex cover for all cases which we have tested, which include random graphs
and diluted triangular lattices of up to 100 sites. We present precise data for
minimum vertex cover on graphs of up to 50,000 sites. Extensions of the method
to hard-core lattice gases and other NP problems are discussed.
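The abstract does not spell out the update rule, so the following is only a plausible mean-field sketch in the same spirit, not the authors' exact scheme: iterate cover probabilities to a fixed point, round them, then repair any uncovered edges:

```python
import numpy as np

def mean_field_vertex_cover(adj, iters=200, damping=0.5):
    """Illustrative probability-space heuristic for minimum vertex cover.

    A plausible mean-field scheme: a vertex may stay out of the cover
    only if all of its neighbours are in it. Not the paper's update rule.
    """
    n = len(adj)
    p = np.full(n, 0.5)                    # p[i] = Prob(i in cover)
    for _ in range(iters):
        stay_out = np.array([np.prod(p[list(adj[i])]) for i in range(n)])
        p = damping * p + (1 - damping) * (1.0 - stay_out)
    cover = set(np.flatnonzero(p > 0.5))
    for i in range(n):                     # repair: cover leftover edges
        for j in adj[i]:
            if i not in cover and j not in cover:
                cover.add(i)
    return cover

# Toy graph: a 4-cycle, adjacency as lists of neighbours.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(sorted(mean_field_vertex_cover(adj)))
```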
|
This paper presents a puncturing technique to design length-compatible polar
codes. The punctured bits are identified with the help of differential
evolution (DE). A DE-based optimization framework is developed where the sum of
the bit-error-rate (BER) values of the information bits is minimized. We
identify a set of bits which can be avoided for puncturing in the case of
additive white Gaussian noise (AWGN) channels. This reduces the size of the
candidate puncturing patterns. Simulation results confirm the superiority of
the proposed technique over other state-of-the-art puncturing methods.
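A minimal sketch of the DE framework, with scipy's differential evolution driving a continuous score vector that is thresholded into a puncturing pattern; ber_sum is a placeholder for the BER evaluator (density evolution or Monte-Carlo simulation) that a real design would supply, and EXCLUDED stands in for the bits identified as to-be-avoided for puncturing:

```python
import numpy as np
from scipy.optimize import differential_evolution

N, P = 64, 16              # mother code length, number of punctured bits
EXCLUDED = set(range(8))   # toy stand-in for bits excluded from puncturing

def ber_sum(pattern):
    """Placeholder for the sum of information-bit BERs of the punctured
    code. This dummy merely prefers puncturing high-index positions so
    the example runs end to end."""
    return float(pattern[: N // 2].sum())

def objective(x):
    """Map DE's continuous scores to a discrete puncturing pattern:
    puncture the P allowed positions with the smallest scores."""
    order = [i for i in np.argsort(x) if i not in EXCLUDED][:P]
    pattern = np.zeros(N); pattern[order] = 1
    return ber_sum(pattern)

result = differential_evolution(objective, [(0, 1)] * N,
                                maxiter=30, seed=0, polish=False)
best = [i for i in np.argsort(result.x) if i not in EXCLUDED][:P]
```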
|
Subadditivity-type matrix inequalities for concave functions and symmetric
norms are given.
|
Every character on a graded connected Hopf algebra decomposes uniquely as a
product of an even character and an odd character (Aguiar, Bergeron, and
Sottile, math.CO/0310016).
We obtain explicit formulas for the even and odd parts of the universal
character on the Hopf algebra of quasi-symmetric functions. They can be
described in terms of Legendre's beta function evaluated at half-integers, or
in terms of bivariate Catalan numbers:
$$ C(m,n)=\frac{(2m)!(2n)!}{m!(m+n)!n!}. $$
Properties of characters and of quasi-symmetric functions are then used to
derive several interesting identities among bivariate Catalan numbers and in
particular among Catalan numbers and central binomial coefficients.
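Two such identities, $C(0,n)=\binom{2n}{n}$ and $C(1,n)=2C_n$ where $C_n$ is the $n$-th Catalan number, can be read off directly from the formula and checked numerically:

```python
from math import factorial, comb

def C(m, n):
    """Bivariate Catalan number C(m,n) = (2m)!(2n)! / (m!(m+n)!n!)."""
    return factorial(2 * m) * factorial(2 * n) // (
        factorial(m) * factorial(m + n) * factorial(n))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(8):
    assert C(0, n) == comb(2 * n, n)   # central binomial coefficient
    assert C(1, n) == 2 * catalan(n)   # twice the n-th Catalan number
```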
|
Recent works explore collaboration between humans and teams of robots. These
approaches make sense if the human is already working with the robot team; but
how should robots encourage nearby humans to join their teams in the first
place? Inspired by behavioral economics, we recognize that humans care about
more than just team efficiency -- humans also have biases and expectations for
team dynamics. Our hypothesis is that the way inclusive robots divide the task
(i.e., how the robots split a larger task into subtask allocations) should be
both legible and fair to the human partner. In this paper we introduce a
bilevel optimization approach that enables robot teams to identify high-level
subtask allocations and low-level trajectories that optimize for legibility,
fairness, or a combination of both objectives. We then test our resulting
algorithm across studies where humans watch or play with robot teams. We find
that our approach to generating legible teams makes the human's role clear, and
that humans typically prefer to join and collaborate with legible teams instead
of teams that only optimize for efficiency. Incorporating fairness alongside
legibility further encourages participation: when humans play with robots, we
find that they prefer (potentially inefficient) teams where the subtasks or
effort are evenly divided. See videos of our studies here
https://youtu.be/cfN7O5na3mg
|
We completely determine all varieties of monoids on whose free objects all
fully invariant congruences or all fully invariant congruences contained in the
least semilattice congruence permute. Along the way, we find several new monoid
varieties with a distributive subvariety lattice (only a few examples of
varieties with such a property are known so far).
|
We reconsider the evaluation of out-of-distribution (OOD) detection methods
for image recognition.
Although many studies have been conducted so far to build better OOD detection
methods, most of them follow Hendrycks and Gimpel's work for the method of
experimental evaluation. While a unified evaluation method is necessary for
fair comparison, it is questionable whether its choice of tasks and datasets
reflects real-world applications and whether the evaluation results generalize
to other OOD detection scenarios. In this paper, we experimentally
evaluate the performance of representative OOD detection methods for three
scenarios, i.e., irrelevant input detection, novel class detection, and domain
shift detection, on various datasets and classification tasks. The results show
that differences in scenarios and datasets alter the relative performance among
the methods. Our results can also be used as a guide for practitioners for the
selection of OOD detection methods.
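As a concrete example of the evaluation protocol, the baseline of Hendrycks and Gimpel scores inputs by maximum softmax probability (MSP) and measures detection with AUROC; the logits below are random stand-ins for real model outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def msp_scores(logits):
    """Maximum softmax probability (MSP), the classic baseline OOD score;
    higher means more in-distribution."""
    z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

# Evaluate detection as a binary problem: in-distribution (1) vs OOD (0).
logits_in = np.random.randn(1000, 10) * 3    # toy stand-ins for real logits
logits_out = np.random.randn(1000, 10)
scores = np.concatenate([msp_scores(logits_in), msp_scores(logits_out)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
print("AUROC:", roc_auc_score(labels, scores))
```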
|
This paper investigates the stability of Twitter counts of scientific
publications over time. For this, we conducted an analysis of the availability
statuses of over 2.6 million Twitter mentions received by the 1,154 most
tweeted scientific publications recorded by Altmetric.com up to October 2017.
Results show that of the Twitter mentions for these highly tweeted
publications, about 14.3% have become unavailable by April 2019. Deletion of
tweets by users is the main reason for unavailability, followed by suspension
and protection of Twitter user accounts. This study proposes two measures for
describing the Twitter dissemination structures of publications: Degree of
Originality (i.e., the proportion of original tweets received by a paper) and
Degree of Concentration (i.e., the degree to which retweets concentrate on a
single original tweet). Twitter metrics of publications with relatively low
Degree of Originality and relatively high Degree of Concentration are observed
to be at greater risk of becoming unstable due to the potential disappearance
of their Twitter mentions. In light of these results, we emphasize the
importance of paying attention to the potential risk of unstable Twitter
counts, and the significance of identifying the different Twitter dissemination
structures when studying the Twitter metrics of scientific publications.
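Under our reading of the two definitions (the paper's operationalization may differ in detail), the measures can be computed as follows:

```python
def degree_of_originality(n_original, n_total):
    """Proportion of a paper's Twitter mentions that are original tweets."""
    return n_original / n_total

def degree_of_concentration(retweets_per_original):
    """Share of all retweets pointing at the single most-retweeted
    original tweet (1.0 = all retweets concentrate on one tweet)."""
    total = sum(retweets_per_original)
    return max(retweets_per_original) / total if total else 0.0

# A paper with 20 original tweets out of 500 mentions, where one original
# tweet attracted 400 of the 480 retweets: low originality, high concentration.
print(degree_of_originality(20, 500), degree_of_concentration([400, 50, 30]))
```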
|
Given an arbitrary d>0 we construct a group G and a group ring element S in
Z[G] such that the spectral measure mu of S has the property that mu((0,eps)) >
C/|log(eps)|^(1+d) for small eps. In particular the Novikov-Shubin invariant of
any such S is 0. The constructed examples show that the best known upper bounds
on mu((0,eps)) are not far from being optimal.
|
We prove a generalisation of Rudin's theorem on proper holomorphic maps from
the unit ball to the case of proper holomorphic maps from pseudoellipsoids.
|
The performance of a silicon photomultiplier has been assessed at low
temperature in order to evaluate its suitability as a scintillation readout
device in liquid argon particle physics detectors. The gain, 2.1E6 at a
constant over-voltage of 4V, was measured between 25degC and -196degC and
found to be invariant with temperature, while the corresponding
single-photoelectron dark count rate fell from 1MHz to 40Hz.
thermal cycles no deterioration in the device performance was observed. The
photon detection efficiency (PDE) was assessed as a function of photon
wavelength and temperature. For an over-voltage of 4V, the PDE, found again to
be invariant with temperature, was measured as 25% for 460nm photons and 11%
for 680nm photons. Device saturation due to high photon flux rate, observed
both at room temperature and -196degC, was again found to be independent of
temperature. Although the output signal remained proportional to the input
signal so long as the saturation limit was not exceeded, the photoelectron
pulse resolution and decay time increased slightly at -196degC.
|
This article studies the problem of approximating functions belonging to a
Hilbert space $H_d$ with an isotropic or anisotropic Gaussian reproducing
kernel, $$ K_d(\mathbf{x},\mathbf{t}) =
\exp\left(-\sum_{\ell=1}^d\gamma_\ell^2(x_\ell-t_\ell)^2\right) \ \ \ \mbox{for
all}\ \ \mathbf{x},\mathbf{t}\in\mathbb{R}^d. $$ The isotropic case corresponds to using the same
shape parameters for all coordinates, namely $\gamma_\ell=\gamma>0$ for all
$\ell$, whereas the anisotropic case corresponds to varying shape parameters
$\gamma_\ell$. We are especially interested in moderate to large $d$.
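A direct numpy evaluation of this kernel, covering both the isotropic and anisotropic cases (illustrative code, not from the paper):

```python
import numpy as np

def gaussian_kernel(X, T, gammas):
    """Anisotropic Gaussian kernel K(x,t) = exp(-sum_l gamma_l^2 (x_l-t_l)^2).

    X: (n, d) and T: (m, d) point sets; gammas: length-d shape parameters.
    Passing a constant array recovers the isotropic case.
    """
    diff = X[:, None, :] - T[None, :, :]             # (n, m, d)
    return np.exp(-np.sum((gammas * diff) ** 2, axis=-1))

d = 4
X = np.random.randn(5, d)
K_iso = gaussian_kernel(X, X, np.full(d, 0.7))           # isotropic
K_ani = gaussian_kernel(X, X, 1.0 / np.arange(1, d + 1)) # anisotropic
```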
|
The electric charge density in mesoscopic superconductors with circular
symmetry, i.e. disks and cylinders, is studied within the phenomenological
Ginzburg-Landau approach. We found that even in the Meissner state there is a
charge redistribution in the sample which makes the sample edge become
negatively charged. In the vortex state there is a competition between this
Meissner charge and the vortex charge which may change the polarity of the
charge at the sample edge with increasing magnetic field. It is shown
analytically that in spite of the charge redistribution the mesoscopic sample
as a whole remains electrically neutral.
|
We report the development and implementation of hybrid methods that combine
quantum mechanics (QM) with molecular mechanics (MM) to theoretically
characterize thiolated gold clusters. We use, as training systems, structures
such as Au25(SCH2-R)18 and Au38(SCH2-R)24, which can be readily compared with
recent crystallographic data. We envision that such an approach will lead to an
accurate description of key structural and electronic signatures at a fraction
of the cost of a full quantum chemical treatment. As an example, we demonstrate
that calculations of the 1H and 13C NMR shielding constants with our proposed
QM/MM model maintain the qualitative features of a full DFT calculation, with
an order-of-magnitude increase in computational efficiency.
|
We present a method for segmenting an arbitrary number of moving objects in
image sequences using the geometry of 6 points in 2D to infer motion
consistency. The method has been evaluated on the Hopkins 155 database and
surpasses current state-of-the-art methods such as SSC, both in terms of
overall performance on two and three motions but also in terms of maximum
errors. The method works by finding initial clusters in the spatial domain, and
then classifying each remaining point as belonging to the cluster that
minimizes a motion consistency score. In contrast to most other motion
segmentation methods that are based on an affine camera model, the proposed
method is fully projective.
|
In this paper we propose some novel path planning strategies for a double
integrator with bounded velocity and bounded control inputs. First, we study
the following version of the Traveling Salesperson Problem (TSP): given a set
of points in $\real^d$, find the fastest tour over the point set for a double
integrator. We first give asymptotic bounds on the time taken to complete such
a tour in the worst case. Then, we study a stochastic version of the TSP for a
double integrator, where the points are randomly sampled from a uniform
distribution in a compact environment in $\real^2$ and $\real^3$. We propose
novel algorithms that perform within a constant factor of the optimal strategy
with high probability. Lastly, we study a dynamic TSP: given a stochastic
process that generates targets, is there a policy which guarantees that the
number of unvisited targets does not diverge over time? If such stable policies
exist, what is the minimum wait for a target? We propose novel stabilizing
receding-horizon algorithms whose performance is within a constant factor of
the optimum with high probability, in $\real^2$ as well as $\real^3$. We
also argue that these algorithms give identical performance for a particular
nonholonomic vehicle, the Dubins vehicle.
|
In the HERAPDF2.0 PDF analysis it was noted that the fit $\chi^2$ worsens
significantly at low $Q^2$ for both NLO and NNLO fits. The turn over of the
reduced cross section at low-$x$ and low $Q^2$ due to the contribution of the
longitudinal cross section $F_L$ is also not very well described. In this paper
the prediction for $F_L$ is highlighted and the corresponding extraction of
$F_2$ from the data is further investigated, showing discrepancies with the
HERAPDF2.0 description at low $x$ and $Q^2$. The effect of adding a simple
higher-twist term of the form $\sim A\,F_L/Q^2$ to the description of $F_L$ is
then studied. This results in a significantly better description of the reduced
cross-sections, $F_2$ and $F_L$ at low $x$, $Q^2$ and a significantly lower
$\chi^2$ for the NNLO fit as compared to the NLO fit. This is not the case if
the higher-twist term is added to $F_2$.
|
The Newman-Janis algorithm, which involves complex-coordinate
transformations, establishes connections between static and spherically
symmetric black holes and rotating and/or axially symmetric ones, such as
between Schwarzschild black holes and Kerr black holes, and between
Schwarzschild black holes and Taub-NUT black holes. However, the
transformations in these two examples are based on different physical mechanisms.
The former connection arises from the exponentiation of spin operators, while
the latter from a duality operation. In this paper, we mainly investigate how
the connections manifest in the dynamics of black holes. Specifically, we focus
on studying the correlations of quasinormal frequencies among Schwarzschild,
Kerr, and Taub-NUT black holes. This analysis allows us to explore the physics
of complex-coordinate transformations in the spectrum of quasinormal
frequencies.
|
Multi-speaker speech recognition of unsegmented recordings has diverse
applications such as meeting transcription and automatic subtitle generation.
With technical advances in systems dealing with speech separation, speaker
diarization, and automatic speech recognition (ASR) in the last decade, it has
become possible to build pipelines that achieve reasonable error rates on this
task. In this paper, we propose an end-to-end modular system for the LibriCSS
meeting data, which combines independently trained separation, diarization, and
recognition components, in that order. We study the effect of different
state-of-the-art methods at each stage of the pipeline, and report results
using task-specific metrics like SDR and DER, as well as downstream WER.
Experiments indicate that the problem of overlapping speech for diarization and
ASR can be effectively mitigated with the presence of a well-trained separation
module. Our best system achieves a speaker-attributed WER of 12.7%, which is
close to that of a non-overlapping ASR.
|
This paper addresses the open question "Which levels of abstraction are
appropriate in the synthetic modelling of life and cognition?" within the
framework of info-computational constructivism, treating natural
phenomena as computational processes on informational structures. At present we
lack a common understanding of the processes of life and cognition in living
organisms that details the co-construction of informational structures and
computational processes in embodied, embedded cognizing agents, both living and
artifactual ones. Starting with the definition of an agent as an entity capable
of acting on its own behalf, as an actor in Hewitt's Actor model of computation,
even systems as simple as molecules can be modelled as actors exchanging
messages (information). We adopt Kauffman's view of a living agent as something
that can reproduce and undergoes at least one thermodynamic work cycle. This
definition of living agents leads to Maturana and Varela's identification of
life with cognition. Within the info-computational constructive approach to
living beings as cognizing agents, from the simplest to the most complex living
systems, mechanisms of cognition can be studied in order to construct synthetic
model classes of artifactual cognizing agents on different levels of
organization.
|