A decrease of fracture toughness of irradiated materials is usually observed,
as reported for austenitic stainless steels in Light Water Reactors (LWRs) or
copper alloys for fusion applications. For a wide range of applications (e.g.
structural steels irradiated at low homologous temperature), the void growth and
coalescence fracture mechanism has been shown to remain predominant. As a
consequence, a comprehensive study of the effects of irradiation-induced
hardening mechanisms on void growth and coalescence in irradiated materials is
required. The effects of irradiation on ductile fracture mechanisms - void
growth to coalescence - are assessed in this study based on model experiments.
Pure copper thin tensile samples have been irradiated with protons up to 0.01
dpa. Micron-scale holes drilled through the thickness of these samples
subjected to uniaxial loading conditions allow a detailed description of void
growth and coalescence. In this study, experimental data show that physical
mechanisms of micron-scale void growth and coalescence are similar between the
unirradiated and irradiated copper. However, an acceleration of void growth is
observed in the latter case, resulting in earlier coalescence, which is
consistent with the decrease of fracture toughness reported in irradiated
materials. These results are qualitatively reproduced with numerical
simulations accounting for irradiation macroscopic hardening and decrease of
strain-hardening capability.
|
For an equation of state in which pressure is a function only of density, the
analysis of Newtonian stellar structure is simple in principle if the system is
axisymmetric, or consists of a corotating binary. It is then required only to
solve two equations: one stating that the "injection energy", $\kappa$, a
potential, is constant throughout the stellar fluid, and the other being the
integral over the stellar fluid to give the gravitational potential. An
iterative solution of these equations generally diverges if $\kappa$ is held
fixed, but converges with other choices. We investigate the mathematical reason
for this convergence/divergence by starting the iteration from an approximation
that is perturbatively different from the actual solution. A cycle of iteration
is then treated as a linear "updating" operator, and the properties of the
linear operator, especially its spectrum, determine the convergence properties.
For simplicity, we confine ourselves to spherically symmetric models in which
we analyze updating operators both in the finite dimensional space
corresponding to a finite difference representation of the problem, and in the
continuum, and we find that the fixed-$\kappa$ operator is self-adjoint and
generally has an eigenvalue greater than unity; in the particularly important
case of a polytropic equation of state with index greater than unity, we prove
that there must be such an eigenvalue. For fixed central density, on the other
hand, we find that the updating operator has only a single eigenvector, with
zero eigenvalue, and is nilpotent in finite dimension, thereby giving a
convergent solution.
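The role of the updating operator's spectrum can be illustrated with a toy linear iteration (not the stellar-structure equations themselves): a fixed-point iteration x_{k+1} = M x_k + b converges exactly when the spectral radius of the linear updating operator M is below unity, mirroring the fixed-κ versus fixed-central-density dichotomy described above. A minimal sketch, with an illustrative 2x2 operator:

```python
# Toy illustration (not the stellar-structure equations): a fixed-point
# iteration x_{k+1} = M x_k + b converges iff the spectral radius of the
# linear updating operator M is below unity.

def iterate(M, b, x, steps):
    """Apply x <- M x + b repeatedly (2x2 case, plain lists)."""
    for _ in range(steps):
        x = [M[0][0]*x[0] + M[0][1]*x[1] + b[0],
             M[1][0]*x[0] + M[1][1]*x[1] + b[1]]
    return x

# Contracting operator: both eigenvalues of M lie inside the unit disc.
M_good = [[0.5, 0.1], [0.2, 0.3]]
b = [1.0, 1.0]
x = iterate(M_good, b, [0.0, 0.0], 200)
# The fixed point solves (I - M) x = b; check the residual.
r0 = x[0] - (M_good[0][0]*x[0] + M_good[0][1]*x[1] + b[0])
r1 = x[1] - (M_good[1][0]*x[0] + M_good[1][1]*x[1] + b[1])
print(abs(r0) + abs(r1) < 1e-12)  # → True (converged)

# Operator with an eigenvalue above unity: the same iteration diverges.
M_bad = [[1.2, 0.0], [0.0, 0.3]]
y = iterate(M_bad, b, [0.0, 0.0], 200)
print(abs(y[0]) > 1e10)  # → True (diverged along the unstable eigendirection)
```

The contrast between `M_good` and `M_bad` is the finite-dimensional analogue of the fixed-κ operator's eigenvalue exceeding unity versus the nilpotent fixed-central-density operator.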
|
A drawing robot avatar is a robotic system that allows for telepresence-based
drawing, enabling users to remotely control a robotic arm and create drawings
in real-time from a remote location. The proposed control framework aims to
improve bimanual robot telepresence quality by reducing the user workload and
required prior knowledge through the automation of secondary or auxiliary
tasks. The novel method introduced here calculates the near-optimal Cartesian
end-effector pose in terms of visual feedback quality for the attached
eye-to-hand camera, taking motion constraints into consideration. The effectiveness
is demonstrated by conducting user studies of drawing reference shapes using
the implemented robot avatar compared to stationary and teleoperated camera
pose conditions. Our results demonstrate that the proposed control framework
offers improved visual feedback quality and drawing performance.
|
First results from RHIC on charged multiplicities, evolution of
multiplicities with centrality, particle ratios and transverse momentum
distributions in central and minimum bias collisions, are analyzed in a string
model which includes hard collisions, collectivity in the initial state
considered as string fusion, and rescattering of the produced secondaries.
Multiplicities and their evolution with centrality are successfully reproduced.
Transverse momentum distributions in the model show a larger $p_T$-tail than
experimental data, a disagreement which grows with increasing centrality.
Discrepancies in particle ratios appear and are examined by comparison with
previous features of the model at SPS.
|
In this article, we characterise the operadic variety of commutative
associative algebras over a field via a (categorical) condition: the
associativity of the so-called cosmash product. This condition, which is
closely related to commutator theory, is quite strong: for example, groups do
not satisfy it. However, in the case of commutative associative algebras, the
cosmash product is nothing more than the tensor product, which explains why in
this case it is associative. We prove that in the setting of operadic varieties
of algebras over a field, it is the only example. Further examples in the
non-operadic case are also discussed.
|
Since the 1870s, scientists have been gaining deep insight into Lie groups and
Lie algebras. With the development of Lie theory, Lie groups have acquired
profound significance in many branches of mathematics and physics. In Lie
theory, the exponential mapping between Lie groups and Lie algebras plays a
crucial role: it is the mechanism for passing information from Lie algebras to
Lie groups. Since many computations are performed much more easily in Lie
algebras, the exponential mapping is indispensable for studying Lie groups. In
this paper, we first put forward a novel idea for designing cryptosystems based
on Lie groups and Lie algebras. Combining this with the discrete logarithm
problem (DLP) and the factorization problem (FP), we propose some new
intractability assumptions based on the exponential mapping. Moreover, in
analogy with Boyen's scheme (AsiaCrypt 2007), we design a public key encryption
scheme based on non-Abelian factorization problems in Lie groups. Finally, our
proposal is proved to be IND-CCA2 secure in the random oracle model.
|
This is the second part of a work in which we show how to solve a large class
of Lindblad master equations for non-interacting particles on $L$ sites. Here
we concentrate on fermionic particles. In parallel to part I for bosons, but
with important differences, we show how to reduce the problem to diagonalizing
an $L \times L$ non-Hermitian matrix which, for boundary dissipative driving of
a uniform chain, is a tridiagonal bordered Toeplitz matrix. In this way, for
fermionic and spin systems alike, we can obtain analytical expressions for
the normal master modes and their relaxation rates (rapidities) and we show how
to construct the non-equilibrium steady state.
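The diagonalization step can be illustrated in the simplest special case. For a real symmetric tridiagonal Toeplitz matrix (the unbordered, Hermitian analogue of the matrix above; boundary driving modifies only the corner entries), the spectrum is known in closed form, which the following sketch verifies directly:

```python
import math

# Toy check (not the dissipative problem itself): a real symmetric
# tridiagonal Toeplitz matrix with diagonal a and off-diagonal b has
# eigenvalues a + 2 b cos(k*pi/(L+1)), k = 1..L, with sine eigenvectors.
# Boundary dissipative driving of a uniform chain modifies only the corner
# entries ("bordered" Toeplitz), perturbing this spectrum.

def tridiag_toeplitz(L, a, b):
    A = [[0.0] * L for _ in range(L)]
    for i in range(L):
        A[i][i] = a
        if i + 1 < L:
            A[i][i + 1] = A[i + 1][i] = b
    return A

L, a, b = 6, -1.0, 0.25
A = tridiag_toeplitz(L, a, b)
for k in range(1, L + 1):
    lam = a + 2 * b * math.cos(k * math.pi / (L + 1))
    v = [math.sin(j * k * math.pi / (L + 1)) for j in range(1, L + 1)]
    Av = [sum(A[i][j] * v[j] for j in range(L)) for i in range(L)]
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(L))
print("closed-form spectrum verified for L =", L)
```

The identity sin((j-1)t) + sin((j+1)t) = 2 sin(jt) cos(t), together with the vanishing of the sine eigenvector at the fictitious sites 0 and L+1, is what makes the closed form work; the bordered corners of the driven chain break exactly this boundary condition.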
|
The human cortical layer exhibits a convoluted morphology that is unique to
each individual. Conventional volumetric fMRI processing schemes overlook the
rich information provided by the underlying anatomy. We present a
method to study fMRI data on subject-specific cerebral hemisphere cortex (CHC)
graphs, which encode the cortical morphology at the resolution of voxels in
3-D. We study graph spectral energy metrics associated to fMRI data of 100
subjects from the Human Connectome Project database, across seven tasks.
Experimental results signify the strength of CHC graphs' Laplacian eigenvector
bases in capturing subtle spatial patterns specific to different functional
loads as well as experimental conditions within each task.
|
We combine soft and collinear matrix elements to define transverse momentum
dependent parton distributions (TMDs) for gluons, free from rapidity
divergences. We establish a factorization theorem at next-to-leading order for
the Higgs boson transverse momentum ($q_T$) spectrum, and use it to derive
evolution equations for gluon TMDs. The evolution for all gluon TMDs is driven
by a universal kernel, i.e. the same for all polarizations. In the region of
intermediate $q_T$ we match the unpolarized, helicity and linearly polarized
gluon distributions onto PDFs. We calculate the resummed Higgs transverse
momentum distribution at NNLL, including the contribution of the linearly
polarized gluons, and investigate the impact of non-perturbative models for the
transverse distribution of gluons inside the proton.
|
This article discusses two books on the topic of alternative logics in
science: "Deviant Logic", by Susan Haack, and "Alternative Logics: Do Sciences
Need Them?", edited by Paul Weingartner.
|
In the materials design domain, much of the data from materials calculations
are stored in different heterogeneous databases. Materials databases usually
have different data models. Therefore, users face the challenges of finding
data from adequate sources and integrating data from multiple sources.
Ontologies and ontology-based techniques can address such problems as the
formal representation of domain knowledge can make data more available and
interoperable among different systems. In this paper, we introduce the
Materials Design Ontology (MDO), which defines concepts and relations to cover
knowledge in the field of materials design. MDO is designed using domain
knowledge in materials science (especially in solid-state physics), and is
guided by the data from several databases in the materials design field. We
show the application of the MDO to materials data retrieved from well-known
materials databases.
|
For every integer $g\ge 2$ we construct 3-dimensional genus-$g$
1-handlebodies smoothly embedded in $S^4$ with the same boundary, and which are
defined by the same cut systems of their boundary, yet which are not isotopic
rel. boundary via any locally flat isotopy even when their interiors are pushed
into $B^5$. This proves a conjecture of Budney-Gabai for genus at least 2.
|
In an earlier work we have shown the global (for all initial data and all
time) well-posedness of strong solutions to the three-dimensional viscous
primitive equations of large scale oceanic and atmospheric dynamics. In this
paper we show that for certain class of initial data the corresponding smooth
solutions of the inviscid (non-viscous) primitive equations blow up in finite
time. Specifically, we consider the three-dimensional inviscid primitive
equations in a three-dimensional infinite horizontal channel, subject to
periodic boundary conditions in the horizontal directions, and with no-normal
flow boundary conditions on the solid, top and bottom, boundaries. For certain
class of initial data we reduce this system into the two-dimensional system of
primitive equations in an infinite horizontal strip with the same type of
boundary conditions; and then show that for specific sub-class of initial data
the corresponding smooth solutions of the reduced inviscid two-dimensional
system develop singularities in finite time.
|
User-generated replies to hate speech are promising means to combat hatred,
but questions about whether they can stop incivility in follow-up conversations
linger. We argue that effective replies stop incivility from emerging in
follow-up conversations - replies that elicit more incivility are
counterproductive. This study introduces the task of predicting the incivility
of conversations following replies to hate speech. We first propose a metric to
measure conversation incivility based on the number of civil and uncivil
comments as well as the unique authors involved in the discourse. Our metric
approximates human judgments more accurately than previous metrics. We then use
the metric to evaluate the outcomes of replies to hate speech. A linguistic
analysis uncovers the differences in the language of replies that elicit
follow-up conversations with high and low incivility. Experimental results show
that forecasting incivility is challenging. We close with a qualitative
analysis shedding light on the most common errors made by the best model.
|
This paper considers galactic scale Beacons from the point of view of expense
to a builder on Earth. For fixed power density in the far field, what is the
cost-optimum interstellar Beacon system? Experience shows an optimum tradeoff,
depending on transmission frequency and on antenna size and power. This emerges
by minimizing the cost of producing a desired effective isotropic radiated
power, which in turn determines the maximum range of detectability of a
transmitted signal. We derive general relations for cost-optimal aperture and
power. For linear dependence of capital cost on transmitter power and antenna
area, minimum capital cost occurs when the cost is equally divided between
antenna gain and radiated power. For non-linear power law dependence a similar
simple division occurs. This is validated in cost data for many systems;
industry uses this cost optimum as a rule-of-thumb. Costs of pulsed
cost-efficient transmitters are estimated from these relations using current
cost parameters ($/W, $/m^2) as a basis. Galactic-scale Beacons demand effective
isotropic radiated power >10^17 W, emitted powers are >1 GW, with antenna areas
> km^2. We show the scaling and give examples of such Beacons. Thrifty beacon
systems would be large and costly, have narrow searchlight beams and short
dwell times when the Beacon would be seen by an alien observer at target areas
in the sky. They may revisit an area infrequently and will likely transmit at
higher microwave frequencies, ~10 GHz. The natural corridor to broadcast is
along the galactic spiral radius or along the spiral galactic arm we are in.
Our second paper argues that nearly all SETI searches to date had little chance
of seeing such Beacons.
|
In this paper, we propose a salient-context based semantic matching method to
improve relevance ranking in information retrieval. We first propose a new
notion of salient context and define how to measure it. We then show how
the most salient context can be located with a sliding window technique.
Finally, we use the semantic similarity between a query term and the most
salient context terms in a corpus of documents to rank those documents.
Experiments on various collections from TREC show the effectiveness of our
model compared to the state-of-the-art methods.
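The sliding-window step can be sketched as follows. This is a hypothetical toy, not the authors' implementation: a crude term-overlap count stands in for the semantic similarity measure, and all names are illustrative.

```python
# Hypothetical sketch of the sliding-window step: scan fixed-size windows
# over a document and keep the one most similar to the query. A crude
# term-overlap count stands in for the paper's semantic similarity measure.

def most_salient_window(doc_tokens, query_terms, width=4):
    query = set(query_terms)
    best_score, best_window = -1, []
    for start in range(max(1, len(doc_tokens) - width + 1)):
        window = doc_tokens[start:start + width]
        score = sum(1 for tok in window if tok in query)
        if score > best_score:  # keep the leftmost best window
            best_score, best_window = score, window
    return best_window

doc = "the cat sat while the dog chased the ball outside".split()
print(most_salient_window(doc, ["dog", "chased", "ball"]))
# → ['dog', 'chased', 'the', 'ball']
```

In the paper's setting the per-term score would come from embedding similarity between query terms and window terms rather than exact matching, but the windowed maximization has the same shape.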
|
Classical dimensional analysis is one of the cornerstones of qualitative
physics and is also used in the analysis of engineering systems, for example in
engineering design. The basic power product relationship in dimensional
analysis is identical to one way of defining toric ideals in algebraic
geometry, a large and growing field. This paper exploits the toric
representation to provide a method for automatic dimensional analysis for
engineering systems. In particular, all "primitive" invariants for a particular
problem, in a well-defined sense, can be found using such methods.
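The power-product relationship can be made concrete: a monomial in the problem variables is dimensionless exactly when its exponent vector lies in the nullspace of the dimension matrix (rows = base dimensions, columns = variables). A minimal exact-arithmetic sketch, using the classic pipe-flow example (this toy solver is illustrative, not the toric-ideal machinery of the paper):

```python
from fractions import Fraction

# A monomial rho^x1 * v^x2 * d^x3 * mu^x4 is dimensionless exactly when the
# exponent vector x lies in the nullspace of the dimension matrix A
# (rows = base dimensions M, L, T; columns = variables).

def nullspace(A):
    """Basis of the nullspace of A via Gauss-Jordan over the rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[free] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -A[i][free]
        basis.append(v)
    return basis

# Pipe flow: rho has dimensions (M,L,T)=(1,-3,0), v=(0,1,-1),
# d=(0,1,0), mu=(1,-1,-1).
A = [[1, 0, 0, 1],     # mass
     [-3, 1, 1, -1],   # length
     [0, -1, 0, -1]]   # time
(v,) = nullspace(A)
print(v)  # exponents (-1, -1, -1, 1) for (rho, v, d, mu): mu/(rho*v*d),
          # the inverse Reynolds number, is the unique primitive invariant.
```

The one-dimensional nullspace recovers the Reynolds number (up to inversion), and in general an integer basis of this lattice generates the toric ideal mentioned above.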
|
Abstract simulation of one transition system by another is introduced as a
means to simulate a potentially infinite class of similar transition sequences
within a single transition sequence. This is useful for proving confluence
under invariants of a given system, as it may reduce the number of proof cases
to consider from infinity to a finite number. The classical confluence results
for Constraint Handling Rules (CHR) can be explained in this way, using CHR as
a simulation of itself. Using an abstract simulation based on a ground
representation, we extend these results to include confluence under invariants
and modulo equivalence, which had not been handled in a satisfactory way before.
|
This paper provides the first comprehensive evaluation and analysis of modern
(deep-learning) unsupervised anomaly detection methods for chemical process
data. We focus on the Tennessee Eastman process dataset, which has been a
standard litmus test to benchmark anomaly detection methods for nearly three
decades. Our extensive study will facilitate choosing appropriate anomaly
detection methods in industrial applications.
|
A $q$-Gaussian measure is a generalization of a Gaussian measure. This
generalization is obtained by replacing the exponential function with the power
function of exponent $1/(1-q)$ ($q\neq 1$). The limit case $q=1$ recovers a
Gaussian measure. For $1\leq q <3$, the set of all $q$-Gaussian densities over
the real line satisfies a certain regularity condition to define information
geometric structures such as an entropy and a relative entropy via escort
expectations. The ordinary expectation of a random variable is the integral of
the random variable with respect to its law. Escort expectations allow us to
replace the law with other measures. A choice of escort expectation on the
set of all $q$-Gaussian densities determines an entropy and a relative entropy.
One of the most important escort expectations on the set of all $q$-Gaussian
densities is the $q$-escort expectation, since this escort expectation
determines the Tsallis entropy and the Tsallis relative entropy.
The phenomenon of gauge freedom of entropies is that different escort
expectations determine the same entropy, but different relative entropies. In
this note, we first introduce a refinement of the $q$-logarithmic function.
Then we demonstrate the phenomenon on an open set of all $q$-Gaussian densities
over the real line by using the refined $q$-logarithmic functions. We write
down the corresponding Riemannian metric.
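For reference, the standard (unrefined) $q$-logarithm underlying $q$-Gaussian densities is $\ln_q(x) = (x^{1-q}-1)/(1-q)$ for $q \neq 1$, recovering the ordinary logarithm as $q \to 1$. A minimal numerical check (the paper's refined $q$-logarithm is not reproduced here):

```python
import math

# Standard q-logarithm: ln_q(x) = (x^(1-q) - 1)/(1 - q) for q != 1,
# recovering the ordinary logarithm in the limit q -> 1.

def log_q(x, q):
    if q == 1:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# The limit q -> 1 reproduces ln(x).
for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(log_q(x, 1.0 + 1e-9) - math.log(x)) < 1e-6

# For q = 2, ln_2(x) = 1 - 1/x.
print(log_q(2.0, 2.0))  # → 0.5
```

Within the range $1 \leq q < 3$ considered above, this deformation is what replaces the exponential in the Gaussian density and what the escort expectations are built around.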
|
Forward physics with CMS at the LHC covers a wide range of physics subjects,
including very low-x QCD, underlying event and multiple interactions
characteristics, gamma-mediated processes, shower development at the energy
scale of primary cosmic ray interactions with the atmosphere, diffraction in
the presence of a hard scale and even MSSM Higgs discovery in central exclusive
production. We describe the forward detector instrumentation around the CMS
interaction point and present selected feasibility studies to illustrate their
physics potential.
|
This paper introduces Interactive Tables (iTBLS), a dataset of interactive
conversations situated in tables from scientific articles. This dataset is
designed to facilitate human-AI collaborative problem-solving through
AI-powered multi-task tabular capabilities. In contrast to prior work that
models interactions as factoid QA or procedure synthesis, iTBLS broadens the
scope of interactions to include mathematical reasoning, natural language
manipulation, and expansion of existing tables from natural language
conversation by delineating interactions into one of three tasks:
interpretation, modification, or generation. Additionally, the paper presents a
suite of baseline approaches to iTBLS, utilizing zero-shot prompting and
parameter-efficient fine-tuning for different computing situations. We also
introduce a novel multi-step approach and show how it can be leveraged in
conjunction with parameter-efficient fine-tuning to achieve the
state-of-the-art on iTBLS; outperforming standard parameter-efficient
fine-tuning by up to 15% on interpretation, 18% on modification, and 38% on
generation.
|
Nowadays large-scale distributed machine learning systems have been deployed
to support various analytics and intelligence services in IT firms. To train on a
large dataset and derive a prediction/inference model, e.g., a deep neural
network, multiple workers are run in parallel on partitions of the input
dataset, updating shared model parameters. In a shared cluster handling
multiple training jobs, a fundamental issue is how to efficiently schedule jobs
and set the number of concurrent workers to run for each job, such that server
resources are maximally utilized and model training can be completed in time.
Targeting a distributed machine learning system using the parameter server
framework, we design an online algorithm for scheduling the arriving jobs and
deciding the adjusted numbers of concurrent workers and parameter servers for
each job over its course, to maximize overall utility of all jobs, contingent
on their completion times. Our online algorithm design utilizes a primal-dual
framework coupled with efficient dual subroutines, achieving good long-term
performance guarantees with polynomial time complexity. Practical effectiveness
of the online algorithm is evaluated using trace-driven simulation and testbed
experiments, which demonstrate that it outperforms commonly
adopted scheduling algorithms in today's cloud systems.
|
A simple class of unitary renormalization group transformations that force
hamiltonians towards a band-diagonal form produce few-body interactions in
which low- and high-energy states are decoupled, which can greatly simplify
many-body calculations. One such transformation has been applied to
phenomenological and effective field theory nucleon-nucleon interactions with
success, but further progress requires consistent treatment of at least the
three-nucleon interaction. In this paper we demonstrate in an extremely simple
model how these renormalization group transformations consistently evolve two-
and three-body interactions towards band-diagonal form, and introduce a
diagrammatic approach that generalizes to the realistic nuclear problem.
|
Simulation of the chemical activity of corrugated graphene within density
functional theory predicts an enhancement of its chemical activity if the ratio
of the height of the corrugation (ripple) to its radius is larger than 0.07.
Further growth of the curvature of the ripples results in the appearance of
midgap states, which leads to an additional strong increase of the chemisorption
energy. These results open a way for tunable functionalization of graphene:
depending on the curvature of the ripples, one can provide both homogeneous (for
small curvatures) and spot-like (for large curvatures) functionalization.
|
Let a torus act on a compact oriented manifold $M$ with isolated fixed
points, with an additional mild assumption that its isotropy submanifolds are
orientable. We associate a signed labeled multigraph encoding the fixed point
data (weights and signs at fixed points and isotropy submanifolds) of the
manifold. We study operations on $M$ and its multigraph, (self) connected sum
and blow up, etc. When the circle group acts on a 6-dimensional $M$, we
classify such a multigraph by proving that we can convert it into the empty
graph by successively applying two types of operations. In particular, this
classifies the fixed point data of any such manifold. We prove this by showing
that for any such manifold, we can successively take equivariant connected sums
at fixed points with itself, $\mathbb{CP}^3$, and 6-dimensional analogue $Z_1$
and $Z_2$ of the Hirzebruch surfaces (and these with opposite orientations) to
a fixed point free action on a compact oriented 6-manifold. We also classify a
multigraph for a torus action on a 4-dimensional $M$.
|
Topological materials occupy center stage in modern condensed matter
physics because of their robust metallic edge or surface states protected by
the topological invariant, characterizing the electronic band structure in the
bulk. Higher-order topological (HOT) states extend this usual bulk-boundary
correspondence: they host modes localized at lower-dimensional
boundaries, such as corners and hinges. Here we theoretically demonstrate that
dislocations, ubiquitous defects in crystalline materials, can probe
higher-order topology, recently realized in various platforms. We uncover that
HOT insulators respond to dislocations through symmetry protected finite-energy
in-gap electronic modes, localized at the defect core, which originate from an
interplay between the orientation of the HOT mass domain wall and the Burgers
vector of the dislocation. As such, these modes become gapless only when the
Burgers vector points toward lower-dimensional gapless boundaries. Our findings
are consequential for the systematic probing of the extended bulk-boundary
correspondence in a broad range of HOT crystals, and photonic and phononic or
mechanical metamaterials through the bulk topological lattice defects.
|
This paper studies the relative importance of attention heads in
Transformer-based models to aid their interpretability in cross-lingual and
multi-lingual tasks. Prior research has found that only a few attention heads
are important in each mono-lingual Natural Language Processing (NLP) task and
pruning the remaining heads leads to comparable or improved performance of the
model. However, the impact of pruning attention heads is not yet clear in
cross-lingual and multi-lingual tasks. Through extensive experiments, we show
that (1) pruning a number of attention heads in a multi-lingual
Transformer-based model has, in general, positive effects on its performance in
cross-lingual and multi-lingual tasks and (2) the attention heads to be pruned
can be ranked using gradients and identified with a few trial experiments. Our
experiments focus on sequence labeling tasks, with potential applicability on
other cross-lingual and multi-lingual tasks. For comprehensiveness, we examine
two pre-trained multi-lingual models, namely multi-lingual BERT (mBERT) and
XLM-R, on three tasks across 9 languages each. We also discuss the validity of
our findings and their extensibility to truly resource-scarce languages and
other task settings.
|
We theoretically consider the substrate-induced Majorana localization length
renormalization in nanowires in contact with a bulk superconductor in the
strong tunnel-coupled regime, showing explicitly that this renormalization
depends strongly on the transverse size of the one-dimensional nanowires. For
metallic (e.g. Fe on Pb) or semiconducting (e.g. InSb on Nb) nanowires, the
renormalization effect is found to be very strong and weak respectively because
the transverse confinement size in the two situations happens to be 0.5 nm
(metallic nanowire) and 20 nm (semiconducting nanowire). Thus, the Majorana
localization length could be very short (long) for metallic (semiconducting)
nanowires even for the same values of all other parameters (except for the
transverse wire size). We also show that any tunneling conductance measurements
in such nanowires, carried out at temperatures and/or energy resolutions
comparable to the induced superconducting energy gap, cannot distinguish
between the existence of the Majorana modes or ordinary subgap fermionic states
since both produce very similar broad and weak peaks in the subgap tunneling
conductance independent of the localization length involved. Only low
temperature (and high resolution) tunneling measurements manifesting sharp zero
bias peaks can be considered to be signatures of Majorana modes in topological
nanowires.
|
We observed the pulsar PSR J1648-4611 with Suzaku. Two X-ray sources, Suzaku
J1648-4610 (Src A) and Suzaku J1648-4615 (Src B), were found in the field of
view. Src A is coincident with the pulsar PSR J1648-4611, which was also
detected by the Fermi Gamma-ray Space Telescope. A hard-band image indicates
that Src A is spatially extended. We found point sources in the vicinity of Src
A by using a Chandra image of the same region, but the point sources have soft
X-ray emission and cannot explain the hard X-ray emission of Src A. The
hard-band spectrum of Src A can be reproduced by a power-law model with a
photon index of 2.0^{+0.9}_{-0.7}. The X-ray flux in the 2-10 keV band is 1.4
\times 10^{-13} erg cm^{-2} s^{-1}. The diffuse emission suggests a pulsar wind
nebula around PSR J1648-4611, but the luminosity of Src A is much larger than
that expected from the spin-down luminosity of the pulsar. Parts of the
very-high-energy gamma-ray emission of HESS J1646-458 may be powered by this
pulsar wind nebula driven by PSR J1648-4611. Src B has soft emission, and its
X-ray spectrum can be described by a power-law model with a photon index of
3.0^{+1.4}_{-0.8}. The X-ray flux in the 0.4-10 keV band is 6.4 \times 10^{-14}
erg s^{-1} cm^{-2}. No counterpart for Src B is found in the literature.
|
$\operatorname{SL}(2,q)$-unitals are unitals of order $q$ admitting a regular
action of $\operatorname{SL}(2,q)$ on the complement of some block. They can be
obtained from affine $\operatorname{SL}(2,q)$-unitals via parallelisms. We
compute a sharp upper bound for automorphism groups of affine
$\operatorname{SL}(2,q)$-unitals and show that exactly two parallelisms are
fixed by all automorphisms. In $\operatorname{SL}(2,q)$-unitals obtained as
closures of affine $\operatorname{SL}(2,q)$-unitals via those two parallelisms,
we show that there is one block fixed under the full automorphism group.
|
The effect of density fluctuations upon light propagation is calculated
perturbatively in a matter dominated irrotational universe. The starting point
is the perturbed metric (second order in the perturbation strength), while the
output is the Hubble diagram. Density fluctuations cause this diagram to
broaden to a strip. Moreover, the shift of the diagram mimics accelerated
expansion.
|
In this paper, we propose an effective scene text recognition method using
sparse coding based features, called Histograms of Sparse Codes (HSC) features.
For character detection, we use the HSC features instead of using the
Histograms of Oriented Gradients (HOG) features. The HSC features are extracted
by computing sparse codes with dictionaries that are learned from data using
K-SVD, and aggregating per-pixel sparse codes to form local histograms. For
word recognition, we integrate multiple cues including character detection
scores and geometric contexts in an objective function. The final recognition
results are obtained by searching for the words which correspond to the maximum
value of the objective function. The parameters in the objective function are
learned using the Minimum Classification Error (MCE) training method.
Experiments on several challenging datasets demonstrate that the proposed
HSC-based scene text recognition method outperforms HOG-based methods
significantly and outperforms most state-of-the-art methods.
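The aggregation step ("aggregating per-pixel sparse codes to form local histograms") can be sketched as follows. This is an illustrative toy, not the authors' implementation: dictionary learning (K-SVD) and the sparse coding itself are assumed to have already produced each pixel's dominant atom index.

```python
# Illustrative sketch of the aggregation step: given each pixel's dominant
# sparse-code (atom) index, form one histogram of code usage per local cell.

def local_histograms(codes, n_atoms, cell=2):
    """codes: 2-D grid of per-pixel atom indices -> per-cell histograms."""
    h, w = len(codes), len(codes[0])
    hists = {}
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            hist = [0] * n_atoms
            for di in range(cell):
                for dj in range(cell):
                    if i + di < h and j + dj < w:
                        hist[codes[i + di][j + dj]] += 1
            hists[(i // cell, j // cell)] = hist
    return hists

codes = [[0, 0, 1, 2],
         [0, 3, 1, 1],
         [2, 2, 0, 0],
         [2, 2, 0, 3]]
hists = local_histograms(codes, n_atoms=4, cell=2)
print(hists[(0, 0)])  # → [3, 0, 0, 1]: atoms 0 and 3 in the top-left cell
```

In the full method each pixel carries a sparse coefficient vector rather than a single index, and the cell histograms are concatenated and normalized into the HSC descriptor used in place of HOG.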
|
The Galilean (and more generally Milne) invariance of Newtonian theory allows
for Killing vector fields of a general kind, whereby the Lie derivative of a
field is not required to vanish but only to be cancellable by some
infinitesimal Galilean (respectively Milne) gauge transformation. In this
paper, it is shown that both the Killing-Milne vector fields, which preserve
the background Newtonian space-time structure, and the Killing-Cartan vector
fields, which in addition preserve the gravitational field, form a Lie
subalgebra.
|
Using geometric algebra and calculus to express the laws of electromagnetism
we are able to present magnitudes and relations in a gradual way, escalating
the number of dimensions. In the one-dimensional case, charge and current
densities, the electric field E and the scalar and vector potentials get a
geometric interpretation in spacetime diagrams. The geometric vector derivative
applied to these magnitudes yields simple expressions leading to concepts like
displacement current, continuity and gauge or retarded time, with a clear
geometric meaning. As the geometric vector derivative is invertible, we
introduce simple Green's functions and, with this, it is possible to obtain
retarded Li\'enard-Wiechert potentials propagating naturally at the speed of
light. In two dimensions, these magnitudes become more complex, and a magnetic
field B appears as a pseudoscalar which was absent in the one-dimensional
world. The laws of induction reflect the relations between E and B, and it is
possible to arrive at the concepts of capacitor, electric circuit and Poynting
vector, explaining the flow of energy. The solutions to the wave equations in
this two-dimensional scenario uncover now the propagation of physical effects
at the speed of light. This anticipates the same results in the real
three-dimensional world, but endowed in this case with a nature which is
totally absent in one or three dimensions. Electromagnetic waves propagating
entirely at the speed of light can thus be viewed as a consequence of living in
a world with an odd number of spatial dimensions. Finally, in the real
three-dimensional world the same set of simple multivector differential
expressions encode the fundamental laws and concepts of electromagnetism.
|
So far, the null results from axion searches have enforced a huge hierarchy
between the Peccei-Quinn and electroweak symmetry breaking scales. The
inevitable Higgs portal then imposes a severe fine-tuning on the standard model
Higgs scalar. We find that if the Peccei-Quinn global symmetry has a set of
residual discrete symmetries, these global and discrete symmetries can achieve
a chain breaking at low scales such as the accessible TeV scale. This novel
mechanism can accommodate new phenomena, including a sizable coupling of the
standard model Higgs boson to the axion.
|
Many methods of semantic image segmentation have built on the success of open
compound domain adaptation. They minimize the style gap between images of the
source and target domains, making it easier to predict accurate
pseudo-annotations for target-domain images, which in turn train the
segmentation network. Existing methods globally adapt the scene style of the
images, whereas the object styles of different categories or instances are
adapted improperly. This
paper proposes the Object Style Compensation, where we construct the
Object-Level Discrepancy Memory with multiple sets of discrepancy features. The
discrepancy features in a set capture the style changes of the same category's
object instances adapted from target to source domains. We learn the
discrepancy features from the images of source and target domains, storing the
discrepancy features in memory. With this memory, we select appropriate
discrepancy features for compensating the style information of the object
instances of various categories, adapting the object styles to a unified style
of the source domain. Our method enables a more accurate computation of the
pseudo-annotations for target-domain images, thus yielding state-of-the-art
results on different datasets.
|
The investigation of combinatorial diameters of polyhedra is a classical
topic in linear programming due to its connection with the possibility of an
efficient pivot rule for the simplex method. We are interested in the diameters
of polyhedra formed from the so-called parallel or series connection of
oriented matroids: oriented matroids are the natural way to connect
representable matroid theory with the combinatorics of linear programming, and
these connections are fundamental operations for the construction of more
complicated matroids from elementary matroid blocks.
We prove that, for polyhedra whose combinatorial diameter satisfies the
Hirsch-conjecture bound regardless of the right-hand sides in a standard-form
description, the diameters of their parallel or series connections remain
within the Hirsch-conjecture bound. These results are a substantial step toward
devising a diameter bound for all polyhedra defined through totally-unimodular
matrices based on Seymour's famous decomposition theorem.
Our proof techniques and results exhibit a number of interesting features.
While the parallel connection leads to a bound that adds just a constant, for
the series connection one has to linearly take into account the maximal value
in a specific coordinate of any vertex. Our proofs also require a careful
treatment of non-revisiting edge walks in degenerate polyhedra, as well as the
construction of edge walks that may take a `detour' to facets that satisfy the
non-revisiting conjecture when the underlying polyhedron may not.
|
Fast radio bursts (FRBs) are millisecond-timescale bursts of coherent radio
emission that are luminous enough to be detectable at cosmological distances.
In this review I describe the discovery of FRBs, subsequent advances in our
understanding of them, and future prospects. Thousands of potentially
observable FRBs reach Earth every day; they probably originate from highly
magnetic and/or rapidly rotating neutron stars in the distant Universe. Some
FRBs repeat, with this sub-class often occurring in highly magnetic
environments. Two repeaters exhibit cyclic activity windows, consistent with an
orbital period. One nearby FRB was from a Galactic magnetar during an X-ray
outburst. The host galaxies of some FRBs have been located, providing
information about the host environments and the total baryonic content of the
Universe.
|
We have integrated single and coupled superconducting transmon qubits into
flip-chip modules. Each module consists of two chips -- one quantum chip and
one control chip -- that are bump-bonded together. We demonstrate time-averaged
coherence times exceeding $90\,\mu s$, single-qubit gate fidelities exceeding
$99.9\%$, and two-qubit gate fidelities above $98.6\%$. We also present device
design methods and discuss the sensitivity of device parameters to variation in
interchip spacing. Notably, the additional flip-chip fabrication steps do not
degrade the qubit performance compared to our baseline state-of-the-art in
single-chip, planar circuits. This integration technique can be extended to the
realisation of quantum processors accommodating hundreds of qubits in one
module as it offers adequate input/output wiring access to all qubits and
couplers.
|
We have observed the remnant of supernova SN~1987A (SNR~1987A), located in
the Large Magellanic Cloud (LMC), to search for periodic and/or transient radio
emission with the Parkes 64\,m-diameter radio telescope. We found no evidence
of a radio pulsar in our periodicity search and derived 8$\sigma$ upper bounds
on the flux density of any such source of $31\,\mu$Jy at 1.4~GHz and
$21\,\mu$Jy at 3~GHz. Four candidate transient events were detected with
greater than $7\sigma$ significance, with dispersion measures (DMs) in the
range 150 to 840\,cm$^{-3}\,$pc. For two of them, we found a second pulse at
slightly lower significance. However, we cannot at present conclude that any of
these are associated with a pulsar in SNR~1987A. As a check on the system, we
also observed PSR~B0540$-$69, a young pulsar which also lies in the LMC. We
found eight giant pulses at the DM of this pulsar. We discuss the implications
of these results for models of the supernova remnant, neutron star formation
and pulsar evolution.
|
Statistical methods relating tensor predictors to scalar outcomes in a
regression model generally vectorize the tensor predictor and estimate the
coefficients of its entries employing some form of regularization, use
summaries of the tensor covariate, or use a low dimensional approximation of
the coefficient tensor. However, low rank approximations of the coefficient
tensor can suffer if the true rank is not small. We propose a tensor regression
framework which assumes a soft version of the parallel factors (PARAFAC)
approximation. In contrast to classic PARAFAC, where each entry of the
coefficient tensor is the sum of products of row-specific contributions across
the tensor modes, the soft tensor regression (Softer) framework allows the
row-specific contributions to vary around an overall mean. We follow a Bayesian
approach to inference, and show that softening the PARAFAC increases model
flexibility, leads to improved estimation of coefficient tensors, more accurate
identification of important predictor entries, and more precise predictions,
even for a low approximation rank. From a theoretical perspective, we show that
employing Softer leads to a weakly consistent posterior distribution of the
coefficient tensor, irrespective of the true or approximation tensor rank, a
result that is not true when employing the classic PARAFAC for tensor
regression. In the context of our motivating application, we adapt Softer to
symmetric and semi-symmetric tensor predictors and analyze the relationship
between brain network characteristics and human traits.
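As a minimal numerical illustration of the contrast drawn above (a sketch, not the paper's Bayesian Softer implementation; the tensor sizes, the softening scale `tau`, and softening only one mode are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2, p3, R = 4, 5, 6, 2   # toy tensor dimensions and approximation rank

# Classic PARAFAC: each coefficient-tensor entry is a sum over ranks of
# products of mode-specific row contributions, B[i,j,k] = sum_r U[i,r]*V[j,r]*W[k,r].
U = rng.normal(size=(p1, R))
V = rng.normal(size=(p2, R))
W = rng.normal(size=(p3, R))
B_parafac = np.einsum('ir,jr,kr->ijk', U, V, W)

# "Soft" PARAFAC (sketch of the Softer idea): row-specific contributions are
# allowed to vary around the shared mean factor; here only mode 1 is softened
# and tau controls the spread.
tau = 0.1
U_soft = U[:, None, None, :] + tau * rng.normal(size=(p1, p2, p3, R))
B_soft = np.einsum('ijkr,jr,kr->ijk', U_soft, V, W)

# The soft version reduces to classic PARAFAC as tau -> 0.
print(np.abs(B_soft - B_parafac).max())
```

The extra entry-specific perturbations are what give the soft model flexibility beyond a fixed approximation rank.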
|
We study the coupling of non-linear supersymmetry to supergravity. The
goldstino nilpotent superfield of global supersymmetry coupled to supergravity
is described by a geometric action of the chiral curvature superfield $R$
subject to the constraint $(R-\lambda)^2=0$ with an appropriate constant
$\lambda$. This constraint can be found as the decoupling limit of the scalar
partner of the
goldstino in a class of f(R) supergravity theories.
|
Context. Globular clusters (GCs) carry information about the formation
histories and gravitational fields of their host galaxies. B\'ilek et al.
(2019, BSR19 hereafter) reported that the radial profiles of the volume number
density of GCs in GC systems (GCSs) follow broken power laws, while the breaks
occur approximately at the a0 radii. These are the radii at which the
gravitational fields of the galaxies equal the galactic acceleration scale $a_0
= 1.2\times 10^{-10}\,$m s$^{-2}$ known from the radial acceleration relation or
the MOND theory of modified dynamics.
Aims. Our main goals here are to explore whether the results of BSR19 hold
true for galaxies of a wider mass range and for the red and blue GC
subpopulations.
Methods. We exploited catalogs of photometric GC candidates in the Fornax
galaxy cluster based on ground and space observations and a new catalog of
spectroscopic GCs of NGC 1399, the central galaxy of the cluster. For every
galaxy, we obtained the parameters of the broken power-law density by fitting
the on-sky distribution of the GC candidates, while allowing for a constant
density of contaminants. The stellar masses of our galaxy sample span
$\log(M_*/M_\odot) = 8.0$-$11.4$.
Results. All investigated GCSs with a sufficient number of members show
broken power-law density profiles. This holds true for the total GC population
and the blue and red subpopulations. The inner and outer slopes and the break
radii agree well for the different GC populations. The break radii agree with
the a0 radii typically within a factor of two for all GC color subpopulations.
The outer slopes correlate better with the a0 radii than with the galactic
stellar masses. The break radii of NGC 1399 vary in azimuth, such that they are
greater toward and against the interacting neighbor galaxy NGC 1404.
|
Learning with graphs has attracted significant attention recently. Existing
representation learning methods on graphs have achieved state-of-the-art
performance on various graph-related tasks such as node classification, link
prediction, etc. However, we observe that these methods can leak sensitive
private information. For instance, one can accurately infer the links (or node
identity) in a graph from a node classifier (or link predictor) trained on the
learnt node representations by existing methods. To address the issue, we
propose a privacy-preserving representation learning framework on graphs from
the \emph{mutual information} perspective. Specifically, our framework includes
a primary learning task and a privacy protection task, and we consider node
classification and link prediction as the two tasks of interest. Our goal is to
learn node representations such that they can be used to achieve high
performance for the primary learning task, while obtaining performance for the
privacy protection task close to random guessing. We formally formulate our
goal via mutual information objectives. However, it is intractable to compute
mutual information in practice. Then, we derive tractable variational bounds
for the mutual information terms, where each bound can be parameterized via a
neural network. Next, we train these parameterized neural networks to
approximate the true mutual information and learn privacy-preserving node
representations. We finally evaluate our framework on various graph datasets.
|
This paper tackles a multi-agent bandit setting where $M$ agents cooperate
together to solve the same instance of a $K$-armed stochastic bandit problem.
The agents are \textit{heterogeneous}: each agent has limited access to a local
subset of arms and the agents are asynchronous with different gaps between
decision-making rounds. The goal for each agent is to find its optimal local
arm, and agents can cooperate by sharing their observations with others. While
cooperation between agents improves the performance of learning, it comes with
an additional complexity of communication between agents. For this
heterogeneous multi-agent setting, we propose two learning algorithms, \ucbo
and \AAE. We prove that both algorithms achieve order-optimal regret, which is
$O\left(\sum_{i:\tilde{\Delta}_i>0} \log T/\tilde{\Delta}_i\right)$, where
$\tilde{\Delta}_i$ is the minimum suboptimality gap between the reward mean of
arm $i$ and any local optimal arm. In addition, through a careful selection of
the information valuable for cooperation, \AAE achieves a low communication
complexity of $O(\log T)$. Finally, numerical experiments verify the efficiency of
both algorithms.
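The algorithms \ucbo and \AAE are not reproduced here; as a baseline sketch of the gap-dependent $O(\log T)$ regret behavior that the bound above refines, a single-agent UCB1 loop can be written as follows (the Bernoulli arm means, horizon, and exploration bonus are illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.9, 0.5, 0.4])   # illustrative Bernoulli arm means
K, T = len(means), 3000

counts = np.zeros(K, dtype=int)     # number of pulls per arm
sums = np.zeros(K)                  # cumulative reward per arm
for t in range(T):
    if t < K:
        arm = t                     # pull each arm once to initialize
    else:
        # UCB1 index: empirical mean plus an exploration bonus that shrinks
        # as an arm is pulled more often.
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < means[arm])
    counts[arm] += 1
    sums[arm] += reward

regret = T * means.max() - sums.sum()   # realized (not expected) regret
print(counts, regret)
```

Each suboptimal arm $i$ is pulled only $O(\log T/\Delta_i^2)$ times, which is the single-agent analogue of the gap-dependent bound stated above.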
|
Submitted by the authors for the June 27-29 Princeton Conference. Questions
should be directed to <EMAIL_ADDRESS>. |
We give a self-contained introduction to accessible categories and how they
shed light on both model- and set-theoretic questions. We survey for example
recent developments on the study of presentability ranks, a notion of
cardinality localized to a given category, as well as stable independence, a
generalization of pushouts and model-theoretic forking that may interest
mathematicians at large. We give many examples, including recently discovered
connections with homotopy theory and homological algebra. We also discuss
concrete versions of accessible categories (such as abstract elementary
classes), and how they allow nontrivial `element by element' constructions. We
conclude with a new proof of the equivalence between saturated and homogeneous
which does not use the coherence axiom of abstract elementary classes.
|
Work is among the most basic notions in thermodynamics, but it is not well
understood in quantum systems, especially open quantum systems. By introducing
a novel concept of a work functional along individual Feynman paths, we develop
a new approach to studying thermodynamics in the quantum regime. Using
the work functional, we derive a path-integral expression for the work
statistics. By performing the $\hbar$ expansion, we analytically prove the
quantum-classical correspondence of the work statistics. In addition, we obtain
the quantum correction to the classical fluctuating work. We can also apply
this approach to an open quantum system in the strong coupling regime described
by the quantum Brownian motion model. This approach provides an effective way
to calculate the work in open quantum systems by utilizing various path
integral techniques. As an example, we calculate the work statistics for a
dragged harmonic oscillator in both isolated and open quantum systems.
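For reference, a standard (two-point-measurement) form of the work statistics that path-integral representations of this kind reproduce, written here in the usual notation rather than the paper's:

```latex
\chi(u) \;=\; \int \! dW \, e^{iuW} P(W)
\;=\; \sum_{n,m} p_n^{0}\,
\bigl|\langle m_\tau \vert \hat U(\tau) \vert n_0 \rangle\bigr|^{2}\,
e^{\,iu\,(E_m^{\tau} - E_n^{0})}
```

where $p_n^0$ are the initial energy populations, $\hat U(\tau)$ is the driven time evolution, and $E_n^0$, $E_m^\tau$ are eigenvalues of the initial and final Hamiltonians.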
|
We calculate the low temperature resistivity in low density 2D hole gases in
GaAs heterostructures by including screened charged impurity and phonon
scattering in the theory. Our calculated resistance, which shows striking
temperature dependent non-monotonicity arising from the competition among
screening, nondegeneracy, and phonon effects, is in excellent agreement with
recent experimental data.
|
A nonuniform condensate is usually described by the Gross-Pitaevskii (GP)
equation, which is derived with the help of the c-number ansatz $\hat{
\Psi}(\mathbf{r},t)=\Psi (\mathbf{r},t)$. Proceeding from a more accurate
operator ansatz $\hat{\Psi}(\mathbf{r},t)=\hat{a}_{0}\Psi (\mathbf{r},t)/
\sqrt{N}$, we find the equation $i\hbar \frac{\partial \Psi
(\mathbf{r},t)}{\partial t}=-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}\Psi
(\mathbf{r},t)}{\partial \mathbf{r}^{2}}+\left( 1-\frac{1}{N}\right) 2c\Psi
(\mathbf{r},t)|\Psi(\mathbf{r},t)|^{2}$ (the GP$_{N}$ equation). It differs
from the GP equation by the factor $(1-1/N)$, where $N$ is the number of Bose
particles. We compare the accuracy of the GP and GP$_{N}$ equations by
analyzing the ground state of a one-dimensional system of point bosons with
repulsive interaction ($c>0$) and zero boundary conditions. Both equations are
solved numerically, and the system energy $E$ and the particle density profile
$\rho (x)$ are determined for various values of~$N$, the mean particle density
$\bar{\rho}$, and the coupling constant $\gamma =c/\bar{\rho}$. The solutions
are compared with the exact ones obtained by the Bethe ansatz. The results show
that in the weak coupling limit ($N^{-2}\ll \gamma \lesssim 0.1$), the GP and
GP$_{N}$ equations describe the system equally well if $N\gtrsim 100$. For
few-boson systems ($N\lesssim 10$) with $\gamma \lesssim N^{-2}$ the solutions
of the GP$_{N}$ equation are in excellent agreement with the exact ones. That
is, the multiplier $(1-1/N)$ allows one to describe few-boson systems with high
accuracy. This means that it is reasonable to extend the notion of
Bose-Einstein condensation to few-particle systems.
|
We analyze a family of methods for statistical causal inference from sample
under the so-called Additive Noise Model. While most work on the subject has
concentrated on establishing the soundness of the Additive Noise Model, the
statistical consistency of the resulting inference methods has received little
attention. We derive general conditions under which the given family of
inference methods consistently infers the causal direction in a nonparametric
setting.
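The basic inference rule behind such methods can be sketched in a few lines: regress each variable on the other and prefer the direction whose residuals look independent of the putative cause. Everything below (the cubic mechanism, the polynomial regressor, and the crude bin-based dependence score) is an illustrative stand-in, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cause-effect pair under the Additive Noise Model: y = f(x) + noise,
# with the noise independent of x.
n = 2000
x = rng.uniform(-1, 1, n)
y = x + 0.5 * x**3 + 0.2 * rng.normal(size=n)

def dependence_score(u, resid, bins=10):
    """Crude dependence measure: variability across quantile bins of u of the
    within-bin mean and spread of the residuals; near zero when resid is
    independent of u."""
    edges = np.quantile(u, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, u, side="right") - 1, 0, bins - 1)
    means = np.array([resid[idx == b].mean() for b in range(bins)])
    stds = np.array([resid[idx == b].std() for b in range(bins)])
    return means.var() + stds.var()

def anm_score(cause, effect, deg=5):
    """Fit a polynomial regression effect ~ cause and score how strongly the
    residuals still depend on the putative cause."""
    coef = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coef, cause)
    return dependence_score(cause, resid)

s_xy = anm_score(x, y)  # correct direction: residuals are plain noise
s_yx = anm_score(y, x)  # reversed direction: residuals depend on y
print("x->y score:", s_xy, " y->x score:", s_yx)
# The inferred causal direction is the one with the smaller score.
```

Consistency, in the sense studied above, asks when this kind of procedure recovers the true direction as the sample size grows.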
|
The hyperfine interactions at the uranium site in the antiferromagnetic USb2
compound were calculated within the density functional theory (DFT) employing
the augmented plane wave plus local orbital (APW+lo) method. We investigated
the dependence of the nuclear quadrupole interactions on the magnetic structure
in the USb2 compound. The investigations were performed by applying the
so-called band-correlated LDA+U theory self-consistently. The self-consistent
LDA+U calculations were gradually added to the generalized gradient
approximation (GGA), including scalar-relativistic spin-orbit interactions in a
second-variation scheme. The result, which is in agreement with experiment,
shows that the 5f-electrons have the tendency to be hybridized with the
conduction electrons in the ferromagnetic uranium planes.
|
Artificial Neural Networks have shown impressive success in very different
application cases. Choosing a proper network architecture is a critical
decision for a network's success, usually done in a manual manner. As a
straightforward strategy, large, mostly fully connected architectures are
selected, thereby relying on a good optimization strategy to find proper
weights while at the same time avoiding overfitting. However, large parts of
the final network are redundant. In the best case, large parts of the network
become simply irrelevant for later inferencing. In the worst case, highly
parameterized architectures hinder proper optimization and allow the easy
creation of adversarial examples fooling the network. A first step in removing
irrelevant architectural parts lies in identifying those parts, which requires
measuring the contribution of individual components such as neurons. In
previous work, heuristics based on using the weight distribution of a neuron as
contribution measure have shown some success, but do not provide a proper
theoretical understanding. Therefore, in our work we investigate game theoretic
measures, namely the Shapley value (SV), in order to separate relevant from
irrelevant parts of an artificial neural network. We begin by designing a
coalitional game for an artificial neural network, where neurons form
coalitions and the average contribution of a neuron to coalitions yields its
Shapley value. In order to measure how well the Shapley value captures the
contribution of individual neurons, we remove low-contributing neurons and
measure the impact on the network performance. In our experiments we show that
the Shapley value outperforms other heuristics for measuring the contribution
of neurons.
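The coalitional game described above can be sketched concretely. The following toy (a fixed random two-layer network whose "value" for a coalition of hidden neurons is its accuracy with the other neurons masked) is an illustrative assumption, not the paper's setup; it computes exact Shapley values by enumerating all coalitions, which is feasible only for tiny networks:

```python
import itertools
from math import factorial
import numpy as np

rng = np.random.default_rng(1)

# Toy data and a fixed two-layer network with 4 hidden neurons.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W1 = rng.normal(size=(3, 4))   # input -> hidden
w2 = rng.normal(size=4)        # hidden -> output
N_NEURONS = 4

def value(coalition):
    """Characteristic function: accuracy with only `coalition` active."""
    mask = np.zeros(N_NEURONS)
    mask[list(coalition)] = 1.0
    h = np.maximum(X @ W1, 0.0) * mask      # masked ReLU hidden layer
    pred = (h @ w2 > 0).astype(int)
    return float((pred == y).mean())

def shapley_values(n=N_NEURONS):
    """Exact Shapley values: weighted average marginal contribution of each
    neuron over all coalitions of the remaining neurons."""
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

phi = shapley_values()
# Efficiency property: the Shapley values sum to v(all) - v(empty), so
# low-|phi| neurons are natural candidates for removal.
print(phi)
```

For realistic networks the enumeration is replaced by Monte Carlo sampling of coalitions, but the game structure is the same.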
|
From wearables to powerful smart devices, modern automatic speech recognition
(ASR) models run on a variety of edge devices with different computational
budgets. To navigate the Pareto front of model accuracy vs model size,
researchers are trapped in a dilemma of optimizing model accuracy by training
and fine-tuning models for each individual edge device while keeping the
training GPU-hours tractable. In this paper, we propose Omni-sparsity DNN,
where a single neural network can be pruned to generate optimized models for a
large range of model sizes. We develop training strategies for the
Omni-sparsity DNN that allow it to find models along the Pareto front of
word-error-rate (WER) vs model size while keeping the training GPU-hours to no
more than that of training one single model. We demonstrate the Omni-sparsity
DNN with streaming E2E ASR models. Our results show great savings in training
time and
resources with similar or better accuracy on LibriSpeech compared to
individually pruned sparse models: 2%-6.6% better WER on Test-other.
|
We construct quantum evolution operators on the space of states, that realize
the metaplectic representation of the modular group SL(2, Z_{2^n}). This
representation acts in a natural way on the coordinates of the non-commutative
2-torus and thus is relevant for noncommutative field theories as well as
theories of quantum space-time. The larger class of operators, thus defined,
may be useful for the more efficient realization of new quantum algorithms.
|
SINFONI, the SINgle Faint Object Near-infrared Investigation, is an
instrument for the Very Large Telescope (VLT), which will start its operation
mid 2002 and allow for the first time near infrared (NIR) integral field
spectroscopy at the diffraction limit of an 8-m telescope. SINFONI is the
combination of two state-of-the-art instruments, the integral field
spectrometer SPIFFI, built by the Max-Planck-Institut fuer extraterrestrische
Physik (MPE), and the adaptive optics (AO) system MACAO, built by the European
Southern Observatory (ESO). It will allow a unique type of observations by
delivering simultaneously high spatial resolution (pixel sizes 0.025 arcsec to
0.25 arcsec) and a moderate spectral resolution (R~2000 to R~4500), where the
higher spectral resolution mode will allow for software OH suppression. This
opens new prospects for astronomy.
|
A brief overview of the lattice technique of studying QCD is presented.
Recent results from the UKQCD Collaboration's simulations with dynamical quarks
are then presented. In this work, the calculations are all at a fixed lattice
spacing and volume, but varying sea quark mass from infinite (corresponding to
the quenched simulation) down to roughly that of the strange quark mass. The
main aim of this work is to uncover dynamical quark effects from these
``matched'' ensembles.
|
Transformer-based pretrained models like BERT, GPT-2 and T5 have been
finetuned for a large number of natural language processing (NLP) tasks, and
have been shown to be very effective. However, while finetuning, what changes
across layers in these models with respect to pretrained checkpoints is
under-studied. Further, how robust are these models to perturbations in input
text? Does the robustness vary depending on the NLP task for which the models
have been finetuned? While there exists some work on studying the robustness of
BERT finetuned for a few NLP tasks, there is no rigorous study that compares
this robustness across encoder-only, decoder-only and encoder-decoder models.
In this paper, we characterize changes between pretrained and finetuned
language model representations across layers using two metrics: CKA and STIR.
Further, we study the robustness of three language models (BERT, GPT-2 and T5)
with eight different text perturbations on classification tasks from the
General Language Understanding Evaluation (GLUE) benchmark, and generation
tasks like summarization, free-form generation and question generation. GPT-2
representations are more robust than BERT and T5 across multiple types of input
perturbation. Although models exhibit good robustness broadly, dropping nouns,
verbs or changing characters are the most impactful. Overall, this study
provides valuable insights into perturbation-specific weaknesses of popular
Transformer-based models, which should be kept in mind when preparing inputs
for them. We make the code and models publicly available at
[https://github.com/PavanNeerudu/Robustness-of-Transformers-models].
|
Single crystals of CaNi2 and CaNi3 were successfully grown out of excess Ca.
Both compounds manifest a metallic ground state with enhanced, temperature
dependent magnetic susceptibility. The relatively high Stoner factors of Z =
0.79 and Z = 0.87 found for CaNi2 and CaNi3, respectively, reveal their close
vicinity to ferromagnetic instabilities. The pronounced field dependence of the
magnetic susceptibility of CaNi3 at low temperatures (T < 25 K) suggests strong
ferromagnetic fluctuations. A corresponding contribution to the specific heat
with a temperature dependence of T^3lnT was also observed.
|
With the aid of low-energy (500 eV) electron-beam direct writing, patterns of
perpendicularly aligned single-wall carbon nanotube (SWNT) forests were
realized on Nafion-modified substrates via Fe3+-assisted self-assembly.
Infrared spectroscopy (IR), atomic force microscopy (AFM) profilometry and
contact angle measurements indicated that the low-energy electron beam cleaved
the hydrophilic side chains (-SO3H and C-O-C) of Nafion into
low-molecular-weight byproducts that sublimed in the ultra-high vacuum (UHV)
environment, exposing the hydrophobic Nafion backbone. Auger mapping and AFM
microscopy affirmed that the exposed hydrophobic domains absorbed considerably
fewer Fe3+ ions upon exposure to pH 2.2 aqueous FeCl3 solution, which yielded
considerably less FeO(OH)/FeOCl
precipitates (FeO(OH) in majority) upon washing with lightly basic DMF solution
containing trace amounts of adsorbed moisture. Such differential deposition of
FeO(OH)/FeOCl precipitates provided the basis for the patterned site-specific
self-assembly of SWNT forests as demonstrated by AFM and resonance Raman
spectroscopy.
|
Recent jet results in $p\bar{p}$ collisions at $\sqrt{s}$=1.96 TeV from the
CDF experiment at the Tevatron are presented. The jet inclusive cross section
is compared to next-to-leading order QCD prediction in different rapidity
regions. The $b$-jet inclusive cross section is measured exploiting the long
lifetime and large mass of $B$-hadrons. Jet shapes, W+jets and W/Z+photon cross
sections are also measured and compared to expectations from QCD production.
|
Let gcd(a,b)=1. J. Olsson and D. Stanton proved that the maximum number of
boxes in a simultaneous (a,b)-core is (a^2-1)(b^2-1)/24, and that this maximum
was achieved by a unique core. P. Johnson combined Ehrhart theory with the
polynomial method to prove D. Armstrong's conjecture that the expected number
of boxes in a simultaneous (a,b)-core is (a-1)(b-1)(a+b+1)/24. We extend P.
Johnson's method to compute the variance to be ab(a-1)(b-1)(a+b)(a+b+1)/1440.
By extending the definitions of "simultaneous cores" and "number of boxes" to
affine Weyl groups, we give uniform generalizations of all three formulae above
to simply-laced affine types. We further explain the appearance of the number
24 using the "strange formula" of H. Freudenthal and H. de Vries.
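The three closed forms above can be checked by brute force for small (a,b): enumerate all partitions with at most (a^2-1)(b^2-1)/24 boxes, keep those with no hook length divisible by a or by b, and compare the maximum, mean, and variance of the box counts. A short self-contained sketch:

```python
from fractions import Fraction

def partitions(n):
    """All integer partitions of n as weakly decreasing tuples."""
    def gen(m, maxpart):
        if m == 0:
            yield ()
            return
        for k in range(min(m, maxpart), 0, -1):
            for rest in gen(m - k, k):
                yield (k,) + rest
    yield from gen(n, n)

def hooks(lam):
    """Hook lengths of all cells of the partition lam."""
    if not lam:
        return []
    conj = [sum(1 for r in lam if r > c) for c in range(lam[0])]
    return [lam[i] + conj[j] - i - j - 1
            for i in range(len(lam)) for j in range(lam[i])]

def is_core(lam, a):
    # lam is an a-core iff no hook length is divisible by a
    return all(h % a != 0 for h in hooks(lam))

def simultaneous_cores(a, b):
    max_boxes = (a * a - 1) * (b * b - 1) // 24   # Olsson-Stanton bound
    return [lam for n in range(max_boxes + 1) for lam in partitions(n)
            if is_core(lam, a) and is_core(lam, b)]

a, b = 3, 4
sizes = [sum(lam) for lam in simultaneous_cores(a, b)]
mean = Fraction(sum(sizes), len(sizes))
var = Fraction(sum(s * s for s in sizes), len(sizes)) - mean * mean
# Checks against the closed forms stated above:
assert max(sizes) == (a**2 - 1) * (b**2 - 1) // 24
assert mean == Fraction((a - 1) * (b - 1) * (a + b + 1), 24)
assert var == Fraction(a * b * (a - 1) * (b - 1) * (a + b) * (a + b + 1), 1440)
print(len(sizes), max(sizes), mean, var)
```

For (a,b) = (3,4) this finds five cores with maximum 5 boxes, mean 2, and variance 14/5, matching all three formulae.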
|
We solve for the local vertical structure of a thin accretion disk threaded
by a poloidal magnetic field. The angular velocity deviates from the Keplerian
value as a result of the radial Lorentz force, but is constant on magnetic
surfaces. Angular momentum transport and energy dissipation in the disk are
parametrized by an alpha-prescription, and a Kramers opacity law is assumed to
hold. We also determine the stability of the equilibria with respect to the
magnetorotational (or Balbus-Hawley) instability. If the magnetic field is
sufficiently strong, stable equilibria can be found in which the angle of
inclination, i, of the magnetic field to the vertical at the surface of the
disk has any value in the range [0,90 degrees). By analyzing the dynamics of a
transonic outflow in the corona of the disk, we show that a certain potential
difference must be overcome even when i > 30 degrees. We determine this
potential difference as a function of i for increasing values of the vertical
magnetic field strength. For magnetorotationally stable equilibria, the
potential difference increases faster than the fourth power of the magnetic
field strength, quickly exceeding a value corresponding to the central
temperature of the disk, and is minimized with respect to i at i approximately
equal to 38 degrees. We show that this property is relatively insensitive to
the form of the opacity law. Our results suggest that an additional source of
energy, such as coronal heating, may be required for the launching of an
outflow from a magnetized disk.
|
We study the field theory for the SU($N_c$) symmetric antiferromagnetic
quantum critical metal with a one-dimensional Fermi surface embedded in general
space dimensions between two and three. The asymptotically exact solution valid
in this dimensional range provides an interpolation between the perturbative
solution obtained from the $\epsilon$-expansion near three dimensions and the
nonperturbative solution in two dimensions. We show that critical exponents are
smooth functions of the space dimension. However, physical observables exhibit
subtle crossovers that make it hard to access subleading scaling behaviors in
two dimensions from the low-energy solution obtained above two dimensions.
These crossovers give rise to noncommutativities, where the low-energy limit
does not commute with the limits in which the physical dimensions are
approached.
|
Distant galaxies show correlations between their current star-formation rates
(SFRs) and stellar masses, implying that their star-formation histories (SFHs)
are highly similar. Moreover, observations show that the UV luminosities and
stellar masses grow from z=8 to 3, implying that the SFRs increase with time.
We compare the cosmologically averaged evolution in galaxies at 3 < z < 8 at
constant comoving number density, n = 2 x 10^-4 Mpc^-3. This allows us to study
the evolution of stellar mass and star formation in the galaxy predecessors and
descendants in ways not possible using galaxies selected at constant stellar
mass or SFR, quantities that evolve strongly in time. We show that the average
SFH of these galaxies increases smoothly from z=8 to 3 as SFR ~ t^alpha with
alpha = 1.7 +/- 0.2. This conflicts with assumptions that the SFR is either
constant or declines exponentially in time. We show that the stellar mass
growth in these galaxies is consistent with this derived SFH. This provides
evidence that the slope of the high-mass end of the IMF is approximately
Salpeter unless the duty cycle of star formation is much less than unity. We
argue that these relations follow from gas accretion (either direct or
delivered by mergers) coupled with galaxy disk growth under the assumption
that the SFR depends on the local gas surface density. This predicts that gas
fractions decrease from z=8 to 3 on average as f_gas ~ (1+z)^0.9 for galaxies
with this number density. The implied galaxy gas accretion rates at z > 4 are
as fast and may even exceed the SFR: this is the "gas accretion epoch". At z <
4 the SFR overtakes the implied gas accretion rate, indicating a period where
galaxies consume gas faster than it is acquired. At z < 3, galaxies with this
number density depart from these relations implying that star formation and gas
accretion are slowed at later times.
|
Quantum information storage using charge-neutral quasiparticles is expected
to play a crucial role in the future of quantum computers. In this regard,
magnons, or collective spin-wave excitations in solid-state materials, are
promising candidates. Here, we study the
quantum squeezing of Dirac and topological magnons in a bosonic honeycomb
optical lattice with spin-orbit interaction by utilizing the mapping to quantum
spin-$1/2$ XYZ Heisenberg model on the honeycomb lattice with discrete Z$_2$
symmetry and a Dzyaloshinskii-Moriya interaction. We show that the squeezed
magnons can be controlled by the Z$_2$ anisotropy and demonstrate how the noise
in the system is periodically modified in the ferromagnetic and
antiferromagnetic phases of the model. Our results also apply to solid-state
honeycomb (anti)ferromagnetic insulators.
|
We consider the problem of analyzing timestamped relational events between a
set of entities, such as messages between users of an on-line social network.
Such data are often analyzed using static or discrete-time network models,
which discard a significant amount of information by aggregating events over
time to form network snapshots. In this paper, we introduce a block point
process model (BPPM) for continuous-time event-based dynamic networks. The BPPM
is inspired by the well-known stochastic block model (SBM) for static networks.
We show that networks generated by the BPPM follow an SBM in the limit of a
growing number of nodes. We use this property to develop principled and
efficient local search and variational inference procedures initialized by
regularized spectral clustering. We fit BPPMs with exponential Hawkes processes
to analyze several real network data sets, including a Facebook wall post
network with over 3,500 nodes and 130,000 events.
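The paper fits BPPMs with exponential Hawkes processes; as a rough illustration of the event-generating side, here is a minimal sketch of simulating a univariate exponential Hawkes process by Ogata's thinning (all parameters are hypothetical and unrelated to the fitted data):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a univariate Hawkes process with exponential kernel,
    intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i)),
    on [0, T] via Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    while True:
        past = np.asarray(events)
        # Between events the intensity only decays, so its value at the
        # current time is a valid upper bound for thinning.
        lam_bar = mu + alpha * np.exp(-beta * (t - past)).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * np.exp(-beta * (t - past)).sum()
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)
    return np.asarray(events)

# Hypothetical parameters; stationarity requires alpha / beta < 1.
ts = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=100.0)
```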
|
Let $\Sigma$ be a finite alphabet, $\Omega=\Sigma^{\mathbb{Z}^{d}}$ equipped
with the shift action, and $\mathcal{I}$ the simplex of shift-invariant
measures on $\Omega$. We study the relation between the restriction
$\mathcal{I}_n$ of $\mathcal{I}$ to the finite cubes
$\{-n,...,n\}^d\subset\mathbb{Z}^d$, and the polytope of "locally invariant"
measures $\mathcal{I}_n^{loc}$. We are especially interested in the geometry of
the convex set $\mathcal{I}_n$ which turns out to be strikingly different when
$d=1$ and when $d\geq 2$. A major role is played by shifts of finite type which
are naturally identified with faces of $\mathcal{I}_n$, and uniquely ergodic
shifts of finite type, whose unique invariant measure gives rise to extreme
points of $\mathcal{I}_n$, although in dimension $d\geq 2$ there are also
extreme points which arise in other ways. We show that
$\mathcal{I}_n=\mathcal{I}_n^{loc}$ when $d=1$, but in higher dimension they
differ for $n$ large enough. We also show that while in dimension one
$\mathcal{I}_n$ are polytopes with rational extreme points, in higher
dimensions every computable convex set occurs as a rational image of a face of
$\mathcal{I}_n$ for all large enough $n$.
|
HD 93521 is a massive, rapidly rotating star that is located about 1 kpc
above the Galactic disk, and the evolutionary age for its estimated mass is
much less than the time-of-flight if it was ejected from the disk. Here we
present a re-assessment of both the evolutionary and kinematical timescales for
HD 93521. We calculate a time-of-flight of 39 +/- 3 Myr based upon the distance
and proper motions from Gaia EDR3 and a summary of radial velocity
measurements. We then determine the stellar luminosity using a rotational model
combined with the observed spectral energy distribution and distance. A
comparison with evolutionary tracks for rotating stars from Brott et al. yields
an evolutionary age of about 5 +/- 2 Myr. We propose that the solution to the
timescale discrepancy is that HD 93521 is a stellar merger product. It was
probably ejected from the Galactic disk as a close binary system of lower mass
stars that eventually merged to create the rapidly rotating and single massive
star we observe today.
|
We study the Epstein zeta function $E_n(L,s)$ for $s>\frac{n}{2}$ and
determine for fixed $c>\frac{1}{2}$ the value distribution and moments of
$E_n(\cdot,cn)$ (suitably normalized) as $n\to\infty$. We further discuss the
random function $c\mapsto E_n(\cdot,cn)$ for $c\in[A,B]$ with $\frac{1}{2}<A<B$
and determine its limit distribution as $n\to\infty$.
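For reference, under the normalization standard in this context (stated here as an assumption, since the abstract does not restate it), the Epstein zeta function of a full-rank lattice $L \subset \mathbb{R}^n$ of covolume one is

```latex
E_n(L,s) \;=\; \sum_{0 \neq v \in L} |v|^{-2s}, \qquad s > \frac{n}{2},
```

which converges absolutely in the stated range.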
|
In Minkowski spacetime, we consider an isolated system made of two pointlike
bodies interacting at a distance, in the nonradiative approximation. Our
framework is the covariant and a priori Hamiltonian formalism of "predictive
relativistic mechanics", founded on the equal-time condition. The issue of an
equivalent one-body description is discussed. We distinguish two different
concepts: on the one hand an almost purely kinematic relative particle, on the
other hand an effective particle which involves an explicit dynamical
formulation; several versions of the latter are possible. Relative and
effective particles have the same orbit, but may differ by their schedules.
|
We study the derivative nonlinear Schr\"odinger equation on the real line and
obtain global-in-time bounds on high order Sobolev norms.
|
Computational modelling of diffusion in heterogeneous media is prohibitively
expensive for problems with fine-scale heterogeneities. A common strategy for
resolving this issue is to decompose the domain into a number of
non-overlapping sub-domains and homogenize the spatially-dependent diffusivity
within each sub-domain (homogenization cell). This process yields a
coarse-scale model for approximating the solution behaviour of the original
fine-scale model at a reduced computational cost. In this paper, we study
coarse-scale diffusion models in block heterogeneous media and investigate, for
the first time, the effect that various factors have on the accuracy of
resulting coarse-scale solutions. We present new findings on the error
associated with homogenization as well as confirm via numerical experimentation
that periodic boundary conditions are the best choice for the homogenization
cell and demonstrate that the smallest homogenization cell that is
computationally feasible should be used in numerical simulations.
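As a concrete instance of homogenization, in one dimension the effective diffusivity of a layered (block) medium is the width-weighted harmonic mean of the layer values, which is exact for steady diffusion through layers in series; a minimal sketch with hypothetical block values:

```python
import numpy as np

def effective_diffusivity_1d(D, widths):
    """Effective (homogenized) diffusivity of a 1D layered medium:
    the width-weighted harmonic mean of the layer diffusivities.
    Exact for steady 1D diffusion through layers in series."""
    D, widths = np.asarray(D, float), np.asarray(widths, float)
    return widths.sum() / np.sum(widths / D)

# Binary block medium: alternating layers with contrasting diffusivities.
D_eff = effective_diffusivity_1d(D=[1.0, 0.01], widths=[0.5, 0.5])
```

The harmonic mean is dominated by the low-diffusivity blocks, which is why naive arithmetic averaging badly over-predicts transport in such media.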
|
We analyze the performance of quantum parameter estimation in the presence of
the most general Gaussian dissipative reservoir. We derive lower bounds on the
precision of phase estimation and a closely related problem of frequency
estimation. For both problems we show that it is impossible to achieve the
Heisenberg limit asymptotically in the presence of such a reservoir. However,
we also find that for any fixed number of probes used in the setup there exists
a Gaussian dissipative reservoir, which, in principle, allows for the
Heisenberg-limited performance for that number of probes. We discuss a
realistic implementation of a frequency estimation scheme in the presence of a
Gaussian dissipative reservoir in a cavity system.
|
This paper provides a unifying framework for a range of categorical
constructions characterised by universal mapping properties, within the realm
of compactifications of discrete structures. Some classic examples fit within
this broad picture: the Bohr compactification of an abelian group via
Pontryagin duality, the zero-dimensional Bohr compactification of a
semilattice, and the Nachbin order-compactification of an ordered set.
The notion of a natural extension functor is extended to suitable categories
of structures and such a functor is shown to yield a reflection into an
associated category of topological structures. Our principal results address
reconciliation of the natural extension with the Bohr compactification or its
zero-dimensional variant. In certain cases the natural extension functor and a
Bohr compactification functor are the same, in others the functors have
different codomains but may agree on all objects. Coincidence in the stronger
sense occurs in the zero-dimensional setting precisely when the domain is a
category of structures whose associated topological prevariety is standard. It
occurs, in the weaker sense only, for the class of ordered sets and, as we
show, also for infinitely many classes of ordered structures.
Coincidence results aid understanding of Bohr-type compactifications, which
are defined abstractly. Ideas from natural duality theory lead to an explicit
description of the natural extension which is particularly amenable for any
prevariety of algebras with a finite, dualisable, generator. Examples of such
classes---often varieties---are plentiful and varied, and in many cases the
associated topological prevariety is standard.
|
We have studied the evolution of the thermoelectric power S(T) with oxygen
doping of single-layered Bi2Sr2CuO6+d thin films and ceramics in the overall
superconducting (Tc, S290K) phase diagram. While the universal relation between
the room-temperature thermopower S290K and the critical temperature is found to
hold in the strongly overdoped region (d>0.14), a strong violation is observed
in the underdoped part of the phase diagram. The observed behaviour is compared
with other cuprates and the different scenarios are discussed.
|
This work addresses the finite-horizon robust covariance control problem for
discrete-time, partially observable, linear systems affected by random
zero-mean noise and deterministic but unknown disturbances restricted to lie in
an ellitopic uncertainty set (e.g., a finite intersection of ellipsoids or
elliptic cylinders centered at the origin). Performance specifications are imposed
on the random state-control trajectory via averaged convex quadratic
inequalities, linear inequalities on the mean, as well as pre-specified upper
bounds on the covariance matrix. For this problem we develop a computationally
tractable procedure for designing affine control policies, in the sense that
the parameters of the policy that guarantees the aforementioned performance
specifications are obtained as solutions to an explicit convex program. Our
theoretical findings are illustrated by a numerical example.
|
This paper introduces an innovative task focused on editing the personality
traits of Large Language Models (LLMs). This task seeks to adjust the models'
responses to opinion-related questions on specified topics since an
individual's personality often manifests in the form of their expressed
opinions, thereby showcasing different personality traits. Specifically, we
construct a new benchmark dataset PersonalityEdit to address this task. Drawing
on the theory in Social Psychology, we isolate three representative traits,
namely Neuroticism, Extraversion, and Agreeableness, as the foundation for our
benchmark. We then gather data using GPT-4, generating responses that not only
align with a specified topic but also embody the targeted personality trait. We
conduct comprehensive experiments involving various baselines and discuss the
representation of personality behavior in LLMs. Our intriguing findings uncover
potential challenges of the proposed task, illustrating several remaining
issues. We anticipate that our work can provide the NLP community with
insights. Code and datasets are available at
https://github.com/zjunlp/EasyEdit.
|
A white noise quantum stochastic calculus is developed using classical
measure theory as the mathematical tool. Wick's and Ito's theorems have been
established. The simplest quantum stochastic differential equation has been
solved; uniqueness and the conditions for unitarity have been proven. The
Hamiltonian of the associated one-parameter strongly continuous group has been
calculated explicitly.
|
We present all-dielectric polaritonic metasurfaces consisting of properly
sculptured cylinders to sustain the dynamic anapole, i.e. a non-radiating
alternating current distribution. One way for the anapole to emerge, is by
combining modes based on the first and the fourth Mie resonance of a cylinder
made of high permittivity LiTaO$_3$ for operation in the low THz. The circular
cross-section of each cylinder varies periodically along its length in a binary
way, from small to large, while its overall circular symmetry has to be broken
in order to remove parasitic magnetic modes. Small cross-sections are the main
source of the \textit{electric dipole} Mie mode, while large cross-sections
sustain the fourth, \textit{mixed toroidal dipole} Mie mode. With proper
adjustment, the generated electric and toroidal moments interfere destructively
producing a non-radiating source, the dynamic anapole, the existence of which
is attested by a sharp dip in the reflection from the metasurface, due
exclusively to electric and toroidal dipoles. Moreover, we show that, by
breaking the circular symmetry of each cylinder, a substantial toroidal dipole
emerges from the \textit{magnetic quadrupole} Mie mode, which in combination
with the electric dipole also produces the dynamic anapole. The sensitivity of
the anapole states to the material dissipation losses is examined leading to
the conclusion that the proposed metasurfaces offer a scheme for realistically
implementing the anapole.
|
We present a new flexible estimator for comparing theoretical templates for
the predicted bispectrum of the CMB anisotropy to observations. This estimator,
based on binning in harmonic space, generalizes the optimal estimator of
Komatsu, Spergel, and Wandelt by allowing an adjustable weighting scheme for
masking possible foreground and other contaminants and detecting particular
noteworthy features in the bispectrum. The utility of this estimator is
illustrated by demonstrating how acoustic oscillations in the bispectrum and
other details of the bispectral shape could be detected in the future PLANCK
data provided that fNL is sufficiently large. The character and statistical
weight of the acoustic oscillations and the decay tail are described in detail.
|
Let $S$ be an orientable surface with negative Euler characteristic. For $k
\in \mathbb{N}$, let $\mathcal{C}_{k}(S)$ denote the $\textit{k-curve graph}$,
whose vertices are isotopy classes of essential simple closed curves on $S$,
and whose edges correspond to pairs of curves that can be realized to intersect
at most $k$ times. The theme of this paper is that the geometry of
Teichm\"uller space and of the mapping class group captures local combinatorial
properties of $\mathcal{C}_{k}(S)$. Using techniques for measuring distance in
Teichm\"uller space, we obtain upper bounds on the following three quantities
for large $k$: the clique number of $\mathcal{C}_{k}(S)$ (exponential in $k$,
which improves on all previously known bounds and which is essentially sharp);
the maximum size of the intersection, whenever it is finite, of a pair of links
in $\mathcal{C}_{k}$ (quasi-polynomial in $k$); and the diameter in
$\mathcal{C}_{0}(S)$ of a large clique in $\mathcal{C}_{k}(S)$ (uniformly
bounded). As an application, we obtain quasi-polynomial upper bounds, depending
only on the topology of $S$, on the number of short simple closed geodesics on
any square-tiled surface homeomorphic to $S$.
|
We prove that a countable semigroup $S$ is locally finite if and only if the
Arens-Michael envelope of its semigroup algebra is a $(DF)$-space. This is a
counterpart to a recent result of the author, which asserts that $S$ is
finitely generated if and only if the Arens-Michael envelope is a Fr\'echet
space.
|
An application of the new formulation of the eigenchannel method [R.
Szmytkowski, Ann. Phys. (N.Y.) {\bf 311}, 503 (2004)] to quantum scattering of
Dirac particles from non-local separable potentials is presented. Eigenchannel
vectors, related directly to eigenchannels, are defined as eigenvectors of a
certain weighted eigenvalue problem. Moreover, negative cotangents of
eigenphase-shifts are introduced as eigenvalues of that spectral problem.
Eigenchannel spinor as well as bispinor harmonics are expressed through the
eigenchannel vectors. Finally, the expressions for the bispinor as well as
matrix scattering amplitudes and total cross section are derived in terms of
eigenchannels and eigenphase-shifts. An illustrative example is also provided.
|
In \cite{Zhu}, the authors give a general definition of the K\"ahler angle.
Many known results about the K\"ahler angle can be generalized to this setting.
In this paper, we focus on symplectic critical surfaces in Hermitian surfaces,
generalizing the results of \cite{HL1} and \cite{HLS1}.
|
Current developments in autonomous off-road driving are steadily increasing
performance through higher speeds and more challenging, unstructured
environments. However, this operating regime subjects the vehicle to larger
inertial effects, where consideration of higher-order states is necessary to
avoid failures such as rollovers or excessive impact forces. Aggressive driving
through Model Predictive Control (MPC) in these conditions requires dynamics
models that accurately predict safety-critical information. This work aims to
empirically quantify this aggressive operating regime and its effects on the
performance of current models. We evaluate three dynamics models of varying
complexity on two distinct off-road driving datasets: one simulated and the
other real-world. By conditioning trajectory data on higher-order states, we
show that model accuracy degrades with aggressiveness and simpler models
degrade faster. These models are also validated across datasets, where
accuracies over safety-critical states are reported and provide benchmarks for
future work.
|
Giant molecular clouds (GMCs) are the primary reservoirs of cold,
star-forming molecular gas in the Milky Way and similar galaxies, and thus any
understanding of star formation must encompass a model for GMC formation,
evolution, and destruction. These models are necessarily constrained by
measurements of interstellar molecular and atomic gas, and the emergent,
newborn stars. Both observations and theory have undergone great advances in
recent years, the latter driven largely by improved numerical simulations, and
the former by the advent of large-scale surveys with new telescopes and
instruments. This chapter offers a thorough review of the current state of the
field.
|
The classical functional linear regression model (FLM) and its extensions,
which are based on the assumption that all individuals are mutually
independent, have been well studied and are used by many researchers. This
independence assumption is sometimes violated in practice, especially when data
with a network structure are collected in scientific disciplines including
marketing, sociology and spatial economics. However, relatively few studies
have examined the applications of FLM to data with network structures. We
propose a novel spatial functional linear model (SFLM) that incorporates a
spatial autoregressive parameter and a spatial weight matrix into FLM to
accommodate spatial dependencies among individuals. The proposed model is
relatively flexible as it takes advantage of FLM in handling high-dimensional
covariates and of the spatial autoregressive (SAR) model in capturing network
dependencies. We develop an estimation method based on functional principal
component analysis (FPCA) and maximum likelihood estimation. Simulation studies
show that our method performs as well as the FPCA-based method used with FLM
when no network structure is present, and outperforms the latter when network
structure is present. A real weather data set is also employed to demonstrate the
utility of the SFLM.
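The SAR backbone of the SFLM can be illustrated in reduced form, $y = (I - \rho W)^{-1}(X\beta + \varepsilon)$; a minimal synthetic sketch, where the design matrix standing in for the FPCA scores and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
rho = 0.4                                  # spatial autoregressive parameter
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # row-normalized spatial weight matrix
X = rng.standard_normal((n, p))            # stand-in for FPCA score covariates
beta = np.array([1.0, -0.5, 2.0])
eps = 0.1 * rng.standard_normal(n)

# Reduced form of the SAR equation y = rho*W*y + X*beta + eps:
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
```

Row normalization keeps the spectral radius of $W$ at one, so $I - \rho W$ is invertible for $|\rho| < 1$.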
|
We analyze continuous partial differential models of topological insulators
in the form of systems of Dirac equations. We describe the bulk and interface
topological properties of the materials by means of indices of Fredholm
operators constructed from the Dirac operators by spectral calculus. We show
the stability of these topological invariants with respect to perturbations by
a large class of spatial heterogeneities. These models offer a quantitative
tool to analyze the interplay between topology and spatial fluctuations in
topological phases of matter. The theory is first presented for two-dimensional
materials, which display asymmetric (chiral) transport along interfaces. It is
then generalized to arbitrary dimensions with the additional assumption of
chiral symmetry in odd spatial dimensions.
|
The integral length scale ($\mathcal{L}$) is considered to be characteristic
of the largest motions of a turbulent flow, and as such, it is an input
parameter in modern and classical approaches of turbulence theory and numerical
simulations. Its experimental estimation, however, could be difficult in
certain conditions, for instance, when the experimental calibration required to
measure $\mathcal{L}$ is hard to achieve (hot-wire anemometry on large scale
wind-tunnels, and field measurements), or in 'standard' facilities using active
grids due to the behaviour of their velocity autocorrelation function
$\rho(r)$, which does not in general cross zero. In this work, we provide two
alternative methods to estimate $\mathcal{L}$ using the variance of the
distance between successive zero crossings of the streamwise velocity
fluctuations, thereby reducing the uncertainty of estimating $\mathcal{L}$
under similar experimental conditions. These methods are applicable to a
variety of situations such as active-grid flows, field measurements, and
large-scale wind tunnels.
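The conventional route the paper seeks to improve on integrates the autocorrelation $\rho(r)$ up to its first zero crossing; a minimal numpy sketch on synthetic AR(1) data with a known correlation length (purely illustrative, not the paper's zero-crossing estimator):

```python
import numpy as np

def integral_length_scale(u, dx):
    """Estimate L by integrating the velocity autocorrelation up to its
    first zero crossing -- the conventional estimator that the
    zero-crossing-spacing methods aim to sidestep."""
    u = u - u.mean()
    n = u.size
    f = np.fft.rfft(u, 2 * n)                  # zero-padded, O(n log n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    rho = acf / acf[0]
    neg = np.nonzero(rho < 0.0)[0]
    cut = neg[0] if neg.size else n
    return np.trapz(rho[:cut], dx=dx)

# Synthetic AR(1) signal with rho(r) ~ exp(-r/Lc), Lc = 10 samples (hypothetical)
rng = np.random.default_rng(1)
Lc = 10.0
a = np.exp(-1.0 / Lc)
noise = rng.standard_normal(100000)
u = np.empty_like(noise)
u[0] = noise[0]
for i in range(1, u.size):
    u[i] = a * u[i - 1] + noise[i]
L = integral_length_scale(u, dx=1.0)           # expect L close to Lc
```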
|
Among the various possibilities to probe the theory behind the recent
accelerated expansion of the universe, the energy conditions (ECs) are of
particular interest, since it is possible to confront and constrain the many
models, including different theories of gravity, with observational data. In
this context, we use the ECs to probe any alternative theory whose extra term
acts as a cosmological constant. For this purpose, we apply a model-independent
approach to reconstruct the recent expansion of the universe. Using Type Ia
supernova, baryon acoustic oscillations and cosmic-chronometer data, we perform
a Markov Chain Monte Carlo analysis to put constraints on the effective
cosmological constant $\Omega^0_{\rm eff}$. By imposing that the cosmological
constant is the only component that possibly violates the ECs, we derive lower
and upper bounds for its value. For instance, we obtain that $0.59 <
\Omega^0_{\rm eff} < 0.91$ and $0.40 < \Omega^0_{\rm eff} < 0.93$ within,
respectively, $1\sigma$ and $3\sigma$ confidence levels. In addition, about
30\% of the posterior distribution is incompatible with a cosmological
constant, showing that this method can potentially rule it out as a mechanism
for the accelerated expansion. We also study the consequence of these
constraints for two particular formulations of the bimetric massive gravity.
Namely, we consider Visser's theory and Hassan and Rosen's massive
gravity by choosing a background metric such that both theories mimic General
Relativity with a cosmological constant. Using the $\Omega^0_{\rm eff}$
observational bounds along with the upper bounds on the graviton mass we obtain
constraints on the parameter spaces of both theories.
|
We use micro-Raman spectroscopy to study strain profiles in graphene
monolayers suspended over SiN membranes micropatterned with holes of
non-circular geometry. We show that a uniform differential pressure load
$\Delta P$ over elliptical regions of free-standing graphene yields measurable
deviations from hydrostatic strain conventionally observed in
radially-symmetric microbubbles. The top hydrostatic strain $\bar{\varepsilon}$
we observe is estimated to be $\approx0.7\%$ for $\Delta P = 1\,{\rm bar}$ in
graphene clamped to elliptical SiN holes with axis $40$ and $20\,{\rm \mu m}$.
In the same configuration, we report a $G_\pm$ splitting of $10\,{\rm cm^{-1}}$
which is in good agreement with the calculated anisotropy $\Delta\varepsilon
\approx 0.6\%$ for our device geometry. Our results are consistent with the
most recent reports on the Gr\"uneisen parameters. Perspectives for the
achievement of arbitrary strain configurations by designing suitable SiN holes
and boundary clamping conditions are discussed.
|
Cosmological features of Barrow Holographic Dark Energy (BHDE), a recent
generalization of original Holographic dark energy with a richer structure, are
studied in the context of DGP brane, RS II brane-world, and the cyclic
universe. It is found that a flat FRW scenario with pressureless dust and a
dark energy component described as BHDE can accommodate late-time acceleration
with the Hubble horizon as infrared cutoff, even in the absence of
interaction between the dark sectors. Statefinder diagnostics reveal that these
models resemble $\Lambda$CDM cosmology in the future. It is found that BHDE
parameter $\Delta$, despite its theoretically constrained range of values, is
significant in describing the evolution of the universe, however, a classically
stable cosmological model cannot be obtained in the RS-II and DGP brane.
Viability of the models is also probed with observed Hubble data.
|
Saturn's polar stratosphere exhibits the seasonal growth and dissipation of
broad, warm vortices poleward of $\sim75^\circ$ latitude, which are strongest
in the summer and absent in winter. The longevity of the exploration of the
Saturn system by Cassini allows the use of infrared spectroscopy to trace the
formation of the North Polar Stratospheric Vortex (NPSV), a region of enhanced
temperatures and elevated hydrocarbon abundances at millibar pressures. We
constrain the timescales of stratospheric vortex formation and dissipation in
both hemispheres. Although the NPSV formed during late northern spring, by the
end of Cassini's reconnaissance (shortly after northern summer solstice), it
still did not display the contrasts in temperature and composition that were
evident at the south pole during southern summer. The newly-formed NPSV was
bounded by a strengthening stratospheric thermal gradient near $78^\circ$N. The
emergent boundary was hexagonal, suggesting that the Rossby wave responsible
for Saturn's long-lived polar hexagon - which was previously expected to be
trapped in the troposphere - can influence the stratospheric temperatures some
300 km above Saturn's clouds.
|
In this work, a study of epitaxial growth was carried out by means of
wavelets formalism. We showed the existence of a dynamic scaling form in
wavelet discriminated linear MBE equation where diffusion and noise are the
dominant effects. We determined simple and exact scaling functions involving
the scale of the wavelets when the system size is set to infinity. Exponents
were determined for both correlated and uncorrelated noise. The wavelet
methodology was applied to a computer model simulating the linear epitaxial
growth; the results showed a very good agreement with analytical formulation.
We also considered epitaxial growth with the additional Ehrlich-Schwoebel
effect. We characterized the coarsening of mounds formed on the surface during
the nonlinear phase using the wavelet power spectrum. The latter has an
advantage over other methods in the sense that one can track the coarsening in
both frequency (or scale) space and real space simultaneously. We showed that
the averaged wavelet power spectrum (also called scalegram) over all the
positions on the surface profile, identified the existence of a dominant scale
$a^*$, which increases with time following a power law relation of the form
$a^* \sim t^n$, where $n\simeq 1/3$.
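A scalegram, i.e. the wavelet power averaged over all positions on the profile, can be sketched with a Ricker (Mexican-hat) wavelet; the synthetic mounded profile and all parameters below are hypothetical stand-ins for the simulated surfaces:

```python
import numpy as np

def ricker(n_points, a):
    """Ricker (Mexican-hat) wavelet at scale a, sampled on n_points points."""
    x = np.arange(n_points) - (n_points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (x / a) ** 2) * np.exp(-0.5 * (x / a) ** 2)

def scalegram(h, scales):
    """Wavelet power averaged over all positions on the profile, per scale."""
    power = []
    for a in scales:
        w = ricker(int(10 * a) | 1, a)       # odd-length support ~10 scales wide
        c = np.convolve(h - h.mean(), w, mode="same")
        power.append(np.mean(c ** 2))
    return np.asarray(power)

# Synthetic mounded profile with a dominant wavelength of 64 sites (hypothetical)
rng = np.random.default_rng(0)
x = np.arange(4096)
h = np.cos(2.0 * np.pi * x / 64.0) + 0.1 * rng.standard_normal(x.size)
scales = np.arange(2, 40)
P = scalegram(h, scales)
a_star = scales[np.argmax(P)]                # dominant scale of the scalegram
```

Tracking `a_star` across snapshots in time is what yields the coarsening law $a^* \sim t^n$.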
|
This paper uses techniques from Random Matrix Theory to find the ideal
training-testing data split for a simple linear regression with m data points,
each an independent n-dimensional multivariate Gaussian. It defines "ideal" as
satisfying the integrity metric, i.e. the empirical model error is the actual
measurement noise, and thus fairly reflects the value or lack of same of the
model. This paper is the first to solve for the training and test size for any
model in a way that is truly optimal. The number of data points in the training
set is a root of a quartic polynomial, derived in Theorem 1, that depends only on
m and n; the covariance matrix of the multivariate Gaussian, the true model
parameters, and the true measurement noise drop out of the calculations. The
critical mathematical difficulties were realizing that the problems herein were
discussed in the context of the Jacobi Ensemble, a probability distribution
describing the eigenvalues of a known random matrix model, and evaluating a new
integral in the style of Selberg and Aomoto. Mathematical results are supported
with thorough computational evidence. This paper is a step towards automatic
choices of training/test set sizes in machine learning.
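Once Theorem 1's quartic coefficients are computed, the training size is obtained as an admissible real root; a sketch of that last step using a placeholder polynomial built from hypothetical roots (NOT the paper's polynomial, whose coefficients the abstract does not give):

```python
import numpy as np

def admissible_roots(coeffs, m):
    """Real roots of the quartic strictly between 0 and m -- candidate
    training-set sizes once Theorem 1's coefficients are in hand."""
    r = np.roots(coeffs)
    r = r[np.abs(r.imag) < 1e-8].real
    return np.sort(r[(r > 0) & (r < m)])

# Placeholder quartic built from hypothetical roots, for illustration only:
m = 100
coeffs = np.poly([20.0, 80.0, 150.0, -5.0])
candidates = admissible_roots(coeffs, m)
```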
|
We study the propagation of bosonic strings in singular target space-times.
For describing this, we assume this target space to be the quotient of a smooth
manifold $M$ by a singular foliation ${\cal F}$ on it. Using the technical tool
of a gauge theory, we propose a smooth functional for this scenario, such that
the propagation is assured to lie in the singular target on-shell, i.e. only
after taking into account the gauge invariant content of the theory. One of the
main new aspects of our approach is that we do not limit ${\cal F}$ to be
generated by a group action. We will show that, whenever it exists, the above
gauging is effectuated by a single geometrical and universal gauge theory,
whose target space is the generalized tangent bundle $TM\oplus T^*M$.
|
We provide sharp estimates for the distribution function of a martingale
transform of the indicator function of an event. They are formulated in terms
of Burkholder functions, which are reduced to the already known Bellman
functions for extremal problems on $\mathrm{BMO}$. The reduction implicitly
uses an unexpected phenomenon of automatic concavity for those Bellman
functions: their concavity in some directions implies concavity with respect to
other directions. A similar question for a martingale transform of a bounded
random variable is also considered.
|
An ability to postpone one's execution without penalty provides an important
strategic advantage in high-frequency trading. To elucidate competition between
traders one has to formulate a quantitative theory of the formation of the
execution price from market expectations and quotes. This theory was provided
in 2005 by Foucault, Kadan and Kandel. I derive the asymptotic distribution of
the bids/offers as a function of the ratio of patient and impatient traders
using a dynamic version of the Foucault, Kadan and Kandel Limit Order Book
(LOB) model.
The dynamic version of the LOB model allows a stylized but sufficiently
realistic representation of trading markets. In particular, the dynamic LOB
allows simulation of the distribution of execution times and spreads from
high-frequency quotes. Significant analytic progress is made towards an
understanding of trading as competition for immediacy of execution between
traders. The results are qualitatively compared with empirical volume-at-price
distribution of highly liquid stocks.
|