We propose a scenario in which a simple power-like primary spectrum for
protons with sources at cosmological distances leads to a quantitative
description of all the details of the observed cosmic ray spectrum for energies
from 10^{17} eV to 10^{21} eV. As usual, the ultrahigh energy protons with
energies above E_{GZK} ~ 4 x 10^{19} eV lose a large fraction of their
energies by the photoproduction of pions on the cosmic microwave background,
which finally decay mainly into neutrinos. In our scenario, these so-called
cosmogenic neutrinos interact with nucleons in the atmosphere through Standard
Model electroweak instanton-induced processes and produce air showers which are
hardly distinguishable from ordinary hadron-initiated air showers. In this way,
they give rise to a second contribution to the observed cosmic ray spectrum --
in addition to the one from the above-mentioned protons -- which reaches beyond
E_{GZK}. Since the whole observed spectrum is uniquely determined by a single
primary injection spectrum, no fine tuning is needed to fix the ratio of the
spectra below and above E_{GZK}. A statistical analysis shows an excellent
goodness of fit for this scenario. Possible tests of it range from observations at
cosmic ray facilities and neutrino telescopes to searches for QCD
instanton-induced processes at HERA.
|
We present a fixed point theorem for a class of (potentially) non-monotonic
functions over specially structured complete lattices. The theorem has as a
special case the Knaster-Tarski fixed point theorem when restricted to the case
of monotonic functions and Kleene's theorem when the functions are additionally
continuous. From the practical side, the theorem has direct applications in the
semantics of negation in logic programming. In particular, it leads to a more
direct and elegant proof of the least fixed point result of [Rondogiannis and
W. W. Wadge, ACM TOCL 6(2): 441-467 (2005)]. Moreover, the theorem appears to
have potential for applications outside the logic programming domain.
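The monotone-and-continuous special case mentioned above (Kleene's theorem) can be sketched concretely: on a finite powerset lattice, iterating a monotone continuous function from the bottom element reaches its least fixed point. The operator below is an illustrative toy (an immediate-consequence-style closure), not the construction from the paper:

```python
# Kleene iteration sketch: for a monotone, continuous f on a finite
# powerset lattice, the least fixed point is the limit of f^k(bottom).
def least_fixed_point(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:          # reached a fixed point
            return x
        x = y

# Toy operator on subsets of {0..4}: 0 is an axiom, and k+1 is
# derivable whenever k is (a miniature immediate-consequence operator).
U = frozenset(range(5))
def step(s):
    return (s | {0} | {k + 1 for k in s if k + 1 in U}) & U

print(sorted(least_fixed_point(step)))  # -> [0, 1, 2, 3, 4]
```

Starting from the empty set, the iteration climbs {}, {0}, {0,1}, ... until the whole chain is derived, mirroring how least-model semantics is computed in logic programming.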
|
We propose an alternative definition for families of stable pairs $(X,D)$
over a possibly non-reduced base when $D$ is reduced, by replacing $(X,D)$ with
an appropriate orbifold pair $(\mathcal X,\mathcal D)$. This definition of a
stable family ends up being equivalent to previous ones, but has the advantage
of being more amenable to the tools of deformation theory. Moreover, adjunction
for $(\mathcal X,\mathcal D)$ holds on the nose; there is no correction term
coming from the different. This leads to the existence of functorial gluing
morphisms for families of stable surfaces and functorial morphisms from $(n +
1)$-dimensional stable pairs to $n$-dimensional polarized orbispaces. As an
application, we study the deformation theory of some surface pairs.
|
In phenomenological studies of low-energy supersymmetry, running gaugino
masses are often taken to be equal near the scale of apparent gauge coupling
unification. However, many known mechanisms can avoid this universality, even
in models with unified gauge interactions. One example is an F-term vacuum
expectation value that is a singlet under the Standard Model gauge group but
transforms non-trivially in the symmetric product of two adjoint
representations of a group that contains the Standard Model gauge group. Here,
I compute the ratios of gaugino masses that follow from F-terms in non-singlet
representations of SO(10) and E_6 and their subgroups, extending well-known
results for SU(5). The SO(10) results correct some long-standing errors in the
literature.
|
The standard approach to realize a spin liquid state is through magnetically
frustrated states, relying on ingredients such as the lattice geometry,
dimensionality, and magnetic interaction type of the spins. While Heisenberg
spins on a pyrochlore lattice with only antiferromagnetic nearest-neighbor
interactions are theoretically proven to be disordered, spins in real systems
generally include longer-range interactions. The spatial correlations at longer
distances typically stabilize a long-range order rather than enhancing a spin
liquid state. Both states can, however, be destroyed by short-range static
correlations introduced by chemical disorder. Here, using disorder-free
specimens with a clear long-range antiferromagnetic order, we refine the spin
structure of the Heisenberg spinel ZnFe2O4 through neutron magnetic
diffraction. The unique wave vector (1, 0, 1/2) leads to a spin structure that
can be viewed as alternately stacked ferromagnetic and antiferromagnetic
tetrahedra in a three-dimensional checkerboard form. Stable coexistence of
these opposing types of clusters is enabled by the bipartite
breathing-pyrochlore crystal structure, leading to a second order phase
transition at 10 K. The diffraction intensity of ZnFe2O4 is an exact complement
to the inelastic scattering intensity of several chromate spinel systems which
are regarded as model classical spin liquids. Our results challenge this
attribution and suggest that, instead of the six-spin ring mode, the spin
excitations in chromate spinels are closely related to the (1, 0, 1/2) type of
spin order and to the four-spin ferromagnetic cluster localized on a single
tetrahedron.
|
John Ellard Gore FRAS, MRIA (1845-1910) was an Irish amateur astronomer and
prolific author of popular astronomy books. His main observational interest was
variable stars, of which he discovered several, and he served as the first
Director of the BAA Variable Star Section. He was also interested in binary
stars, leading him to calculate orbital elements of many such systems. He
demonstrated that the companion of Sirius, thought by many to be a dark body,
was in fact self-luminous. In doing so he provided the first indication of the
immense density of what later became known as white dwarfs.
|
In this article we report a novel analytic solution for a cosmological model
with a matter content described by a one component dissipative fluid, in the
framework of the causal Israel-Stewart theory. Some physically well motivated
analytical relations for the bulk viscous coefficient, the relaxation time and
a barotropic equation of state are postulated. Within the parameter space that
labels the solution, we study a region compatible with an accelerated
expansion of the universe at late times, as well as the stability properties of
the solution at the critical parameter values $ \gamma = 1$ and for $ s = 1/2
$. We study as well the consequences that arise from the positiveness of the
entropy production along the time evolution. In general, the accelerated
expansion at late times is only possible when $\epsilon\geq 1/18$, which
implies a very large non-adiabatic contribution to the speed of sound.
|
Atmospheric chemistry models have shown molecular oxygen can build up in
CO2-dominated atmospheres on potentially habitable exoplanets without an input
of life. Existing models typically assume a surface pressure of 1 bar. Here we
present model scenarios of CO2-dominated atmospheres with the surface pressure
ranging from 0.1 to 10 bars, while keeping the surface temperature at 288 K. We
use a one-dimensional photochemistry model to calculate the abundance of O2 and
other key species, for outgassing rates ranging from a Venus-like volcanic
activity up to 20x Earth-like activity. The model maintains the redox balance
of the atmosphere and the ocean, and includes the dependence of the outgassing
rates on the surface pressure. Our calculations show that the surface
pressure is a controlling parameter in the photochemical stability and oxygen
buildup of CO2-dominated atmospheres. The mixing ratio of O2 monotonically
decreases as the surface pressure increases at the very high outgassing rates,
whereas it increases as the surface pressure increases at the lower-than-Earth
outgassing rates. Abiotic O2 can only build up to the detectable level, defined
as 1e-3 in volume mixing ratio, in 10-bar atmospheres with the Venus-like
volcanic activity rate and the reduced outgassing rate of H2 due to the high
surface pressure. Our results support the search for biological activities and
habitability via atmospheric O2 on terrestrial planets in the habitable zone of
Sun-like stars.
|
We present a statistical detection of 1.5 GHz radio continuum emission from a
sample of faint z~4 Lyman-break galaxies (LBGs). LBGs are key tracers of the
high-redshift star formation history and important sources of UV photons that
ionized the intergalactic medium in the early universe. In order to better
constrain the extinction and intrinsic star formation rate (SFR) of
high-redshift LBGs, we combine the latest ultradeep Karl G. Jansky Very Large
Array 1.5 GHz radio image and the Hubble Space Telescope Advanced Camera for
Surveys (ACS) optical images in the Great Observatories Origins Deep
Survey-North. We select a large sample of 1771 z~4 LBGs from the ACS catalogue
using $B$-dropout color criteria. Our LBG samples have $i$-band magnitudes of
~25-28 (AB), ~0-3 magnitudes fainter than M*_UV at z~4. In our stacked radio images,
we find the LBGs to be point-like under our 2" angular resolution. We measure
their mean 1.5 GHz flux by stacking the measurements on the individual objects.
We achieve a statistical detection of $S_{1.5GHz}$=0.210+-0.075 uJy at ~3
sigma, the first for such a faint LBG population at z~4. The measurement takes
into account the effects of source size and blending of multiple objects. The
detection is visually confirmed by stacking the radio images of the LBGs, and
the uncertainty is quantified with Monte Carlo simulations on the radio image.
The stacked radio flux corresponds to an intrinsic SFR of 16.0+-5.7 Msun/yr, which
is 2.8 times the SFR derived from the rest-frame UV continuum luminosity. This
factor of 2.8 is in excellent agreement with the extinction correction derived
from the observed UV continuum spectral slope, using the local calibration of
Meurer et al. (1999). This result supports the use of the local calibration on
high-redshift LBGs for deriving the extinction correction and SFR, and also
disfavors a steep reddening curve such as that of the Small Magellanic Cloud.
|
We construct uncountably many simply connected open 3-manifolds with genus
one ends homeomorphic to the Cantor set. Each constructed manifold has the
property that any self homeomorphism of the manifold (which necessarily extends
to a homeomorphism of the ends) fixes the ends pointwise. These manifolds are
complements of rigid generalized Bing-Whitehead (BW) Cantor sets. Previous
examples of rigid Cantor sets with simply connected complement in $R^{3}$ had
infinite genus and it was an open question as to whether finite genus examples
existed. The examples here exhibit the minimum possible genus, genus one. These
rigid generalized BW Cantor sets are constructed using variable numbers of Bing
and Whitehead links. Our previous result with \v{Z}eljko determining when BW
Cantor sets are equivalently embedded in $R^{3}$ extends to the generalized
construction. This characterization is used to prove rigidity and to
distinguish the uncountably many examples.
|
In this paper, we prove the existence and conjugacy of injectors of
generalized $\pi$-soluble groups for the Hartley class defined by an invariable
Hartley function, and give a description of the structure of the injectors.
|
I discuss the current status of the comparison between theoretical
predictions and experimental data, relevant to the production of open charm and
bottom quarks in photon-hadron and photon-photon collisions. I advocate the use
of a formalism that matches fixed-order computations to resummed computations
in order to make firm statements on heavy flavour production as described by
perturbative QCD.
|
Modern statistical applications often involve minimizing an objective
function that may be nonsmooth and/or nonconvex. This paper focuses on a broad
Bregman-surrogate algorithm framework including the local linear approximation,
mirror descent, iterative thresholding, DC programming and many others as
particular instances. The recharacterization via generalized Bregman functions
enables us to construct suitable error measures and establish global
convergence rates for nonconvex and nonsmooth objectives in possibly high
dimensions. For sparse learning problems with a composite objective, under some
regularity conditions, the obtained estimators as the surrogate's fixed points,
though not necessarily local minimizers, enjoy provable statistical guarantees,
and the sequence of iterates can be shown to approach the statistical truth
within the desired accuracy geometrically fast. The paper also studies how to
design adaptive momentum based accelerations without assuming convexity or
smoothness by carefully controlling stepsize and relaxation parameters.
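One of the named instances, iterative thresholding, admits a compact sketch. The quadratic problem, step size, and data below are illustrative assumptions, not the paper's setting: minimize (1/2)||Ax - b||^2 + lam*||x||_1 by alternating a gradient step on the smooth part with a soft-thresholding (proximal) step:

```python
# Iterative soft-thresholding (ISTA) sketch, one member of the
# Bregman-surrogate family discussed above. Toy data for illustration.
def ista(A, b, lam, step, iters=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth part: A^T (A x - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # proximal (soft-thresholding) step with threshold step*lam
        z = [x[j] - step * g[j] for j in range(n)]
        x = [max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1) for v in z]
    return x

A = [[1.0, 0.0], [0.0, 1.0]]   # identity design gives a closed-form check
b = [3.0, 0.5]
# with A = I the minimizer is soft(b, lam), i.e. (2.0, 0.0) for lam = 1
print(ista(A, b, lam=1.0, step=0.5))
```

With the identity design the iterates contract geometrically toward the soft-thresholded solution, matching the geometric convergence the abstract describes for the statistical setting.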
|
We study the crystalline universal deformation ring R (and its ideal of
reducibility I) of a mod p Galois representation rho_0 of dimension n whose
semisimplification is the direct sum of two absolutely irreducible mutually
non-isomorphic constituents rho_1 and rho_2. Under some assumptions on Selmer
groups associated with rho_1 and rho_2 we show that R/I is cyclic and often
finite. Using ideas and results of (but somewhat different assumptions from)
Bellaiche and Chenevier we prove that I is principal for essentially self-dual
representations and deduce statements about the structure of R. Using a new
commutative algebra criterion we show that given enough information on the
Hecke side one gets an R=T-theorem. We then apply the technique to modularity
problems for 2-dimensional representations over an imaginary quadratic field
and a 4-dimensional representation over the rationals.
|
In supervised learning, the presence of noise can have a significant impact
on decision making. Many classifiers, including logistic regression, SVM, and
AdaBoost, do not take label noise into account in the derivation of their loss
functions. AdaBoost is especially vulnerable: its core idea is to continuously
increase the weight of misclassified samples, so in the presence of label noise
the weights of mislabeled samples keep growing, leading to a decrease in model
accuracy. The learning processes of BP neural networks and decision trees are
also affected by label noise. Therefore, solving the label noise problem is an important
element of maintaining the robustness of the network model, which is of great
practical significance. Granular ball computing is an important modeling method
developed in the field of granular computing in recent years, which is an
efficient, robust and scalable learning method. In this paper, we propose a
granular ball neural network model that adopts a multi-granularity approach to
filter label-noise samples during training. This addresses the model
instability caused by label noise in deep learning, greatly reduces the
proportion of label noise in the training samples, and improves the robustness
of neural network models.
|
We study the topic of "extremal" planar graphs, defining
$\mathrm{ex_{_{\mathcal{P}}}}(n,H)$ to be the maximum number of edges possible
in a planar graph on $n$ vertices that does not contain a given graph $H$ as a
subgraph. In particular, we examine the case when $H$ is a small cycle, obtaining
$\mathrm{ex_{_{\mathcal{P}}}}(n,C_{4}) \leq \frac{15}{7}(n-2)$ for all $n \geq
4$ and $\mathrm{ex_{_{\mathcal{P}}}}(n,C_{5}) \leq \frac{12n-33}{5}$ for all $n
\geq 11$, and showing that both of these bounds are tight.
|
It is well known that general relativity does not admit gravitational geons
that are stationary, asymptotically flat, singularity free and topologically
trivial. However, it is likely that general relativity will receive corrections
at large curvatures and the modified field equations may admit solutions
corresponding to this type of geon. If geons are produced in the early universe
and survive until today they could account for some of the dark matter that has
been "observed" in galaxies and galactic clusters.
In this paper I consider gravitational geons in 1+1 dimensional theories of
gravity. I show that the Jackiw-Teitelboim theory with corrections proportional
to $R^2$ and $\Box R$ admits gravitational geons. I also show that
gravitational geons exist in a class of theories that includes Lagrangians
proportional to $R^{2/3}$.
|
The values of the determinant of Vandermonde matrices with real elements are
analyzed both visually and analytically over the unit sphere in various
dimensions. For three dimensions some generalized Vandermonde matrices are
analyzed visually. The extreme points of the ordinary Vandermonde determinant
on finite-dimensional unit spheres are given as the roots of rescaled Hermite
polynomials and a recursion relation is provided for the polynomial
coefficients. Analytical expressions for these roots are also given for
dimension three to seven. A transformation of the optimization problem is
provided and some relations between the ordinary and generalized Vandermonde
matrices involving limits are discussed.
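The ordinary Vandermonde determinant underlying the analysis has the classical closed form det V(x_1,...,x_n) = prod_{i<j} (x_j - x_i). The sketch below checks that identity numerically on a small example (illustrative only, not the paper's computations):

```python
# Verify det V = prod_{i<j} (x_j - x_i) on a 3x3 example.
from itertools import permutations

def det(M):
    # Leibniz formula; adequate for the tiny matrices used here.
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        # sign from the number of inversions of the permutation
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        sign = -1 if inv % 2 else 1
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def vandermonde(xs):
    # row i is (1, x_i, x_i^2, ..., x_i^{n-1})
    return [[x ** k for k in range(len(xs))] for x in xs]

xs = [1.0, 2.0, 4.0]
closed_form = 1.0
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        closed_form *= xs[j] - xs[i]

print(det(vandermonde(xs)), closed_form)  # both 6.0
```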
|
The mass distribution of fission fragments of actinide and superheavy nuclei
can be explained if a new state of nuclear matter, a nucleon phase, is created
in any fission event.
|
A color image contains luminance and chrominance components representing the
intensity and color information respectively. The objective of the work
presented in this paper is to show the significance of incorporating the
chrominance information for the task of scene classification. An improved
color-to-grayscale image conversion algorithm by effectively incorporating the
chrominance information is proposed using the color-to-gray structure similarity
index (C2G-SSIM) and singular value decomposition (SVD) to improve the
perceptual quality of the converted grayscale images. The experimental result
analysis based on the image quality assessment for image decolorization called
C2G-SSIM and success rate (Cadik and COLOR250 datasets) shows that the proposed
image decolorization technique performs better than 8 existing benchmark
algorithms for image decolorization. In the second part of the paper, the
effectiveness of incorporating the chrominance component in scene
classification task is demonstrated using the deep belief network (DBN) based
image classification system developed using dense scale invariant feature
transform (SIFT) as features. The level of chrominance information
incorporated by the proposed image decolorization technique is confirmed by the
improvement in the overall scene classification accuracy. Also, the overall
scene classification performance is improved by the combination of models
obtained using the proposed and the conventional decolorization methods.
|
In this article, we study the difference-difference Lotka-Volterra
equations in p-adic number space and their $p$-adic valuation version. We point
out that the structure of the space obtained by taking the ultra-discrete limit
is the same as that of the $p$-adic valuation space.
|
We consider a communication method, where the sender encodes n classical bits
into 1 qubit and sends it to the receiver who performs a certain measurement
depending on which of the initial bits must be recovered. This procedure is
called (n,1,p) quantum random access code (QRAC) where p > 1/2 is its success
probability. It is known that (2,1,0.85) and (3,1,0.79) QRACs (with no
classical counterparts) exist and that (4,1,p) QRAC with p > 1/2 is not
possible.
We extend this model with shared randomness (SR) that is accessible to both
parties. Then (n,1,p) QRAC with SR and p > 1/2 exists for any n > 0. We give an
upper bound on its success probability (the known (2,1,0.85) and (3,1,0.79)
QRACs match this upper bound). We discuss some particular constructions for
several small values of n.
We also study the classical counterpart of this model where n bits are
encoded into 1 bit instead of 1 qubit and SR is used. We give an optimal
construction for such codes and find their success probability exactly--it is
less than in the quantum case.
Interactive 3D quantum random access codes are available on-line at
http://home.lanet.lv/~sd20008/racs .
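The quoted success probabilities have simple closed forms: the known optimal values are p = 1/2 + 1/(2*sqrt(2)) for the (2,1) QRAC and p = 1/2 + 1/(2*sqrt(3)) for the (3,1) QRAC, corresponding to Bloch-sphere encodings. A quick numerical check:

```python
# Closed-form success probabilities of the known (2,1) and (3,1) QRACs.
import math

p2 = 0.5 + 1.0 / (2.0 * math.sqrt(2.0))  # (2,1,p) QRAC
p3 = 0.5 + 1.0 / (2.0 * math.sqrt(3.0))  # (3,1,p) QRAC
print(round(p2, 2), round(p3, 2))  # -> 0.85 0.79
```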
|
Micro-segmentation is an emerging security technique that separates physical
networks into isolated logical micro-segments (workloads). By tying
fine-grained security policies to individual workloads, it limits the
attacker's ability to move laterally through the network, even after
infiltrating the perimeter defences. While micro-segmentation has proved
effective at shrinking the attack surface of enterprise networks, its impact
assessment is almost absent from the literature. This research is dedicated to
developing an analytical framework to characterise and quantify the
effectiveness of micro-segmentation on enhancing networks security. We rely on
a twofold graph-feature based framework of the network connectivity and attack
graphs to evaluate the network exposure and robustness, respectively. While the
former assesses the network assets connectedness, reachability and centrality,
the latter depicts the ability of the network to resist goal-oriented
attackers. Tracking the variations of the formulated metric values after the
deployment of micro-segmentation reveals an exposure reduction and robustness
improvement in the range of 60%-90%.
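The exposure side of such a framework can be sketched minimally: count how many asset pairs remain mutually reachable before and after segmentation. The toy topology and the allowed-flow policy below are assumptions for illustration, not the paper's network model or metrics:

```python
# Reachability-based exposure sketch: ordered reachable pairs via BFS.
from collections import deque

def reachable_pairs(nodes, edges):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    count = 0
    for s in nodes:
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        count += len(seen) - 1  # ordered pairs starting at s
    return count

nodes = ["web", "app", "db", "hr", "dev"]
flat = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]  # full mesh
segmented = [("web", "app"), ("app", "db")]  # policy allows only these flows
before, after = reachable_pairs(nodes, flat), reachable_pairs(nodes, segmented)
print(before, after, round(1 - after / before, 2))  # -> 20 6 0.7
```

In this toy case segmentation cuts reachable pairs from 20 to 6, a 70% exposure reduction, of the same order as the 60%-90% range reported above.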
|
This paper presents a novel approach to including non-instantaneous discrete
control transitions in the linear hybrid automaton approach to simulation and
verification of hybrid control systems. In this paper we study the control of a
continuously evolving analog plant using a controller programmed in a
synchronous programming language. We provide extensions to the synchronous
subset of the SystemJ programming language for modeling, implementation, and
verification of such hybrid systems. We provide a sound rewrite semantics that
approximates the evolution of the continuous variables in the discrete domain,
inspired by classical supervisory control theory. The resultant discrete
time model can be verified using classical model-checking tools. Finally, we
show that systems designed using our approach have a higher fidelity than the
ones designed using the hybrid automaton approach.
|
We use CANDELS imaging, 3D-HST spectroscopy, and Chandra X-ray data to
investigate if active galactic nuclei (AGNs) are preferentially fueled by
violent disk instabilities funneling gas into galaxy centers at 1.3<z<2.4. We
select galaxies undergoing gravitational instabilities using the number of
clumps and degree of patchiness as proxies. The CANDELS visual classification
system is used to identify 44 clumpy disk galaxies, along with mass-matched
comparison samples of smooth and intermediate morphology galaxies. We note
that, despite being mass-matched and having similar star formation rates,
the smoother galaxies tend to be smaller disks with more prominent bulges
compared to the clumpy galaxies. The lack of smooth extended disks is probably
a general feature of the z~2 galaxy population, and means we cannot directly
compare with the clumpy and smooth extended disks observed at lower redshift.
We find that z~2 clumpy galaxies have slightly enhanced AGN fractions selected
by integrated line ratios (in the mass-excitation method), but the spatially
resolved line ratios indicate this is likely due to extended phenomena rather
than nuclear AGNs. Meanwhile the X-ray data show that clumpy, smooth, and
intermediate galaxies have nearly indistinguishable AGN fractions derived from
both individual detections and stacked non-detections. The data demonstrate
that AGN fueling modes at z~1.85 - whether violent disk instabilities or
secular processes - are as efficient in smooth galaxies as they are in clumpy
galaxies.
|
The spectrum of light baryons and mesons has been reproduced recently by
Brodsky and Teramond from a holographic dual to QCD inspired by the AdS/CFT
correspondence. They associate fluctuations about the AdS geometry with four
dimensional angular momenta of the dual QCD states. We use a similar approach
to estimate masses of glueball states with different spins and their
excitations. We consider Dirichlet and Neumann boundary conditions and find
approximate linear Regge trajectories for these glueballs. In particular the
Neumann case is consistent with the Pomeron trajectory.
|
Following Rutherford's 1920 historical hypothesis of the neutron as a
compressed hydrogen atom in the core of stars, the laboratory synthesis of the
neutron from protons and electrons was claimed in the late 1960s by the Italian
priest-physicist Don Carlo Borghi and his associates via a metal chamber
containing a partially ionized hydrogen gas at a fraction of $1 bar$ pressure
traversed by an electric arc with $5 J$ energy and microwaves with $10^{10}
s^{-1}$ frequency. The experiment remained unverified for decades due to the
lack of theoretical understanding of the results. In this note we report
various measurements showing that, under certain conditions, electric arcs
within a hydrogen gas produce neutral, hadron-size entities that are absorbed
by stable nuclei and subsequently result in the release of detectable neutrons,
thus confirming Don Borghi's experiment. The possibility that said entities are
neutrons is discussed jointly with other alternatives. Due to their simplicity,
a primary scope of this note is to stimulate the independent re-run of the
tests as conducted or in suitable alternative forms.
|
We perform a detailed investigation of total lifetimes for the doubly heavy
baryons $\Xi_{QQ'}$, $\Omega_{QQ'}$ in the framework of operator product
expansion in the inverse heavy quark mass; to estimate the matrix elements of
the operators obtained in the OPE, approximations of nonrelativistic QCD are
used.
|
We continue McCartor and Robertson's recent demonstration of the
indispensability of ghost fields in the light-cone gauge quantization of gauge
fields. It is shown that the ghost fields are indispensable in deriving
well-defined antiderivatives and in regularizing the most singular component of
the gauge field propagator. To this end it is sufficient to confine ourselves to
noninteracting abelian fields. Furthermore to circumvent dealing with
constrained systems, we construct the temporal gauge canonical formulation of
the free electromagnetic field in auxiliary coordinates
$x^{\mu}=(x^-,x^+,x^1,x^2)$, where $x^-=x^0\cos\theta-x^3\sin\theta$,
$x^+=x^0\sin\theta+x^3\cos\theta$, and $x^-$ plays the role of time. In so doing we
can quantize the fields canonically without any constraints, unambiguously
introduce "static ghost fields" as residual gauge degrees of freedom and
construct the light-cone gauge solution in the light-cone representation by
simply taking the light-cone limit (${\theta}\to \pi/4$). As a by-product we
find that, with a suitable choice of vacuum, the Mandelstam-Leibbrandt form of
the propagator can be derived in the ${\theta}=0$ case (the temporal gauge
formulation in the equal-time representation).
|
We generalize the ordinary aggregation process to allow for choice. In
ordinary aggregation, two random clusters merge and form a larger aggregate. In
our implementation of choice, a target cluster and two candidate clusters are
randomly selected, and the target cluster merges with the larger of the two
candidate clusters. We study the long-time asymptotic behavior, and find that
as in ordinary aggregation, the size density adheres to the standard scaling
form. However, aggregation with choice exhibits a number of novel features.
First, the density of the smallest clusters exhibits anomalous scaling. Second,
both the small-size and the large-size tails of the density are overpopulated,
at the expense of the density of moderate-size clusters. We also study the
complementary case where the smaller candidate cluster participates in the
aggregation process, and find an abundance of moderate clusters at the expense of
small and large clusters. Additionally, we investigate aggregation processes
with choice among multiple candidate clusters, and a symmetric implementation
where the choice is between two pairs of clusters.
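The merge rule described above is easy to simulate. The Monte Carlo sketch below is an illustrative toy (cluster counts, step counts, and seed are assumptions, not the paper's calculation): pick a target and two candidate clusters at random, then merge the target with the larger candidate.

```python
# Toy simulation of aggregation with choice.
import random

def simulate(n_clusters=2000, steps=1500, seed=1):
    random.seed(seed)
    sizes = [1] * n_clusters  # start from monomers
    for _ in range(steps):
        # target t and two distinct candidates c1, c2
        t, c1, c2 = random.sample(range(len(sizes)), 3)
        big = c1 if sizes[c1] >= sizes[c2] else c2
        sizes[t] += sizes[big]   # target absorbs the larger candidate
        sizes.pop(big)           # one fewer cluster per merge event
    return sizes

sizes = simulate()
# each step removes exactly one cluster and conserves total mass
print(len(sizes), sum(sizes))  # -> 500 2000
```

Histogramming `sizes` for large systems would expose the overpopulated small- and large-size tails relative to ordinary aggregation.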
|
We give examples over arbitrary fields of rings of invariants that are not
finitely generated. The group involved can be as small as three copies of the
additive group, as in Mukai's examples over the complex numbers. The failure of
finite generation comes from certain elliptic fibrations or abelian surface
fibrations having positive Mordell-Weil rank.
Our work suggests a generalization of the Morrison-Kawamata cone conjecture
from Calabi-Yau varieties to klt Calabi-Yau pairs. We prove the conjecture in
dimension 2 in the case of minimal rational elliptic surfaces.
|
The main goal of this paper is to generalize the Jacobi and Gauss-Seidel
methods for solving non-square linear systems. Towards this goal, we present
iterative procedures to obtain an approximate solution of a non-square linear
system. We derive sufficient conditions for the convergence of such iterative
methods. A procedure is given to show how an exact solution can be obtained
from these methods. Lastly, an example is considered to compare these methods
with other available methods.
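As background for the non-square generalization, the classical square-system Jacobi update is x_i^{(k+1)} = (b_i - sum_{j != i} a_ij x_j^{(k)}) / a_ii. The sketch below is standard textbook material, not the paper's method:

```python
# Classical Jacobi iteration on a square, strictly diagonally
# dominant system (which guarantees convergence).
def jacobi(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # each component uses only the previous iterate (unlike Gauss-Seidel)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 12.0]
# exact solution of 4x + y = 9, 2x + 5y = 12 is x = 11/6, y = 5/3
print([round(v, 6) for v in jacobi(A, b)])
```

Gauss-Seidel differs only in using already-updated components within a sweep, which typically speeds convergence on such systems.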
|
We present the ALMA detection of molecular outflowing gas in the central
regions of NGC4945, one of the nearest starbursts and also one of the nearest
hosts of an active galactic nucleus (AGN). We detect four outflow plumes in CO
(3-2) at ~0.3" resolution that appear to correspond to molecular gas located
near the edges of the known ionized outflow cone and its (unobserved)
counterpart behind the disk. The fastest and brightest of these plumes has
emission reaching observed line-of-sight projected velocities of over 450 km/s
beyond systemic, equivalent to an estimated physical outflow velocity v>600
km/s for the fastest emission. Most of these plumes have corresponding emission
in HCN or HCO+ (4-3). We discuss a kinematic model for the outflow emission
where the molecular gas has the geometry of the ionized gas cone and shares the
rotation velocity of the galaxy when ejected. We use this model to explain the
velocities we observe, constrain the physical speed of the ejected material,
and account for the fraction of outflowing gas that is not detected due to
confusion with the galaxy disk. We estimate a total molecular mass outflow rate
dMmol/dt~20 Msun/yr flowing through a surface within 100 pc of the disk
midplane, likely driven by a combination of the central starburst and AGN.
|
Most parameterized complexity classes are defined in terms of a parameterized
version of the Boolean satisfiability problem (the so-called weighted
satisfiability problem). For example, Downey and Fellows's W-hierarchy is of
this form. But there are also classes, for example, the A-hierarchy, that are
more naturally characterised in terms of model-checking problems for certain
fragments of first-order logic.
Downey, Fellows, and Regan were the first to establish a connection between
the two formalisms by giving a characterisation of the W-hierarchy in terms of
first-order model-checking problems. We improve their result and then prove a
similar correspondence between weighted satisfiability and model-checking
problems for the A-hierarchy and the W^*-hierarchy. Thus we obtain very uniform
characterisations of many of the most important parameterized complexity
classes in both formalisms.
Our results can be used to give new, simple proofs of some of the core
results of structural parameterized complexity theory.
|
Motivated by the results of Hashimoto and Taylor, we perform a detailed study
of the mass spectrum of the non-abelian Born-Infeld theory, defined by the
symmetrized trace prescription, on tori with constant magnetic fields turned
on. Subsequently, we compare this for several cases to the mass spectrum of
intersecting D-branes. Exact agreement is found in only two cases: BPS
configurations on the four-torus and coinciding tilted branes. Finally we
investigate the fluctuation dynamics of an arbitrarily wrapped Dp-brane with
flux.
|
Learning policies for complex tasks that require multiple different skills is
a major challenge in reinforcement learning (RL). It is also a requirement for
its deployment in real-world scenarios. This paper proposes a novel framework
for efficient multi-task reinforcement learning. Our framework trains agents to
employ hierarchical policies that decide when to use a previously learned
policy and when to learn a new skill. This enables agents to continually
acquire new skills during different stages of training. Each learned task
corresponds to a human language description. Because agents can only access
previously learned skills through these descriptions, the agent can always
provide a human-interpretable description of its choices. In order to help the
agent learn the complex temporal dependencies necessary for the hierarchical
policy, we provide it with a stochastic temporal grammar that modulates when to
rely on previously learned skills and when to execute new skills. We validate
our approach on Minecraft games designed to explicitly test the ability to
reuse previously learned skills while simultaneously learning new skills.
|
We define a proof system for exceptions which is close to the syntax for
exceptions, in the sense that the exceptions do not appear explicitly in the
type of any expression. This proof system is sound with respect to the intended
denotational semantics of exceptions. With this inference system we prove
several properties of exceptions.
|
Westerlund 1 (Wd1) is potentially the largest star cluster in the Galaxy.
That designation critically depends upon the distance to the cluster, yet the
cluster is highly obscured, making luminosity-based distance estimates
difficult. Using {\it Gaia} Data Release 2 (DR2) parallaxes and Bayesian
inference, we infer a parallax of $0.35^{+0.07}_{-0.06}$ mas corresponding to a
distance of $2.6^{+0.6}_{-0.4}$ kpc. To leverage the combined statistics of all
stars in the direction of Wd1, we derive the Bayesian model for a cluster of
stars hidden among Galactic field stars; this model includes the parallax
zero-point. Previous estimates for the distance to Wd1 ranged from 1.0 to 5.5
kpc, although values around 5 kpc have usually been adopted. The {\it Gaia} DR2
parallaxes reduce the uncertainty from a factor of 3 to 18\% and rule out the
most often quoted value of 5 kpc with 99\% confidence. This new distance allows
for more accurate mass and age determinations for the stars in Wd1. For
example, the previously inferred initial mass at the main-sequence turn-off was
around 40 M$_{\odot}$; the new {\it Gaia} DR2 distance shifts this down to
about 22 M$_{\odot}$. This has important implications for our understanding of
the late stages of stellar evolution, including the initial mass of the
magnetar and the LBV in Wd1. Similarly, the new distance suggests that the
total cluster mass is about four times lower than previously calculated.
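For a rough sense of scale, the quoted parallax can be inverted directly. The sketch below reproduces the naive 1/parallax distance, which differs slightly from the paper's 2.6 kpc Bayesian estimate because the latter incorporates priors and the parallax zero-point:

```python
# Naive parallax inversion for the parallax quoted above. This ignores
# the Bayesian prior and zero-point treatment used in the paper, which
# is why the paper's 2.6 kpc differs from the simple 1/parallax value.

def parallax_to_distance_kpc(parallax_mas):
    """Distance in kpc from a parallax in milliarcseconds (d = 1/p)."""
    return 1.0 / parallax_mas

p = 0.35                                     # mas, with +0.07/-0.06 bounds
d = parallax_to_distance_kpc(p)              # ~2.86 kpc
d_near = parallax_to_distance_kpc(p + 0.07)  # larger parallax -> nearer
d_far = parallax_to_distance_kpc(p - 0.06)   # smaller parallax -> farther
print(round(d, 2), round(d_near, 2), round(d_far, 2))
```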
|
In this paper we employ two recent analytical approaches to investigate the
possible classes of traveling wave solutions of some members of a
recently-derived integrable family of generalized Camassa-Holm (GCH) equations.
A recent, novel application of phase-plane analysis is employed to analyze the
singular traveling wave equations of three of the GCH NLPDEs, i.e. the possible
non-smooth peakon and cuspon solutions. One of the considered GCH equations
supports both solitary (peakon) and periodic (cuspon) cusp waves in different
parameter regimes. The second equation does not support singular traveling
waves and the last one supports four-segmented, non-smooth $M$-wave solutions.
Moreover, smooth traveling waves of the three GCH equations are considered.
Here, we use a recent technique to derive convergent multi-infinite series
solutions for the homoclinic orbits of their traveling-wave equations,
corresponding to pulse (kink or shock) solutions respectively of the original
PDEs. We perform many numerical tests in different parameter regimes to pinpoint
real saddle equilibrium points of the corresponding GCH equations, as well as
ensure simultaneous convergence and continuity of the multi-infinite series
solutions for the homoclinic orbits anchored by these saddle points. Unlike the
majority of unaccelerated convergent series, high accuracy is attained with
relatively few terms. We also show the traveling wave nature of these pulse and
front solutions to the GCH NLPDEs.
|
We consider various versions of adaptive Gibbs and Metropolis within-Gibbs
samplers, which update their selection probabilities (and perhaps also their
proposal distributions) on the fly during a run, by learning as they go in an
attempt to optimise the algorithm. We present a cautionary example of how even
a simple-seeming adaptive Gibbs sampler may fail to converge. We then present
various positive results guaranteeing convergence of adaptive Gibbs samplers
under certain conditions.
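To make the setting concrete, the following is a toy adaptive random-scan Metropolis-within-Gibbs sampler on a correlated bivariate normal. The adaptation rule (biasing coordinate selection toward higher recent acceptance, with probabilities kept bounded away from zero) is an illustrative assumption, not one of the paper's schemes:

```python
import math
import random

# Toy adaptive Metropolis-within-Gibbs on a bivariate normal target.
# The adaptation rule below is a hypothetical illustration of updating
# selection probabilities on the fly.

def log_target(x, y, rho=0.9):
    return -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho * rho))

def adaptive_mwg(n_iter=2000, seed=0):
    rng = random.Random(seed)
    state = [0.0, 0.0]
    probs = [0.5, 0.5]                  # coordinate-selection probabilities
    accepts = [1.0, 1.0]
    tries = [2.0, 2.0]
    for _ in range(n_iter):
        i = 0 if rng.random() < probs[0] else 1
        prop = list(state)
        prop[i] += rng.gauss(0, 0.5)
        tries[i] += 1
        if math.log(rng.random()) < log_target(*prop) - log_target(*state):
            state = prop
            accepts[i] += 1
        # adapt: favor the coordinate with higher acceptance rate, but
        # keep probabilities in [0.1, 0.9] so both keep being updated
        r0, r1 = accepts[0] / tries[0], accepts[1] / tries[1]
        probs[0] = min(0.9, max(0.1, r0 / (r0 + r1)))
        probs[1] = 1.0 - probs[0]
    return state, probs

sample, probs = adaptive_mwg()
print(probs)
```

Keeping the selection probabilities bounded away from zero is one simple way to preserve the conditions needed for convergence guarantees of the kind the abstract describes.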
|
Sol-gel electrophoresis is used to grow PbTiO3 nanotube arrays in porous
anodic alumina template channels, because it is a cheap and simple method for
the growth of nanostructures and has the advantage of better tube growth
control. Moreover, this method can produce nanotubes of high quality with denser
structures. In this technique, semiconductor porous anodic alumina
templates are used to grow the nanotube arrays. Consequently, close-packed
PbTiO3 nanotube arrays are grown in the template channels. It is shown here
that, to the best of our knowledge, sol-gel electrophoresis is the only method
applicable for producing PbTiO3 nanotubes with wall thickness below 20 nm
(section 3.3). The effect of deposition time on the wall thickness is also
investigated for a fixed electrophoresis voltage. The thickness of the grown
nanotubes is uniform -- an important consideration for the ferroelectric
properties of the deposited nanolayers in future investigations.
|
Low magnetic field scanning tunneling spectroscopy of individual Abrikosov
vortices in heavily overdoped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ unveils a
clear d-wave electronic structure of the vortex core, with a zero-bias
conductance peak at the vortex center that splits with increasing distance from
the core. We show that previously reported unconventional electronic
structures, including the low energy checkerboard charge order in the vortex
halo and the absence of a zero-bias conductance peak at the vortex center, are
direct consequences of short inter-vortex distance and consequent vortex-vortex
interactions prevailing in earlier experiments.
|
Diffusion Language models (DLMs) are a promising avenue for text generation
due to their practical properties, such as tractable and controllable generation. They
also have the advantage of not having to predict text autoregressively.
However, despite these notable features, DLMs have not yet reached the
performance levels of their autoregressive counterparts. One of the ways to
reduce the performance gap between these two types of language models is to
speed up the generation of DLMs. Therefore, we propose a novel methodology to
address this issue in this work. It enables the execution of more generation
steps within a given time frame, leading to higher-quality outputs.
Specifically, our methods estimate a DLM's completeness of text generation and
allow adaptive halting of the generation process. We evaluate our methods on
Plaid, SSD, and CDCD DLMs and create a cohesive perspective on their generation
workflows. Finally, we confirm that our methods allow halting these models and
decrease the generation time by $10$-$40$\% without a drop in the quality of
model samples.
|
We study out-of-time-order correlation (OTOC) for one-dimensional
periodically driven hardcore bosons in the presence of Aubry-Andr\'e (AA)
potential and show that both the spectral properties and the saturation values
of OTOC in the steady state of these driven systems provide a clear distinction
between the localized and delocalized phases of these models. Our results,
obtained via exact numerical diagonalization of these boson chains, thus
indicate that OTOC can provide a signature of drive induced delocalization even
for systems which do not have a well defined semiclassical (and/or large N)
limit. We demonstrate the presence of such signature by analyzing two different
drive protocols for hardcore boson chains, leading to distinct physical
phenomena and discuss experiments which can test our theory.
|
Recent research highlights that the Directed Accumulator (DA), through its
parametrization of geometric priors into neural networks, has notably improved
the performance of medical image recognition, particularly with small and
imbalanced datasets. However, DA's potential in pixel-wise dense predictions is
unexplored. To bridge this gap, we present the Directed Accumulator Grid
(DAGrid), which allows geometric-preserving filtering in neural networks, thus
broadening the scope of DA's applications to include pixel-level dense
prediction tasks. DAGrid utilizes homogeneous data types in conjunction with
designed sampling grids to construct geometrically transformed representations,
retaining intricate geometric information and promoting long-range information
propagation within the neural networks. Contrary to its symmetric counterpart,
grid sampling, which might lose information in the sampling process, DAGrid
aggregates all pixels, ensuring a comprehensive representation in the
transformed space. DAGrid is parallelized on modern GPUs using CUDA
programming, and backpropagation is enabled for deep neural network training.
Empirical results show DAGrid-enhanced neural networks excel
in supervised skin lesion segmentation and unsupervised cardiac image
registration. Specifically, the network incorporating DAGrid has realized a
70.8% reduction in network parameter size and a 96.8% decrease in FLOPs, while
concurrently improving the Dice score for skin lesion segmentation by 1.0%
compared to state-of-the-art transformers. Furthermore, it has achieved
improvements of 4.4% and 8.2% in the average Dice score and Dice score of the
left ventricular mass, respectively, indicating an increase in registration
accuracy for cardiac images. The source code is available at
https://github.com/tinymilky/DeDA.
|
The light output produced by light ions (Z<=4) in CsI(Tl) crystals is studied
over a wide range of detected energies (E<=300 MeV). Energy-light calibration
data sets are obtained with the 10 cm crystals in the recently upgraded
High-Resolution Array (HiRA10). We use proton recoil data from 40,48Ca + CH2 at
28 MeV/u, 56.6 MeV/u, 39 MeV/u and 139.8 MeV/u and data from a dedicated
experiment with direct low-energy beams. We also use the punch-through points
of p, d, and t particles from 40,48Ca + 58,64Ni, 112,124Sn collision reactions
at 139.8 MeV/u. Non-linearities, arising in particular from Tl doping and light
collection efficiency in the CsI crystals, are found to significantly affect
the light output and therefore the calibration of the detector response for
light charged particles, especially the hydrogen isotopes. A new empirical
parametrization of the hydrogen light output, L(E,Z=1,A), is proposed to
account for the observed effects. Results are found to be consistent for all 48
CsI(Tl) crystals in a cluster of 12 HiRA10 telescopes.
|
The number field sieve is the most efficient known algorithm for factoring
large integers that are free of small prime factors. For the polynomial
selection stage of the algorithm, Montgomery proposed a method of generating
polynomials which relies on the construction of small modular geometric
progressions. Montgomery's method is analysed in this paper and the existence
of suitable geometric progressions is considered.
|
Between matrix factorization and Random Walk with Restart (RWR), which method
works better for recommender systems? Which method handles explicit or implicit
feedback data better? Does additional information help recommendation?
Recommender systems play an important role in many e-commerce services such as
Amazon and Netflix to recommend new items to a user. Among various
recommendation strategies, collaborative filtering has shown good performance
by using rating patterns of users. Matrix factorization and random walk with
restart are the most representative collaborative filtering methods. However,
it is still unclear which method provides better recommendation performance
despite their extensive utility.
In this paper, we provide a comparative study of matrix factorization and RWR
in recommender systems. We exactly formulate each correspondence of the two
methods according to various tasks in recommendation. In particular, we devise a
new RWR method using a global bias term, which corresponds to a matrix
factorization method using biases. We describe details of the two methods in
various aspects of recommendation quality, such as how those methods handle the
cold-start problem, which typically arises in collaborative filtering. We
extensively perform experiments over real-world datasets to evaluate the
performance of each method in terms of various measures. We observe that matrix
factorization performs better with explicit feedback ratings while RWR is
better with implicit ones. We also observe that exploiting global popularities
of items is advantageous in the performance and that side information produces
positive synergy with explicit feedback but gives negative effects with
implicit one.
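As a minimal illustration of the RWR side of the comparison, the sketch below runs random walk with restart on a tiny user-item bipartite graph via power iteration. The graph, restart probability, and iteration count are illustrative choices, and no global bias term is included:

```python
import numpy as np

# Random Walk with Restart (RWR) on a toy user-item bipartite graph.
# Nodes 0-1 are users, nodes 2-4 are items; edges are past interactions.
A = np.array([[0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0]], dtype=float)
P = A / A.sum(axis=0)      # column-stochastic transition matrix
c = 0.15                   # restart probability (illustrative)

def rwr(seed, n_iter=100):
    e = np.zeros(5)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(n_iter):            # power iteration to stationarity
        r = (1 - c) * P @ r + c * e
    return r

scores = rwr(seed=0)                   # personalize for user 0
ranking = sorted(range(2, 5), key=lambda i: -scores[i])  # rank items
print(ranking)
```

The item shared with the other user ends up ranked highest for user 0, which is the collaborative signal RWR exploits.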
|
Eosinophilic Esophagitis (EoE) represents a challenging condition for medical
providers today. The cause is currently unknown, the impact on a patient's
daily life is significant, and it is increasing in prevalence. Traditional
approaches for medical image diagnosis such as standard deep learning
algorithms are limited by the relatively small amount of data and difficulty in
generalization. As a response, two methods have arisen that seem to perform
well: Diffusion and Multi-Domain methods with current research efforts favoring
diffusion methods. For the EoE dataset, we discovered that a Multi-Domain
Adversarial Network outperformed a diffusion-based method, with a FID of 42.56
compared to 50.65. Future work with diffusion methods should include a
comparison with Multi-Domain adaptation methods to ensure that the best
performance is achieved.
|
Inclusive J/$\psi$ production has been studied with the ALICE detector in
p-Pb collisions at the nucleon-nucleon center of mass energy $\sqrt{s_{\rm
NN}}$ = 5.02 TeV at the CERN LHC. The measurement is performed in the center of
mass rapidity domains $2.03<y_{\rm cms}<3.53$ and $-4.46<y_{\rm cms}<-2.96$,
down to zero transverse momentum, studying the $\mu^+\mu^-$ decay mode. In this
paper, the J/$\psi$ production cross section and the nuclear modification
factor $R_{\rm pPb}$ for the rapidities under study are presented. While at
forward rapidity, corresponding to the proton direction, a suppression of the
J/$\psi$ yield with respect to binary-scaled pp collisions is observed, in the
backward region no suppression is present. The ratio of the forward and
backward yields is also measured differentially in rapidity and transverse
momentum. Theoretical predictions based on nuclear shadowing, as well as on
models including, in addition, a contribution from partonic energy loss, are in
fair agreement with the experimental results.
|
Modeling and predicting the performance of students in collaborative learning
paradigms is an important task. Most of the research presented in the
literature regarding collaborative learning focuses on discussion forums and social
learning networks. There are only a few works that investigate how students
interact with each other in team projects and how such interactions affect
their academic performance. In order to bridge this gap, we choose a software
engineering course as the study subject. The students who participate in a
software engineering course are required to team up and complete a software
project together. In this work, we construct an interaction graph based on the
activities of students grouped in various teams. Based on this student
interaction graph, we present an extended graph transformer framework for
collaborative learning (CLGT) for evaluating and predicting the performance of
students. Moreover, the proposed CLGT contains an interpretation module that
explains the prediction results and visualizes the student interaction
patterns. The experimental results confirm that the proposed CLGT outperforms
the baseline models in terms of performing predictions based on the real-world
datasets. Moreover, the proposed CLGT differentiates the students with poor
performance in the collaborative learning paradigm and gives teachers early
warnings, so that appropriate assistance can be provided.
|
This paper introduces a new approach to quantify the impact of forward
propagated demand and weather uncertainty on power system planning and
operation models. Recent studies indicate that such sampling uncertainty,
originating from demand and weather time series inputs, should not be ignored.
However, established uncertainty quantification approaches fail in this context
due to the data and computing resources required for standard Monte Carlo
analysis with disjoint samples. The method introduced here uses an m out of n
bootstrap with shorter time series than the original, enhancing computational
efficiency and avoiding the need for any additional data. It both quantifies
output uncertainty and determines the sample length required for desired
confidence levels. Simulations and validation exercises are performed on two
capacity expansion planning models and one unit commitment and economic
dispatch model. A diagnostic for the validity of estimated uncertainty bounds
is discussed. The models, data and code are made available.
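A minimal sketch of the m-out-of-n bootstrap idea, assuming a toy "model" that just computes the mean of an hourly demand series (the paper applies it to full planning and dispatch models); the series length, m, and replicate count are illustrative:

```python
import math
import random
import statistics

# m-out-of-n bootstrap sketch. The "model output" here is just the mean
# of a synthetic hourly demand series; in the paper's setting it would
# be a planning or dispatch model's output. All sizes are illustrative.

def m_out_of_n_bootstrap(series, m, n_boot=500, seed=0):
    rng = random.Random(seed)
    n = len(series)
    stats = []
    for _ in range(n_boot):
        sample = [series[rng.randrange(n)] for _ in range(m)]  # length m <= n
        stats.append(statistics.mean(sample))
    return statistics.stdev(stats)

# synthetic year of hourly demand: smooth component plus noise
series = [100 + 10 * math.sin(t / 24) + random.Random(t).gauss(0, 5)
          for t in range(8760)]

sd_full = m_out_of_n_bootstrap(series, m=8760)   # standard bootstrap
sd_short = m_out_of_n_bootstrap(series, m=1000)  # shorter resamples
# the m-sample spread overstates full-sample uncertainty; rescaling by
# sqrt(m/n) maps it back to the n-sample scale
sd_rescaled = sd_short * (1000 / 8760) ** 0.5
print(sd_full, sd_short, sd_rescaled)
```

The shorter resamples are cheaper to push through a model, and the rescaled spread recovers an estimate of the full-sample uncertainty, which is the efficiency gain the abstract describes.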
|
The genetic algorithm (GA) is an optimization and search technique based on
the principles of genetics and natural selection. A GA allows a population
composed of many individuals to evolve under specified selection rules to a
state that maximizes the "fitness" function. In that process, crossover
operator plays an important role. To comprehend the GAs as a whole, it is
necessary to understand the role of a crossover operator. Today, there are a
number of different crossover operators that can be used in GAs. However, how
to decide what operator to use for solving a problem? A number of test
functions with various levels of difficulty has been selected as a test polygon
for determine the performance of crossover operators. In this paper, a novel
crossover operator called 'ring crossover' is proposed. In order to evaluate
the efficiency and feasibility of the proposed operator, a comparison between
the results of this study and results of different crossover operators used in
GAs is made through a number of test functions with various levels of
difficulty. Results of this study clearly show significant differences between
the proposed operator and the other crossover operators.
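One plausible implementation of the proposed ring crossover, based on the usual description of the operator (parents joined into a ring of 2n genes, cut at a random point, offspring read off in opposite directions); the exact details are an assumption, not taken verbatim from the paper:

```python
import random

# Ring crossover sketch: the two parents are joined end-to-end into a
# ring of 2n genes, the ring is cut at a random point, and the two
# offspring of length n are read off clockwise and anticlockwise from
# the cut. Details are assumed from the description above.

def ring_crossover(p1, p2, rng):
    n = len(p1)
    ring = list(p1) + list(p2)
    cut = rng.randrange(2 * n)
    child1 = [ring[(cut + i) % (2 * n)] for i in range(n)]      # clockwise
    child2 = [ring[(cut - 1 - i) % (2 * n)] for i in range(n)]  # anticlockwise
    return child1, child2

rng = random.Random(42)
c1, c2 = ring_crossover([1, 2, 3, 4], [5, 6, 7, 8], rng)
print(c1, c2)   # together the offspring cover every gene in the ring once
```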
|
In recent years there has been a push to discover the governing equations of
dynamical systems directly from measurements of the state, often motivated by
systems that are too complex to directly model. Although there has been
substantial work put into such a discovery, doing so in the case of large noise
has proved challenging. Here we develop an algorithm for Simultaneous
Identification and Denoising of a Dynamical System (SIDDS). We infer the noise
in the state measurements by requiring that the denoised data satisfies the
dynamical system with an equality constraint. This is unlike existing work
where the mismatch in the dynamical system is treated as a penalty in the
objective. We assume the dynamics is represented in a pre-defined basis and
develop a sequential quadratic programming approach to solve the SIDDS problem
featuring a direct solution of the KKT system with a specialized preconditioner. In
addition, we show how we can include sparsity promoting regularization using an
iteratively reweighted least squares approach. The resulting algorithm leads to
estimates of the dynamical system that approximately achieve the Cram\'er-Rao
lower bound and, with sparsity promotion, can correctly identify the sparsity
structure for higher levels of noise than existing techniques. Moreover,
because SIDDS decouples the data from the evolution of the dynamical system, we
show how to modify the problem to accurately identify systems from low sample
rate measurements. The inverse problem approach and solution framework used by
SIDDS has the potential to be expanded to related problems identifying
governing equations from noisy data.
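For context, the sketch below shows the standard sequentially thresholded least-squares sparse regression that SINDy-style methods build on, recovering dx/dt = -2x from noisy derivative data; it does not implement SIDDS's equality-constrained SQP formulation, denoising, or IRLS sparsity promotion:

```python
import numpy as np

# Baseline for context: sequentially thresholded least squares (the
# standard SINDy-style sparse regression). It recovers dx/dt = -2x from
# noisy derivative measurements but does NOT implement SIDDS's
# equality-constrained SQP/KKT machinery or its denoising of the states.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 200)
x = np.exp(-2 * t)                          # trajectory of dx/dt = -2x
dx = -2 * x + rng.normal(0, 0.01, t.size)   # noisy derivative data

Theta = np.column_stack([np.ones_like(x), x, x**2])  # library [1, x, x^2]
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):                         # alternate: threshold, refit
    small = np.abs(xi) < 0.2
    xi[small] = 0.0
    xi[~small] = np.linalg.lstsq(Theta[:, ~small], dx, rcond=None)[0]
print(np.round(xi, 2))                      # coefficient on x should be near -2
```

Treating the mismatch only through regression on fixed noisy data is exactly the limitation in high noise that motivates SIDDS's equality-constrained formulation.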
|
We demonstrate that the charge distributions in Hubbard-model representations
of transition metal oxide heterojunctions can be described by a Thomas-Fermi
theory in which the energy is approximated as the sum of the electrostatic
energy and the energy per site of the uniform three-dimensional Hubbard model
evaluated at the local density. When charged atomic layers in the oxides
are approximated as two-dimensional sheets with uniform charge density, the
electrostatic energy is simply evaluated. We find that this Thomas-Fermi theory
can reproduce results obtained from full Hartree-Fock theory for various
different heterostructures. We also show explicitly how Thomas-Fermi theory can
be used to estimate some key properties qualitatively.
|
In view of the relation between information and thermodynamics we investigate
how much information about an external protocol can be stored in the memory of
a stochastic measurement device given an energy budget. We consider a layered
device with a memory component storing information about the external
environment by monitoring the history of a sensory part coupled to the
environment. We derive an integral fluctuation theorem for the entropy
production and a measure of the information accumulated in the memory device.
Its most immediate consequence is that the amount of information is bounded by
the average thermodynamic entropy produced by the process. At equilibrium no
entropy is produced and therefore the memory device does not add any
information about the environment to the sensory component. Consequently, if
the system operates at equilibrium the addition of a memory component is
superfluous. Such a device can be used to model the sensing process of a cell
measuring the external concentration of a chemical compound and encoding the
measurement in the amount of phosphorylated cytoplasmic proteins.
|
This paper presents our submission to the SardiStance 2020 shared task,
describing the architecture used for Task A and Task B. While our submission
for Task A did not exceed the baseline, retraining our model using all the
training tweets showed promising results, reaching f-avg 0.601 with a
bidirectional LSTM and multilingual BERT embeddings for Task A. For our
submission for Task B, we ranked 6th (f-avg 0.709). With further investigation,
our best experimental settings increased performance from f-avg 0.573 to
f-avg 0.733 with the same architecture and parameter settings, after only
incorporating social interaction features -- highlighting the impact of social
interaction on the model's performance.
|
It is demonstrated that an infinite set of string-tree level on-shell Ward
identities, which are valid to all sigma-model loop orders, can be
systematically constructed without referring to the string field theory. As
examples, bosonic massive scattering amplitudes are calculated explicitly up to
the second massive excited states. Ward identities satisfied by these
amplitudes are derived by using zero-norm states in the spectrum. In
particular, the inter-particle Ward identity generated by the D2xD2' zero-norm
state at the second massive level is demonstrated. The four physical
propagating states of this mass level are then shown to form a large gauge
multiplet. This result justifies our previous consideration on higher
inter-spin symmetry from the generalized worldsheet sigma-model point of view.
|
Informal arguments that cryptographic protocols are secure can be made
rigorous using inductive definitions. The approach is based on ordinary
predicate calculus and copes with infinite-state systems. Proofs are generated
using Isabelle/HOL. The human effort required to analyze a protocol can be as
little as a week or two, yielding a proof script that takes a few minutes to
run.
Protocols are inductively defined as sets of traces. A trace is a list of
communication events, perhaps comprising many interleaved protocol runs.
Protocol descriptions incorporate attacks and accidental losses. The model spy
knows some private keys and can forge messages using components decrypted from
previous traffic. Three protocols are analyzed below: Otway-Rees (which uses
shared-key encryption), Needham-Schroeder (which uses public-key encryption),
and a recursive protocol by Bull and Otway (which is of variable length).
One can prove that event $ev$ always precedes event $ev'$ or that property
$P$ holds provided $X$ remains secret. Properties can be proved from the
viewpoint of the various principals: say, if $A$ receives a final message from
$B$ then the session key it conveys is good.
|
We determine the Ramsey number of a connected clique matching. That is, we
show that if $G$ is a $2$-edge-coloured complete graph on $(r^2 - r - 1)n - r +
1$ vertices, then there is a monochromatic connected subgraph containing $n$
disjoint copies of $K_r$, and that this number of vertices cannot be reduced.
|
We have revisited the electronic structure of infinite-layer RNiO$_2$ (R= La,
Nd) in light of the recent discovery of superconductivity in Sr-doped
NdNiO$_2$. From a comparison to their cuprate counterpart CaCuO$_2$, we derive
essential facts related to their electronic structures, in particular the
values for various hopping parameters and energy splittings, and the influence
of the spacer cation. From this detailed comparison, we comment on expectations
in regards to superconductivity. In particular, both materials exhibit a large
ratio of longer-range hopping to near-neighbor hopping which should be
conducive for superconductivity.
|
The (mostly) insulating behaviour of PrBa2Cu3O7-d is still unexplained and
made even more interesting by the occasional appearance of superconductivity in
this material. Since YBa2Cu3O7-d is nominally iso-structural and always
superconducting, we have measured the electron momentum density in these
materials. We find that they differ in a striking way, the wavefunction
coherence length in PrBa2Cu3O7-d being strongly suppressed. We conclude that Pr
on Ba-site substitution disorder is responsible for the metal-insulator
transition. Preliminary efforts at growth with a method to prevent disorder
yield 90K superconducting PrBa2Cu3O7-d crystallites.
|
In this paper we provide a definition of pattern of outliers in contingency
tables within a model-based framework. In particular, we make use of log-linear
models and exact goodness-of-fit tests to specify the notions of outlier and
pattern of outliers. The language and some techniques from Algebraic Statistics
are essential tools to make the definition clear and easily applicable. Some
numerical examples show how to use our definitions.
|
Recent progress in autoencoder-based sparse identification of nonlinear
dynamics (SINDy) under $\ell_1$ constraints allows joint discoveries of
governing equations and latent coordinate systems from spatio-temporal data,
including simulated video frames. However, it is challenging for $\ell_1$-based
sparse inference to perform correct identification for real data due to the
noisy measurements and often limited sample sizes. To address the data-driven
discovery of physics in the low-data and high-noise regimes, we propose
Bayesian SINDy autoencoders, which incorporate a hierarchical Bayesian
sparsifying prior: Spike-and-slab Gaussian Lasso. Bayesian SINDy autoencoder
enables the joint discovery of governing equations and coordinate systems with
a theoretically guaranteed uncertainty estimate. To address the computational
intractability of the Bayesian hierarchical setting, we adopt an adaptive
empirical Bayesian method with Stochastic gradient Langevin dynamics
(SGLD), which gives a computationally tractable way of Bayesian posterior
sampling within our framework. Bayesian SINDy autoencoder achieves better
physics discovery with lower data and fewer training epochs, along with valid
uncertainty quantification suggested by the experimental studies. The Bayesian
SINDy autoencoder can be applied to real video data, with accurate physics
discovery which correctly identifies the governing equation and provides a
close estimate for standard physics constants like gravity $g$, for example, in
videos of a pendulum.
|
Spontaneous reporting systems (SRS) have been developed to collect adverse
event records that contain personal demographics and sensitive information like
drug indications and adverse reactions. The release of SRS data may disclose
the privacy of the data provider. Unlike for other microdata, very few
anonymization methods have been proposed to protect individual privacy while
publishing SRS data. MS(k, {\theta}*)-bounding is the first privacy model for
SRS data that considers multiple individual records, multi-valued sensitive
attributes, and rare events. PPMS(k, {\theta}*)-bounding is then proposed for
solving cross-release attacks caused by the follow-up cases in the periodical
SRS releasing scenario. A recent trend of microdata anonymization combines the
traditional syntactic model and differential privacy, fusing the advantages of
both models to yield a better privacy protection method. This paper proposes
the PPMS-DP(k, {\theta}*, {\epsilon}) framework, an enhancement of PPMS(k,
{\theta}*)-bounding that embraces differential privacy to improve privacy
protection of periodically released SRS data. We propose two anonymization
algorithms conforming to the PPMS-DP(k, {\theta}*, {\epsilon}) framework,
PPMS-DPnum and PPMS-DPall. Experimental results on the FAERS datasets show that
both PPMS-DPnum and PPMS-DPall provide significantly better privacy protection
than PPMS(k, {\theta}*)-bounding without increasing data distortion or
sacrificing data utility.
|
We propose a hybrid model to fit the X-ray spectra of atoll-type X-ray
transients in the soft and hard states. This model uniquely produces luminosity
tracks that are proportional to T^4 for both the accretion disk and boundary
layer. The model also indicates low Comptonization levels for the soft state,
gaining a similarity to black holes in the relationship between Comptonization
level and the strength of integrated rms variability in the power density
spectrum. The boundary layer appears small, with a surface area that is roughly
constant across soft and hard states. This result may suggest that the NS
radius is smaller than its innermost stable circular orbit.
|
This paper illustrates how one can deduce preference from observed choices
when attention is not only limited but also random. In contrast to earlier
approaches, we introduce a Random Attention Model (RAM) where we abstain from
assuming any particular attention formation mechanism, and instead consider a large class of
nonparametric random attention rules. Our model imposes one intuitive
condition, termed Monotonic Attention, which captures the idea that each
consideration set competes for the decision-maker's attention. We then develop
revealed preference theory within RAM and obtain precise testable implications
for observable choice probabilities. Based on these theoretical findings, we
propose econometric methods for identification, estimation, and inference of
the decision maker's preferences. To illustrate the applicability of our
results and their concrete empirical content in specific settings, we also
develop revealed preference theory and accompanying econometric methods under
additional nonparametric assumptions on the consideration set for binary choice
problems. Finally, we provide general purpose software implementation of our
estimation and inference results, and showcase their performance using
simulations.
|
The relics of disrupted satellite galaxies around the Milky Way and Andromeda
have been found, but direct evidence of a satellite galaxy in the early stages
of being disrupted has remained elusive. We have discovered a dwarf satellite
galaxy in the process of being torn apart by gravitational tidal forces as it
merges with a larger galaxy's dark matter halo. Our results illustrate the
morphological transformation of dwarf galaxies by tidal interaction and the
continued build-up of galaxy halos.
|
The splitting number of a link is the minimal number of crossing changes
between different components required to convert it into a split link. We
obtain a lower bound on the splitting number in terms of the (multivariable)
signature and nullity. Although very elementary and easy to compute, this bound
turns out to be surprisingly efficient. In particular, it makes it a routine
check to recover the splitting number of 129 out of the 130 prime links with at
most 9 crossings. Also, we easily determine 16 of the 17 splitting numbers that
were studied by Batson and Seed using Khovanov homology, and later computed by
Cha, Friedl and Powell using a variety of techniques. Finally, we determine the
splitting number of a large class of 2-bridge links which includes examples
recently computed by Borodzik and Gorsky using a Heegaard Floer theoretical
criterion.
|
The path-integral formulation of quantum cosmology with a massless scalar
field as a sum-over-histories of volume transitions is discussed, with
particular but non-exclusive reference to loop quantum cosmology. Exploiting
the analogy with the relativistic particle, we give a complete overview of the
possible two-point functions, pointing out the choices involved in their
definitions, deriving their vertex expansions and the composition laws they
satisfy. We clarify the origin and relations of different quantities previously
defined in the literature, in particular the tie between definitions using a
group averaging procedure and those in a deparametrized framework. Finally, we
draw some conclusions about the physics of a single quantum universe (where
there exist superselection rules on positive- and negative-frequency sectors
and different choices of inner product are physically equivalent) and
multiverse field theories where the role of these sectors and the inner product
are reinterpreted.
|
Given an integer $m\geq2$, the Hardy--Littlewood inequality (for real
scalars) says that for all $2m\leq p\leq\infty$, there exists a constant
$C_{m,p}^{\mathbb{R}}\geq1$ such that, for all continuous $m$--linear forms
$A:\ell_{p}^{N}\times\cdots\times\ell_{p}^{N}\rightarrow\mathbb{R}$ and all
positive integers $N$, \[ \left( \sum_{j_{1},...,j_{m}=1}^{N}\left\vert
A(e_{j_{1}},...,e_{j_{m}})\right\vert ^{\frac{2mp}{mp+p-2m}}\right)
^{\frac{mp+p-2m}{2mp}}\leq C_{m,p}^{\mathbb{R}}\left\Vert A\right\Vert . \] The
limiting case $p=\infty$ is the well-known Bohnenblust--Hille inequality; the
behavior of the constants $C_{m,p}^{\mathbb{R}}$ is an open problem. In this
note we provide nontrivial lower bounds for these constants.
|
It is one of the most challenging problems in applied mathematics to
approximatively solve high-dimensional partial differential equations (PDEs).
Recently, several deep learning-based approximation algorithms for attacking
this problem have been proposed and tested numerically on a number of examples
of high-dimensional PDEs. This has given rise to a lively field of research in
which deep learning-based methods and related Monte Carlo methods are applied
to the approximation of high-dimensional PDEs. In this article we offer an
introduction to this field of research by revisiting selected mathematical
results related to deep learning approximation methods for PDEs and reviewing
the main ideas of their proofs. We also provide a short overview of the recent
literature in this area of research.
|
Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions. They are
used to identify the co-expressed genes in specific cells or tissues that are
actively used to make proteins, and to analyze gene expression, an important
task in bioinformatics research. Cluster analysis of gene expression data has
proved to be a useful tool for identifying co-expressed genes and biologically
relevant groupings of genes and samples. In this paper we apply K-Means with
Automatic Generation of Merge Factor for ISODATA (AGMFI). Though AGMFI has been
applied to the clustering of gene expression data, the proposed Enhanced
Automatic Generation of Merge Factor for ISODATA (EAGMFI) algorithm overcomes
the drawbacks of AGMFI in specifying the optimal number of clusters and
initializing good cluster centroids. Experimental results on gene expression
data show that the proposed EAGMFI algorithm identifies compact clusters and
performs well in terms of the Silhouette Coefficient cluster measure.
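The Silhouette Coefficient used above to score clusters can be computed with a short routine. This is a generic sketch of the standard measure on hypothetical toy data, not the paper's evaluation code:

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette coefficient: s(i) = (b_i - a_i) / max(a_i, b_i),
    where a_i is the mean distance of point i to its own cluster and
    b_i the mean distance to the nearest other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # pairwise Euclidean distances
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False  # exclude the point itself from its own cluster mean
        a = d[i, same].mean() if same.any() else 0.0
        b = min(d[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated toy clusters give a score close to 1.
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette_score(X, labels), 2))
```

Compact, well-separated clusterings push the mean score toward 1, which is what the EAGMFI evaluation measures.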
|
Knowing which operators to apply and in which order, as well as assigning
good values to their parameters, is a challenge for users of computer vision.
This paper proposes a solution to this problem as a multi-agent system modeled
according to the Vowel approach and using the Q-learning algorithm to optimize
its choices. An implementation is given to test and validate this method.
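The Q-learning component can be illustrated with a minimal tabular sketch. The states, operator names, and reward values below are hypothetical stand-ins for the operator-selection problem, not the paper's actual implementation:

```python
import random

def q_learning(states, actions, reward, transition, episodes=2000,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Minimal tabular Q-learning: learn which action (operator) to apply per state."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        while s != states[-1]:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = transition(s, a)
            # standard Q-learning update toward the bootstrapped target
            target = reward(s, a) + gamma * max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

# Toy chain: applying the "right" operator in each state advances the pipeline.
states = [0, 1, 2, 3]
actions = ["blur", "edge"]
good = {0: "blur", 1: "edge", 2: "blur"}  # hypothetical best operator per state

def transition(s, a):
    return s + 1 if good.get(s) == a else s

def reward(s, a):
    return 1.0 if good.get(s) == a else -0.1

q = q_learning(states, actions, reward, transition)
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in states[:-1]}
print(policy)
```

After training, the greedy policy recovers the correct operator for each state, which is the behavior the multi-agent system relies on.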
|
Consider a communication network with a source, a relay and a destination.
Each time interval, the source may dynamically choose between a few possible
coding schemes, based on the channel state, traffic pattern and its own queue
status. For example, the source may choose between a direct route to the
destination and a relay-assisted scheme. Clearly, due to the difference in the
performance achieved, as well as the resources each scheme uses, a sender might
wish to choose the most appropriate one based on its status.
In this work, we formulate the problem as a Semi-Markov Decision Process.
This formulation allows us to find an optimal policy, expressed as a function
of the number of packets in the source queue and other parameters. In
particular, we show a general solution which covers various configurations,
including different packet size distributions and varying channels.
Furthermore, for the case of exponential transmission times, we analytically
prove the optimal policy has a threshold structure, that is, there is a unique
value of a single parameter which determines which scheme (or route) is
optimal. Results are also validated with simulations for several interesting
models.
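The threshold structure of the optimal policy can be illustrated with a small discounted toy model solved by value iteration. The costs, arrival probability, and service rates below are hypothetical; the paper's actual SMDP formulation is more general:

```python
# A source with queue length n chooses each slot between a cheap "direct"
# scheme serving one packet and a costlier "relay" scheme serving two.
N = 20          # queue capacity
p = 0.6         # arrival probability per slot
h = 1.0         # holding cost per queued packet per slot
cost = {"direct": 0.5, "relay": 3.0}
serve = {"direct": 1, "relay": 2}
gamma = 0.95    # discount factor

def q_values(V, n):
    """Expected discounted cost of each scheme in state n."""
    q = {}
    for a in ("direct", "relay"):
        m = max(n - serve[a], 0)
        # one packet arrives with probability p (queue is capped at N)
        nxt = p * V[min(m + 1, N)] + (1 - p) * V[m]
        q[a] = cost[a] + h * n + gamma * nxt
    return q

V = [0.0] * (N + 1)
for _ in range(2000):  # value iteration to (near) convergence
    V = [min(q_values(V, n).values()) for n in range(N + 1)]

policy = [min(q_values(V, n), key=q_values(V, n).get) for n in range(N + 1)]
print(policy)
```

The resulting policy uses the direct route for short queues and switches to the relay once the queue exceeds a single threshold, mirroring the threshold structure proved analytically for exponential transmission times.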
|
The outflowing molecular gas in the circumnuclear disk (CND) of the nearby
(D=14 Mpc) AGN-starburst composite galaxy NGC 1068 is considered as a
manifestation of ongoing AGN feedback. The large spread of velocities from the
outflowing gas is likely driving various kinds of shock chemistry across the
CND. We performed a multiline molecular study using CH3OH with the aim of
characterizing the gas properties probed by CH3OH in the CND of NGC 1068, and
investigating its potential association with molecular shocks. Multiple CH3OH
transitions were imaged at a resolution of 0.''5-0.''8 with the Atacama Large
Millimeter/submillimeter Array (ALMA). We performed non-LTE radiative transfer
analysis coupled with a Bayesian inference process in order to determine the
gas properties such as the gas volume density and the gas kinetic temperature.
The gas densities traced by CH3OH point to $\sim 10^{6}$ cm\textsuperscript{-3}
across all the CND regions. The gas kinetic temperature cannot be well
constrained in any of the CND regions though the inferred temperature is likely
low ($\lesssim 100$ K). The low gas temperature traced by CH3OH suggests shocks
and subsequent fast cooling as the origin of the observed gas-phase CH3OH
abundance. We also note that the E-/A- isomer column density ratio inferred is
fairly close to unity, which is interestingly different from the Galactic
measurements in the literature. It remains inconclusive whether CH3OH
exclusively traces slow and non-dissociative shocks, or whether the CH3OH
abundance can actually be boosted in both fast and slow shocks.
|
We study the time lags between the continuum emission of quasars at different
wavelengths, based on more than four years of multi-band ($g$, $r$, $i$, $z$)
light-curves in the Pan-STARRS Medium Deep Fields. As photons from different
bands emerge from different radial ranges in the accretion disk, the lags
constrain the sizes of the accretion disks. We select 240 quasars with
redshifts $z \approx 1$ or $z \approx 0.3$ that are relatively emission line
free. The light curves are sampled from day to month timescales, which makes it
possible to detect lags on the scale of the light crossing time of the
accretion disks. With the code JAVELIN, we detect typical lags of several days
in the rest frame between the $g$ band and the $riz$ bands. The detected lags
are $\sim 2-3$ times larger than the light crossing time estimated from the
standard thin disk model, consistent with the recently measured lag in NGC5548
and micro-lensing measurements of quasars. The lags in our sample are found to
increase with increasing luminosity. Furthermore, the increase in lags going
from $g-r$ to $g-i$ and then to $g-z$ is slower than predicted in the thin disk
model, particularly for high luminosity quasars. The radial temperature profile
in the disk must be different from what is assumed. We also find evidence that
the lags decrease with increasing line ratios between ultraviolet FeII lines
and MgII, which may point to changes in the accretion disk structure at higher
metallicity.
|
We present a constructive procedure to obtain the critical behavior of
Painlevé VI transcendents and solve the connection problem. This procedure
yields two and one parameter families of solutions, including trigonometric and
logarithmic behaviors, and three classes of solutions with Taylor expansion at
a critical point.
|
Impact crater cataloging is an important tool in the study of the geological
history of planetary bodies in the Solar System, including dating of surface
features and geologic mapping of surface processes. Catalogs of impact craters
have been created by a diverse set of methods over many decades, including
using visible or near infra-red imagery and digital terrain models.
I present an automated system for crater detection and cataloging using a
digital terrain model (DTM) of Mars. In the algorithm, craters are first
identified as rings or disks on samples of the DTM image using a convolutional
neural network with a UNET architecture, and the location and size of the
features are determined using a circle matching algorithm. I describe the
crater detection algorithm (CDA) and compare its performance relative to an
existing crater dataset. I further examine craters missed by the CDA as well as
potential new craters found by the algorithm. I show that the CDA can find
three-quarters of the resolvable craters in the Mars DTMs, with a median
difference of 5-10% in crater diameter compared to an existing database.
A version of this CDA has been used to process DTM data from the Moon and
Mercury (Silburt et al. 2019). The source code for the complete CDA is
available at https://github.com/silburt/DeepMoon, and Martian crater datasets
generated using this CDA are available at https://doi.org/10.5683/SP2/MDKPC8.
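The circle-matching step, which turns pixels identified as crater rim into a center and radius, can be sketched with an algebraic least-squares circle fit (the Kåsa method). This is an illustrative stand-in; the CDA's actual matching procedure may differ:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit.

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then recovers the
    center (cx, cy) = (-D/2, -E/2) and radius r = sqrt(cx^2 + cy^2 - F).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Hypothetical rim pixels: noisy samples of a crater of radius 12 px
# centered at (40, 55) in DTM image coordinates.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
x = 40 + 12 * np.cos(t) + rng.normal(0, 0.3, t.size)
y = 55 + 12 * np.sin(t) + rng.normal(0, 0.3, t.size)
cx, cy, r = fit_circle(x, y)
print(round(cx, 1), round(cy, 1), round(r, 1))
```

The recovered diameter is then compared against catalog entries, which is how the 5-10% median diameter difference quoted above would be measured.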
|
We classify integer abc-equations c = a + b (to be defined), according to
their radical R(abc) and prove that the resulting equivalence classes contain
only a finite number of such equations. The proof depends on a 1933 theorem of
Kurt Mahler.
|
We introduce the class of nonpositively curved 2-complexes with the Isolated
Flats Property. These 2-complexes are, in a sense, hyperbolic relative to their
flats. More precisely, we show that several important properties of
Gromov-hyperbolic spaces hold `relative to flats' in nonpositively curved
2-complexes with the Isolated Flats Property.
We introduce the Relatively Thin Triangle Property, which states roughly that
the fat part of a geodesic triangle lies near a single flat. We also introduce
the Relative Fellow Traveller Property, which states that pairs of
quasigeodesics with common endpoints fellow travel relative to flats, in a
suitable sense. The main result of this paper states that in the setting of
CAT(0) 2-complexes, the Isolated Flats Property is equivalent to the Relatively
Thin Triangle Property and is also equivalent to the Relative Fellow Traveller
Property.
|
We report disk-shaped silicon optomechanical resonators with frequency up to
1.75 GHz in the ultrahigh frequency band. Optical transduction of the thermal
motion of the disks' in-plane vibrational modes yields a displacement
sensitivity of 4.1 \times 10^(-17) m/Hz^(1/2). Due to the reduced clamping
loss, these disk resonators possess high mechanical quality factors (Q), with
the highest value of 4370 for the 1.47 GHz mode measured in ambient air.
Numerical simulation on the modal frequency and mechanical Q for disks of
varying undercut shows modal coupling and suggests a realistic pedestal size to
achieve the highest possible Q.
|
The current generation of streaming media players often allow users to speak
commands (e.g., users can change the TV channel by pressing a button and saying
"ESPN"). However, these devices typically support a narrow range of control-
and search-oriented commands, and do not support deeper recommendation or
exploration queries. To study spoken natural language interactions with
recommenders, we have built MovieLens TV, a movie recommender system with no
input modalities except voice. In this poster, we describe MovieLens TV, with a
focus on lessons learned building a prototype system around an off-the-shelf
Amazon Echo.
|
The discipline of game theory was introduced in the context of economics, and
has been applied to study cyber attacker and defender behaviors. While
adaptations have been made to accommodate features in the cyber domain, these
studies are inherently limited by the root of game theory in economic systems
where players (i.e., agents) may be selfish but not malicious. In this SoK, we
systematize the major cybersecurity problems that have been studied with the
game-theoretic approach, the assumptions that have been made, and the models
and solution concepts that have been proposed. The systematization leads to a
characterization of the technical gaps that must be addressed in order to make
game-theoretic cybersecurity models truly useful. We explore bridges to address
them.
|
Chas and Sullivan proved the existence of a Batalin-Vilkovisky algebra
structure in the homology of free loop spaces on closed finite dimensional
smooth manifolds using chains and chain homotopies. This algebraic structure
involves an associative product called the loop product, a Lie bracket called
the loop bracket, and a square-zero operator called the BV operator. Cohen and
Jones gave a homotopy theoretic description of the loop product in terms of
spectra. In this paper, we give an explicit homotopy theoretic description of
the loop bracket and, using this description, we give a homological proof of
the BV identity connecting the loop product, the loop bracket, and the BV
operator. The proof is based on an observation that the loop bracket and the BV
derivation are given by the same cycle in the free loop space, except that they
differ by parametrization of loops.
|
Time dependent quantum systems have become indispensable in science and its
applications, particularly at the atomic and molecular levels. Here, we discuss
the approximation of closed time dependent quantum systems on bounded domains,
via iterative methods in Sobolev space based upon evolution operators.
Recently, existence and uniqueness of weak solutions were demonstrated by a
contractive fixed point mapping defined by the evolution operators. Convergent
successive approximation is then guaranteed. This article uses the same mapping
to define quadratically convergent Newton and approximate Newton methods.
Estimates for the constants used in the convergence estimates are provided. The
evolution operators are ideally suited to serve as the framework for this
operator approximation theory, since the Hamiltonian is time dependent. In
addition, the hypotheses required to guarantee quadratic convergence of the
Newton iteration build naturally upon the hypotheses used for the
existence/uniqueness theory.
|
We present HST/WFPC2 images in Halpha+[NII]6548,6583 lines and continuum
radiation and a VLA map at 8 GHz of the H2O gigamaser galaxy TXS2226-184. This
galaxy has the most luminous H2O maser emission known to date. Our red
continuum images reveal a highly elongated galaxy with a dust lane crossing the
nucleus. The surface brightness profile is best fitted by a bulge plus
exponential disk model, favoring classification as a highly inclined spiral
galaxy (i=70 degree). The color map confirms the dust lane aligned with the
galaxy major axis and crossing the putative nucleus. The Halpha+[NII] map
exhibits a gaseous, jet-like structure perpendicular to the nuclear dust lane
and the galaxy major axis. The radio map shows compact, steep spectrum emission
which is elongated in the same direction as the Halpha+[NII] emission. By
analogy with Seyfert galaxies, we therefore suspect this alignment reflects an
interaction between the radio jet and the ISM. The axes of the nuclear dust
disk, the radio emission, and the optical line emission apparently define the
axis of the AGN. The observations suggest that in this galaxy the nuclear
accretion disk, obscuring torus, and large scale molecular gas layer are
roughly coplanar. Our classification of the host galaxy strengthens the trend
for megamasers to be found preferentially in highly inclined spiral galaxies.
|
This paper studies composite quantum systems, like atom-cavity systems and
coupled optical resonators, in the absence of external driving by resorting to
methods from quantum field theory. Going beyond the rotating wave
approximation, it is shown that the usually neglected counter-rotating part of
the Hamiltonian relates to the entropy operator and generates an irreversible
time evolution. The vacuum state of the system is shown to evolve into a
generalized coherent state exhibiting entanglement of the modes in which the
counter-rotating terms are expressed. Possible consequences at observational
level in quantum optics experiments are currently under study.
|
We study the Dirichlet problem in a domain with a small hole close to the
boundary. To do so, for each pair $\boldsymbol\varepsilon = (\varepsilon_1,
\varepsilon_2 )$ of positive parameters, we consider a perforated domain
$\Omega_{\boldsymbol\varepsilon}$ obtained by making a small hole of size
$\varepsilon_1 \varepsilon_2 $ in an open regular subset $\Omega$ of
$\mathbb{R}^n$ at distance $\varepsilon_1$ from the boundary $\partial\Omega$.
As $\varepsilon_1 \to 0$, the perforation shrinks to a point and, at the same
time, approaches the boundary. When $\boldsymbol\varepsilon \to (0,0)$, the
size of the hole shrinks at a faster rate than its approach to the boundary. We
denote by $u_{\boldsymbol\varepsilon}$ the solution of a Dirichlet problem for
the Laplace equation in $\Omega_{\boldsymbol\varepsilon}$. For a space
dimension $n\geq 3$, we show that the function mapping $\boldsymbol\varepsilon$
to $u_{\boldsymbol\varepsilon}$ has a real analytic continuation in a
neighborhood of $(0,0)$. By contrast, for $n=2$ we consider two different
regimes: $\boldsymbol\varepsilon$ tends to $(0,0)$, and $\varepsilon_1$ tends
to $0$ with $\varepsilon_2$ fixed. When $\boldsymbol\varepsilon\to(0,0)$, the
solution $u_{\boldsymbol\varepsilon}$ has a logarithmic behavior; when only
$\varepsilon_1\to0$ and $\varepsilon_2$ is fixed, the asymptotic behavior of
the solution can be described in terms of real analytic functions of
$\varepsilon_1$. We also show that for $n=2$, the energy integral and the total
flux on the exterior boundary have different limiting values in the two
regimes. We prove these results by using functional analysis methods in
conjunction with certain special layer potentials.
|
In this manuscript we introduce cubic bubbles that are pinned to 3D printed
millimetric frames immersed in water. Cubic bubbles are more stable over time
and space than standard spherical bubbles, while still allowing large
oscillations of their faces. We found that each face can be described as a
harmonic oscillator coupled to the other ones. These resonators are coupled by
the gas inside the cube but also by acoustic interactions in the liquid. We
provide an analytical model and 3D numerical simulations predicting the
resonance with a very good agreement. Acoustically, cubic bubbles prove to be
good monopole sub-wavelength emitters, with non-emissive secondary surfaces
modes.
|
In this paper, we investigate the nonrelativistic limit of normalized
solutions to a nonlinear Dirac equation as given below: \begin{equation*}
\begin{cases} &-i c\sum\limits_{k=1}^3\alpha_k\partial_k u +mc^2 \beta {u}-
\Gamma * (K |{u}|^\kappa) K|{u}|^{\kappa-2}{u}- P |{u}|^{s-2}{u}=\omega {u}, \\
&\displaystyle\int_{\mathbb{R}^3}\vert u \vert^2 dx =1.
\end{cases} \end{equation*} Here, $c>0$ represents the speed of light, $m >
0$ is the mass of the Dirac particle, $\omega\in\mathbb{R}$ emerges as an
indeterminate Lagrange multiplier, and $\Gamma$, $K$, $P$ are real-valued
functions defined on $\mathbb{R}^3$, also known as potential functions. We
first confirm the existence of normalized solutions to the Dirac equation for
large speed of light $c$. We then show that these solutions converge to the
ground states of a system of nonlinear Schr\"odinger equations with a
normalized constraint, exhibiting uniform boundedness and exponential decay
irrespective of the speed of light. Our results constitute the first
discussion of the nonrelativistic limit of normalized solutions to nonlinear
Dirac equations.
This not only aids in the study of normalized solutions of the nonlinear
Schr\"odinger equations, but also physically explains that the normalized
ground states of high-speed particles and low-speed motion particles are
consistent.
|
The thickness dependences of the photocurrent quantum yield and photoenergy
parameters of silicon backside contact solar cells (BC SC) are investigated
theoretically and experimentally. The surface recombination rate on the
irradiated surface was minimized by means of creating the layers of microporous
silicon. A method of finding the surface recombination rate and the diffusion
length of minority carriers from the thickness dependences of the photocurrent
quantum yield under conditions of the strong absorption is proposed. The
performed studies establish that thinning the BC SC samples, provided the
surface recombination rate is minimized, makes it possible to achieve rather
high photoconversion efficiencies. It is also shown that agreement between the
experimental and theoretical spectral dependences of the photocurrent quantum
yield can be reached only by taking into account the coefficient of light
reflection from the backside surface.
|
This paper presents updated measurements of the lifetimes of the B^0_s meson
and the \Lambda_b baryon using 4.4 million hadronic Z^0 decays recorded by the
OPAL detector at LEP from 1990 to 1995. A sample of B^0_s decays is obtained
using D_s^- \ell^+ combinations, where the D_s^- is fully reconstructed in the
\phi \pi^-, K^*0 K^- and K^- K^0_S decay channels and partially reconstructed
in the \phi \ell^- \nu(bar) X decay mode. A sample of \Lambda_b decays is
obtained using \Lambda_c^+ \ell^- combinations, where the \Lambda_c^+ is fully
reconstructed in its decay to a p K^- \pi^+ final state and partially
reconstructed in the \Lambda \ell^+ \nu X decay channel. From 172 +/- 28 D_s^-
\ell^+ combinations attributed to B^0_s decays, the measured lifetime is
\tau(B^0_s) = 1.50 +0.16 -0.15 +/- 0.04 ps,
where the errors are statistical and systematic, respectively. From the 129
+/- 25 \Lambda_c^+ \ell^- combinations attributed to \Lambda_b decays, the
measured lifetime is
\tau(\Lambda_b) = 1.29 +0.24 -0.22 +/- 0.06 ps,
where the errors are statistical and systematic, respectively.
|
Electrohydraulic servosystems are widely employed in industrial applications
such as robotic manipulators, active suspensions, precision machine tools and
aerospace systems. They provide many advantages over electric motors, including
high force to weight ratio, fast response time and compact size. However,
precise control of electrohydraulic actuated systems, due to their inherent
nonlinear characteristics, cannot be easily obtained with conventional linear
controllers. Most flow control valves can also exhibit some hard nonlinearities
such as dead-zone due to valve spool overlap. This work describes the
development of an adaptive fuzzy controller for electrohydraulic actuated
systems with unknown dead-zone. The stability properties of the closed-loop
system were proven using Lyapunov stability theory and Barbalat's lemma.
Numerical results are presented in order to demonstrate the control system
performance.
|
The automated segmentation of buildings in remote sensing imagery is a
challenging task that requires the accurate delineation of multiple building
instances over typically large image areas. Manual methods are often laborious
and current deep-learning-based approaches fail to delineate all building
instances and do so with adequate accuracy. As a solution, we present Trainable
Deep Active Contours (TDACs), an automatic image segmentation framework that
intimately unites Convolutional Neural Networks (CNNs) and Active Contour
Models (ACMs). The Eulerian energy functional of the ACM component includes
per-pixel parameter maps that are predicted by the backbone CNN, which also
initializes the ACM. Importantly, both the ACM and CNN components are fully
implemented in TensorFlow and the entire TDAC architecture is end-to-end
automatically differentiable and backpropagation trainable without user
intervention. TDAC yields fast, accurate, and fully automatic simultaneous
delineation of arbitrarily many buildings in the image. We validate the model
on two publicly available aerial image datasets for building segmentation, and
our results demonstrate that TDAC establishes a new state-of-the-art
performance.
|
For two dimensional Schroedinger Hamiltonians we formulate boundary
conditions that split the Hilbert space according to the chirality of the
eigenstates on the boundary. With magnetic fields, and in particular, for
Quantum Hall Systems, this splitting corresponds to edge and bulk states.
Applications to the integer and fractional Hall effect and some open problems
are described.
|
We investigate propagators in Lorentz (or Landau) gauge by Monte Carlo
simulations. In order to be able to compare with perturbative calculations we
use large $\beta$ values. There the breaking of the Z(2) symmetry turns out to
be important for all of the four lattice directions. Therefore we make sure
that the analysis is performed in the correct state. We discuss implications of
the gauge fixing mechanism and point out the form of the weak-coupling behavior
to be expected in the presence of zero-momentum modes. Our numerical result is
that the gluon propagator in the weak-coupling limit is strongly affected by
zero-momentum modes. This is corroborated in detail by our quantitative
comparison with analytical calculations.
|
The so-called method of phase synchronization has been advocated in a number
of papers as a way of decoupling a system of linear second-order differential
equations by a linear transformation of coordinates and velocities. This is a
rather unusual approach because velocity-dependent transformations in general
do not preserve the second-order character of differential equations. Moreover,
at least in the case of linear transformations, such a velocity-dependent one
defines by itself a second-order system, which need not have anything to do, in
principle, with the given system or its reformulation. This aspect, and the
related questions of compatibility it raises, seem to have been overlooked in
the existing literature. The purpose of this paper is to clarify this issue and
to suggest topics for further research in conjunction with the general theory
of decoupling in a differential geometric context.
|
This paper elaborates on Conditional Handover (CHO) modelling, aimed at
maximizing the use of contention free random access (CFRA) during mobility.
This is a desirable behavior as CFRA increases the chance of fast and
successful handover. In CHO this may be especially challenging as the time
between the preparation and the actual cell change can be significantly longer
in comparison to non-conditional handover. Thus, new means to mitigate this
issue need to be defined. We present the scheme where beam-specific measurement
reporting can lead to CFRA resource updating prior to CHO execution. We have
run system level simulations to confirm that the proposed solution increases
the ratio of CFRA attempts during CHO. In the best-case scenario, we observe a
gain exceeding 13%. We also show how the average delay of completing the
handover is reduced. To provide the entire perspective, we present at what
expense these gains can be achieved by analyzing the increased signaling
overhead for updating the random access resources. The study has been conducted
for various network settings and considering higher frequency ranges at which
the user communicates with the network. Finally, we provide an outlook on
future extensions of the investigated solution.
|
We introduce the one-dimensional quasireciprocal lattices where the forward
hopping amplitudes between nearest neighboring sites $\{ t+t_{jR} \}$ are
chosen to be a random permutation of the backward hopping $\{ t+t_{jL} \}$ or
vice versa. The values of $\{ t_{jL} \}$ (or $\{t_{jR} \}$) can be periodic,
quasiperiodic, or randomly distributed. We show that the Hamiltonian matrices
are pseudo-Hermitian and the energy spectra are real as long as $\{ t_{jL} \}$
(or $\{t_{jR} \}$) are smaller than the threshold value. While the
non-Hermitian skin effect is always absent in the eigenstates due to the global
cancellation of local nonreciprocity, the competition between the
nonreciprocity and the accompanying disorders in hopping amplitudes gives rise
to energy-dependent localization transitions. Moreover, in the quasireciprocal
Su-Schrieffer-Heeger models with staggered hopping $t_{jL}$ (or $t_{jR}$),
topologically nontrivial phases are found in the real-spectra regimes
characterized by nonzero winding numbers. Finally, we propose an experimental
scheme to realize the quasireciprocal models in electrical circuits. Our
findings shed new light on the subtle interplay among nonreciprocity, disorder,
and topology.
|