We study the non-linear background field redefinitions arising at the quantum
level in a spontaneously broken effective gauge field theory. The non-linear
field redefinitions are crucial for the symmetric (i.e. fulfilling all the
relevant functional identities of the theory) renormalization of
gauge-invariant operators. In a general $R_\xi$-gauge the classical
background-quantum splitting is also non-linearly deformed by radiative
corrections. In the Landau gauge these deformations vanish to all orders in the
loop expansion.
|
We model a system of n asymmetric firms selling a homogeneous good in a
common market through a pay-as-bid auction. Every producer chooses as its
strategy a supply function returning the quantity S(p) that it is willing to
sell at a minimum unit price p. The market clears at the price at which the
aggregate demand intersects the total supply and firms are paid the bid prices.
We study a game theoretic model of competition among such firms and focus on
its equilibria (Supply function equilibrium). The game we consider is a
generalization of both models where firms can either set a fixed quantity
(Cournot model) or set a fixed price (Bertrand model). Our main result is to
prove existence and provide a characterization of (pure strategy) Nash
equilibria in the space of K-Lipschitz supply functions.
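To make the market mechanism concrete, here is a minimal numerical sketch of pay-as-bid clearing with affine (hence K-Lipschitz) supply bids; the parameters and demand curve are hypothetical, chosen only for illustration:

```python
import numpy as np

# Each firm i bids S_i(p) = max(0, a_i * (p - c_i)); demand is D(p) = d0 - d1 * p.
a = np.array([2.0, 1.5, 1.0])   # supply slopes (Lipschitz constants)
c = np.array([1.0, 1.2, 1.5])   # minimum prices at which firms start selling
d0, d1 = 10.0, 1.0              # demand intercept and slope (illustrative)

def aggregate_supply(p):
    return np.sum(np.maximum(0.0, a * (p - c)))

# Bisection for the clearing price where aggregate supply meets demand.
lo, hi = 0.0, d0 / d1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if aggregate_supply(mid) < max(0.0, d0 - d1 * mid):
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)
q = np.maximum(0.0, a * (p_star - c))     # quantity sold by each firm
# Pay-as-bid: each unit is paid its bid price S_i^{-1}(x) = c_i + x / a_i,
# so firm i's revenue is the integral of its inverse supply up to q_i.
revenue = c * q + q**2 / (2 * a)
print(p_star, q, revenue)
```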
|
The ability to localize and manipulate individual quasiparticles in
mesoscopic structures is critical in experimental studies of quantum mechanics
and thermodynamics, and in potential quantum information devices, e.g., for
topological schemes of quantum computation. In strong magnetic field, the
quantum Hall edge modes can be confined around the circumference of a small
antidot, forming discrete energy levels that have a unique ability to localize
fractionally charged quasiparticles. Here, we demonstrate a Dirac fermion
quantum Hall antidot in graphene in the integer quantum Hall regime, where
charge transport characteristics can be adjusted through the coupling strength
between the contacts and the antidot, from Coulomb blockade dominated tunneling
under weak coupling to the effectively non-interacting resonant tunneling under
strong coupling. Both regimes are characterized by single-flux and single-charge
oscillations in conductance persisting up to temperatures over 2 orders of
magnitude higher than previous reports in other material systems. Such graphene
quantum Hall antidots may serve as a promising platform for building and
studying novel quantum circuits for quantum simulation and computation.
|
We describe a competitive equilibrium in a railway cargo transportation
model. We reduce the problem of finding this equilibrium to the solution of two
mutually dual convex optimization problems. Following L.V. Kantorovich, we
interpret an optimal traffic policy for the model in terms of Lagrange
multipliers.
|
We report two dimensional Dirac fermions and quantum magnetoresistance in
single crystals of CaMnBi$_2$. The non-zero Berry's phase, small cyclotron
resonant mass and first-principle band structure suggest the existence of the
Dirac fermions in the Bi square nets. The in-plane transverse magnetoresistance
exhibits a crossover at a critical field $B^*$ from semiclassical weak-field
$B^2$ dependence to the high-field unsaturated linear magnetoresistance ($\sim
120\%$ in 9 T at 2 K) due to the quantum limit of the Dirac fermions. The
temperature dependence of $B^*$ satisfies quadratic behavior, which is
attributed to the splitting of linear energy dispersion in high field. Our
results demonstrate the existence of two dimensional Dirac fermions in
CaMnBi$_2$ with Bi square nets.
|
Over the last decade, HST imaging studies have revealed that the centers of
most galaxies are occupied by compact, barely resolved sources. Based on their
structural properties, position in the fundamental plane, and spectra, these
sources clearly have a stellar origin. They are therefore called ``nuclear star
clusters'' (NCs) or ``stellar nuclei''. NCs are found in galaxies of all Hubble
types, suggesting that their formation is intricately linked to galaxy
evolution. In this contribution, I briefly review the results from recent
studies of NCs, touch on some ideas for their formation, and mention some open
issues related to the possible connection between NCs and supermassive black
holes.
|
In this paper, we consider the lengths of cycles that can be embedded on the
edges of the generalized pancake graphs which are the Cayley graph of the
generalized symmetric group $S(m,n)$, generated by prefix reversals. The
generalized symmetric group $S(m,n)$ is the wreath product of the cyclic group
of order $m$ and the symmetric group of order $n!$. Our main focus is the
underlying \emph{undirected} graphs, denoted by $\mathbb{P}_m(n)$. In the cases
when the cyclic group has one or two elements, these graphs are isomorphic to
the pancake graphs and burnt pancake graphs, respectively. We prove that when
the cyclic group has three elements, $\mathbb{P}_3(n)$ has cycles of all
possible lengths, thus resembling a similar property of pancake graphs and
burnt pancake graphs. Moreover, $\mathbb{P}_4(n)$ has all the even-length
cycles. We utilize these results as base cases and show that if $m>2$ is even,
$\mathbb{P}_m(n)$ has all cycles of even length starting from its girth to a
Hamiltonian cycle. Moreover, when $m>2$ is odd, $\mathbb{P}_m(n)$ has cycles of
all lengths starting from its girth to a Hamiltonian cycle. We furthermore show
that the girth of $\mathbb{P}_m(n)$ is $\min\{m,6\}$ if $m\geq3$, thus
complementing the known results for $m=1,2.$
|
The Bell-Clauser-Horne-Shimony-Holt inequality can be used to show that no
local hidden-variable theory can reproduce the correlations predicted by
quantum mechanics (QM). It can be proved that certain QM correlations lead to a
violation of the classical bound established by the inequality, while all
correlations, QM and classical, respect a QM bound (the Tsirelson bound). Here,
we show that these well-known results depend crucially on the assumption that
the values of physical magnitudes are scalars. The result implies, first, that
the origin of the Tsirelson bound is geometrical, not physical; and, second,
that a local hidden-variable theory does not contradict QM if the values of
physical magnitudes are vectors.
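For reference, the two bounds contrasted above can be stated compactly for the standard CHSH correlator (textbook material, not specific to this paper's vector-valued construction):

```latex
% CHSH correlator for measurement settings a, a' and b, b':
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad
|S| \le 2 \;\text{(local hidden variables)}, \qquad
|S| \le 2\sqrt{2} \;\text{(Tsirelson bound, saturated by QM)}.
```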
|
Levitated nanoparticles are a promising platform for sensing applications and
for macroscopic quantum experiments. While the nanoparticles' motional
temperatures can be reduced to near absolute zero, their uncontrolled internal
degrees of freedom remain much hotter, inevitably leading to the emission of
heat radiation. The decoherence and motional heating caused by this thermal
emission process is still poorly understood beyond the case of the
center-of-mass motion of point particles. Here, we present the master equation
describing the impact of heat radiation on the motional quantum state of
arbitrarily sized and shaped dielectric rigid rotors. It predicts the
localization of spatio-orientational superpositions only based on the bulk
material properties and the particle geometry. A counter-intuitive and
experimentally relevant implication of the presented theory is that
orientational superpositions of optically isotropic bodies are not protected by
their symmetry, even in the small-particle limit.
|
The goal of this article is to study the box dimension of the mixed
Katugampola fractional integral of two-dimensional continuous functions on
$[0,1]\times[0,1]$. We prove that the box dimension of the mixed Katugampola
fractional integral having fractional order $\alpha = (\alpha_1, \alpha_2)$,
$\alpha_1 > 0$, $\alpha_2 > 0$, of two-dimensional continuous functions on
$[0,1]\times[0,1]$ is still two. Moreover, the results are also established for
the mixed Hadamard fractional integral.
|
For piecewise-linear maps the stable and unstable manifolds of hyperbolic
periodic solutions are themselves piecewise-linear. Hence compact subsets of
these manifolds can be represented using polytopes (i.e. polygons, in the case
of two-dimensional manifolds). Such representations are efficient and exact so
for computational purposes are superior to representations that use a large
number of points on some mesh (as is usually done in the smooth setting). We
introduce a method for computing convex polytope representations of stable and
unstable manifolds. For an unstable manifold we iterate a suitably small subset
of the local unstable manifold and prior to each iteration subdivide polytopes
where they intersect the switching manifold of the map. We prove the output
converges to the (entire) unstable manifold and use it to visualise attractors
and bifurcations of the three-dimensional border-collision normal form: we
identify a heterodimensional cycle, a two-dimensional unstable manifold whose
closure appears to be a unique attractor, and a piecewise-linear analogue of a
first homoclinic tangency where an attractor appears to be destroyed.
|
In this paper, we propose a graph-based kinship reasoning (GKR) network for
kinship verification, which aims to effectively perform relational reasoning on
the extracted features of an image pair. Unlike most existing methods which
mainly focus on how to learn discriminative features, our method considers how
to compare and fuse the extracted feature pair to reason about the kin
relations. The proposed GKR constructs a star graph called kinship relational
graph where each peripheral node represents the information comparison in one
feature dimension and the central node is used as a bridge for information
communication among peripheral nodes. Then the GKR performs relational
reasoning on this graph with recursive message passing. Extensive experimental
results on the KinFaceW-I and KinFaceW-II datasets show that the proposed GKR
outperforms the state-of-the-art methods.
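A schematic of the star-graph message passing described above, written from the abstract alone; the comparison features, dimensions, and update rule are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # feature dimension -> D peripheral nodes
H = 16         # hidden size of node states

# Hypothetical inputs: a pair of face features (f1, f2); each peripheral node
# starts from the comparison of the pair in one feature dimension.
f1, f2 = rng.standard_normal(D), rng.standard_normal(D)
W_in = rng.standard_normal((2, H)) * 0.1
peripheral = np.stack([np.array([f1[i] * f2[i], abs(f1[i] - f2[i])]) @ W_in
                       for i in range(D)])          # (D, H)
central = peripheral.mean(axis=0)                   # (H,)

W_msg = rng.standard_normal((2 * H, H)) * 0.1       # toy shared parameters
for _ in range(3):                                  # recursive message passing
    # peripheral -> central: aggregate all peripheral states
    central = np.tanh(np.concatenate([central, peripheral.mean(axis=0)]) @ W_msg)
    # central -> peripheral: broadcast the central state back as a bridge
    peripheral = np.tanh(
        np.concatenate([peripheral, np.tile(central, (D, 1))], axis=1) @ W_msg)

score = central @ rng.standard_normal(H)            # kin / non-kin logit
```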
|
Robust statistical features have emerged from the microscopic analysis of
dense pedestrian flows through a bottleneck, notably with respect to the time
gaps between successive passages. We pinpoint the mechanisms at the origin of
these features thanks to simple models that we develop and analyse
quantitatively. We disprove the idea that anticorrelations between successive
time gaps (i.e., an alternation between shorter ones and longer ones) are a
hallmark of a zipper-like intercalation of pedestrian lines and show that they
simply result from the possibility that pedestrians from distinct 'lines' or
directions cross the bottleneck within a short time interval. A second feature
concerns the bursts of escapes, i.e., egresses that come in fast succession.
Despite the ubiquity of exponential distributions of burst sizes, entailed by a
Poisson process, we argue that anomalous (power-law) statistics arise if the
bottleneck is nearly congested, albeit only in a tiny portion of parameter
space. The generality of the proposed mechanisms implies that similar
statistical features should also be observed for other types of particulate
flows.
|
A (p,q)-analogue of the classical Rogers-Szego polynomial is defined by
replacing the q-binomial coefficient in it by the (p,q)-binomial coefficient.
Just as the Rogers-Szego polynomial is associated with the q-oscillator
algebra, the (p,q)-Rogers-Szego polynomial is found to be associated with the
(p,q)-oscillator algebra.
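Spelled out, the definition described above reads (with the standard q-case alongside; the (p,q)-binomial notation follows common usage):

```latex
H_n(x;q) = \sum_{k=0}^{n} \begin{bmatrix} n \\ k \end{bmatrix}_q x^k
\qquad\longrightarrow\qquad
H_n(x;p,q) = \sum_{k=0}^{n} \begin{bmatrix} n \\ k \end{bmatrix}_{p,q} x^k .
```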
|
A bivariate integer-valued autoregressive process of order 1 (BINAR(1)) with
copula-joint innovations is studied. Different parameter estimation methods are
analyzed and compared via Monte Carlo simulations with emphasis on estimation
of the copula dependence parameter. An empirical application on defaulted and
non-defaulted loan data is carried out using different combinations of copula
functions and marginal distribution functions covering the cases where both
marginal distributions are from the same family, as well as the case where they
are from different distribution families.
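A minimal simulation sketch of a BINAR(1) process with copula-joint innovations, assuming binomial thinning, Poisson marginals and a Gaussian copula (all illustrative choices; the paper compares several copula/marginal combinations):

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(1)
T = 1000
alpha = np.array([0.4, 0.6])      # thinning (survival) probabilities
lam = np.array([1.0, 2.0])        # Poisson innovation means (illustrative)
rho = 0.5                         # Gaussian-copula dependence parameter

def thin(x, a):                   # binomial thinning a o x
    return rng.binomial(x, a)

X = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    # copula-joint innovations: Gaussian copula with Poisson marginals
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
    eps = poisson.ppf(norm.cdf(z), lam).astype(int)
    X[t] = thin(X[t - 1, 0], alpha[0]), thin(X[t - 1, 1], alpha[1])
    X[t] += eps
```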
|
Let $\mu$ be a finite positive Borel measure on the interval $[0, 1)$ and
$f(z)=\sum_{n=0}^{\infty}a_{n}z^{n} \in H(\mathbb{D})$. The Ces\`aro-like
operator is defined by $$ \mathcal {C}_{\mu}
(f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}a_k\right)z^n, \ z\in
\mathbb{D}, $$ where, for $n\geq 0$, $\mu_n$ denotes the $n$-th moment of the
measure $\mu$, that is, $\mu_n=\int_{[0, 1)} t^{n}d\mu(t)$. Let $X$ and $Y$ be
subspaces of $H( \mathbb{D})$, the purpose of this paper is to study the action
of $\mathcal {C}_{\mu}$ on distinct pairs $(X, Y)$. The spaces considered in
this paper are Hardy space $H^{p}(0<p\leq\infty)$, Morrey space
$L^{2,\lambda}(0<\lambda\leq1)$, mean Lipschitz space, Bloch type space, etc.
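For concreteness, a small numerical sketch of the operator's Taylor coefficients for a measure given by a density (hypothetical helper; with the Lebesgue measure, $\mu_n = 1/(n+1)$ and $\mathcal{C}_\mu$ reduces to the classical Ces\`aro operator):

```python
import numpy as np
from scipy.integrate import quad

# n-th coefficient of C_mu(f): mu_n * (a_0 + ... + a_n), where
# mu_n = \int_0^1 t^n w(t) dt for a measure with density w on [0, 1).
def cesaro_like(a, w=lambda t: 1.0):
    mu = [quad(lambda t, n=n: t**n * w(t), 0.0, 1.0)[0] for n in range(len(a))]
    return np.array(mu) * np.cumsum(a)

# Sanity check: for w = 1 (Lebesgue measure), mu_n = 1/(n+1).
a = np.array([1.0, 0.0, 0.0, 0.0])      # f(z) = 1
print(cesaro_like(a))                   # [1, 1/2, 1/3, 1/4]
```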
|
In this work, we aim at providing a consistent analysis of the dust
properties from metal-poor to metal-rich environments by linking them to
fundamental galactic parameters. We consider two samples of galaxies: the Dwarf
Galaxy Survey (DGS) and KINGFISH, totalling 109 galaxies, spanning almost 2 dex
in metallicity. We collect infrared (IR) to submillimetre (submm) data for both
samples and present the complete data set for the DGS sample. We model the
observed spectral energy distributions (SED) with a physically-motivated dust
model to access the dust properties. Using a different SED model (modified
blackbody), dust composition (amorphous carbon), or wavelength coverage at
submm wavelengths results in differences in the dust mass estimate of a factor
two to three, showing that this parameter is subject to non-negligible
systematic modelling uncertainties. For eight galaxies in our sample, we find a
rather small excess at 500 microns (< 1.5 sigma). We find that the dust SED of
low-metallicity galaxies is broader and peaks at shorter wavelengths compared
to more metal-rich systems, a sign of a clumpier medium in dwarf galaxies. The
PAH mass fraction and the dust temperature distribution are found to be driven
mostly by the specific star-formation rate, SSFR, with secondary effects from
metallicity. The correlations between metallicity and dust mass or total-IR
luminosity are direct consequences of the stellar mass-metallicity relation.
The dust-to-stellar mass ratios of metal-rich sources follow the well-studied
trend of decreasing ratio for decreasing SSFR. The relation is more complex for
highly star-forming low-metallicity galaxies and depends on the chemical
evolutionary stage of the source (i.e., gas-to-dust mass ratio). Dust growth
processes in the ISM play a key role in the dust mass build-up with respect to
the stellar content at high SSFR and low metallicity. (abridged)
|
A possibility of extending the applicability range of non-relativistic
calculations of electronuclear response functions in the quasielastic peak
region is studied. We show that adopting a particular model for determining the
kinematical inputs of the non-relativistic calculations can extend this range
considerably, almost eliminating the reference frame dependence of the results.
We also show that there exists one reference frame, where essentially the same
result can be obtained with no need of adopting the particular kinematical
model. The calculation is carried out with the Argonne V18 potential and the
Urbana IX three-nucleon interaction. A comparison of these improved
calculations with experimental data shows a very good agreement for the
quasielastic peak positions at $q=500$, 600, 700 MeV/c and for the peak heights
at the two lower $q$-values, while for the peak height at $q=700$ MeV/c one
finds differences of about 20%.
|
A traditional approach to realize self-adaptation in software engineering
(SE) is by means of feedback loops. The goals of the system can be specified as
formal properties that are verified against models of the system. On the other
hand, control theory (CT) provides a well-established foundation for designing
feedback loop systems and providing guarantees for essential properties, such
as stability, settling time, and steady state error. Currently, it is an open
question whether and how traditional SE approaches to self-adaptation consider
properties from CT. Answering this question is challenging given the principal
differences in representing properties in the two fields. In this paper, we take a
first step to answer this question. We follow a bottom-up approach where we
specify a control design (in Simulink) for a case inspired by Scuderia Ferrari
(F1) and provide evidence for stability and safety. The design is then
transferred into code (in C) that is further optimized. Next, we define
properties that enable verifying whether the control properties still hold at
code level. Then, we consolidate the solution by mapping the properties in both
worlds using specification patterns as common language and we verify the
correctness of this mapping. The mapping offers a reusable artifact to solve
similar problems. Finally, we outline opportunities for future work,
particularly to refine and extend the mapping and investigate how it can
improve the engineering of self-adaptive systems for both SE and CT engineers.
|
A Bayesian approach is adopted to analyze the sequence of seismic events and
their magnitudes near Jo\~ao C\^amara which occurred mainly from 1983 to 1998
along the Samambaia fault. In this work, we choose a Bayesian model for the
process of occurrence times conditional on the observed magnitude values
following the same procedure suggested by Stavrakakis and Tselentis (1987). The
model parameters are determined on the basis of historical and physical
information. We generate posterior samples from the joint posterior
distribution of the model parameters by using a variant of the
Metropolis-Hastings algorithm. We use the results in a variety of ways,
including the construction of pointwise posterior confidence bands for the
conditional intensity of the point process as a function of time, as well as a
posterior distribution for the mean number of occurrences per unit time.
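As an illustration of the sampling step, a generic random-walk Metropolis-Hastings sketch (the log-posterior here is a placeholder; the paper's variant targets the point-process parameters with historical and physical priors):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta):
    # Placeholder log-posterior; in the paper's setting this would combine
    # the point-process likelihood of occurrence times with the priors.
    return -0.5 * np.sum((theta - 1.0) ** 2)

def metropolis_hastings(theta0, n_samples=5000, step=0.3):
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    out = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)  # random walk
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                # accept/reject
            theta, lp = prop, lp_prop
        out.append(theta.copy())
    return np.array(out)

samples = metropolis_hastings([0.0, 0.0])
print(samples.mean(axis=0))
```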
|
We explore the statistical properties of energy transfer in ensembles of
doubly-driven Random-Matrix Floquet Hamiltonians, based on universal symmetry
arguments. The energy pumping efficiency distribution P(E) is associated with
the Hamiltonian parameter ensemble and the eigenvalue statistics of the Floquet
operator. For specific Hamiltonian ensembles, P(E) undergoes a transition that
cannot be associated with a symmetry breaking of the instantaneous Hamiltonian.
The Floquet eigenvalue spacing distribution indicates the considered ensembles
constitute generic nonintegrable Hamiltonian families. As a step towards
Hamiltonian engineering, we develop a machine-learning classifier to understand
the relative importance of the parameters in achieving high conversion efficiency. We
propose Random Floquet Hamiltonians as a general framework to investigate
frequency conversion effects in a new class of generic dynamical processes
beyond adiabatic pumps.
|
We present an interferometric sensor for investigating macroscopic quantum
mechanics on a table-top scale. The sensor consists of a pair of suspended
optical cavities with a finesse in excess of 100,000 comprising 10 g
fused-silica mirrors. In the current room-temperature operation, we achieve a
peak sensitivity of $0.5\,\mathrm{fm}/\sqrt{\mathrm{Hz}}$ in the acoustic frequency band, limited by
the readout noise. With additional suppression of the readout noise, we will be
able to reach the quantum radiation pressure noise, which would represent a
novel measurement of the quantum back-action effect. Such a sensor can
eventually be utilised for demonstrating macroscopic entanglement and testing
semi-classical and quantum gravity models.
|
Recent advances in general relativistic magnetohydrodynamic simulations have
expanded and improved our understanding of the dynamics of black-hole accretion
disks. However, current simulations do not capture the thermodynamics of
electrons in the low density accreting plasma. This poses a significant
challenge in predicting accretion flow images and spectra from first
principles. Because of this, simplified emission models have often been used,
with widely different configurations (e.g., disk- versus jet-dominated
emission), and were able to account for the observed spectral properties of
accreting black-holes. Exploring the large parameter space introduced by such
models, however, requires significant computational power that exceeds
conventional computational facilities. In this paper, we use GRay, a fast
GPU-based ray-tracing algorithm, on the GPU cluster El Gato, to compute images
and spectra for a set of six general relativistic magnetohydrodynamic
simulations with different magnetic field configurations and black-hole spins.
We also employ two different parametric models for the plasma thermodynamics in
each of the simulations. We show that, if only the spectral properties of Sgr
A* are used, all twelve models tested here can fit the spectra equally well.
However, when combined with the measurement of the image size of the emission
using the Event Horizon Telescope, current observations rule out all models
with strong funnel emission, because the funnels are typically very extended.
Our study shows that images of accretion flows with horizon-scale resolution
offer a powerful tool for understanding accretion flows around black holes and
their thermodynamic properties.
|
Let $X_1,\dots,X_n$ be independent centered random vectors in $\mathbb{R}^d$.
This paper shows that, even when $d$ may grow with $n$, the probability
$P(n^{-1/2}\sum_{i=1}^nX_i\in A)$ can be approximated by its Gaussian analog
uniformly in hyperrectangles $A$ in $\mathbb{R}^d$ as $n\to\infty$ under
appropriate moment assumptions, as long as $(\log d)^5/n\to0$. This improves a
result of Chernozhukov, Chetverikov & Kato [Ann. Probab. 45 (2017) 2309-2353]
in terms of the dimension growth condition. When $n^{-1/2}\sum_{i=1}^nX_i$ has
a common factor across the components, this condition can be further improved
to $(\log d)^3/n\to0$. The corresponding bootstrap approximation results are
also developed. These results serve as a theoretical foundation of simultaneous
inference for high-dimensional models.
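The kind of simultaneous inference this enables can be sketched with a Gaussian multiplier bootstrap for the max statistic, a hyperrectangle functional of the normalized sum (toy data; not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 1000                       # d much larger than n
X = rng.standard_normal((n, d))        # placeholder centered data

# Max coordinate of the normalized sum; {T <= t} is a hyperrectangle event.
T = np.max(np.abs(X.sum(axis=0)) / np.sqrt(n))

B = 500
Xc = X - X.mean(axis=0)
Tboot = np.empty(B)
for b in range(B):
    e = rng.standard_normal(n)                      # Gaussian multipliers
    Tboot[b] = np.max(np.abs(e @ Xc) / np.sqrt(n))
crit = np.quantile(Tboot, 0.95)                     # simultaneous 95% bound
print(T, crit)
```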
|
In this letter we present a scan for new vacua within consistent truncations
of eleven/ten-dimensional supergravity down to five dimensions that preserve $N
= 2$ supersymmetry, after their complete classification in arXiv:2112.03931. We
first make explicit the link between the equations of exceptional
Sasaki-Einstein backgrounds in arXiv:1602.02158 and the standard BPS equations
for $5d$ $N = 2$ supergravity of arXiv:1601.00482. This derivation allows us to expedite a
scan for vacua preserving $N = 2$ supersymmetry within the framework used for
the classification presented in arXiv:2112.03931.
|
Developers and data scientists often struggle to write command-line inputs,
even though graphical interfaces or tools like ChatGPT can assist. The
solution? "ai-cli," an open-source system inspired by GitHub Copilot that
converts natural language prompts into executable commands for various Linux
command-line tools. By tapping into OpenAI's API, which allows interaction
through JSON HTTP requests, "ai-cli" transforms user queries into actionable
command-line instructions. However, integrating AI assistance across multiple
command-line tools, especially in open source settings, can be complex.
Historically, operating systems could mediate, but individual tool
functionality and the lack of a unified approach have made centralized
integration challenging. The "ai-cli" tool, by bridging this gap through
dynamic loading and linking with each program's Readline library API, makes
command-line interfaces smarter and more user-friendly, opening avenues for
further enhancement and cross-platform applicability.
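The JSON-over-HTTP interaction mentioned above looks roughly like this; the endpoint and payload follow OpenAI's public chat-completions API, while the prompt wording and model choice are illustrative assumptions:

```python
import json, os, urllib.request

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-3.5-turbo",          # illustrative model choice
        "messages": [
            {"role": "system",
             "content": "Reply only with a bash command line."},
            {"role": "user",
             "content": "list the five largest files under /var/log"},
        ],
    }).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```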
|
We present in this letter an original freezing process yielding remarkably
homogeneous films of chiral smectics. This optical homogeneity is observed on
planar films as well as on films exhibiting various complex three-dimensional
shapes.
|
The status of the search at LEP2 for the Higgs boson in the Standard Model
(SM) and in the minimal supersymmetric extension of the Standard Model (MSSM)
is reviewed. A preliminary lower limit of 95.5 GeV/c^2 at 95% C.L. on the SM
Higgs mass is obtained after a preliminary analysis of the data collected at
sqrt(s) = 189 GeV. For standard choices of MSSM parameter sets, the search for
the neutral Higgs bosons h and A leads to preliminary 95% C.L. exclusion lower
limits of 83.5 GeV/c^2 and 84.5 GeV/c^2, respectively.
|
Risk, including economic risk, is increasingly a concern for public policy
and management. The possibility of dealing effectively with risk is hampered,
however, by lack of a sound empirical basis for risk assessment and management.
The paper demonstrates the general point for cost and demand risks in urban
rail projects. The paper presents empirical evidence that allows valid economic
risk assessment and management of urban rail projects, including benchmarking
of individual or groups of projects. Benchmarking of the Copenhagen Metro is
presented as a case in point. The approach developed is proposed as a model for
other types of policies and projects in order to improve economic and financial
risk assessment and management in policy and planning.
|
This paper derives an inequality relating the p-norm of a positive 2 x 2
block matrix to the p-norm of the 2 x 2 matrix obtained by replacing each block
by its p-norm. The inequality had been known for integer values of p, so the
main contribution here is the extension to all values p >= 1. In a special case
the result reproduces Hanner's inequality. As an application in quantum
information theory, the inequality is used to obtain some results concerning
maximal p-norms of product channels.
|
We examine the ability of gravitational lens time delays to reveal complex
structure in lens potentials. In Congdon, Keeton & Nordgren (2008), we
predicted how the time delay between the bright pair of images in a "fold" lens
scales with the image separation, for smooth lens potentials. Here we show that
the proportionality constant increases with the quadrupole moment of the lens
potential, and depends only weakly on the position of the source along the
caustic. We use Monte Carlo simulations to determine the range of time delays
that can be produced by realistic smooth lens models consisting of isothermal
ellipsoid galaxies with tidal shear. We can then identify outliers as "time
delay anomalies". We find evidence for anomalies in close image pairs in the
cusp lenses RX J1131$-$1231 and B1422+231. The anomalies in RX J1131$-$1231
provide strong evidence for substructure in the lens potential, while at this
point the apparent anomalies in B1422+231 mainly indicate that the time delay
measurements need to be improved. We also find evidence for time delay
anomalies in larger-separation image pairs in the fold lenses, B1608+656 and
WFI 2033$-$4723, and the cusp lens RX J0911+0551. We suggest that these
anomalies are caused by some combination of substructure and a complex lens
environment. Finally, to assist future monitoring campaigns we use our smooth
models with shear to predict the time delays for all known four-image lenses.
|
We empirically analyze a simple heuristic for large sparse set cover
problems. It uses the weighted greedy algorithm as a basic building block. By
multiplicative updates of the weights attached to the elements, the greedy
solution is iteratively improved. The implementation of this algorithm is
trivial and the algorithm is essentially free of parameters that would require
tuning. More iterations can only improve the solution. This set of features
makes the approach attractive for practical problems.
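A sketch of the heuristic as described, with one plausible multiplicative update rule (boosting elements the current cover reaches only once); the exact rule used in the paper may differ:

```python
import random

def weighted_greedy(sets, universe, w):
    # Repeatedly pick the set with the largest weight of newly covered elements.
    uncovered, cover = set(universe), []
    while uncovered:
        i = max(range(len(sets)),
                key=lambda i: sum(w[e] for e in sets[i] & uncovered))
        cover.append(i)
        uncovered -= sets[i]
    return cover

def iterated_greedy(sets, universe, rounds=50, eta=0.2):
    w = {e: 1.0 for e in universe}
    best = weighted_greedy(sets, universe, w)
    for _ in range(rounds):
        cover = weighted_greedy(sets, universe, w)
        if len(cover) < len(best):
            best = cover
        # Multiplicative update: elements covered only once are 'critical',
        # so increase their weight to make greedy attend to them earlier.
        for e in universe:
            if sum(e in sets[i] for i in cover) == 1:
                w[e] *= 1.0 + eta
    return best

random.seed(0)
universe = set(range(30))
sets = [set(random.sample(range(30), 6)) for _ in range(25)] + [set(range(30))]
print(len(iterated_greedy(sets, universe)))
```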
|
The study of open charm meson production provides an efficient tool for the
investigation of the properties of hot and dense matter formed in
nucleus-nucleus collisions. The interpretation of the existing di-muon data
from the CERN SPS suffers from a lack of knowledge on the mechanism and
properties of the open charm particle production. Due to this, the heavy-ion
programme of the NA61/SHINE experiment at the CERN SPS has been extended by
precise measurements of charm hadrons with short lifetimes. A new Vertex
Detector for measurements of the rare processes of open charm production in
nucleus-nucleus collisions was designed to meet the challenges of track
registration and high resolution in primary and secondary vertex
reconstruction. A small-acceptance version of the vertex detector was installed
in 2016 and tested with Pb+Pb collisions at 150A GeV/c. It was also operating
during the physics data taking on Xe+La and Pb+Pb collisions at 150A GeV/c
conducted in 2017 and 2018. This paper presents the detector design and
construction, data calibration, event reconstruction, and analysis procedure.
|
In this study we present a dynamical agent-based model to investigate the
interplay between the socio-economy and an SEIRS-type epidemic spreading over a
geographical area, divided into smaller districts and further into smallest
area cells. The model treats the populations of cells and authorities of
districts as agents, such that the former can reduce their economic activity
and the latter can recommend economic activity reduction both with the overall
goal to slow down the epidemic spreading. The agents make decisions with the
aim of attaining as high socio-economic standings as possible relative to other
agents of the same type by evaluating their standings based on the local and
regional infection rates, compliance to the authorities' regulations, regional
drops in economic activity, and efforts to mitigate the spread of epidemic. We
find that the willingness of population to comply with authorities'
recommendations has the most drastic effect on the epidemic spreading: periodic
waves spread almost unimpeded in non-compliant populations, while in compliant
ones the spread is minimal with chaotic spreading pattern and significantly
lower infection rates. Health and economic concerns of agents turn out to have
lesser roles, the former increasing their efforts and the latter decreasing
them.
|
It is established experimentally that the low-temperature photoelectric
spectral line width of shallow impurities depends not only on the charged
impurity concentration $N_i=2KN_A$ and the degree of sample compensation
$K=N_A/N_D$, as was believed earlier. To a great extent it also depends on the
inhomogeneity of the impurity distribution. For samples with homogeneous and
inhomogeneous impurity distributions, the dependence of the line width on
external electric fields smaller than the breakdown field is different. This
broadening mechanism allows one to control the quality of samples with nearly
equal impurity concentrations.
|
We consider the effects of the Planck scale on four-flavour neutrino mixing.
Including the gravitational interaction at $M_{x}=M_{\rm Planck}$, we find that
for a degenerate neutrino mass ordering the Planck-scale effects change the
values of the mixing angles $\theta'_{23}$ and $\theta'_{12}$, while
$\theta'_{13}$, $\theta'_{14}$, $\theta'_{34}$ and $\theta'_{24}$ remain
unchanged above the GUT scale. In this paper, we study neutrino mixing in the
four-flavour scheme above the GUT scale.
|
In this paper we prove the equidistribution of $\mathbf{C}$-special
subvarieties in certain Kuga varieties, which implies a special case of the
general Andr\'e-Oort conjecture formulated for mixed Shimura varieties by
R. Pink. The main idea is to reduce the equidistribution to a theorem of
Szpiro-Ullmo-Zhang on small points of abelian varieties and a theorem on the
equidistribution of $\mathbf{C}$-special subvarieties of Kuga varieties of
rigid type treated by the author in a previous paper.
|
(abridged) We present a near-infrared (NIR) photometric variability study of
the candidate protoplanet, TMR-1C, located at a separation of about 10" (~1000
AU) from the Class I protobinary TMR-1AB in the Taurus molecular cloud. Our
campaign was conducted between October, 2011, and January, 2012. We were able
to obtain 44 epochs of observations in each of the H and Ks filters. Based on
the final accuracy of our observations, we do not find any strong evidence of
short-term NIR variability at amplitudes of >0.15-0.2 mag for TMR-1C or
TMR-1AB. Our present observations, however, have reconfirmed the
large-amplitude long-term variations in the NIR emission for TMR-1C, which were
earlier observed between 1998 and 2002, and have also shown that no particular
correlation exists between the brightness and the color changes. TMR-1C became
brighter in the H-band by ~1.8 mag between 1998 and 2002, and then fainter
again by ~0.7 mag between 2002 and 2011. In contrast, it has persistently
become brighter in the Ks-band in the period between 1998 and 2011. The (H-Ks)
color for TMR-1C shows large variations, from a red value of 1.3+/-0.07 and
1.6+/-0.05 mag in 1998 and 2000, to a much bluer color of -0.1+/-0.5 mag in
2002, and then again a red color of 1.1+/-0.08 mag in 2011. The observed
variability from 1998 to 2011 suggests that TMR-1C becomes fainter when it gets
redder, as expected from variable extinction, while the brightening observed in
the Ks-band could be due to physical variations in its inner disk structure.
The NIR colors for TMR-1C obtained using the high precision photometry from
1998, 2000, and 2011 observations are similar to the protostars in Taurus,
suggesting that it could be a faint dusty Class I source. Our study has also
revealed two new variable sources in the vicinity of TMR-1AB, which show
long-term variations of ~1-2 mag in the NIR colors between 2002 and 2011.
|
Topological invariants are fundamental characteristics reflecting global
properties of quantum systems, yet their exploration has predominantly been
limited to the static (DC) transport and transverse (Hall) channel. In this
work, we extend the spectral sum rules for frequency-resolved electric
conductivity $\sigma (\omega)$ in topological systems, and show that the sum
rule for the longitudinal channel is expressed through topological and
quantum-geometric invariants. We find that for dispersionless (flat) Chern
bands, the rule is expressed as, $ \int_{-\infty}^{+\infty} d\omega \,
\text{Re}(\sigma_{xx} + \sigma_{yy}) = C \Delta e^2$, where $C$ is the Chern
number, $\Delta$ the topological gap, and $e$ the electric charge. In scenarios
involving dispersive Chern bands, the rule is defined by the invariant of the
quantum metric and the Luttinger invariant, $\int_{-\infty}^{+\infty} d\omega \,
\text{Re}(\sigma_{xx} + \sigma_{yy}) = 2 \pi e^2 \Delta \sum_{\boldsymbol{k}}
\text{Tr} \, \mathcal{G}_{ij}(\boldsymbol{k}) + (\text{Luttinger invariant})$,
where $\text{Tr} \, \mathcal{G}_{ij}$ is the invariant of the Fubini-Study
metric (defining the spread of Wannier orbitals). We further discuss the
physical role of
topological and quantum-geometric invariants in spectral sum rules. Our
approach is adaptable across varied topologies and system dimensionalities.
|
This paper is concerned with the question of when a theory is refutable with
certainty on the basis of a sequence of primitive observations. Beginning with
the simple definition of falsifiability as the ability to be refuted by some
finite collection of observations, I assess the literature on falsification and
its descendants within the context of the dividing lines of contemporary model
theory. The static case is broadly concerned with the question of how much of a
theory can be subjected to falsifying experiments. In much of the literature,
this question is tied up with whether the theory in question is axiomatizable
by a collection of universal first-order sentences. I argue that this is too
narrow a conception of falsification by demonstrating that a natural class of
theories of distinct model-theoretic interest -- so-called NIP theories -- are
themselves highly falsifiable.
|
Barycentric interpolation is arguably the method of choice for numerical
polynomial interpolation. The polynomial interpolant is expressed in terms of
function values using the so-called barycentric weights, which depend on the
interpolation points. Few explicit formulae for these barycentric weights are
known. In [H. Wang and S. Xiang, Math. Comp., 81 (2012), 861--877], the authors
have shown that the barycentric weights of the roots of Legendre polynomials
can be expressed explicitly in terms of the weights of the corresponding
Gaussian quadrature rule. This idea was subsequently implemented in the Chebfun
package [L. N. Trefethen and others, The Chebfun Development Team, 2011] and in
the process generalized by the Chebfun authors to the roots of Jacobi, Laguerre
and Hermite polynomials. In this paper, we explore the generality of the link
between barycentric weights and Gaussian quadrature and show that such
relationships are related to the existence of lowering operators for orthogonal
polynomials. We supply an exhaustive list of cases, in which all known formulae
are recovered and also some new formulae are derived, including the barycentric
weights for Gauss-Radau and Gauss-Lobatto points. Based on a fast ${\mathcal
O}(n)$ algorithm for the computation of Gaussian quadrature, due to Hale and
Townsend, this leads to an ${\mathcal O}(n)$ computational scheme for
barycentric weights.
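A short numerical sketch of the Legendre case discussed above, using the Wang-Xiang relation between Gauss-Legendre quadrature weights and barycentric weights (evaluation points are assumed distinct from the nodes):

```python
import numpy as np

def bary_weights_legendre(n):
    # Gauss-Legendre nodes (increasing order) and quadrature weights.
    x, lam = np.polynomial.legendre.leggauss(n)
    # Wang & Xiang (2012): barycentric weights of the Legendre roots,
    # up to an irrelevant common scaling factor.
    return x, (-1.0) ** np.arange(n) * np.sqrt((1 - x**2) * lam)

def bary_interp(x, w, fx, t):
    # Second barycentric formula; t must avoid the nodes exactly.
    t = np.atleast_1d(t).astype(float)
    num, den = np.zeros_like(t), np.zeros_like(t)
    for xi, wi, fi in zip(x, w, fx):
        num += wi * fi / (t - xi)
        den += wi / (t - xi)
    return num / den

x, w = bary_weights_legendre(20)
t = np.linspace(-0.99, 0.99, 5)
print(np.max(np.abs(bary_interp(x, w, np.exp(x), t) - np.exp(t))))  # ~1e-15
```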
|
Code generation focuses on the automatic conversion of natural language (NL)
utterances into code snippets. Sequence-to-tree (Seq2Tree) approaches have been
proposed for code generation to guarantee the grammatical correctness of the
generated code; they generate each subsequent Abstract Syntax Tree (AST) node
relying on antecedent predictions of AST nodes. Existing Seq2Tree methods
tend to treat both antecedent predictions and subsequent predictions equally.
However, under the AST constraints, it is difficult for Seq2Tree models to
produce the correct subsequent prediction based on incorrect antecedent
predictions. Thus, antecedent predictions ought to receive more attention than
subsequent predictions. To this end, in this paper, we propose an effective
method, named Antecedent Prioritized (AP) Loss, that helps the model attach
importance to antecedent predictions by exploiting the position information of
the generated AST nodes. We design an AST-to-Vector (AST2Vec) method, that maps
AST node positions to two-dimensional vectors, to model the position
information of AST nodes. To evaluate the effectiveness of our proposed loss,
we implement and train an Antecedent Prioritized Tree-based code generation
model called APT. With better antecedent predictions and accompanying
subsequent predictions, APT significantly improves the performance. We conduct
extensive experiments on four benchmark datasets, and the experimental results
demonstrate the superiority and generality of our proposed method.
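A sketch of how such a position-aware weighting could look; the exact AP Loss and AST2Vec definitions are in the paper, so the weighting rule and the use of the position-vector norm below are illustrative assumptions only:

```python
import torch
import torch.nn.functional as F

def ap_loss(logits, targets, positions, gamma=0.5):
    """Antecedent-prioritized cross entropy (illustrative sketch).

    logits:    (T, V) predictions for T generated AST nodes
    targets:   (T,)   gold node ids
    positions: (T, 2) 2-D position vectors of the nodes (cf. AST2Vec); their
               norm is used here as a proxy for how 'antecedent' a node is.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")      # (T,)
    depth = positions.norm(dim=1)
    w = torch.exp(-gamma * (depth - depth.min()))  # earlier nodes -> larger weight
    return (w * ce).sum() / w.sum()

# toy usage
T, V = 7, 50
logits = torch.randn(T, V, requires_grad=True)
targets = torch.randint(0, V, (T,))
positions = torch.randn(T, 2)
ap_loss(logits, targets, positions).backward()
```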
|
In this work we study de Branges-Rovnyak spaces, $H(b)$, on the unit ball of
$\mathbb{C}^n$. We give an integral representation of the functions in $H(b)$
through the Clark measure on $S^n$ associated with $b$. A characterization of
admissible boundary limits is given in relation with finite angular
derivatives. Lastly, we examine the interplay between Clark measures and
angular derivatives, showing that the Clark measure associated with $b$ has an
atom at a boundary point if and only if $b$ has a finite angular derivative at
the same point.
|
We prove exponential expressivity with stable ReLU Neural Networks (ReLU NNs)
in $H^1(\Omega)$ for weighted analytic function classes in certain polytopal
domains $\Omega$, in space dimension $d=2,3$. Functions in these classes are
locally analytic on open subdomains $D\subset \Omega$, but may exhibit isolated
point singularities in the interior of $\Omega$ or corner and edge
singularities at the boundary $\partial \Omega$. The exponential expression
rate bounds proved here imply uniform exponential expressivity by ReLU NNs of
solution families for several elliptic boundary and eigenvalue problems with
analytic data. The exponential approximation rates are shown to hold in space
dimension $d = 2$ on Lipschitz polygons with straight sides, and in space
dimension $d=3$ on Fichera-type polyhedral domains with plane faces. The
constructive proofs indicate in particular that NN depth and size increase
poly-logarithmically with respect to the target NN approximation accuracy
$\varepsilon>0$ in $H^1(\Omega)$. The results cover in particular solution sets
of linear, second order elliptic PDEs with analytic data and certain nonlinear
elliptic eigenvalue problems with analytic nonlinearities and singular,
weighted analytic potentials as arise in electron structure models. In the
latter case, the functions correspond to electron densities that exhibit
isolated point singularities at the positions of the nuclei. Our findings
provide in particular a mathematical foundation for recently reported, successful
uses of deep neural networks in variational electron structure algorithms.
|
The neutrino telescopes of the present generation, depending on their
specific features, can reconstruct the neutrino spectra from a galactic burst.
Since the optical counterpart may not be available, it is desirable to have
at hand alternative methods to estimate the distance of the supernova explosion
using only the neutrino data. In this work we present preliminary results on
the method we are proposing to estimate the distance of a galactic supernova
based only on the spectral shape of the neutrino burst and assumptions on the
gravitational binding energy released in a typical supernova explosion due to
stellar collapse.
|
We present a new method to solve the dynamics of disordered spin systems on
finite time-scales. It involves a closed driven diffusion equation for the
joint spin-field distribution, with time-dependent coefficients described by a
dynamical replica theory which, in the case of detailed balance, incorporates
equilibrium replica theory as a stationary state. The theory is exact in
various limits. We apply our theory to both the symmetric- and the
non-symmetric Sherrington-Kirkpatrick spin-glass, and show that it describes
the (numerical) experiments very well.
|
The 3D localisation of an object and the estimation of its properties, such
as shape and dimensions, are challenging under varying degrees of transparency
and lighting conditions. In this paper, we propose a method for jointly
localising container-like objects and estimating their dimensions using two
wide-baseline, calibrated RGB cameras. Under the assumption of circular
symmetry along the vertical axis, we estimate the dimensions of an object with
a generative 3D sampling model of sparse circumferences, iterative shape
fitting and image re-projection to verify the sampling hypotheses in each
camera using semantic segmentation masks. We evaluate the proposed method on a
novel dataset of objects with different degrees of transparency and captured
under different backgrounds and illumination conditions. Our method, which is
based on RGB images only, outperforms in terms of localisation success and
dimension estimation accuracy a deep-learning based approach that uses depth
maps.
|
In the last few years the derivative expansion of the Non-Perturbative
Renormalization Group has proven to be a very efficient tool for the precise
computation of critical quantities. In particular, recent progress in the
understanding of its convergence properties allowed for an estimate of the
error bars as well as the precise computation of many critical quantities. In
this work we extend previous studies to the computation of several universal
amplitude ratios for the critical regime of $O(N)$ models using the derivative
expansion of the Non-Perturbative Renormalization Group at order
$\mathcal{O}(\partial^4)$ for three dimensional systems.
|
A relation between variational principles for equations of continuum
mechanics in Eulerian and Lagrangian descriptions is considered. It is shown
that for a system of differential equations in Eulerian variables the
corresponding Lagrangian description is related to introducing nonlocal
variables. The
connection between these descriptions is obtained in terms of differential
coverings. The relation between variational principles of a system of equations
and its symplectic structures is discussed. It is shown that if a system of
equations in Lagrangian variables can be derived from a variational principle
then there is no corresponding variational principle in Eulerian variables.
|
We show that nuclear spin subsystems can be completely controlled via
microwave irradiation of resolved anisotropic hyperfine interactions with a
nearby electron spin. Such indirect addressing of the nuclear spins via
coupling to an electron allows us to create nuclear spin gates whose
operational time is significantly faster than conventional direct addressing
methods. We experimentally demonstrate the feasibility of this method on a
solid-state ensemble system consisting of one electron and one nuclear spin.
|
In many intracellular processes, the length distribution of microtubules is
controlled by depolymerizing motor proteins. Experiments have shown that,
following non-specific binding to the surface of a microtubule, depolymerizers
are transported to the microtubule tip(s) by diffusion or directed walk and,
then, depolymerize the microtubule from the tip(s) after accumulating there. We
develop a quantitative model to study the depolymerizing action of such a
generic motor protein, and its possible effects on the length distribution of
microtubules. We show that, when the motor protein concentration in solution
exceeds a critical value, a steady state is reached where the length
distribution is, in general, non-monotonic with a single peak. However, for
highly processive motors and large motor densities, this distribution
effectively becomes an exponential decay. Our findings suggest that such motor
proteins may be selectively used by the cell to ensure precise control of MT
lengths. The model is also used to analyze experimental observations of
motor-induced depolymerization.
|
Vibrational motions in electronically excited states can be observed by
either time- and frequency-resolved infrared absorption or by off-resonant
stimulated Raman techniques. Multipoint correlation function expressions are
derived for both signals. Three representations for the signal which suggest
different simulation protocols are developed. These are based on the forward
and the backward propagation of the wavefunction, sum over state expansion
using an effective vibration Hamiltonian and a semiclassical treatment of a
bath. We show that the effective temporal ($\Delta t$) and spectral
($\Delta\omega$) resolution of the techniques is not controlled solely by
experimental knobs but also depends on the system dynamics being probed. The
Fourier uncertainty $\Delta\omega\Delta t>1$ is never violated.
|
New algorithms for construction of asymptotic expansions for stationary
distributions of nonlinearly perturbed semi-Markov processes with finite phase
spaces are presented. These algorithms are based on a special technique of
sequential phase space reduction, which can be applied to processes with an
arbitrary asymptotic communicative structure of phase spaces. Asymptotic
expansions are given in two forms, without and with explicit bounds for
remainders.
|
We present a Bayesian method for feature selection in the presence of
grouping information with sparsity on the between- and within group level.
Instead of using a stochastic algorithm for parameter inference, we employ
expectation propagation, which is a deterministic and fast algorithm. Available
methods for feature selection in the presence of grouping information have a
number of shortcomings: on the one hand, lasso methods, while being fast,
underestimate the regression coefficients and do not make good use of the
grouping information, and on the other hand, Bayesian approaches, while
accurate in parameter estimation, often rely on the stochastic and slow Gibbs
sampling procedure to recover the parameters, rendering them infeasible e.g.
for gene network reconstruction. Our approach of a Bayesian sparse-group
framework with expectation propagation enables us to not only recover accurate
parameter estimates in signal recovery problems, but also makes it possible to
apply this Bayesian framework to large-scale network reconstruction problems.
The presented method is generic but in terms of application we focus on gene
regulatory networks. We show on simulated and experimental data that the method
constitutes a good choice for network reconstruction regarding the number of
correctly selected features, prediction on new data and reasonable computing
time.
|
In this paper, masses and radii of $\Sigma^-_u$-state hybrid charmonium
mesons are calculated by numerically solving the Schr\"odinger equation with a
non-relativistic potential model. The calculated masses of $\Sigma^-_u$-state
charmonium hybrid mesons are found to be close to the results obtained through
lattice simulations. The calculated masses are used to construct Regge
trajectories. It is found that the trajectories are almost linear and parallel.
|
Most recent CNN architectures use average pooling as a final feature encoding
step. In the field of fine-grained recognition, however, recent global
representations like bilinear pooling offer improved performance. In this
paper, we generalize average and bilinear pooling to "alpha-pooling", allowing
for learning the pooling strategy during training. In addition, we present a
novel way to visualize decisions made by these approaches. We identify parts of
training images having the highest influence on the prediction of a given test
image. It allows for justifying decisions to users and also for analyzing the
influence of semantic parts. For example, we can show that the higher capacity
VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity
VGG-M model when recognizing fine-grained bird categories. Both contributions
allow us to analyze the difference when moving between average and bilinear
pooling. In addition, experiments show that our generalized approach can
outperform both across a variety of standard datasets.
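One formulation consistent with the description above, assuming non-negative (post-ReLU) local features: the pooled descriptor is the mean of $x^{\alpha-1} \otimes x$, which reduces to bilinear pooling at $\alpha=2$ and to a (replicated) average pooling at $\alpha=1$. This is written from the abstract, not the paper's exact code; in training, alpha would be a learned parameter:

```python
import numpy as np

def alpha_pool(X, alpha):
    # X: (N, D) non-negative local features; returns the flattened D x D
    # matrix mean_n of outer(x_n**(alpha-1), x_n).
    G = np.einsum('nd,ne->de', np.power(X, alpha - 1.0), X) / len(X)
    return G.ravel()

rng = np.random.default_rng(0)
X = np.maximum(rng.standard_normal((100, 8)), 0.0)   # toy ReLU features
avg_like = alpha_pool(X, 1.0)   # rows of X**0 are all ones -> replicated mean
bilinear = alpha_pool(X, 2.0)   # classical bilinear pooling
assert np.allclose(avg_like[:8], X.mean(axis=0))
```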
|
Near-field radiative heat transfer (NFRHT) is strongly related to many
applications such as near-field imaging, thermo-photovoltaics and thermal
circuit devices. The active control of NFRHT is of great interest since it
provides a degree of tunability by external means. In this work, a magnetically
tunable multi-band NFRHT is revealed in a system of two suspended graphene
sheets at room temperature. It is found that the single-band spectra for B=0
split into multi-band spectra under an external magnetic field. Dual-band
spectra can be realized for a modest magnetic field (e.g., B=4 T). One band is
determined by intra-band transitions in the classical regime, which undergoes a
blue shift as the chemical potential increases. Meanwhile, the other band is
contributed by inter-Landau-level transitions in the quantum regime, which is
robust against the change of chemical potentials. For a strong magnetic field
(e.g., B=15 T), there is an additional band with the resonant peak appearing at
near-zero frequency (microwave regime), stemming from the magneto-plasmon zero
modes. The great enhancement of NFRHT at such a low frequency has not been
found in any previous system. This work may pave the way for multi-band thermal
information transfer based on atomically thin graphene sheets.
|
Nonlocal evolutionary equations containing memory terms model a variety of
non-Markovian processes. We present a Markovian embedding procedure for a class
of nonlocal equations by utilising the spectral representation of the nonlinear
memory kernel. This allows us to transform the nonlocal system to a
local-in-time system in an abstract extended space. We demonstrate our
embedding procedure and its efficacy for two different physical models, namely
the (i) 1D walking droplet and (ii) the 1D single-phase Stefan problem.
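The embedding can be illustrated on a generic evolution equation with a sum-of-exponentials (spectral) kernel; the notation here is schematic rather than the paper's:

```latex
% A nonlocal equation with kernel K(t) = \sum_{j=1}^{M} c_j e^{-\gamma_j t}:
\dot u(t) = F\big(u(t)\big) + \int_0^t K(t-s)\, g\big(u(s)\big)\, ds
% is equivalent to the local-in-time extended system
\dot u = F(u) + \sum_{j=1}^{M} z_j, \qquad
\dot z_j = -\gamma_j z_j + c_j\, g(u), \qquad z_j(0) = 0,
% since each auxiliary mode integrates to
z_j(t) = c_j \int_0^t e^{-\gamma_j (t-s)}\, g\big(u(s)\big)\, ds .
```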
|
The Kreiss-Majda Lopatinski determinant encodes a uniform stability property
of shock wave solutions to hyperbolic systems of conservation laws in several
space variables. This note deals with the Lopatinski determinant for shock
waves of sufficiently small amplitude. The determinant is known to be non-zero
for so-called extreme shock waves, i.e., shock waves which are associated with
either the slowest or the fastest mode the system displays for a given
direction of propagation, if the mode is Metivier convex. The result of the
note is that for arbitrarily small non-extreme shock waves associated with a
Metivier convex mode, the Lopatinski determinant may vanish.
|
Most inverse optimization models impute unspecified parameters of an
objective function to make an observed solution optimal for a given
optimization problem with a fixed feasible set. We propose two approaches to
impute unspecified left-hand-side constraint coefficients in addition to a cost
vector for a given linear optimization problem. The first approach identifies
parameters minimizing the duality gap, while the second minimally perturbs
prior estimates of the unspecified parameters to satisfy strong duality, if it
is possible to satisfy the optimality conditions exactly. We apply these two
approaches to the general linear optimization problem. We also use them to
impute unspecified parameters of the uncertainty set for robust linear
optimization problems under interval and cardinality constrained uncertainty.
Each inverse optimization model we propose is nonconvex, but we show that a
globally optimal solution can be obtained either in closed form or by solving a
linear number of linear or convex optimization problems.
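A sketch of the first (duality-gap) approach for a linear program $\min\{c^\top x : Ax \ge b\}$, using cvxpy; the normalization constraint and data are illustrative, and the paper's models additionally impute constraint coefficients:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
x0 = rng.random(n)                 # observed decision
b = A @ x0 - rng.random(m)         # constructed so that x0 is feasible

c = cp.Variable(n)                 # imputed cost vector
y = cp.Variable(m)                 # dual variables of Ax >= b

gap = c @ x0 - b @ y               # duality gap; >= 0 by weak duality
prob = cp.Problem(cp.Minimize(gap),
                  [A.T @ y == c,   # dual feasibility ties c to y
                   y >= 0,
                   cp.sum(c) == 1])  # normalization to rule out c = 0
prob.solve()
print(c.value, prob.value)         # gap == 0 makes x0 exactly optimal
```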
|
The surface detector (SD) array of the southern Pierre Auger Observatory will
consist of a triangular grid of 1600 water Cherenkov tanks with 1.5 km spacing.
For zenith angles less than 60 deg the primary energy can be estimated from the
signal S(1000) at a distance of about 1000 m from the shower axis, solely on the
basis of SD data. A suitable lateral distribution function (LDF) S(r) is fitted
to the signals recorded by the water tanks and used to quantify S(1000).
Therefore, knowledge of the LDF is a fundamental requirement for determining
the energy of the primary particle. The Engineering Array (EA), a prototype
facility consisting of 32 tanks, has taken data continuously since late 2001.
On the basis of selected experimental data and Monte Carlo simulations various
preliminary LDFs are examined.
|
The Density Matrix Renormalization Group (DMRG) algorithm has been extremely
successful for computing the ground states of one-dimensional quantum many-body
systems. For problems concerned with mixed quantum states, however, it is less
successful, in that either such an algorithm does not exist yet or it may
return unphysical solutions. Here we propose a positive matrix product ansatz
for mixed quantum states which preserves positivity by construction. More
importantly, it allows one to build a DMRG algorithm which, like the standard
DMRG for ground states, iteratively reduces the global optimization problem to
local ones of the same type, with the energy converging monotonically in
principle. This algorithm is applied to compute both the
equilibrium states and the non-equilibrium steady states, and its advantages
are numerically demonstrated.
|
Quantum error correcting codes (QECCs) are the means of choice whenever
quantum systems suffer errors, e.g., due to imperfect devices, environments, or
faulty channels. By now, a plethora of families of codes is known, but there is
no universal approach to finding new or optimal codes for a certain task and
subject to specific experimental constraints. In particular, once found, a QECC
is typically used in very diverse contexts, while its resilience against errors
is captured in a single figure of merit, the distance of the code. This does
not necessarily give rise to the most efficient protection possible given a
certain known error or a particular application for which the code is employed.
In this paper, we investigate the loss channel, which plays a key role in
quantum communication, and in particular in quantum key distribution over long
distances. We develop a numerical set of tools that allows one to optimize an
encoding specifically for recovering lost particles both deterministically and
probabilistically, where some knowledge about what was lost is available, and
demonstrate its capabilities. This allows us to arrive at new codes ideal for
the distribution of entangled states in this particular setting, and also to
investigate if encoding in qudits or allowing for non-deterministic correction
proves advantageous compared to known QECCs. While we here focus on the case of
losses, our methodology is applicable whenever the errors in a system can be
characterized by a known linear map.
|
In this communication we present together four distinct techniques for the
study of the electronic structure of solids: the tight-binding linear muffin-tin
orbitals (TB-LMTO), the real space and augmented space recursions, and the
modified exchange-correlation. Using these we investigate the effect of random
vacancies on the electronic properties of the carbon hexagonal allotrope,
graphene, and the non-hexagonal allotrope, planar T graphene. We have inserted
random vacancies at different concentrations, to simulate disorder in pristine
graphene and planar T graphene sheets. The resulting disorder, both on-site
(diagonal disorder) as well as in the hopping integrals (off-diagonal
disorder), introduces sharp peaks in the vicinity of the Dirac point built up
from localized states for both hexagonal and non-hexagonal structures. These
peaks become resonances with increasing vacancy concentration. We find that in
the presence of vacancies, graphene-like linear dispersion appears in planar T
graphene and the cross points form a loop in the first Brillouin zone similar
to buckled T graphene that originates from $\pi$ and $\pi$* bands without
regular hexagonal symmetry. We also calculate the single-particle relaxation
time $\tau(\vec{q})$ of the $\vec{q}$-labeled quantum electronic states, which
originates from scattering due to the presence of vacancies, causing quantum
level broadening.
|
In cuprate high-temperature superconductors the small coherence lengths and
high transition temperatures result in strong thermal fluctuations, which
render the superconducting transition in applied magnetic fields into a wide
continuous crossover. A state with zero resistance is found only below the
vortex melting transition, which occurs well below the onset of superconducting
correlations. Here we investigate the vortex phase diagram of a novel
Fe-based superconductor in the form of a high-quality single crystal of
Ba0.5K0.5Fe2As2, using three different experimental probes (specific heat,
thermal expansion and magnetization). We find clear thermodynamic signatures of
a vortex melting transition, which shows that the thermal fluctuations in
applied magnetic fields also have a considerable impact on the superconducting
properties of iron-based superconductors.
|
We present a novel method to estimate the motion matrix between overlapping
pairs of 3D views in the context of indoor scenes. We use the Manhattan world
assumption to introduce lightweight geometric constraints under the form of
planes into the problem, which reduces complexity by taking into account the
structure of the scene. In particular, we define a stochastic framework to
categorize planes as vertical or horizontal and parallel or non-parallel. We
leverage this classification to match pairs of planes in overlapping views with
point-of-view agnostic structural metrics. We propose to split the motion
computation using this classification, estimating the rotation and the
translation of the sensor separately with a quadric minimizer. We validate our approach
on a toy example and present quantitative experiments on a public RGB-D
dataset, comparing against recent state-of-the-art methods. Our evaluation
shows that planar constraints only add low computational overhead while
improving results in precision when applied after a prior coarse estimate. We
conclude by giving hints towards extensions and improvements of current
results.
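For intuition on the rotation step, the sketch below aligns matched plane
normals with a closed-form SVD solution (Kabsch/Wahba); this is a generic
stand-in chosen for illustration, not the quadric minimizer of the paper.

    import numpy as np

    def rotation_from_normals(N_src, N_dst):
        # Rows are matched unit plane normals; solve min_R ||R N_src - N_dst||_F.
        # At least two non-collinear normal pairs are needed for a full rotation.
        H = N_src.T @ N_dst
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T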
|
Few-shot segmentation (FSS) aims to segment a target class using a small
number of labeled images (the support set). To extract information relevant to the
target class, a dominant approach in best-performing FSS methods removes
background features using a support mask. We observe that this feature excision
through a limiting support mask introduces an information bottleneck in several
challenging FSS cases, e.g., for small targets and/or inaccurate target
boundaries. To address this, we present a novel method (MSI), which maximizes the
support-set information by exploiting two complementary sources of features to
generate super correlation maps. We validate the effectiveness of our approach
by instantiating it into three recent and strong FSS methods. Experimental
results on several publicly available FSS benchmarks show that our proposed
method consistently improves performance by visible margins and leads to faster
convergence. Our code and trained models are available at:
https://github.com/moonsh/MSI-Maximize-Support-Set-Information
|
We determine the number of statistically significant factors in a forecast
model using a random matrix test. The applied forecast model is of the
Reduced Rank Regression (RRR) type; in particular, we choose a flavor which can be
seen as Canonical Correlation Analysis (CCA). As empirical data, we use
cryptocurrencies at hourly frequency, where the variable selection was made by a
criterion from information theory. The results are consistent with the usual
visual inspection, with the advantage that the subjective element is avoided.
Furthermore, the computational cost is minimal compared to the cross-validation
approach.
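A permutation-based stand-in for the significance test is sketched below:
sample canonical correlations are compared against those obtained after
destroying the pairing between the two data blocks. The paper's actual
threshold comes from random matrix theory; the permutation null here is our
own simplification.

    import numpy as np

    def canonical_correlations(X, Y):
        # Singular values of Qx^T Qy, with Qx, Qy orthonormal bases of the
        # centered data blocks, are the sample canonical correlations.
        Qx, _ = np.linalg.qr(X - X.mean(0))
        Qy, _ = np.linalg.qr(Y - Y.mean(0))
        return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

    def n_factors(X, Y, n_perm=200, q=0.95, seed=0):
        # Count correlations exceeding the q-quantile of the permutation null.
        rng = np.random.default_rng(seed)
        c = canonical_correlations(X, Y)
        null = [canonical_correlations(X, Y[rng.permutation(len(Y))])[0]
                for _ in range(n_perm)]
        return int((c > np.quantile(null, q)).sum())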
|
A generalized dynamical robust nonlinear filtering framework is established
for a class of Lipschitz differential algebraic systems, in which the
nonlinearities appear both in the state and measured output equations. The
system is assumed to be affected by norm-bounded disturbance and to have both
norm-bounded uncertainties in the realization matrices as well as nonlinear
model uncertainties. We synthesize a robust $H_\infty$ filter through semidefinite
programming and strict linear matrix inequalities (LMIs). The admissible
Lipschitz constants of the nonlinear functions are maximized through LMI
optimization. The resulting $H_\infty$ filter guarantees asymptotic stability of
the estimation error dynamics with a prespecified disturbance attenuation level
and is robust against time-varying parametric uncertainties as well as
Lipschitz nonlinear additive uncertainty. An explicit bound on the tolerable
nonlinear uncertainty is derived based on a norm-wise robustness analysis.
|
We investigate the linear stability of scalarized black holes (BHs) and
neutron stars (NSs) in the Einstein-scalar-Gauss-Bonnet (GB) theories against
the odd- and even-parity perturbations including the higher multipole modes. We
show that the angular propagation speeds in the even-parity perturbations in
the $\ell \to \infty$ limit, with $\ell$ being the angular multipole moments,
become imaginary and hence scalarized BH solutions suffer from the gradient
instability. We show that such an instability appears irrespective of the
structure of the higher-order terms in the GB coupling function and is caused
purely due to the existence of the leading quadratic term and the boundary
condition that the value of the scalar field vanishes at spatial
infinity. This indicates that the gradient instability appears at the point in
the mass-charge diagram where the scalarized branches bifurcate from the
Schwarzschild branch. We also show that scalarized BH solutions realized in a
nonlinear scalarization model also suffer from the gradient instability in the
even-parity perturbations. Our result also suggests the gradient instability of
the exterior solutions of the static and spherically-symmetric scalarized NS
solutions induced by the same GB coupling functions.
|
Transmission optical coherence tomography (OCT) enables analysis of
biological specimens in vitro through detection of forward scattered light. Until
now, transmission OCT has been considered a technique that cannot directly
retrieve quantitative phase and is thus a qualitative method. In this paper, we
present qtOCT, a novel quantitative transmission optical coherence tomography
method. Unlike existing approaches, qtOCT allows for a direct, easy, fast and
rigorous retrieval of 2D integrated phase information from transmission
full-field swept-source OCT measurements. Our method is based on coherence
gating and allows user-defined temporal measurement range selection, making it
potentially suitable for analyzing multiple-scattering samples. We demonstrate
high consistency between qtOCT and digital holographic microscopy phase images.
This approach enhances transmission OCT capabilities, positioning it as a
viable alternative to quantitative phase imaging techniques.
|
In the present work, the authors revisit a classical problem of crack
propagation in a lattice, investigating which steady-state crack propagation
regimes are admissible in an anisotropic lattice. It is found that for certain
values of the contrast in the elastic and strength properties of the lattice,
stationary crack propagation is impossible. The question of possible crack
propagation at low velocity is also addressed.
|
The $b$-value in earthquake magnitude-frequency distribution quantifies the
relative frequency of large versus small earthquakes. Monitoring its evolution
could provide fundamental insights into temporal variations of stress on
different fault patches. However, genuine $b$-value changes are often difficult
to distinguish from artificial ones induced by temporal variations of the
detection threshold. A highly innovative and effective solution to this issue
has recently been proposed by van der Elst (2021) through the b-positive
method, which is based on analyzing only the positive differences in magnitude
between successive earthquakes. Here, we provide support to the robustness of
the method, largely unaffected by detection issues due to the properties of
conditional probability. However, we show that the b-positive method becomes
less efficient when earthquakes below the threshold are reported, leading to
the paradoxical behavior that it is more efficient when the catalog is more
incomplete. Thus, we propose the b-more-incomplete method, in which the b-positive
method is applied only after artificially filtering the instrumental catalog to be
more incomplete. We also present other modifications of the b-positive method, such as
the b-more-positive method, and demonstrate when these approaches can be
efficient in managing the time-independent incompleteness present when the seismic
network is sparse. We provide analytical and numerical results and apply the
methods to fore-mainshock sequences investigated by van der Elst (2021) for
validation. The results support the observed small changes in $b$-value as
genuine foreshock features.
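A minimal sketch of the b-positive estimator follows; the Laplace-distribution
property of successive magnitude differences reduces it to an Aki-style
maximum-likelihood fit of the positive branch. The difference threshold dmc and
the half-bin correction for 0.1-binned magnitudes are our illustrative choices,
not prescriptions from the papers.

    import numpy as np

    def b_positive(mags, dmc=0.2, bin_width=0.1):
        # Successive differences dM = M[i+1] - M[i] follow a Laplace law with
        # the same b as the Gutenberg-Richter distribution, so the positive
        # branch above dmc is exponential and admits an Aki-type estimator.
        dm = np.diff(mags)
        dm = dm[dm >= dmc]
        return 1.0 / (np.log(10.0) * (dm.mean() - (dmc - bin_width / 2.0)))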
|
The calculation of the MP2 correlation energy for extended systems can be
viewed as a multi-dimensional integral in the thermodynamic limit, and the
standard method for evaluating the MP2 energy can be viewed as a trapezoidal
quadrature scheme. We demonstrate that existing analysis neglects certain
contributions due to the non-smoothness of the integrand, and may significantly
underestimate finite-size errors. We propose a new staggered mesh method, which
uses two staggered Monkhorst-Pack meshes for occupied and virtual orbitals,
respectively, to compute the MP2 energy. The staggered mesh method circumvents
a significant error source in the standard method, in which certain quadrature
nodes are always placed on points where the integrand is discontinuous. One
significant advantage of the proposed method is that there are no tunable
parameters, and the additional numerical effort needed can be negligible
compared to the standard MP2 calculation. Numerical results indicate that the
staggered mesh method can be particularly advantageous for quasi-1D systems, as
well as quasi-2D and 3D systems with certain symmetries.
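Schematically (in our own notation, normalization omitted, with $\mathbf{k}_b$
fixed by momentum conservation), the standard scheme evaluates the
band-structure integral on a single Monkhorst-Pack mesh $\mathcal{K}$ for both
occupied and virtual momenta, while the staggered scheme shifts the virtual
mesh by half a grid spacing $h$:

    $E_{\mathrm{MP2}} \approx \sum_{\mathbf{k}_i,\mathbf{k}_j \in \mathcal{K}} \sum_{\mathbf{k}_a \in \mathcal{K}'} f(\mathbf{k}_i,\mathbf{k}_j,\mathbf{k}_a), \qquad \mathcal{K}' = \mathcal{K} \ \text{(standard)} \quad \text{or} \quad \mathcal{K} + h/2 \ \text{(staggered)}.$

The shift keeps the quadrature nodes away from the points where the integrand
$f$ is non-smooth, which is the error source identified above.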
|
We study low-density axisymmetric accretion flows onto black holes (BHs) with
two-dimensional hydrodynamical simulations, adopting the $\alpha$-viscosity
prescription. When the gas angular momentum is low enough to form a
rotationally supported disk within the Bondi radius ($R_{\rm B}$), we find a
global steady accretion solution. The solution consists of a rotational
equilibrium distribution at $r\sim R_{\rm B}$, where the density follows $\rho
\propto (1+R_{\rm B}/r)^{3/2}$, surrounding a geometrically thick and optically
thin accretion disk at the centrifugal radius, where thermal energy generated
by viscosity is transported via strong convection. Physical properties of the
inner solution agree with those expected in convection-dominated accretion
flows (CDAF; $\rho \propto r^{-1/2}$). In the inner CDAF solution, the gas
inflow rate decreases towards the center due to convection ($\dot{M}\propto
r$), and the net accretion rate (including both inflows and outflows) is
strongly suppressed by several orders of magnitude from the Bondi accretion
rate $\dot{M}_{\rm B}$. The net accretion rate depends on the viscous strength,
following $\dot{M}/\dot{M}_{\rm B}\propto (\alpha/0.01)^{0.6}$. This solution
holds for low accretion rates of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}< 10^{-3}$,
for which radiative cooling is minimal, where $\dot{M}_{\rm Edd}$ is the Eddington
rate. In the hot plasma at the bottom ($r<10^{-3}~R_{\rm B}$), thermal conduction
would dominate the convective energy flux. Since suppression of the accretion
by convection ceases, the final BH feeding rate is found to be
$\dot{M}/\dot{M}_{\rm B} \sim 10^{-3}-10^{-2}$. This rate is as low as
$\dot{M}/\dot{M}_{\rm Edd} \sim 10^{-7}-10^{-6}$ inferred for SgrA$^*$ and the
nuclear BHs in M31 and M87, and can explain the low luminosities in these
sources, without invoking any feedback mechanism.
|
Claims that the standard methodology of scientific testing is inapplicable to
Everettian quantum theory, and hence that the theory is untestable, are due to
misconceptions about probability and about the logic of experimental testing.
Refuting those claims by correcting those misconceptions leads to various
simplifications, notably the elimination of everything probabilistic from
fundamental physics (stochastic processes) and from the methodology of testing
('Bayesian' credences).
|
Detailed information on the fission process can be inferred from the
observation, modeling and theoretical understanding of prompt fission neutron
and $\gamma$-ray observables. Beyond simple average quantities, the study of
distributions and correlations in prompt data, e.g., multiplicity-dependent
neutron and $\gamma$-ray spectra, angular distributions of the emitted particles,
$n$-$n$, $n$-$\gamma$, and $\gamma$-$\gamma$ correlations, can place stringent
constraints on fission models and parameters that would otherwise be free to be
tuned separately to represent individual fission observables. The FREYA and
CGMF codes have been developed to follow the sequential emissions of prompt
neutrons and $\gamma$-rays from the initial excited fission fragments produced
right after scission. Both codes implement Monte Carlo techniques to sample
initial fission fragment configurations in mass, charge and kinetic energy and
sample probabilities of neutron and $\gamma$ emission at each stage of the
decay. This approach naturally leads to using simple but powerful statistical
techniques to infer distributions and correlations among many observables and
model parameters. The comparison of model calculations with experimental data
provides a rich arena for testing various nuclear physics models such as those
related to the nuclear structure and level densities of neutron-rich nuclei,
the $\gamma$-ray strength functions of dipole and quadrupole transitions,
mechanism for dividing the excitation energy between the two nascent fragments
near scission, and the mechanisms behind the production of angular momentum in
the fragments, etc. Beyond the obvious interest from a fundamental physics
point of view, such studies are also important for addressing data needs in
various nuclear applications. (See text for full abstract.)
|
In this work, we prove rigorous error estimates for a hybrid method
introduced in [15] for solving the time-dependent radiation transport equation
(RTE). The method relies on a splitting of the kinetic distribution function
for the radiation into uncollided and collided components. A high-resolution
method (in angle) is used to approximate the uncollided components and a
low-resolution method is used to approximate the collided component. After
each time step, the kinetic distribution is reinitialized to be entirely
uncollided. For this analysis, we consider a mono-energetic problem on a
periodic domain, with constant material cross-sections of arbitrary size. To
focus the analysis, we assume the uncollided equation is solved exactly and the
collided part is approximated in angle via a spherical harmonic expansion
($\text{P}_N$ method). Using a non-standard set of semi-norms, we obtain
estimates of the form $C(\varepsilon,\sigma,\Delta t)N^{-s}$ where $s\geq 1$
denotes the regularity of the solution in angle, $\varepsilon$ and $\sigma$ are
scattering parameters, $\Delta t$ is the time-step before reinitialization, and
$C$ is a complicated function of $\varepsilon$, $\sigma$, and $\Delta t$. These
estimates involve analysis of the multiscale RTE that includes, but necessarily
goes beyond, the usual spectral analysis. We also compute error estimates for the
monolithic $\text{P}_N$ method with the same resolution as the collided part in
the hybrid. Our results highlight the benefits of the hybrid approach over the
monolithic discretization in both highly scattering and streaming regimes.
|
A manuscript identified bat sarbecoviruses with high sequence homology to
SARS-CoV-2 found in caves in Laos that can directly infect human cells via the
human ACE2 receptor (Coronaviruses with a SARS-CoV-2-like receptor binding
domain allowing ACE2-mediated entry into human cells isolated from bats of
Indochinese peninsula, Temmam S., et al.). Here, I examine the genomic sequence
of one of these viruses, BANAL-236, and show it has 5'-UTR and 3'-UTR secondary
structures that are non-canonical and, in fact, have never been seen in an
infective coronavirus. Specifically, the 5'-UTR has a 177 nt copy-back extension
which forms an extended, highly stable duplex RNA structure. Because of this
copy-back, the four obligate Stem Loops (SL) -1, -2, -3, and -4 cis-acting
elements found in all currently known replicating coronaviruses are buried in
the extended duplex. The 3'-UTR has a similar fold-back duplex of 144 nt and is
missing the obligate poly-A tail. Taken together, these findings demonstrate
BANAL-236 is missing eight obligate UTR cis-acting elements; each one of which
has previously been lethal to replication when modified individually. Neither
duplex copyback has ever been observed in an infective sarbecovirus, although
some of the features have been seen in defective interfering particles, which
can be found in co-infections with non-defective, replicating viruses. They are
also a common error seen during synthetic genome assembly in a laboratory.
BANAL-236 must have evolved an entirely unique mechanism for replication, RNA
translation, and RNA packaging never seen in a coronavirus and because it is a
bat sarbecovirus closely related to SARS-CoV-2, it is imperative that we
understand its unique mode of infectivity by a collaborative, international
research effort.
|
We ask the following question: what are the relative contributions of the
ensemble mean and the ensemble standard deviation to the skill of a
site-specific probabilistic temperature forecast? Is it the case that most of
the benefit of using an ensemble forecast to predict temperatures comes from
the ensemble mean, or from the ensemble spread, or is the benefit derived
equally from the two? The answer is that one of the two is much more useful
than the other.
|
We discuss the effect that small fluctuations of local anisotropy of
pressure, and energy density, may have on the occurrence of cracking in
spherical compact objects satisfying a polytropic equation of state. Two
different kinds of polytropes are considered. For both, it is shown that
departures from equilibrium may lead to the appearance of cracking, for a wide
range of values of the parameters defining the polytrope. Prospective
applications of the obtained results, to some astrophysical scenarios, are
pointed out.
|
We present a simple, modular graph-based convolutional neural network that
takes structural information from protein-ligand complexes as input to generate
models for activity and binding mode prediction. Complex structures are
generated by a standard docking procedure and fed into a dual-graph
architecture that includes separate sub-networks for the ligand bonded topology
and the ligand-protein contact map. This network division allows contributions
from ligand identity to be distinguished from effects of protein-ligand
interactions on classification. We show, in agreement with recent literature,
that dataset bias drives many of the promising results on virtual screening
that have previously been reported. However, we also show that our neural
network is capable of learning from protein structural information when, as in
the case of binding mode prediction, an unbiased dataset is constructed. We
develop a deep learning model for binding mode prediction that uses docking
ranking as input in combination with docking structures. This strategy mirrors
past consensus models and outperforms the baseline docking program in a variety
of tests, including on cross-docking datasets that mimic real-world docking use
cases. Furthermore, the magnitudes of network predictions serve as reliable
measures of model confidence.
|
Preceding the complete suppression of chemical turbulence by means of global
feedback, a different universal type of transition, characterized by
the emergence of a small-amplitude collective oscillation on top of a strong
turbulent background, is shown to occur at much weaker feedback intensity. We illustrate
this fact numerically in combination with a phenomenological argument based on
the complex Ginzburg-Landau equation with global feedback.
|
We propose a novel architecture for the problem of video super-resolution.
We integrate spatial and temporal contexts from continuous video frames using a
recurrent encoder-decoder module that fuses multi-frame information with a
more traditional, single-frame super-resolution path for the target frame. In
contrast to most prior work, where frames are pooled together by stacking or
warping, our model, the Recurrent Back-Projection Network (RBPN), treats each
context frame as a separate source of information. These sources are combined
in an iterative refinement framework inspired by the idea of back-projection in
multiple-image super-resolution. This is aided by explicitly representing
estimated inter-frame motion with respect to the target, rather than explicitly
aligning frames. We propose a new video super-resolution benchmark, allowing
evaluation at a larger scale and considering videos in different motion
regimes. Experimental results demonstrate that our RBPN is superior to existing
methods on several datasets.
|
Increasing the transactional throughput of decentralized blockchains in a
secure manner has been the holy grail of blockchain research for most of the
past decade. This paper introduces a scheme for scaling blockchains while
retaining virtually identical security and decentralization, colloquially known
as optimistic rollup. We propose a layer-2 scaling technique using a
permissionless side chain with merged consensus. The side chain only supports
functionality to transact UTXOs and transfer funds to and from a parent chain
in a trust-minimized manner. Optimized implementation and engineering of client
code, along with improvements to block propagation efficiency versus currently
deployed systems, allow use of this side chain to scale well beyond the
capacities exhibited by contemporary blockchains without undue resource demands
on full nodes.
|
Recent diarization technologies can be categorized into two approaches, i.e.,
clustering and end-to-end neural approaches, which have different pros and
cons. The clustering-based approaches assign speaker labels to speech regions
by clustering speaker embeddings such as x-vectors. While it can be seen as a
current state-of-the-art approach that works for various challenging data with
reasonable robustness and accuracy, it has a critical disadvantage that it
cannot handle overlapped speech that is inevitable in natural conversational
data. In contrast, the end-to-end neural diarization (EEND), which directly
predicts diarization labels using a neural network, was devised to handle the
overlapped speech. While the EEND, which can easily incorporate emerging
deep-learning technologies, has started outperforming the x-vector clustering
approach on some realistic databases, it is difficult to make it work for 'long'
recordings (e.g., recordings longer than 10 minutes) because of, e.g., its huge
memory consumption. Block-wise independent processing is also difficult because
it poses an inter-block label permutation problem, i.e., an ambiguity of the
speaker label assignments between blocks. In this paper, we propose a simple
but effective hybrid diarization framework that works with overlapped speech
and for long recordings containing an arbitrary number of speakers. It modifies
the conventional EEND framework to simultaneously output global speaker
embeddings so that speaker clustering can be performed across blocks to solve
the permutation problem. With experiments based on simulated noisy reverberant
2-speaker meeting-like data, we show that the proposed framework works
significantly better than the original EEND especially when the input data is
long.
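The cross-block stitching step can be pictured with a few lines of clustering:
global speaker embeddings emitted per block are clustered jointly, and cluster
indices replace the block-local labels. This sketch (using scikit-learn, our
choice) only illustrates the label-permutation repair, not the EEND network
itself.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def stitch_block_labels(block_embs, n_speakers):
        # block_embs: list of (speakers_in_block, dim) arrays of global
        # speaker embeddings. Joint clustering assigns consistent speaker
        # indices across blocks, resolving the inter-block permutation.
        flat = np.vstack(block_embs)
        labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(flat)
        out, i = [], 0
        for e in block_embs:
            out.append(labels[i:i + len(e)])
            i += len(e)
        return out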
|
We study bilateral trade with interdependent values as an informed-principal
problem. The mechanism-selection game has multiple equilibria that differ with
respect to the principal's payoff and the trading surplus. We characterize the
equilibrium that is worst for every type of principal, and characterize the
conditions under which there are no equilibria with different payoffs for the
principal. We also show that this is the unique equilibrium that survives the
intuitive criterion.
|
The 3rd data release of the Gaia mission includes orbital solutions for $>
10^5$ single-lined spectroscopic binaries, representing more than an order of
magnitude increase in sample size over all previous studies. This dataset is a
treasure trove for searches for quiescent black hole + normal star binaries. We
investigate one population of black hole candidate binaries highlighted in the
data release: sources near the main sequence in the color-magnitude diagram
(CMD) with dynamically-inferred companion masses $M_2$ larger than the
CMD-inferred mass of the luminous star. We model light curves, spectral energy
distributions, and archival spectra of the 14 such objects in DR3 with
high-significance orbital solutions and inferred $M_2 > 3\,M_{\odot}$. We find
that 100% of these sources are mass-transfer binaries containing a highly
stripped lower giant donor ($0.2 \lesssim M/M_{\odot} \lesssim 0.4$) and a much
more massive ($2 \lesssim M/M_{\odot} \lesssim 2.5$) main-sequence accretor.
The Gaia orbital solutions are for the donors, which contribute about half the
light in the Gaia RVS bandpass but only $\lesssim 20\%$ in the $g-$band. The
accretors' broad spectral features likely prevented the sources from being
classified as double-lined. The donors are all close to Roche lobe-filling
($R/R_{\rm Roche\,lobe}>0.8$), but modeling suggests that a majority are
detached ($R/R_{\rm Roche\,lobe}<1$). Binary evolution models predict that
these systems will soon become detached helium white dwarf + main sequence "EL
CVn" binaries. Our investigation highlights both the power of Gaia data for
selecting interesting sub-populations of binaries and the ways in which binary
evolution can bamboozle standard CMD-based stellar mass estimates.
|
Various factorization-based methods have been proposed to leverage
second-order, or higher-order cross features for boosting the performance of
predictive models. They generally enumerate all the cross features under a
predefined maximum order, and then identify useful feature interactions through
model training, which suffer from two drawbacks. First, they have to make a
trade-off between the expressiveness of higher-order cross features and the
computational cost, resulting in suboptimal predictions. Second, enumerating
all the cross features, including irrelevant ones, may introduce noisy feature
combinations that degrade model performance. In this work, we propose the
Adaptive Factorization Network (AFN), a new model that learns arbitrary-order
cross features adaptively from data. The core of AFN is a logarithmic
transformation layer to convert the power of each feature in a feature
combination into the coefficient to be learned. The experimental results on
four real datasets demonstrate the superior predictive performance of AFN
against the state-of-the-art.
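The core mechanism admits a compact sketch: in log space, a learnable weight on
each feature embedding becomes its exponent, so one logarithmic neuron
represents one cross feature of learned order. Names and the clipping constant
below are our own; this is not the authors' implementation.

    import numpy as np

    def log_neuron(embeddings, w, eps=1e-7):
        # embeddings: (m, d) positive feature embeddings; w: (m,) learnable.
        # exp(sum_j w_j * log(e_j)) == prod_j e_j ** w_j, so w_j is the
        # (fractional, learnable) order of feature j in this cross term.
        logs = np.log(np.clip(embeddings, eps, None))
        return np.exp(w @ logs)

    e = np.abs(np.random.default_rng(0).normal(size=(3, 8)))
    print(np.allclose(log_neuron(e, np.array([1.0, 2.0, 0.5])),
                      e[0] * e[1] ** 2 * np.sqrt(e[2])))  # -> True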
|
Treating the metric as a classical background field, we show that the
cosmological constant does not run with the renormalization scale -- contrary
to some claims in the literature.
|
As hosts of living high-mass stars, Wolf-Rayet (WR) regions or WR galaxies
are ideal objects for constraining the high-mass end of the stellar initial
mass function (IMF). We construct a large sample of 910 WR galaxies/regions
that cover a wide range of stellar metallicity (from Z~0.001 up to Z~0.03), by
combining three catalogs of WR galaxies/regions previously selected from the
SDSS and SDSS-IV/MaNGA surveys. We measure the equivalent widths of the WR blue
bump at ~4650 A for each spectrum. They are compared with predictions from
stellar evolutionary models Starburst99 and BPASS, with different IMF
assumptions (high-mass slope $\alpha$ of the IMF ranging from 1.0 up to 3.3).
Both single-star evolution and binary evolution are considered. We also use a
Bayesian inference code to perform full spectral fitting to WR spectra with
stellar population spectra from BPASS as fitting templates. We then make model
selection among different $\alpha$ assumptions based on Bayesian evidence.
These analyses have consistently led to a positive correlation of the IMF high-mass
slope $\alpha$ with stellar metallicity Z, i.e. a steeper (more bottom-heavy)
IMF at higher metallicities. Specifically, an IMF with $\alpha$=1.00
is preferred at the lowest metallicity (Z~0.001), and a Salpeter or even
steeper IMF is preferred at the highest metallicity (Z~0.03). These conclusions
hold even when binary population models are adopted.
|
In this study, we overview the problems associated with the usability of
cryptocurrency wallets, such as those used by ZCash, for end-users. The concept
of "holistic privacy," where information leaks in one part of a system can
violate the privacy expectations of different parts of the system, is
introduced as a requirement. To test this requirement with real-world software,
we conducted a 60-person task-based evaluation of the usability of a ZCash
cryptocurrency wallet by having users install and try to both send and receive
anonymized ZCash transactions, as well as install a VPN and Tor. While the
initial wallet installation was difficult, we found even greater difficulty in
integrating the ZCash wallet with network-level protections like VPNs
or Tor, so only a quarter of users could complete a real-world purchase using
the wallet.
|
Let V be an n-dimensional vector space over a finite field F_q. We consider
on V the $\pi$-metric recently introduced by K. Feng, L. Xu and F. J.
Hickernell. In this short note we give a complete description of the group of
symmetries of V under the $\pi$-metric.
|
We show that the vacuum state functional for both open and closed string
field theories can be constructed from the vacuum expectation values it must
generate. The method also applies to quantum field theory and as an application
we give a diagrammatic description of the equivalence between the Schrödinger
and covariant representations of field theory.
|
The smallest known example of a family of modular categories that is not
determined by its modular data is the rank 49 categories
$\mathcal{Z}(\text{Vec}_G^{\omega})$ for $G=\mathbb{Z}_{11} \rtimes
\mathbb{Z}_{5}$. However, these categories can be distinguished with the
addition of a matrix of invariants called the $W$-matrix that contains
intrinsic information about punctured $S$-matrices. Here we show that it is a
common occurrence for knot and link invariants to carry more information than
the modular data. We present the results of a systematic investigation of the
invariants for small knots and links. We find many small knots and links whose
invariants completely distinguish the categories $\mathcal{Z}(\text{Vec}_G^{\omega})$
for $G=\mathbb{Z}_{11} \rtimes \mathbb{Z}_{5}$, including the $5_2$ knot.
|
The nearest-neighbor rule is a well-known classification technique that,
given a training set P of labeled points, classifies any unlabeled query point
with the label of its closest point in P. The nearest-neighbor condensation
problem aims to reduce the training set without harming the accuracy of the
nearest-neighbor rule.
FCNN is the most popular algorithm for condensation. It is heuristic in
nature, and theoretical results for it are scarce. In this paper, we settle the
question of whether reasonable upper-bounds can be proven for the size of the
subset selected by FCNN. First, we show that the algorithm can behave poorly
when points are too close to each other, forcing it to select many more points
than necessary. We then successfully modify the algorithm to avoid such cases,
thus imposing that selected points should "keep some distance". This
modification is sufficient to prove useful upper-bounds, along with
approximation guarantees for the algorithm.
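For reference, a compact sketch of the baseline FCNN iteration (without the
distance-based modification introduced above) is given below, assuming
Euclidean distances; variable names are ours.

    import numpy as np

    def fcnn(X, y):
        # Seed with the point nearest to each class centroid, then repeatedly
        # add, for each selected point, the nearest misclassified point in
        # its Voronoi cell, until the subset classifies all of X correctly.
        S = []
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            ctr = X[idx].mean(axis=0)
            S.append(idx[np.argmin(np.linalg.norm(X[idx] - ctr, axis=1))])
        while True:
            D = np.linalg.norm(X[:, None, :] - X[S][None, :, :], axis=2)
            nn = D.argmin(axis=1)          # nearest prototype for each point
            added = []
            for j, s in enumerate(S):
                cell = np.where((nn == j) & (y != y[s]))[0]
                if cell.size:
                    added.append(cell[D[cell, j].argmin()])
            if not added:
                return np.array(S)
            S.extend(added)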
|
In this chapter, we present a brief and non-exhaustive review of the
developments of theoretical models for accretion flows around neutron stars. A
somewhat chronological summary of crucial observations and modelling of timing
and spectral properties is given in sections 2 and 3. In section 4, we argue
why and how the Two-Component Advective Flow (TCAF) solution can be applied to
the cases of neutron stars when suitable modifications are made for the NSs. We
showcase some of our findings from Monte Carlo and Smoothed Particle
Hydrodynamics simulations, which further strengthen the points raised in section
4. In summary, we remark on the possibility of future work using TCAF for both
weakly magnetic and magnetic neutron stars.
|
In this paper, we present a method to project co-authorship networks that
accounts in detail for the geometrical structure of scientists' collaborations.
By restricting the scope to 3-body interactions, we focus on the number of
triangles in the system, and show the importance of multi-scientist (more than
2) collaborations in the social network. This motivates the introduction of
generalized networks, where basic connections are not binary, but involve an
arbitrary number of components. We focus on the 3-body case, and study
numerically the percolation transition.
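Counting triangles, the basic quantity here, is a one-liner on a toy graph
(using networkx, our choice of library):

    import networkx as nx

    # Toy co-authorship graph; an edge marks at least one joint paper.
    G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
    # nx.triangles counts per node, so each triangle is seen three times.
    print(sum(nx.triangles(G).values()) // 3)  # -> 1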
|
Aims. Historical records provide evidence of extreme magnetic storms with
equatorward auroral extensions before the epoch of systematic magnetic
observations. One significant magnetic storm occurred on February 15, 1730. We
scale this magnetic storm with auroral extension and contextualise it based on
contemporary solar activity. Methods. We examined historical records in East
Asia and computed the magnetic latitude (MLAT) of observational sites to scale
magnetic storms. We also compared them with auroral records in Southern Europe.
We examined contemporary sunspot observations to reconstruct detailed solar
activity between 1729 and 1731. Results. We show 29 auroral records in East
Asian historical documents and 37 sunspot observations. Conclusions. These
records show that the auroral displays were visible at least down to 25.8°
MLAT throughout East Asia. In comparison with contemporary European records, we
show that the boundary of the auroral display closest to the equator surpassed
45.1° MLAT and possibly came down to 31.5° MLAT in its maximum phase,
with considerable brightness. Contemporary sunspot records show an active phase
in the first half of 1730 during the declining phase of the solar cycle. This
magnetic storm was at least as intense as the magnetic storm in 1989, but less
intense than the Carrington event.
|
We introduce and compare new compression approaches to obtain regularized
solutions of large linear systems which are commonly encountered in large scale
inverse problems. We first describe how to approximate matrix vector operations
with a large matrix through a sparser matrix with fewer nonzero elements, by
borrowing from ideas used in wavelet image compression. Next, we describe and
compare approaches based on the use of the low rank SVD, which can result in
further size reductions. We describe how to obtain the approximate low rank SVD
of the original matrix using the sparser wavelet compressed matrix. Some
analytical results concerning the various methods are presented and the results
of the proposed techniques are illustrated using both synthetic data and a very
large linear system from a seismic tomography application, where we obtain
significant compression gains with our methods, while still resolving the main
features of the solutions.
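A toy version of the wavelet-compression idea: with an orthogonal DWT matrix W,
one can threshold B = W A W^T to a sparse matrix B_s and approximate A x by
W^T (B_s (W x)). The wavelet choice, level, and quantile threshold below are
our illustrative assumptions, not the paper's settings.

    import numpy as np
    import pywt
    from scipy.sparse import csr_matrix

    def dwt_matrix(n, wavelet="db4", level=3):
        # Orthogonal 1D DWT as an explicit n x n matrix (demo only;
        # requires n divisible by 2**level with 'periodization' mode).
        W = np.empty((n, n))
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0
            W[:, i] = np.concatenate(
                pywt.wavedec(e, wavelet, level=level, mode="periodization"))
        return W

    def compressed_matvec(A, x, keep=0.05):
        # Sparsify A in the wavelet domain, then apply it to x via transforms.
        W = dwt_matrix(A.shape[0])
        B = W @ A @ W.T
        B[np.abs(B) < np.quantile(np.abs(B), 1 - keep)] = 0.0
        return W.T @ (csr_matrix(B) @ (W @ x))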
|