We have investigated the effect of Ti doping on the transport properties
coupled with the magnetic ones in
Sm$_{0.55}$Sr$_{0.45}$Mn$_{1-\eta}$Ti$_{\eta}$O$_3$ ($0 \leq \eta \leq 0.04$).
The parent compound, Sm$_{0.55}$Sr$_{0.45}$MnO$_3$, exhibits a first-order
paramagnetic-insulator to ferromagnetic-metal transition just below $T_{\rm c}$
= 128 K. With substitution of Ti at Mn sites ($B$-site), $T_{\rm c}$ decreases
approximately linearly at the rate of 22 K$\%^{-1}$ while the width of thermal
hysteresis in magnetization and resistivity increases almost in an exponential
fashion. The most spectacular effect has been observed for the composition
$\eta = 0.03$, where a magnetic field of only 1 T yields a huge
magnetoresistance, $1.2 \times 10^7\,\%$, at $T_{\rm c} \approx 63$ K. With increasing
magnetic field, the transition shifts towards higher temperature, and the
first-order nature of the transition gets weakened and eventually becomes
crossover above a critical field ($H_{cr}$) which increases with Ti doping. For
Ti doping above 0.03, the system remains insulating without any ferromagnetic
ordering down to 2 K. The Monte-Carlo calculations based on a two-band double
exchange model show that the decrease of $T_{\rm c}$ with Ti doping is
associated with the increase of the lattice distortions around the doped Ti
ions.
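As a quick sanity check on the quoted figure, the magnetoresistance convention common for CMR manganites, MR = [ρ(0) − ρ(H)]/ρ(H) × 100% (an assumption here, since the abstract does not spell out its definition), links the $1.2 \times 10^7\,\%$ value to a roughly five-orders-of-magnitude drop in resistivity at 1 T:

```python
# Colossal magnetoresistance as commonly defined for CMR manganites
# (assumed convention): MR(%) = (rho_0 - rho_H) / rho_H * 100, where
# rho_0 is the zero-field resistivity and rho_H the in-field one.
def mr_percent(rho_zero_field, rho_in_field):
    return (rho_zero_field - rho_in_field) / rho_in_field * 100.0

# An MR of 1.2e7 % corresponds to the resistivity dropping by about
# five orders of magnitude in field: rho_0 / rho_H ~ 1.2e5.
mr = mr_percent(1.2e5, 1.0)
```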
|
In these expository notes, we describe some features of the multiplicative
coalescent and its connection with random graphs and minimum spanning trees. We
use Pitman's proof of Cayley's formula, which proceeds via a calculation of the
partition function of the additive coalescent, as motivation and as a
launchpad. We define a random variable which may reasonably be called the
empirical partition function of the multiplicative coalescent, and show that
its typical value is exponentially smaller than its expected value. Our
arguments lead us to an analysis of the susceptibility of the Erd\H{o}s-R\'enyi
random graph process, and thence to a novel proof of Frieze's $\zeta(3)$-limit
theorem for the weight of a random minimum spanning tree.
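The susceptibility of the Erd\H{o}s-R\'enyi random graph process mentioned above (taken here, as is common, to be the mean squared component size per vertex) can be tracked efficiently with a union-find structure as edges arrive; this sketch is illustrative, not taken from the paper:

```python
import random

class DSU:
    """Union-find with component sizes, tracking sum of squared sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.sum_sq = n  # sum of squared component sizes (all singletons)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        # (s_a + s_b)^2 = s_a^2 + s_b^2 + 2 s_a s_b
        self.sum_sq += 2 * self.size[ra] * self.size[rb]
        self.size[ra] += self.size[rb]
        self.parent[rb] = ra

def susceptibility_trace(n, n_edges, seed=0):
    """Susceptibility (1/n) * sum of squared component sizes, recorded as
    random edges are added one by one (the Erdos-Renyi graph process)."""
    rng = random.Random(seed)
    dsu = DSU(n)
    trace = [dsu.sum_sq / n]
    for _ in range(n_edges):
        dsu.union(rng.randrange(n), rng.randrange(n))
        trace.append(dsu.sum_sq / n)
    return trace
```

The susceptibility blows up near the critical edge density, which is what makes it a useful probe of the emerging giant component.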
|
We propose an information transmission scheme by a swarm of anonymous
oblivious mobile robots on a graph. The swarm of robots travels from a sender
vertex to a receiver vertex to transmit a symbol generated at the sender. The
codeword for a symbol is a pair of an initial configuration at the sender and a
set of terminal configurations at the receiver. The set of such codewords forms
a code. We analyze the performance of the proposed scheme in terms of its code
size and transmission delay. We first demonstrate that a lower bound on the
transmission delay depends on the size of the swarm, and that the code size is
upper bounded by an exponential in the size of the swarm. We then give two algorithms
for a swarm of a fixed size. The first algorithm realizes a near optimal code
size with a large transmission delay. The second algorithm realizes an optimal
transmission delay with a smaller code size. We then consider information
transmission by swarms of different sizes and present upper bounds of the
expected swarm size under the two algorithms. We also present lower bounds
derived from Shannon's lemma and the noiseless coding theorem.
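The Shannon lower bound invoked above can be illustrated with a short sketch (not from the paper): the entropy of the source symbol distribution is a floor on the expected codeword length of any uniquely decodable binary code.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H(X) in bits: by the noiseless coding theorem,
    a lower bound on the expected codeword length of any uniquely
    decodable binary code for the source X."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For a source emitting symbols with probabilities 1/2, 1/4, 1/4,
# no code can beat an average of 1.5 bits per symbol.
h = entropy_bits([0.5, 0.25, 0.25])  # = 1.5
```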
|
The problem of optimising a network of discretely firing neurons is
addressed. An objective function is introduced which measures the average
number of bits that are needed for the network to encode its state. Minimising
this objective is shown to lead to a number of results, such as topographic
mappings, a piecewise linear dependence of a neuron's firing probability on the
input, and factorial encoder networks.
|
Tucker decomposition is proposed to reduce the memory requirement of the
far-fields in the fast multipole method (FMM)-accelerated surface integral
equation simulators. It is particularly used to compress the far-fields of FMM
groups, which are stored in three-dimensional (3-D) arrays (or tensors). The
compressed tensors are then used to perform fast tensor-vector multiplications
during the aggregation and disaggregation stages of the FMM. For many practical
scenarios, the proposed Tucker decomposition yields a significant reduction in
the far-fields' memory requirement while dramatically accelerating the
aggregation and disaggregation stages. For the electromagnetic scattering
analysis of a 30{\lambda}-diameter sphere, it reduces the memory requirement of
the far-fields by more than 87% while expediting the aggregation and
disaggregation stages by factors of 15.8 and 15.2, respectively, where
{\lambda} is the wavelength in free space.
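A minimal sketch of the kind of Tucker compression described above, implemented via the truncated higher-order SVD (the abstract does not specify this particular algorithm, so treat it as one standard way to compute a Tucker decomposition): a 3-D far-field tensor of low multilinear rank is stored as a small core plus one factor matrix per mode, e.g. a 32x32x32 array at rank (8,8,8) needs 8^3 + 3*32*8 = 1280 numbers instead of 32768, a ~96% reduction.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: returns a small core tensor and one
    factor matrix per mode, so that T ~ core x_0 U0 x_1 U1 x_2 U2."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)  # project onto the factor bases
    return core, factors

def reconstruct(core, factors):
    """Expand the compressed (core, factors) representation back to a
    full tensor -- the analogue of using it in aggregation stages."""
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T
```

In the FMM setting, the tensor-vector products needed during aggregation can then be applied factor by factor on the compressed form, which is where the reported speedups come from.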
|
We study the decay rate of the process B->K l+ l- (l = e, mu) and some of its
related observables, such as the forward-backward asymmetry (A_{FB}),
polarization asymmetry (PA), and CP asymmetry (A_{CP}), in the R-parity
violating (R_{p}) Minimal Supersymmetric Standard Model (MSSM). The analysis
shows that R_{p} Yukawa coupling products contribute significantly to the
branching fraction of B->K l+ l- within 1 sigma and 2 sigma. The study shows
that PA and A_{FB} are sensitive to the R_{p} Yukawa coupling products and turn
out to be good predictions for measurement in future experiments. The CP
asymmetry calculated in this framework agrees well with the recently reported
value (i.e. 7%).
|
We show that under very general assumptions the partial Bergman kernel
function of sections vanishing along an analytic hypersurface has exponential
decay in a neighborhood of the vanishing locus. Considering an ample line
bundle, we obtain a uniform estimate of the Bergman kernel function associated
to a singular metric along the hypersurface. Finally, we study the asymptotics
of the partial Bergman kernel function on a given compact set and near the
vanishing locus.
|
The pinning effect of the periodic diameter modulations on the domain wall
propagation in FeCoCu individual nanowires is determined by Magnetic Force
Microscopy, MFM. A main bistable magnetic configuration is firstly concluded
from MFM images characterized by the spin reversal between two nearly single
domain states with opposite axial magnetization. Complementary micromagnetic
simulations confirm a vortex mediated magnetization reversal process. A refined
MFM imaging procedure under variable applied field allows us to observe
metastable magnetic states where the propagating domain wall is pinned at
certain positions with enlarged diameter. Moreover, it is demonstrated that in
some atypical nanowires with higher coercive field it is possible to control
the position of the pinned domain walls by an external magnetic field.
|
We obtain the hadronic mass spectrum in the `bag of bags' statistical
bootstrap model (BBSBM), implementing the colorless-state condition, aside
from baryon and strangeness conservation, using the group projection method. We study
the partition function, investigate the properties of dense hadronic matter,
and determine the conditions under which the system undergoes a phase
transition to a deconfined quark-gluon plasma. We show that a phase transition
cannot occur in the N=1 (Abelian) limit of our model, and is first order for
the QCD-like case N=3.
|
In this challenge, we disentangle the deep filters from the original
DeepfilterNet and incorporate them into our Spec-UNet-based network to further
improve a hybrid Demucs (hdemucs) based remixing pipeline. The motivation
behind the use of the deep filter component lies in its potential for better
handling temporal fine structures. We demonstrate an incremental improvement in
both the Signal-to-Distortion Ratio (SDR) and the Hearing Aid Audio Quality
Index (HAAQI) metrics when comparing the performance of hdemucs against
different versions of our model.
|
The first algorithm for sampling the space of thick equilateral knots, as a
function of thickness, will be described. This algorithm is based on previous
algorithms that apply random reflections.
To prove the existence of the algorithm, we describe a method for turning any
knot into the regular planar polygon using only thickness non-decreasing moves.
This approach ensures that the algorithm has a positive probability of
connecting any two knots with the required thickness constraint and so is
ergodic. This ergodic sampling unlocks the ability to analyze the effects of
thickness on properties of the geometric knot such as radius of gyration.
This algorithm will be shown to be faster than previous methods for
generating thick knots, and the data from this algorithm shows that the radius
of gyration increases strongly with thickness and that the growth exponent for
radius of gyration increases with thickness.
|
We have measured the critical current dependence on the magnetic flux of two
long SNS junctions differing by the normal wire geometry. The samples are made
of a Au wire connected to W contacts via Focused Ion Beam assisted deposition.
We could tune the magnetic pattern from the monotonic gaussian-like decay of a
quasi 1D normal wire to the Fraunhofer-like pattern of a square normal wire. We
explain the monotonic limit with a semiclassical 1D model, and we fit both
field dependences with numerical simulations of the 2D Usadel equation.
Furthermore, we observe both integer and fractional Shapiro steps. The magnetic
flux dependence of the integer steps reproduces as expected that of the
critical current Ic, while fractional steps decay slower with the flux than Ic.
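The two limiting field dependences mentioned above can be sketched with textbook closed forms (illustrative only; the paper fits the 2D Usadel equation rather than these expressions):

```python
import numpy as np

def ic_fraunhofer(flux, ic0=1.0):
    """Fraunhofer-like critical-current pattern of a wide (square)
    junction: Ic(Phi) = Ic0 |sin(pi Phi/Phi0) / (pi Phi/Phi0)|, with
    flux given in units of the flux quantum Phi0."""
    return ic0 * np.abs(np.sinc(flux))  # np.sinc(x) = sin(pi x)/(pi x)

def ic_narrow_wire(flux, ic0=1.0, sigma=1.0):
    """Monotonic Gaussian-like decay characteristic of a quasi-1D
    normal wire (illustrative form, not the paper's fit)."""
    return ic0 * np.exp(-(flux / sigma) ** 2)
```

The Fraunhofer form vanishes at integer flux quanta, while the quasi-1D form decays monotonically with no nodes, which is the qualitative distinction between the two wire geometries.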
|
Enhanced quantization offers a classical/quantum connection different from
that of canonical quantization, one in which $\hbar > 0$ throughout. This
result arises when the only Hilbert space vectors allowed in the quantum
action functional are coherent states, which leads to the classical action
functional augmented by additional terms of order $\hbar$. Canonical coherent
states are defined by unitary transformations of a fixed, fiducial vector.
While Gaussian vectors are commonly used as fiducial vectors, they cannot be
used for all systems. We focus on choosing fiducial vectors for several systems
including bosons, fermions, and anyons.
|
In this study, we examine the properties of donor stars of the three recently
discovered ultraluminous X-ray sources (ULXs) powered by rotating neutron
stars. For this purpose, we constructed a theoretical relation between the
X-ray luminosity ($L_{\rm{X}}$) and the orbital period ($P_{\rm{orb}}$)
suitable for ULXs with neutron stars. By using this new $L_{\rm{X}} -
P_{\rm{orb}}$ relation, we attempt to determine the currently unknown nature of
donor stars in ULXs associated with neutron stars. In particular, comparing the
observed properties with the stellar evolution tracks, we suggest that the
donor star in the NGC5907 ULX-1 system is a moderately massive star with $6 -
12 \rm{M}_{\odot}$, just departing from the main sequence phase. The results of
our models for the other two ULX systems (M82 X-2 and NGC7793 P-13) are
consistent with those in previous studies. Although there are only a few
samples, the observed ULX systems with neutron stars seem to involve
relatively massive donors.
|
The effect of "dark energy" (i.e. the Lambda-term in Einstein equations) is
sought at interplanetary scales by comparing the rates of secular
increase in the lunar orbit obtained in two different ways: (1) measured
directly by laser ranging and (2) estimated independently from the
deceleration of the Earth's proper rotation. The first quantity involves both
the well-known effect of geophysical tides and the Kottler effect of
Lambda-term (i.e. a kind of the "local" Hubble expansion), while the second
quantity is associated only with the tidal influence. The difference between
them, 2.2 +/- 0.3 cm/yr, can be attributed just to the local Hubble expansion
with rate H_0^(loc) = 56 +/- 8 km/s/Mpc. Assuming that Hubble expansion is
formed locally only by the uniformly distributed dark energy (Lambda-term),
while globally also by a clumped substance (for the most part, the cold dark
matter), the total (large-scale) Hubble constant should be H_0 = 65 +/- 9
km/s/Mpc. This is in reasonable agreement both with the commonly-accepted WMAP
result, H_0 = 71 +/- 3.5 km/s/Mpc, and with the data on supernovae Ia
distribution. The above coincidence can serve as one more argument in favor of
the dark energy.
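The quoted local Hubble rate follows from the back-of-envelope conversion H = v/d, with v = 2.2 cm/yr the anomalous lunar recession and d the mean Earth-Moon distance:

```python
# Converting the anomalous lunar recession into a local Hubble rate:
# H = v / d, with v = 2.2 cm/yr and d the mean Earth-Moon distance.
YEAR_S = 3.156e7        # seconds per year
MPC_KM = 3.086e19       # kilometres per megaparsec
d_moon_km = 3.844e5     # mean Earth-Moon distance in km

v_km_s = 2.2e-5 / YEAR_S               # 2.2 cm/yr expressed in km/s
H_loc = v_km_s / d_moon_km * MPC_KM    # local Hubble rate in km/s/Mpc
# H_loc comes out near 56 km/s/Mpc, matching the value quoted above.
```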
|
In 1990, Romero presented a beautiful formula for the projection onto the set
of rectangular matrices with prescribed row and column sums. Variants of
Romero's formula have been rediscovered by Khoury and by Glunt, Hayden, and
Reams, for bistochastic (square) matrices in 1998.
These results have found various generalizations and applications. In this
paper, we provide a formula for the more general problem of finding the
projection onto the set of rectangular matrices with prescribed scaled row and
column sums. Our approach is based on computing the Moore-Penrose inverse of a
certain linear operator associated with the problem. In fact, our analysis
holds even for Hilbert-Schmidt operators and we do not have to assume
consistency. We also perform numerical experiments featuring the new projection
operator.
|
The stationary Josephson effect in a clean
Superconductor-Ferromagnet-Superconductor junction is revisited for arbitrarily
large spin polarizations. The quasiclassical calculation of the supercurrent
assumes that the Andreev reflection is complete for all channels. However, De
Jong and Beenakker have shown that the Andreev reflection at a clean FS
interface is incomplete, due to the exchange interaction in the ferromagnet.
Taking into account this incomplete Andreev reflection, we investigate the
quasiparticle spectrum, the Josephson current and the $0-\pi$ transition in a
ballistic single channel SFS junction. We find that energy gaps open in the
phase dependent spectrum. Although the spectrum is strongly modified when the
exchange energy increases, the Josephson current and the $0-\pi$ transition are
only weakly affected by the incomplete Andreev reflection, except when the
exchange energy is close to the Fermi energy.
|
We study fine structure related to finitely supported random walks on
infinite finitely generated discrete groups, largely motivated by dimension
group techniques. The unfaithful extreme harmonic functions (defined only on
proper space-time cones), aka unfaithful pure traces, can be represented on
systems of finite support, avoiding dead ends. This motivates properties of the
random walk (WC) and of the group (SWC) which become of interest in their own
right. While all abelian groups satisfy WC, they do not satisfy SWC; however,
some abelian-by-finite groups do satisfy the latter, and we characterize when
this occurs. In general, we determine the maximal order ideals (aka the
maximal proper space-time subcones of the cone generated by the group element
$1$ at time zero), and show that the corresponding quotients are stationary simple
dimension groups, and that all such can occur for the free group on two
generators. We conclude with a case study of the discrete Heisenberg group,
determining, among other things, the pure traces (these are the unfaithful ones,
not arising from characters).
|
Dissipative Kerr solitons (DKSs) intrinsically exhibit two degrees of freedom
through their group and phase rotation velocity. Periodic extraction of the DKS
into a waveguide produces a pulse train and yields the resulting optical
frequency comb's repetition rate and carrier-envelope offset, respectively.
Here, we demonstrate that it is possible to create a system with a single
repetition rate but two different phase velocities by employing dual driving
forces. By recasting these phase velocities into frequencies, we demonstrate,
experimentally and theoretically, that they can mix and create new
phase-velocity light following any four-wave mixing process, including both
degenerately pumped and non-degenerately pumped effects. In particular, we show
that a multiple-pumped DKS may generate a two-dimensional frequency comb, where
cascaded nonlinear mixing occurs in the phase velocity dimension as well as the
conventional mode number dimension, and where the repetition rate in each
dimension differs by orders of magnitude.
|
A new class of protocols called mirror benchmarking was recently proposed to
measure the system-level performance of quantum computers. These protocols
involve circuits with random sequences of gates followed by mirroring, that is,
inverting each gate in the sequence. We give a simple proof that mirror
benchmarking leads to an exponential decay of the survival probability with
sequence length, under the uniform noise assumption, provided the twirling
group forms a 2-design. The decay rate is determined by a quantity that is a
quadratic function of the error channel, and for certain types of errors is
equal to the unitarity. This result yields a new method for estimating the
coherence of noise. We present data from mirror benchmarking experiments run on
the Honeywell System Model H1. This data constitutes a set of performance
curves, indicating the success probability for random circuits as a function of
qubit number and circuit depth.
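The exponential decay of the survival probability with sequence length can be extracted from data by a log-linear least-squares fit; here is a minimal sketch on synthetic data (the actual benchmarking analysis may use a richer fit model, e.g. with an asymptotic offset):

```python
import numpy as np

def fit_decay(lengths, survival):
    """Fit survival probability p(m) = A * u**m by least squares on
    log p (a standard randomized-benchmarking-style analysis).
    Returns (A, u); u is the decay rate of interest."""
    slope, intercept = np.polyfit(lengths, np.log(survival), 1)
    return np.exp(intercept), np.exp(slope)

# Synthetic mirror-benchmarking-like data with decay rate u = 0.97:
m = np.arange(1, 21)
p = 0.9 * 0.97 ** m
A_hat, u_hat = fit_decay(m, p)
```

Under the 2-design assumption stated above, the fitted decay rate is the quantity determined by the quadratic function of the error channel, which is what enables the coherence-of-noise estimate.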
|
Deep neural networks are surprisingly efficient at solving practical tasks,
but the theory behind this phenomenon is only starting to catch up with the
practice. Numerous works show that depth is the key to this efficiency. A
certain class of deep convolutional networks -- namely those that correspond to
the Hierarchical Tucker (HT) tensor decomposition -- has been proven to have
exponentially higher expressive power than shallow networks. That is, a shallow
network of exponential width is required to realize the same score function as
computed by the deep architecture. In this paper, we prove the expressive power
theorem (an exponential lower bound on the width of the equivalent shallow
network) for a class of recurrent neural networks -- ones that correspond to
the Tensor Train (TT) decomposition. This means that even processing an image
patch by patch with an RNN can be exponentially more efficient than a (shallow)
convolutional network with one hidden layer. Using theoretical results on the
relation between the tensor decompositions we compare expressive powers of the
HT- and TT-Networks. We also implement the recurrent TT-Networks and provide
numerical evidence of their expressivity.
|
We construct a family of inequivalent Calabi-Yau metrics on $\mathbf{C}^3$
asymptotic to $\mathbf{C} \times A_2$ at infinity, in the sense that any two of
these metrics cannot be related by a scaling and a biholomorphism. This
provides the first example of families of Calabi-Yau metrics asymptotic to a
fixed tangent cone at infinity, while keeping the underlying complex structure
fixed. We propose a refinement of a conjecture of Sz\'ekelyhidi addressing the
classification of such metrics.
|
We define and study pseudo-differential operators on a class of fractals that
include the post-critically finite self-similar sets and Sierpinski carpets.
Using the sub-Gaussian estimates of the heat operator we prove that our
operators have kernels that decay and, in the constant coefficient case, are
smooth off the diagonal. Our analysis can be extended to products of fractals.
While our results are applicable to a larger class of metric measure spaces
with Laplacian, we use them to study elliptic, hypoelliptic, and quasi-elliptic
operators on p.c.f. fractals, answering a few open questions posed in a series
of recent papers. We extend our class of operators to include the so-called
H\"ormander hypoelliptic operators and we initiate the study of wavefront sets
and microlocal analysis on p.c.f. fractals.
|
We investigate S-boxes defined by pairs of Orthogonal Cellular Automata
(OCA), motivated by the fact that such CA always define bijective vectorial
Boolean functions, and could thus be interesting for the design of block
ciphers. In particular, we perform an exhaustive search of all nonlinear OCA
pairs of diameter $d=4$ and $d=5$, which generate S-boxes of size $6\times 6$
and $8\times 8$, respectively. Surprisingly, all these S-boxes turn out to be
linear, and thus they are not useful for the design of confusion layers in
block ciphers. However, a closer inspection of these S-boxes reveals a very
interesting structure. Indeed, we remark that the linear component spaces of
the OCA-based S-boxes found by our exhaustive search are themselves the kernels
of linear CA or, equivalently, \emph{polynomial codes}. We finally classify
the polynomial codes of the S-boxes obtained in our exhaustive search and
observe that, in most cases, they actually correspond to the cyclic code with
generator polynomial $X^{b}+1$, where $b=d-1$. Although these findings rule out
the possibility of using OCA to design good S-boxes in block ciphers, they give
nonetheless some interesting insights for a theoretical characterization of
nonlinear OCA pairs, which is still an open question in general.
|
We find that the presence of a global $L_e-L_\mu-L_\tau$ ($\equiv L^\prime$)
symmetry and an $S_2$ permutation symmetry for the $\mu$- and $\tau$-families
supplemented by a discrete $Z_4$ symmetry naturally leads to almost maximal
atmospheric neutrino mixing and large solar neutrino mixing, which arise,
respectively, from type II seesaw mechanism initiated by an $S_2$-symmetric
triplet Higgs scalar $s$ with $L^\prime=2$ and from radiative mechanism of the
Zee type initiated by two singly charged scalars, an $S_2$-symmetric $h^+$ with
$L^\prime=0$ and an $S_2$-antisymmetric $h^{\prime +}$ with $L^\prime=2$. The
almost maximal mixing for atmospheric neutrinos is explained by the appearance
of the democratic coupling of $s$ to neutrinos ensured by $S_2$ and $Z_4$ while
the large mixing for solar neutrinos is explained by the similarity of $h^+$-
and $h^{\prime +}$-couplings described by $f^h_+\sim f^h_-$ and
$\mu_+\sim\mu_-$, where $f^h_+$ ($f^h_-$) and $\mu_+$ ($\mu_-$) stand for $h^+$
($h^{\prime +}$)-couplings, respectively, to leptons and to Higgs scalars.
|
We describe a simple implementation of black hole excision in 3+1 numerical
relativity. We apply this technique to a Schwarzschild black hole with octant
symmetry in Eddington-Finkelstein coordinates and show how one can obtain
accurate, long-term stable numerical evolutions.
|
The origin of black hole entropy and the black hole information problem
provide important clues for trying to piece together a quantum theory of
gravity. Thus far, discussions on this topic have mostly assumed that in a
consistent theory of gravity and quantum mechanics, quantum theory will be
unmodified. Here, we examine the black hole information problem in the context
of generalisations of quantum theory. In particular, we examine black holes in
the setting of generalised probabilistic theories, in which quantum theory and
classical probability theory are special cases. We compute the time it takes
information to escape a black hole, assuming that information is preserved. We
find that under some very general assumptions, the arguments of Page (that
information should escape the black hole after half the Hawking photons have
been emitted), and the black-hole mirror result of Hayden and Preskill (that
information can escape quickly) need to be modified. The modification is
determined entirely by what we call the Wootters-Hardy parameter associated
with a theory. We find that although the information leaves the black hole
after enough photons have been emitted, it is fairly generic that it fails to
appear outside the black hole at this point -- something impossible in quantum
theory due to the no-hiding theorem. The information is neither inside the
black hole, nor outside it, but is delocalised. Our central technical result is
an information decoupling theorem which holds in the generalised probabilistic
framework.
|
We establish plurisubharmonicity of the envelope of Poisson and Lelong
functionals on almost complex manifolds. That is, we generalize the
corresponding results for complex manifolds and almost complex manifolds of
complex dimension two. We also provide some applications to the regularization
of J-plurisubharmonic functions and to the characterization of compact
psh-hulls by pseudoholomorphic discs.
|
The Game of Poker Chips, Dominoes and Survival fosters team building and high
level cooperation in large groups, and is a tool applied in management training
exercises. Each player, initially given two colored poker chips, is allowed to
make exchanges with the game coordinator according to two rules, and must
secure a domino before time is called in order to `survive'. Though the rules
are simple, it is not evident by their form that the survival of the entire
group requires that they cooperate at a high level. From the point of view of
the game coordinator, the difficulty of the game for the group can be
controlled not only by the time limit, but also by the initial distribution of
chips, in a way we make precise by a time complexity type argument. That
analysis also provides insight into good strategies for group survival, those
taking the least amount of time. In addition, coordinators may also want to be
aware of when the game is `solvable', that is, when their initial distribution
of chips permits the survival of all group members if given sufficient time to
make exchanges. It turns out that the game is solvable if and only if the
initial distribution contains seven chips that have one of two particular color
distributions. In addition to being a lively game to play in management
training or classroom settings, the analysis of the game after play can make
for an engaging exercise in any basic discrete mathematics course to give a
basic introduction to elements of game theory, logical reasoning, number theory
and the computation of algorithmic complexities.
|
Sound localization aims to find the source of the audio signal in the visual
scene. However, it is labor-intensive to annotate the correlations between the
signals sampled from the audio and visual modalities, thus making it difficult
to supervise the learning of a machine for this task. In this work, we propose
an iterative contrastive learning framework that requires no data annotations.
At each iteration, the proposed method takes the 1) localization results in
images predicted in the previous iteration, and 2) semantic relationships
inferred from the audio signals as the pseudo-labels. We then use the
pseudo-labels to learn the correlation between the visual and audio signals
sampled from the same video (intra-frame sampling) as well as the association
between those extracted across videos (inter-frame relation). Our iterative
strategy gradually encourages the localization of the sounding objects and
reduces the correlation between the non-sounding regions and the reference
audio. Quantitative and qualitative experimental results demonstrate that the
proposed framework performs favorably against existing unsupervised and
weakly-supervised methods on the sound localization task.
|
We present a convenient notation for positive/negative-conditional equations.
The idea is to merge rules specifying the same function by using case-, if-,
match-, and let-expressions. Based on the presented macro-rule-construct,
positive/negative-conditional equational specifications can be written on a
higher level. A rewrite system translates the macro-rule-constructs into
positive/negative-conditional equations.
|
Finite element method (FEM) is one of the most important numerical methods in
modern engineering design and analysis. Since traditional serial FEM struggles
to solve large FE problems efficiently and accurately, high-performance
parallel FEM has become one of the essential ways to solve practical
engineering problems. Based on the MiniFE program, which is released by the
National Energy Research Scientific Computing Center (NERSC), this work
analyzes the concrete steps, key computing patterns, and parallel mechanism of
parallel FEM.
According to experimental results, this work analyzes the proportion of
calculation amount of each module and concludes the main performance bottleneck
of the program. Based on that, we optimize the MiniFE program on a server
platform. The optimization focuses on the program's bottleneck, the SpMV
kernel, and uses an efficient storage format named BCRS. Moreover, an
improvement plan based on hybrid MPI+OpenMP programming is provided.
Experimental results show that the optimized program performs better in both
the SpMV kernel and synchronization, increasing the performance of the
program, on average, by 8.31%. Keywords: finite element, parallel, MiniFE,
SpMV, performance optimization
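For reference, scalar CRS (CSR) sparse matrix-vector multiplication, the kernel this optimization targets, looks as follows; this is a plain-Python sketch for clarity, whereas BCRS (the format named above) replaces the scalar entries with small dense blocks to improve locality for FEM matrices with several degrees of freedom per node:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix stored in CRS (CSR) format:
    `values` holds the nonzeros row by row, `col_idx` their column
    indices, and `row_ptr[i]:row_ptr[i+1]` delimits row i."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```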
|
Despite being a source of rich information, graphs are limited to pairwise
interactions. However, several real-world networks such as social networks,
neuronal networks, etc., involve interactions between more than two nodes.
Simplicial complexes provide a powerful mathematical framework to model such
higher-order interactions. It is well known that the spectrum of the graph
Laplacian is indicative of community structure, and this relation is exploited
by spectral clustering algorithms. Here we propose that the spectrum of the
Hodge Laplacian, a higher-order Laplacian defined on simplicial complexes,
encodes simplicial communities. We formulate an algorithm to extract simplicial
communities (of arbitrary dimension). We apply this algorithm to simplicial
complex benchmarks and to real higher-order network data including social
networks and networks extracted using language or text processing tools.
However, datasets of simplicial complexes are scarce, and for the vast majority
of datasets that may involve higher-order interactions, only the set of
pairwise interactions is available. Hence, we use known properties of the data
to infer the most likely higher-order interactions. In other words, we
introduce an inference method to predict the most likely simplicial complex
given the community structure of its network skeleton. This method identifies
as most likely the higher-order interactions inducing simplicial communities
that maximize the adjusted mutual information measured with respect to
ground-truth community structure. Finally, we consider higher-order networks
constructed through thresholding the edge weights of collaboration networks
(encoding only pairwise interactions) and provide an example of persistent
simplicial communities that are sustained over a wide range of the threshold.
|
In this paper, we propose a novel decentralized control method to maintain
Line-of-Sight connectivity for multi-robot networks in the presence of
Gaussian-distributed localization uncertainty. In contrast to most existing
work that assumes perfect positional information about robots or enforces
overly restrictive rigid formation against uncertainty, our method enables
robots to preserve Line-of-Sight connectivity with high probability under
unbounded Gaussian-like positional noises while remaining minimally intrusive
to the original robots' tasks. This is achieved by a motion coordination
framework that jointly optimizes the set of existing Line-of-Sight edges to
preserve and control revisions to the nominal task-related controllers, subject
to the safety constraints and the corresponding composition of
uncertainty-aware Line-of-Sight control constraints. Such compositional control
constraints, expressed by our novel notion of probabilistic Line-of-Sight
connectivity barrier certificates (PrLOS-CBC) for pairwise robots using control
barrier functions, explicitly characterize the deterministic admissible control
space for the two robots. The resulting motion ensures Line-of-Sight
connectedness for the robot team with high probability. Furthermore, we propose
a fully decentralized algorithm that decomposes the motion coordination
framework by interleaving the composite constraint specification and solving
for the resulting optimization-based controllers. The optimality of our
approach is justified by theoretical proofs. Simulation and real-world
experimental results demonstrate the effectiveness of our method.
|
Pourchet proved in 1971 that every nonnegative univariate polynomial with
rational coefficients is a sum of five or fewer squares. Nonetheless, there are
no known algorithms for constructing such a decomposition. The sole purpose of
the present paper is to present a set of algorithms that decompose a given
nonnegative polynomial into a sum of six (five under some unproven conjecture
or when allowing weights) squares of polynomials. Moreover, we prove that the
binary complexity can be expressed polynomially in terms of classical
operations of computer algebra and algorithmic number theory.
|
We derive projected rotational velocities (vsini) for a sample of 156
Galactic OB star members of 35 clusters, HII regions, and associations. The HeI
lines at $\lambda\lambda$4026, 4388, and 4471A were analyzed in order to define
a calibration of the synthetic HeI full-widths at half maximum versus stellar
vsini. A grid of synthetic spectra of HeI line profiles was calculated in
non-LTE using an extensive helium model atom and updated atomic data. The
vsini's for all stars were derived using the He I FWHM calibrations but also,
for those target stars with relatively sharp lines, vsini values were obtained
from best fit synthetic spectra of up to 40 lines of CII, NII, OII, AlIII,
MgII, SiIII, and SIII. This calibration is a useful and efficient tool for
estimating the projected rotational velocities of O9-B5 main-sequence stars.
The distribution of vsini for an unbiased sample of early B stars in the
unbound association Cep OB2 is consistent with the distribution reported
elsewhere for other unbound associations.
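The FWHM-based calibration lends itself to a simple numerical illustration. The sketch below is a simplification under stated assumptions: it measures the full width at half maximum of a synthetic Gaussian line by interpolating the half-maximum crossings, whereas the paper's calibration rests on non-LTE synthetic He I profiles, which are not Gaussian.

```python
import numpy as np

def measure_fwhm(wavelength, profile):
    """Numerically measure the full width at half maximum of an
    emission-line profile sampled on a wavelength grid, interpolating
    linearly at the two half-maximum crossings."""
    depth = profile - profile.min()
    half = 0.5 * depth.max()
    idx = np.where(depth >= half)[0]
    i0, i1 = idx[0], idx[-1]

    def cross(i, j):
        return wavelength[i] + (half - depth[i]) * (
            wavelength[j] - wavelength[i]) / (depth[j] - depth[i])

    return cross(i1, i1 + 1) - cross(i0 - 1, i0)

# synthetic Gaussian line near He I 4471 A; for a Gaussian the measured
# width should recover 2*sqrt(2 ln 2)*sigma
wl = np.linspace(4460.0, 4480.0, 4001)
sigma = 0.8
prof = np.exp(-0.5 * ((wl - 4471.0) / sigma) ** 2)
fwhm = measure_fwhm(wl, prof)
```

A calibration curve would then be built by repeating this measurement on synthetic profiles convolved with a grid of rotational broadening kernels.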
|
A quantum field theoretic formulation of the dynamics of the Contact Process
on a regular graph of degree z is introduced. A perturbative calculation in
powers of 1/z of the effective potential for the density of particles phi(t)
and an instantonic field psi(t) emerging from the quantum formalism is
performed. Corrections to the mean-field distribution of densities of particles
in the out-of-equilibrium stationary state are derived in powers of 1/z.
Results for typical (e.g. average density) and rare fluctuation (e.g. lifetime
of the metastable state) properties are in very good agreement with numerical
simulations carried out on D-dimensional hypercubic (z=2D) and Cayley lattices.
|
The polar regions of Jupiter host a myriad of dynamically interesting
phenomena including vortex configurations, folded-filamentary regions (FFRs),
and chaotic flows. Juno observations have provided unprecedented views of the
high latitudes, allowing for more constraints to be placed upon the troposphere
and the overall atmospheric energy cycle. Moist convective events are believed
to be the primary drivers of energetic storm behavior as observed on the
planet. Here, we introduce a novel single layer shallow water model to
investigate the effects of polar moist convective events at high resolution,
the presence of dynamical instabilities over long timescales, and the emergence
of FFRs at high latitudes. We use a flexible, highly parallelizable,
finite-difference hydrodynamic code to explore the parameter space set up by
previous models. We study the long-term effects of deformation length (Ld),
injected pulse size, and injected geopotential. We find that models with Ld
beyond 1500 km (planetary Burger number, Bu$=4.4\times10^{-4}$) tend to
homogenize their potential vorticity (PV) in the form of dominant stable polar
cyclones, while lower Ld cases tend to show less stability with regards to
Arnol'd-type flows. We also find that large turbulent forcing scales
consistently lead to the formation of high latitude FFRs. Our findings support
the idea that moist convection, occurring at high latitudes, may be sufficient
to produce the dynamical variety seen at the Jovian poles. Additionally,
derived values of localized horizontal shear and Ld may constrain FFR formation
and evolution.
|
It is conventional to calculate the probability of microlensing for a
cosmologically distant source based on the Press-Gunn approximation that the
lensing objects are uniformly and randomly distributed in the intervening space
with a constant comoving density. Here we investigate more realistic
cosmological microlensing statistics by considering the strong spatial
clustering of likely lensing objects with each other in galaxies and their
association with the clumps of dark matter that make up the massive halos of
galaxies. The distribution of microlensing optical depth (kappa) along randomly
chosen sight lines is calculated as is the conditional distribution of kappa
along sight lines near one which is strongly microlensed. Our overall result is
that the Press-Gunn approximation is a useful order-of-magnitude approximation
if the massive halos of galaxies are made of dark compact objects but that it
fails badly and can be qualitatively misleading in the more likely case in
which only the ordinary stellar populations of galaxies are the dominant source
of cosmological microlensing events. In particular, we find that microlensing
by stars is limited to of order 1 percent of high redshift sources at any one
time. Furthermore, even though only a small fraction of high redshift sources
are multiply-imaged (by galaxies), it is these sources that are most likely to
be microlensed by stars. Consequently, microlensing by stars is usually
observed at kappa's near 1 where the simple isolated point mass lens
approximation is not appropriate. However, if CDM halos are composed of
condensed objects, then more than 10 percent of high redshift sources are
microlensed at any given time. The vast majority of these sources are not
multiply-imaged, and have kappa's smaller than 0.01.
|
This paper presents a novel perspective for enhancing anti-spoofing
performance in zero-shot data domain generalization. Unlike traditional image
classification tasks, face anti-spoofing datasets display unique generalization
characteristics, necessitating novel zero-shot data domain generalization. One
step forward to the previous frame-wise spoofing prediction, we introduce a
nuanced metric calculation that aggregates frame-level probabilities for a
video-wise prediction, to tackle the gap between the reported frame-wise
accuracy and instability in real-world use-case. This approach enables the
quantification of bias and variance in model predictions, offering a more
refined analysis of model generalization. Our investigation reveals that simply
scaling up the backbone of models does not inherently mitigate this
instability, leading us to propose an ensembled backbone method from a Bayesian
perspective. The probabilistically ensembled backbone both improves model
robustness measured from the proposed metric and spoofing accuracy, and also
leverages the advantages of measuring uncertainty, allowing for enhanced
sampling during training that contributes to model generalization across new
datasets. We evaluate the proposed method on the benchmark OMIC dataset and
also on the public CelebA-Spoof and SiW-Mv2. Our final model outperforms existing
state-of-the-art methods across the datasets, showcasing advancements in Bias,
Variance, HTER, and AUC metrics.
|
The shuffled linear regression problem aims to recover linear relationships
in datasets where the correspondence between input and output is unknown. This
problem arises in a wide range of applications including survey data, in which
one needs to decide whether the anonymity of the responses can be preserved
while uncovering significant statistical connections. In this work, we propose
a novel optimization algorithm for shuffled linear regression based on a
posterior-maximizing objective function assuming Gaussian noise prior. We
compare and contrast our approach with existing methods on synthetic and real
data. We show that our approach performs competitively while achieving
empirical running-time improvements. Furthermore, we demonstrate that our
algorithm is able to utilize the side information in the form of seeds, which
recently came to prominence in related problems.
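As a hedged illustration of the general idea (not the paper's algorithm), a simple alternating scheme for shuffled regression with a Gaussian-noise posterior objective alternates a least-squares fit with a correspondence re-matching step; in one output dimension, the squared-error-optimal re-matching simply pairs sorted fitted values with sorted responses.

```python
import numpy as np

def shuffled_lr(X, y, iters=20):
    """Alternating minimization sketch for shuffled linear regression:
    (1) least-squares fit of beta under the current correspondence,
    (2) re-match responses to fitted values by sorting, which is the
    optimal assignment for a squared-error objective in one dimension."""
    order = np.arange(len(y))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, y[order], rcond=None)
        yhat = X @ beta
        # the k-th smallest fitted value gets the k-th smallest response
        order = np.empty_like(order)
        order[np.argsort(yhat)] = np.argsort(y)
    return beta, order

# noiseless toy problem: y = 2x, observed in scrambled order
rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50).reshape(-1, 1)
y_shuffled = rng.permutation(2.0 * X.ravel())
beta, order = shuffled_lr(X, y_shuffled)
```

Alternating schemes of this kind can get trapped in local optima on noisy data, which is part of what motivates more careful posterior-maximizing formulations.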
|
It was noticed many years ago, in the framework of massless RG flows, that
the irrelevant composite operator $T \bar{T}$, built with the components of the
energy-momentum tensor, enjoys very special properties in 2D quantum field
theories, and can be regarded as a peculiar kind of integrable perturbation.
Novel interesting features of this operator have recently emerged from the
study of effective string theory models. In this paper we study further
properties of this distinguished perturbation. We discuss how it affects the
energy levels and one-point functions of a general 2D QFT in finite volume
through a surprising relation with a simple hydrodynamic equation. In the case
of the perturbation of CFTs, adapting a result by L\"uscher and Weisz we give a
compact expression for the partition function on a finite-length cylinder and
make a connection with the exact $g$-function method. We argue that, at the
classical level, the deformation naturally maps the action of $N$ massless free
bosons into the Nambu-Goto action in static gauge, in $N+2$ target space
dimensions, and we briefly discuss a possible interpretation of this result in
the context of effective string models.
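For reference, the hydrodynamic relation alluded to above can be written explicitly. In conventions common to the related literature (and up to normalization of the deformation parameter $\tau$, so this is a schematic statement rather than a quotation of the paper's formulas), the finite-volume energy levels obey an inviscid Burgers-type flow

$$\partial_\tau E_n(R,\tau) \;=\; E_n(R,\tau)\,\partial_R E_n(R,\tau) \;+\; \frac{P_n(R)^2}{R}\,,$$

where $R$ is the circumference of the cylinder and $P_n$ is the quantized, $\tau$-independent momentum of the state; the equation can be solved implicitly by the method of characteristics starting from the undeformed spectrum at $\tau=0$.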
|
This paper proposes a decentralized dynamic state estimation scheme for
microgrids. The approach employs voltage and current measurements in the dq0
reference frame, obtained through phasor synchronization, so that orthogonal
functions can be excluded from their relationship formulas. Based on that premise,
we utilize a Kalman filter to dynamically estimate states of microgrids. The
decoupling of measurement values to state and input vectors reduces the
computational complexity. The Kalman filter considers the process noise
covariances, which are modified with respect to the covariance of measured
input values. Theoretical analysis and simulation results are provided for
validation.
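The estimation step can be illustrated with a generic linear Kalman filter. The sketch below is a textbook predict/update cycle, not the paper's dq0-specific formulation, and the matrices used in the toy example are illustrative placeholders.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: prior state estimate and covariance; z: new measurement."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy example: track a constant 2-state vector from noisy direct measurements
rng = np.random.default_rng(1)
F = np.eye(2); H = np.eye(2)
Q = 1e-6 * np.eye(2); R = 0.1 * np.eye(2)
x = np.zeros(2); P = np.eye(2)
truth = np.array([1.0, -0.5])
for _ in range(500):
    z = truth + rng.normal(scale=np.sqrt(0.1), size=2)
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

In the paper's setting, the decoupling of measurements into state and input vectors is what keeps F, H, and the modified noise covariances cheap to evaluate online.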
|
The concept of homology, originally developed as a useful tool in algebraic
topology, has by now become pervasive in quite different branches of
mathematics. The notion particularly appears quite naturally in ergodic theory
in the study of measure-preserving transformations arising from various group
actions or, equivalently, the study of stationary sequences when adopting a
probabilistic perspective as in this paper. Our purpose is to give a new and
relatively short proof of the coboundary theorem due to Schmidt (1977) which
provides a sharp criterion that determines (and rules out) when two stationary
processes belong to the same \emph{null-homology equivalence class}. We also
discuss various aspects of null-homology within the class of Markov random
walks, compare null-homology with a formally stronger notion which we call {\it
strict-sense null-homology}. Finally, we also discuss some concrete cases where
the notion of null-homology turns up in a relevant manner.
|
In this paper we analyze Least Recently Used (LRU) caches operating under the
Shot Noise requests Model (SNM). The SNM was recently proposed to better
capture the main characteristics of today's Video on Demand (VoD) traffic. We
investigate the validity of Che's approximation through an asymptotic analysis
of the cache eviction time. In particular, we provide a large deviation
principle, a law of large numbers and a central limit theorem for the cache
eviction time, as the cache size grows large. Finally, we derive upper and
lower bounds for the "hit" probability in tandem networks of caches under Che's
approximation.
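For readers unfamiliar with Che's approximation, the characteristic (eviction) time it is built on solves a simple fixed-point equation. The sketch below treats the classical independent reference model with fixed request rates, rather than the SNM analyzed in the paper.

```python
import math

def che_characteristic_time(rates, cache_size, tol=1e-10):
    """Solve sum_i (1 - exp(-rate_i * t)) = cache_size for t by bisection:
    Che's approximation equates the expected number of distinct objects
    requested within the eviction time to the cache size."""
    def filled(t):
        return sum(1.0 - math.exp(-r * t) for r in rates)

    lo, hi = 0.0, 1.0
    while filled(hi) < cache_size:   # grow bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if filled(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hit_probability(rate, t_c):
    """Per-object hit probability under Che's approximation."""
    return 1.0 - math.exp(-rate * t_c)

# 100 equally popular objects, cache of 20: t_c has a closed form to check
t_c = che_characteristic_time([0.5] * 100, cache_size=20)
```

For equal rates the equation is solvable by hand, t_c = -ln(1 - C/N) / rate, which makes a convenient sanity check.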
|
The Gaussian reconstruction kernels have been proposed by Westover (1990) and
studied by the computer graphics community back in the 1990s; these kernels give an
alternative representation of object 3D geometry from meshes and point clouds.
On the other hand, current state-of-the-art (SoTA) differentiable renderers,
Liu et al. (2019), use rasterization to collect triangles or points on each
image pixel and blend them based on the viewing distance. In this paper, we
propose VoGE, which utilizes the volumetric Gaussian reconstruction kernels as
geometric primitives. The VoGE rendering pipeline uses ray tracing to capture
the nearest primitives and blends them as mixtures based on their volume
density distributions along the rays. To efficiently render via VoGE, we
propose an approximate closed-form solution for the volume density aggregation
and a coarse-to-fine rendering strategy. Finally, we provide a CUDA
implementation of VoGE, which enables real-time level rendering with a
competitive rendering speed in comparison to PyTorch3D. Quantitative and
qualitative experiment results show VoGE outperforms SoTA counterparts when
applied to various vision tasks, e.g., object pose estimation, shape/texture
fitting, and occlusion reasoning. The VoGE library and demos are available at:
https://github.com/Angtian/VoGE.
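The closed-form density aggregation has a simple analogue for a single isotropic Gaussian kernel: the line integral along a ray factorizes into a 1-D Gaussian integral times a term depending only on the perpendicular distance to the kernel center. The sketch below illustrates that principle (it is not VoGE's actual CUDA implementation) and checks it against brute-force quadrature.

```python
import numpy as np

def ray_gaussian_integral(o, d, mu, sigma):
    """Closed-form line integral of an isotropic 3-D Gaussian kernel
    exp(-||x - mu||^2 / (2 sigma^2)) along the full line x(t) = o + t*d."""
    d = d / np.linalg.norm(d)
    t0 = np.dot(mu - o, d)                  # closest-approach parameter
    b2 = np.dot(mu - o, mu - o) - t0**2     # squared perpendicular distance
    return np.sqrt(2 * np.pi) * sigma * np.exp(-b2 / (2 * sigma**2))

# brute-force check: sample the density densely along the ray and sum
o = np.array([0.0, 0.0, -5.0]); d = np.array([0.0, 0.0, 1.0])
mu = np.array([0.3, -0.2, 1.0]); sigma = 0.5
t = np.linspace(-20.0, 20.0, 400001)
pts = o[None, :] + t[:, None] * d[None, :]
dens = np.exp(-np.sum((pts - mu) ** 2, axis=1) / (2 * sigma**2))
numeric = dens.sum() * (t[1] - t[0])
```

Having this integral in closed form is what removes the need for per-ray numerical quadrature in a renderer.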
|
The possibility of achieving highly selective excitation of low metastable
states of hydrogen and helium atoms by using short laser pulses with reasonable
parameters is demonstrated theoretically. Interactions of atoms with the laser
field are studied by solving the close-coupling equations without
discretization. The parameters of laser pulses are calculated using different
kinds of optimization procedures. For the excitation durations of hundreds of
femtoseconds direct optimization of the parameters of one and two laser pulses
with Gaussian envelopes is used to introduce a number of simple schemes of
selective excitation. To treat the case of shorter excitation durations,
optimal control theory is used and the calculated optimal fields are
approximated by sequences of pulses with reasonable shapes. A new way to
achieve selective excitation of metastable atomic states by using sequences of
attosecond pulses is introduced.
|
Photons, as quanta of electromagnetic fields, determine the electromagnetic
properties of an extremely hot and dense medium. Considering the properties of
photons in the interacting medium of charged particles, we explicitly calculate
the electromagnetic properties such as the electric permittivity, magnetic
permeability, refractive index and the propagation speed of electromagnetic
signals in extremely hot and dense background in cosmos. Photons acquire
dynamically generated mass in a medium. The screening mass of photon, Debye
shielding length and the plasma frequency are calculated as functions of
statistical parameters of the medium. We study the properties of the
propagating particles in astrophysical systems of distinct statistical
conditions. The modifications in the medium properties lead to the equation of
state of the system. We mainly calculate all these parameters for extremely
high temperatures of the early universe.
|
Major chip manufacturers have all introduced Multithreaded processors. These
processors are used for running a variety of workloads. Efficient resource
utilization is an important design aspect in such processors. Particularly, it
is important to take advantage of available memory-level parallelism (MLP). In
this paper I propose an MLP-aware operating system (OS) scheduling algorithm for
multithreaded multi-core processors. By observing the MLP available in each
thread and by balancing it with available MLP resources in the system the OS
will come up with a new schedule of threads for the next quantum that could
potentially improve overall performance. We do a qualitative comparison of our
solution with other hardware and software techniques. This work can be extended
by doing a quantitative evaluation and by further refining the scheduling
optimization.
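One hypothetical way to balance MLP pressure across cores (an illustration of the balancing idea, not the paper's actual algorithm) is a greedy longest-processing-time assignment, with each thread's observed MLP demand as its load.

```python
import heapq

def mlp_aware_schedule(thread_mlp, n_cores):
    """Greedy LPT-style assignment: sort threads by observed MLP demand
    (e.g., average outstanding misses) and repeatedly place the most
    demanding thread on the currently least-loaded core, balancing
    memory-level-parallelism pressure across cores."""
    heap = [(0.0, core, []) for core in range(n_cores)]
    heapq.heapify(heap)
    for tid, mlp in sorted(enumerate(thread_mlp), key=lambda kv: -kv[1]):
        load, core, assigned = heapq.heappop(heap)
        assigned.append(tid)
        heapq.heappush(heap, (load + mlp, core, assigned))
    return {core: assigned for _, core, assigned in heap}

# four threads with MLP demands 4, 3, 3, 2 on two cores
schedule = mlp_aware_schedule([4.0, 3.0, 3.0, 2.0], n_cores=2)
```

The OS would re-run such an assignment each quantum as the observed per-thread MLP estimates are refreshed.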
|
Two left-invariant Lorentzian problems on the Heisenberg group are
considered. We apply the Pontryagin maximum principle to both problems and
obtain a parameterization of abnormal and normal extremal trajectories.
Reachability sets and the existence of optimal trajectories are investigated.
|
The distances to fast radio bursts (FRBs) are crucial for understanding their
underlying engine, and for their use as cosmological probes. In this paper, we
provide three statistical estimates of the distance to ASKAP FRBs. First, we
show that the number of events of similar luminosity in ASKAP does not scale as
distance cubed, as one would expect, when directly using the observed
dispersion measure (DM) to infer distance. Second, by comparing the average DMs
of FRBs observed with different instruments, we estimated the average redshift
of ASKAP FRBs to be $z\sim 0.01$ using CHIME and ASKAP, and $z\lesssim0.07$
using Parkes and ASKAP. Both values are much smaller than the upper limit
$z\sim0.3$ estimated directly from the DM. Third, we cross-correlate the
locations of the ASKAP FRBs with existing large-area redshift surveys, and see
a 3$\sigma$ correlation with the 2MASS Redshift Survey and a 5$\sigma$
correlation with the HI Parkes All Sky Survey at $z\sim0.007$. This corresponds
well with the redshift of the most likely host galaxy of ASKAP FRB 171020,
which is at $z=0.00867$. These arguments combined suggest an extremely nearby
origin of ASKAP FRBs and a local environment with accumulated electrons that
contribute a DM of several hundred pc/cm$^3$, which should be accounted for in
theoretical models.
|
Multi-head self-attention-based Transformers have shown promise in different
learning tasks. Albeit these models exhibit significant improvement in
understanding short-term and long-term contexts from sequences, encoders of
Transformers and their variants fail to preserve layer-wise contextual
information. Transformers usually project tokens onto sparse manifolds and fail
to preserve mathematical equivalence among the token representations. In this
work, we propose TransJect, an encoder model that guarantees a theoretical
bound for layer-wise distance preservation between a pair of tokens. We propose
a simple alternative to dot-product attention to ensure Lipschitz continuity.
This allows TransJect to learn injective mappings to transform token
representations to different manifolds with similar topology and preserve
Euclidean distance between every pair of tokens in subsequent layers.
Evaluations across multiple benchmark short- and long-sequence classification
tasks show maximum improvements of 6.8% and 5.9%, respectively, over the
variants of Transformers. Additionally, TransJect displays 79% better
performance than Transformer on the language modeling task. We further
highlight the shortcomings of multi-head self-attention from the statistical
physics viewpoint. Although multi-head self-attention was incepted to learn
different abstraction levels within the networks, our empirical analyses
suggest that different attention heads learn in a random, unordered fashion. In
contrast, TransJect adapts a mixture of experts for regularization; these
experts are more orderly and balanced and learn different sparse
representations from the input sequences. TransJect exhibits very low entropy
and can be efficiently scaled to larger depths.
|
We present the results of a new search for variable stars in the Local Group
dwarf galaxy Leo A, based on deep photometry from the Advanced Camera for
Surveys onboard the Hubble Space Telescope. We detected 166 bona fide variables
in our field, of which about 60 percent are new discoveries, and 33 candidate
variables. Of the confirmed variables, we found 156 Cepheids, but only 10 RR
Lyrae stars despite nearly 100 percent completeness at the magnitude of the
horizontal branch. The RR Lyrae stars include 7 fundamental and 3
first-overtone pulsators, with mean periods of 0.636 and 0.366 day,
respectively. From their position on the period-luminosity (PL) diagram and
light-curve morphology, we classify 91, 58, and 4 Cepheids as fundamental,
first-overtone, and second-overtone mode Classical Cepheids (CC), respectively,
and two as population II Cepheids. However, due to the low metallicity of Leo
A, about 90 percent of the detected Cepheids have periods shorter than 1.5
days. Comparison with theoretical models indicates that some of the fainter
stars classified as CC could be Anomalous Cepheids. We estimate the distance to
Leo A using the tip of the RGB (TRGB) and various methods based on the
photometric and pulsational properties of the Cepheids and RR Lyrae stars. The
distances obtained with the TRGB and RR Lyrae stars agree well with each other
while that from the Cepheid PL relations is somewhat larger, which may indicate
a mild metallicity effect on the luminosity of the short-period Cepheids. Due
to its very low metallicity, Leo A thus serves as a valuable calibrator of the
metallicity dependencies of the variable star luminosities.
|
The generic structure of 4-point functions of fields residing in
indecomposable representations of arbitrary rank is given. The presented
algorithm is illustrated with some non-trivial examples and permutation
symmetries are exploited to reduce the number of free structure-functions,
which cannot be fixed by global conformal invariance alone.
|
As an unsupervised dimensionality reduction method, principal component
analysis (PCA) has been widely considered as an efficient and effective
preprocessing step for hyperspectral image (HSI) processing and analysis tasks.
It takes each band as a whole and globally extracts the most representative
bands. However, different homogeneous regions correspond to different objects,
whose spectral features are diverse. It is obviously inappropriate to carry out
dimensionality reduction through a unified projection for an entire HSI. In
this paper, a simple but very effective superpixelwise PCA approach, called
SuperPCA, is proposed to learn the intrinsic low-dimensional features of HSIs.
In contrast to classical PCA models, SuperPCA has four main properties. (1)
Unlike the traditional PCA method based on a whole image, SuperPCA takes into
account the diversity in different homogeneous regions, that is, different
regions should have different projections. (2) Most of the conventional feature
extraction models cannot directly use the spatial information of HSIs, while
SuperPCA is able to incorporate the spatial context information into the
unsupervised dimensionality reduction by superpixel segmentation. (3) Since the
regions obtained by superpixel segmentation have homogeneity, SuperPCA can
extract potential low-dimensional features even under noise. (4) Although
SuperPCA is an unsupervised method, it can achieve competitive performance when
compared with supervised approaches. The resulting features are discriminative,
compact, and noise resistant, leading to improved HSI classification
performance. Experiments on three public datasets demonstrate that the SuperPCA
model significantly outperforms the conventional PCA based dimensionality
reduction baselines for HSI classification. The Matlab source code is available
at https://github.com/junjun-jiang/SuperPCA
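The region-wise projection idea can be sketched in a few lines of NumPy. This is a schematic version that takes region labels as given, in place of an actual superpixel segmentation.

```python
import numpy as np

def superpca(cube, labels, n_components):
    """Superpixelwise-PCA sketch: run an independent PCA within each
    labelled homogeneous region of an HSI cube (H x W x Bands) and keep
    the leading n_components scores per pixel."""
    H, W, B = cube.shape
    out = np.zeros((H, W, n_components))
    for lab in np.unique(labels):
        mask = labels == lab
        X = cube[mask]                       # pixels of this region, (n, B)
        Xc = X - X.mean(axis=0)
        # principal directions from this region's covariance via SVD
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        out[mask] = Xc @ Vt[:n_components].T
    return out

# toy cube split into two labelled "superpixels"
rng = np.random.default_rng(2)
cube = rng.normal(size=(8, 8, 10))
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1
feats = superpca(cube, labels, n_components=3)
```

Each region thus gets its own projection, which is exactly the departure from a single global PCA that the abstract emphasizes.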
|
We first give an alternative proof of the Alon-Tarsi list coloring theorem.
We use the ideas from this proof to obtain the following result, which is an
additive coloring analog of the Alon-Tarsi Theorem: Let $G$ be a graph and let
$D$ be an orientation of $G$. We introduce a new digraph $\mathcal{W}(D)$, such
that if the out-degree in $D$ of each vertex $v$ is $d_v$, and if the number of
Eulerian subdigraphs of $\mathcal{W}(D)$ with an even number of edges differs
from the number of Eulerian subdigraphs of $\mathcal{W}(D)$ with an odd number
of edges, then for any assignment of lists $L(v)$ of $d_v+1$ positive integers
to the vertices of $G$, there is an additive coloring of $G$ assigning to each
vertex $v$ an element from $L(v)$. As an application, we prove an additive list
coloring result for tripartite graphs $G$ such that one of the color classes of
$G$ contains only vertices whose neighborhoods are complete.
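The parity condition on Eulerian subdigraphs can be checked by brute force on small digraphs. The minimal sketch below enumerates all edge subsets (feasible only for small graphs); note that it counts Eulerian subdigraphs of an arbitrary input digraph, whereas the theorem applies this count to the auxiliary digraph $\mathcal{W}(D)$.

```python
from itertools import combinations

def eulerian_difference(n, edges):
    """Brute force over all spanning subdigraphs of a digraph on vertices
    0..n-1: count those in which every vertex has equal in- and out-degree
    (Eulerian subdigraphs, the empty one included), split by parity of the
    edge count, and return EE - EO."""
    diff = 0
    m = len(edges)
    for k in range(m + 1):
        for subset in combinations(range(m), k):
            indeg = [0] * n
            outdeg = [0] * n
            for i in subset:
                u, v = edges[i]
                outdeg[u] += 1
                indeg[v] += 1
            if indeg == outdeg:
                diff += 1 if k % 2 == 0 else -1
    return diff
```

For a directed 2-cycle the difference is 2 (empty subdigraph plus the full 2-cycle, both even), while for a directed 3-cycle the even empty subdigraph and the odd full cycle cancel to 0.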
|
Recently it has been demonstrated that causal entropic forces can lead to the
emergence of complex phenomena associated with human cognitive niche such as
tool use and social cooperation. Here I show that even more fundamental traits
associated with human cognition, such as 'self-awareness', can be shown to
arise merely from selection for 'better regulators', i.e. systems that
respond comparatively better to threats to their existence
which are internal to themselves. A simple model demonstrates how indeed the
average self-awareness for a universe of systems continues to rise as less
self-aware systems are eliminated. The model also demonstrates however that the
maximum attainable self-awareness for any system is limited by the plasticity
and energy availability for that typology of systems. I argue that this rise in
self-awareness may be the reason why systems tend towards greater complexity.
|
Recently, we proposed a self-propelled particle model with competing
alignment interactions: nearby particles tend to align their velocities whereas
they anti-align their direction of motion with particles which are further away
[R. Grossmann et al., Phys. Rev. Lett. 113, 258104 (2014)]. Here, we extend our
previous numerical analysis of the high density regime considering low particle
densities too. We report on the emergence of various macroscopic patterns such
as vortex arrays, mesoscale turbulence as well as the formation of polar
clusters, polar bands and nematically ordered states. Furthermore, we study
analytically the instabilities leading to pattern formation in mean-field
approximation. We argue that these instabilities are well described by a
reduced set of hydrodynamic equations in the limit of high density.
|
In network data analysis, it is becoming common to work with a collection of
graphs that exhibit \emph{heterogeneity}. For example, neuroimaging data from
patient cohorts are increasingly available. A critical analytical task is to
identify communities, and graph Laplacian-based methods are routinely used.
However, these methods are currently limited to a single network and do not
provide measures of uncertainty on the community assignment. In this work, we
propose a probabilistic network model called the ``Spiked Laplacian Graph''
that considers each network as an invertible transform of the Laplacian, with
its eigenvalues modeled by a modified spiked structure. This effectively
reduces the number of parameters in the eigenvectors, and their sign patterns
allow efficient estimation of the community structure. Further, the posterior
distribution of the eigenvectors provides uncertainty quantification for the
community estimates. Subsequently, we introduce a Bayesian non-parametric
approach to address the issue of heterogeneity in a collection of graphs.
Theoretical results are established on the posterior consistency of the
procedure and provide insights on the trade-off between model resolution and
accuracy. We illustrate the performance of the methodology on synthetic data
sets, as well as a neuroscience study related to brain activity in working
memory.
Keywords: Hierarchical Community Detection, Isoperimetric Constant,
Mixed-Effect Eigendecomposition, Normalized Graph Cut, Stiefel Manifold
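As background for the Laplacian-based machinery, the classical (non-probabilistic) spectral bipartition uses the sign pattern of the Fiedler eigenvector of the normalized Laplacian. The sketch below is this standard baseline, which the Spiked Laplacian Graph model extends with eigenvalue modeling and uncertainty quantification.

```python
import numpy as np

def laplacian_communities(A):
    """Sign pattern of the Fiedler (second-smallest) eigenvector of the
    symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}: the classic
    spectral bipartition of a connected graph."""
    deg = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# two 4-cliques connected by a single bridge edge
n = 8
A = np.zeros((n, n))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
comm = laplacian_communities(A)
```

The posterior over eigenvectors in the proposed model effectively places error bars on exactly these sign assignments.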
|
Recognizing a target of interest from the UAVs is much more challenging than
the existing object re-identification tasks across multiple city cameras. The
images taken by the UAVs usually suffer from significant size difference when
generating the object bounding boxes and uncertain rotation variations.
Existing methods are usually designed for city cameras, incapable of handling
the rotation issue in UAV scenarios. A straightforward solution is to perform
the image-level rotation augmentation, but it would cause loss of useful
information when inputting the powerful vision transformer as patches. This
motivates us to simulate the rotation operation at the patch feature level,
proposing a novel rotation invariant vision transformer (RotTrans). This
strategy builds on high-level features with the help of the specificity of the
vision transformer structure, which enhances the robustness against large
rotation differences. In addition, we design invariance constraint to establish
the relationship between the original feature and the rotated features,
achieving stronger rotation invariance. Tested on the latest UAV datasets, our
proposed transformer greatly outperforms the current state of the art,
exceeding the previous best mAP and Rank-1 by 5.9\% and 4.8\%,
respectively. Notably, our model also
performs competitively for the person re-identification task on traditional
city cameras. In particular, our solution wins the first place in the UAV-based
person re-recognition track in the Multi-Modal Video Reasoning and Analyzing
Competition held in ICCV 2021. Code is available at
https://github.com/whucsy/RotTrans.
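The patch-level rotation idea can be illustrated by permuting ViT tokens as if the underlying patch grid itself were rotated. This toy sketch ignores the rotation of pixel content within each patch, which a full method must also handle at the feature level.

```python
import numpy as np

def rotate_patch_tokens(tokens, grid_hw, k=1):
    """Simulate image rotation at the patch-feature level: permute the
    patch tokens (N, D) as if the h x w patch grid were rotated by k*90
    degrees counter-clockwise, leaving each feature vector untouched."""
    h, w = grid_hw
    grid = tokens.reshape(h, w, -1)
    rotated = np.rot90(grid, k=k, axes=(0, 1))
    return rotated.reshape(-1, tokens.shape[-1])

# six tokens on a 2 x 3 grid, one-dimensional features for readability
tokens = np.arange(6, dtype=float).reshape(6, 1)
rot = rotate_patch_tokens(tokens, (2, 3), k=1)
```

Applying such permutations to high-level features, rather than rotating input images, avoids the information loss that image-level rotation augmentation incurs when re-patchifying.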
|
Microscopically, collisionless reconnection in thin current sheets is argued
to involve `composite electrons' in the ion inertial (Hall current) domain, a
tiny fraction of electrons only. These `composite electrons' are confined to
lower Landau levels $\epsilon_L\ll T_e$ (energy much less than temperature).
They demagnetise by absorbing magnetic flux quanta $\Phi_0=h/e$, decouple from
the magnetic field, transport the attached magnetic flux into the non-magnetic
centre of the current layer, where they release the flux in the form of
micro-scale magnetic vortices, becoming ordinary electrons. The newly born
micro-scale magnetic vortices reconnect in their strictly anti-parallel
sections when contacting other vortices, ultimately producing the meso-scale
reconnection structure. We clarify the notions of magnetic field lines and
field line radius, estimate the power released when two oppositely directed
flux quanta annihilate, and calculate the number density and Landau-level
filling-factor of `composite electrons' in the Hall domain. As a side product
we find that the magnetic diffusion coefficient in plasma also appears in quanta
$D_0^m=e\Phi_0/m_e=h/m_e$, yielding that the bulk perpendicular plasma
resistivity is quantised, with quantum (lowest limit) $\eta_{\,0\perp}=\mu_0
e\Phi_0/m_e=\mu_0h/m_e\sim 10^{-9}$ Ohm m.
Keywords: Reconnection, thin current sheets, quantum Hall effect, quantised
diffusivity, quantised plasma resistivity, composite electrons
|
High-resolution optical spectra of the ultraluminous X-ray source NGC 5408
X-1 show a broad component with a width of ~750 km/s in the HeII and Hbeta
lines in addition to the narrow component observed in these lines and [O III].
Reanalysis of moderate-resolution spectra shows a similar broad component in
the HeII line. The broad component likely originates in the ULX system itself,
probably in the accretion disk. The central wavelength of the broad HeII line
is shifted by 252 \pm 47 km/s between the two observations. If this shift
represents motion of the compact object, then its mass is less than ~1800
M_sun.
|
Hybrid refractive-diffractive lenses combine the light efficiency of
refractive lenses with the information encoding power of diffractive optical
elements (DOE), showing great potential as the next generation of imaging
systems. However, accurately simulating such hybrid designs is generally
difficult, and in particular, there are no existing differentiable image
formation models for hybrid lenses with sufficient accuracy.
In this work, we propose a new hybrid ray-tracing and wave-propagation
(ray-wave) model for accurate simulation of both optical aberrations and
diffractive phase modulation, where the DOE is placed between the last
refractive surface and the image sensor, i.e. away from the Fourier plane that
is often used as a DOE position. The proposed ray-wave model is fully
differentiable, enabling gradient back-propagation for end-to-end co-design of
refractive-diffractive lens optimization and the image reconstruction network.
We validate the accuracy of the proposed model by comparing the simulated point
spread functions (PSFs) with theoretical results, as well as simulation
experiments that show our model to be more accurate than solutions implemented
in commercial software packages like Zemax. We demonstrate the effectiveness of
the proposed model through real-world experiments and show significant
improvements in both aberration correction and extended depth-of-field (EDoF)
imaging. We believe the proposed model will motivate further investigation into
a wide range of applications in computational imaging, computational
photography, and advanced optical design. Code will be released upon
publication.
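As a rough illustration of the wave-propagation half of such a ray-wave model (a generic sketch of the standard angular spectrum method, not the authors' implementation), the complex field handed over by the ray tracer at the DOE plane can be multiplied by the DOE phase and then propagated to the sensor:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z in free space
    using the angular spectrum method (evanescent waves cut off)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical usage: field after the last refractive surface (from ray
# tracing), modulated by the DOE phase, then propagated to the sensor:
# sensor_field = angular_spectrum(field * np.exp(1j * doe_phase), wl, dx, z)
```

Because every step (ray trace, phase multiplication, FFT-based propagation) is differentiable, gradients can flow from the sensor image back to both the lens and DOE parameters.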
|
We present a new approach to ubiquitous sensing for indoor applications,
using high-efficiency and low-cost indoor perovskite photovoltaic cells as
external power sources for backscatter sensors. We demonstrate wide-bandgap
perovskite photovoltaic cells for indoor light energy harvesting: the 1.63 eV
and 1.84 eV devices achieve efficiencies of 21% and 18.5%, respectively, under
indoor compact fluorescent lighting, with a champion
open-circuit voltage of 0.95 V in a 1.84 eV cell under a light intensity of
0.16 mW/cm2. Subsequently, we demonstrate a wireless temperature sensor
self-powered by a perovskite indoor light-harvesting module. We connect three
perovskite photovoltaic cells in series to create a module that produces 14.5
uW output power under 0.16 mW/cm2 of compact fluorescent illumination with an
efficiency of 13.2%. We use this module as an external power source for a
battery-assisted RFID temperature sensor and demonstrate a read range of 5.1
meters while maintaining high-frequency measurements every 1.24 seconds.
Our combined indoor perovskite photovoltaic modules and backscatter
radio-frequency sensors are further discussed as a route to ubiquitous sensing
in buildings, given their potential for low-cost integrated manufacturing,
their freedom from battery replacement, and the high-frequency data collection
they enable.
|
Structural subgrid stress models for large eddy simulation often allow for
backscatter of energy from unresolved to resolved turbulent scales, but
excessive model backscatter can eventually result in numerical instability. A
commonly employed strategy to overcome this issue is to set predicted subgrid
stresses to zero in regions of model backscatter. This clipping procedure
improves the stability of structural models, albeit at the cost of reduced
correlation between the predicted subgrid stresses and the exact subgrid
stresses. In this article, we propose an alternative strategy that removes
model backscatter from model predictions through the solution of a constrained
minimization problem. This procedure, which we refer to as optimal clipping,
results in a parameter-free mixed model, and it yields predicted subgrid
stresses in higher correlation with the exact subgrid stresses as compared with
those attained with the traditional clipping procedure. We perform a series of
a priori and a posteriori tests to investigate the impact of applying the
traditional and optimal clipping procedures to Clark's gradient subgrid stress
model, and we observe that optimal clipping leads to a significant improvement
in model predictions as compared to the traditional clipping procedure.
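The constrained minimization admits a simple closed form under a common sign convention (a sketch assuming backscatter corresponds to a positive contraction tau:S; the paper's exact formulation may differ): the optimally clipped stress is the Frobenius-nearest projection of the model prediction onto the non-backscattering half-space.

```python
import numpy as np

def traditional_clip(tau, S):
    """Traditional clipping: zero the entire predicted stress wherever
    it backscatters (here: tau:S > 0)."""
    return np.zeros_like(tau) if np.tensordot(tau, S) > 0 else tau

def optimal_clip(tau, S):
    """Project tau onto {tau* : tau*:S <= 0}, i.e. the closest
    (Frobenius norm) stress tensor with no backscatter."""
    num = np.tensordot(tau, S)  # Frobenius inner product tau:S
    if num <= 0:
        return tau  # already purely dissipative, leave untouched
    return tau - (num / np.tensordot(S, S)) * S
```

Because the projection subtracts a term proportional to the resolved strain rate S, the clipped prediction is effectively a parameter-free mixed model, and by construction it is never farther from the original prediction than the zeroed-out traditional clip.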
|
In this paper we present a solution for Kaluza-Klein magnetic monopole in a
five-dimensional global monopole spacetime. This new solution is a
generalization of previous ones obtained by D. Gross and M. Perry (Nucl. Phys.
B {\bf 226}, 29 (1983)) containing a magnetic monopole in a Ricci-flat
formalism, and by A. Banerjee, S. Chatterjee and A. See (Class. Quantum Grav.
{\bf 13}, 3141 (1996)) for a global monopole in a five-dimensional spacetime,
setting a specific integration constant to zero. We also analyse the classical
motion of a massive charged test particle on this manifold and present the
equation for classical trajectory obeyed by this particle.
|
Motion planning in modified environments is a challenging task, as it
compounds the innate difficulty of the motion planning problem with a changing
environment. This renders some algorithmic methods, such as probabilistic
roadmaps, less viable, as nodes and edges may become invalid as a result of
these changes. In this paper, we present a method of transforming any
configuration space graph, such as a roadmap, into a dynamic data structure
capable of updating the validity of its nodes and edges in response to
discrete changes in obstacle positions. We use methods from computational
geometry to compute 3D swept volume approximations of configuration space
points and curves, achieving 10-40 percent faster updates and up to 60 percent
faster motion planning queries than previous algorithms, while requiring a
significantly shorter pre-processing phase: minutes instead of the hours
needed by the competing method to achieve comparable update times.
|
We extend the classical stability theorem of Erd\H{o}s and Simonovits in two
directions: first, we allow the order of the forbidden graph to grow as the
logarithm of the order of the host graph, and second, our extremal condition
is on the spectral
radius of the host graph.
|
Contrast enhancement (CE) forensics has always been of concern to the image
forensics community. It can provide an effective tool for recovering image
history and identifying tampered images. Although several CE forensic
algorithms have been proposed, their robustness against some processing, such
as JPEG compression and anti-forensic attacks, is still unsatisfactory. In
order to attenuate such deficiency, in this paper we first present a
discriminability analysis of CE forensics in the pixel and gray-level
histogram domains. Then, in these two domains, two end-to-end methods based on
convolutional neural networks (P-CNN, H-CNN) are proposed to achieve robust CE
forensics against pre-JPEG compression and anti-forensic attacks.
Experimental results show that the proposed methods achieve much better
performance than the state-of-the-art schemes for CE detection in the case of
no other operation, and comparable performance when pre-JPEG compression and
anti-forensic attacks are applied.
|
Let $\mathfrak h_t$ be the KPZ fixed point started from any initial condition
that guarantees $\mathfrak h_t$ has a maximum at every time $t$ almost surely.
For any fixed $t$, almost surely $\max \mathfrak h_t$ is uniquely attained.
However, there are exceptional times $t \in (0, \infty)$ when $\max \mathfrak
h_t$ is achieved at multiple points. Let $\mathcal T_k \subset (0, \infty)$
denote the set of times when $\max \mathfrak h_t$ is achieved at exactly $k$
points. We show that almost surely $\mathcal T_2$ has Hausdorff dimension $2/3$
and is dense, $\mathcal T_3$ has Hausdorff dimension $1/3$ and is dense,
$\mathcal T_4$ has Hausdorff dimension $0$, and there are no times when $\max
\mathfrak h_t$ is achieved at $5$ or more points. This resolves two conjectures
of Corwin, Hammond, Hegde, and Matetski.
|
Decentralized learning provides an effective framework to train machine
learning models with data distributed over arbitrary communication graphs.
However, most existing approaches toward decentralized learning disregard the
interaction between data heterogeneity and graph topology. In this paper, we
characterize the dependence of convergence on the relationship between the
mixing weights of the graph and the data heterogeneity across nodes. We propose
a metric that quantifies the ability of a graph to mix the current gradients.
We further prove that the metric controls the convergence rate, particularly in
settings where the heterogeneity across nodes dominates the stochasticity
between updates for a given node. Motivated by our analysis, we propose an
approach that periodically and efficiently optimizes the metric using standard
convex constrained optimization and sketching techniques. Through comprehensive
experiments on standard computer vision and NLP benchmarks, we show that our
approach leads to improvement in test performance for a wide range of tasks.
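A hedged, illustrative proxy for such a metric (not the paper's exact definition) measures how far one gossip round with mixing matrix W leaves the per-node gradients from their global average:

```python
import numpy as np

def mixing_quality(W, G):
    """Illustrative proxy for gradient-mixing ability: relative distance
    of the mixed gradients W @ G from the global average gradient.
    W: (n, n) doubly stochastic mixing matrix; G: (n, d) node gradients.
    0 means perfect mixing; 1 means no mixing at all."""
    avg = G.mean(axis=0, keepdims=True)
    return np.linalg.norm(W @ G - avg) / max(np.linalg.norm(G - avg), 1e-12)
```

The complete-graph matrix W = (1/n)11^T drives the value to zero in one round, while W = I (no communication) leaves it at one; optimizing the mixing weights pushes the metric toward the former regime for the current gradients.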
|
Our objective in this series of two articles, of which the present article is
the first, is to give a Perrin-Riou-style construction of $p$-adic
$L$-functions (of Bella\"iche and Stevens) over the eigencurve. As the first
ingredient, we interpolate the Beilinson-Kato elements over the eigencurve
(including the neighborhoods of $\theta$-critical points). Along the way, we
prove \'etale variants of Bella\"iche's results describing the local properties
of the eigencurve. We also develop the local framework to construct and
establish the interpolative properties of these $p$-adic $L$-functions away
from $\theta$-critical points.
|
We address the problem of influence maximization when the social network is
accompanied by diffusion cascades. In prior works, such information is used to
compute influence probabilities, which is utilized by stochastic diffusion
models in influence maximization. Motivated by the recent criticism on the
effectiveness of diffusion models as well as the galloping advancements in
influence learning, we propose IMINFECTOR (Influence Maximization with
INFluencer vECTORs), a unified approach that uses representations learned from
diffusion cascades to perform model-independent influence maximization that
scales in real-world datasets. The first part of our methodology is a
multi-task neural network that learns embeddings of nodes that initiate
cascades (influencer vectors) and embeddings of nodes that participate in them
(susceptible vectors). The norm of an influencer vector captures the ability of
the node to create lengthy cascades and is used to estimate the expected
influence spread and reduce the number of candidate seeds. In addition, the
combination of influencer and susceptible vectors form the diffusion
probabilities between nodes. These are used to reformulate the network as a
bipartite graph and propose a greedy solution to influence maximization that
retains the theoretical guarantees. We apply our method in three sizable
networks with diffusion cascades and evaluate it using cascades from future
time steps. IMINFECTOR is able to scale in all of them and outperforms various
competitive algorithms and metrics from the diverse landscape of influence
maximization in terms of efficiency and seed set quality.
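The bipartite greedy step can be sketched as follows (an illustrative implementation, assuming the diffusion probabilities P have already been formed from influencer/susceptible vector products):

```python
import numpy as np

def greedy_seeds(P, k):
    """Greedy influence maximization on a bipartite graph.
    P[s, t] = diffusion probability from candidate seed s to target t.
    Greedily maximizes the expected coverage
    sum_t (1 - prod_{s in seeds} (1 - P[s, t])), a submodular objective."""
    n_seeds = P.shape[0]
    chosen, miss = [], np.ones(P.shape[1])  # miss[t] = P(t not influenced)
    for _ in range(k):
        gains = [(miss * P[s]).sum() if s not in chosen else -1.0
                 for s in range(n_seeds)]
        best = int(np.argmax(gains))
        chosen.append(best)
        miss *= 1.0 - P[best]
    return chosen
```

Since the coverage objective is monotone submodular, this greedy selection retains the classical (1 - 1/e) approximation guarantee mentioned above.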
|
Permutation invariant training (PIT) is a widely used training criterion for
neural network-based source separation, used for both utterance-level
separation with utterance-level PIT (uPIT) and separation of long recordings
with the recently proposed Graph-PIT. When implemented naively, both suffer
from an exponential complexity in the number of utterances to separate,
rendering them unusable for large numbers of speakers or long realistic
recordings. We present a decomposition of the PIT criterion into the
computation of a matrix and a strictly monotonically increasing function so that
the permutation or assignment problem can be solved efficiently with several
search algorithms. The Hungarian algorithm can be used for uPIT and we
introduce various algorithms for the Graph-PIT assignment problem to reduce the
complexity to be polynomial in the number of utterances.
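For uPIT, once the per-pair loss matrix has been computed, the permutation search reduces to an assignment problem solvable with the Hungarian algorithm as implemented in SciPy (a minimal sketch, assuming the total loss decomposes as a sum over matched pairs):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def upit_loss(pairwise_loss):
    """uPIT via the Hungarian algorithm: pairwise_loss[i, j] is the loss
    of matching estimated source i to reference source j. Solving the
    assignment problem replaces the O(S!) search over permutations."""
    rows, cols = linear_sum_assignment(pairwise_loss)
    return pairwise_loss[rows, cols].mean(), cols
```

For S sources this costs O(S^3) instead of enumerating all S! permutations; the Graph-PIT assignment problem requires the dedicated search algorithms introduced in the paper.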
|
In this short note, we describe the preparation of updated templates for the
interpretation of SUSY results from the LHC in the context of mSUGRA. The
standard (m_0,m_{1/2}) plane is shown for fixed mu > 0 and m_t = 173.2 GeV. Two
scenarios are considered: (1) A_0 = 0 GeV and tan(beta)=10 and (2) A_0 = -500
GeV and tan(beta)=40. In each case, the universal scalar mass parameter m_0
varies in the range [40,3000] GeV, while the universal gaugino mass parameter
m_{1/2} varies in the range [100,1000] GeV. We delineate notable regions in
parameter space, including the region with a charged LSP (stau), the LEP2
reach, and the cosmologically preferred region with 100% neutralino dark
matter. The templates also show mass contours for a few key particles (gluino,
squark and Higgs boson). The mass spectrum is calculated with the
SoftSusy-3.2.4 package, while the neutralino relic density is obtained with
MicrOMEGAs version 2.4.
|
The expected distributions of eclipse-depth versus period for eclipsing
binaries of different luminosities are derived from large-scale population
synthesis experiments. Using the rapid Hurley et al. BSE binary evolution code,
we have evolved several hundred million binaries, starting from various simple
input distributions of masses and orbit-sizes. Eclipse probabilities and
predicted distributions over period and eclipse-depth (P/dm) are given in a
number of main-sequence intervals, from O-stars to brown dwarfs. The comparison
between theory and Hipparcos observations shows that a standard (Duquennoy &
Mayor) input distribution of orbit-sizes (a) gives reasonable numbers and
P/dm-distributions, as long as the mass-ratio distribution is also close to the
observed flat ones. A random pairing model, where the primary and secondary are
drawn independently from the same IMF, gives more than an order of magnitude
too few eclipsing binaries on the upper main sequence. For a set of eclipsing
OB-systems in the LMC, the observed period-distribution is different from the
theoretical one, and the input orbit distributions and/or the evolutionary
environment in the LMC has to be different compared with the Galaxy. A natural
application of these methods is the estimation of the numbers and properties
of eclipsing binaries observed by large-scale surveys like Gaia.
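For context, the geometric eclipse probability underlying such population-synthesis counts (a standard textbook estimate, not a formula quoted from the abstract) is:

```python
def eclipse_probability(r1, r2, a):
    """Geometric eclipse probability for a circular orbit with random
    orientation: eclipses occur when cos(i) < (R1 + R2) / a, so the
    probability is (R1 + R2) / a, capped at 1."""
    return min(1.0, (r1 + r2) / a)
```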
|
We investigate the generation of an entangled electron pair emerging from a
system composed of two quantum dots attached to a superconductor Cooper pair
beam splitter. We take into account three processes: Crossed Andreev
Reflection, cotuneling, and Coulomb interaction. Together, these processes play
crucial roles in the formation of entangled electronic states, with electrons
being in spatially separated quantum dots. By using perturbation theory, we
derive an analytical effective model that allows a simple picture of the
intricate process behind the formation of the entangled state. Several
entanglement quantifiers, including quantum mutual information, negativity, and
concurrence, are employed to validate our findings. Finally, we define and
calculate the covariance associated with the detection of two electrons, each
originating from one of the quantum dots with a specific spin value. The time
evolution of this observable follows the dynamics of all entanglement
quantifiers, thus suggesting that it can be a useful tool for mapping the
creation of entangled electrons in future applications within quantum
information protocols.
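The entanglement quantifiers named above have standard two-qubit forms; a generic sketch (independent of the paper's specific double-dot model):

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])  # Pauli-Y

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    R = rho @ np.kron(SY, SY) @ rho.conj() @ np.kron(SY, SY)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    """Negativity: total magnitude of the negative eigenvalues of the
    partial transpose over the second qubit."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(pt)
    return float(-ev[ev < 0].sum())
```

For a maximally entangled Bell state the concurrence equals 1 and the negativity equals 1/2, while both vanish for a product state.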
|
In this work we study a homogeneous and quasilocal Thermodynamics associated
to the Schwarzschild-anti de Sitter black hole. The usual thermodynamic
description is extended within a Hamiltonian approach with the introduction of
the cosmological constant in the thermodynamic phase space. The treatment
presented is consistent inasmuch as it respects the laws of black hole
Thermodynamics and accepts the introduction of any thermodynamic potential. We
are able to construct new equations of state that characterize the
Thermodynamics. Novel phenomena can be expected from the proposed setup.
|
Generically, spectral statistics of spinless systems with time reversal
invariance (TRI) and chaotic dynamics are well described by the Gaussian
Orthogonal ensemble (GOE). However, if an additional symmetry is present, the
spectrum can be split into independent sectors whose statistics depend on the
type of the group's irreducible representation. In particular, this allows the
construction of TRI quantum graphs with spectral statistics characteristic of
the Gaussian Symplectic ensemble (GSE). To this end one usually has to use
groups admitting pseudo-real irreducible representations. In this paper we show
how GSE spectral statistics can be realized in TRI systems with simpler
symmetry groups lacking pseudo-real representations. As an application, we
provide a class of quantum graphs with only $C_4$ rotational symmetry
possessing GSE spectral statistics.
|
The Cauchy problem is considered for the perturbed strictly hyperbolic 2x2
system of quasilinear equations. The unperturbed problem has a persistent
solution with two discontinuity lines (shock waves). Both the asymptotics of
the shock-wave positions in the (x,t) plane and the asymptotics of the
perturbed problem's solution are discussed.
|
Reinforcement learning (RL) allows an agent interacting sequentially with an
environment to maximize its long-term expected return. In the distributional RL
(DistrRL) paradigm, the agent goes beyond the limit of the expected value, to
capture the underlying probability distribution of the return across all time
steps. The set of DistrRL algorithms has led to improved empirical performance.
Nevertheless, the theory of DistrRL is still not fully understood, especially
in the control case. In this paper, we present the simpler one-step
distributional reinforcement learning (OS-DistrRL) framework encompassing only
the randomness induced by the one-step dynamics of the environment. Contrary to
DistrRL, we show that our approach comes with a unified theory for both policy
evaluation and control. Indeed, we propose two OS-DistrRL algorithms for which
we provide an almost sure convergence analysis. The proposed approach compares
favorably with categorical DistrRL on various environments.
|
In high speed railways (HSRs) communication system, when a train travels
along the railway at high velocity, the wireless channel between the train
and base station varies rapidly, which makes it essential to implement
appropriate power allocation to guarantee system performance. Moreover, how to
evaluate the performance limits in this new scenario also needs consideration.
To this end, this paper investigates the performance limits of
wireless communication in HSRs scenario. Since the hybrid information
transmitted between train and base station usually has diverse quality of
service (QoS) requirements, QoS-based achievable rate region is utilized to
characterize the transmission performance in this paper. It is proved that
traditional ergodic capacity and outage capacity with unique QoS requirement
can be regarded as two extreme cases of the achievable rate region proposed in
this paper. The corresponding optimal power allocation strategy is also given
to achieve the maximal boundary of the achievable rate region. Compared with
conventional strategies, the advantages of the proposed strategy are validated
in terms of green communication, namely minimizing average transmit power.
Besides, the hybrid information transmission in a non-uniform generalized
motion scenario is analyzed to confirm the robust performance of the proposed
strategy. The performance loss caused by non-uniform motion compared with that
in uniform motion is also indicated, where a deterministic worst case for
instantaneous speed realization is proposed to serve as the lower bound for
system performance.
|
Autonomous 3D part assembly is a challenging task in the areas of robotics
and 3D computer vision. This task aims to assemble individual components into a
complete shape without relying on predefined instructions. In this paper, we
formulate this task from a novel generative perspective, introducing the
Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly.
However, score-based methods are typically time-consuming during the inference
stage. To address this issue, we introduce a novel algorithm called the Fast
Predictor-Corrector Sampler (FPC) that accelerates the sampling process within
the framework. We employ various metrics to assess assembly quality and
diversity, and our evaluation results demonstrate that our algorithm
outperforms existing state-of-the-art approaches. We release our code at
https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly.
|
Recognizing the explosive increase in the use of DNN-based applications,
several industrial companies developed a custom ASIC (e.g., Google TPU, IBM
RaPiD, Intel NNP-I/NNP-T) and constructed a hyperscale cloud infrastructure
with it. The ASIC performs operations of the inference or training process of
DNN models which are requested by users. Since the DNN models have different
data formats and types of operations, the ASIC needs to support diverse data
formats and generality for the operations. However, the conventional ASICs do
not fulfill these requirements. To overcome these limitations, we propose a
flexible DNN accelerator called All-rounder. The accelerator is designed with
an area-efficient multiplier supporting multiple precisions of integer and
floating point datatypes. In addition, it constitutes a flexibly fusible and
fissionable MAC array to support various types of DNN operations efficiently.
We implemented the register transfer level (RTL) design using Verilog and
synthesized it in 28 nm CMOS technology. To examine the practical effectiveness of
our proposed designs, we designed two multiply units and three state-of-the-art
DNN accelerators. We compare our multiplier with the multiply units and perform
architectural evaluation on performance and energy efficiency with eight
real-world DNN models. Furthermore, we compare benefits of the All-rounder
accelerator to a high-end GPU card, i.e., the NVIDIA GeForce RTX 3090. The
proposed All-rounder accelerator consistently achieves speedups and higher
energy efficiency than the baselines across various DNN benchmarks.
|
Given a planar oval, consider the maximal area of inscribed $n$-gons resp.
the minimal area of circumscribed $n$-gons. One obtains two sequences indexed
by $n$, and one of Dowker's theorems states that the first sequence is concave
and the second is convex. In total, there are four such classic results,
concerning areas resp. perimeters of inscribed resp. circumscribed polygons,
due to Dowker, Moln\'ar, and Eggleston. We show that these four results are all
incarnations of the convexity property of Mather's $\beta$-function (the
minimal average action function) of the respective billiard-type systems. We
then derive new geometric inequalities of similar type for various other
billiard systems. Some of these billiards have been thoroughly studied, and some
are novel. Moreover, we derive new inequalities (even for conventional
billiards) for higher rotation numbers.
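For concreteness, the inscribed-area instance of Dowker's theorem mentioned above can be stated as:

```latex
% A_n = maximal area of an n-gon inscribed in a fixed convex oval K.
% Dowker's theorem: the sequence (A_n) is concave,
A_{n-1} + A_{n+1} \le 2A_n, \qquad n \ge 4,
% equivalently, the area deficit |K| - A_n is convex in n -- the
% convexity property recovered here from Mather's \beta-function of the
% associated billiard-type system.
```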
|
For the electrochemical hydrogen evolution reaction (HER), developing
high-performance catalysts free of precious metals is currently a major
research focus. Herein, we show the feasibility of HER catalytic enhancement
in Ni-based materials based on topological engineering from hybrid Weyl
states. Via high-throughput computational screening of 140 000 materials, we
identify the chiral compound NiSi as a hybrid Weyl semimetal (WSM), showing
bulk type-I and type-II Weyl nodes and long surface Fermi arcs near the Fermi
level. Ample evidence verifies that topological charge carriers participate in
the HER process and make certain surfaces of NiSi highly active, with a Gibbs
free energy of nearly zero (0.07 eV), which is even lower than that of Pt and
sits at the top of the volcano plot. This work opens up a new route to develop
precious-metal-free HER catalysts via topological engineering, rather than
traditional defect, doping, or strain engineering.
|
The properties of the d-wave superconducting state in the two-dimensional
system have been studied. It has been assumed that the pairing mechanism is
based on the electron-phonon and the electron-electron-phonon interactions. The
obtained results have shown the energy gap amplitude ($\Delta_{tot}$)
crossover, from the BCS to non-BCS behavior, as the value of the
electron-electron-phonon potential increases. The model has been tested for the
${\rm La_{2-x}Sr_{x}CuO_{4}}$ and ${\rm Bi_{2}Sr_{2}CaCu_{2}O_{8+\delta}}$
high-$T_{C}$ superconductors. It has been shown that the dependence of the
$2\Delta^{(0)}_{tot}/k_{B}T_{C}$ ratio on the hole density is in agreement with
the experimental data.
|
We experimentally demonstrate the principle of an on-chip submillimeter wave
filter bank spectrometer, using superconducting microresonators as narrow
band-separation filters. The filters are made of NbTiN/SiNx/NbTiN microstrip
line resonators, which have a resonance frequency in the range of 614-685
GHz---two orders of magnitude higher in frequency than what is currently
studied for use in circuit quantum electrodynamics and photodetectors. The
frequency resolution of the filters decreases from 350 to 140 with increasing
frequency, most likely limited by dissipation of the resonators.
|
We present new multi-test Bayesian optimization models and algorithms for use
in large scale material screening applications. Our screening problems are
designed around two tests, one expensive and one cheap. This paper differs from
other recent work on multi-test Bayesian optimization through use of a flexible
model that allows for complex, non-linear relationships between the cheap and
expensive test scores. This additional modeling flexibility is essential in the
material screening applications which we describe. We demonstrate the power of
our new algorithms on a family of synthetic toy problems as well as on real
data from two large scale screening studies.
|
Due to the limited computing resources of a swarm of drones, it is difficult
to handle computation-intensive tasks locally, hence cloud-based computation
offloading is widely adopted. However, for business that requires low latency
and high reliability, the cloud-based solution is not suitable because of the
slow response time caused by long-distance data transmission. Therefore, to
solve the problem mentioned above, in this paper we introduce fog computing
into the swarm of drones (FCSD). Focusing on latency- and
reliability-sensitive business scenarios, we formulate latency and reliability
as the constraints of the optimization problem. To enhance the practicality of
the FCSD system, we take the energy consumption of the FCSD as the objective
function, minimizing it under the premise of satisfying the latency and
reliability requirements of the task. Furthermore, a heuristic algorithm based
on a genetic algorithm is designed to perform optimal task allocation in the
FCSD system. The
simulation results validate that the proposed fog based computation offloading
with the heuristic algorithm can complete the computing task effectively with
the minimal energy consumption under the requirements of latency and
reliability.
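A toy version of such a genetic-algorithm allocator (hypothetical cost tables and a penalty method standing in for the paper's exact constraint handling):

```python
import random

def ga_allocate(tasks, nodes, energy, latency, max_latency,
                pop=40, gens=200, seed=1):
    """Toy GA for task allocation: energy[t][n] and latency[t][n] are the
    costs of assigning task t to fog node n. Minimizes total energy;
    assignments violating the latency bound are heavily penalized."""
    rng = random.Random(seed)

    def fitness(a):
        e = sum(energy[t][a[t]] for t in range(tasks))
        l = max(latency[t][a[t]] for t in range(tasks))
        return e + (1e6 if l > max_latency else 0.0)

    P = [[rng.randrange(nodes) for _ in range(tasks)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 2]                 # keep the best half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, tasks)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:            # random mutation
                child[rng.randrange(tasks)] = rng.randrange(nodes)
            children.append(child)
        P = elite + children
    return min(P, key=fitness)
```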
|
In this paper, asymptotic behavior of convolution of distributions belonging
to two subclasses of distributions with exponential tails are considered,
respectively. The precise second-order tail asymptotics of the convolutions are
derived under the condition of second-order regular variation.
|
Here, we build on the works of Scuseria et al.
http://dx.doi.org/10.1063/1.3043729 and Berkelbach
https://doi.org/10.1063/1.5032314 to show connections between the
Bethe-Salpeter equation (BSE) formalism combined with the $GW$ approximation
from many-body perturbation theory and coupled-cluster (CC) theory at the
ground- and excited-state levels. In particular, we show how to recast the $GW$
and Bethe-Salpeter equations as non-linear CC-like equations. Similitudes
between BSE@$GW$ and the similarity-transformed equation-of-motion CC method
introduced by Nooijen are also put forward. The present work allows one to easily
transfer key developments and general knowledge gathered in CC theory to
many-body perturbation theory. In particular, it may provide a path for the
computation of ground- and excited-state properties (such as nuclear gradients)
within the $GW$ and BSE frameworks.
|
Augmented Reality has been subject to various integration efforts within
industries due to its ability to enhance human machine interaction and
understanding. Neural networks have achieved remarkable results in areas of
computer vision, which bear great potential to assist and facilitate an
enhanced Augmented Reality experience. However, most neural networks are
computationally intensive and demand huge processing power, and thus are not
suitable for deployment on Augmented Reality devices. In this work we propose
a method to deploy state-of-the-art neural networks for real-time 3D object
localization on augmented reality devices. As a result, we provide a more
automated method of calibrating the AR devices with mobile robotic systems. To
accelerate the calibration process and enhance user experience, we focus on
fast 2D detection approaches which extract the 3D pose of the object quickly
and accurately using only 2D input. The results are implemented in an
Augmented Reality application for intuitive robot control and sensor data
visualization. For the 6D annotation of 2D images, we developed an annotation
tool which is, to our knowledge, the first such open-source tool available.
We achieve feasible results which are generally applicable to any AR device
thus making this work promising for further research in combining high
demanding neural networks with Internet of Things devices.
|
We propose an end-to-end pipeline to robustly generate high-quality
quadrilateral meshes for complex CAD models. An initial quad-dominant mesh is
generated with frontal point insertion guided by a locally integrable cross
field and a scalar size map adapted to the small CAD features. After triangle
combination and midpoint-subdivision into an all-quadrilateral mesh, the
topology of the mesh is modified to reduce the number of irregular vertices.
The idea is to preserve the irregular vertices matching cross-field
singularities and to eliminate the others. The topological modifications are
either local and based on disk quadrangulations, or more global with the
remeshing of patches of quads according to predefined patterns. Validity of the
quad mesh is guaranteed by monitoring element quality during all operations and
reverting the changes when necessary. Advantages of our approach include
robustness, strict respect of the CAD features and support for user-prescribed
size constraints. The quad mesher, which is available in Gmsh, is validated and
illustrated on two datasets of CAD models.
|
This paper explores two critical infrastructure proposals as alternatives to
the current state of the Internet protocols: IPFS (Interplanetary File System)
and Scuttlebutt, highlighting the political presuppositions and debates of
these technical enterprises. To do so, I propose to analyze the discourses of
the developers of these two systems in the mode of a critical discourse
analysis. This article highlights a particular form of criticism of Internet
regimes: infrastructural criticism, and highlights its variety through a
comparative study. Through these two case studies, we will see how different
alternatives to the current spatio-temporal implementations of the Internet
allow us to identify the agency dimensions of these acts of hijacking and
substitution, characterizing two quite different approaches to decentralized
protocols, yet linked by a technical similarity.
|
Description logics (DLs) are well-known knowledge representation formalisms
focused on the representation of terminological knowledge. Due to their
first-order semantics, these languages (in their classical form) are not
suitable for representing and handling uncertainty. A probabilistic extension
of a light-weight DL was recently proposed for dealing with certain knowledge
occurring in uncertain contexts. In this paper, we continue that line of
research by introducing the Bayesian extension \BALC of the propositionally
closed DL \ALC. We present a tableau-based procedure for deciding consistency,
and adapt it to solve other probabilistic, contextual, and general inferences
in this logic. We also show that all these problems remain \ExpTime-complete,
the same as reasoning in the underlying classical \ALC.
|
We demonstrate a novel grating coupler design based on double asymmetric and
vertically oriented waveguide scatterers to efficiently couple normally
incident light into the fundamental mode of a silicon waveguide lying on a buried oxide
layer.
|
Multipole radio-frequency traps are central to collisional experiments in
cryogenic environments. They also offer the possibility of generating new
types of ion crystal topologies, and in particular of creating infinite
1D/2D structures: ion rings and ion tubes. However, multipole traps have also
been shown to be very sensitive to geometrical misalignment of the trap rods,
leading to additional local trapping minima. The present work proposes a method
to correct non-ideal potentials, by modifying the applied radio-frequency
amplitudes for each trap rod. This approach is discussed for the octupole trap,
leading to the restoration of the ideal Mexican-hat-like pseudo-potential
expected in multipole traps. The quality of the compensation method is
quantified in terms of the choice of the diagnosis area, the residual trapping
potential variations, the required adaptation of the applied radio-frequency
voltage amplitudes, and the impact on the trapped ion structures. Experimental
implementation for macroscopic multipole traps is also discussed, in order to
propose a diagnostic method with respect to the resolution and stability of the
trap drive. Using the proposed compensation technique, we discuss the
feasibility of generating a homogeneous ion ring crystal, whose homogeneity
serves as a measure of the quality of the obtained potential well.
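The compensation idea (adjust the RF amplitude applied to each rod so that the pseudo-potential along the ideal ring is as uniform as possible) can be sketched with a crude model. All of the following is an assumption-laden toy, not the authors' method: rods are modeled as 2D line charges with alternating RF phase, the pseudo-potential is taken proportional to |E|^2, and a single displaced rod is compensated by a brute-force amplitude scan.

```python
import numpy as np

def pseudo_potential(points, rods, amps):
    """Pseudo-potential ~ |E|^2 of an RF multipole; each rod is modeled as a
    2D line charge with alternating RF phase (toy model, not a field solver)."""
    signs = np.array([(-1) ** i for i in range(len(rods))])
    E = np.zeros_like(points)
    for r, a, s in zip(rods, amps, signs):
        d = points - r
        E += s * a * d / (d ** 2).sum(axis=1, keepdims=True)
    return (E ** 2).sum(axis=1)

def ring_variation(rods, amps, radius=0.5, n=256):
    """Std. dev. of the pseudo-potential along the nominal trapping ring:
    the quantity the compensation tries to minimize."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = radius * np.c_[np.cos(t), np.sin(t)]
    return pseudo_potential(pts, rods, amps).std()

# Ideal octupole: 8 rods on the unit circle; displace one rod radially.
angles = np.arange(8) * np.pi / 4
rods = np.c_[np.cos(angles), np.sin(angles)]
rods[0] *= 1.05                      # geometric misalignment

amps = np.ones(8)
base = ring_variation(rods, amps)    # variation with uniform amplitudes
# Compensate by scanning the displaced rod's RF amplitude.
best = min(np.linspace(0.8, 1.2, 81),
           key=lambda a: ring_variation(rods, np.r_[a, amps[1:]]))
```

Since the scan includes the uncompensated amplitude, the optimized azimuthal variation is never worse than the misaligned one; in a real trap the per-rod amplitudes would be fitted jointly against a measured or simulated potential map.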
|
We introduce a general Hamiltonian describing coherent superpositions of
Cooper pairs and condensed molecular bosons. For particular choices of the
coupling parameters, the model is integrable. One integrable manifold, as well
as the Bethe ansatz solution, was found by Dukelsky et al., Phys. Rev. Lett. 93
(2004) 050403. Here we show that there is a second integrable manifold,
established using the boundary Quantum Inverse Scattering Method. In this
manner we obtain the exact solution by means of the algebraic Bethe ansatz. In
the case where the Cooper pair energies are degenerate we examine the
relationship between the spectrum of these integrable Hamiltonians and the
quasi-exactly solvable spectrum of particular Schrödinger operators. For the
solution we derive here, the potential of the Schrödinger operator is given in
terms of hyperbolic functions. For the solution derived by Dukelsky et al.,
loc. cit. the potential is sextic and the wavefunctions obey PT-symmetric
boundary conditions. This latter case provides a novel example of an integrable
Hermitian Hamiltonian acting on a Fock space whose states map into a Hilbert
space of PT-symmetric wavefunctions defined on a contour in the complex plane.
|
A nonautonomous version of the ultradiscrete hungry Toda lattice with a
finite lattice boundary condition is derived by applying reduction and
ultradiscretization to a nonautonomous two-dimensional discrete Toda lattice.
It is shown that the derived ultradiscrete system has a direct connection to
the box-ball system with many kinds of balls and finite carrier capacity.
Particular solutions to the ultradiscrete system are constructed by using the
theory of a class of discrete biorthogonal polynomials.
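The box-ball dynamics mentioned above, in its simplest single-species form with a carrier of finite capacity, is easy to state algorithmically: the carrier sweeps the lattice left to right, picking up a ball when it is below capacity and dropping one into each empty box while it is loaded. The sketch below is this basic version only (one kind of ball), not the many-kinds system of the paper.

```python
def bbs_step(state, capacity):
    """One time step of the single-species box-ball system with a carrier
    of finite capacity. state is a list of 0/1 box occupations."""
    carrier, new = 0, []
    for box in state:
        if box == 1 and carrier < capacity:
            carrier += 1          # pick up the ball
            new.append(0)
        elif box == 0 and carrier > 0:
            carrier -= 1          # drop one ball into the empty box
            new.append(1)
        else:
            new.append(box)       # carrier full, or nothing to do
    return new
```

With a large enough carrier, a block of k consecutive balls behaves as a soliton moving right at speed k per step; a carrier of capacity c caps that speed at c, which is the effect of the finite carrier capacity in the abstract.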