We present results on the world's first over-100 PFLOPS single-precision
lattice QCD quark solver, run on the new Japanese supercomputer Fugaku. We
achieve a 38-fold speedup over the K computer on the same problem size,
$192^4$, reaching 102 PFLOPS, i.e., 10% efficiency relative to the
single-precision floating-point peak. The benchmarked region is the
single-precision BiCGStab solver for a Clover-Wilson Dirac matrix with Schwarz
Alternating Procedure domain-decomposition preconditioning, using Jacobi
iteration for the local-domain matrix inversion.
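For illustration (not from the paper), a minimal textbook BiCGStab in NumPy is sketched below; `matvec` stands in for the Clover-Wilson Dirac operator, and the SAP preconditioning and all machine-specific optimizations are omitted.

```python
# Minimal unpreconditioned BiCGStab sketch in NumPy (single precision).
import numpy as np

def bicgstab(matvec, b, x0, tol=1e-5, maxiter=1000):
    x = x0.copy()
    r = b - matvec(x)
    r_hat = r.copy()                          # fixed shadow residual
    rho_old, alpha, omega = 1.0, 1.0, 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    bnorm = np.linalg.norm(b)
    for k in range(maxiter):
        rho = np.vdot(r_hat, r)               # complex-safe inner product
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = matvec(p)
        alpha = rho / np.vdot(r_hat, v)
        s = r - alpha * v
        t = matvec(s)
        omega = np.vdot(t, s) / np.vdot(t, t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho_old = rho
        if np.linalg.norm(r) < tol * bnorm:   # relative residual stop
            break
    return x, k

# Toy usage with a random, well-conditioned complex matrix:
rng = np.random.default_rng(0)
A = np.eye(64, dtype=np.complex64) + 0.1 * rng.standard_normal((64, 64)).astype(np.complex64)
b = rng.standard_normal(64).astype(np.complex64)
x, iters = bicgstab(lambda v: A @ v, b, np.zeros_like(b))
```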
|
The motions of small-scale magnetic flux elements in the solar photosphere
can provide some measure of the Lagrangian properties of the convective flow.
Measurements of these motions have been critical in estimating the turbulent
diffusion coefficient in flux-transport dynamo models and in determining the
Alfven wave excitation spectrum for coronal heating models. We examine the
motions of internetwork flux elements in a 24-hour-long Hinode/NFI magnetogram
sequence with 90-second cadence, and study both the scaling of their mean
squared displacement and the shape of their displacement probability
distribution as a function of time. We find that the mean squared displacement
scales super-diffusively with a slope of about 1.48. Super-diffusive scaling
has been observed in other studies for temporal increments as small as 5
seconds, increments over which ballistic scaling would be expected. Using
high-cadence MURaM simulations, we show that the observed super-diffusive
scaling at short temporal increments is a consequence of random changes in the
barycenter positions due to flux evolution. We also find that for long temporal
increments, beyond granular lifetimes, the observed displacement distribution
deviates from that expected for a diffusive process, evolving from Rayleigh to
Gaussian. This change in the distribution can be modeled analytically by
accounting for supergranular advection along with motions due to granulation.
These results complicate the interpretation of magnetic element motions as
strictly advective or diffusive on short and long timescales and suggest that
measurements of magnetic element motions must be used with caution in turbulent
diffusion or wave excitation models. We propose that passive trace motions in
measured photospheric flows may yield more robust transport statistics.
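As a hedged illustration of the basic measurement, the sketch below estimates the MSD scaling exponent from tracked positions; the array shape and the Brownian test data are assumptions for illustration, not the paper's tracking pipeline.

```python
# Estimate the mean-squared-displacement (MSD) scaling exponent from tracked
# element positions of shape (n_elements, n_times, 2).
import numpy as np

def msd_slope(pos, dt):
    n_el, n_t, _ = pos.shape
    lags = np.arange(1, n_t // 2)
    msd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = pos[:, lag:, :] - pos[:, :-lag, :]   # displacements at this lag
        msd[i] = np.mean(np.sum(disp**2, axis=-1))
    # slope of log(MSD) vs log(tau): ~1 diffusive, >1 super-diffusive
    slope, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return slope

# A Brownian random walk should give a slope near 1:
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal((500, 200, 2)), axis=1)
print(msd_slope(walk, dt=90.0))   # 90 s cadence, as in the Hinode/NFI sequence
```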
|
In this work, we perform text line segmentation directly in the compressed
representation of an unconstrained handwritten document image. To this end, we
build on text line terminal points, the current state of the art. For every
text line, the terminal points spotted along the left and right margins of a
document image are considered as source and target, respectively. The
tunneling algorithm uses a single agent (or robot) to identify the coordinate
positions in the compressed representation and thereby perform text-line
segmentation of the document. The agent starts at a source point, progressively
tunnels a path between two adjacent text lines, and reaches the probable
target. The agent's navigation from source to target, bypassing obstacles if
any, segregates the two adjacent text lines. However, the target point becomes
known only when the agent reaches the destination; this applies to all source
points, and hence we can analyze the correspondence between source and target
nodes. Expert-system rules, dynamic programming, and greedy strategies are
employed in every search space while tunneling. Exhaustive experiments are
carried out on various benchmark datasets, including ICDAR13, and the
performances are reported.
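As a hedged illustration of the dynamic-programming ingredient, the sketch below finds a minimum-ink left-to-right path through a plain binary page image; the paper itself operates in the compressed domain with additional expert-system and greedy rules.

```python
# Illustrative "tunneling" by dynamic programming: a minimum-ink path from the
# left margin to the right margin (1 = ink, 0 = background).
import numpy as np

def tunnel_path(ink, start_row):
    h, w = ink.shape
    INF = 1e9
    cost = np.full((h, w), INF)
    back = np.zeros((h, w), dtype=int)
    cost[start_row, 0] = ink[start_row, 0]
    for col in range(1, w):
        for row in range(h):
            for drow in (-1, 0, 1):        # moves: up-right, right, down-right
                r = row + drow
                if 0 <= r < h and cost[r, col - 1] + ink[row, col] < cost[row, col]:
                    cost[row, col] = cost[r, col - 1] + ink[row, col]
                    back[row, col] = r
    # backtrack from the cheapest cell in the last column (the probable target)
    row = int(np.argmin(cost[:, -1]))
    path = [row]
    for col in range(w - 1, 0, -1):
        row = back[row, col]
        path.append(row)
    return path[::-1]                      # row index of the path per column
```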
|
Resonant interactions between neutrinos from a Galactic supernova and dark
matter particles can lead to a sharp dip in the neutrino energy spectrum. Due
to its excellent energy resolution, measurement of this effect with the JUNO
experiment can provide evidence for such couplings. We discuss how JUNO may
confirm or further constrain a model where scalar dark matter couples to active
neutrinos and another fermion.
|
The performance of the quantum approximate optimization algorithm is
evaluated by using three different measures: the probability of finding the
ground state, the energy expectation value, and a ratio closely related to the
approximation ratio. The set of problem instances studied consists of weighted
MaxCut problems and 2-satisfiability problems. The Ising model representations
of the latter possess unique ground states and highly-degenerate first excited
states. The quantum approximate optimization algorithm is executed on quantum
computer simulators and on the IBM Q Experience. Additionally, data obtained
from the D-Wave 2000Q quantum annealer is used for comparison, and it is found
that the D-Wave machine outperforms the quantum approximate optimization
algorithm executed on a simulator. The overall performance of the quantum
approximate optimization algorithm is found to strongly depend on the problem
instance.
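For readers unfamiliar with the algorithm, a minimal depth $p=1$ state-vector QAOA for weighted MaxCut is sketched below; the toy graph and fixed angles are illustrative assumptions, whereas the paper uses gate-level simulators, IBM Q hardware, and optimized parameters.

```python
# Tiny state-vector QAOA (p = 1) for weighted MaxCut, computing two of the
# measures discussed above: ground-state probability and an approximation-
# ratio proxy.
import numpy as np
from itertools import product

edges = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 1.0), (3, 0, 0.5), (0, 2, 1.0)]
n = 4

# Cut value for every computational basis state.
cut = np.zeros(2**n)
for idx, bits in enumerate(product([0, 1], repeat=n)):
    cut[idx] = sum(wt for i, j, wt in edges if bits[i] != bits[j])
cmax = cut.max()

def qaoa_state(gamma, beta):
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)   # |+...+>
    psi = psi * np.exp(-1j * gamma * (-cut))          # e^{-i gamma H_C}, H_C = -cut
    c, s = np.cos(beta), -1j * np.sin(beta)           # mixer RX(2*beta) per qubit
    for q in range(n):
        psi = psi.reshape(2**q, 2, 2**(n - q - 1))
        psi = np.stack([c * psi[:, 0] + s * psi[:, 1],
                        s * psi[:, 0] + c * psi[:, 1]], axis=1)
        psi = psi.reshape(-1)
    return psi

gamma, beta = 0.8, 0.4                 # fixed angles; in practice these are optimized
p = np.abs(qaoa_state(gamma, beta))**2
print("P(ground state):", p[cut == cmax].sum())
print("approximation-ratio proxy:", (p @ cut) / cmax)
```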
|
We consider a continuous-time bandlimited additive white Gaussian noise
channel with 1-bit output quantization. On such a channel the information is
carried by the temporal distances of the zero-crossings of the transmit signal.
The set of input signals is constrained by the bandwidth of the channel and an
average power constraint. We derive a lower bound on the capacity by
lower-bounding the mutual information rate for a given set of waveforms with
exponentially distributed zero-crossing distances, where we focus on the
behavior in the mid-to-high signal-to-noise ratio regime. We find that if
the input randomness scales appropriately with the available bandwidth, the
mutual information rate grows linearly with the channel bandwidth for constant
signal-to-noise ratios. Furthermore, for a given bandwidth the lower bound
saturates with the signal-to-noise ratio growing to infinity. The ratio between
the lower bound on the mutual information rate and the capacity of the additive
white Gaussian noise channel without quantization is a constant independent of
the channel bandwidth for an appropriately chosen randomness of the channel
input and a given signal-to-noise ratio. We complement those findings with an
upper bound on the mutual information rate for the specific signaling scheme.
We show that both bounds are close in the mid to high SNR domain.
|
While recent Transformer-based approaches have shown impressive performances
on event-based object detection tasks, their high computational costs still
diminish the low power consumption advantage of event cameras. Image-based
works attempt to reduce these costs by introducing sparse Transformers.
However, they display inadequate sparsity and adaptability when applied to
event-based object detection, since these approaches cannot balance the fine
granularity of token-level sparsification and the efficiency of window-based
Transformers, leading to reduced performance and efficiency. Furthermore, they
lack scene-specific sparsity optimization, resulting in information loss and a
lower recall rate. To overcome these limitations, we propose the Scene Adaptive
Sparse Transformer (SAST). SAST enables window-token co-sparsification,
significantly enhancing fault tolerance and reducing computational overhead.
Leveraging the innovative scoring and selection modules, along with the Masked
Sparse Window Self-Attention, SAST showcases remarkable scene-aware
adaptability: It focuses only on important objects and dynamically optimizes
the sparsity level according to scene complexity, maintaining a remarkable balance
between performance and computational cost. The evaluation results show that
SAST outperforms all other dense and sparse networks in both performance and
efficiency on two large-scale event-based object detection datasets (1Mpx and
Gen1). Code: https://github.com/Peterande/SAST
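As a hedged sketch of token-level sparsification (not SAST's actual scoring, selection, or windowed-attention modules), the code below scores tokens, keeps a top fraction, and attends only among the kept set.

```python
# Toy token sparsification before self-attention; all names are illustrative.
import numpy as np

def sparse_self_attention(tokens, keep_ratio=0.25):
    n, d = tokens.shape
    scores = np.linalg.norm(tokens, axis=1)   # toy importance score
    k = max(1, int(keep_ratio * n))
    keep = np.argsort(scores)[-k:]            # indices of the kept tokens
    x = tokens[keep]
    attn = x @ x.T / np.sqrt(d)               # scaled dot-product attention
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    out = tokens.copy()
    out[keep] = attn @ x                      # only kept tokens are updated
    return out

# Event-camera token grids are mostly empty; only a few tokens carry activity:
toks = np.zeros((64, 16))
toks[[3, 17, 42]] = np.random.default_rng(2).standard_normal((3, 16))
y = sparse_self_attention(toks)
```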
|
Training directed neural networks typically requires forward-propagating data
through a computation graph, followed by backpropagation of an error signal, to
produce weight updates. All layers, or more generally, modules, of the network
are therefore locked, in the sense that they must wait for the remainder of the
network to execute forwards and propagate error backwards before they can be
updated. In this work we break this constraint by decoupling modules by
introducing a model of the future computation of the network graph. These
models predict what the modelled subgraph will produce using only
local information. In particular we focus on modelling error gradients: by
using the modelled synthetic gradient in place of true backpropagated error
gradients we decouple subgraphs, and can update them independently and
asynchronously i.e. we realise decoupled neural interfaces. We show results for
feed-forward models, where every layer is trained asynchronously, recurrent
neural networks (RNNs) where predicting one's future gradient extends the time
over which the RNN can effectively model, and also a hierarchical RNN system
with ticking at different timescales. Finally, we demonstrate that in addition
to predicting gradients, the same framework can be used to predict inputs,
resulting in models which are decoupled in both the forward and backwards pass
-- amounting to independent networks which co-learn such that they can be
composed into a single functioning corporation.
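A minimal sketch of the core idea, under illustrative dimensions and a linear gradient model: layer 1 updates immediately from a locally predicted (synthetic) gradient, and the predictor is itself regressed toward the true backpropagated gradient once that becomes available.

```python
# Decoupled training with a synthetic gradient (toy two-layer network).
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal((32, 8)) * 0.1      # layer 1
W2 = rng.standard_normal((4, 32)) * 0.1      # layer 2
M = np.zeros((32, 32))                       # synthetic-gradient model: g_hat = M @ h
lr, lr_m = 0.05, 0.01

for step in range(2000):
    x = rng.standard_normal(8)
    t = np.array([x[:2].sum(), x[2:4].sum(), x[4:6].sum(), x[6:].sum()])
    h = np.tanh(W1 @ x)
    g_hat = M @ h                            # predicted dL/dh -- no need to wait
    W1 -= lr * np.outer(g_hat * (1 - h**2), x)   # decoupled update of layer 1
    y = W2 @ h
    err = y - t                              # loss = 0.5 * ||y - t||^2
    W2 -= lr * np.outer(err, h)
    g_true = W2.T @ err                      # true gradient, available "later"
    M -= lr_m * np.outer(g_hat - g_true, h)  # train the gradient model
```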
|
We investigated the dynamical reaction of the central region of galaxies to a
falling massive black hole by N-body simulations. As the initial galaxy model,
we used an isothermal King model and placed a massive black hole at around the
half-mass radius of the galaxy. We found that the central core of the galaxy is
destroyed by the heating due to the black hole and that a very weak density
cusp ($\rho \propto r^{-\alpha}$, with $\alpha \sim 0.5$) is formed around the
black hole. This result is consistent with recent observations of large
elliptical galaxies with the Hubble Space Telescope. The velocity distribution of the stars
becomes tangentially anisotropic in the inner region, while in the outer region
the stars have radially anisotropic velocity dispersion. The radius of the weak
cusp region is larger for larger black hole mass. Our result naturally explains
the formation of the weak cusp found in the previous simulations of galaxy
merging, and implies that the weak cusp observed in large elliptical galaxies
may be formed by the heating process by sinking black holes during merging
events.
|
We calculate the local cyclic homology of group Banach-algebras of discrete
groups acting properly, isometrically and cocompactly on a CAT(0)-space.
|
Several uniform asymptotic expansions of the Weber parabolic cylinder
functions are considered, one group in terms of elementary functions, another
group in terms of Airy functions. The starting point for the discussion is the
asymptotic expansions given earlier by F.W.J. Olver. Some of his results are
modified to improve the asymptotic properties and to enlarge the intervals for
using the expansions in numerical algorithms. Olver's results are obtained from
the differential equation of the parabolic cylinder functions; we mention how
modified expansions can be obtained from integral representations. Numerical
tests are given for three expansions in terms of elementary functions. In this
paper only real values of the parameters will be considered.
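As a small numerical aside (not one of the paper's uniform expansions), the leading large-$z$ behaviour $D_\nu(z) \sim z^\nu e^{-z^2/4}$ at fixed $\nu$ can be checked against SciPy's parabolic cylinder routine:

```python
# Check the leading fixed-nu, large-z asymptotics of D_nu(z) with SciPy.
import numpy as np
from scipy.special import pbdv

nu = 1.5
for z in (5.0, 10.0, 20.0):
    exact, _ = pbdv(nu, z)                 # returns D_nu(z) and its derivative
    leading = z**nu * np.exp(-z**2 / 4)
    print(z, exact / leading)              # ratio tends to 1 as z grows
```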
|
This short note considers varieties of the form $G\times S_{\text{reg}}$,
where $G$ is a complex semisimple group and $S_{\text{reg}}$ is a regular
Slodowy slice in the Lie algebra of $G$. Such varieties arise naturally in
hyperk\"ahler geometry, theoretical physics, and in the theory of abstract
integrable systems developed by Fernandes, Laurent-Gengoux, and Vanhaecke. In
particular, previous work of the author and Rayan uses a Hamiltonian $G$-action
to endow $G\times S_{\text{reg}}$ with a canonical abstract integrable system.
One might therefore wish to understand, in some sense, all examples of abstract
integrable systems arising from Hamiltonian $G$-actions. Accordingly, we
consider a holomorphic symplectic variety $X$ carrying an abstract integrable
system induced by a Hamiltonian $G$-action. Under certain hypotheses, we show
that there must exist a $G$-equivariant variety isomorphism $X\cong G\times
S_{\text{reg}}$.
|
We characterize all (absolute) 1-Lipschitz retracts Q of R^n with the maximum
norm. Omitting two technical details, they coincide with the subsets that can
be written as the solution set of (at most) 2n inequalities of the following
form. For every coordinate i=1,...,n, there are 1-Lipschitz lower and upper
bounds L, U : R^{n-1} -> R with L \leq U, and the inequalities read

L(x_1,...,x_{i-1},x_{i+1},...,x_n) \leq x_i \leq U(x_1,...,x_{i-1},x_{i+1},...,x_n).

These sets are also exactly the injective subsets, meaning those Q such that
every 1-Lipschitz map A -> Q, defined on a subset A of a metric space B,
possesses a 1-Lipschitz extension B -> Q.
|
We show that it is possible to obtain inflation and also solve the
cosmological constant problem. The theory is invariant under changes of the
Lagrangian density $L$ to $L+const$. Then the constant part of a scalar field
potential $V$ cannot be responsible for inflation. However, we show that
inflation can be driven by a condensate of a four index field strength. A
constraint appears which correlates this condensate to $V$. After a conformal
transformation, the equations are the standard GR equations with an effective
scalar field potential $V_{eff}$ which has generally an absolute minimum
$V_{eff}=0$ independently of $V$ and without fine tuning. We also show that,
after inflation, the usual reheating phase scenario (from oscillations around
the absolute minimum) is possible.
|
Minimization of distribution matching losses is a principled approach to
domain adaptation in the context of image classification. However, it is
largely overlooked in adapting segmentation networks, which is currently
dominated by adversarial models. We propose a class of loss functions, which
encourage direct kernel density matching in the network-output space, up to
some geometric transformations computed from unlabeled inputs. Rather than
using an intermediate domain discriminator, our direct approach unifies
distribution matching and segmentation in a single loss. Therefore, it
simplifies segmentation adaptation by avoiding extra adversarial steps, while
improving the quality, stability, and efficiency of training. We juxtapose
our approach to state-of-the-art segmentation adaptation via adversarial
training in the network-output space. In the challenging task of adapting brain
segmentation across different magnetic resonance images (MRI) modalities, our
approach achieves significantly better results both in terms of accuracy and
stability.
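As a hedged sketch of the core matching idea (an MMD-style estimator with a Gaussian kernel, not the paper's exact kernel-density loss with geometric transformations):

```python
# Kernel-based distribution matching between source and target network outputs.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(p, q, sigma=1.0):
    # squared maximum mean discrepancy between sample sets p and q
    return (gaussian_kernel(p, p, sigma).mean()
            - 2 * gaussian_kernel(p, q, sigma).mean()
            + gaussian_kernel(q, q, sigma).mean())

src = np.random.default_rng(4).standard_normal((128, 3))   # e.g. softmax outputs
tgt = np.random.default_rng(5).standard_normal((128, 3)) + 0.5
print(mmd2(src, tgt))   # larger when the two output distributions differ
```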
|
We apply standard Markov chain Monte Carlo (MCMC) analysis techniques to 50
short-period, single-planet systems discovered with the radial velocity
technique. We develop a new method for assessing the significance of a non-zero
orbital eccentricity, namely {\Gamma} analysis, which combines a frequentist
bootstrap approach with a Bayesian analysis of each simulated data set. We find
that the eccentricity estimates from {\Gamma} analysis are generally consistent
with results from both standard MCMC analysis and previous references. The
{\Gamma} method is particularly useful for assessing the significance of small
eccentricities. Our results suggest that the current sample size is
insufficient to draw robust conclusions about the roles of tidal interaction
and perturbations in shaping the eccentricity distribution of short-period
single-planet systems. We use a Bayesian population analysis to show that a
mixture of analytical distributions is a good approximation of the underlying
eccentricity distribution. For short-period planets, we find the most probable
values of parameters in the analytical functions given the observed
eccentricities. These analytical functions can be used in theoretical
investigations or as priors for the eccentricity distribution when analyzing
short-period planets. As the measurement precision improves and sample size
increases, the method can be applied to more complex parametrizations for the
underlying distribution of eccentricity for extrasolar planetary systems.
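For concreteness, a generic Metropolis-Hastings sampler of the kind run per (simulated) data set is sketched below; the toy log-posterior stands in for a full Keplerian radial-velocity model.

```python
# Generic random-walk Metropolis-Hastings sampler.
import numpy as np

def metropolis(log_post, theta0, step, n_samples, rng):
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: eccentricity restricted to [0, 1) with a Gaussian likelihood.
def log_post(th):
    e = th[0]
    return -0.5 * ((e - 0.05) / 0.03)**2 if 0.0 <= e < 1.0 else -np.inf

chain = metropolis(log_post, [0.1], 0.02, 20000, np.random.default_rng(6))
print("posterior mean e:", chain[5000:, 0].mean())   # discard burn-in
```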
|
We propose a new conjecture on hardness of low-degree $2$-CSP's, and show
that new hardness of approximation results for Densest $k$-Subgraph and several
other problems, including a graph partitioning problem, and a variation of the
Graph Crossing Number problem, follow from this conjecture. The conjecture can
be viewed as occupying a middle ground between the $d$-to-$1$ conjecture, and
hardness results for $2$-CSP's that can be obtained via standard techniques,
such as Parallel Repetition combined with standard $2$-prover protocols for the
3SAT problem. We hope that this work will motivate further exploration of
hardness of $2$-CSP's in the regimes arising from the conjecture. We believe
that a positive resolution of the conjecture will provide a good starting point
for further hardness of approximation proofs.
Another contribution of our work is proving that the problems that we
consider are roughly equivalent from the approximation perspective. Some of
these problems arose in previous work, from which it appeared that they may be
related to each other. We formalize this relationship in this work.
|
The Bethe-Salpeter equation is solved for a bound state composed of two
fermions interacting through the pion-exchange force with pseudovector
coupling. Expanding the amplitude in gamma matrices, a one-dimensional integral
equation is derived. It reproduces the binding energy of the deuteron. The
relation to the quadrupole moment is also discussed in the framework of the
asymptotic approximation.
|
The retail sector presents several open and challenging problems that could
benefit from advanced pattern recognition and computer vision techniques. One
such critical challenge is planogram compliance control. In this study, we
propose a complete embedded system to tackle this issue. Our system consists of
four key components: image acquisition and transfer via a stand-alone embedded
camera module; object detection via computer vision and deep learning methods
running on single-board computers; a planogram compliance control method, also
running on single-board computers; and an energy harvesting and power
management block to accompany the embedded camera modules. The image acquisition and
transfer block is implemented on the ESP-EYE camera module. The object
detection block is based on YOLOv5 as the deep learning method and local
feature extraction. We implement these methods on Raspberry Pi 4, NVIDIA Jetson
Orin Nano, and NVIDIA Jetson AGX Orin as single board computers. The planogram
compliance control block utilizes sequence alignment through a modified
Needleman-Wunsch algorithm; it runs alongside the object detection block on the
same single-board computers. The energy harvesting and power management block
consists of solar and RF energy harvesting modules with a suitable battery pack
for operation. We tested the proposed embedded planogram
compliance control system on two different datasets to provide valuable
insights on its strengths and weaknesses. The results show that our method
achieves F1 scores of 0.997 and 1.0 in object detection and planogram
compliance control blocks, respectively. Furthermore, we calculate that the
complete embedded system can operate stand-alone for up to two years on battery
power. This duration can be further extended with the integration of the
proposed solar and RF energy harvesting options.
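As a hedged illustration of the compliance block, a minimal textbook Needleman-Wunsch alignment is sketched below; the paper uses a modified variant, and the product sequences and scores here are toy assumptions.

```python
# Global sequence alignment: detected shelf sequence vs. reference planogram.
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1, j] + gap,     # gap in b
                          F[i, j - 1] + gap)     # gap in a
    return F[n, m]                               # alignment score

# Detected products on a shelf vs. the reference planogram (toy IDs):
print(needleman_wunsch(["cola", "chips", "soap"], ["cola", "soap"]))
```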
|
We study quantum spin Hall insulators with local Coulomb interactions in the
presence of boundaries using dynamical mean field theory. We investigate the
different influence of the Coulomb interaction on the bulk and the edge states.
Interestingly, we discover an edge reconstruction driven by electronic
correlations. The reason is that the helical edge states experience Mott
localization for an interaction strength smaller than the bulk one. We argue
that the significance of this edge reconstruction can be understood by
topological properties of the system characterized by a local Chern marker.
|
Community search, which retrieves a cohesive subgraph containing the query
vertex, has been widely studied over the past decades. Existing studies on
community search mainly focus on static networks. However, real-world networks
are usually temporal networks, where each edge is associated with timestamps,
and previous methods do not work when handling temporal networks. We study the
problem of identifying the significant engagement community to which a
user-specified query vertex belongs. Specifically, given an integer k and a
query vertex u, we search for a subgraph H that satisfies (i) u $\in$ H; (ii)
the de-temporal graph of H is a connected k-core; and (iii) u has the maximum
engagement level in H. To address this problem, we first develop a top-down
greedy peeling algorithm named TDGP, which iteratively removes the vertices
with the maximum temporal degree. To boost the efficiency, we then design a
bottom-up local search algorithm named BULS and its enhanced versions BULS+ and
BULS*. Lastly, we empirically show the superiority of our proposed solutions on
six real-world temporal graphs.
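As a hedged sketch of the peeling ingredient (not the full TDGP, which orders removals by temporal degree and preserves connectivity to the query vertex), the code below computes the static k-core of the de-temporal graph:

```python
# Iteratively peel vertices whose degree is below k.
from collections import defaultdict

def k_core(edges, k):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < k:   # u cannot stay in the k-core
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
                changed = True
    return set(adj)

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]       # a triangle plus a pendant vertex
print(k_core(edges, 2))                        # {1, 2, 3}
```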
|
In this paper we present expectations for the radio--X-ray luminosity correlation
of radio halos at 120 MHz. According to the "turbulent re-acceleration
scenario", low frequency observations are expected to detect a new population
of radio halos that, due to their ultra-steep spectra, are missed by present
observations at ~ GHz frequencies. These radio halos should also be less
luminous than presently observed halos hosted in clusters with the same X-ray
luminosity. Making use of Monte Carlo procedures, we show that the presence of
these ultra-steep spectrum halos at 120 MHz causes a steepening and a
broadening of the correlation between the synchrotron power and the cluster
X-ray luminosity with respect to that observed at 1.4 GHz. We investigate the
role of future low frequency radio surveys, and find that the upcoming LOFAR
surveys will be able to test these expectations.
|
Predicting the structure of a protein from its sequence is a cornerstone task
of molecular biology. Established methods in the field, such as homology
modeling and fragment assembly, appeared to have reached their limit. However,
this year saw the emergence of promising new approaches: end-to-end protein
structure and dynamics models, as well as reinforcement learning applied to
protein folding. For these approaches to be investigated on a larger scale, an
efficient implementation of their key computational primitives is required. In
this paper we present a library of differentiable mappings from two standard
dihedral-angle representations of protein structure (full-atom representation
"$\phi,\psi,\omega,\chi$" and backbone-only representation
"$\phi,\psi,\omega$") to atomic Cartesian coordinates. The source code and
documentation can be found at https://github.com/lupoglaz/TorchProteinLibrary.
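As a hedged illustration of the underlying geometry (a plain NumPy version of the standard "natural extension of reference frame" construction, not the library's differentiable PyTorch API), the sketch below places an atom from the three preceding atoms given a bond length, bond angle, and dihedral:

```python
import numpy as np

def place_atom(A, B, C, bond, angle, dihedral):
    """Place atom D with |CD| = bond, angle(B,C,D) = angle,
    and dihedral(A,B,C,D) = dihedral."""
    bc = C - B
    bc = bc / np.linalg.norm(bc)
    n = np.cross(B - A, bc)
    n = n / np.linalg.norm(n)              # normal of the A-B-C plane
    m = np.cross(n, bc)                    # completes the local orthonormal frame
    d = bond * np.array([-np.cos(angle),
                         np.sin(angle) * np.cos(dihedral),
                         np.sin(angle) * np.sin(dihedral)])
    return C + d[0] * bc + d[1] * m + d[2] * n

# Toy backbone-like step (approximate illustrative values, trans omega):
A = np.array([0.00, 0.0, 0.0])
B = np.array([1.46, 0.0, 0.0])
C = np.array([2.00, 1.4, 0.0])
D = place_atom(A, B, C, bond=1.33, angle=np.deg2rad(116.6), dihedral=np.pi)
```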
|
The presence of a logically centralized controller in software-defined networks
enables smart and fine-grained management of network traffic. Generally,
traffic management includes measurement, analysis and control of traffic in
order to improve resource utilization. This is done by inspecting corresponding
performance requirements using metrics such as packet delay, jitter, loss rate
and bandwidth utilization from a global network view. There have been many works
on traffic management in software-defined networks and how it can help to
allocate resources efficiently. However, the vast majority of these
solutions are bound to indirect information retrieved within the border of
ingress and egress switches. This means that the three-stage loop of
measurement, analysis and control is performed on switches within this
border, while the traffic flowing in the network originates from applications on end
hosts. In this work, we present a framework for incorporating network
applications into the task of traffic management using the concept of
software-defined networking. We demonstrate how this can help applications
receive a desired level of quality of service by implementing a prototype of an
API for flow bandwidth reservation using OpenFlow and OVSDB protocols.
|
This expository article gives a thorough and well-motivated account of the
proof of the nilpotence theorem by Devinatz-Hopkins-Smith.
|
For a matroid $N$, a matroid $M$ is $N$-connected if every two elements of
$M$ are in an $N$-minor together. Thus a matroid is connected if and only if it
is $U_{1,2}$-connected. This paper proves that $U_{1,2}$ is the only connected
matroid $N$ such that if $M$ is $N$-connected with $|E(M)| > |E(N)|$, then $M
\backslash e$ or $M / e$ is $N$-connected for all elements $e$. Moreover, we
show that $U_{1,2}$ and $M(\mathcal{W}_2)$ are the only connected matroids $N$
such that, whenever a matroid has an $N$-minor using $\{e,f\}$ and an $N$-minor
using $\{f,g\}$, it also has an $N$-minor using $\{e,g\}$. Finally, we show
that $M$ is $U_{0,1} \oplus U_{1,1}$-connected if and only if every clonal
class of $M$ is trivial.
|
We conducted Hubble Space Telescope (HST) Snapshot observations of the Type
IIb Supernova (SN) 2011dh in M51 at an age of ~641 days with the Wide Field
Camera 3. We find that the yellow supergiant star, clearly detected in pre-SN
HST images, has disappeared, implying that this star was almost certainly the
progenitor of the SN. Interpretation of the early-time SN data which led to the
inference of a compact nature for the progenitor, and to the expected survival
of this yellow supergiant, is now clearly incorrect. We also present
ground-based UBVRI light curves obtained with the Katzman Automatic Imaging
Telescope (KAIT) at Lick Observatory up to SN age ~70 days. From the
light-curve shape including the very late-time HST data, and from recent
interacting binary models for SN 2011dh, we estimate that a putative surviving
companion star to the now deceased yellow supergiant could be detectable by
late 2013, especially in the ultraviolet. No obvious light echoes are
detectable yet in the SN environment.
|
The dynamics of Mars' obliquity are believed to be chaotic, and the
historical ~3.5 Gyr (late-Hesperian onward) obliquity probability density
function (PDF) is highly uncertain and cannot be inferred from direct simulation
alone. Obliquity is also a strong control on post-Noachian Martian climate,
enhancing the potential for equatorial ice/snow melting and runoff at high
obliquities (> 40{\deg}) and enhancing the potential for desiccation of deep
aquifers at low obliquities (< 25{\deg}). We developed a new technique using
the orientations of elliptical craters to constrain the true
late-Hesperian-onward obliquity PDF. To do so, we developed a forward model of
the effect of obliquity on elliptic crater orientations using ensembles of
simulated Mars impactors and ~3.5 Gyr-long Mars obliquity simulations. In our
model, the inclinations and speeds of Mars crossing objects bias the preferred
orientation of elliptic craters which are formed by low-angle impacts.
Comparison of our simulation predictions with a validated database of elliptic
crater orientations allowed us to invert for the best-fitting obliquity history. We
found that since the onset of the late-Hesperian, Mars' mean obliquity was
likely low, between ~10{\deg} and ~30{\deg}, and the fraction of time spent at
high obliquities > 40{\deg} was likely < 20%.
|
We investigate the electronic properties of ballistic planar Josephson
junctions with multiple superconducting terminals. Our devices consist of
monolayer graphene encapsulated in boron nitride with molybdenum-rhenium
contacts. Resistance measurements yield multiple resonant features, which are
attributed to supercurrent flow among adjacent and non-adjacent Josephson
junctions. In particular, we find that superconducting and dissipative currents
coexist within the same region of graphene. We show that the presence of
dissipative currents primarily results in electron heating and estimate the
associated temperature rise. We find that the electrons in encapsulated
graphene are efficiently cooled through the electron-phonon coupling.
|
We have studied the OH masers in the star forming region, W3(OH), with data
obtained from the Very Long Baseline Array (VLBA). The data provide an angular
resolution of $\sim$5 mas, and a velocity resolution of 106 m s$^{-1}$. A novel
analysis procedure allows us to differentiate between broadband temporal
intensity fluctuations introduced by instrumental gain variations plus
interstellar diffractive scintillation, and intrinsic narrowband variations.
Based on this 12.5-hour observation, we are sensitive to variations with time
scales of minutes to hours. We find statistically significant intrinsic
variations with time scales of $\sim$15--20 minutes or slower, based on the
{\it velocity-resolved fluctuation spectra}. These variations are seen
predominantly towards the line shoulders. The peak of the line profile shows
little variation, suggesting that it perhaps exhibits saturated emission. The
associated modulation index of the observed fluctuation varies from
statistically insignificant values at the line center to about unity away from
the line center. Based on light-travel-time considerations, the 20-minute time
scale of intrinsic fluctuations translates to a spatial dimension of $\sim$2--3
AU along the sight-lines. On the other hand, the transverse dimension of the
sources, estimated from their observed angular sizes of about $\sim$3 mas, is
about 6 AU. We argue that these source sizes are intrinsic, and are not
affected by interstellar scatter broadening. The implied peak brightness
temperature of the 1612/1720 maser sources is about $\sim2\times 10^{13}$ K,
and a factor of about five higher for the 1665 line.
|
Retrosynthesis analysis is a critical task in organic chemistry central to
many important industries. Previously, various machine learning approaches have
achieved promising results on this task by representing output molecules as
strings and decoding them autoregressively, token by token, with generative
models. Text generation and machine translation models from natural language
processing were frequently utilized for this purpose. The token-by-token decoding approach is
not intuitive from a chemistry perspective because some substructures are
relatively stable and remain unchanged during reactions. In this paper, we
propose a substructure-level decoding model, where the substructures are
reaction-aware and can be automatically extracted with a fully data-driven
approach. Our approach achieved improvement over previously reported models,
and we find that the performance can be further boosted if the accuracy of
substructure extraction is improved. The substructures extracted by our
approach can provide users with better insights for decision-making compared to
existing methods. We hope this work will generate interest in this fast growing
and highly interdisciplinary area on retrosynthesis prediction and other
related topics.
|
Our landscape has been shaped by man throughout the millennia. It still
contains many clues to how it was used in the past giving us insights into
ancient cultures and their everyday life. Our summer school uses archaeology
and astronomy as a focus for effective out-of-classroom learning experiences.
It demonstrates how a field trip can be used to its full potential by utilising
ancient monuments as outdoor classrooms. This article shows how such a summer
school can be embedded into the secondary curriculum; giving advice, example
activities, locations to visit, and outlines the impact this work has had.
|
We analyze complex zebra patterns and fiber bursts during type-IV solar radio
bursts on August 1, 2010. It was shown that all of the main details of sporadic
zebra patterns can be explained within the model of zebra patterns and fiber
bursts during the interaction of plasma waves with whistlers. In addition, it
was shown that the major variations in the stripes of the zebra patterns are
caused by the scattering mechanism of fast particles on whistlers, which leads
to the transition of whistler instability from the normal Doppler effect to an
anomalous one.
|
We modify the Randall-Sundrum model of the brane-world (with two branes) by
adding a scalar-curvature-squared term in five dimensions. We find that this
does not destabilize the Randall-Sundrum solution to the hierarchy problem of
the Standard Model in particle physics.
|
This paper considers the question of recovering the phase of an object from
intensity-only measurements, a problem which naturally appears in X-ray
crystallography and related disciplines. We study a physically realistic setup
where one can modulate the signal of interest and then collect the intensity of
its diffraction pattern, each modulation thereby producing a sort of coded
diffraction pattern. We show that PhaseLift, a recent convex programming
technique, recovers the phase information exactly from a number of random
modulations, which is polylogarithmic in the number of unknowns. Numerical
experiments with noiseless and noisy data complement our theoretical analysis
and illustrate our approach.
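As a hedged sketch of the measurement model only (the PhaseLift semidefinite program itself is omitted), the code below generates coded diffraction patterns from random modulations:

```python
# Each coded diffraction pattern is the squared magnitude of the Fourier
# transform of the signal multiplied by a random modulation (mask).
import numpy as np

rng = np.random.default_rng(7)
n, L = 64, 6                                   # signal length, number of masks
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

masks = rng.choice(np.array([1, -1, 1j, -1j]), size=(L, n))   # random modulations
measurements = np.abs(np.fft.fft(masks * x, axis=1))**2       # intensity only

# Each row of `measurements` is one coded diffraction pattern; the phase of
# fft(masks * x) is lost and must be recovered by the convex program.
```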
|
Modern distributed storage systems offer large capacity to satisfy the
exponentially increasing need of storage space. They often use erasure codes to
protect against disk and node failures to increase reliability, while trying to
meet the latency requirements of the applications and clients. This paper
provides an insightful upper bound on the average service delay of such
erasure-coded storage with arbitrary service time distribution and consisting
of multiple heterogeneous files. Not only does the result supersede known delay
bounds that only work for a single file or homogeneous files, it also enables a
novel problem of joint latency and storage cost minimization over three
dimensions: selecting the erasure code, placement of encoded chunks, and
optimizing scheduling policy. The problem is efficiently solved via the
computation of a sequence of convex approximations with provable convergence.
We further prototype our solution in an open-source, cloud storage deployment
over three geographically distributed data centers. Experimental results
validate our theoretical delay analysis and show significant latency reduction,
providing valuable insights into the proposed latency-cost tradeoff in
erasure-coded storage.
|
Considering systems of separations in a graph that separate every pair of a
given set of vertex sets that are themselves not separated by these
separations, we determine conditions under which such a separation system
contains a nested subsystem that still separates those sets and is invariant
under the automorphisms of the graph.
As an application, we show that the $k$-blocks -- the maximal vertex sets
that cannot be separated by at most $k$ vertices -- of a graph $G$ live in
distinct parts of a suitable tree-decomposition of $G$ of adhesion at most $k$,
whose decomposition tree is invariant under the automorphisms of $G$. This
extends recent work of Dunwoody and Kr\"on and, like theirs, generalizes a
similar theorem of Tutte for $k=2$.
Under mild additional assumptions, which are necessary, our decompositions
can be combined into one overall tree-decomposition that distinguishes, for all
$k$ simultaneously, all the $k$-blocks of a finite graph.
|
With data-driven analytics becoming mainstream, the global demand for
dedicated AI and Deep Learning accelerator chips is soaring. These
accelerators, designed with densely packed Processing Elements (PE), are
especially vulnerable to the manufacturing defects and functional faults common
in the advanced semiconductor process nodes resulting in significant yield
loss. In this work, we demonstrate an application-driven methodology for binning
the AI accelerator chips and reducing yield loss by correlating the circuit
faults in the PEs of the accelerator with the desired accuracy of the target AI
workload. We exploit the inherent fault tolerance features of trained deep
learning models and a strategy of selective deactivation of faulty PEs to
develop the presented yield loss reduction and test methodology. An analytical
relationship is derived between fault location, fault rate, and the AI task's
accuracy for deciding if the accelerator chip can pass the final yield test. A
yield-loss reduction aware fault isolation, ATPG, and test flow are presented
for the multiply and accumulate units of the PEs. Results obtained with widely
used AI/deep learning benchmarks demonstrate that the accelerators can sustain
5% fault-rate in PE arrays while suffering from less than 1% accuracy loss,
thus enabling product-binning and yield loss reduction of these chips.
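As a hedged illustration of the selective-deactivation idea (toy dimensions, random faults, and a bare matrix-vector product instead of a full DNN), the sketch below zeroes out faulty multiply-accumulate positions and measures the resulting output perturbation:

```python
import numpy as np

rng = np.random.default_rng(11)
W = rng.standard_normal((128, 128))        # weights mapped onto a 128x128 PE array
x = rng.standard_normal(128)

fault_rate = 0.05
faulty = rng.random(W.shape) < fault_rate  # positions of faulty MAC units
W_deact = np.where(faulty, 0.0, W)         # selective deactivation: bypass them

y_clean, y_faulty = W @ x, W_deact @ x
rel_err = np.linalg.norm(y_clean - y_faulty) / np.linalg.norm(y_clean)
print(rel_err)   # trained DNNs often tolerate such small perturbations
```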
|
Conformal Prediction is a machine learning methodology that produces valid
prediction regions under mild conditions. In this paper, we explore the
application of making predictions over multiple data sources of different sizes
without disclosing data between the sources. We propose that each data source
applies a transductive conformal predictor independently using the local data,
and that the individual predictions are then aggregated to form a combined
prediction region. We demonstrate the method on several data sets, and show
that the proposed method produces conservatively valid predictions and reduces
the variance in the aggregated predictions. We also study the effect that the
number of data sources and the size of each source have on the aggregated
predictions, as compared with equally sized sources and pooled data.
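As a hedged sketch of the mechanics (a split conformal regressor per source with a naive aggregation of interval endpoints, not the paper's transductive predictors or its validity analysis):

```python
import numpy as np

def split_conformal_interval(x_cal, y_cal, x_test, predict, alpha=0.1):
    resid = np.abs(y_cal - predict(x_cal))            # nonconformity scores
    k = int(np.ceil((1 - alpha) * (len(resid) + 1)))  # conformal quantile index
    q = np.sort(resid)[min(k, len(resid)) - 1]
    mu = predict(x_test)
    return mu - q, mu + q

rng = np.random.default_rng(8)
predict = lambda x: 2.0 * x                           # a pre-trained model stub
intervals = []
for size in (50, 200, 800):                           # three sources of unequal size
    x = rng.standard_normal(size)
    y = 2.0 * x + 0.3 * rng.standard_normal(size)
    intervals.append(split_conformal_interval(x, y, 0.5, predict))
lo = np.mean([iv[0] for iv in intervals])             # naive aggregation of
hi = np.mean([iv[1] for iv in intervals])             # the interval endpoints
```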
|
Over the last few years, high-quality X-ray imaging and spectroscopic data
from Chandra and XMM-Newton have added greatly to the understanding of the
physics of radio jets. Here we describe the current state of knowledge with an
emphasis on the underlying physics used to interpret multiwavelength data in
terms of physical parameters.
|
Sparse-view computed tomography (CT) -- using a small number of projections
for tomographic reconstruction -- enables much lower radiation dose to patients
and accelerated data acquisition. The reconstructed images, however, suffer
from strong artifacts, greatly limiting their diagnostic value. Current trends
for sparse-view CT turn to the raw data for better information recovery. The
resultant dual-domain methods, nonetheless, suffer from secondary artifacts,
especially in ultra-sparse view scenarios, and their generalization to other
scanners/protocols is greatly limited. A crucial question arises: have the
image post-processing methods reached the limit? Our answer is not yet. In this
paper, we stick to image post-processing methods due to their great flexibility
and propose a global representation (GloRe) distillation framework for
sparse-view CT, termed GloReDi. First, we propose to learn GloRe with Fourier convolution,
so each element in GloRe has an image-wide receptive field. Second, unlike
methods that only use the full-view images for supervision, we propose to
distill GloRe from intermediate-view reconstructed images that are readily
available but not explored in previous literature. The success of GloRe
distillation is attributed to two key components: representation directional
distillation to align the GloRe directions, and band-pass-specific contrastive
distillation to gain clinically important details. Extensive experiments
demonstrate the superiority of the proposed GloReDi over the state-of-the-art
methods, including dual-domain ones. The source code is available at
https://github.com/longzilicart/GloReDi.
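As a hedged aside on why a Fourier-domain operation yields an image-wide receptive field (a bare spectral layer, not GloRe's learned Fourier convolution blocks):

```python
# Pointwise multiplication in frequency space mixes information from every pixel.
import numpy as np

def spectral_layer(img, weights):
    F = np.fft.rfft2(img)                 # global frequency representation
    return np.fft.irfft2(F * weights, s=img.shape)

img = np.random.default_rng(10).standard_normal((64, 64))
w = np.ones((64, 33))                     # per-frequency weights (identity here)
out = spectral_layer(img, w)
# Changing any single input pixel changes, in general, every output pixel.
```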
|
We report the experimental realization of a recently discovered quantum
information protocol by Asher Peres implying an apparent non-local quantum
mechanical retrodiction effect. The demonstration is carried out by applying a
novel quantum optical method by which each singlet entangled state is
physically implemented by a two-dimensional subspace of Fock states of a mode
of the electromagnetic field, specifically the space spanned by the vacuum and
the one photon state, along lines suggested recently by E. Knill et al., Nature
409, 46 (2001) and by M. Duan et al., Nature 414, 413 (2001). The successful
implementation of the new technique is expected to play an important role in
modern quantum information and communication and in EPR quantum non-locality
studies.
|
We partner with a leading European healthcare provider and design a mechanism
to match patients with family doctors in primary care. We define the
matchmaking process for several distinct use cases given different levels of
available information about patients. Then, we adopt a hybrid recommender
system to present each patient a list of family doctor recommendations. In
particular, we model patient trust of family doctors using a large-scale
dataset of consultation histories, while accounting for the temporal dynamics
of their relationships. Our proposed approach shows higher predictive accuracy
than both a heuristic baseline and a collaborative filtering approach, and the
proposed trust measure further improves model performance.
|
Simulating exotic phases of matter that are not amenable to classical
techniques is one of the most important potential applications of quantum
information processing. We present an efficient algorithm for preparing a large
class of topological quantum states -- the G-injective Projected Entangled Pair
States (PEPS) -- on a quantum computer. Important examples include the resonant
valence bond (RVB) states, conjectured to be topological spin liquids. The
runtime of the algorithm scales polynomially with the condition number of the
PEPS projectors, and inverse-polynomially in the spectral gap of the PEPS
parent Hamiltonian.
|
Data analysis in science, e.g., high-energy particle physics, is often
subject to an intractable likelihood if the observables and observations span a
high-dimensional input space. Typically the problem is solved by reducing the
dimensionality using feature engineering and histograms, whereby the latter
technique allows one to build the likelihood using Poisson statistics. However, in
the presence of systematic uncertainties represented by nuisance parameters in
the likelihood, the optimal dimensionality reduction with a minimal loss of
information about the parameters of interest is not known. This work presents a
novel strategy to construct the dimensionality reduction with neural networks
for feature engineering and a differential formulation of histograms so that
the full workflow can be optimized with the result of the statistical
inference, e.g., the variance of a parameter of interest, as objective. We
discuss how this approach results in an estimate of the parameters of interest
that is close to optimal and the applicability of the technique is demonstrated
with a simple example based on pseudo-experiments and a more complex example
from high-energy particle physics.
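As a hedged sketch of a differentiable histogram (Gaussian soft-binning with illustrative bandwidth and bin choices, not the paper's exact formulation):

```python
# Replace hard bin counts with smooth kernel weights so gradients can flow
# from a Poisson-based objective back into the feature-engineering network.
import numpy as np

def soft_histogram(x, edges, bandwidth=0.1):
    centers = 0.5 * (edges[:-1] + edges[1:])
    # weight of every event in every bin; rows ~ events, cols ~ bins
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / bandwidth)**2)
    w /= w.sum(axis=1, keepdims=True)     # each event contributes one count
    return w.sum(axis=0)

x = np.random.default_rng(9).normal(0.5, 0.2, size=1000)  # a network output in [0, 1]
counts = soft_histogram(x, np.linspace(0, 1, 9))
# `counts` varies smoothly with x, so the full workflow remains optimizable.
```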
|
During July 2009 we observed the first confirmed superoutburst of the
eclipsing dwarf nova SDSS J150240.98+333423.9 using CCD photometry. The
outburst amplitude was at least 3.9 magnitudes and it lasted at least 16 days.
Superhumps with a peak-to-peak amplitude of up to 0.35 mag were present during
the outburst, thereby establishing it to be a member of the SU UMa family. The
mean superhump period during the first 4 days of the outburst was Psh =
0.06028(19) d, although it increased during the outburst with dPsh/dt = +2.8(1.0) x 10^-4.
The orbital period was measured as Porb = 0.05890946(5) d from times of
eclipses measured during outburst and quiescence. Based on the mean superhump
period, the superhump period excess was 0.023(3). The FWHM eclipse duration
declined from a maximum of 10.5 min at the peak of the outburst to 3.5 min
later in the outburst. The eclipse depth increased from ~0.9 mag to 2.1 mag
over the same period. Eclipses in quiescence were 2.7 min in duration and 2.8
mag deep.
|
Motivated by investigations of rainbow matchings in edge colored graphs, we
introduce the notion of color-line graphs that generalizes the classical
concept of line graphs in a natural way. Let $H$ be a (properly) edge-colored
graph. The (proper) color-line graph $C\!L(H)$ of $H$ has edges of $H$ as
vertices, and two edges of $H$ are adjacent in $C\!L(H)$ if they are incident
in $H$ or have the same color. We give Krausz-type characterizations for
(proper) color-line graphs, and point out that, for any fixed $k\ge 2$,
recognizing if a graph is the color-line graph of some graph $H$ in which the
edges are colored with at most $k$ colors is NP-complete. In contrast, we show
that, for any fixed $k$, recognizing color-line graphs of properly edge-colored
graphs $H$ with at most $k$ colors can be done in polynomial time. Moreover, we give a good
characterization for proper $2$-color line graphs that yields a linear time
recognition algorithm in this case.
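The definition translates directly into code; as a small illustration, the sketch below constructs $C\!L(H)$ from an edge-colored graph given as (u, v, color) triples:

```python
# Vertices of CL(H) are the edges of H; two are adjacent if they share an
# endpoint in H or have the same color.
from itertools import combinations

def color_line_graph(colored_edges):
    nodes = list(range(len(colored_edges)))
    cl_edges = []
    for a, b in combinations(nodes, 2):
        u1, v1, c1 = colored_edges[a]
        u2, v2, c2 = colored_edges[b]
        if {u1, v1} & {u2, v2} or c1 == c2:
            cl_edges.append((a, b))
    return nodes, cl_edges

H = [(1, 2, "red"), (2, 3, "blue"), (4, 5, "red")]
print(color_line_graph(H))  # edges 0,1 share vertex 2; edges 0,2 share "red"
```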
|
Spatial solitons can exist in various kinds of nonlinear optical resonators
with and without amplification. In the past years different types of these
localized structures such as vortices, bright, dark solitons and phase solitons
have been experimentally shown to exist. Many links appear to exist to fields
different from optics, such as fluids, phase transitions or particle physics.
These spatial resonator solitons are bistable and due to their mobility suggest
schemes of information processing not possible with the fixed bistable elements
forming the basic ingredient of traditional electronic processing. The recent
demonstration of existence and manipulation of spatial solitons in semiconductor
microresonators represents a step in the direction of such optical parallel
processing applications. We review pattern formation and solitons in a general
context, show some proof of principle soliton experiments on slow systems, and
describe in more detail the experiments on semiconductor resonator solitons
which are aimed at applications.
|
This paper presents a novel neural architecture search method, called LiDNAS,
for generating lightweight monocular depth estimation models. Unlike previous
neural architecture search (NAS) approaches, in which finding optimized networks
is computationally highly demanding, the introduced novel Assisted Tabu Search
leads to efficient architecture exploration. Moreover, we construct the search
space on a pre-defined backbone network to balance layer diversity and search
space size. The LiDNAS method outperforms the state-of-the-art NAS approach,
proposed for disparity and depth estimation, in terms of search efficiency and
output model performance. The LiDNAS optimized models achieve results superior
to compact depth estimation state-of-the-art on NYU-Depth-v2, KITTI, and
ScanNet, while being 7%-500% more compact in size, i.e., in the number of model
parameters.
|
Suppose $\mathcal{F}$ is a finite family of graphs. We consider the following
meta-problem, called $\mathcal{F}$-Immersion Deletion: given a graph $G$ and
integer $k$, decide whether the deletion of at most $k$ edges of $G$ can result
in a graph that does not contain any graph from $\mathcal{F}$ as an immersion.
This problem is a close relative of the $\mathcal{F}$-Minor Deletion problem
studied by Fomin et al. [FOCS 2012], where one deletes vertices in order to
remove all minor models of graphs from $\mathcal{F}$.
We prove that whenever all graphs from $\mathcal{F}$ are connected and at
least one graph of $\mathcal{F}$ is planar and subcubic, then the
$\mathcal{F}$-Immersion Deletion problem admits: a constant-factor
approximation algorithm running in time $O(m^3 \cdot n^3 \cdot \log m)$; a
linear kernel that can be computed in time $O(m^4 \cdot n^3 \cdot \log m)$; and
a $O(2^{O(k)} + m^4 \cdot n^3 \cdot \log m)$-time fixed-parameter algorithm,
where $n,m$ count the vertices and edges of the input graph.
These results mirror the findings of Fomin et al. [FOCS 2012], who obtained a
similar set of algorithmic results for $\mathcal{F}$-Minor Deletion, under the
assumption that at least one graph from $\mathcal{F}$ is planar. An important
difference is that we are able to obtain a linear kernel for
$\mathcal{F}$-Immersion Deletion, while the exponent of the kernel of Fomin et
al. for $\mathcal{F}$-Minor Deletion depends heavily on the family
$\mathcal{F}$. In fact, this dependence is unavoidable under plausible
complexity assumptions, as proven by Giannopoulou et al. [ICALP 2015]. This
reveals that the kernelization complexity of $\mathcal{F}$-Immersion Deletion
is quite different than that of $\mathcal{F}$-Minor Deletion.
|
The fixed-target NA61/SHINE experiment (SPS CERN) looks for the critical
point of strongly interacting matter and the properties of the onset of
deconfinement. It is a two dimensional scan of measurements of particle spectra
and fluctuations in proton-proton, proton-nucleus and nucleus-nucleus
interactions as a function of collision energy and system size, corresponding
to a two dimensional phase diagram (temperature T - baryonic chemical potential
$\mu_B$). New NA61/SHINE results are presented here, such as transverse
momentum and multiplicity fluctuations in Ar+Sc collisions compared to
NA61/SHINE p+p and Be+Be data, as well as to earlier NA49 A+A results.
Recently, a preliminary signature of a new size-dependent effect - rapid
changes in the system-size dependence - was observed in NA61/SHINE data,
labeled as a percolation threshold or onset of fireball. This would be closely
related to the vicinity of the hadronic phase transition region.
|
Collecting large amounts of distorted/clean image pairs in the real world is
non-trivial, which seriously limits the practical application of
supervised learning-based methods to real-world image super-resolution
(RealSR). Previous works usually address this problem by leveraging
unsupervised learning-based technologies to alleviate the dependency on paired
training samples. However, these methods typically suffer from unsatisfactory
texture synthesis due to the lack of supervision of clean images. To overcome
this problem, we are the first to have a close look at the under-explored
direction for RealSR, i.e., few-shot real-world image super-resolution, which
aims to tackle the challenging RealSR problem with few-shot distorted/clean
image pairs. Under this brand-new scenario, we propose Distortion Relation
guided Transfer Learning (DRTL) for the few-shot RealSR by transferring the
rich restoration knowledge from auxiliary distortions (i.e., synthetic
distortions) to the target RealSR under the guidance of distortion relation.
Concretely, DRTL builds a knowledge graph to capture the distortion relation
between auxiliary distortions and target distortion (i.e., real distortions in
RealSR). Based on the distortion relation, DRTL adopts a gradient reweighting
strategy to guide the knowledge transfer process between auxiliary distortions
and target distortions. In this way, DRTL could quickly learn the most relevant
knowledge from the synthetic distortions for the target distortion. We
instantiate DRTL with two commonly-used transfer learning paradigms, including
pre-training and meta-learning pipelines, to realize a distortion
relation-aware Few-shot RealSR. Extensive experiments on multiple benchmarks
and thorough ablation studies demonstrate the effectiveness of our DRTL.
|
Proceedings of GD2020: This volume contains the papers presented at GD~2020,
the 28th International Symposium on Graph Drawing and Network Visualization,
held on September 18-20, 2020 online. Graph drawing is concerned with the
geometric representation of graphs and constitutes the algorithmic core of
network visualization. Graph drawing and network visualization are motivated by
applications where it is crucial to visually analyse and interact with
relational datasets. Information about the conference series and past symposia
is maintained at http://www.graphdrawing.org. The 2020 edition of the
conference was hosted by the University of British Columbia, with Will Evans as
chair of the Organizing Committee. A total of 251 participants attended the
conference.
|
We investigate the accuracy and efficiency of the semiclassical Frozen
Gaussian method in describing electron dynamics in real time. Model systems of
two soft-Coulomb-interacting electrons are used to study correlated dynamics
under non-perturbative electric fields, as well as the excitation spectrum. The
results show that a recently proposed method that combines exact-exchange with
semiclassical correlation to propagate the one-body density-matrix holds
promise for electron dynamics in many situations that either wavefunction or
density-functional methods have difficulty describing. The results also however
point out challenges in such a method that need to be addressed before it can
become widely applicable.
|
During the last decade, lattice-Boltzmann (LB) simulations have been improved
to become an efficient tool for determining the permeability of porous media
samples. However, well known improvements of the original algorithm are often
not implemented. These include for example multirelaxation time schemes or
improved boundary conditions, as well as different possibilities to impose a
pressure gradient. This paper shows that a significant difference of the
calculated permeabilities can be found unless one uses a carefully selected
setup. We present a detailed discussion of possible simulation setups and
quantitative studies of the influence of simulation parameters. We illustrate
our results by applying the algorithm to a Fontainebleau sandstone and by
comparing our benchmark studies to other numerical permeability measurements in
the literature.
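As a hedged illustration of the kind of setup under discussion, the sketch below is a minimal single-relaxation-time (BGK) D2Q9 channel flow with full-way bounce-back walls and a simple body-force drive; its measured permeability can be compared with the plane-Poiseuille value $H^2/12$. The paper's point is precisely that such choices (relaxation scheme, boundary conditions, pressure drive) affect the result, so this is a baseline, not a recommended setup.

```python
import numpy as np

# D2Q9 lattice: velocities, weights, and opposite directions for bounce-back.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny, tau, g = 32, 34, 0.8, 1e-5        # g: body-force density along x
nu = (tau - 0.5) / 3.0                    # lattice kinematic viscosity
solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True         # two plane walls

f = np.tile(w, (nx, ny, 1))               # start at rest with rho = 1
for step in range(20000):
    rho = f.sum(-1)
    u = (f @ c) / rho[..., None]
    cu = u @ c.T
    feq = w * rho[..., None] * (1 + 3*cu + 4.5*cu**2
                                - 1.5*(u**2).sum(-1)[..., None])
    coll = -(f - feq) / tau + 3 * w * c[:, 0] * g   # BGK + simple forcing
    f[~solid] += coll[~solid]             # no collision at solid nodes
    for i in range(9):                    # streaming (periodic in x)
        f[..., i] = np.roll(np.roll(f[..., i], c[i, 0], 0), c[i, 1], 1)
    f[solid] = f[solid][:, opp]           # full-way bounce-back at the walls

H = ny - 2                                # effective channel width
k_sim = nu * u[:, 1:-1, 0].mean() / g     # Darcy: <u> = (k / nu) * g, rho ~ 1
print(k_sim, H**2 / 12)                   # compare with plane Poiseuille
```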
|
The advent of large language models (LLMs) and their adoption by the legal
community has given rise to the question: what types of legal reasoning can
LLMs perform? To enable greater study of this question, we present LegalBench:
a collaboratively constructed legal reasoning benchmark consisting of 162 tasks
covering six different types of legal reasoning. LegalBench was built through
an interdisciplinary process, in which we collected tasks designed and
hand-crafted by legal professionals. Because these subject matter experts took
a leading role in construction, tasks either measure legal reasoning
capabilities that are practically useful, or measure reasoning skills that
lawyers find interesting. To enable cross-disciplinary conversations about LLMs
in the law, we additionally show how popular legal frameworks for describing
legal reasoning -- which distinguish between its many forms -- correspond to
LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary.
This paper describes LegalBench, presents an empirical evaluation of 20
open-source and commercial LLMs, and illustrates the types of research
explorations LegalBench enables.
|
We study the relative contribution of cusps and pseudocusps, on cosmic
(super)strings, to the emitted bursts of gravitational waves. The gravitational
wave emission in the vicinity of highly relativistic points on the string
follows, for a high enough frequency, a logarithmic decrease. The slope has
been analytically found to be $-4/3$ for points reaching exactly the speed
of light in the limit $c=1$. We investigate the variations of this high
frequency behaviour with respect to the velocity of the points considered, for
strings formed through a numerical simulation, and we then compute numerically
the gravitational waves emitted. We find that for string points moving with
velocities as far as $10^{-3}$ from the theoretical (relativistic) limit $c=1$,
gravitational wave emission follows a behaviour consistent with that of cusps,
effectively increasing the number of cusps on a string. Indeed, depending on
the velocity threshold chosen for such behaviour, we show the emitting part of
the string worldsheet is enhanced by a factor ${\cal O}(10^3)$ with respect to
the emission of cusps only.
|
The celebrated Bishop theorem states that an operator is subnormal if and
only if it is the strong limit of a net (or a sequence) of normal operators. By
the Agler-Stankus theorem, $2$-isometries behave similarly to subnormal
operator in the sense that the role of subnormal operators is played by
$2$-isometries, while the role of normal operators is played by Brownian
unitaries. In this paper we give Bishop-like theorems for $2$-isometries. Two
methods are involved, the first of which goes back to Bishop's original idea
and the second refers to Conway and Hadwin's result of general nature. We also
investigate the strong and $*$-strong closedness of the class of Brownian
unitaries.
|
The 100m world rankings of the 1997 outdoor track and field competition
season are reviewed, subject to corrections for wind effects and atmospheric
drag. The rankings and times are compared with those of 1996, and the
improvements of each athlete over the course of a year are discussed.
Additionally, the athletes' wind-corrected 50m and 60m splits from the 1997
IAAF World Championships are compared to the 1997 indoor world rankings over
the same distances, in an attempt to predict possible performances for the
coming 1998 indoor season.
|
Donaldson constructed a hyperk\"ahler moduli space $\mathcal{M}$ associated to
a closed oriented surface $\Sigma$ with $\textrm{genus}(\Sigma) \geq 2$. This
embeds naturally into the cotangent bundle $T^*\mathcal{T}(\Sigma)$ of
Teichm\"uller space or can be identified with the almost-Fuchsian moduli space
associated to $\Sigma$. The latter is the moduli space of quasi-Fuchsian
three-manifolds which contain a unique incompressible minimal surface with principal
curvatures in $(-1,1)$.
Donaldson outlined various remarkable properties of this moduli space for
which we provide complete proofs in this paper: On the cotangent-bundle of
Teichm\"uller space, the hyperk\"ahler structure on $\mathcal{M}$ can be viewed
as the Feix--Kaledin hyperk\"ahler extension of the Weil--Petersson metric. The
almost-Fuchsian moduli space embeds into the
$\textrm{SL}(2,\mathbb{C})$-representation variety of $\Sigma$ and the
hyperk\"ahler structure on $\mathcal{M}$ extends the Goldman holomorphic
symplectic structure. Here the natural complex structure corresponds to the
second complex structure in the first picture. Moreover, the area of the
minimal surface in an almost-Fuchsian manifold provides a K\"ahler potential
for the hyperk\"ahler metric.
The various identifications are obtained using the work of Uhlenbeck on germs
of hyperbolic $3$-manifolds, an explicit map from $\mathcal{M}$ to
$\mathcal{T}(\Sigma)\times \bar{\mathcal{T}(\Sigma)}$ found by Hodge, the
simultaneous uniformization theorem of Bers, and the theory of Higgs bundles
introduced by Hitchin.
|
In this paper, we present a large-scale dynamical survey of the
trans-Neptunian region, with particular attention to mean-motion resonances
(MMRs). We study a set of 4121 trans-Neptunian objects (TNOs), a sample far
larger than in previous works. We perform direct long-term numerical
integrations that enable us to examine the overall dynamics of the individual
TNOs as well as to identify all MMRs. For the latter purpose, we apply our
self-developed FAIR method, which allows the semi-automatic identification of even
very high-order MMRs. Apart from searching for the more frequent
eccentricity-type resonances that previous studies concentrated on, we set our
method to allow the identification of inclination-type MMRs, too. Furthermore,
we distinguish between TNOs that are locked in the given MMR throughout the
whole integration time span ($10^8$\,years) and those that are only temporarily
captured in resonances. For a more detailed dynamical analysis of the
trans-Neptunian space, we also construct dynamical maps using test particles.
The fine structure of the $34$--$80$~AU region underlines the
stabilizing role of the MMRs, with the regular regions coinciding with the
position of the real TNOs.
|
Based on the gauge-gravity duality, we study the three-dimensional QCD
($\mathrm{QCD}_{3}$) and Chern-Simons theory by constructing the anisotropic
black D3-brane solution in IIB supergravity. The deformed bulk geometry is
obtained by performing a double Wick rotation and dimension reduction which
becomes an anisotropic bubble configuration exhibiting confinement in the dual
theory. Its anisotropy also gives rise to a Chern-Simons term due to the
presence of the dissolved D7-branes, or equivalently the axion field, in the
bulk. Using the bubble geometry, we investigate the ground-state energy density,
quark potential, entanglement entropy, and the baryon vertex according to the
standard methods of the AdS/CFT dictionary. Our calculation shows that the
ground-state energy is degenerate with respect to the Chern-Simons coupling
coefficient, in agreement with the properties of the gauge Chern-Simons theory.
The behavior of the quark tension, the entanglement entropy, and the embedding
of the baryon vertex further implies that strong anisotropy may destroy confinement.
Afterwards, we additionally introduce various D7-branes as flavor and
Chern-Simons branes to include the fundamental matter and effective
Chern-Simons level in the dual theory. By counting their orientation, we
finally obtain the associated topological phase in the dual theory and the
critical mass for the phase transition. Interestingly, the formula for the
critical mass reveals that the flavor symmetry, which may be related to the
chiral symmetry, would be restored if the anisotropy increases greatly. As all
of the analysis is consistent with characteristics of the quark-gluon plasma,
we believe our framework provides a remarkable way to understand the features
of Chern-Simons theory, strongly coupled nuclear matter, and its deconfinement
condition with anisotropy.
|
Oscillatory behavior of electron capture rates in the two-body decay of
hydrogen-like ions into recoil ions plus undetected neutrinos, with a period of
approximately 7 s, was reported in storage ring single-ion experiments at the
GSI Laboratory, Darmstadt. Ivanov and Kienle [PRL 103, 062502 (2009)] have
attributed this period to neutrino masses through neutrino mixing in the final
state. New arguments are given here against this interpretation, while
suggesting that these `GSI Oscillations' may be related to neutrino spin
precession in the static magnetic field of the storage ring. This scenario
requires a Dirac neutrino magnetic moment six times smaller than the Borexino
solar neutrino upper limit of $0.54 \times 10^{-10}$ Bohr magnetons [PRL 101,
091302 (2008)], and its consequences are explored.
|
We study the spontaneous symmetry breaking of O(3) scalar field on a fuzzy
sphere $S_F^2$. We find that the fluctuations in the background of topological
configurations are finite. This is in contrast to the fluctuations around a
uniform configuration, which diverge due to the Mermin-Wagner-Hohenberg-Coleman
theorem, leading to the decay of the condensate. Interesting implications of
enhanced topological stability of the configurations are pointed out.
|
In immersive augmented reality (IAR), users can wear a head-mounted display
to see computer-generated images superimposed on their view of the world. IAR
was shown to be beneficial across several domains, e.g., automotive, medicine,
gaming and engineering, with positive impacts on, e.g., collaboration and
communication. We think that IAR bears great potential for software
engineering but, as of yet, this research area has been neglected. In this
vision paper, we elicit potentials and obstacles for the use of IAR in software
engineering. We identify possible areas that can be supported with IAR
technology by relating commonly discussed IAR improvements to typical software
engineering tasks. We further demonstrate how innovative use of IAR technology
may fundamentally improve typical activities of a software engineer through a
comprehensive series of usage scenarios outlining practical application.
Finally, we reflect on current limitations of IAR technology based on our
scenarios and sketch research activities necessary to make our vision a
reality. We consider this paper to be relevant to academia and industry alike
in guiding the steps to innovative research and applications for IAR in
software engineering.
|
We construct a large class of morphisms, which we call partial morphisms, of
groupoids that induce $*$-morphisms of maximal and minimal groupoid
$C^*$-algebras. We show that the association of a groupoid to its maximal
(minimal) groupoid $C^*$-algebra and the association of a partial morphism to
its induced morphism are functors (both of which extend the Gelfand functor).
We show how to geometrically visualize many $*$-morphisms between groupoid
$C^*$-algebras. As an application, we construct groupoid models of the entire
inductive systems of the Jiang-Su algebra $\mathcal{Z}$ and the Razak-Jacelon
algebra $\mathcal{W}$.
|
Motivated by applications, we consider here new operator-theoretic approaches
to conditional mean embeddings (CME). Our present results combine a spectral
analysis-based optimization scheme with the use of kernels, stochastic
processes, and constructive learning algorithms. For initially given non-linear
data, we consider optimization-based feature selections. This entails the use
of convex sets of positive definite (p.d.) kernels in a construction of optimal
feature selection via regression algorithms from learning models. Thus, with
initial inputs of training data (for a suitable learning algorithm), each
choice of p.d. kernel $K$ in turn yields a variety of Hilbert spaces and
realizations of features. A novel idea here is that we shall allow an
optimization over selected sets of kernels $K$ from a convex set $C$ of
positive definite kernels $K$. Hence our \textquotedblleft
optimal\textquotedblright{} choices of feature representations will depend on a
secondary optimization over p.d. kernels $K$ within a specified convex set $C$.
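As a toy illustration of such a secondary optimization over a convex set of p.d. kernels (hypothetical data, kernel family, and objective; a sketch of the idea rather than the paper's construction):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, (40, 1))               # toy training inputs
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=40)

    def rbf(X, gamma):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    Ks = [rbf(X, g) for g in (0.5, 2.0, 8.0)]     # generators of the convex set C

    def loss(w):
        w = np.abs(w) / np.abs(w).sum()           # stay in the simplex
        K = sum(wi * Ki for wi, Ki in zip(w, Ks)) # convex combinations stay p.d.
        alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)  # kernel ridge fit
        return np.mean((K @ alpha - y) ** 2) + 1e-2 * alpha @ K @ alpha

    res = minimize(loss, x0=np.ones(3) / 3, method="Nelder-Mead")
    print("kernel weights:", np.abs(res.x) / np.abs(res.x).sum())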
|
The lattice Weinberg-Salam model without fermions is investigated numerically
for a Weinberg angle $\theta_W \sim 30^{\circ}$ and a bare fine structure
constant around $\alpha \sim 1/150$. We consider the value of the scalar
self-coupling corresponding to a bare Higgs mass around 150 GeV. We investigate
phenomena existing in the vicinity of the phase transition between the physical
Higgs phase and the unphysical symmetric phase of the lattice model. This is
the region of the phase diagram, where the continuum physics is to be
approached. We find indications that at energies above 1 TeV nonperturbative
phenomena become important in the Weinberg-Salam model.
|
We show that the cold atom systems of simultaneously trapped Bose-Einstein
condensates (BECs) and quantum-degenerate fermionic atoms provide promising
laboratories for the study of macroscopic quantum tunneling. Our theoretical
studies reveal that the spatial extent of a small trapped BEC immersed in a
Fermi sea can tunnel and coherently oscillate between the values of the
separated and mixed configurations (the phases of the phase separation
transition of BEC-fermion systems). We evaluate the period, amplitude and
dissipation rate for $^{23}$Na and $^{40}$K-atoms and we discuss the
experimental prospects for observing this phenomenon.
|
Due to the modality gap between visible and infrared images with high visual
ambiguity, learning \textbf{diverse} modality-shared semantic concepts for
visible-infrared person re-identification (VI-ReID) remains a challenging
problem. Body shape is one of the significant modality-shared cues for VI-ReID.
To dig more diverse modality-shared cues, we expect that erasing
body-shape-related semantic concepts in the learned features can force the ReID
model to extract more and other modality-shared features for identification. To
this end, we propose a shape-erased feature learning paradigm that decorrelates
modality-shared features in two orthogonal subspaces. Jointly learning
shape-related features in one subspace and shape-erased features in its
orthogonal complement achieves a conditional mutual information maximization
between the shape-erased features and the identity, discarding body-shape
information and thus explicitly enhancing the diversity of the learned
representation.
Extensive experiments on SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate
the effectiveness of our method.
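For intuition, the orthogonal decomposition can be sketched in a few lines (illustrative placeholder dimensions and a random shape subspace; the paper learns this decorrelation rather than fixing it):

    import torch

    d, k = 256, 32                        # feature dim, shape-subspace dim (placeholders)
    S, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormal basis of a "shape" subspace

    f = torch.randn(8, d)                 # a batch of modality-shared features
    f_shape = f @ S @ S.T                 # shape-related part: projection onto span(S)
    f_erased = f - f_shape                # shape-erased part: orthogonal complement

    # The two components are orthogonal by construction:
    print(torch.allclose((f_shape * f_erased).sum(-1), torch.zeros(8), atol=1e-3))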
|
Fibronectin, a glycoprotein secreted by connective tissue cells, is found in
human plasma as well as in the ECM. It is known to have an adhesive property
and plays a role in cell-to-cell and cell-to-substratum adhesion. In rheumatoid
arthritis (RA) and osteoarthritis (OA), fibronectin is locally synthesized by
the synovial cells, and its concentration in the synovial fluid escalates to
two to three times that in the corresponding plasma. The present article
demonstrates that the protein has a strong tendency to become adsorbed on
biomaterials after an implant surgery, in preference to lighter proteins such
as albumin, which in turn enhances the growth of robust biofilms. It further
argues that the heightened risk of dangerous implant infections due to the
formation of such biofilms, coupled with the declining immunity of geriatric
patients, has the potential to transform an implant surgery into a
'life-threatening' event.
|
The paper proves the existence of monochromatic standard simplexes of a given
volume in a multidimensional rational lattice colored in a finite number of
colors.
|
A kinetic system has an absolute concentration robustness (ACR) for a
molecular species if its concentration remains the same in every positive
steady state of the system. Just recently, a condition that sufficiently
guarantees the existence of an ACR in a rank-one mass-action kinetic system was
found. In this paper, we show that this ACR criterion does not extend in
general to power-law kinetic systems. Moreover, we discuss a necessary
condition for ACR in multistationary rank-one kinetic systems which can be used
in ACR analysis. Finally, we introduce a concept of equilibria variation for
kinetic systems, based on the number of the system's ACR species.
|
In this paper, we address two forms of consensus for multi-agent systems with
undirected, signed, weighted, and connected communication graphs, under the
assumption that the agents can be partitioned into three clusters, representing
the decision classes on a given specific topic, for instance, the in favour,
abstained and opponent agents. We will show that under some assumptions on the
cooperative/antagonistic relationships among the agents, simple modifications
of DeGroot's algorithm make it possible to achieve tripartite consensus (if the
opinions of agents belonging to the same class all converge to the same
decision) or sign consensus (if the opinions of the agents in the three
clusters converge to positive, zero, and negative values, respectively).
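A toy signed DeGroot iteration shows the flavor of such behaviour (our illustrative weights, not the modified algorithms of the paper):

    import numpy as np

    # x(k+1) = W x(k); negative entries encode antagonistic relationships.
    W = np.array([[0.6,  0.4,  0.0],
                  [0.4,  0.6,  0.0],
                  [0.0, -0.5,  0.5]])

    x = np.array([1.0, 0.5, -0.2])        # initial opinions
    for _ in range(200):
        x = W @ x
    print(x)  # agents 1-2 agree on 0.75; agent 3 converges to the opposite sign

This is sign-consensus-like behaviour in miniature: the cooperative pair reaches a common positive value, while the antagonistic agent settles on its negative.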
|
We provide a qualitative and quantitative unified picture of the charge
asymmetry in top quark pair production at hadron colliders in the SM and
summarise the most recent experimental measurements.
|
Misner (1967) space is a portion of 2-dimensional Minkowski spacetime,
identified under a boost $\mathcal B$. It is well known that the maximal
analytic extension of Misner space that is Hausdorff consists of one half of
Minkowski spacetime, identified under $\mathcal B$; and Hawking and Ellis
(1973) have shown that the maximal analytic extension that is non-Hausdorff is
equal to the full Minkowski spacetime with the point $Q$ at the origin removed,
identified under $\mathcal B$. In this paper I show that, in fact, there is an
infinite set of non-Hausdorff maximal analytic extensions, each with a
different causal structure. The extension constructed by Hawking and Ellis is
the simplest of these. Another extension is obtained by wrapping an infinite
number of copies of Minkowski spacetime around the removed $Q$ as a helicoid or
Riemann surface and then identifying events under the boost $\mathcal B$. The
other extensions are obtained by wrapping some number $n$ of successive copies
of Minkowski spacetime around the missing $Q$ as a helicoid, then identifying
the end of the $n$'th copy with the beginning of the initial copy, and then
identifying events under $\mathcal B$. I discuss the causal structure and
covering spaces of each of these extensions.
|
With the fast adoption of machine learning (ML) techniques, sharing of ML
models is becoming popular. However, ML models are vulnerable to privacy
attacks that leak information about the training data. In this work, we focus
on a particular type of privacy attack, the property inference attack (PIA),
which infers the sensitive properties of the training data through the access
to the target ML model. In particular, we consider Graph Neural Networks (GNNs)
as the target model, and distribution of particular groups of nodes and links
in the training graph as the target property. While existing work has
investigated PIAs that target graph-level properties, no prior work has
studied the inference of node and link properties at the group level.
In this work, we perform the first systematic study of group property
inference attacks (GPIA) against GNNs. First, we consider a taxonomy of threat
models under both black-box and white-box settings with various types of
adversary knowledge, and design six different attacks for these settings. We
evaluate the effectiveness of these attacks through extensive experiments on
three representative GNN models and three real-world graphs. Our results
demonstrate the effectiveness of these attacks whose accuracy outperforms the
baseline approaches. Second, we analyze the underlying factors that contribute
to GPIA's success, and show that target models trained on graphs with and
without the target property exhibit dissimilarity in their parameters and/or
outputs, which enables the adversary to infer the existence of the
property. Further, we design a set of defense mechanisms against the GPIA
attacks, and demonstrate that these mechanisms can reduce attack accuracy
effectively with small loss on GNN model accuracy.
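The core intuition, that models trained with and without the property differ in their outputs, can be sketched with a toy meta-classifier (placeholder "model outputs" drawn from shifted distributions; not one of the six attacks evaluated above):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_shadow, n_out = 100, 16

    # Stand-ins for posteriors queried from shadow models; the with-property
    # models are assumed slightly shifted, the dissimilarity the attack exploits.
    out_without = rng.normal(0.0, 1.0, (n_shadow, n_out))
    out_with = rng.normal(0.3, 1.0, (n_shadow, n_out))

    X = np.vstack([out_without, out_with])
    y = np.r_[np.zeros(n_shadow), np.ones(n_shadow)]

    meta = LogisticRegression(max_iter=1000).fit(X, y)
    target_output = rng.normal(0.3, 1.0, (1, n_out))   # queried from the target model
    print("P(property present) =", meta.predict_proba(target_output)[0, 1])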
|
A self-interacting dynamics of the non-local Dirac electron is proposed.
This dynamics is revealed by the projective representation of operators
corresponding to the spin/charge degrees of freedom. The energy-momentum field
is described by a system of quasi-linear ``field-shell'' PDEs following from
the conservation law expressed by the affine parallel transport in $CP(3)$
\cite{Le1}. We discuss here solutions of these equations in connection with the
following problems: the curvature of $CP(3)$ as a potential source of
electromagnetic fields, and the self-consistent problem of the electron mass.
|
The use of blockchain in regulatory ecosystems is a promising approach to
address challenges of compliance among mutually untrusted entities. In this
work, we consider applications of blockchain technologies in telecom
regulations. In particular, we address growing concerns around Unsolicited
Commercial Communication (UCC aka. spam) sent through text messages (SMS) and
phone calls in India. Despite several regulatory measures taken to curb the
menace of spam, it continues to be a nuisance to subscribers while posing
challenges to telecom operators and regulators alike.
In this paper, we present a consortium blockchain based architecture to
address the problem of UCC in India. Our solution improves the subscriber
experience and the efficiency of regulatory processes, while also positively
impacting all stakeholders in the telecom ecosystem. Unlike previous
approaches to the problem of UCC, which are all ex-post, our approach to
adherence to the regulations is ex-ante. The proposal described in this paper
is a primary contributor to the revision of regulations concerning UCC and spam
by the Telecom Regulatory Authority of India (TRAI). The new regulations
published in July 2018 were the first of their kind in the world and amended the 2010
Telecom Commercial Communication Customer Preference Regulation (TCCCPR),
through mandating the use of a blockchain/distributed ledgers in addressing the
UCC problem. In this paper, we provide a holistic account of the project's
evolution from (1) its design and strategy, to (2) regulatory and policy
action, (3) country wide implementation and deployment, and (4) evaluation and
impact of the work.
|
We consider a class of Lagrangians that depend not only on some
configurational variables and their first time derivatives, but also on second
time derivatives, thereby leading to fourth-order evolution equations. The
proposed higher-order Lagrangians are obtained by expressing the variables of
standard Lagrangians in terms of more basic variables and their time
derivatives. The Hamiltonian formulation of the proposed class of models is
obtained by means of the Ostrogradsky formalism. The structure of the
Hamiltonians for this particular class of models is such that constraints can
be introduced in a natural way, thus eliminating expected instabilities of the
fourth-order evolution equations. Moreover, canonical quantization of the
constrained equations can be achieved by means of Dirac's approach to
generalized Hamiltonian dynamics.
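For a Lagrangian $L(q,\dot q,\ddot q)$ of the type considered here, the standard Ostrogradsky construction (recalled only to fix notation) introduces the momenta and Hamiltonian
$$p_1 = \frac{\partial L}{\partial \dot q} - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \qquad p_2 = \frac{\partial L}{\partial \ddot q}, \qquad H = p_1\,\dot q + p_2\,\ddot q(q,\dot q,p_2) - L,$$
where $H$ is linear in the momentum $p_1$; it is precisely this linearity that signals the instability which the constraints mentioned above are designed to eliminate.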
|
We show an upper bound for the sum of positive Lyapunov exponents of any
Teichm\"uller curve in strata of quadratic differentials with at least one zero
of large multiplicity. As a corollary, it holds for any $SL(2,\mathbb
R)$-invariant subspace defined over $\mathbb Q$ in these strata. This proves
Grivaux-Hubert's conjecture about the asymptotics of Lyapunov exponents for
strata with a large number of poles in the situation when at least one zero has
large multiplicity.
|
In this paper, we propose a novel large deformation diffeomorphic
registration algorithm to align high angular resolution diffusion images
(HARDI) characterized by orientation distribution functions (ODFs). Our
proposed algorithm seeks an optimal diffeomorphism of large deformation between
two ODF fields in a spatial volume domain and at the same time, locally
reorients an ODF in a manner such that it remains consistent with the
surrounding anatomical structure. To this end, we first review the Riemannian
manifold of ODFs. We then define the reorientation of an ODF when an affine
transformation is applied and subsequently, define the diffeomorphic group
action to be applied on the ODF based on this reorientation. We incorporate the
Riemannian metric of ODFs for quantifying the similarity of two HARDI images
into a variational problem defined under the large deformation diffeomorphic
metric mapping (LDDMM) framework. We finally derive the gradient of the cost
function in both Riemannian spaces of diffeomorphisms and the ODFs, and present
its numerical implementation. Both synthetic and real brain HARDI data are used
to illustrate the performance of our registration algorithm.
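For background, a widely used Riemannian treatment represents an ODF by its square root, a point on the unit Hilbert sphere, so that the geodesic distance becomes an arc length. A discretized toy sketch of that distance (our quadrature and sample ODFs are placeholders):

    import numpy as np

    def odf_distance(p, q, weights):
        # p, q: nonnegative ODF samples integrating to 1 w.r.t. `weights`
        inner = np.sum(weights * np.sqrt(p) * np.sqrt(q))   # <sqrt(p), sqrt(q)>
        return np.arccos(np.clip(inner, -1.0, 1.0))         # arc on the unit sphere

    N = 162                                       # e.g. a sphere tessellation
    w = np.full(N, 1.0 / N)                       # uniform quadrature weights
    p = np.full(N, 1.0)                           # isotropic ODF (density one)
    q = np.ones(N); q[:10] = 5.0; q /= np.sum(w * q)   # a peaked ODF, renormalized
    print("d(p, q) =", odf_distance(p, q, w))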
|
We present the mass function (MF) of the Arches cluster obtained from
ground-based adaptive optics data in comparison with results derived from
HST/NICMOS data. An MF slope of Gamma = -0.8 +/- 0.2 is obtained. Both
datasets reveal a strong radial variation in the MF, with a flat slope in the
cluster center that steepens with increasing radius.
|
This paper is concerned with the numerical solution to a 3D coefficient
inverse problem for buried objects with multi-frequency experimental data. The
measured data, which are associated with a single direction of an incident
plane wave, are backscatter data for targets buried in a sandbox. These raw
scattering data were collected using a microwave scattering facility at the
University of North Carolina at Charlotte. We develop a data preprocessing
procedure and exploit a newly developed globally convergent inversion method
for solving the inverse problem with these preprocessed data. It is shown that
dielectric constants of the buried targets as well as their locations are
reconstructed with very good accuracy. We also prove a new analytical result
which rigorously justifies an important step of the so-called "data
propagation" procedure.
|
Classical planning formulations like the Planning Domain Definition Language
(PDDL) admit action sequences guaranteed to achieve a goal state given an
initial state if any are possible. However, reasoning problems defined in PDDL
do not capture temporal aspects of action taking, for example that two agents
in the domain can execute an action simultaneously if postconditions of each do
not interfere with preconditions of the other. A human expert can decompose a
goal into largely independent constituent parts and assign each agent to one of
these subgoals to take advantage of simultaneous actions for faster execution
of plan steps, each using only single agent planning. By contrast, large
language models (LLMs) used for directly inferring plan steps do not guarantee
execution success, but do leverage commonsense reasoning to assemble action
sequences. We combine the strengths of classical planning and LLMs by
approximating human intuitions for two-agent planning goal decomposition. We
demonstrate that LLM-based goal decomposition leads to faster planning times
than solving multi-agent PDDL problems directly while simultaneously achieving
fewer plan execution steps than a single agent plan alone and preserving
execution success. Additionally, we find that LLM-based approximations of
subgoals can achieve multi-agent execution steps similar to those specified
by human experts. Website and resources at https://glamor-usc.github.io/twostep
|
We study the linear periods on $GL_{2n}$ twisted by a character using a new
relative trace formula. We establish the relative fundamental lemma and the
transfer of orbital integrals. Together with the spectral isolation technique
of Beuzart-Plessis--Liu--Zhang--Zhu, we are able to compare the elliptic part
of the relative trace formulae and to obtain new results generalizing
Waldspurger's theorem in the $n=1$ case.
|
This paper addresses an uplink localization problem in which the base station
(BS) aims to locate a remote user with the aid of reconfigurable intelligent
surface (RIS). This paper proposes a strategy in which the user transmits
pilots over multiple time frames, and the BS adaptively adjusts the RIS
reflection coefficients based on the observations already received so far in
order to produce an accurate estimate of the user location at the end. This is
a challenging active sensing problem for which finding an optimal solution
involves a search through a complicated functional space whose dimension
increases with the number of measurements. In this paper, we show that the long
short-term memory (LSTM) network can be used to exploit the latent temporal
correlation between measurements to automatically construct scalable
information vectors (called hidden state) based on the measurements.
Subsequently, the state vector can be mapped to the RIS configuration for the
next time frame in a codebook-free fashion via a deep neural network (DNN).
After all the measurements have been received, a final DNN can be used to map
the LSTM cell state to the estimated user equipment (UE) position. Numerical
results show that the proposed active RIS design achieves lower localization
error as compared to existing active and nonactive methods. The proposed
solution produces interpretable results and is generalizable to early stopping
in the sequence of sensing stages.
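Schematically, the active sensing loop reads as below (untrained toy networks, placeholder dimensions, and a dummy channel; the paper's architectures and training procedure are not reproduced):

    import math
    import torch
    import torch.nn as nn

    N_RIS, MEAS_DIM, HIDDEN = 64, 2, 128          # illustrative sizes

    lstm = nn.LSTMCell(MEAS_DIM, HIDDEN)          # aggregates measurements into a state
    to_ris = nn.Linear(HIDDEN, N_RIS)             # hidden state -> RIS phases (codebook-free)
    to_pos = nn.Linear(HIDDEN, 2)                 # final cell state -> (x, y) estimate

    def measure(theta):
        # Placeholder for the pilot received through the RIS-reflected channel.
        return torch.stack([torch.cos(theta).mean(-1), torch.sin(theta).mean(-1)], -1)

    h = c = torch.zeros(1, HIDDEN)
    theta = torch.zeros(1, N_RIS)                 # initial RIS configuration
    for _ in range(10):                           # sensing stages (supports early stopping)
        y = measure(theta)                        # new observation
        h, c = lstm(y, (h, c))                    # update hidden/cell state
        theta = math.pi * torch.tanh(to_ris(h))   # next RIS phases from the hidden state
    pos_hat = to_pos(c)                           # map the cell state to the UE position
    print(pos_hat)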
|
We investigate the formation of methane line at 2.3 ${\rm \mu m}$ in Brown
Dwarf Gliese 229B. Two sets of model parameters, (a) $T_{\rm eff}=940$ K and
$\log(g)=5.0$, and (b) $T_{\rm eff}=1030$ K and $\log(g)=5.5$, are adopted,
both of which provide an excellent fit of the synthetic continuum spectra to
the observed flux over a wide range of wavelengths. In the absence of
observational data for individual molecular lines, we set the additional
parameters that are needed in order to model the individual lines by fitting
the calculated flux with the observed flux at the continuum. A significant
difference in the amount of flux at the core of the line is found between the
two models, although the flux at the continuum remains the same. Hence, we show
that if spectroscopic observation at $2.3{\rm \mu m}$ with a resolution as high
as $R \simeq 200,000$ is possible, then much better constraints on the surface
gravity and on the metallicity of the object could be obtained by fitting the
theoretical models of individual molecular lines to the observed data.
|
One of the most challenging issues in the adaptive control of robot
manipulators with kinematic uncertainties is the requirement of the inverse of
the Jacobian matrix in regressor form. This requirement is inevitable in the
control of parallel robots, whose dynamic equations are written directly in the
task space. In this paper, an adaptive controller is designed for parallel
robots based on a representation of the Jacobian matrix in regressor form, such
that asymptotic trajectory tracking is ensured. The main idea is to separate
the determinant and the adjugate of the Jacobian matrix and then organize them
into new regressor forms. Simulation and experimental results on a 2-DOF
R\underline{P}R robot and a 3-DOF redundant cable-driven robot verify the
promising performance of the proposed methods.
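The separation rests on the identity $J^{-1} = \mathrm{adj}(J)/\det(J)$, which a toy numerical check illustrates (our example matrix; the controller itself is not reproduced):

    import numpy as np

    def adjugate(J):
        n = J.shape[0]
        C = np.empty_like(J)
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(J, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T                        # adjugate = transpose of the cofactor matrix

    J = np.array([[2.0, 1.0], [0.5, 3.0]])        # toy Jacobian
    print(np.allclose(adjugate(J) / np.linalg.det(J), np.linalg.inv(J)))  # True

Since the entries of $\mathrm{adj}(J)$ and $\det(J)$ are polynomial in the entries of $J$, the two factors are plausibly more amenable to regressor forms than the inverse itself.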
|
The effects of rotation on stellar evolution are particularly important at
low metallicity, when mass loss by stellar winds diminishes and the surface
enrichment due to rotational mixing becomes relatively more pronounced than at
high metallicities. Here we investigate the impact of rotation and metallicity
on stellar evolution. Using similar physics to that of our previous large grids
models at Z=0.002 and Z=0.014, we compute stellar evolution models with the
Geneva code for rotating and nonrotating stars with initial masses (Mini)
between 1.7 and 120 Msun and Z=0.0004 (1/35 solar). This is comparable to the
metallicities of the most metal poor galaxies observed so far, such as I Zw 18.
Concerning massive stars, both rotating and nonrotating models spend most of
their core-helium burning phase with an effective temperature higher than 8000
K. Stars become red supergiants only at the end of their lifetimes, and few
RSGs are expected. Our models predict very few to no classical Wolf-Rayet stars
as a result of weak stellar winds at low metallicity. The most massive stars
end their lifetimes as luminous blue supergiants or luminous blue variables, a
feature that is not predicted by models with higher metallicities.
Interestingly, due to the behavior of the intermediate convective zone, the
mass domain of stars producing pair-instability supernovae is smaller at
Z=0.0004 than at Z=0.002. We find that during the main sequence phase, the
ratio between nitrogen and carbon abundances (N/C) remains unchanged for
nonrotating models. However, N/C increases by factors of 10-20 in rotating
models at the end of the MS. Cepheids coming from stars with Mini > 4-6 Msun
are beyond the core helium burning phase and spend little time in the
instability strip. Since they would evolve towards cooler effective
temperatures, these Cepheids should show an increase of the pulsation period as
a function of age.
|
We study the influence of a gravitational field on the mass-energy
equivalence relation by incorporating gravitation into the physical situation
considered by Einstein (Ann. Physik, 17, 1905, English translation in ref. [1])
for his first derivation of mass-energy equivalence. In doing so, we also
refine Einstein's expression (Ann. Physik, 35, 1911, English translation in
ref. [3]) for the increase in the gravitational mass of a body when it absorbs
an amount E of radiation energy.
|
A gauge invariant notion of a strong connection is presented and
characterized. It is then used to justify the way in which a global curvature
form is defined. Strong connections are interpreted as those that are induced
from the base space of a quantum bundle. Examples of both strong and non-strong
connections are provided. In particular, such connections are constructed on a
quantum deformation of the fibration $S^2 \to \mathbb{R}P^2$. A certain class of strong
$U_q(2)$-connections on a trivial quantum principal bundle is shown to be
equivalent to the class of connections on a free module that are compatible
with the q-dependent hermitian metric. A particular form of the Yang-Mills
action on a trivial $U_q(2)$-bundle is investigated. It is proved to
coincide with the Yang-Mills action constructed by A.Connes and M.Rieffel.
Furthermore, it is shown that the moduli space of critical points of this
action functional is independent of q.
|
Sequences of linear systems arise in the predictor-corrector method when
computing the Pareto front for multi-objective optimization. Rather than
discarding information generated when solving one system, it may be
advantageous to recycle information for subsequent systems. To accomplish this,
we seek to reduce the overall cost of computation when solving linear systems
using common recycling methods. In this work, we assessed the performance of
the recycling minimum residual (RMINRES) method along with a map between
coefficient matrices. For these methods to be fully integrated into the
software used in Enouen et al. (2022), there must be a working version of each
in both Python and PyTorch. Herein, we discuss the challenges we encountered
and the solutions undertaken (some still ongoing) when developing efficient
Python implementations of these recycling strategies. The goal of this project
was to
implement RMINRES in Python and PyTorch and add it to the established Pareto
front code to reduce computational cost. Additionally, we wanted to implement
the sparse approximate maps code in Python and PyTorch, so that it can be
parallelized in future work.
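The simplest form of such reuse, warm-starting each solve from the previous solution, can be sketched with SciPy (a stand-in illustration on a toy symmetric sequence; full RMINRES additionally recycles and deflates an approximate subspace, which is omitted here):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import minres

    n = 500
    b = np.ones(n)
    x = np.zeros(n)
    for t in range(5):                            # slowly varying symmetric systems
        A = diags([-1.0, 2.0 + 0.01 * t, -1.0], [-1, 0, 1], shape=(n, n))
        its = []
        x, info = minres(A, b, x0=x, callback=lambda xk: its.append(1))
        print(f"system {t}: {len(its)} iterations (info={info})")

After the first system, each solve starts near its solution and the iteration counts drop; subspace recycling pushes the same economy further.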
|
Brain-inspired spiking neuron networks (SNNs) have attracted widespread
research interest due to their low power features, high biological
plausibility, and strong spatiotemporal information processing capability.
Although adopting a surrogate gradient (SG) makes the non-differentiable SNN
trainable, simultaneously achieving accuracy comparable to ANNs and keeping the
low-power features is still tricky. In this paper, we propose an
energy-efficient spike-train level spiking neural network (SLSSNN) with low
computational cost and high accuracy. In the SLSSNN, spatio-temporal conversion
blocks (STCBs) are applied to replace the convolutional and ReLU layers, to keep
the low-power features of SNNs and improve accuracy. However, the SLSSNN cannot
adopt backpropagation algorithms directly due to the non-differentiable nature
of spike trains. We propose a suitable learning rule for SLSSNNs by deducing
the equivalent gradient of the STCB. We evaluate the proposed SLSSNN on static
and neuromorphic datasets, including Fashion-Mnist, Cifar10, Cifar100,
TinyImageNet, and DVS-Cifar10. The experimental results show that our proposed
SLSSNN surpasses state-of-the-art accuracy on nearly all datasets, while using
fewer time steps and being highly energy-efficient.
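For context, the generic surrogate-gradient idea referred to above can be sketched with a leaky integrate-and-fire layer (a minimal toy, not the SLSSNN's spike-train level rule):

    import torch

    class SurrogateSpike(torch.autograd.Function):
        # Heaviside spike in the forward pass; a smooth surrogate in the backward.
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()                # fire when the membrane crosses threshold
        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            return grad_out / (1.0 + 10.0 * v.abs()) ** 2   # a common surrogate shape

    def lif_forward(inputs, beta=0.9, v_th=1.0):
        # inputs: (time, batch, features); leaky integrate-and-fire dynamics
        v, spikes = torch.zeros_like(inputs[0]), []
        for x in inputs:
            v = beta * v + x                      # leaky integration
            s = SurrogateSpike.apply(v - v_th)    # threshold crossing
            v = v - s * v_th                      # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes)

    x = torch.randn(20, 4, 8, requires_grad=True) # toy input spike currents
    lif_forward(x).sum().backward()               # gradients flow via the surrogate
    print(x.grad.shape)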
|
We introduce a new class of totally balanced cooperative TU games, namely
p-additive games. It is inspired by the class of inventory games that arises
from inventory situations with temporary discounts (Toledo, 2002) and contains
the class of inventory cost games (Meca et al. 2003). It is shown that every
p-additive game and its corresponding subgames have a nonempty core. We also
study the concavity, convexity, and monotonicity properties of p-additive
games. In addition, the modified SOC-rule is proposed as a solution for
p-additive games. This solution is suitable for p-additive games since it is a
core allocation which can be reached through a population monotonic allocation
scheme. Moreover, two characterizations of the modified SOC-rule are provided.
|
This work presents the design of a beta-Ga2O3 Schottky barrier diode using a
high-k dielectric superjunction to significantly enhance the breakdown voltage
vs. on-resistance trade-off beyond its already high unipolar figure of merit.
The device parameters are optimized using both TCAD simulations and analytical
modeling using conformal mapping technique. The dielectric superjunction
structure is found to be highly sensitive to the device dimensions and the
dielectric constant of the insulator. The aspect ratio, which is the ratio of
the length to the width of the drift region, is found to be the most important
parameter in designing the structure and the proposed approach only works for
aspect ratio much greater than one. The width of the dielectric layer and the
dielectric constant also play a crucial role in improving the device properties
and are optimized to achieve the maximum figure of merit. Using the optimized
structure with an aspect ratio of 10 and a dielectric constant of 300, the
structure is predicted to surpass the beta-Ga2O3 unipolar figure of merit by a
factor of four, indicating the promise of such structures for exceptional-FOM
vertical power electronics.
|
We introduce and theoretically analyze the concept of manipulating optical
chirality via strong coupling of the optical modes of chiral nanostructures
with excitonic transitions in molecular layers or semiconductors. With
chirality being omnipresent in chemistry and biomedicine, and highly desirable
for technological applications related to efficient light manipulation, the
design of nanophotonic architectures that sense the handedness of molecules or
generate the desired light polarization in an externally controllable manner is
of major interdisciplinary importance. Here we propose that such capabilities
can be provided by the mode splitting resulting from polaritonic hybridization.
Starting with an object with well-known chiroptical response -- here, for a
proof of concept, a chiral sphere -- we show that strong coupling with a nearby
excitonic material generates two distinct frequency regions that retain the
object's chirality density and handedness, which manifest most clearly through
anticrossings in circular-dichroism or differential-scattering dispersion
diagrams. These windows can be controlled by the intrinsic properties of the
excitonic layer and the strength of the interaction, enabling thus the
post-fabrication manipulation of optical chirality. Our findings are further
verified via simulations of the circular dichroism of a realistic chiral
architecture, namely a helical assembly of plasmonic nanospheres embedded in a
resonant matrix.
|
Few-shot segmentation aims to devise a generalizing model that segments query
images from unseen classes during training with the guidance of a few support
images whose class tallies with the class of the query. Two domain-specific
problems have been noted in previous works, namely spatial inconsistency and
bias towards seen classes. To address the former problem, our method compares
the support feature map with the query feature map at multiple scales to become
scale-agnostic. As a solution to the latter problem, a supervised model, called
the base learner, is trained on the available classes to accurately identify
pixels belonging to seen classes. Hence, the subsequent meta learner can
discard areas belonging to seen classes with the help of an ensemble learning
model that coordinates the meta learner with the base learner. We
simultaneously address these two vital problems for the first time and achieve
state-of-the-art performance on both the PASCAL-5i and COCO-20i datasets.
|
We consider the statistical problem of recovering a hidden "ground truth"
binary labeling for the vertices of a graph up to low Hamming error from noisy
edge and vertex measurements. We present new algorithms and a sharp
finite-sample analysis for this problem on trees and sparse graphs with poor
expansion properties such as hypergrids and ring lattices. Our method
generalizes and improves over that of Globerson et al. (2015), who introduced
the problem for two-dimensional grid lattices.
For trees, we provide a simple, efficient algorithm that infers the ground
truth with optimal Hamming error, has optimal sample complexity, and implies
recovery results for all connected graphs. Here, the presence of side
information is critical to obtaining a non-trivial recovery rate. We then show
how to adapt this algorithm to tree decompositions of edge-subgraphs of certain
graph families such as lattices, resulting in optimal recovery error rates that
can be obtained efficiently.
The thrust of our analysis is to 1) use the tree decomposition along with
edge measurements to produce a small class of viable vertex labelings and 2)
apply an analysis influenced by statistical learning theory to show that we can
infer the ground truth from this class using vertex measurements. We show the
power of our method in several examples including hypergrids, ring lattices,
and the Newman-Watts model for small world graphs. For two-dimensional grids,
our results improve over Globerson et al. (2015) by obtaining optimal recovery
in the constant-height regime.
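To make the tree case concrete, a maximum-likelihood labeling from noisy vertex and edge observations on a small tree can be computed by a leaf-to-root dynamic program. The following is an illustrative sketch under an independent-noise model (not the paper's algorithm or analysis):

    import math

    # Noisy vertex observations y[v] (correct w.p. 1-q) and noisy edge signs
    # e[(u,v)] = x_u * x_v (correct w.p. 1-p) on a small rooted tree.
    children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
    y = {0: 1, 1: 1, 2: -1, 3: 1, 4: -1}
    e = {(0, 1): 1, (0, 2): -1, (1, 3): 1, (1, 4): -1}
    p, q = 0.1, 0.3

    cv = lambda s, v: -math.log(1 - q if s == y[v] else q)          # vertex cost
    ce = lambda s, t, uv: -math.log(1 - p if s * t == e[uv] else p) # edge cost

    def solve(v):
        # For each label s of v: (min cost of v's subtree, best child labels, child tables)
        sub = {c: solve(c) for c in children[v]}
        best = {}
        for s in (1, -1):
            cost, pick = cv(s, v), {}
            for c in children[v]:
                t, ct = min(((t, ce(s, t, (v, c)) + sub[c][t][0]) for t in (1, -1)),
                            key=lambda it: it[1])
                cost, pick[c] = cost + ct, t
            best[s] = (cost, pick, sub)
        return best

    def read(v, s, table):
        _, pick, sub = table[s]
        out = {v: s}
        for c, t in pick.items():
            out.update(read(c, t, sub[c]))
        return out

    root = solve(0)
    print(read(0, min((1, -1), key=lambda s: root[s][0]), root))

Edge evidence alone determines the labeling only up to a global sign flip; as in the discussion above, the vertex (side) information is what breaks this symmetry.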
|
Abrikosov fluxonics, a domain of science and engineering at the interface of
superconductivity research and nanotechnology, is concerned with the study of
properties and dynamics of Abrikosov vortices in nanopatterned superconductors,
with particular focus on their confinement, manipulation, and exploitation for
emerging functionalities. Vortex pinning, guided vortex motion, and the ratchet
effect are three main fluxonic ``tools'' which allow for the dynamical (pinned
or moving), the directional (angle-dependent), and the orientational (current
polarity-sensitive) control over the fluxons, respectively. Thanks to the
periodicity of the vortex lattice, several groups of effects emerge when the
vortices move in a periodic pinning landscape: Spatial commensurability of the
location of vortices with the underlying pinning nanolandscape leads to a
reduction of the dc resistance and the microwave loss at the so-called matching
fields. Temporal synchronization of the displacement of vortices with the
number of pinning sites visited during one half ac cycle manifests itself as
Shapiro steps in the current-voltage curves. Delocalization of vortices
oscillating under the action of a high-frequency ac drive can be tuned by a
superimposed dc bias. In this short review, a selection of experimental
results on the vortex dynamics in the presence of periodic pinning in Nb thin
films is presented. The consideration is limited to one particular type of
artificial pinning structure: directly written nanolandscapes of the washboard
type,
which are fabricated by focused ion beam milling and focused electron beam
induced deposition. The reported results are relevant for the development of
fluxonic devices and the reduction of microwave losses in superconducting
planar transmission lines.
|