We study the evolution of cooperation in the spatial prisoner's dilemma game,
where the five competing strategies are unconditional cooperation,
unconditional defection, tit-for-tat, win-stay-lose-shift, and extortion. While
pairwise imitation fails to sustain unconditional cooperation and extortion
regardless of game parametrization, myopic updating gives rise to the
coexistence of all five strategies if the temptation to defect is sufficiently
large or if the degree distribution of the interaction network is
heterogeneous. This counterintuitive evolutionary outcome emerges as a result
of an unexpected chain of strategy invasions. Firstly, defectors emerge and
coarsen spontaneously among players adopting win-stay-lose-shift. Secondly,
extortioners and players adopting tit-for-tat emerge and spread via neutral
drift among these defectors. Lastly, among the extortioners,
cooperators become viable too. These recurrent evolutionary invasions yield a
five-strategy phase that is stable irrespective of the system size and the
structure of the interaction network, and they reveal the most unexpected
mechanism that stabilizes extortion and cooperation in an evolutionary setting.
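As a hedged illustration of the myopic (best-response) updating rule contrasted here with pairwise imitation, the following Python sketch runs a two-strategy (C/D) weak prisoner's dilemma on a square lattice; the paper's five-strategy game extends the same update rule. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Myopic updating in a spatial weak prisoner's dilemma (illustrative sketch).
# Payoffs: mutual cooperation = 1, defector exploiting a cooperator = b
# (temptation), all other pairings = 0.
L, b, steps = 50, 1.5, 50_000
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(L, L))  # 1 = cooperate, 0 = defect

def payoff(strategy, x, y):
    """Payoff of `strategy` played against the four von Neumann neighbors."""
    total = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbor = grid[(x + dx) % L, (y + dy) % L]
        if strategy == 1:            # cooperator earns 1 per cooperating neighbor
            total += 1.0 if neighbor == 1 else 0.0
        else:                        # defector earns b per cooperating neighbor
            total += b if neighbor == 1 else 0.0
    return total

for _ in range(steps):
    x, y = rng.integers(0, L, size=2)
    current, other = grid[x, y], 1 - grid[x, y]
    # Myopic rule: a randomly chosen player switches strategy if the
    # alternative would earn more against the *current* neighborhood;
    # no payoff comparison with neighbors (imitation) is involved.
    if payoff(other, x, y) > payoff(current, x, y):
        grid[x, y] = other

print("final cooperator fraction:", grid.mean())
```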
|
We numerically study level statistics of disordered interacting quantum
many-body systems. A two-parameter plasma model which controls the
level-repulsion exponent $\beta$ and the range $h$ of interactions between
eigenvalues is shown to accurately reproduce features of level statistics
across the transition from the ergodic to the many-body localized phase.
Analysis of higher-order spacing ratios indicates that the considered
$\beta$-$h$ model accounts even for long-range spectral correlations and
allows one to obtain a clear picture of the flow of level statistics across
the transition. Comparing spectral form factors of the $\beta$-$h$ model and
of a system in the ergodic-MBL crossover, we show that
the range of effective interactions between eigenvalues $h$ is related to the
Thouless time which marks the onset of quantum chaotic behavior of the system.
Analysis of level statistics of a random quantum circuit which hosts chaotic
and localized phases supports the claim that the $\beta$-$h$ model captures
universal features of level statistics in the transition between ergodic and
many-body localized phases also for systems breaking time-reversal invariance.
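For orientation, the workhorse diagnostic behind such analyses is the gap-ratio statistic; the sketch below (a textbook construction, not code from the paper) computes its first-order version, whose mean distinguishes Poisson from GOE statistics.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean r-statistic: ~0.3863 for Poisson levels, ~0.5307 for the GOE."""
    s = np.diff(np.sort(levels))                      # nearest-neighbor spacings
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Poisson (uncorrelated) spectrum: cumulative sums of exponential spacings.
rng = np.random.default_rng(1)
poisson_levels = np.cumsum(rng.exponential(size=20_000))
print(mean_gap_ratio(poisson_levels))                  # close to 0.386
```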
|
In this paper, we describe an IDE called CAPS (Calculational Assistant for
Programming from Specifications) for the interactive, calculational derivation
of imperative programs. In building CAPS, our aim has been to make the IDE
accessible to non-experts while retaining the overall flavor of the
pen-and-paper calculational style. We discuss the overall architecture of the
CAPS system, the main features of the IDE, the GUI design, and the trade-offs
involved.
|
In this paper, we study efficient differentially private alternating
direction methods of multipliers (ADMM) via gradient perturbation for many
machine learning problems. For smooth convex loss functions with (non)-smooth
regularization, we propose the first differentially private ADMM (DP-ADMM)
algorithm with a performance guarantee of $(\epsilon,\delta)$-differential
privacy ($(\epsilon,\delta)$-DP). From the viewpoint of theoretical analysis,
we use the Gaussian mechanism and the conversion relationship between R\'enyi
Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis
for our algorithm. Then we establish a new criterion to prove the convergence
of the proposed algorithms including DP-ADMM. We also give the utility analysis
of our DP-ADMM. Moreover, we propose an accelerated DP-ADMM (DP-AccADMM) using
Nesterov's acceleration technique. Finally, we conduct numerical
experiments on several real-world datasets to show the privacy-utility
tradeoff of the two proposed algorithms; the comparative analysis shows that
DP-AccADMM converges faster and has better utility than DP-ADMM when the
privacy budget $\epsilon$ is larger than a threshold.
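As a minimal sketch of the gradient-perturbation step that such DP-ADMM algorithms build on, the snippet below clips a gradient and applies the Gaussian mechanism; in practice the noise scale sigma would be calibrated to $(\epsilon,\delta)$ through an RDP accountant over all iterations, and all names and values here are illustrative.

```python
import numpy as np

def noisy_gradient(grad, l2_clip, sigma, rng):
    """Clip to bound L2 sensitivity, then add Gaussian-mechanism noise."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, l2_clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * l2_clip, size=grad.shape)

rng = np.random.default_rng(0)
g = np.array([0.3, -1.2, 0.7])
print(noisy_gradient(g, l2_clip=1.0, sigma=1.1, rng=rng))
```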
|
We introduce deterministic state-transformation protocols between many-body
quantum states which can be implemented by low-depth Quantum Circuits (QC)
followed by Local Operations and Classical Communication (LOCC). We show that
this gives rise to a classification of phases in which topologically-ordered
states or other paradigmatic entangled states become trivial. We also
investigate how the set of unitary operations is enhanced by LOCC in this
scenario, allowing one to perform certain large-depth QC in terms of low-depth
ones.
|
A two amino acid (hydrophobic and polar) scheme is used to perform the design
on target conformations corresponding to the native states of twenty single
chain proteins. Strikingly, the percentage of successful identification of the
nature of the residues benchmarked against naturally occurring proteins and
their homologues is around 75%, independent of the complexity of the design
procedure. Typically, the lowest success rate occurs for residues such as
alanine that have a high secondary structure functionality. Using a simple
lattice model, we argue that one possible shortcoming of the model studied may
involve the coarse-graining of the twenty kinds of amino acids into just two
effective types.
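For readers unfamiliar with the two-letter scheme, here is a minimal sketch of the standard HP lattice energy, in which hydrophobic-hydrophobic contacts between non-consecutive residues each contribute -1; this is the common convention, assumed here, not necessarily the paper's exact model.

```python
def hp_energy(conformation, sequence):
    """conformation: list of (x, y) lattice sites; sequence: string of 'H'/'P'."""
    occupied = {site: i for i, site in enumerate(conformation)}
    energy = 0
    for i, (x, y) in enumerate(conformation):
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = occupied.get(nb)
            # Count each non-bonded H-H lattice contact exactly once.
            if j is not None and j > i + 1 and sequence[i] == sequence[j] == "H":
                energy -= 1
    return energy

# A 2x2 square fold: residues 0 and 3 form the only non-bonded contact.
print(hp_energy([(0, 0), (1, 0), (1, 1), (0, 1)], "HHPH"))  # -1
```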
|
This paper studies optimal pricing and rebalancing policies for Autonomous
Mobility-on-Demand (AMoD) systems. We take a macroscopic planning perspective
to tackle a profit maximization problem while ensuring that the system is
load-balanced. We begin by describing the system using a dynamic fluid model to
show the existence and stability of an equilibrium (i.e., load balance) through
pricing policies. We then develop an optimization framework that allows us to
find optimal policies in terms of pricing and rebalancing. We first maximize
profit by only using pricing policies, then incorporate rebalancing, and
finally we consider whether the solution is found sequentially or jointly. We
apply each approach on a data-driven case study using real taxi data from New
York City. Depending on which benchmarking solution we use, the joint problem
(i.e., pricing and rebalancing) increases profits by 7% to 40%.
|
Lanthanide atoms have an unusual electron configuration, with a partially
filled shell of $f$ orbitals. This leads to a set of characteristic properties
that enable enhanced control over ultracold atoms and their interactions: large
numbers of optical transitions with widely varying wavelengths and transition
strengths, anisotropic interaction properties between atoms and with light, and
a large magnetic moment and spin space present in the ground state. These
features in turn enable applications ranging from narrow-line laser cooling and
spin manipulation to evaporative cooling through universal dipolar scattering,
to the observation of a rotonic dispersion relation, self-bound liquid-like
droplets stabilized by quantum fluctuations, and supersolid states. In this
short review, we describe how the unusual level structure of lanthanide atoms
leads to these key features, and provide a brief and necessarily partial
overview of experimental progress in this rapidly developing field.
|
Transit fare arbitrage is the scenario in which two or more commuters agree to
swap tickets during travel in such a way that their total cost is lower than
it would otherwise be. Such arbitrage allows pricing inefficiencies to be
explored and
exploited, leading to improved pricing models. In this paper we discuss the
basics of fare arbitrage through an intuitive pricing framework involving
population density. We then analyze the San Francisco Bay Area Rapid Transit
(BART) system to understand its underlying inefficiencies. We also provide
source code and a comprehensive list of pairs of trips with significant
arbitrage gains
at github.com/asifhaque/transit-arbitrage. Finally, we point towards a uniform
payment interface for different kinds of transit systems.
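A minimal sketch of the arbitrage check itself: riders traveling A->B and C->D swap tickets at a shared intermediate station, so the exit gates charge the A->D and C->B fares instead. The fare values below are hypothetical, not BART's.

```python
# Hypothetical fare table; a real analysis would load the full BART matrix.
fares = {("A", "B"): 6.4, ("C", "D"): 6.1, ("A", "D"): 7.0, ("C", "B"): 3.9}

def arbitrage_gain(a, b, c, d):
    honest = fares[(a, b)] + fares[(c, d)]    # what the two riders should pay
    swapped = fares[(a, d)] + fares[(c, b)]   # what the gates charge after a swap
    return honest - swapped                   # positive => the swap saves money

print(arbitrage_gain("A", "B", "C", "D"))     # 1.6 with the fares above
```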
|
We introduce a general, efficient method to completely describe the topology
of individual grains, bubbles, and cells in three-dimensional polycrystals,
foams, and other multicellular microstructures. This approach is applied to a
pair of three-dimensional microstructures that are often regarded as close
analogues in the literature: one resulting from normal grain growth (mean
curvature flow) and another resulting from a random Poisson-Voronoi
tessellation of space. Grain growth strongly favors particular grain
topologies, compared with the Poisson-Voronoi model. Moreover, the frequencies
of highly symmetric grains are orders of magnitude higher in the grain
growth microstructure than they are in the Poisson-Voronoi one. Grain topology
statistics thus provide a strong, robust differentiator of cellular
microstructures and hint at the processes that drive different classes
of microstructure evolution.
|
We study the externally-driven motion of domain walls (DWs) of the pi/2
type in (in-plane ordered) nanostripes with crystalline cubic anisotropy.
Such DWs are much narrower than the transverse and vortex pi DWs of
soft-magnetic nanostripes while propagating much faster, thus enabling dense
packing of magnetization domains and high-speed processing of many-domain
states. Viscous current-driven DW motion with velocities above 1000 m/s under
an electric current of density 10^12 A/m^2 is predicted to take place in
magnetite nanostripes. Also, viscous motion with velocities above 700 m/s can
be driven by a magnetic field, according to our solution of a 1D analytical
model and to micromagnetic simulations. Such huge velocities are achievable in
nanostripes of very small cross-section (only 100 nm wide and 10 nm thick).
Fully stress-driven propagation of DWs in nanostripes of cubic
magnetostrictive materials is predicted as well. The strength of DW pinning at
stripe notches and the thermal stability of the magnetization during current
flow are also addressed.
|
We introduce a new isometric strain model for the study of the dynamics of
cloth garments in a moderate stress environment, such as robotic manipulation
in the neighborhood of humans. This model treats textiles as surfaces which are
inextensible, admitting only isometric motions. Inextensibility is imposed in a
continuous setting, prior to any discretization, which gives consistency with
respect to re-meshing and prevents the problem of locking even with coarse
meshes. Simulations of robotic manipulation using the model are compared to
the actual manipulation in the real world, and the error between the
simulated and real position of each point in the garment is lower than 1 cm on
average, even when a coarse mesh is used. Aerodynamic contributions to motion
are incorporated into the model through the virtual uncoupling of the inertial
and gravitational mass of the garment. This approach yields a description of
cloth motion that is accurate with respect to reality and incorporates
aerodynamic effects using only two parameters.
|
We propose a programme for systematically counting the single and multi-trace
gauge invariant operators of a gauge theory. Key to this is the plethystic
function. We expound in detail the power of this plethystic programme for
world-volume quiver gauge theories of D-branes probing Calabi-Yau
singularities, an illustrative case to which the programme is not limited,
though one in which an intimate web of relations between the geometry and the
gauge theory manifests itself. We can also use generalisations of the
Hardy-Ramanujan formula to compute the entropy of gauge theories from the
plethystic exponential. In due course, we also touch upon fascinating
connections to Young
Tableaux, Hilbert schemes and the MacMahon Conjecture.
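For reference, the plethystic exponential at the heart of the programme is standard: for a generating function $f(t)$ of single-trace invariants with $f(0)=0$,
$$\mathrm{PE}\big[f(t)\big] \;=\; \exp\left(\sum_{n=1}^{\infty}\frac{f(t^{n})}{n}\right),$$
which converts the counting of single-trace invariants into the counting of multi-trace ones.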
|
Let $K$ be a compact subset in the complex plane and let $A(K)$ be the
uniform closure of the functions continuous on $K$ and analytic on $K^{\circ}$.
Let $\mu$ be a positive finite measure with its support contained in $K$. For
$1 \leq q < \infty$, let $A^{q}(K,\mu)$ denote the closure of $A(K)$ in
$L^{q}(\mu)$. The aim of this work is to study the structure of the space
$A^{q}(K,\mu)$. We seek a necessary and sufficient condition on $K$ so that a
Thomson-type structure theorem for $A^{q}(K,\mu)$ can be established. Our
results essentially give complete solutions to a major open problem in the
field of subnormal operator theory and approximation by analytic functions
in the mean.
|
The paper introduces a new numerical characteristic of one dimensional
stochastic systems. This quantity is a measure of minimal periodicity and can
be detected in the deep differential structure of the process. We claim that
this new measure of stochasticity is also a well-adapted characteristic for
the study of stochastic resonance phenomena.
|
Targeted color-dots with varying shapes and sizes in images are first
exhaustively identified, and then their multiscale 2D geometric patterns are
extracted for testing spatial uniformness in a progressive fashion. Based on
color theory in physics, we develop a new color-identification algorithm
relying on highly associative relations among the three color-coordinates: RGB
or HSV. Such high associations critically imply low color-complexity of a color
image, and renders potentials of exhaustive identification of targeted
color-dots of all shapes and sizes. Via heterogeneous shaded regions and
lighting conditions, our algorithm is shown being robust, practical and
efficient comparing with the popular Contour and OpenCV approaches. Upon all
identified color-pixels, we form color-dots as individually connected networks
with shapes and sizes. We construct minimum spanning trees (MST) as spatial
geometries of dot-collectives of various size-scales. Given a size-scale, the
distribution of distances between immediate neighbors in the observed MST is
extracted, so do many simulated MSTs under the spatial uniformness assumption.
We devise a new algorithm for testing 2D spatial uniformness based on a
Hierarchical clustering tree upon all involving MSTs. Our developments are
illustrated on images obtained by mimicking chemical spraying via drone in
Precision Agriculture.
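A hedged sketch of the MST step (illustrative, using standard SciPy routines rather than the authors' code): build the tree over dot centroids and read off its edge-length distribution, which is then compared against trees simulated under spatial uniformness.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_lengths(points):
    """Lengths of the n-1 edges of the Euclidean MST over 2D points."""
    dist = squareform(pdist(points))             # dense pairwise-distance matrix
    mst = minimum_spanning_tree(dist).toarray()  # sparse MST -> dense array
    return mst[mst > 0]

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1, size=(200, 2))     # stand-in for dot centroids
print(np.sort(mst_edge_lengths(centroids))[:5])
```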
|
In the past decade, society has experienced notable growth in a variety of
technological areas. However, the Fourth Industrial Revolution has not been
embraced yet. Industry 4.0 imposes several challenges which include the
necessity of new architectural models to tackle the uncertainty that open
environments represent to cyber-physical systems (CPS). Waste Electrical and
Electronic Equipment (WEEE) recycling plants are one such open environment.
Here, CPSs must work harmoniously in a changing environment,
interacting with similar and not so similar CPSs, and adaptively collaborating
with human workers. In this paper, we support the Distributed Adaptive Control
(DAC) theory as a suitable Cognitive Architecture for managing a recycling
plant. Specifically, a recursive implementation of DAC (between both
single-agent and large-scale levels) is proposed to meet the expected demands
of the European Project HR-Recycler. Additionally, with the aim of having a
realistic benchmark for future implementations of the recursive DAC, a
micro-recycling plant prototype is presented.
|
A new analytic treatment of the two-dimensional Hubbard model at finite
temperature and chemical potential is presented. A next nearest neighbor
hopping term of strength t' is included. This analysis is based upon a
formulation of the statistical mechanics of particles in terms of the S-matrix.
In the 2-body scattering approximation, the S-matrix allows a systematic
expansion in t/U. We show that for U/t large enough, a region of attractive
interactions exists near the Fermi surface due to 1-loop renormalization
effects. For t'/t = -0.3, attractive interactions exist for U/t > 6.4. Our
analysis suggests that superconductivity may not exist for t'=0. Based on the
existence of solutions of the integral equation for the pseudo-energy, we
provide evidence for the superconducting phase and estimate Tc/t = 0.02.
|
We present the remarkable discovery that the dwarf irregular galaxy NGC 2366
is an excellent analog of the Green Pea (GP) galaxies, which are characterized
by extremely high ionization parameters. The similarities are driven
predominantly by the giant H II region Markarian 71 (Mrk 71). We compare the
system with GPs in terms of morphology, excitation properties, specific
star-formation rate, kinematics, absorption of low-ionization species,
reddening, and chemical abundance, and find consistencies throughout. Since
extreme GPs are associated with both candidate and confirmed Lyman continuum
(LyC) emitters, Mrk 71/NGC 2366 is thus also a good candidate for LyC escape.
The spatially resolved data for this object show a superbubble blowout
generated by mechanical feedback from one of its two super star clusters
(SSCs), Knot B, while the extreme ionization properties are driven by the <1
Myr-old, enshrouded SSC Knot A, which has ~ 10 times higher ionizing
luminosity. Very massive stars (> 100 Msun) may be present in this remarkable
object. Ionization-parameter mapping indicates the blowout region is optically
thin in the LyC, and the general properties also suggest LyC escape in the line
of sight. Mrk 71/NGC 2366 does differ from GPs in that it is 1 - 2 orders of
magnitude less luminous. The presence of this faint GP analog and candidate LyC
emitter (LCE) so close to us suggests that LCEs may be numerous and
commonplace, and therefore could significantly contribute to the cosmic
ionizing budget. Mrk 71/NGC 2366 offers an unprecedentedly detailed look at the
viscera of a candidate LCE, and could clarify the mechanisms of LyC escape.
|
'Red nuggets' are a rare population of passive compact massive galaxies
thought to be the first massive galaxies that formed in the Universe. First
found at $z \sim 3$, they are even less abundant at lower redshifts, and it is
believed that with time they mostly transformed through mergers into today's
giant ellipticals. Those red nuggets which managed to escape this fate can
serve as unique laboratories to study the early evolution of massive galaxies.
In this paper, we aim to make use of the VIMOS Public Extragalactic Redshift
Survey to build the largest catalogue to date of spectroscopically confirmed
red nuggets at intermediate redshift ($0.5<z<1.0$). Starting from a catalogue
of nearly 90 000 VIPERS galaxies we select sources with stellar masses
$M_{star} > 8\times10^{10}$ $\rm{M}_{\odot}$ and effective radii
$R_\mathrm{e}<1.5$ kpc. Among them, we select red, passive galaxies with old
stellar populations based on the NUVrK colour--colour diagram, star formation
rate values, and verification of their optical spectra. Checking the influence
of the source-compactness limit on the selection, we found that the sample
size can vary by up to two orders of magnitude, depending on the chosen
criterion. Using one of the most restrictive criteria with additional checks on
their spectra and passiveness, we spectroscopically identified only 77
previously unknown red nuggets. The resultant catalogue of 77 red nuggets is
the largest such catalogue above the local Universe built on a uniform set of
selection criteria. The number density calculated from the final sample of 77
VIPERS passive red nuggets increases from 4.7$\times10^{-6}$ per comoving
Mpc$^3$ at $z \sim 0.61$ to $9.8 \times 10^{-6}$ at $z \sim 0.95$, which is
higher than the values estimated in the local Universe and lower than those
found at $z>2$, filling the gap at intermediate redshift.
|
The Internet of Things (IoT) requires a new processing paradigm that inherits
the scalability of the cloud while minimizing network latency using resources
closer to the network edge. Building up such flexibility within the
edge-to-cloud continuum consisting of a distributed networked ecosystem of
heterogeneous computing resources is challenging. Furthermore, IoT traffic
dynamics and the rising demand for low-latency services foster the need for
minimizing the response time and balanced service placement. Load-balancing for
fog computing becomes a cornerstone for cost-effective system management and
operations. This paper studies two optimization objectives and formulates a
decentralized load-balancing problem for IoT service placement: (global) IoT
workload balance and (local) quality of service (QoS), in terms of minimizing
the cost of deadline violation, service deployment, and unhosted services. The
proposed solution, EPOS Fog, introduces a decentralized multi-agent system for
collective learning that utilizes edge-to-cloud nodes to jointly balance the
input workload across the network and minimize the costs involved in service
execution. The agents locally generate possible assignments of requests to
resources and then cooperatively select an assignment such that their
combination maximizes edge utilization while minimizing service execution cost.
Extensive experimental evaluation with realistic Google cluster workloads on
various networks demonstrates the superior performance of EPOS Fog in terms of
workload balance and QoS, compared to approaches such as First Fit and
exclusively Cloud-based placement. The results confirm that EPOS Fog reduces
service execution delay by up to 25% and improves the load balance of network
nodes by up to 90%. The
findings also demonstrate how distributed computational resources on the edge
can be utilized more cost-effectively by harvesting collective intelligence.
|
It is shown that recent criticism by C. R. Hagen (hep-th/9902057) questioning
the validity of stress tensor treatments of the Casimir energy for space
divided into two parts by a spherical boundary is without foundation.
|
We study an open quantum system simulation on quantum hardware, which
demonstrates robustness to hardware errors even with deep circuits containing
up to two thousand entangling gates. We simulate two systems of electrons
coupled to an infinite thermal bath: 1) a system of dissipative free electrons
in a driving electric field; and 2) the thermalization of two interacting
electrons in a single orbital in a magnetic field -- the Hubbard atom. These
problems are solved using IBM quantum computers, showing no signs of decreasing
fidelity at long times. Our results demonstrate that algorithms for simulating
open quantum systems are able to far outperform similarly complex
non-dissipative algorithms on noisy hardware. Our two examples show promise
that the driven-dissipative quantum many-body problem can eventually be solved
on quantum computers.
|
Designed to compete with fiat currencies, bitcoin presents itself as a
crypto-currency alternative. Bitcoin makes a number of false claims, including:
solving the double-spending problem is a good thing; bitcoin can be a reserve
currency for banking; hoarding equals saving, and that we should believe
bitcoin can expand by deflation to become a global transactional currency
supply. Bitcoin's developers combine technical implementation proficiency with
ignorance of currency and banking fundamentals. This has resulted in a failed
attempt to change finance. A set of recommendations to change finance are
provided in the Afterword: Investment/venture banking for the masses; Venture
banking to bring back what investment banks once were; Open-outcry exchange for
all CDS contracts; Attempting to develop CDS type contracts on investments in
startup and existing enterprises; and Improving the connection between startup
tech/ideas, business organization and investment.
|
The definition of matter states on spacelike hypersurfaces of a 1+1
dimensional black hole spacetime is considered. Because of small quantum
fluctuations in the mass of the black hole, the usual approximation of treating
the gravitational field as a classical background on which matter is quantized,
breaks down near the black hole horizon. On any hypersurface that captures both
infalling matter near the horizon and Hawking radiation, a semiclassical
calculation is inconsistent. An estimate of the size of correlations between
the matter and gravity states shows that they are so strong that a fluctuation
in the black hole mass of order exp[-M/M_{Planck}] produces a macroscopic
change in the matter state. (Talk given at the 7th Marcel Grossmann Meeting on
work in collaboration with E. Keski-Vakkuri, G. Lifschytz and S. Mathur.)
|
This paper defines double fibrations (fibrations of double categories) and
describes their key examples and properties. In particular, it shows how double
fibrations relate to existing fibrational notions such as monoidal fibrations
and discrete double fibrations, proves a representation theorem for double
fibrations, and shows how double fibrations are a type of internal fibration.
|
Recent progress in full jet reconstruction in heavy-ion collisions at RHIC
makes it a promising tool for the quantitative study of QCD at high energy
density. Measurements in d+Au collisions are important to disentangle initial
state nuclear effects from medium-induced k_T broadening and jet quenching.
Furthermore, comparison to measurements in p+p gives access to cold nuclear
matter effects. Inclusive jet p_T spectra and di-jet correlations (k_T) in 200
GeV p+p and d+Au collisions from the 2007-2008 RHIC run are presented.
|
In this paper we show the existence of stochastic Lagrangian particle
trajectories for Leray solutions of the 3D Navier-Stokes equations. More
precisely, for any Leray solution ${\mathbf u}$ of the 3D-NSE and each
$(s,x)\in\mathbb{R}_+\times\mathbb{R}^3$, we show the existence of weak
solutions to the following SDE, which has a density $\rho_{s,x}(t,y)$ belonging
to $\mathbb{H}^{1,p}_q$ provided $p,q\in[1,2)$ with
$\frac{3}{p}+\frac{2}{q}>4$: $$ \mathrm{d} X_{s,t}={\mathbf u}
(s,X_{s,t})\mathrm{d} t+\sqrt{2\nu}\mathrm{d} W_t,\ \ X_{s,s}=x,\ \ t\geq s, $$
where $W$ is a three-dimensional standard Brownian motion and $\nu>0$ is the
viscosity constant. Moreover, we also show that for Lebesgue almost all
$(s,x)$, the solution $X^n_{s,\cdot}(x)$ of the above SDE associated with the
mollifying velocity field ${\mathbf u}_n$ weakly converges to $X_{s,\cdot}(x)$
so that $X$ is a Markov process in almost sure sense.
|
LoRaWAN has emerged as one of the promising low-power wide-area network
technologies to enable long-range sensing and monitoring applications in
Internet of Things. The LoRa physical layer used in LoRaWAN suffers from low
data rates and thus increases packet duration. In a dense LoRaWAN network
scenario with a simple medium access protocol like ALOHA, the packet collision
probability increases with packet duration. This degrades overall
network throughput because of increased re-transmissions of collided packets.
Any increase in data rate directly reduces the packet duration. Thus, in this
paper, we have proposed a novel approach to enhance the data rate in LoRa
communication system by using adaptive symbol periods in physical layer. To the
best of our knowledge, this is the first attempt at using adaptive symbol
periods to enhance data rate of the LoRa system. The trade-off of the proposed
approach in terms of required symbol overhead and degradation in bit error rate
performance due to symbol-period reduction has also been analysed. We show
that for a reduction factor \(\beta\), the data rate increases by a factor of
\(1/\beta\).
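A sketch of the arithmetic behind that claim, using the standard LoRa bit-rate relation R_b = SF * (BW / 2^SF) * CR with symbol period T_s = 2^SF / BW; the reduction factor beta and all numbers below are illustrative.

```python
def lora_bit_rate(sf, bw_hz, cr=4 / 5, beta=1.0):
    """Bits per second; beta < 1 models a shortened symbol period."""
    t_sym = (2**sf / bw_hz) * beta   # symbol period in seconds
    return sf * cr / t_sym           # sf coded bits per symbol, cr code rate

base = lora_bit_rate(sf=12, bw_hz=125_000)               # ~293 bps
fast = lora_bit_rate(sf=12, bw_hz=125_000, beta=0.5)
print(base, fast, fast / base)                           # rate scales as 1/beta
```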
|
We study the dynamics of a second-order phase transition in a situation that
mimics a sudden quench to a temperature below the critical temperature in a
model with dynamical symmetry breaking. In particular we show that the domains
of correlated values of the condensate grow as $\sqrt{t}$ and that this result
seems to be largely model independent.
|
We describe a model for pion production off nucleons and coherent pions from
nuclei induced by neutrinos in the 1 GeV energy regime. Besides the dominant
Delta pole contribution, it takes into account the effect of background terms
required by chiral symmetry. Moreover, the model uses a reduced
nucleon-to-Delta resonance axial coupling, which leads to coherent pion
production cross sections around a factor two smaller than most of the previous
theoretical estimates. Nuclear effects like medium corrections on the Delta
propagator and final pion distortion are included.
|
Counting the number of people is something many security applications focus
on when controlling access to restricted areas, as occurs with banks,
airports, railway stations and governmental offices. This paper presents an
automated solution for detecting the presence of more than one person inside
the interlocked doors used at many access points. In most cases, interlocked
doors are small areas where additional information sources and sensors are
placed in order to detect the presence of guns, explosives, etc. The general
goals and the required environmental conditions allowed us to implement a
detection system at lower cost and complexity than other existing techniques.
The system consists of a fixed array of microwave transceiver modules, whose
received signals are processed to collect information related to the volume
occupied in the interlocked-door cabin. The proposed solution has been
validated by statistical analysis, and the whole solution has also been
implemented for use in a real-time environment and validated against real
experimental measurements.
|
An introduction to models of open universes originating from bubbles,
including a summary of recent theoretical results for the power spectrum. To
appear in the proceedings of the XXXIth Moriond meeting, "Microwave Background
Anisotropies."
|
In this paper, we consider the embedding relations between any two
$\alpha$-modulation spaces. Based on the observation that the
$\alpha$-modulation space with smaller $\alpha$ can be regarded as a
corresponding $\alpha$-modulation space with larger $\alpha$, we give a
complete characterization of the Fourier
multipliers between $\alpha$-modulation spaces with different $\alpha$. Then we
establish a full version of optimal embedding relations between
$\alpha$-modulation spaces. As an application, we determine that the bounded
operators commuting with translations between $\alpha$-modulation spaces are of
convolution type.
|
An interpretation of the Casselman-Wallach (C-W) Theorem is that the
$K$-finite functor is an isomorphism of categories from the category of
finitely generated, admissible smooth Fr\'echet modules of moderate growth to
the category of Harish-Chandra modules for a real reductive group, $G$ (here
$K$ is a maximal compact subgroup of $G$). In this paper we study the
dependence of this functor on parameters. Our main result implies that
holomorphic dependence of the Harish-Chandra modules on parameters implies
holomorphic dependence of the corresponding smooth Fr\'echet modules. The work
uses results from the excellent thesis of van der Noort. A remarkable family
of universal Harish-Chandra modules developed in this paper also plays a key
role.
|
Recently, the collisionless expansion of spherical nanoplasmas has been
analyzed with a new ergodic model, clarifying the transition from
hydrodynamic-like to Coulomb-explosion regimes, and providing accurate laws for
the relevant features of the phenomenon. A complete derivation of the model is
here presented. The important issue of the self-consistent initial conditions
is addressed by analyzing the initial charging transient due to the electron
expansion, in the approximation of immobile ions. A comparison among different
kinetic models for the expansion is presented, showing that the ergodic model
provides a simplified description, which retains the essential information on
the electron distribution, in particular, the energy spectrum. Results are
presented for a wide range of initial conditions (determined from a single
dimensionless parameter), in excellent agreement with calculations from the
exact Vlasov-Poisson theory, thus providing a complete and detailed
characterization of all the stages of the expansion.
|
The speed-accuracy Pareto curve of object detection systems has advanced
through a combination of better model architectures, training and inference
methods. In this paper, we methodically evaluate a variety of these techniques
to understand where most of the improvements in modern detection systems come
from. We benchmark these improvements on the vanilla ResNet-FPN backbone with
RetinaNet and RCNN detectors. The vanilla detectors are improved by 7.7% in
accuracy while being 30% faster. We further provide simple scaling
strategies to generate a family of models that form two Pareto curves, named
RetinaNet-RS and Cascade RCNN-RS. These simple rescaled detectors explore the
speed-accuracy trade-off between the one-stage RetinaNet detectors and
two-stage RCNN detectors. Our largest Cascade RCNN-RS models achieve 52.9% AP
with a ResNet152-FPN backbone and 53.6% with a SpineNet143L backbone. Finally,
we show the ResNet architecture, with three minor architectural changes,
outperforms EfficientNet as the backbone for object detection and instance
segmentation systems.
|
Network Function Virtualization (NFV) has the potential to significantly
reduce capital and operating expenses, shorten product release cycles, and
improve service agility. In this paper, we focus on minimizing the total number
of Virtual Network Function (VNF) instances to provide a specific service
(possibly at different locations) to all the flows in a network. Certain
network security and analytics applications may allow fractional processing of
a flow at different nodes (corresponding to datacenters), giving an opportunity
for greater optimization of resources. Through a reduction from the set cover
problem, we show that this problem is NP-hard and cannot even be approximated
within a factor of (1 - o(1)) ln(m) (where m is the number of flows) unless
P=NP. Then, we design two simple greedy algorithms and prove that they achieve
an approximation ratio of (1 - o(1)) ln(m) + 2, which is asymptotically
optimal. For special cases where each node hosts multiple VNF instances (which
is typically true in practice), we also show that our greedy algorithms have a
constant approximation ratio. Further, for tree topologies we develop an
optimal greedy algorithm by exploiting the inherent topological structure.
Finally, we conduct extensive numerical experiments to evaluate the performance
of our proposed algorithms in various scenarios.
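To convey the flavor of the greedy analysis, here is a hedged sketch of a set-cover-style greedy for the integral case: repeatedly open an instance at the node covering the most still-unserved flows. This mirrors the classical argument behind the ln(m) ratio; the paper's algorithms, which also handle fractional processing, differ in detail.

```python
def greedy_vnf_placement(flows_at_node):
    """flows_at_node: dict mapping node -> set of flow ids routed through it."""
    unserved = set().union(*flows_at_node.values())
    opened = []
    while unserved:
        # Pick the node whose flows cover the most still-unserved flows.
        node = max(flows_at_node, key=lambda v: len(flows_at_node[v] & unserved))
        opened.append(node)
        unserved -= flows_at_node[node]
    return opened

nodes = {"u": {1, 2, 3}, "v": {3, 4}, "w": {4, 5, 6}}
print(greedy_vnf_placement(nodes))  # ['u', 'w'] covers all six flows
```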
|
After the observation of non-zero $\theta_{13}$ the goal has shifted to
observe $CP$ violation in the leptonic sector. Neutrino oscillation experiments
can, directly, probe the Dirac $CP$ phases. Alternatively, one can measure $CP$
violation in the leptonic sector using the Leptonic Unitarity Quadrangle
(LUQ). The existence of Standard Model (SM) gauge singlets - sterile
neutrinos - would
provide additional sources of $CP$ violation. We investigate the connection
between neutrino survival probability and rephasing invariants of the
$4\times4$ neutrino mixing matrix. In general, the LUQ contains eight
geometrical parameters, of which five are independent. We obtain the $CP$
asymmetry ($P_{\nu_f\rightarrow\nu_{f'}}-P_{\bar{\nu}_f\rightarrow\bar{\nu}_{f'}}$)
in terms of these independent parameters of the LUQ and search for
possibilities of extracting information on these independent geometrical
parameters in short-baseline (SBL) and long-baseline (LBL) experiments, thus
looking toward constructing the LUQ and a possible measurement of $CP$
violation. We find that it is not possible to construct the LUQ using data
from LBL experiments
because $CP$ asymmetry is sensitive to only three of the five independent
parameters of LUQ. However, for SBL experiments, $CP$ asymmetry is found to be
sensitive to all five independent parameters, making it possible to construct
the LUQ and measure $CP$ violation.
|
System identification techniques -- projection pursuit regression models
(PPRs) and convolutional neural networks (CNNs) -- provide state-of-the-art
performance in predicting visual cortical neurons' responses to arbitrary input
stimuli. However, the constituent kernels recovered by these methods are often
noisy and lack coherent structure, making it difficult to understand the
underlying component features of a neuron's receptive field.
In this paper, we show that using a dictionary of diverse kernels with
complex shapes, learned from natural scenes based on efficient coding theory,
as the front-end for PPRs and CNNs can improve their performance in neuronal
response prediction as well as algorithmic data efficiency and convergence
speed. Extensive experimental results also indicate that these sparse-code
kernels provide important information on the component features of a neuron's
receptive field. In addition, we find that models with the complex-shaped
sparse code front-end are significantly better than models with a standard
orientation-selective Gabor filter front-end for modeling V1 neurons that have
been found to exhibit complex pattern selectivity. We show that the relative
performance difference due to these two front-ends can be used to produce a
sensitive metric for detecting complex selectivity in V1 neurons.
|
Neural Radiance Field (NeRF) has recently become a significant development in
the field of Computer Vision, allowing for implicit, neural network-based scene
representation and novel view synthesis. NeRF models have found diverse
applications in robotics, urban mapping, autonomous navigation, virtual
reality/augmented reality, and more. Due to the growing popularity of NeRF and
its expanding research area, we present a comprehensive survey of NeRF papers
from the past two years. Our survey is organized into architecture and
application-based taxonomies and provides an introduction to the theory of NeRF
and its training via differentiable volume rendering. We also present a
benchmark comparison of the performance and speed of key NeRF models. By
creating this survey, we hope to introduce new researchers to NeRF, provide a
helpful reference for influential works in this field, as well as motivate
future research directions with our discussion section.
|
Rudimentary mathematical analysis of simple network models suggests
bandwidth-independent saturation of network growth dynamics and hints at linear
decrease in the information density of the data. However, it strongly confirms
Metcalfe's law as a measure of network utility and suggests it can play an
important role in network calculations. This paper establishes a mathematical
notion of network value and analyses two conflicting models of a network. One,
the traditional model, fails to manifest Metcalfe's law. The other, which
views the network in a wider context, both confirms Metcalfe's law and reveals
its upper boundary.
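For reference, Metcalfe's law values a network by its number of potential pairwise connections,
$$V(n)\;\propto\;\binom{n}{2}\;=\;\frac{n(n-1)}{2}\;\sim\;\frac{n^{2}}{2},$$
which is the quadratic scaling the two models are tested against.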
|
In contrast with classical Schwarz theory, recent results in computational
chemistry have shown that for special domain geometries, the one-level parallel
Schwarz method can be scalable. This property is not true in general, and the
issue of quantifying the lack of scalability remains an open problem. Even
though heuristic explanations are given in the literature, a rigorous and
systematic analysis is still missing. In this short manuscript, we provide a
first rigorous result that precisely quantifies the lack of scalability of the
classical one-level parallel Schwarz method for the solution to the
one-dimensional Laplace equation. Our analysis technique provides a possible
roadmap for a systematic extension to more realistic problems in higher
dimensions.
|
Why does nature only allow nonlocal correlations up to Tsirelson's bound and
not beyond? We construct a channel whose input is statistically independent of
its output, but through which communication is nevertheless possible if and
only if Tsirelson's bound is violated. This provides a statistical
justification for Tsirelson's bound on nonlocal correlations in a bipartite
setting.
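For concreteness, in the CHSH formulation of the bipartite setting the bound in question reads
$$S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S|_{\text{classical}} \le 2, \qquad |S|_{\text{quantum}} \le 2\sqrt{2},$$
so the channel constructed here permits communication exactly when $|S| > 2\sqrt{2}$.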
|
We examine the $\Xi_Q - \Xi'_Q$ mixing and heavy baryon masses in the heavy
quark effective theory with the $1/m_Q$ corrections. In the conventional baryon
assignment, we obtain the mixing angle $\cos^2 \theta = 0.87\pm 0.03$ by virtue
of the Gell-Mann-Okubo mass relation. On the other hand, if we adopt the new
baryon assignment given by Falk, the allowed region of the $\Sigma_c$ mass
lies above 2372 MeV.
|
The partial decay widths of the lowest-lying negative parity baryons belonging
to the 70-plet of SU(6) are analyzed in the framework of the 1/Nc expansion.
The channels considered are those with single pseudoscalar meson emission.
analysis is carried out to sub-leading order in 1/Nc and to first order in
SU(3) symmetry breaking. Conclusions about the magnitude of SU(3) breaking
effects along with predictions for some unknown or poorly determined partial
decay widths of known resonances are obtained.
|
The most general version of a renormalizable $d=4$ theory corresponding to a
dimensionless higher-derivative scalar field model in curved spacetime is
explored. The classical action of the theory contains $12$ independent
functions, which are the generalized coupling constants of the theory. We
calculate the one-loop beta functions and then consider the conditions for
finiteness. The set of exact solutions of power type is proven to consist of
precisely three conformal and three nonconformal solutions, given by remarkably
simple (albeit nontrivial) functions that we obtain explicitly. The finiteness
of the conformal theory indicates the absence of a conformal anomaly in the
finite sector. The stability of the finite solutions is investigated and the
possibility of renormalization group flows is discussed as well as several
physical applications.
|
We construct a generalization of the cyclic $\lambda$-deformed models of
\cite{Georgiou:2017oly} by relaxing the requirement that all the WZW models
should have the same level $k$. Our theories are integrable and flow from a
single UV point to different IR fixed points depending on the different
orderings of the WZW levels $k_i$. First we calculate the Zamolodchikov's
C-function for these models as exact functions of the deformation parameters.
Subsequently, we fully characterize each of the IR conformal field theories.
Although the corresponding left and right sectors have different symmetries,
realized as products of current and coset-type symmetries, the associated
central charges are precisely equal, in agreement with the values obtained
the C-function.
|
We study the Cauchy problem for the semilinear fractional heat equation
$u_{t}=\triangle^{\alpha/2}u+f(u)$ with non-negative initial value $u_{0}\in
L^{q}(\mathbb{R}^{n})$ and locally Lipschitz, non-negative source term $f$. For
$f$ satisfying the Osgood-type condition
$\int_{1}^{\infty}\frac{ds}{f(s)}=\infty$, we show that there exist initial
conditions such that the equation has no local solution in
$L^{1}_{loc}(\mathbb{R}^{n})$.
|
The quantum interference effect on the quasiparticle density of states (DOS)
is studied with the diagrammatic technique in two-dimensional d-wave
superconductors with dilute nonmagnetic impurities both near the Born and near
the unitary limits. We derive in detail the expressions for the Goldstone modes
(cooperon and diffuson) for quasiparticle diffusion. The DOS for generic Fermi
surfaces is shown to be subject to a quantum interference correction of
logarithmic suppression, but with various renormalization factors for the Born
and unitary limits. Upon approaching the combined limit of unitarity and nested
Fermi surface, the DOS correction is found to become a $\delta$-function of the
energy, which can be used to account for the resonant peak found by the
numerical studies.
|
Text entry is a common activity in virtual reality (VR) systems. Only a
limited number of hands-free techniques are available; these allow users to
enter text when their hands are busy, such as when holding items, or when
hand-based devices are not available. The most used hands-free text entry
technique is
DwellType, where a user selects a letter by dwelling over it for a specific
period. However, its performance is limited due to the fixed dwell time for
each character selection. In this paper, we explore two other hands-free text
entry mechanisms in VR: BlinkType and NeckType, which leverage users' eye
blinks and neck's forward and backward movements to select letters. With a user
study, we compare the performance of the two techniques with DwellType. Results
show that users can achieve an average text entry rate of 13.47, 11.18 and
11.65 words per minute with BlinkType, NeckType, and DwellType, respectively.
Users' subjective feedback shows BlinkType as the preferred technique for text
entry in VR.
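For context, rates like these are computed with the standard text-entry convention that one word is five characters, including spaces; a minimal sketch (the helper name is ours):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard WPM metric: (|T| - 1) characters per second, 5 chars = 1 word."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

print(words_per_minute("the quick brown fox", 10.0))  # 21.6 WPM
```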
|
I present a determination of the photon PDF from a fit to the recent ATLAS
measurements of high-mass Drell-Yan lepton-pair production at $\sqrt{s} = 8$
TeV. This analysis is based on the {\tt xFitter} framework interfaced to the
{\tt APFEL} program, that accounts for NLO QED effects, and to the {\tt
aMCfast} code to account for the photon-initiated contributions within {\tt
MadGraph5\_aMC@NLO}. The result is compared with other recent determinations
of the photon PDF, finding generally good agreement.
|
Our local environment at $r<10$ Mpc expands linearly and smoothly, as if
ruled by a uniform matter distribution, while observations show a very clumpy
local universe. This is a long-standing enigma in cosmology. We argue that the
recently discovered vacuum or quintessence (dark energy (DE) component with the
equation of state $p_Q = w \rho_Q c^2$, $w \in [-1,0)$) from observations of
the high-redshift universe may also manifest itself in the properties of the
very local Hubble flow. We introduce the concept of the critical distance $r_Q$
where the repulsive force of dark energy starts to dominate over the gravity of
a mass concentration. For the Local Group $r_Q$ is about 1.5 Mpc. Intriguingly,
at the same distance 1.5 Mpc the linear and very "cold" Hubble flow emerges,
with about the global Hubble constant. We also consider the critical epoch
$t_Q$, when the DE antigravity began to dominate over the local matter gravity
for a galaxy which at the present epoch is in the local DE dominated region.
Our main result is that the homogeneous dark energy component, revealed by SNIa
observations, resolves the old confrontation between the local Hubble flow and
local highly non-uniform, fractal matter distribution. It explains why the
Hubble law starts on the outskirts of the Local Group, with the same Hubble
constant as globally and with a remarkably small velocity dispersion.
|
We extend the computations in [AGM1, AGM2, AGM3] to find the cohomology in
degree five of a congruence subgroup Gamma of SL(4,Z) with coefficients in a
field K, twisted by a nebentype character eta, along with the action of the
Hecke algebra. This is the top cuspidal degree. In practice we take K to be a
finite field of large characteristic, as a proxy for the complex numbers. For
each Hecke eigenclass found, we produce a Galois representation that appears to
be attached to it. Our computations show that in every case this Galois
representation is the only one that could be attached to it. The existence of
the attached Galois representations agrees with a theorem of Scholze and sheds
light on the Borel-Serre boundary for Gamma.
The computations require serious modifications to our previous algorithms to
accommodate the twisted coefficients. Nontrivial coefficients add a layer of
complication to our data structures, and new possibilities arise that must be
taken into account in the Galois Finder, the code that finds the Galois
representations. We have improved the Galois Finder so that it reports when the
attached Galois representation is uniquely determined by our data.
|
SOXS (Son of X-shooter) is a wide-band, medium-resolution spectrograph for
the ESO NTT, with first light expected in 2021. The instrument will be
composed of five semi-independent subsystems: a pre-slit Common Path, an
Acquisition Camera, a Calibration Box, the NIR spectrograph, and the UV-VIS
spectrograph. In this paper, we present the mechanical design of the
subsystems and the kinematic mounts developed to simplify the final
integration procedure and maintenance. The concept of the CP and NIR
optomechanical mounts, developed for a simple pre-alignment procedure and for
the thermal compensation of reflective and refractive elements, will be shown.
|
Models with extra dimensions may give new effects visible at future
experiments. In these models, bulk fields can develop localized corrections to
their kinetic terms which can modify the phenomenological predictions in a
sizeable way. We review the case in which both gauge bosons and fermions
propagate in the bulk, and discuss the limits on the parameter space arising
from electroweak precision data.
|
We present the results of an experiment where a short focal length (~1.3 cm)
permanent magnet electron lens is used to image micron-size features of a metal
sample in a single shot, using an ultra-high brightness ps-long 4 MeV electron
beam from a radiofrequency photoinjector. Magnification ratios in excess of 30x
were obtained using a triplet of compact, small gap (3.5 mm), Halbach-style
permanent magnet quadrupoles with nearly 600 T/m field gradients. These results
pave the way towards single-shot time-resolved electron microscopy and open
new opportunities in the applications of high brightness electron beams.
|
Excitation of solar-like oscillations is attributed to turbulent convection
and takes place in the uppermost part of the outer convective zones.
Amplitudes of these oscillations depend on the efficiency of the excitation
processes as well as on the properties of turbulent convection. We present past
and recent improvements on the modeling of those processes. We show how the
mode amplitudes and mode line-widths can bring information about the turbulence
in the specific cases of the Sun and Alpha Cen A.
|
We perform perturbative computations in a lattice gauge theory with a
conformal measure that is quadratic in a non-compact abelian gauge field and is
nonlocal, as inspired by the induced gauge action in massless QED$_3$. In a
previous work, we showed that coupling fermion sources to the gauge model led
to nontrivial conformal data in the correlation functions of fermion bilinears
that are functions of charge $q$ of the fermion. In this paper, we compute such
gauge invariant fermionic observables to order $q^2$ in lattice perturbation
theory with the same conformal measure. We reproduce the expectations for
scalar anomalous dimension from previous estimates in dimensional
regularization. We address the issue of the lattice regulator dependence of the
amplitudes of correlation functions.
|
We analyse a minimal supersymmetric standard model (MSSM) taking a minimal
flavour violation (MFV) structure at the GUT scale. We evaluate the parameters
at the electroweak scale taking into account the full flavour structure in the
evolution of the renormalization group equations. We concentrate mainly on the
decay Bs -> mu mu and its correlations with other observables like b -> s
gamma, b -> s l l, Delta M_Bs and the anomalous magnetic moment of the muon. We
restrict our analysis to the regions in parameter space consistent with the
dark matter constraints. We find that the BR(Bs -> mu mu) can exceed the
current experimental limit in the regions of parameter space which are allowed
by all other constraints, thus providing an additional bound on supersymmetric
parameters. This holds even in the constrained MSSM. Assuming a hypothetical
measurement of BR(Bs -> mu mu) ~ 10^-7, we analyse the predicted MSSM spectrum
and flavour violating decay modes of supersymmetric particles which are found
to be small.
|
We show theoretically that the magnetic ions, randomly distributed in a
two-dimensional (2D) semiconductor system, can generate a ferromagnetic
long-range order via the RKKY interaction. The main physical reason is the
discrete (rather than continuous) symmetry of the 2D Ising model of the
spin-spin interaction mediated by the spin-orbit coupling of 2D free carriers,
which precludes the validity of the Mermin-Wagner theorem. Further, the
analysis clearly illustrates the crucial role of the molecular field
fluctuations as opposed to the mean field. The developed theoretical model
describes the desired magnetization and phase-transition temperature $T_c$ in
terms of a single parameter; namely, the chemical potential $\mu$. Our results
highlight a pathway to reach the highest possible $T_c$ in a given material as
well as an opportunity to control the magnetic properties externally (e.g., via
a gate bias). Numerical estimations show that magnetic impurities such as
Mn$^{2+}$ with spins $S=5/2$ can realize ferromagnetism with $T_c$ close to
room temperature.
|
In this paper, we solve the Diophantine equation in the title in nonnegative
integers m, n, and a. In order to prove our result, we use lower bounds for
linear forms in logarithms and a version of the Baker-Davenport reduction
method in Diophantine approximation.
|
The progenitor stars of several Type IIb supernovae (SNe) show indications
for extended hydrogen envelopes. These envelopes might be the outcome of
luminous energetic pre-explosion events, so-called precursor eruptions. We use
the Palomar Transient Factory (PTF) pre-explosion observations of a sample of
27 nearby Type IIb SNe to look for such precursors during the final years prior
to the SN explosion. No precursors are found when combining the observations in
15-day bins, and we calculate the absolute-magnitude-dependent upper limit on
the precursor rate. At the 90% confidence level, Type IIb SNe have on average
$<0.86$ precursors as bright as absolute $R$-band magnitude $-14$ in the final
3.5 years before the explosion and $<0.56$ events over the final year. In
contrast, precursors among SNe IIn have a $\gtrsim 5$ times higher rate. The
kinetic energy required to unbind a low-mass stellar envelope is comparable to
the radiated energy of a few-weeks-long precursor which would be detectable for
the closest SNe in our sample. Therefore, mass ejections, if they are common in
such SNe, are radiatively inefficient or have durations longer than months.
Indeed, when using 60-day bins a faint precursor candidate is detected prior to
SN 2012cs ($\sim2$% false-alarm probability). We also report the detection of
the progenitor of SN 2011dh which does not show detectable variability over the
final two years before the explosion. The suggested progenitor of SN 2012P is
still present, and hence is likely a compact star cluster, or an unrelated
object.
|
Ellipsometry is used to indirectly measure the optical properties and
thickness of thin films. However, solving the inverse problem of ellipsometry
is time-consuming since it involves human expertise to apply the data fitting
techniques. Many studies use traditional machine learning-based methods to
model the complex mathematical fitting process. In our work, we approach this
problem from a deep learning perspective. First, we introduce a large-scale
benchmark dataset to facilitate deep learning methods. The proposed dataset
encompasses 98 types of thin film materials and 4 types of substrate materials,
including metals, alloys, compounds, and polymers, among others. Additionally,
we propose a deep learning framework that leverages residual connections and
self-attention mechanisms to learn the massive data points. We also introduce a
reconstruction loss to address the common challenge of multiple solutions in
thin film thickness prediction. Compared to traditional machine learning
methods, our framework achieves state-of-the-art (SOTA) performance on our
proposed dataset. The dataset and code will be available upon acceptance.
|
The variational quantum eigensolver (VQE) is a hybrid algorithm that has the
potential to provide a quantum advantage in practical chemistry problems that
are currently intractable on classical computers. VQE trains parameterized
quantum circuits using a classical optimizer to approximate the eigenvalues and
eigenstates of a given Hamiltonian. However, VQE faces challenges in
task-specific design and machine-specific architecture, particularly when
running on noisy quantum devices. This can have a negative impact on its
trainability, accuracy, and efficiency, resulting in noisy quantum data. We
propose variational denoising, an unsupervised learning method that employs a
parameterized quantum neural network to improve the solution of VQE by learning
from noisy VQE outputs. Our approach can significantly decrease energy
estimation errors and increase fidelities with ground states compared to noisy
input data for the $\text{H}_2$, LiH, and $\text{BeH}_2$ molecular
Hamiltonians, and the transverse field Ising model. Surprisingly, it only
requires noisy data for training. Variational denoising can be integrated into
quantum hardware, increasing its versatility as an end-to-end quantum
processing step for quantum data.
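To make the idea concrete, a toy single-qubit analogue (explicitly not the paper's quantum neural network): a parameterized rotation is trained to lower the average energy of noisy states, using only those noisy states for training:

import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]])      # toy 2x2 Hamiltonian
ground_energy = np.linalg.eigvalsh(H)[0]

def ry(theta):
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])       # real single-qubit rotation

rng = np.random.default_rng(0)
ground = np.linalg.eigh(H)[1][:, 0]
# Stand-ins for noisy VQE outputs: the true ground state plus perturbations.
noisy = [g / np.linalg.norm(g) for g in
         (ground + 0.3 * rng.normal(size=2) for _ in range(50))]

def mean_energy(theta):
    # Average energy of the "denoised" states U(theta)|v>; no clean labels.
    return float(np.mean([(ry(theta) @ v) @ H @ (ry(theta) @ v) for v in noisy]))

thetas = np.linspace(0.0, 2.0 * np.pi, 400)  # crude grid-search optimizer
best = thetas[int(np.argmin([mean_energy(t) for t in thetas]))]
print(mean_energy(0.0), mean_energy(best), ground_energy)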
|
In F-term supergravity inflation models, scalar fields other than the
inflaton generically receive a Hubble induced mass, which may restore gauge
symmetries during inflation and phase transitions may occur during or after
inflation as the Hubble parameter decreases. We study monopole (and domain
wall) production associated with such a phase transition in chaotic inflation
in supergravity and obtain a severe constraint on the symmetry breaking scale
which is related to the tensor-to-scalar ratio. Depending on model
parameters, it is possible that monopoles are sufficiently diluted to be free
from current constraints but still observable by planned experiments.
|
James's Conjecture predicts that the adjustment matrix for blocks of the
Iwahori-Hecke algebra of the symmetric group is the identity matrix when the
weight of the block is strictly less than the characteristic of the field. In
this paper, we consider the case when the characteristic of the field is
greater than or equal to 5, and prove that the adjustment matrix for the
principal block of $\mathcal{H}_{5e}$ is the identity matrix whenever $e\neq4$.
When $e=4$, we are able to calculate all but two entries of the adjustment
matrix.
|
Implementing circular economy (CE) principles is increasingly recommended as
a convenient solution to meet the goals of sustainable development. New tools
are required to support practitioners, decision-makers and policy-makers
towards more CE practices, as well as to monitor the effects of CE adoption.
Worldwide, academics, industrialists and politicians all agree on the need to
use CE-related measuring instruments to manage this transition at different
systemic levels. In this context, a wide range of circularity indicators
(C-indicators) has been developed in recent years. Yet, as there is not one
single definition of the CE concept, it is of the utmost importance to know
what the available indicators measure in order to use them properly. Indeed,
through a systematic literature review covering both academic and grey
literature, 55 sets of C-indicators, developed by scholars, consulting companies
and governmental agencies, have been identified, encompassing different
purposes, scopes, and potential usages. Inspired by existing taxonomies of
eco-design tools and sustainability indicators, and in line with the CE
characteristics, a classification of indicators aiming to assess, improve,
monitor and communicate on the CE performance is proposed and discussed. In the
developed taxonomy including 10 categories, C-indicators are differentiated
regarding criteria such as the levels of CE implementation (e.g. micro, meso,
macro), the CE loops (maintain, reuse, remanufacture, recycle), the performance
(intrinsic, impacts), the perspective of circularity (actual, potential) they
are taking into account, or their degree of transversality (generic,
sector-specific). In addition, the database inventorying the 55 sets of
C-indicators is linked to an Excel-based query tool to facilitate the selection
of appropriate indicators according to the specific user's needs and
requirements. This study enriches the literature by giving a first need-driven
taxonomy of C-indicators, which is demonstrated on several use cases. It
provides a synthesis of and clarification to the emerging and much-needed research
theme of C-indicators, and sheds some light on remaining key challenges such as
their effective uptake by industry. Finally, limitations, improvement areas,
and implications of the proposed taxonomy are explicitly addressed to
guide future research on C-indicators and CE implementation.
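A minimal sketch of how such a need-driven query could look in code; the schema and entries below are hypothetical stand-ins for the Excel tool's actual database:

import pandas as pd

# Hypothetical schema and entries mirroring the taxonomy's criteria; the
# actual tool's database and column names may differ.
indicators = pd.DataFrame([
    {"name": "Indicator-A", "level": "micro", "loop": "recycle",
     "perspective": "actual", "transversality": "generic"},
    {"name": "Indicator-B", "level": "macro", "loop": "reuse",
     "perspective": "potential", "transversality": "sector-specific"},
])

# Select micro-level, generic indicators, as a practitioner might.
selection = indicators[(indicators.level == "micro") &
                       (indicators.transversality == "generic")]
print(selection["name"].tolist())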
|
Characterizing and accessing quantum phases of itinerant bosons or fermions
in two dimensions (2D) that exhibit singular structure along surfaces in
momentum space but have no quasi-particle description remains a central
challenge in the field of strongly correlated physics. Fortuitously, signatures
of such 2D strongly correlated phases are expected to be manifest in
quasi-one-dimensional "$N$-leg ladder" systems. The ladder discretization of
the transverse momentum cuts through the 2D surface, leading to a quasi-1D
descendant state with a set of low-energy modes whose number grows with the
number of legs and whose momenta are inherited from the 2D surfaces. These
multi-mode quasi-1D liquids constitute a new and previously unanticipated class
of quantum states interesting in their own right. But more importantly they
carry a distinctive quasi-1D "fingerprint" of the parent 2D quantum fluid
state. This can be exploited to access the 2D phases from controlled numerical
and analytical studies in quasi-1D models. The preliminary successes and future
prospects in this endeavor will be briefly summarized.
|
We propose a novel way to handle out of vocabulary (OOV) words in downstream
natural language processing (NLP) tasks. We implement a network that predicts
useful embeddings for OOV words based on their morphology and on the context in
which they appear. Our model also incorporates an attention mechanism
indicating the focus allocated to the left context words, the right context
words or the word's characters, hence making the prediction more interpretable.
The model is a ``drop-in'' module that is jointly trained with the downstream
task's neural network, thus producing embeddings specialized for the task at
hand. When the task is mostly syntactic, we observe that our model directs most
of its attention to surface-form characters. On the other hand, for more
semantic tasks, the network allocates more attention to the surrounding words. In
all our tests, the module helps the network achieve better performance
compared to the use of simple random embeddings.
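A minimal sketch of the attention mixing described above (dimensions, pooling, and the character encoder are our assumptions):

import torch
import torch.nn as nn

class OOVEmbedder(nn.Module):
    """Predict an embedding for an OOV word from its left context, right
    context, and characters; the attention weights over the three sources
    make the prediction inspectable."""

    def __init__(self, dim: int):
        super().__init__()
        self.char_rnn = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
        self.score = nn.Linear(dim, 1)
        self.out = nn.Linear(dim, dim)

    def forward(self, left_ctx, right_ctx, char_embs):
        # left_ctx / right_ctx: (batch, dim) pooled context embeddings
        # char_embs: (batch, n_chars, dim) character embeddings
        _, h = self.char_rnn(char_embs)                   # (1, batch, dim)
        sources = torch.stack([left_ctx, right_ctx, h.squeeze(0)], dim=1)
        attn = torch.softmax(self.score(sources).squeeze(-1), dim=1)  # (batch, 3)
        mixed = (attn.unsqueeze(-1) * sources).sum(dim=1)
        return self.out(mixed), attn  # attn reveals where the focus went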
|
The Michelson-Morley experiment was designed to detect the relative motion of
the Earth with respect to a preferred reference frame, the ether, by measuring
the fringe shifts in an optical interferometer. These shifts, which should have
been proportional to the square of the Earth's velocity, were found to be much
smaller than expected. As a consequence, that experiment was taken as
evidence that there is no ether and, as such, played a crucial role in
deciding between Lorentzian Relativity and Einstein's Special Relativity.
However, according to some authors, the observed Earth's velocity was not
negligibly small. To provide an independent check, we have re-analyzed the
fringe shifts observed in each of the six different sessions of the
Michelson-Morley experiment. They are consistent with a non-zero observable
Earth's velocity $v_{\rm obs} = 8.4 \pm 0.5~{\rm km/s}$. Assuming the existence of a
preferred reference frame and using Lorentz transformations, this $v_{\rm obs}$
corresponds to a real velocity, in the plane of the interferometer, $v_{\rm
earth} = 201 \pm 12~{\rm km/s}$. This value, which is remarkably consistent with
Miller's 1932 cosmic solution, suggests that the magnitude of the fringe shifts is
determined by the typical velocity of the solar system within our galaxy. This
conclusion is consistent with the results of all classical experiments
(Morley-Miller, Illingworth, Joos, Michelson-Pease-Pearson,...) and with the
existing data from present-day experiments.
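For orientation, the classical (pre-relativistic) expectation relating the fringe shift to the velocity, from which an observable velocity is inferred, is the textbook second-order formula

$$ \delta \simeq \frac{2D}{\lambda}\,\frac{v_{\rm obs}^2}{c^2}, $$

where $D$ is the optical path length in each arm and $\lambda$ the wavelength; assuming a preferred frame, the Lorentzian analysis then maps $v_{\rm obs}$ to the real velocity $v_{\rm earth}$.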
|
Many important ultrafast phenomena take place in the liquid phase. However,
there is no practical theory to predict how liquids respond to intense light.
Here, we propose an accurate $ab~initio$ method to study the non-perturbative
interaction of intense pulses with a liquid target to investigate its
high-harmonic emission. We consider the case of liquid water, but the method
can be applied to any other liquid or amorphous system. The liquid water
structure is reproduced using Car-Parrinello molecular dynamics simulations in
a periodic supercell. Then, we employ real-time time-dependent density
functional theory to evaluate the light-liquid interaction. We outline the
practical numerical conditions to obtain a converged response. Also, we discuss
the impact of ultrafast nuclear dynamics on the non-linear response of the system.
In addition, by considering two different ordered structures of ice, we show
how harmonic emission responds to the loss of long-range order in liquid water.
|
The benefits of cutting planes based on the perspective function are well
known for many specific classes of mixed-integer nonlinear programs with on/off
structures. However, we are not aware of any empirical studies that evaluate
their applicability and computational impact over large, heterogeneous test
sets in general-purpose solvers. This paper provides a detailed computational
study of perspective cuts within a linear programming based branch-and-cut
solver for general mixed-integer nonlinear programs. Within this study, we
extend the applicability of perspective cuts from convex to nonconvex
nonlinearities. This generalization is achieved by applying a perspective
strengthening to valid linear inequalities which separate solutions of linear
relaxations. The resulting method can be applied to any constraint where all
variables appearing in nonlinear terms are semi-continuous and depend on at
least one common indicator variable. Our computational experiments show that
adding perspective cuts for convex constraints yields a consistent improvement
of performance, and adding perspective cuts for nonconvex constraints reduces
branch-and-bound tree sizes and strengthens the root node relaxation, but has
no significant impact on the overall mean time.
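For reference, in the convex case the strengthening works as follows: if the variables of a convex constraint $f(x) \le t$ vanish whenever the indicator $z = 0$, the gradient cut at a point $\bar{x}$ can be tightened to its perspective version (the classical perspective cut of Frangioni and Gentile)

$$ \nabla f(\bar{x})^{\top} x + \bigl( f(\bar{x}) - \nabla f(\bar{x})^{\top} \bar{x} \bigr)\, z \;\le\; t, $$

which reduces to the ordinary gradient cut at $z = 1$. The paper's extension applies an analogous strengthening to valid linear inequalities that separate relaxation solutions of nonconvex constraints.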
|
The electromagnetic and gravitational form factors of decuplet baryons are
systematically studied with a covariant quark-diquark approach. The model
parameters are first discussed and determined through a global comparison with
lattice calculation results. Then, the electromagnetic properties of
the systems including electromagnetic radii, magnetic moments, and
electric-quadrupole moments are calculated. The obtained results are in
agreement with experimental measurements and the results of other models.
Finally, the gravitational form factors and the mechanical properties of the
decuplet baryons, such as mass radii, energy densities, and spin distributions,
are also calculated and discussed.
|
Underwater images are degraded by the selective attenuation of light that
distorts colours and reduces contrast. The degradation extent depends on the
water type, the distance between an object and the camera, and the object's
depth below the water surface. Underwater image filtering aims to restore
or to enhance the appearance of objects captured in an underwater image.
Restoration methods compensate for the actual degradation, whereas enhancement
methods improve either the perceived image quality or the performance of
computer vision algorithms. The growing interest in underwater image filtering
methods--including learning-based approaches used for both restoration and
enhancement--and the associated challenges call for a comprehensive review of
the state of the art. In this paper, we review the design principles of
filtering methods and revisit the oceanology background that is fundamental to
identify the degradation causes. We discuss image formation models and the
results of restoration methods in various water types. Furthermore, we present
task-dependent enhancement methods and categorise datasets for training neural
networks and for method evaluation. Finally, we discuss evaluation strategies,
including subjective tests and quality assessment measures. We complement this
survey with a platform ( https://puiqe.eecs.qmul.ac.uk/ ), which hosts
state-of-the-art underwater filtering methods and facilitates comparisons.
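As background, a widely used simplified image formation model for this setting (a variant of which underlies many of the restoration methods reviewed) is

$$ I_c(\mathbf{x}) = J_c(\mathbf{x})\, e^{-\beta_c d(\mathbf{x})} + B_c \bigl( 1 - e^{-\beta_c d(\mathbf{x})} \bigr), \qquad c \in \{R, G, B\}, $$

where $J_c$ is the unattenuated scene radiance, $d$ the object-camera distance, $\beta_c$ the wavelength-dependent attenuation coefficient, and $B_c$ the background (veiling) light: the first term models the selective attenuation that distorts colours, the second the backscatter that reduces contrast.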
|
The paper deals with fractal characteristics (the Hurst exponent) and
wavelet scaleograms of an information distribution model suggested by the
authors. The authors have studied how the Hurst exponent changes with the
model parameters, which carry semantic meaning. The paper also
considers the fractal characteristics of real information streams, and describes
how, in practice, the Hurst exponent dynamics depends on the state of these
information streams.
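A minimal sketch of one standard way to estimate the Hurst exponent, rescaled-range (R/S) analysis (the paper does not specify its estimator, so this is illustrative):

import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent: fit log(R/S) against log(window size)."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# White noise should give H close to 0.5; persistent streams give H > 0.5.
print(hurst_rs(np.random.default_rng(0).normal(size=4096)))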
|
We present combinatorial characterizations for the associated primes of the
second power of squarefree monomial ideals and criteria for this power to have
positive depth or depth greater than one.
|
We demonstrate electromagnetically induced transparency with the control
laser in a Laguerre-Gaussian mode. The transmission spectrum is studied in an
ultracold gas for the D2 line in both $^{85}$Rb and $^{87}$Rb, where the
decoherence due to diffusion of the atomic medium is negligible. We compare
these results to a similar configuration, but with the control laser in the
fundamental laser mode. We model the transmission of a probe laser under both
configurations, and we find good agreement with the experiment. We conclude
that the use of Laguerre-Gaussian modes in electromagnetically induced
transparency results in narrower resonance linewidths as compared to uniform
control laser intensity. The narrowing of the linewidth is caused by the
spatial distribution of the Laguerre-Gaussian intensity profile.
|
We analyze the role of first (leading) author gender on the number of
citations that a paper receives, on the publishing frequency and on the
self-citing tendency. We consider a complete sample of over 200,000
publications from 1950 to 2015 from five major astronomy journals. We determine
the gender of the first author for over 70% of all publications. The fraction
of papers which have a female first author has increased from less than 5% in
the 1960s to about 25% today. We find that the increase of the fraction of
papers authored by females is slowest in the most prestigious journals such as
Science and Nature. Furthermore, female authors write 19$\pm$7% fewer papers in
seven years following their first paper than their male colleagues. At all
times papers with male first authors receive more citations than papers with
female first authors. This difference has been decreasing with time and amounts
to $\sim$6% measured over the last 30 years. To account for the fact that the
properties of female and male first author papers differ intrinsically, we use
a random forest algorithm to control for the non-gender specific properties of
these papers which include seniority of the first author, number of references,
total number of authors, year of publication, publication journal, field of
study and region of the first author's institution. We show that papers
authored by females receive 10.4$\pm$0.9% fewer citations than what would be
expected if papers with the same non-gender specific properties were
written by male authors. Finally, we also find that female authors in our
sample tend to self-cite more, but that this effect disappears when controlled
for non-gender specific variables.
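A minimal sketch of the control strategy, with synthetic data and hypothetical feature names; whether the forest is fitted on male-first-author papers only is our assumption for illustration:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for the non-gender properties (seniority, number of
# references, number of authors, year, journal, region).
X = rng.normal(size=(n, 6))
y = np.exp(X[:, 1]) + rng.normal(size=n)   # synthetic citation counts
is_female = rng.random(n) < 0.25

# Fit on male-first-author papers, then ask what the model predicts for the
# female-first-author papers with the same non-gender properties.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[~is_female], y[~is_female])
expected = model.predict(X[is_female])
gap = 1.0 - y[is_female].sum() / expected.sum()   # relative citation deficit
print(f"relative deficit: {gap:+.1%}")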
|
We define a free product of connected simple graphs that is equivalent to
several existing definitions when the graphs are vertex-transitive but differs
otherwise. The new definition is designed for the automorphism group of the
free product to be as large as possible, and we give sufficient criteria for it
to be non-discrete. Finally, we transfer Tits' classification of automorphisms
of trees and simplicity criterion to free products of graphs.
|
Developments in Genome-Wide Association Studies have led to the increasing
notion that future healthcare techniques will be personalized to the patient,
by relying on genetic tests to determine the risk of developing a disease. To
this end, the detection of gene interactions that cause complex diseases
constitutes an important application. Similarly to many applications in this
field, extensive data sets containing genetic information for a series of
patients are used (such as Single-Nucleotide Polymorphisms), leading to high
computational complexity and memory utilization, thus constituting a major
challenge when targeting high-performance execution in modern computing
systems. To close this gap, this work proposes several novel approaches for the
detection of three-way gene interactions in modern CPUs and GPUs, making use of
different optimizations to fully exploit the target architectures. Crucial
insights from the Cache-Aware Roofline Model are used to ensure the suitability
of the applications to the computing devices. An extensive study of the
architectural features of 13 CPU and GPU devices from all main vendors is also
presented, making it possible to understand which features are relevant for obtaining
high performance in this bioinformatics domain. To the best of our knowledge,
this study is the first to perform such evaluation for epistasis detection. The
proposed approaches are able to surpass the performance of state-of-the-art
works in the tested platforms, achieving an average speedup of 3.9$\times$
(7.3$\times$ on CPUs and 2.8$\times$ on GPUs) and maximum speedup of
10.6$\times$ on Intel UHD P630 GPU.
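A naive reference implementation of the counting kernel at the heart of three-way epistasis detection (the paper's optimized CPU/GPU kernels compute the same joint genotype-phenotype counts, only much faster):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_snps, n_patients = 20, 500
genotypes = rng.integers(0, 3, size=(n_snps, n_patients))  # 0/1/2 per SNP
phenotype = rng.integers(0, 2, size=n_patients)            # case/control

def triplet_table(i, j, k):
    # Joint genotype-phenotype contingency table for one SNP triplet.
    table = np.zeros((3, 3, 3, 2), dtype=np.int64)
    np.add.at(table, (genotypes[i], genotypes[j], genotypes[k], phenotype), 1)
    return table

tables = {t: triplet_table(*t) for t in combinations(range(n_snps), 3)}
print(len(tables), "contingency tables built")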
|
We present a series of statistical tests done to a sample of 29 Seyfert 1 and
59 Seyfert 2 galaxies selected on the basis of mostly isotropic properties, namely
their far-infrared fluxes and warm infrared colors. Such selection criteria provide a
profound advantage over the criteria used by most investigators in the past,
such as ultraviolet excess. These tests were done using ground-based
high-resolution VLA A-configuration 3.6 cm radio and optical B and I imaging data.
From the relative numbers of Seyfert 1's and Seyfert 2's we calculate that the
torus half opening angle is $48^\circ$. We show that, as seen in previous papers,
there is a lack of edge-on Seyfert 1 galaxies, suggesting dust and gas along
the host galaxy disk probably play an important role in hiding some nuclei from
direct view. We find that there is no statistically significant difference in
the distribution of host galaxy morphological types and radio luminosities of
Seyfert 1's and Seyfert 2's, suggesting that previous results showing the
opposite may have been due to selection effects. The average extension of the
radio emission of Seyfert 1's is smaller than that of Seyfert 2's by a factor
of ~2-3, as predicted by the Unified Model. A search for galaxies around our
Seyferts allows us to put a lower and an upper limit on the possible number of
companions around these galaxies of 19% and 28%, respectively, with no
significant difference in the number of companion galaxies between Seyfert 1's
and Seyfert 2's. We also show that there is no preference for the radio jets to
be aligned closer to the host galaxy disk axis in late type Seyferts, unlike
results claimed by previous papers. These results, taken together, provide
strong support for a Unified Model in which type 2 Seyferts contain a torus
seen more edge-on than the torus in type 1 Seyferts.
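The opening-angle estimate follows from the simplest unified-model assumption that a Seyfert appears as type 1 exactly when viewed within the torus cone, so the type 1 fraction equals the fractional solid angle of the cone:

$$ \frac{N_1}{N_1+N_2} = 1 - \cos\theta \quad\Longrightarrow\quad \cos\theta = 1 - \frac{29}{88} \approx 0.67, \qquad \theta \approx 48^\circ. $$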
|
In this article, we study the new Q-tensor model previously derived from
Onsager's molecular theory by Han \textit{et al.} [Arch. Rational Mech. Anal.,
215.3 (2014), pp. 741-809] for static liquid crystal modeling. Taking density
and Q-tensor as order parameters, the new Q-tensor model not only characterizes
important phases while capturing density variation effects, but also remains
computationally tractable and efficient. We report the results of two numerical
applications of the model, namely the isotropic--nematic--smectic-A--smectic-C
phase transitions and the isotropic--nematic interface problem, in which
density variations are indispensable. Meanwhile, we show the connections of the
new Q-tensor model with classical models including generalized Landau-de Gennes
models, generalized McMillan models, and the Chen-Lubensky model. The new
Q-tensor model is the pivot and an appropriate trade-off between the classical
models in three scales.
|
We consider the classic problem of Network Reliability. A network is given
together with a source vertex, one or more target vertices, and probabilities
assigned to each of the edges. Each edge appears in the network with its
associated probability and the problem is to determine the probability of
having at least one source-to-target path. This problem is known to be NP-hard.
We present a linear-time fixed-parameter algorithm based on a parameter
called treewidth, which is a measure of tree-likeness of graphs. Network
Reliability was already known to be solvable in polynomial time for bounded
treewidth, but there were no concrete algorithms and the known methods used
complicated structures and were not easy to implement. We provide a
significantly simpler and more intuitive algorithm that is much easier to
implement.
We also report on an implementation of our algorithm and establish the
applicability of our approach by providing experimental results on the graphs
of subway and transit systems of several major cities, such as London and
Tokyo. To the best of our knowledge, this is the first exact algorithm for
Network Reliability that can scale to handle real-world instances of the
problem.
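For context, the brute-force definition of the quantity being computed, which is exponential in the number of edges and thus only usable as a correctness check on tiny graphs (the paper's treewidth-based algorithm computes the same probability in linear time for bounded treewidth):

from itertools import product

def reliability(n, edges, probs, source, targets):
    """Probability of at least one source-to-target path, by enumerating all
    2^m edge subsets (m = number of edges)."""
    total = 0.0
    for present in product([False, True], repeat=len(edges)):
        p = 1.0
        for on, pe in zip(present, probs):
            p *= pe if on else (1.0 - pe)
        parent = list(range(n))            # union-find over present edges
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for on, (u, v) in zip(present, edges):
            if on:
                parent[find(u)] = find(v)
        if any(find(source) == find(t) for t in targets):
            total += p
    return total

# Two parallel source-target edges, each present with p = 0.5: 1 - 0.25 = 0.75.
print(reliability(2, [(0, 1), (0, 1)], [0.5, 0.5], 0, [1]))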
|
We construct $O(1)\times O(n)$-invariant ancient ``pancake'' solutions to a
large and natural class of fully nonlinear curvature flows. We then establish
that these are the unique $O(n)$-invariant ancient solutions to the
corresponding flow which sweep out a slab by carrying out a fine asymptotic
analysis for this class. This extends the main results of \cite{BLT} to a
surprisingly general class of flows.
|
In this paper we study the problem of optimally paying out dividends from an
insurance portfolio, when the criterion is to maximize the expected discounted
dividends over the lifetime of the company and the portfolio contains claims
due to natural catastrophes, modelled by a shot-noise Cox claim number process.
The optimal value function of the resulting two-dimensional stochastic control
problem is shown to be the smallest viscosity supersolution of a corresponding
Hamilton-Jacobi-Bellman equation, and we prove that it can be uniformly
approximated through a discretization of the space of the free surplus of the
portfolio and the current claim intensity level. We implement the resulting
numerical scheme to identify optimal dividend strategies for such a natural
catastrophe insurer, and it is shown that the nature of the barrier and band
strategies known from the classical models with constant Poisson claim
intensity carry over in a certain way to this more general situation, leading
to action and non-action regions for the dividend payments as a function of the
current surplus and intensity level. We also discuss some interpretations in
terms of upward potential for shareholders when including a catastrophe sector
in the portfolio.
|
We analyse analytic properties of nonlocal transition semigroups associated
with a class of stochastic differential equations (SDEs) in $\mathbb{R}^d$
driven by pure jump-type L\'evy processes. First, we show under which
conditions the semigroup is analytic on the Besov space
$B_{p,q}^m(\mathbb{R}^d)$ with $1\le p, q<\infty$ and $m\in\mathbb{R}$. Second, we
present some applications by proving the strong Feller property and giving weak
error estimates for approximation schemes of the SDEs over the Besov space
$B_{\infty,\infty}^m(\mathbb{R}^d)$.
|
We explore the meaning of privacy from the perspective of Qatari nationals as
it manifests in digital environments. Although privacy is an essential and
widely respected value in many cultures, the way in which it is understood and
enacted depends on context. It is especially vital to understand user behaviors
regarding privacy in the digital sphere, where individuals increasingly publish
personal information. Our mixed-methods analysis of 18K Twitter posts that
mention privacy focuses on the face to face and digital contexts in which
privacy is mentioned, and how those contexts lead to varied ideologies
regarding privacy. We find that in the Arab Gulf, the need for privacy is often
supported by Quranic text, advice on how to protect privacy is frequently
discussed, and the use of paternalistic language by men when discussing
women-related privacy is common. Above all, privacy is framed as a communal
attribute, including not only the individual, but the behavior of those around
them; it even extends beyond the individual lifespan. We contribute an analysis
and description of these previously unexplored interpretations of privacy,
which play a role in how users navigate social media.
|
An ensemble of nuclear spin-pairs under certain conditions is known to
exhibit singlet-state lifetimes much longer than those of other non-equilibrium states.
This property of singlet state can be exploited in quantum information
processing for efficient initialization of quantum registers. Here we describe
a general method of initialization and experimentally demonstrate it with two-,
three-, and four-qubit nuclear spin registers.
|
Dense disparities among multiple views are essential for estimating the 3D
architecture of a scene based on the geometrical relationship between the scene
and the views or cameras. Scenes with large extents of heterogeneous textures,
differing scene illumination among the multiple views and with occluding
objects affect the accuracy of the estimated disparities. Markov random fields
(MRF) based methods for disparity estimation address these limitations using
spatial dependencies among the observations and among the disparity estimates.
These methods, however, are limited by spatially fixed and smaller neighborhood
systems or cliques. In this work, we present a new factor graph-based
probabilistic graphical model for disparity estimation that allows a larger and
a spatially variable neighborhood structure determined based on the local scene
characteristics. We evaluated our method using the Middlebury benchmark stereo
datasets and the Middlebury evaluation dataset version 3.0 and compared its
performance with recent state-of-the-art disparity estimation algorithms. The
new factor graph-based method provided disparity estimates with higher accuracy
when compared to the recent non-learning- and learning-based disparity
estimation algorithms. In addition to disparity estimation, our factor graph
formulation can be useful for obtaining maximum a posteriori solution to
optimization problems with complex and variable dependency structures as well
as for other dense estimation problems such as optical flow estimation.
|
Speech enhancement using neural networks has recently been receiving considerable
attention in research and is being integrated into commercial devices and
applications. In this work, we investigate data augmentation techniques for
supervised deep learning-based speech enhancement. We show that not only
does augmenting SNR values over a broader range and a continuous distribution help to
regularize training, but so does augmenting the spectral and dynamic level
diversity. However, so that level augmentation does not degrade training, we propose a
modification to signal-based loss functions by applying sequence level
normalization. We show in experiments that this normalization overcomes the
degradation caused by training on sequences with imbalanced signal levels, when
using a level-dependent loss function.
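A minimal sketch of one plausible form of such sequence-level normalization (the paper's exact formulation may differ): scale both estimate and target by the target's per-sequence RMS before applying the signal-based loss, so that augmented levels do not reweight examples:

import torch

def normalized_l2_loss(estimate, target):
    # Scale both signals by the target's per-sequence RMS so that level
    # augmentation does not reweight training examples.
    rms = target.pow(2).mean(dim=-1, keepdim=True).sqrt().clamp_min(1e-8)
    return ((estimate / rms - target / rms) ** 2).mean()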
|
We investigate monoenergetic gamma-ray signatures from annihilations of dark
matter comprised of Z^1, the first Kaluza-Klein excitation of the Z boson, in a
non-minimal Universal Extra Dimensions model. The self-interactions of the
non-Abelian Z^1 gauge boson give rise to a large number of contributing Feynman
diagrams that do not exist for annihilations of the Abelian gauge boson B^1,
which is the standard Kaluza-Klein dark matter candidate. We find that the
annihilation rate is indeed considerably larger for the Z^1 than for the B^1.
Even though relic density calculations indicate that the mass of the Z^1 should
be larger than the mass of the B^1, the predicted monoenergetic gamma fluxes
are of the same order of magnitude. We compare our results to existing
experimental limits, as well as to future sensitivities, for imaging air
Cherenkov telescopes, and we find that the limits are reached already with a
moderately large boost factor. The realistic prospects for detection depend on
the experimental energy resolution.
|
The wormlike chain model of stiff polymers is a nonlinear $\sigma$-model in
one spacetime dimension in which the ends are fluctuating freely. This causes
important differences with respect to the presently available theory which
exists only for periodic and Dirichlet boundary conditions. We modify this
theory appropriately and show how to perform a systematic large-stiffness
expansion for all physically interesting quantities in powers of $L/\xi$,
where $L$ is the length and $\xi$ the persistence length of the polymer. This
requires special procedures for regularizing highly divergent Feynman integrals
which we have developed in previous work. We show that by adding to the
unperturbed action a correction term ${\cal A}^{\rm corr}$, we can calculate
all Feynman diagrams with Green functions satisfying Neumann boundary
conditions. Our expansions yield, order by order, the properly normalized
end-to-end distribution function in arbitrary dimensions $d$, its even and odd
moments, and the two-point correlation function.
|
We adapt the interactive spline model of Wahba to growth curves with
covariates. The smoothing spline formulation permits a non-parametric
representation of the growth curves. In the limit when the discretization error
is small relative to the estimation error, the resulting growth curve estimates
often depend only weakly on the number and locations of the knots. The
smoothness parameter is determined from the data by minimizing an empirical
estimate of the expected error. We show that the risk estimate of Craven and
Wahba is a weighted goodness of fit estimate. A modified loss estimate is
given, where $\sigma^2$ is replaced by its unbiased estimate.
|
The physical properties of molecular clouds are often measured using
spectral-line observations, which provide the only probes of the clouds'
velocity structure. It is hard, though, to assess whether and to what extent
intensity features in position-position-velocity (PPV) space correspond to
"real" density structures in position-position-position (PPP) space. In this
paper, we create synthetic molecular cloud spectral-line maps of simulated
molecular clouds, and present a new technique for measuring the reality of
individual PPV structures. Our procedure projects density structures identified
in PPP space into corresponding intensity structures in PPV space and then
measures the geometric overlap of the projected structures with structures
identified from the synthetic observation. The fractional overlap between a PPP
and PPV structure quantifies how well the synthetic observation recovers
information about the 3D structure. Applying this machinery to a set of
synthetic observations of CO isotopes, we measure how well spectral-line
measurements recover mass, size, velocity dispersion, and virial parameter for
a simulated star-forming region. By disabling various steps of our analysis, we
investigate how much opacity, chemistry, and gravity affect measurements of
physical properties extracted from PPV cubes. For the simulations used here,
our results suggest that superposition induces a ~40% uncertainty in masses,
sizes, and velocity dispersions derived from 13CO. The virial parameter is most
affected by superposition, such that estimates of the virial parameter derived
from PPV and PPP information typically disagree by a factor of ~2. This
uncertainty makes it particularly difficult to judge whether gravitational or
kinetic energy dominate a given region, since the majority of virial parameter
measurements fall within a factor of 2 of the equipartition level alpha ~ 2.
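In its simplest form, the fractional-overlap measure reduces to boolean-mask arithmetic (the paper's machinery additionally handles the projection and structure identification steps):

import numpy as np

def fractional_overlap(ppp_proj, ppv):
    """Fraction of the projected PPP structure recovered by a PPV structure,
    with both given as boolean masks in PPV space."""
    return np.logical_and(ppp_proj, ppv).sum() / ppp_proj.sum()

a = np.zeros((8, 8, 8), dtype=bool); a[2:5, 2:5, 2:5] = True
b = np.zeros_like(a); b[3:6, 3:6, 3:6] = True
print(fractional_overlap(a, b))   # 8 shared voxels / 27 -> ~0.30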
|
The usefulness of the genuinely entangled six qubit state that was recently
introduced by Borras et al. is investigated for the quantum teleportation of an
arbitrary three qubit state and for quantum state sharing (QSTS) of an
arbitrary two qubit state. For QSTS, we explicitly devise two protocols and
construct sixteen orthogonal measurement bases which can lock arbitrary two-qubit
information between two parties.
|
Dark matter (DM) may be captured around a neutron star (NS) through
DM-nucleon interactions. We observe that the enhancement of such capture is
particularly significant when the DM-nucleon scattering cross-section depends
on the DM velocity and/or the momentum transfer. This could potentially lead to the
formation of a black hole within the typical lifetime of the NS. As the black
hole expands through the accretion of matter from the NS, it ultimately results
in the collapse of the host. Utilizing existing data on the pulsars J0437-4715 and
J2124-3858, we derive stringent constraints on the DM-nucleon scattering
cross-section across a broad range of DM masses.
|
The propagation of boson particles in a gravitational field described by the
Brans-Dicke theory of gravity is analyzed. We derive the wave function of the
scalar particles, and the effective potential experienced by the quantum
particles considering the role of the varying gravitational coupling. Besides,
we calculate the probability to find the scalar particles near the region where
a naked singularity is present. The extremely high energy radiated in such a
situation could account for the huge emitted power observed in Gamma Ray
Bursts.
|
Suppose $G$ is a finite group. The set of all centralizers of $2-$element
subsets of $G$ is denoted by $2-Cent(G)$. A group $G$ is called
$(2,n)-$centralizer if $|2-Cent(G)| = n$ and primitive $(2,n)-$centralizer if
$|2-Cent(G)| = |2-Cent(\frac{G}{Z(G)})| = n$, where $Z(G)$ denotes the center
of $G$. The aim of this paper is to present the main properties of
$(2,n)-$centralizer groups; among them, characterizations of $(2,n)-$centralizer
and primitive $(2,n)-$centralizer groups for $n \leq 9$ are given.
|
Computed tomography (CT) segmentation models frequently include classes that
are not currently supported by magnetic resonance imaging (MRI) segmentation
models. In this study, we show that a simple image inversion technique can
significantly improve the segmentation quality of CT segmentation models on MRI
data, by using the TotalSegmentator model, applied to T1-weighted MRI images,
as an example. Image inversion is straightforward to implement and does not
require dedicated graphics processing units (GPUs), thus providing a quick
alternative to complex deep modality-transfer models for generating
segmentation masks for MRI data.
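A minimal sketch of the inversion step itself (the percentile-based rescaling is our choice; the paper's exact preprocessing may differ):

import numpy as np

def invert_intensities(volume):
    """Flip the intensity axis of an MRI volume over a robust range, so that
    tissue contrast more closely resembles CT."""
    vmin, vmax = np.percentile(volume, [1, 99])
    clipped = np.clip(volume, vmin, vmax)
    return vmax + vmin - clipped   # vmin maps to vmax and vice versa

# inverted = invert_intensities(t1_volume)  # then run the CT-trained model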
|