Non-Abelian anyons promise to reveal spectacular features of quantum
mechanics that could ultimately provide the foundation for a decoherence-free
quantum computer. A key breakthrough in the pursuit of these exotic particles
originated from Read and Green's observation that the Moore-Read quantum Hall
state and a (relatively simple) two-dimensional p+ip superconductor both
support so-called Ising non-Abelian anyons. Here we establish a similar
correspondence between the Z_3 Read-Rezayi quantum Hall state and a novel
two-dimensional superconductor in which charge-2e Cooper pairs are built from
fractionalized quasiparticles. In particular, both phases harbor Fibonacci
anyons that---unlike Ising anyons---allow for universal topological quantum
computation solely through braiding. Using a variant of Teo and Kane's
construction of non-Abelian phases from weakly coupled chains, we provide a
blueprint for such a superconductor using Abelian quantum Hall states
interlaced with an array of superconducting islands. Fibonacci anyons appear as
neutral deconfined particles that lead to a two-fold ground-state degeneracy on
a torus. In contrast to a p+ip superconductor, vortices do not yield additional
particle types yet depending on non-universal energetics can serve as a trap
for Fibonacci anyons. These results imply that one can, in principle, combine
well-understood and widely available phases of matter to realize non-Abelian
anyons with universal braid statistics. Numerous future directions are
discussed, including speculations on alternative realizations with fewer
experimental requirements.
|
We reexamine the transition magnetic moment solution to the solar neutrino
problem. We argue that the absence of large time variations in the
Super-Kamiokande rate provides strong evidence against spin-flavor flip in the
solar convective zone. Spin-flavor flip could, however, occur in the primordial
magnetic field in the radiative zone. We compute the longest-lived toroidal
mode for this field and show that spin-flavor flip in the radiative zone can
account for all available solar data.
|
Many of the important conclusions about Gamma-Ray Bursts follow from the
distributions of various quantities such as peak flux or duration. We show that
for astrophysical transients such as bursts, multiple selection thresholds can
lead to various forms of data truncation, which can strongly affect the
distributions obtained from the data if not accounted for properly. Thus the
data should be considered to form a multivariate distribution. We also caution
that if the variables forming the multivariate distribution are not
statistically independent of each other, further biases can result. A general
method is described to properly account for these effects, and as a specific
example we extract the distributions of flux and duration from the BATSE 3B
Gamma-Ray Burst data. It is shown that properly accounting for the
aforementioned biases tends to increase the slope of the $\log{N}$-$\log{S}$
relation at low values of $S$, and dramatically increases the number of short
duration bursts.
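The truncation effect described above can be illustrated with a toy Monte Carlo (the power-law slope, lognormal durations, and threshold model below are illustrative assumptions, not the BATSE analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: peak fluxes S from a power law with cumulative slope alpha,
# N(>S) ~ S^-alpha, and durations T from a lognormal. A duration-dependent
# detection threshold then truncates the sample, coupling the two variables.
alpha = 1.5
S = rng.random(100_000) ** (-1.0 / alpha)          # Pareto: P(S > s) = s^-alpha
T = rng.lognormal(mean=0.0, sigma=1.0, size=S.size)

# Short bursts accumulate fewer photons, so their effective threshold is higher.
S_lim = 1.0 + 3.0 / np.sqrt(T)
detected = S > S_lim

def cumulative_slope(s, s_min):
    """Maximum-likelihood estimate of the log N - log S slope above s_min."""
    s = s[s > s_min]
    return s.size / np.log(s / s_min).sum()

slope_true = cumulative_slope(S, 3.0)          # recovers alpha on the full sample
slope_obs = cumulative_slope(S[detected], 3.0) # flattened by the truncation
```

Fitting the slope only on the truncated sample underestimates it, which is why correcting the bias steepens the observed $\log{N}$-$\log{S}$ relation at low $S$.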
|
We present an overview of high resolution quiet Sun observations, from disk
center to the limb, obtained with the Atacama Large Millimeter/submillimeter
Array (ALMA) at 3 mm. Seven quiet Sun regions were observed with a resolution of up to 2.5" by
4.5". We produced both average and snapshot images by self-calibrating the ALMA
visibilities and combining the interferometric images with full disk solar
images. The images clearly show the chromospheric network, which, based on the
unique segregation method we used, is brighter than the average over the fields
of view of the observed regions by $\sim 305$ K while the intranetwork is less
bright by $\sim 280$ K, with a slight decrease of the network/intranetwork
contrast toward the limb. At 3 mm the network is very similar to the 1600 \AA\
images, with somewhat larger size. We detected for the first time spicular
structures, rising up to 15" above the limb with a width down to the image
resolution and brightness temperature of $\sim$ 1800 K above the local
background. No trace of spicules, either in emission or absorption, was found
on the disk. Our results highlight ALMA's potential for the study of the quiet
chromosphere.
|
We propose an algorithm for solving the Hamilton-Jacobi equation associated
with an optimal trajectory problem for a vehicle moving inside a pre-specified
domain, with speed depending on the direction of motion and the current
position of the vehicle. The dynamics of the vehicle are defined by an ordinary
differential equation whose right-hand side is given by the product of a
control (a time-dependent function) and a function depending on the trajectory
and the control. At some unspecified terminal time, the vehicle reaches the
boundary of the pre-specified domain and incurs a terminal cost. We also
associate a traveling cost, given by an integral along the trajectory followed
by the vehicle. We are interested in a numerical method for finding a
trajectory that minimizes the sum of the traveling cost and the terminal cost.
We develop an algorithm that computes the value function for the general
trajectory optimization problem. Our algorithm is closely related to
Tsitsiklis's fast marching method and to J. A. Sethian's OUM and SLF-LLL [1-4],
and generalizes them. On the basis of these results, we apply our algorithm to
image processing tasks such as fingerprint verification.
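For the isotropic special case, the value function of such a problem reduces to an eikonal equation, and a Dijkstra-style single-pass solver captures the flavor of fast-marching methods (a simplified sketch: 4-connected grid, nearest-neighbor updates instead of the upwind finite-difference update of true FMM/OUM, and position-dependent but direction-independent speed):

```python
import heapq

def fast_marching(speed, sources):
    """Dijkstra-like sketch of fast marching on a 4-connected grid: computes
    first-arrival times u satisfying (approximately) |grad u| = 1/speed,
    the simplest instance of the trajectory problems discussed above."""
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    u = [[INF] * cols for _ in range(rows)]
    heap = []
    for r, c in sources:
        u[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > u[r][c]:          # stale heap entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + 1.0 / speed[nr][nc]   # edge cost = travel time
                if nt < u[nr][nc]:
                    u[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return u
```

On a uniform-speed grid the arrival time from a corner is simply the Manhattan distance, which makes the sketch easy to sanity-check.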
|
We show how quantum dynamics (a unitary transformation) can be captured in
the state of a quantum system, in such a way that the system can be used to
perform, at a later time, the stored transformation almost perfectly on some
other quantum system. Thus programmable quantum gates for quantum information
processing are feasible if some small degree of imperfection is allowed. We
discuss the possibility of using this fact for securely computing a secret
function on a public quantum computer. Finally, our scheme for storage of
operations also allows for a new form of quantum remote control.
|
Active matter is ubiquitous in biology and becomes increasingly more
important in materials science. While numerous active systems have been
investigated in detail both experimentally and theoretically, general design
principles for functional active materials are still lacking. Building on a
recently developed linear response optimization (LRO) framework, we here
demonstrate that the spectra of nonlinear active mechanical and electric
circuits can be designed similarly to those of linear passive networks.
|
Iron and its alloys have made modern civilisation possible, with metallic
meteorites providing one of humanity's earliest sources of usable iron as well
as a window into our solar system's billions of years of history. Here,
the highest-resolution tools reveal the existence of a previously hidden FeNi
highest-resolution tools reveal the existence of a previously hidden FeNi
nanophase within the extremely slowly cooled metallic meteorite NWA 6259. This
new nanophase exists alongside Ni-poor and Ni-rich nanoprecipitates within a
matrix of tetrataenite, the uniaxial, chemically ordered form of FeNi. The
ferromagnetic nature of the nanoprecipitates combined with the
antiferromagnetic character of the FeNi nanophases gives rise to a complex
magnetic state that evolves dramatically with temperature. These observations
extend and possibly alter our understanding of celestial metallurgy, provide
new knowledge concerning the archetypal Fe-Ni phase diagram and supply new
information for the development of new types of sustainable, technologically
critical high-energy magnets.
|
Amplitudes of quantum transitions containing time zigzags are considered. The
discussion is carried out in the framework of the Minkowski metric and standard
quantum mechanics without adding new postulates. It is shown that the wave
function is singular at the instant of the time zigzag. Nevertheless, we argue
that time zigzags are not suppressed at the quantum level, but their
contribution to the amplitude is zero. The result is valid for a single
particle and a non-interacting scalar field.
|
Recent work has shown that systems for speech translation (ST) -- similarly
to automatic speech recognition (ASR) -- poorly handle person names. This
shortcoming not only leads to errors that can seriously distort the meaning
of the input, but also hinders the adoption of such systems in application
scenarios (like computer-assisted interpreting) where the translation of named
entities, like person names, is crucial. In this paper, we first analyse the
outputs of ASR/ST systems to identify the reasons for failures in person name
transcription/translation. Besides the frequency in the training data, we
pinpoint the nationality of the referred person as a key factor. We then
mitigate the problem by creating multilingual models, and further improve our
ST systems by forcing them to jointly generate transcripts and translations,
prioritising the former over the latter. Overall, our solutions result in a
relative improvement in token-level person name accuracy by 47.8% on average
for three language pairs (en->es,fr,it).
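Token-level person-name accuracy of the kind reported above can be computed as follows (a simplified, case-insensitive variant; the paper's exact scoring protocol may differ):

```python
def name_token_accuracy(references, hypotheses):
    """Fraction of annotated person-name tokens in the references that also
    appear in the corresponding hypothesis (simplified illustration)."""
    correct = total = 0
    for ref_names, hyp in zip(references, hypotheses):
        hyp_tokens = set(hyp.lower().split())
        for name in ref_names:
            for tok in name.lower().split():
                total += 1
                correct += tok in hyp_tokens
    return correct / total if total else 0.0

refs = [["Angela Merkel"], ["Mario Draghi"]]
hyps = ["chancellor angela merkel spoke", "mario dragi replied"]
acc = name_token_accuracy(refs, hyps)  # 3 of the 4 name tokens are found
```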
|
We report long-term simultaneous optical and (RXTE) X-ray observations of the
soft X-ray transient and low mass X-ray binary X1608-52 spanning from 1999 to
2001. In addition to the usual X-ray outburst and quiescent states, X1608-52
also exhibits an extended low intensity state during which the optical
counterpart, QX Nor, is found to be about two magnitudes brighter than during
quiescence. We detect optical photometric variability on a possible period of
0.5370 days with a semi-amplitude of ~0.27 mag in the I band. The modulation
could be orbital but is also consistent with a scenario invoking a superhump
with decreasing period. Observations of QX Nor during quiescence indicate an F
to G type main sequence secondary while theoretical considerations argue for an
evolved mass donor. Only an evolved mass donor would satisfy the condition for
the occurrence of superhumps.
|
Exact Maximum Inner Product Search (MIPS) is an important task that is widely
pertinent to recommender systems and high-dimensional similarity search. The
brute-force approach to solving exact MIPS is computationally expensive, thus
spurring recent development of novel indexes and pruning techniques for this
task. In this paper, we show that a hardware-efficient brute-force approach,
blocked matrix multiply (BMM), can outperform the state-of-the-art MIPS solvers
by over an order of magnitude, for some -- but not all -- inputs.
We also present a novel MIPS solution, MAXIMUS, that takes
advantage of hardware efficiency and pruning of the search space. Like BMM,
MAXIMUS is faster than other solvers by up to an order of magnitude, but again
only for some inputs. Since no single solution offers the best runtime
performance for all inputs, we introduce a new data-dependent optimizer,
OPTIMUS, that selects online with minimal overhead the best MIPS solver for a
given input. Together, OPTIMUS and MAXIMUS outperform state-of-the-art MIPS
solvers by 3.2$\times$ on average, and up to 10.9$\times$, on widely studied
MIPS datasets.
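The BMM idea can be sketched in a few lines (a minimal NumPy illustration, not the paper's implementation): score the items block by block with one dense GEMM per cache-sized block, maintaining a running top-k per query.

```python
import numpy as np

def exact_mips_bmm(queries, items, k, block=1024):
    """Exact top-k maximum inner product search via blocked matrix multiply."""
    n_q = queries.shape[0]
    best_scores = np.full((n_q, k), -np.inf)
    best_ids = np.zeros((n_q, k), dtype=np.int64)
    for start in range(0, items.shape[0], block):
        chunk = items[start:start + block]
        scores = queries @ chunk.T                      # one dense GEMM per block
        # merge this block's scores into the running top-k
        merged = np.concatenate([best_scores, scores], axis=1)
        ids = np.concatenate(
            [best_ids,
             np.arange(start, start + chunk.shape[0])[None, :].repeat(n_q, 0)],
            axis=1)
        top = np.argpartition(-merged, k - 1, axis=1)[:, :k]
        rows = np.arange(n_q)[:, None]
        best_scores, best_ids = merged[rows, top], ids[rows, top]
    return best_ids, best_scores
```

Because each block is scored with a single matrix multiply, the inner loop runs at near-peak GEMM throughput, which is the source of the hardware efficiency discussed above.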
|
Quasinormal modes provide valuable information about the structure of
spacetime outside a black hole. There is also a conjectured relationship
between the highly damped quasinormal modes and the semi-classical spectrum of
the horizon area/entropy. In this paper, we show that for spacetimes
characterized by more than one scale, the "infinitely damped" modes in
principle probe the structure of spacetime outside the horizon at the shortest
length scales. We demonstrate this with the calculation of the highly damped
quasinormal modes of the non-singular, single horizon, quantum corrected black
hole derived in [14].
|
We consider a simple conceptual question with respect to Majorana zero modes
in semiconductor nanowires: Can the measured non-ideal values of the
zero-bias-conductance-peak in the tunneling experiments be used as a
characteristic to predict the underlying topological nature of the proximity
induced nanowire superconductivity? In particular, we define and calculate the
topological visibility, a variation of the topological invariant associated
with the scattering matrix of the system, together with the
zero-bias-conductance-peak heights in the tunneling measurements, in the
presence of dissipative broadening, using realistic nanowire parameters to
connect the topological invariants with the zero-bias tunneling conductance
values. This dissipative broadening is present in both (the existing) tunneling
measurements and also (any future) braiding experiments as an inevitable
consequence of a finite braiding time. The connection between the topological
visibility and the conductance allows us to obtain the visibility of realistic
braiding experiments in nanowires, and to conclude that the current
experimentally accessible systems with non-ideal zero bias conductance peaks
may indeed manifest (with rather low visibility) non-Abelian statistics for the
Majorana zero modes. In general, we find that large (small) superconducting gap
(Majorana peak splitting) is essential for the manifestation of the non-Abelian
braiding statistics, and in particular, a zero bias conductance value of around
half the ideal quantized Majorana value should be sufficient for the
manifestation of non-Abelian statistics in experimental nanowires.
|
The Hubbard and Su-Schrieffer-Heeger Hamiltonians (SSH) are iconic models for
understanding the qualitative effects of electron-electron and electron-phonon
interactions respectively. In the two-dimensional square lattice Hubbard model
at half filling, the on-site Coulomb repulsion, $U$, between up and down
electrons induces antiferromagnetic (AF) order and a Mott insulating phase. On
the other hand, for the SSH model, there is an AF phase when the
electron-phonon coupling $\lambda$ is less than a critical value $\lambda_c$
and a bond order wave when $\lambda > \lambda_c$. In this work, we perform
numerical studies on the square lattice optical Su-Schrieffer-Heeger-Hubbard
Hamiltonian (SSHH), which combines both interactions. We use the determinant
quantum Monte Carlo (DQMC) method which does not suffer from the fermionic sign
problem at half filling. We map out the phase diagram and find that it exhibits
a direct first-order transition between an antiferromagnetic phase and a
bond-ordered wave as $\lambda$ increases. The AF phase is characterized by two
different regions. At smaller $\lambda$ the behavior is similar to that of the
pure Hubbard model; the other region, while maintaining long range AF order,
exhibits larger kinetic energies and double occupancy, i.e. larger quantum
fluctuations, similar to the AF phase found in the pure SSH model.
|
Learning unsupervised node embeddings facilitates several downstream tasks
such as node classification and link prediction. A node embedding is universal
if it is designed to be used by and benefit various downstream tasks. This work
introduces PanRep, a graph neural network (GNN) model, for unsupervised
learning of universal node representations for heterogeneous graphs. PanRep
consists of a GNN encoder that obtains node embeddings and four decoders, each
capturing different topological and node-feature properties. Abiding by these
properties, the novel unsupervised framework learns universal embeddings
applicable to different downstream tasks. PanRep can be further fine-tuned to
account for possibly limited labels. In this operational setting PanRep is
considered as a pretrained model for extracting node embeddings of heterogeneous
graph data. PanRep outperforms all unsupervised and certain supervised methods
in node classification and link prediction, especially when the labeled data
for the supervised methods is scarce. PanRep-FT (with fine-tuning) outperforms
all other supervised approaches, which corroborates the merits of pretraining
models. Finally, we apply PanRep-FT for discovering novel drugs for Covid-19.
We showcase the advantage of universal embeddings in drug repurposing and
identify several drugs used in clinical trials as possible drug candidates.
|
The cosmic ray ionization rate (CRIR) is a key parameter in understanding the
physical and chemical processes in the interstellar medium. Cosmic rays are a
significant source of energy in star-forming regions, where they impact the
physical and chemical processes that drive the formation of stars. Previous
studies of the central molecular zone (CMZ) of the starburst galaxy NGC 253 have
found evidence for a high CRIR value; $10^3-10^6$ times the average cosmic ray
ionization rate within the Milky Way. This is a broad constraint and one goal
of this study is to determine this value with much higher precision. We exploit
ALMA observations towards the central molecular zone of NGC 253 to measure the
CRIR. We first demonstrate that the abundance ratio of H$_3$O$^+$ and SO is
strongly sensitive to the CRIR. We then combine chemical and radiative transfer
models with nested sampling to infer the gas properties and CRIR of several
star-forming regions in NGC 253 from the emission of their molecular
transitions. We find
that each of the four regions modelled has a CRIR in the range
$(1-80)\times10^{-14}$ s$^{-1}$ and that this result adequately fits the
abundances of other species that are believed to be sensitive to cosmic rays
including C$_2$H, HCO$^+$, HOC$^+$, and CO. From shock and PDR/XDR models, we
further find that neither UV/X-ray driven nor shock dominated chemistry are a
viable single alternative as none of these processes can adequately fit the
abundances of all of these species.
|
Simplified Template Cross Sections (STXS) have been adopted by the LHC
experiments as a common framework for Higgs measurements. Their purpose is to
reduce the theoretical uncertainties that are directly folded into the
measurements as much as possible, while at the same time allowing for the
combination of the measurements between different decay channels as well as
between experiments. We report the complete, revised definition of the STXS
kinematic bins (stage 1.1), which are to be used for the upcoming measurements
by the ATLAS and CMS experiments using the full LHC Run 2 datasets. The main
focus is on the three dominant Higgs production processes, namely gluon-fusion,
vector-boson fusion, and in association with a vector boson. We also comment
briefly on the treatment of other production modes.
|
Karyotyping is of importance for detecting chromosomal aberrations in human
disease. However, chromosomes easily appear curved in microscopic images, which
prevents cytogeneticists from analyzing chromosome types. To address this
issue, we propose a framework for chromosome straightening, which comprises a
preliminary processing algorithm and a generative model called masked
conditional variational autoencoders (MC-VAE). The processing method utilizes
patch rearrangement to address the difficulty in erasing low degrees of
curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE
further straightens the results by leveraging chromosome patches conditioned on
their curvatures to learn the mapping between banding patterns and conditions.
During model training, we apply a masking strategy with a high masking ratio
to train the MC-VAE while eliminating redundancy. This yields a non-trivial
reconstruction task, allowing the model to effectively preserve chromosome
banding patterns and structure details in the reconstructed results. Extensive
experiments on three public datasets with two stain styles show that our
framework surpasses the performance of state-of-the-art methods in retaining
banding patterns and structure details. Compared to using real-world bent
chromosomes, the use of high-quality straightened chromosomes generated by our
proposed method can improve the performance of various deep learning models for
chromosome classification by a large margin. Such a straightening approach has
the potential to be combined with other karyotyping systems to assist
cytogeneticists in chromosome analysis.
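The high-ratio masking strategy can be sketched as follows (patch size, ratio, and shapes are illustrative assumptions; the actual MC-VAE masks chromosome-patch inputs during training):

```python
import numpy as np

def mask_patches(image, patch=16, ratio=0.8, rng=None):
    """Zero out `ratio` of the non-overlapping patches of an image, forcing a
    reconstruction model to recover banding patterns from sparse context."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    masked = image.copy()
    coords = [(y, x) for y in range(0, h - patch + 1, patch)
                     for x in range(0, w - patch + 1, patch)]
    n_drop = int(len(coords) * ratio)
    for idx in rng.permutation(len(coords))[:n_drop]:
        y, x = coords[idx]
        masked[y:y + patch, x:x + patch] = 0   # drop this patch
    return masked
```

With a high ratio, most of the input is hidden, so reconstruction is non-trivial and the model cannot simply copy the input.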
|
We consider a finite region of a lattice of weakly interacting geodesic flows
on manifolds of negative curvature and we show that, when rescaling the
interactions and the time appropriately, the energies of the flows evolve
according to a nonlinear diffusion equation. This is a first step toward the
derivation of macroscopic equations from a Hamiltonian microscopic dynamics in
the case of weakly coupled systems.
|
An and/or tree is usually a binary plane tree, with internal nodes labelled
by logical connectives, and with leaves labelled by literals chosen in a fixed
set of k variables and their negations. In the present paper, we introduce the
first model of such Catalan trees, whose number of variables k_n is a function
of n, the size of the expressions. We describe the whole range of the
probability distributions depending on the function k_n, as soon as it tends
jointly with n to infinity. As a by-product we obtain a study of the
satisfiability problem in the context of Catalan trees.
Our study is mainly based on analytic combinatorics and extends Kozik's
pattern theory, first developed for the fixed-k Catalan tree model.
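For intuition, such random and/or trees can be sampled uniformly and evaluated by Monte Carlo (an illustrative experiment, far simpler than the analytic-combinatorics treatment above; by and/or duality the estimated probability of evaluating to true should be close to 1/2):

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def catalan(m):
    """Number of binary trees with m internal nodes."""
    if m == 0:
        return 1
    return sum(catalan(i) * catalan(m - 1 - i) for i in range(m))

def random_tree(n, k, rng):
    """Uniform binary and/or tree with n leaves over k variables:
    split sizes are drawn with Catalan weights, leaves are random literals."""
    if n == 1:
        v = rng.randrange(1, k + 1)
        return v if rng.random() < 0.5 else -v   # variable or its negation
    r = rng.randrange(catalan(n - 1))
    for i in range(1, n):
        w = catalan(i - 1) * catalan(n - i - 1)
        if r < w:
            break
        r -= w
    gate = rng.choice(("and", "or"))
    return (gate, random_tree(i, k, rng), random_tree(n - i, k, rng))

def evaluate(t, assignment):
    if isinstance(t, int):
        val = assignment[abs(t)]
        return val if t > 0 else not val
    gate, left, right = t
    lv, rv = evaluate(left, assignment), evaluate(right, assignment)
    return (lv and rv) if gate == "and" else (lv or rv)
```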
|
An optimal boundary control problem for the one-dimensional heat equation is
considered. The objective functional includes a standard quadratic terminal
observation, a Tikhonov regularization term with regularization parameter
$\nu$, and the $L^1$-norm of the control that accounts for sparsity. The
switching structure of the optimal control is discussed for $\nu \ge 0$. Under
natural assumptions, it is shown that the set of switching points of the
optimal control is countable with the final time as only possible accumulation
point. The convergence of switching points is investigated for $\nu \searrow
0$.
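A typical objective of the kind described above (the target $y_d$, domain $\Omega$, and sparsity weight $\beta$ are illustrative; the abstract does not fix the notation) is

```latex
\min_{u}\; J(u) \;=\; \frac{1}{2}\,\bigl\|y_u(T) - y_d\bigr\|_{L^2(\Omega)}^2
\;+\; \frac{\nu}{2}\,\|u\|_{L^2(0,T)}^2
\;+\; \beta\,\|u\|_{L^1(0,T)},
\qquad \nu \ge 0,\; \beta > 0,
```

where the $L^1$ term is what produces subintervals on which the optimal control vanishes identically, hence the switching structure.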
|
Model (BSM). ATLAS and CMS concentrate on B decays that can be registered by
a di-muon signature. B-hadrons decaying to J/psi(mu mu) will statistically
dominate B-physics analyses, allowing high-precision measurements, in
particular a test of BSM effects in the CP violation of Bs -> J/psi phi. In the
so-called rare B-decay sector, ATLAS and CMS will concentrate on a family of
semi-muonic exclusive channels, b -> s mu mu, and on the purely muonic decay
Bs -> mu mu. After three years of LHC running at a luminosity of a few times
10^33 cm^-2 s^-1 (corresponding to 30 fb^-1), each of these two experiments can
measure the Bs -> mu mu signal with 3 sigma significance, assuming the Standard
Model (SM) value for the decay probability.
|
We aim to bring a new perspective about some aspects of the current research
in Cosmology. We start with a brief introduction about the main developments of
the field in the last century; then we introduce an analogy that shall
elucidate the main difficulties that observational sciences involve, which
might be part of the issue related to some of the contemporary cosmological
problems. The analogy investigates how microscopic beings could ever discover
and understand gravitational phenomena.
|
We study the BRST renormalization of an alternative formulation of the
Yang-Mills theory, where the matrix-propagator of the gluon and the
complementary fields is diagonal. This procedure involves scalings as well as
non-linear mixings of the fields and sources. We show, in the Landau gauge,
that the BRST identities implement a recursive proof of renormalizability to
all orders.
|
In this note we make several observations concerning symplectic cobordisms.
Among other things we show that every contact 3-manifold has infinitely many
concave symplectic fillings and that all overtwisted contact 3-manifolds are
``symplectic cobordism equivalent.''
|
We present a proof of the algorithm for computing line bundle valued
cohomology classes over toric varieties conjectured by R.~Blumenhagen, B.~Jurke
and the authors (arXiv:1003.5217) and suggest a kind of Serre duality for
combinatorial Betti numbers that we observed when computing examples.
|
Let $f\colon M^{2n}\to\mathbb{R}^{2n+\ell}$, $n \geq 5$, denote a conformal
immersion into Euclidean space with codimension $\ell$ of a Kaehler manifold of
complex dimension $n$ and free of flat points. For codimensions $\ell=1,2$ we
show that such a submanifold can always be locally obtained in a rather simple
way, namely, from an isometric immersion of the Kaehler manifold $M^{2n}$ into
either $\mathbb{R}^{2n+1}$ or $\mathbb{R}^{2n+2}$, the latter being a class of
submanifolds already extensively studied.
|
The rational quantum algebraically integrable systems are non-trivial
generalizations of Laplacian operators to the case of elliptic operators with
variable coefficients. We study corresponding extensions of Laplacian growth
connected with algebraically integrable systems, describing viscous
free-boundary flows in non-homogeneous media. We introduce a class of planar
flows related to the application of Adler-Moser polynomials and construct
solutions for higher-dimensional cases, where the conformal mapping technique
is unavailable.
|
We consider prompt photon production at high-energy hadron colliders in
the framework of the k_T-factorization approach. The unintegrated quark and
gluon distributions in a proton are determined using the Kimber-Martin-Ryskin
prescription. A conservative error analysis is performed. We investigate both
inclusive prompt photon production and prompt photon production with an
associated muon. In the Standard Model such events arise mainly from the
Compton scattering process, where the final-state heavy (charm or bottom)
quark produces a muon. The theoretical results are compared with recent
experimental data taken by the D0 and CDF collaborations at the Fermilab
Tevatron. Our analysis also covers the azimuthal correlations between the
produced prompt photon and muon, which can provide important information
about non-collinear parton evolution in a proton.
Finally, we extrapolate the theoretical predictions to CERN LHC energies.
|
In this paper we argue for a paradigmatic shift from `reductionism' to
`togetherness'. In particular, we show how interaction between systems in
quantum theory naturally carries over to modelling how word meanings interact
in natural language. Since meaning in natural language, depending on the
subject domain, encompasses discussions within any scientific discipline, we
obtain a template for theories such as social interaction, animal behaviour,
and many others.
|
We obtain a new quantitative deformation lemma, and from it a new mountain
pass theorem. More precisely, the new mountain pass theorem is independent of
the functional value on the boundary of the mountain, which improves the
well-known results (\cite{AR,PS1,PS2,Qi,Wil}). Moreover, using our new mountain
pass theorem, we obtain new existence results for nontrivial periodic solutions
of some nonlinear second-order discrete systems, which greatly improves the
result in \cite{Z04}.
|
Normally, program execution spends most of its time in loops, so automated
test data generation devotes special attention to loops for better coverage.
Automated test data generation for programs having loops with a variable number
of iterations and variable-length arrays is a challenging problem, because the
number of paths may increase exponentially with the array size for some
programming constructs, like merge sort. We propose a method that finds
heuristics for different types of programming constructs with loops and arrays.
Linear search, bubble sort, merge sort, and matrix multiplication programs are
included in an attempt to highlight the differences in execution between a
single loop over a variable-length array and nested loops over one- and
two-dimensional arrays. We use two parameters/heuristics to predict the minimum
number of iterations required for generating automated test data: the longest
path level (kL) and the saturation level (kS). Our approach instruments the
source code at the elementary level and then applies random inputs until all
feasible paths, or all paths containing the longest paths, are collected;
duplicate paths are avoided by using a filter. Our test data are the random
inputs that cover each feasible path.
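The path-collection loop with a duplicate filter can be sketched as follows (bubble sort as the instrumented example; the kL and kS heuristics are omitted for brevity, and the fixed trial budget below is an illustrative stand-in for them):

```python
import random

def bubble_sort_instrumented(a):
    """Bubble sort with elementary-level instrumentation: the outcome of
    every comparison is recorded, so each execution yields a path signature."""
    path = []
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            taken = a[j] > a[j + 1]
            path.append(taken)
            if taken:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, tuple(path)

def generate_test_data(trials=20_000, max_len=4, seed=0):
    """Random test generation with a duplicate-path filter: keep one random
    input per distinct feasible path."""
    rng = random.Random(seed)
    suite = {}                      # path signature -> witness input
    for _ in range(trials):
        data = [rng.randrange(10) for _ in range(rng.randrange(1, max_len + 1))]
        _, sig = bubble_sort_instrumented(data)
        suite.setdefault(sig, data)
    return suite
```

Each stored input reproduces its own path, so the resulting suite covers every feasible comparison path the random search discovered, with duplicates filtered by the dictionary.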
|
Protein-protein interactions can be properly modeled as scale-free complex
networks, while the lethality of proteins has been correlated with the node
degrees, therefore defining a lethality-centrality rule. In this work we
revisit this relevant problem by focusing attention not on proteins as a whole,
but on their functional domains, which are ultimately responsible for their
binding potential. Four networks are considered: the original protein-protein
interaction network, its randomized version, and two domain networks assuming
different lethality hypotheses. By using formal statistical analysis, we show
that the correlation between connectivity and essentiality is higher for
domains than for proteins.
|
It is decidable for deterministic MSO definable graph-to-string or
graph-to-tree transducers whether they are equivalent on a context-free set of
graphs.
|
We show that the results of [BM97, DeB02b, Oka, Lus85, AA07, Tay16] imply a
positive answer to the question of Moeglin-Waldspurger on wave-front sets in
the case of depth zero cuspidal representations. Namely, we deduce that for
large enough residue characteristic, the Zariski closure of the wave-front set
of any depth zero irreducible cuspidal representation of any reductive group
over a non-Archimedean local field is an irreducible variety.
In more detail, we use [BM97, DeB02b, Oka] to reduce the statement to an
analogous statement for finite groups of Lie type, which is proven in [Lus85,
AA07, Tay16].
|
The goal of this article is to study closed connected sets of periodic
solutions, of autonomous second order Hamiltonian systems, emanating from
infinity. The main idea is to apply the degree for SO(2)-equivariant gradient
operators defined by the second author. Using the results due to Rabier we show
that we cannot apply the Leray-Schauder degree to prove the main results of
this article. It is worth pointing out that since we study connected sets of
solutions, we also cannot use the Conley index technique and the Morse theory.
|
We study N=2 supersymmetric four dimensional gauge theories, in a certain N=2
supergravity background, called Omega-background. The partition function of the
theory in the Omega-background can be calculated explicitly. We investigate
various representations for this partition function: a statistical sum over
random partitions, a partition function of the ensemble of random curves, a
free fermion correlator. These representations allow one to derive rigorously the
Seiberg-Witten geometry, the curves, the differentials, and the prepotential.
We study pure N=2 theory, as well as the theory with matter hypermultiplets in
the fundamental or adjoint representations, and the five dimensional theory
compactified on a circle.
|
Assuming that center vortices are the confining gauge field configurations,
we argue that in gauges that are sensitive to the confining center vortex
degrees of freedom, and where the latter lie on the Gribov horizon, the
corresponding ghost form factor is infrared divergent. Furthermore, this
infrared divergence disappears when center vortices are removed from the
Yang-Mills ensemble. On the other hand, for gauge conditions which are
insensitive to center vortex degrees of freedom, the ghost form factor is
infrared finite and does not change (qualitatively) when center vortices are
removed. Evidence for our observation is provided from lattice calculations.
|
A fast implementation of the quantum imaginary time evolution (QITE)
algorithm called Fast QITE is proposed. The algorithmic cost of QITE typically
scales exponentially with the number of particles it nontrivially acts on in
each Trotter step. In contrast, a Fast QITE implementation reduces this to only
a linear scaling. It is shown that this speedup leads to a quantum advantage
when sampling diagonal elements of a matrix exponential, which cannot be
achieved using the standard implementation of the QITE algorithm. Finally the
cost of implementing Fast QITE for finite temperature simulations is also
discussed.
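As a rough classical illustration (not the Fast QITE algorithm itself), the imaginary-time map that QITE approximates can be sketched with dense matrices; the 2-qubit Heisenberg Hamiltonian and step size below are illustrative choices:

```python
import numpy as np

# Classical sketch of the map QITE approximates: imaginary-time evolution
# |psi(tau)> ~ exp(-tau H)|psi(0)> drives any state with nonzero ground-state
# overlap toward the ground state. Hamiltonian and step size are illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)  # 2-qubit Heisenberg

# Build exp(-0.1 H) from the eigendecomposition (H is Hermitian).
evals, evecs = np.linalg.eigh(H)
step = evecs @ np.diag(np.exp(-0.1 * evals)) @ evecs.conj().T

psi = np.array([0, 1, 0, 0], dtype=complex)   # |01>, overlaps the ground state
for _ in range(100):                          # Trotter-like imaginary-time steps
    psi = step @ psi
    psi /= np.linalg.norm(psi)                # renormalize the non-unitary step

energy = float(np.real(psi.conj() @ H @ psi))
ground = float(evals.min())
print(energy, ground)  # both -3: the evolved state reaches the singlet
```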
|
Graph Neural Networks (GNNs) have shown great success in many applications
such as recommendation systems, molecular property prediction, traffic
prediction, etc. Recently, CPU-FPGA heterogeneous platforms have been used to
accelerate many applications by exploiting customizable data path and abundant
user-controllable on-chip memory resources of FPGAs. Yet, accelerating and
deploying GNN training on such platforms requires not only expertise in
hardware design but also substantial development efforts.
We propose HP-GNN, a novel framework that generates high throughput GNN
training implementations on a given CPU-FPGA platform that can benefit both
application developers and machine learning researchers. HP-GNN takes GNN
training algorithms and GNN models as inputs, and automatically performs
hardware mapping onto the target CPU-FPGA platform. HP-GNN consists of: (1)
data layout and internal representation that reduce the memory traffic and
random memory accesses; (2) optimized hardware templates that support various
GNN models; (3) a design space exploration engine for automatic hardware
mapping; (4) high-level application programming interfaces (APIs) that allow
users to specify GNN training with only a handful of lines of code. To evaluate
HP-GNN, we experiment with two well-known sampling-based GNN training
algorithms and two GNN models. For each training algorithm and model, HP-GNN
generates implementation on a state-of-the-art CPU-FPGA platform. Compared with
CPU-only and CPU-GPU platforms, experimental results show that the generated
implementations achieve $55.67\times$ and $2.17\times$ speedup on average,
respectively. Compared with the state-of-the-art GNN training implementations,
HP-GNN achieves up to $4.45\times$ speedup.
|
Solar flares and coronal mass ejections (CMEs), the most catastrophic
eruptions in our solar system, have been known to affect terrestrial
environments and infrastructure. However, because their triggering mechanism is
still not sufficiently understood, our capacity to predict the occurrence of
solar eruptions and to forecast space weather is substantially hindered. Even
though various models have been proposed to determine the onset of solar
eruptions, the types of magnetic structures capable of triggering these
eruptions are still unclear. In this study, we solved this problem by
systematically surveying the nonlinear dynamics caused by a wide variety of
magnetic structures by means of three-dimensional magnetohydrodynamic
simulations. As a result, we determined that two different types of small
magnetic structures favor the onset of solar eruptions. These structures, which
should appear near the magnetic polarity inversion line (PIL), include magnetic
fluxes reversed to the potential component or the nonpotential component of
the major field on the PIL. In addition, we analyzed two large flares, the X-class
flare on December 13, 2006 and the M-class flare on February 13, 2011, using
imaging data provided by the Hinode satellite, and we demonstrated that they
conform to the simulation predictions. These results suggest that forecasting
of solar eruptions is possible with sophisticated observation of a solar
magnetic field, although the lead time must be limited by the time scale of
changes in the small magnetic structures.
|
CoRoT-2 is one of the most unusual planetary systems known to date. Its host
star is exceptionally active, showing a pronounced, regular pattern of optical
variability caused by magnetic activity. The transiting hot Jupiter, CoRoT-2b,
shows one of the largest known radius anomalies. We analyze the properties and
activity of CoRoT-2A in the optical and X-ray regime by means of a high-quality
UVES spectrum and a 15 ks Chandra exposure both obtained during planetary
transits. The UVES data are analyzed using various complementary methods of
high-resolution stellar spectroscopy. We characterize the photosphere of the
host star by deriving accurate stellar parameters such as effective
temperature, surface gravity, and abundances. Signatures of stellar activity,
Li abundance, and interstellar absorption are investigated to provide
constraints on the age and distance of CoRoT-2. Furthermore, our UVES data
confirm the presence of a late-type stellar companion to CoRoT-2A that is
gravitationally bound to the system. The Chandra data provide a clear detection
of coronal X-ray emission from CoRoT-2A, for which we obtain an X-ray
luminosity of 1.9e29 erg/s. The potential stellar companion remains undetected
in X-rays. Our results indicate that the distance to the CoRoT-2 system is
approximately 270 pc, and the most likely age lies between 100 and 300 Ma. Our
X-ray observations show that the planet is immersed in an intense field of
high-energy radiation. Surprisingly, CoRoT-2A's likely coeval stellar
companion, which we find to be of late-K spectral type, remains X-ray dark.
Yet, as a potential third body in the system, the companion could account for
CoRoT-2b's slightly eccentric orbit.
|
We consider a class of parabolic nonlocal $1$-Laplacian equations
\begin{align*} u_t+(-\Delta)^s_1u=f \quad \text{ in }\Omega\times(0,T].
\end{align*} By employing the Rothe time-discretization method, we establish
the existence and uniqueness of weak solutions to the equation above. In
particular, in contrast to previous results for the local case, we infer that
the weak solution maintains $\frac{1}{2}$-H\"{o}lder continuity in time.
|
Self-assembly of soft materials attracts keen interest for patterning
applications owing to its ease and spontaneous behavior. We report the
fabrication of nanogrooves using sublimation and recondensation of liquid
crystal (LC) materials. First, well-aligned smectic LC structures are obtained
on the micron-scale topographic patterns of the microchannel; then the
sublimation and recondensation process directly produces nanogrooves having
sub-200-nm scale. The entire process can be completed in less than 30 min.
After it is replicated using an ultraviolet-curable polymer, our platform can
be used as an alignment layer to control other guest LC materials.
|
The recently introduced consistent lattice Boltzmann model with energy
conservation [S. Ansumali, I.V. Karlin, Phys. Rev. Lett. 95, 260605 (2005)] is
extended to the simulation of thermal flows on standard lattices. The
two-dimensional thermal model on the standard square lattice with nine
velocities is developed and validated in the thermal Couette and
Rayleigh-B\'{e}nard natural convection problems.
|
We suggest an approach to use memristors (resistors with memory) in
programmable analog circuits. Our idea consists of a circuit design in which
low voltages are applied to memristors during their operation as analog circuit
elements and high voltages are used to program the memristors' states. This
way, as demonstrated in recent experiments, the state of the memristors does
not change appreciably during analog-mode operation. As an example of our
approach, we have built several programmable analog circuits demonstrating
memristor-based programming of threshold, gain and frequency.
|
Considering the vacuum as characterized by the presence of only the
gravitational field, we show that the vacuum energy density of the de Sitter
space, in the realm of the teleparallel equivalent of general relativity, can
acquire arbitrarily high values. This feature is expected to hold for
realistic cosmological models and may provide a simple explanation of the
cosmological constant problem.
|
Community Question Answering (CQA) in different domains is growing at a large
scale because of the availability of several platforms and huge shareable
information among users. With the rapid growth of such online platforms, a
massive amount of archived data makes it difficult for moderators to retrieve
possible duplicates for a new question and identify and confirm existing
question pairs as duplicates at the right time. This problem is even more
critical in CQAs for large software systems such as askubuntu, where
moderators must be experts to recognize a question as a duplicate. The prime
challenge on such platforms is that these expert moderators are usually
extremely busy, making their time extraordinarily expensive. To facilitate the
task of the moderators, in this
work, we have tackled two significant issues for the askubuntu CQA platform:
(1) retrieval of duplicate questions given a new question and (2) duplicate
question confirmation time prediction. In the first task, we focus on
retrieving duplicate questions from a question pool for a particular newly
posted question. In the second task, we solve a regression problem to rank a
pair of questions that could potentially take a long time to get confirmed as
duplicates. For duplicate question retrieval, we propose a Siamese neural
network based approach by exploiting both text and network-based features,
which outperforms several state-of-the-art baseline techniques. Our method
outperforms DupPredictor and DUPE by 5% and 7%, respectively. For duplicate
confirmation time prediction, we use both standard machine learning models and
neural networks, along with text- and graph-based features. We obtain
Spearman's rank correlations of 0.20 and 0.213 (statistically significant) for
text- and graph-based features, respectively.
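A minimal sketch of the retrieval step, assuming a trained Siamese encoder has already mapped questions to vectors (the 3-dimensional embeddings below are made-up stand-ins, not the paper's features):

```python
import numpy as np

# Retrieval sketch: a trained Siamese encoder maps questions to vectors;
# candidates from the pool are ranked by cosine similarity to the new question.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

new_q = np.array([0.9, 0.1, 0.3])          # embedding of the new question
pool = {
    "q1": np.array([0.8, 0.2, 0.3]),       # near-duplicate of the new question
    "q2": np.array([0.0, 1.0, 0.1]),
    "q3": np.array([0.1, 0.0, 1.0]),
}
ranked = sorted(pool, key=lambda k: cosine(new_q, pool[k]), reverse=True)
print(ranked[0])  # q1 is retrieved as the top duplicate candidate
```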
|
Health insurance plays a significant role in ensuring quality healthcare. In
response to the escalating costs of the medical industry, the demand for health
insurance is soaring. Additionally, those with health insurance are more likely
to receive preventative care than those without health insurance. However, from
granting health insurance to delivering services to insured individuals, the
health insurance industry faces numerous obstacles. Fraudulent actions, false
claims, a lack of transparency and data privacy, reliance on human effort, and
dishonesty on the part of consumers, healthcare professionals, or even the
insurer itself are the most common and important hurdles to success. Given these
constraints, this chapter briefly covers the most immediate concerns in the
health insurance industry and provides insight into how blockchain technology
integration can contribute to resolving these issues. This chapter finishes by
highlighting existing limitations as well as potential future directions.
|
Many real-world applications are characterized by a number of conflicting
performance measures. As optimizing in a multi-objective setting leads to a set
of non-dominated solutions, a preference function is required for selecting the
solution with the appropriate trade-off between the objectives. The question
is: how good do estimates of these objectives have to be in order for the
solution maximizing the preference function to remain unchanged? In this paper,
we introduce the concept of preference radius to characterize the robustness of
the preference function and provide guidelines for controlling the quality of
estimations in the multi-objective setting. More specifically, we provide a
general formulation of multi-objective optimization under the bandits setting.
We show how the preference radius relates to the optimal gap and we use this
concept to provide a theoretical analysis of the Thompson sampling algorithm
from multivariate normal priors. We finally present experiments to support the
theoretical results and highlight the fact that one cannot simply scalarize
multi-objective problems into single-objective problems.
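A sketch of Thompson sampling with multivariate normal posteriors under a linear preference function; the arm means, preference weights, and noise model are invented for illustration and simpler than the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
# Thompson sampling sketch: a multivariate normal posterior over each arm's
# two-objective mean reward; a linear preference function w scalarizes the
# sampled objective vectors. All problem parameters here are invented.
true_means = np.array([[0.2, 0.8], [0.7, 0.6], [0.5, 0.1]])  # arms x objectives
w = np.array([0.5, 0.5])                                     # preference weights
n_arms, n_obj = true_means.shape
counts = np.zeros(n_arms)
sums = np.zeros((n_arms, n_obj))

for t in range(2000):
    # Posterior N(empirical mean, I/n) per arm (unit-noise, flat-prior model).
    samples = np.array([
        rng.multivariate_normal(sums[a] / counts[a], np.eye(n_obj) / counts[a])
        if counts[a] > 0
        else rng.multivariate_normal(np.zeros(n_obj), np.eye(n_obj))
        for a in range(n_arms)
    ])
    a = int(np.argmax(samples @ w))           # maximize sampled preference value
    counts[a] += 1
    sums[a] += true_means[a] + 0.1 * rng.standard_normal(n_obj)

print(int(np.argmax(counts)))  # the arm maximizing w . mean dominates the pulls
```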
|
Fine-tuning is becoming widely used for leveraging the power of pre-trained
foundation models in new downstream tasks. While there are many successes of
fine-tuning on various tasks, recent studies have observed challenges in the
generalization of fine-tuned models to unseen distributions (i.e.,
out-of-distribution; OOD). To improve OOD generalization, some previous studies
identify the limitations of fine-tuning data and regulate fine-tuning to
preserve the general representation learned from pre-training data. However,
potential limitations in the pre-training data and models are often ignored. In
this paper, we contend that overly relying on the pre-trained representation
may hinder fine-tuning from learning essential representations for downstream
tasks and thus hurt its OOD generalization. It can be especially catastrophic
when new tasks are from different (sub)domains compared to pre-training data.
To address the issues in both pre-training and fine-tuning data, we propose a
novel generalizable fine-tuning method LEVI (Layer-wise Ensemble of different
VIews), where the pre-trained model is adaptively ensembled layer-wise with a
small task-specific model, while preserving its efficiency. By combining two
complementary models, LEVI effectively suppresses problematic features in both
the fine-tuning data and pre-trained model and preserves useful features for
new tasks. Broad experiments with large language and vision models show that
LEVI greatly improves fine-tuning generalization via emphasizing different
views from fine-tuning data and pre-trained features.
|
We study an attractive $\phi^4$ interaction using Tamm-Dancoff truncation
with light-front coordinates in $3+1$ dimensions. The truncated theory requires
a coupling-constant renormalization; we compute its $\beta$ function
non-perturbatively, show that the model is asymptotically free, and find the
corresponding Callan-Symanzik equations. The model supports bound states; we
find the wave function for the ground state of the two-particle sector. We also
give a bound for the $N$-particle ground state energy within a mean field
approximation, including the corresponding result for the case of $2+1$
dimensions where the model does not require renormalization.
|
The Cosmic Microwave Background can provide information regarding physics of
the very early universe, more specifically, of the matter-radiation
distribution of the inflationary era. Starting from the effective field theory
of inflation, we use the Goldstone action to calculate the three point
correlation function for the Goldstone field, whose results can be directly
applied to the field describing the curvature perturbations around a de Sitter
solution for the inflationary era. We then use the data from the recent Planck
mission for the parameters $f_{NL}^{equil}$ and $f_{NL}^{orthog}$ which
parametrize the size and shape of non-Gaussianities generated in single field
models of inflation. Using these known values, we calculate the parameters
relevant to our analysis, $f_{NL}^{\dot{\pi}^3}$, $f_{NL}^{\dot{\pi}(\partial
_i \pi)^2}$ and the speed of sound $c_s$ which parametrize the
non-Gaussianities arising from two different kinds of generalized interactions
of the scalar field in question.
|
When the World Wide Web was first conceived as a way to facilitate the
sharing of scientific information at CERN (the European Center for Nuclear
Research), few could have imagined the role it would come to play in the
following decades. Since then, the increasing ubiquity of Internet access and
the frequency with which people interact with it raise the possibility of using
the Web to better observe, understand, and monitor several aspects of human
social behavior. Web sites with large numbers of frequently returning users are
ideal for this task. If these sites belong to companies or universities, their
usage patterns can furnish information about the working habits of entire
populations. In this work, we analyze the properly anonymized logs detailing
the access history to Emory University's Web site. Emory is a medium-sized
university located in Atlanta, Georgia. We find interesting structure in the
activity patterns of the domain and study in a systematic way the main forces
behind the dynamics of the traffic. In particular, we show that both linear
preferential linking and priority based queuing are essential ingredients to
understand the way users navigate the Web.
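Linear preferential linking can be illustrated with a minimal growth model (this toy simulation is ours, not the paper's analysis):

```python
import random

random.seed(1)
# Toy linear preferential-attachment model: each new page links to an existing
# page with probability proportional to (in-degree + 1). The multiset `targets`
# holds one copy of a node per unit of attachment weight.
targets = [0]
degree = {0: 0}
for new in range(1, 5000):
    t = random.choice(targets)      # linear preferential choice of link target
    degree[t] += 1
    degree[new] = 0
    targets.extend([t, new])        # t gains a copy; new enters with weight 1

print(max(degree.values()))  # a few hubs accumulate a large share of the links
```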
|
In this paper, we consider the binary hypothesis testing problem with two
observers. There are two possible states of nature (or hypotheses).
Observations are collected by two observers. The observations are statistically
related to the true state of nature. Given the observations, the objective of
both observers is to find out what is the true state of nature. We present four
different approaches to address the problem. In the first (centralized)
approach, the observations collected by both observers are sent to a central
coordinator where hypothesis testing is performed. In the second approach, each
observer performs hypothesis testing based on locally collected observations.
Then they exchange binary information to arrive at a consensus. In the third
approach, each observer constructs an aggregated probability space based on the
observations collected by it and the decision it receives from the alternate
observer and performs hypothesis testing in the new probability space. In this
approach, too, the observers exchange binary information to arrive at a
consensus. In the fourth approach, if the observations collected by the
observers are independent conditioned on the hypothesis, we show that the
construction of the aggregated sample space can be skipped. In this case, the
observers exchange real-valued information to achieve consensus. Given the
same fixed, sufficiently large number of samples n, we show that if the
observations collected by the observers are independent conditioned on the
hypothesis, then the minimum probability that the two observers agree and are
both wrong in the decentralized (second) approach is upper bounded by the
minimum probability of error achieved in the centralized (first) approach.
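For the conditionally independent case, the centralized (first) approach can be sketched as a fused log-likelihood-ratio test; the Gaussian observation model below is an illustrative assumption, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(7)
# Centralized sketch: when the two observers' samples are independent given
# the hypothesis, the fused log-likelihood ratio is the sum of the individual
# ones. Model: H0 has mean 0, H1 has mean 1, unit variance at both observers.
def llr(z, mu0=0.0, mu1=1.0):
    # log f1(z) - log f0(z) for unit-variance Gaussians
    return (z - mu0) ** 2 / 2 - (z - mu1) ** 2 / 2

n = 500
truth = rng.integers(0, 2, size=n)            # true hypothesis for each trial
obs1 = truth + rng.standard_normal(n)         # observer 1's samples
obs2 = truth + rng.standard_normal(n)         # observer 2's samples
decide_h1 = (llr(obs1) + llr(obs2)) > 0       # fused MAP test, equal priors
error = float(np.mean(decide_h1 != truth.astype(bool)))
print(error)  # near Q(1/sqrt(2)) ~ 0.24, the centralized error rate
```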
|
Given a positive integer $n\ge 2$, let $D(n)$ denote the smallest positive
integer $m$ such that $a^3+a(1\le a\le n)$ are pairwise distinct modulo $m^2$.
A conjecture of Z.-W. Sun states that $D(n)=3^k$, where $3^k$ is the least
power of $3$ no less than $\sqrt{n}$. The purpose of this paper is to confirm
this conjecture.
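The statement is easy to check numerically for small n (a brute-force sketch, not the paper's proof):

```python
# Brute-force check of the conjecture for small n: D(n) is the least m such
# that a^3 + a, for 1 <= a <= n, are pairwise distinct modulo m^2, and it
# should equal the least power of 3 that is at least sqrt(n).
def D(n):
    m = 1
    while True:
        if len({(a**3 + a) % (m * m) for a in range(1, n + 1)}) == n:
            return m
        m += 1

def least_power_of_3_at_least_sqrt(n):
    p = 1
    while p * p < n:
        p *= 3
    return p

for n in range(2, 101):
    assert D(n) == least_power_of_3_at_least_sqrt(n)
print("conjecture verified for 2 <= n <= 100")
```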
|
Estimating the eigenvalue or energy gap of a Hamiltonian H is vital for
studying quantum many-body systems. Particularly, many of the problems in
quantum chemistry, condensed matter physics, and nuclear physics investigate
the energy gap between two eigenstates. Hence, how to efficiently solve the
energy gap becomes an important motive for researching new quantum algorithms.
In this work, we propose a hybrid non-variational quantum algorithm that uses
the Monte Carlo method and real-time Hamiltonian simulation to evaluate the
energy gap of a general quantum many-body system. Compared to conventional
approaches, our algorithm does not require controlled real-time evolution, thus
making its implementation much more experimentally friendly. Since our algorithm
is non-variational, it is also free from the "barren plateaus" problem. To
verify the efficiency of our algorithm, we conduct numerical simulations for
the Heisenberg model and molecule systems on a classical emulator.
|
Semantic segmentation using deep neural networks has been widely explored to
generate high-level contextual information for autonomous vehicles. To acquire
a complete $180^\circ$ semantic understanding of the forward surroundings, we
propose to stitch semantic images from multiple cameras with varying
orientations. However, previously trained semantic segmentation models showed
unacceptable performance after significant changes to the camera orientations
and the lighting conditions. To avoid time-consuming hand labeling, we explore
and evaluate the use of data augmentation techniques, specifically skew and
gamma correction, from a practical real-world standpoint to extend the existing
model and provide more robust performance. The presented experimental results
have shown significant improvements with varying illumination and camera
perspective changes.
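Gamma correction as an augmentation can be sketched in a few lines; the pixel values and gamma settings below are illustrative:

```python
import numpy as np

# Gamma correction as a lighting augmentation: out = in**gamma on [0, 1]
# intensities; gamma > 1 simulates darker scenes, gamma < 1 brighter ones.
def gamma_correct(img, gamma):
    scaled = np.clip(img.astype(np.float64) / 255.0, 0.0, 1.0)
    return (scaled ** gamma * 255.0).astype(np.uint8)

img = np.full((2, 2), 128, dtype=np.uint8)   # mid-gray test image
dark = gamma_correct(img, 2.0)               # darkened copy
bright = gamma_correct(img, 0.5)             # brightened copy
print(dark[0, 0], bright[0, 0])              # 64 180
```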
|
The main result implies that a proper convex subset of an irreducible higher
rank symmetric space cannot have Zariski dense stabilizer.
|
Measured response functions and low photon yield spectra of silicon
photomultipliers (SiPM) were compared to multi-photoelectron pulse-height
distributions generated by a Monte Carlo model. Characteristic parameters for
SiPM were derived. The devices were irradiated with 14 MeV electrons at the
Mainz microtron MAMI. It is shown that the first noticeable damage consists of
an increase in the rate of dark pulses and the loss of uniformity in the pixel
gains. Higher radiation doses also reduced the photon detection efficiency. The
results are especially relevant for applications of SiPM in fibre detectors at
high luminosity experiments.
|
We further investigate a class of time-reversal-invariant two-band s-wave
topological superconductors introduced in Phys. Rev. Lett. 108, 036803 (2012).
We show how, in the presence of time-reversal symmetry, Z_2 invariants that
distinguish between trivial and non-trivial quantum phases can be constructed
by considering only one of the Kramers sectors into which the Hamiltonian
decouples. We find that the main features identified in our original 2D
setting remain qualitatively unchanged in 1D and 3D, with non-trivial
topological superconducting phases supporting an odd number of Kramers' pairs
of helical Majorana modes on each boundary, as long as the required $\pi$ phase
difference between gaps is maintained. We also analyze the consequences of
time-reversal symmetry-breaking either due to the presence of an applied or
impurity magnetic field or to a deviation from the intended phase matching
between the superconducting gaps. We demonstrate how the relevant notion of
topological invariance must be modified when time-reversal symmetry is broken,
and how both the persistence of gapless Majorana modes and their robustness
properties depend in general upon the way in which the original Hamiltonian is
perturbed. Interestingly, a topological quantum phase transition between
helical and chiral superconducting phases can be induced by suitably tuning a
Zeeman field in conjunction with a phase mismatch between the gaps. Recent
experiments in doped semiconducting crystals, of potential relevance to the
proposed model, and possible candidate material realizations in superconductors
with $s_\pm$ pairing symmetry are discussed.
|
When creating an outfit, style is a criterion in selecting each fashion item.
This means that style can be regarded as a feature of the overall outfit.
However, in various previous studies on outfit generation, there have been few
methods focusing on global information obtained from an outfit. To address this
deficiency, we have incorporated an unsupervised style extraction module into a
model to learn outfits. Using the style information of an outfit as a whole,
the proposed model succeeded in generating outfits more flexibly without
requiring additional information. Moreover, the style information extracted by
the proposed model is easy to interpret. The proposed model was evaluated on
two human-generated outfit datasets. In a fashion item prediction task (missing
prediction task), the proposed model outperformed a baseline method. In a style
extraction task, the proposed model extracted some easily distinguishable
styles. In an outfit generation task, the proposed model generated an outfit
while controlling its styles. This capability allows us to generate fashionable
outfits according to various preferences.
|
Robust point cloud classification is crucial for real-world applications, as
consumer-type 3D sensors often yield partial and noisy data, degraded by
various artifacts. In this work we propose a general ensemble framework, based
on partial point cloud sampling. Each ensemble member is exposed to only
partial input data. Three sampling strategies are used jointly, two local ones,
based on patches and curves, and a global one of random sampling. We
demonstrate the robustness of our method to various local and global
degradations. We show that our framework improves the robustness of top
classification networks by a large margin. Our experimental setting uses
the recently introduced ModelNet-C database by Ren et al. [24], where we reach
SOTA both on unaugmented and on augmented data. Our unaugmented mean Corruption
Error (mCE) is 0.64 (current SOTA is 0.86) and 0.50 for augmented data (current
SOTA is 0.57). We analyze and explain these remarkable results through
diversity analysis. Our code is available at:
https://github.com/yossilevii100/EPiC
|
This paper presents two new challenges for the Telco ecosystem transformation
in the era of cloud-native microservice-based architectures. (1)
Development-for-Operations (Dev-for-Operations) impacts not only the overall
workflow for deploying a Platform as a Service (PaaS) in an open foundry
environment, but also the Telco business as well as operational models to
achieve an economy of scope and an economy of scale. (2) For that purpose, we
construct an integrative platform business model in the form of a Multi-Sided
Platform (MSP) for building Telco PaaSes. The proposed MSP based architecture
enables a multi-organizational ecosystem with increased automation
possibilities for Telco-grade service creation and operation. The paper
describes how Dev-for-Operations and the MSP lift constraints and offer an
effective way for next-generation PaaS building, while mutually reinforcing
each other in the Next Generation Platform as a Service (NGPaaS) framework.
|
We analyze a single-electron transistor composed of two semi-infinite
one-dimensional quantum wires and a relatively short segment between them. We
describe each wire section by a Luttinger model, and treat tunneling events in
the sequential approximation when the system's dynamics can be described by a
master equation. We show that the steady state occupation probabilities in the
strongly interacting regime depend only on the energies of the states and
follow a universal form that depends on the source-drain voltage and the
interaction strength.
|
We present average performance results for dynamical inference problems in
large networks, where a set of nodes is hidden while the time trajectories of
the others are observed. Examples of this scenario can occur in signal
transduction and gene regulation networks. We focus on the linear stochastic
dynamics of continuous variables interacting via random Gaussian couplings of
generic symmetry. We analyze the inference error, given by the variance of the
posterior distribution over hidden paths, in the thermodynamic limit and as a
function of the system parameters and the ratio {\alpha} between the number of
hidden and observed nodes. By applying Kalman filter recursions we find that
the posterior dynamics is governed by an "effective" drift that incorporates
the effect of the observations. We present two approaches for characterizing
the posterior variance that allow us to tackle, respectively, equilibrium and
nonequilibrium dynamics. The first appeals to Random Matrix Theory and reveals
average spectral properties of the inference error and typical posterior
relaxation times; the second is based on dynamical functionals and yields the
inference error as the solution of an algebraic equation.
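The "effective drift" picture can be illustrated with the scalar Kalman filter recursion, whose posterior variance (the inference error) settles to a steady state; the parameters below are illustrative and one-dimensional, unlike the paper's network setting:

```python
import numpy as np

rng = np.random.default_rng(3)
# Scalar Kalman filter sketch: hidden dynamics x_{t+1} = a x_t + noise,
# observations y_t = x_t + noise. The posterior mean follows an "effective"
# drift pulled toward the data, and the posterior variance p (the inference
# error) obeys a deterministic recursion that settles to a steady state.
a, q, r = 0.9, 0.1, 0.5          # drift, process and observation noise variances
x, m, p = 0.0, 0.0, 1.0          # true state, posterior mean, posterior variance
variances = []
for t in range(200):
    x = a * x + rng.normal(scale=np.sqrt(q))      # hidden dynamics
    y = x + rng.normal(scale=np.sqrt(r))          # noisy observation
    m_pred, p_pred = a * m, a * a * p + q         # predict
    k = p_pred / (p_pred + r)                     # Kalman gain
    m = m_pred + k * (y - m_pred)                 # data-corrected drift
    p = (1 - k) * p_pred                          # variance update (data-free)
    variances.append(p)

print(variances[-1])  # steady-state posterior variance (about 0.156 here)
```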
|
The Heisenberg spin chain is a canonical integrable model. As such, it
features stable ballistically propagating quasiparticles, but spin transport is
sub-ballistic at any nonzero temperature: an initially localized spin
fluctuation spreads in time $t$ to a width $t^{2/3}$. This exponent, as well as
the functional form of the dynamical spin correlation function, suggest that
spin transport is in the Kardar-Parisi-Zhang (KPZ) universality class. However,
the full counting statistics of magnetization is manifestly incompatible with
KPZ scaling. A simple two-mode hydrodynamic description, derivable from
microscopic principles, captures both the KPZ scaling of the correlation
function and the coarse features of the full counting statistics, but remains
to be numerically validated. These results generalize to any integrable spin
chain invariant under a continuous nonabelian symmetry, and are surprisingly
robust against moderately strong integrability-breaking perturbations that
respect the nonabelian symmetry.
|
We observed resistance drift in the 125 K - 300 K temperature range in melt
quenched amorphous Ge2Sb2Te5 line-cells with length x width x thickness = ~500
nm x ~100 nm x ~ 50 nm. Drift coefficients measured using small voltage sweeps
appear to decrease from 0.12 +/- 0.029 at 300 K to 0.075 +/- 0.006 at 125 K.
The current-voltage characteristics of the amorphized cells, measured in the 85
K - 300 K range using high-voltage sweeps (0 to ~25 V), show a combination of a
linear, low-field exponential and high-field exponential conduction mechanisms,
all of which are strong functions of temperature. The very first high-voltage
sweep after amorphization (with electric fields up to ~70% of the breakdown
field) shows clear hysteresis in the current-voltage characteristics due to
accelerated drift, while the consecutive sweeps show stable characteristics.
Stabilization was achieved with a 50 nA compliance current (current densities
~10^4 A/cm^2), preventing appreciable self-heating in the cells. The observed
acceleration and stoppage of the resistance drift with the application of high
electric fields is attributed to changes in the electrostatic potential profile
within amorphous Ge2Sb2Te5 due to trapped charges, reducing tunneling current.
Stable current-voltage characteristics are used to extract carrier activation
energies for the conduction mechanisms in the 85 K - 300 K temperature range. The
carrier activation energy associated with linear current-voltage response is
extracted to be 331 +/- 5 meV in the 200 - 300 K range, while carrier activation
energies of 233 +/- 2 meV and 109 +/- 5 meV are extracted in the 85 K to 300 K
range for the mechanisms that give exponential current-voltage responses.
|
The many-body entanglement between two finite (size-$d$) disjoint vacuum
regions of non-interacting lattice scalar field theory in one spatial dimension
-- a $(d_A \times d_B)_{\rm mixed}$ Gaussian continuous variable system -- is
locally transformed into a tensor-product "core" of $(1_A \times 1_B)_{\rm
mixed}$ entangled pairs. Accessible entanglement within these core pairs
exhibits an exponential hierarchy, and as such identifies the structure of
dominant region modes from which vacuum entanglement could be extracted into a
spatially separated pair of quantum detectors. Beyond the core, remaining modes
of the "halo" are determined to be AB-separable in isolation, as well as
separable from the core. However, state preparation protocols that distribute
entanglement in the form of $(1_A \times 1_B)_{\rm mixed}$ core pairs are found
to require additional entanglement in the halo that is obscured by classical
correlations. This inaccessible (bound) halo entanglement is found to mirror
the accessible entanglement, but with a step behavior as the continuum is
approached. It remains possible that alternate initialization protocols that do
not utilize the exponential hierarchy of core-pair entanglement may require
less inaccessible entanglement. Entanglement consolidation is expected to
persist in higher dimensions and may aid classical and quantum simulations of
asymptotically free gauge field theories, such as quantum chromodynamics.
|
In the field of autonomous vehicles (AVs), accurately discerning commander
intent and executing linguistic commands within a visual context presents a
significant challenge. This paper introduces a sophisticated encoder-decoder
framework developed to address visual grounding in AVs. Our Context-Aware
Visual Grounding (CAVG) model is an advanced system that integrates core
encoders (Text, Image, Context, and Cross-Modal) with a Multimodal decoder. This
integration enables the CAVG model to adeptly capture contextual semantics and
to learn human emotional features, augmented by state-of-the-art Large Language
Models (LLMs) including GPT-4. The architecture of CAVG is reinforced by the
implementation of multi-head cross-modal attention mechanisms and a
Region-Specific Dynamic (RSD) layer for attention modulation. This
architectural design enables the model to efficiently process and interpret a
range of cross-modal inputs, yielding a comprehensive understanding of the
correlation between verbal commands and corresponding visual scenes. Empirical
evaluations on the Talk2Car dataset, a real-world benchmark, demonstrate that
CAVG establishes new standards in prediction accuracy and operational
efficiency. Notably, the model exhibits exceptional performance even with
limited training data, ranging from 50% to 75% of the full dataset. This
feature highlights its effectiveness and potential for deployment in practical
AV applications. Moreover, CAVG has shown remarkable robustness and
adaptability in challenging scenarios, including long-text command
interpretation, low-light conditions, ambiguous command contexts, inclement
weather conditions, and densely populated urban environments. The code for the
proposed model is available at our Github.
|
In this article we propose a novel approach for adapting speaker embeddings
to new domains based on adversarial training of neural networks. We apply our
embeddings to the task of text-independent speaker verification, a challenging,
real-world problem in biometric security. We further the development of
end-to-end speaker embedding models by combining a novel 1-dimensional,
self-attentive residual network, an angular margin loss function, and an
adversarial training strategy. Our model is able to learn extremely compact,
64-dimensional speaker embeddings that deliver competitive performance on a
number of popular datasets using simple cosine distance scoring. On the
NIST-SRE 2016 task we are able to beat a strong i-vector baseline, while on the
Speakers in the Wild task our model was able to outperform both i-vector and
x-vector baselines, showing an absolute improvement of 2.19% over the latter.
Additionally, we show that the integration of adversarial training consistently
leads to a significant improvement over an unadapted model.
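The cosine distance scoring mentioned above is simple enough to sketch directly. The short vectors below are toy stand-ins for the 64-dimensional embeddings, not outputs of the actual model:

```python
import math

def cosine_score(u, v):
    """Cosine similarity between two speaker embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional stand-ins for 64-dimensional embeddings.
enroll = [0.1, 0.9, -0.2, 0.4]
test_same = [0.12, 0.85, -0.25, 0.38]   # similar direction: high score
test_diff = [-0.7, 0.1, 0.6, -0.3]      # different direction: low score

print(cosine_score(enroll, test_same) > cosine_score(enroll, test_diff))  # True
```

Verification decisions then reduce to thresholding this score.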
|
In an effort to increase the capabilities of SLAM systems and produce
object-level representations, the community increasingly investigates the
imposition of higher-level priors into the estimation process. One such example
is given by employing object detectors to load and register full CAD models.
Our work extends this idea to environments with unknown objects and imposes
object priors by employing modern class-specific neural networks to generate
complete model geometry proposals. The difficulty of using such predictions in
a real SLAM scenario is that the prediction performance depends on the
view-point and measurement quality, with even small changes of the input data
sometimes leading to a large variability in the network output. We propose a
discrete selection strategy that finds the best among multiple proposals from
different registered views by reinforcing the agreement with the online depth
measurements. The result is an effective object-level RGBD SLAM system that
produces compact, high-fidelity, and dense 3D maps with semantic annotations.
It outperforms traditional fusion strategies in terms of map completeness and
resilience against degrading measurement quality.
|
The KEK 8-GeV electron / 3.5-GeV positron linac has been operated with very
different beam specifications for the downstream rings KEKB, PF and PF-AR. For
reliable operation among these beam modes, intelligent beam switching and beam
feedback systems have been developed and used since its commissioning.
A software panel is used to choose one of four beam modes and a switching
sequence is executed in about two minutes. Most items in a sequence are simple
operations followed by failure recoveries. The magnet standardization part
consumes most of the time. The sequence can be re-arranged easily by
accelerator operators. Linac beam modes are switched about fifty times a day
using this software.
In order to stabilize linac beam energy and orbits, as well as some
accelerator equipment, about thirty software beam feedback loops have been
installed. They have been utilized routinely in all beam modes and have
improved the beam quality. Since the software interfaces are standardized, it
is easy to add new feedback loops simply by defining monitors and actuators.
|
Unmanned aerial vehicles (UAVs) can be deployed to monitor very large areas
without the need for network infrastructure. UAVs communicate and exchange
information with each other during flight. However, such communication poses
security challenges due to the network's dynamic topology. To solve
these challenges, the proposed method uses two phases to counter malicious UAV
attacks. In the first phase, we applied a number of rules and principles to
detect malicious UAVs. In this phase, we try to identify and remove malicious
UAVs according to the behavior of UAVs in the network in order to prevent
sending fake information to the investigating UAVs. In the second phase, a
mobile agent based on a three-step negotiation process is used to eliminate
malicious UAVs. The mobile agent of each UAV identifies reliable neighbors
through this negotiation process and informs them so that they do not listen
to the traffic generated by the malicious UAVs. The NS-3 simulator was used to demonstrate the
efficiency of the SAUAV method. The proposed method is more efficient than
CST-UAS, CS-AVN, HVCR, and BSUM-based methods in detection rate, false positive
rate, false negative rate, packet delivery rate, and residual energy.
|
In this paper we use the Klazar-Marcus-Tardos method to prove that if a
hereditary property of partitions P has super-exponential speed, then for every
k-permutation pi, P contains the partition of [2k] with parts {i, pi(i) + k},
where 1 <= i <= k. We also prove a similar jump, from exponential to factorial,
in the possible speeds of monotone properties of ordered graphs, and of
hereditary properties of ordered graphs not containing large complete, or
complete bipartite ordered graphs.
Our results generalize the Stanley-Wilf Conjecture on the number of
n-permutations avoiding a fixed permutation, which was recently proved by the
combined results of Klazar and of Marcus and Tardos. Our main results follow
from a generalization to ordered hypergraphs of the theorem of Marcus and
Tardos.
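The partition named in the statement is easy to construct explicitly. A small sketch following the abstract's definition (the function name and one-indexed encoding are ours):

```python
def permutation_partition(pi):
    """Partition of [2k] with parts {i, pi(i) + k} for a k-permutation pi.

    pi is given one-indexed, e.g. pi = [2, 3, 1] means pi(1)=2, pi(2)=3, pi(3)=1.
    """
    k = len(pi)
    return [{i, pi[i - 1] + k} for i in range(1, k + 1)]

parts = permutation_partition([2, 3, 1])
print(parts)  # [{1, 5}, {2, 6}, {3, 4}]
```

Each part pairs a point of [k] with a point of [k+1, 2k], so the partition encodes the permutation's pattern.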
|
Spatially resolved relative phase measurement of two adjacent 1D Bose gases
is enabled by matter-wave interference upon free expansion. However,
longitudinal dynamics is typically ignored in the analysis of experimental
data. We provide an analytical formula showing a correction to the readout of
the relative phase due to longitudinal expansion and mixing with the common
phase. We numerically assess the error propagation to the estimation of the
gases' physical quantities such as correlation functions and temperature. Our
work characterizes the reliability and robustness of interferometric
measurements, directing us to the improvement of existing phase extraction
methods necessary to observe new physical phenomena in cold-atomic quantum
simulators.
|
We determine two improvement coefficients which are relevant to cancel
mass-dependent cutoff effects in correlation functions with operator insertions
of the non-singlet local QCD vector current. This determination is based on
degenerate three-flavor QCD simulations of non-perturbatively O(a) improved
Wilson fermions with tree-level improved gauge action. Employing a very robust
strategy that has been pioneered in the quenched approximation leads to an
accurate estimate of a counterterm cancelling dynamical quark cutoff effects
linear in the trace of the quark mass matrix. To our knowledge this is the
first time that such an effect has been determined systematically with large
significance.
|
The LAT instrument, onboard the Fermi satellite, in its first three months of
operation detected more than 100 blazars at more than the 10 sigma level. This
is already a great improvement with respect to its predecessor, the instrument
EGRET onboard the Compton Gamma Ray Observatory. Observationally, the new
detections follow and confirm the so-called blazar sequence, relating the
bolometric observed non-thermal luminosity to the overall shape of the spectral
energy distribution. We have studied the general physical properties of all
these bright Fermi blazars, and found that their jets are matter dominated,
carrying a large total power that correlates with the luminosity of their
accretion disks. We suggest that the division of blazars into the two
subclasses of broad line emitting objects (Flat Spectrum Radio Quasars) and
line-less BL Lacs is a consequence of a rather drastic change of the accretion
mode, becoming radiatively inefficient below a critical value of the accretion
rate, corresponding to a disk luminosity of ~1 per cent of the Eddington one.
The reduction of the ionizing photons below this limit implies that the broad
line clouds, even if present, cannot produce significant broad lines, and the
object becomes a BL Lac.
|
We report the discovery of propylene (also called propene, CH_2CHCH_3) with
the IRAM 30-m radio telescope toward the dark cloud TMC-1. Propylene is the
most saturated hydrocarbon ever detected in space through radio astronomical
techniques. In spite of its weak dipole moment, 6 doublets (A and E species)
plus another line from the A species have been observed with main beam
temperatures above 20 mK. The derived total column density of propylene is
4x10^13 cm^-2, which corresponds to an abundance relative to H_2 of 4x10^-9,
i.e., comparable to that of other well known and abundant hydrocarbons in this
cloud, such as c-C_3H_2. Although this isomer of C_3H_6 could play an important
role in interstellar chemistry, it has been ignored by previous chemical models
of dark clouds, as there seems to be no obvious formation pathway in the gas phase.
The discovery of this species in a dark cloud indicates that a thorough
analysis of the completeness of gas phase chemistry has to be done.
|
We present measurements of $E_G$, a probe of gravity from large-scale
structure, using BOSS LOWZ and CMASS spectroscopic samples, with lensing
measurements from SDSS (galaxy lensing) and Planck (CMB lensing). Using SDSS
lensing and the BOSS LOWZ sample, we measure
$\langle{E_G}\rangle=0.40^{+0.05}_{-0.04}$ (stat), $\pm 0.026$ (systematic),
consistent with the predicted value from the Planck $\Lambda$CDM model,
$E_G=0.46$. Using CMB lensing, we measure
$\langle{E_G}\rangle=0.46^{+0.08}_{-0.09}$ (stat) for LOWZ (statistically
consistent with galaxy lensing and Planck predictions) and
$\langle{E_G}\rangle=0.39^{+0.05}_{-0.05}$ (stat) for the CMASS sample,
consistent with the Planck prediction of $E_G=0.40$ given the higher redshift
of the sample. We also study the redshift evolution of $E_G$ by splitting the
LOWZ sample into two samples based on redshift, with results being consistent
with model predictions. We estimate systematic uncertainties on the above
$\langle{E_G}\rangle$ numbers to be $\sim 6$% (when using galaxy-galaxy
lensing) or $\sim 3$% (when using CMB lensing), subdominant to the quoted
statistical errors. These systematic error budgets are dominated by
observational systematics in galaxy-galaxy lensing and by theoretical modeling
uncertainties, respectively. We do not estimate observational systematics in
galaxy-CMB lensing cross correlations.
|
A proof based on reduction to finite fields of Esnault-Viehweg's stronger
version of Sommese Vanishing Theorem for $k$-ample line bundles is given. This
result is used to give different proofs of isotriviality results of A. Parshin
and L. Migliorini.
|
Object detection requires substantial labeling effort for learning robust
models. Active learning can reduce this effort by intelligently selecting
relevant examples to be annotated. However, selecting these examples properly
without introducing a sampling bias with a negative impact on the
generalization performance is not straightforward and most active learning
techniques can not hold their promises on real-world benchmarks. In our
evaluation paper, we focus on active learning techniques without a
computational overhead besides inference, something we refer to as zero-cost
active learning. In particular, we show that a key ingredient is not only the
score on a bounding box level but also the technique used for aggregating the
scores for ranking images. We outline our experimental setup and also discuss
practical considerations when using active learning for object detection.
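The aggregation step highlighted above can be sketched as follows. The max/mean/sum choices and the toy scores are illustrative assumptions, not the paper's exact protocol:

```python
def rank_images(box_scores, aggregate="max"):
    """Rank images for annotation by aggregating per-box uncertainty scores.

    box_scores: dict mapping image id -> list of per-box scores (higher =
    more uncertain). The aggregation choice (max / mean / sum) is exactly
    the kind of design decision the evaluation above highlights.
    """
    agg = {
        "max": max,
        "sum": sum,
        "mean": lambda s: sum(s) / len(s),
    }[aggregate]
    scored = {img: agg(scores) for img, scores in box_scores.items() if scores}
    return sorted(scored, key=scored.get, reverse=True)

scores = {"img1": [0.9, 0.1], "img2": [0.6, 0.6, 0.6], "img3": [0.2]}
print(rank_images(scores, "max"))   # ['img1', 'img2', 'img3']
print(rank_images(scores, "mean"))  # ['img2', 'img1', 'img3']
```

Note how the same per-box scores yield different image rankings under different aggregations, which is the effect discussed above.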
|
We present a recent study of light charged Higgs boson ($H^-$) production at
the Large Hadron electron Collider (LHeC). We study the charged current
production process $e^- p \to \nu_e q H^-$, taking into account the decay
channels $H^- \to b\bar{c}$ and $H^-\to \tau \bar{\nu}_\tau$. We analyse the
process in the framework of the 2-Higgs Doublet Model Type-III (2HDM-III),
assuming a four-zero texture in the Yukawa matrices and a general Higgs
potential. We consider a variety of both reducible and irreducible backgrounds
for the signals of the $H^-$ state. We show that the detection of a light
charged Higgs boson is feasible, assuming for the LHeC standard energy and
luminosity conditions.
|
Electricity theft, the behavior that involves users conducting illegal
operations on electrical meters to avoid individual electricity bills, is a
common phenomenon in developing countries. Considering its harmfulness to
both power grids and the public, several methods have been developed
to automatically recognize electricity-theft behaviors. However, these methods,
which mainly assess users' electricity usage records, can be insufficient due
to the diversity of theft tactics and the irregularity of user behaviors.
In this paper, we propose to recognize electricity-theft behavior via
multi-source data. In addition to users' electricity usage records, we analyze
user behaviors by means of regional factors (non-technical loss) and climatic
factors (temperature) in the corresponding transformer area. By conducting
analytical experiments, we unearth several interesting patterns: for instance,
electricity thieves are likely to consume much more electrical power than
normal users, especially under extremely high or low temperatures. Motivated by
these empirical observations, we further design a novel hierarchical framework
for identifying electricity thieves. Experimental results based on a real-world
dataset demonstrate that our proposed model can achieve the best performance in
electricity-theft detection (e.g., at least +3.0% in terms of F0.5) compared
with several baselines. Last but not least, our work has been applied by the
State Grid of China and used to successfully catch electricity thieves in
Hangzhou with a precision of 15% (an improvement from the 0% attained by
several other models the company employed) during monthly on-site investigations.
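The F0.5 metric quoted above is the F-beta score with beta = 0.5, which weights precision more heavily than recall. A minimal sketch with illustrative numbers (not taken from the paper):

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score; beta = 0.5 weights precision over recall,
    matching the F0.5 metric quoted above."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative numbers: precision matters most when each flagged user
# triggers a costly on-site investigation.
print(round(f_beta(0.8, 0.5), 3))  # 0.714
```

Weighting precision suits this application, since false accusations of theft are more costly than missed thieves.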
|
This paper reports the results of a survey of Doppler shift oscillations
measured during solar flares in emission lines of S XV and Ca XIX with the
Bragg Crystal Spectrometer (BCS) on Yohkoh. Data from 20 flares that show
oscillatory behavior in the measured Doppler shifts have been fitted to
determine the properties of the oscillations. Results from both BCS channels
show average oscillation periods of 5.5 +/- 2.7 minutes, decay times of 5.0
+/-2.5 minutes, amplitudes of 17.1 +/- 17.0 km/s, and inferred displacements of
1070 +/- 1710 km, where the listed errors are the standard deviations of the
sample means. For some of the flares, intensity fluctuations are also observed.
These lag the Doppler shift oscillations by 1/4 period, strongly suggesting
that the oscillations are standing slow mode waves. The relationship between
the oscillation period and the decay time is consistent with conductive damping
of the oscillations.
|
In this article, we give an abstract characterization of the ``identity'' of
an operator space $V$ by looking at a quantity $n_{cb}(V,u)$ which is defined
in analogue to a well-known quantity in Banach space theory. More precisely, we
show that there exists a complete isometry from $V$ to some $\mathcal{L}(H)$
sending $u$ to ${\rm id}_H$ if and only if $n_{cb}(V,u) =1$. We will use it to
give an abstract characterization of operator systems. Moreover, we will show
that if $V$ is a unital operator space and $W$ is a proper complete $M$-ideal,
then $V/W$ is also a unital operator space. As a consequence, the quotient of an
operator system by a proper complete $M$-ideal is again an operator system. In
the appendix, we will also give an abstract characterisation of ``non-unital
operator systems'' using an idea arising from the definition of $n_{cb}(V,u)$.
|
We describe the reduction from four to two dimensions of the SU(2)
Donaldson-Witten theory and the dual twisted Seiberg-Witten theory, i.e. the
Abelian topological field theory corresponding to the Seiberg--Witten monopole
equations.
|
Integrated time-slice correlation functions $G(t)$ with weights $K(t)$
appear, e.g., in the moments method to determine $\alpha_s$ from heavy quark
correlators, in the muon g-2 determination or in the determination of smoothed
spectral functions.
For the (leading-order-)normalised moment $R_4$ of the pseudo-scalar
correlator we have non-perturbative results down to $a=10^{-2}$ fm and for
masses, $m$, of the order of the charm mass in the quenched approximation. A
significant bending of $R_4$ as a function of $a^2$ is observed at small
lattice spacings.
Starting from the Symanzik expansion of the integrand we derive the
asymptotic convergence of the integral at small lattice spacing in the free
theory and prove that the short distance part of the integral leads to
$\log(a)$-enhanced discretisation errors when $G(t)K(t) \sim\, t $ for small
$t$. In the interacting theory an unknown function $K(a\Lambda)$ appears.
For the $R_4$-case, we modify the observable to improve the short distance
behavior and demonstrate that it results in a very smooth continuum limit. The
strong coupling and the $\Lambda$-parameter can then be extracted. In general,
and in particular for $g-2$, the short distance part of the integral should be
determined by perturbation theory. The (dominating) rest can then be obtained
by the controlled continuum limit of the lattice computation.
|
We tackle in this paper an online network resource allocation problem with
job transfers. The network is composed of many servers connected by
communication links. The system operates in discrete time; at each time slot,
the administrator reserves resources at servers for future job requests, and a
cost is incurred for the reservations made. Then, after receptions, the jobs
may be transferred between the servers to best accommodate the demands. This
incurs an additional transport cost. Finally, if a job request cannot be
satisfied, there is a violation that engenders a cost to pay for the blocked
job. We propose a randomized online algorithm based on the exponentially
weighted method. We prove that our algorithm enjoys regret sub-linear in
time, which indicates that the algorithm adapts and learns from its
experience, becoming more efficient in its decision-making as it
accumulates more data. Moreover, we test the performance of our algorithm on
artificial data and compare it against a reinforcement learning method where we
show that our proposed method outperforms the latter.
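A minimal sketch of the exponentially weighted method underlying the algorithm above. The two-action cost sequence is a toy stand-in for the reservation/transfer/violation costs, and the learning rate eta is an assumed parameter:

```python
import math
import random

def exp_weighted_choice(weights):
    """Sample an action index with probability proportional to its weight."""
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def run(costs, eta=0.5):
    """Exponentially weighted forecaster over a cost sequence.

    costs[t][a] is the (bounded, in [0, 1]) cost of action a at slot t --
    here a stand-in for reservation + transfer + violation cost.
    """
    n = len(costs[0])
    weights = [1.0] * n
    total_cost = 0.0
    for round_costs in costs:
        a = exp_weighted_choice(weights)
        total_cost += round_costs[a]
        # Multiplicative update: actions with high cost lose weight.
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, round_costs)]
    return total_cost, weights

random.seed(0)
costs = [[0.9, 0.1]] * 50          # action 1 is consistently cheaper
total, weights = run(costs)
print(weights[1] > weights[0])     # True: weight concentrates on action 1
```

The sub-linear regret guarantee comes from this multiplicative-weights structure: the weight of consistently cheap actions grows exponentially relative to expensive ones.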
|
Hard X-ray and low-energy gamma-ray coded-aperture imaging instruments have
been highly successful as high-energy surveyors and transient-source
discoverers and trackers over the past decades. Albeit having relatively low
sensitivity as compared to focussing instruments, coded-aperture telescopes
still represent a very good choice for simultaneous, high cadence spectral
measurements of individual point sources in large source fields. Here I present
a review of the fundamentals of coded-aperture imaging instruments in
high-energy astrophysics. Emphasis is on fundamental aspects of the technique,
coded-mask instrument characteristics, and properties of the reconstructed
images.
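The core of coded-aperture image reconstruction can be illustrated in one dimension. This toy example uses the quadratic-residue mask mod 7 (a perfect difference set), so cross-correlation with a balanced decoding array recovers a point source exactly; real instruments use larger 2D patterns, but the principle is the same:

```python
# Toy 1D cyclic coded-aperture demo. The mask open positions are the
# quadratic residues mod 7 ({1, 2, 4}), a (7, 3, 1) difference set, so the
# decoded response is a flat background plus a peak at the source position.
N = 7
mask = [1 if i in (1, 2, 4) else 0 for i in range(N)]
decoder = [1 if m else -1 for m in mask]   # balanced decoding array

def observe(sky):
    """Detector counts: cyclic correlation of the sky with the mask."""
    return [sum(sky[s] * mask[(s + j) % N] for s in range(N)) for j in range(N)]

def reconstruct(detector):
    """Decoded sky: cyclic correlation of the detector with the decoder."""
    return [sum(detector[j] * decoder[(k + j) % N] for j in range(N))
            for k in range(N)]

sky = [0, 0, 0, 0, 5, 0, 0]        # single point source at position 4
image = reconstruct(observe(sky))
print(image.index(max(image)))     # 4
```

Because every nonzero cyclic difference of {1, 2, 4} occurs exactly once, the decoded image equals 4*sky[k] minus a constant, i.e. a clean peak over a flat floor.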
|
Let $\mathcal U_\hbar(\hat{\mathfrak g})$ be the untwisted quantum
affinization of a symmetrizable quantum Kac-Moody algebra $\mathcal
U_\hbar({\mathfrak g})$. For $\ell\in\mathbb C$, we construct an $\hbar$-adic
quantum vertex algebra $V_{\hat{\mathfrak g},\hbar}(\ell,0)$, and establish a
one-to-one correspondence between $\phi$-coordinated $V_{\hat{\mathfrak
g},\hbar}(\ell,0)$-modules and restricted $\mathcal U_\hbar(\hat{\mathfrak
g})$-modules of level $\ell$. Suppose that $\ell$ is a positive integer. We
construct a quotient $\hbar$-adic quantum vertex algebra $L_{\hat{\mathfrak
g},\hbar}(\ell,0)$ of $V_{\hat{\mathfrak g},\hbar}(\ell,0)$, and establish a
one-to-one correspondence between certain $\phi$-coordinated $L_{\hat{\mathfrak
g},\hbar}(\ell,0)$-modules and restricted integrable $\mathcal
U_\hbar(\hat{\mathfrak g})$-modules of level $\ell$. Suppose further that
${\mathfrak g}$ is of finite type. We prove that $L_{\hat{\mathfrak
g},\hbar}(\ell,0)/\hbar L_{\hat{\mathfrak g},\hbar}(\ell,0)$ is isomorphic to
the simple affine vertex algebra $L_{\hat{\mathfrak g}}(\ell,0)$.
|
A Post-Quantum Key Exchange is needed since the availability of quantum
computers that allegedly allow breaking classical algorithms like
Diffie-Hellman, El Gamal, RSA and others within a practical amount of time is
broadly assumed in the literature. Although our survey suggests that practical
quantum computers appear to be far less advanced than actually required to
break state-of-the-art key negotiation algorithms, it is of high scientific
interest to develop fundamentally immune key negotiation methods. A novel
polymorphic algorithm based on permutable functions and defined over the field
of real numbers is proposed. The proposed key exchange can operate with at
least four different strategies. The cryptosystem itself is highly variable
and, because rounding operations are inevitable and mandatory on a traditional
computer system, decoherence would bring the computation on a quantum system
to a premature end.
|
The handling of user preferences is becoming an increasingly important issue
in present-day information systems. Among others, preferences are used for
information filtering and extraction to reduce the volume of data presented to
the user. They are also used to keep track of user profiles and formulate
policies to improve and automate decision making.
We propose here a simple, logical framework for formulating preferences as
preference formulas. The framework does not impose any restrictions on the
preference relations and allows arbitrary operation and predicate signatures in
preference formulas. It also makes the composition of preference relations
straightforward. We propose a simple, natural embedding of preference formulas
into relational algebra (and SQL) through a single winnow operator
parameterized by a preference formula. The embedding makes possible the
formulation of complex preference queries, e.g., involving aggregation, by
piggybacking on existing SQL constructs. It also leads in a natural way to the
definition of further, preference-related concepts like ranking. Finally, we
present general algebraic laws governing the winnow operator and its
interaction with other relational algebra operators. The preconditions on the
applicability of the laws are captured by logical formulas. The laws provide a
formal foundation for the algebraic optimization of preference queries. We
demonstrate the usefulness of our approach through numerous examples.
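A minimal sketch of the winnow operator as a set operation. The tuple domain and the sample preference are our illustration, not the paper's exact formalism:

```python
def winnow(relation, prefers):
    """Winnow operator: keep the tuples not dominated under the preference
    relation. prefers(t1, t2) is True when t1 is preferred to t2 -- a
    stand-in for an arbitrary preference formula."""
    return [t for t in relation
            if not any(prefers(u, t) for u in relation)]

# Example preference: between cars of the same make, prefer the lower price.
cars = [("vw", 20000), ("vw", 18000), ("kia", 15000)]
best = winnow(cars, lambda t1, t2: t1[0] == t2[0] and t1[1] < t2[1])
print(best)  # [('vw', 18000), ('kia', 15000)]
```

In the SQL embedding described above, the same winnow can be expressed as an anti-join, e.g. `SELECT * FROM Car c1 WHERE NOT EXISTS (SELECT * FROM Car c2 WHERE c2.make = c1.make AND c2.price < c1.price)`.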
|
In this work we study implications of additional non-holomorphic soft
breaking terms (mu', A'_t, A'_b and A'_tau) on the MSSM phenomenology. By
respecting the existing bounds on the mass measurements and restrictions coming
from certain B-decays, we probe reactions of the MSSM to these additional soft
breaking terms. We provide examples in which some slightly excluded solutions
of the MSSM can be made to be consistent with the current experimental results.
In doing so, even after applying additional fine-tuning constraints, the
non-holomorphic terms are allowed to be as large as hundreds of GeV. Such
terms prove capable of enriching the phenomenology and of varying the MSSM
mass spectra considerably, with a reasonable amount of fine-tuning.
We observe that higgsinos, the lightest stop, the heavy Higgs boson states A,
H, charged H, sbottom and stau exhibit the highest sensitivity to the new
terms. We also show how the light stop can become nearly degenerate with the
top quark using these non-holomorphic terms.
|
$\cal T$-parity in the Little Higgs model could be violated by anomalies that
allow the lightest $\cal T$-odd $A_H$ to decay into $ZZ$ and $W^+W^-$. We
analyze these anomaly induced decays and the two-particle and the
three-particle decay modes of other heavy quarks and bosons in this model which
yield unique Large Hadron Collider (LHC) signals with fully reconstructable
events. $\cal T$-odd quarks in the Little Higgs model are nearly degenerate in
mass and they decay by almost identical processes; however, members of the
heavy Higgs triplet follow distinct decay modes. The branching fractions of
three-body decays increase with the global symmetry-breaking energy scale $f$
and are found to be at the level of a few percent in heavy quark decays while
they can reach up to 10% for heavy bosons.
|
Optical focusing at depths in tissue is the Holy Grail of biomedical optics
that may bring revolutionary advancement to the field. Wavefront shaping is a
widely accepted approach to solve this problem, but most implementations thus
far have only operated with stationary media, which are rare in practice. In
this article, we propose to apply a deep convolutional neural network named
ReFocusing-Optical-Transformation-Net (RFOTNet), which
is a Multi-input Single-output network, to tackle the grand challenge of light
focusing in nonstationary scattering media. As known, deep convolutional neural
networks are intrinsically powerful to solve inverse scattering problems
without complicated computation. Considering that the optical speckles of the
medium before and after moderate perturbations are correlated, an optical focus
can be
rapidly recovered based on fine-tuning of pre-trained neural networks,
significantly reducing the time and computational cost in refocusing. The
feasibility is validated experimentally in this work. The proposed deep
learning-empowered wavefront shaping framework has great potential for
facilitating optimal optical focusing and imaging in deep and dynamic tissue.
|
The $^{120}$Sn($p$,$p\alpha$)$^{116}$Cd reaction at 392 MeV is investigated
with the distorted wave impulse approximation (DWIA) framework. We show that
this reaction is very peripheral mainly because of the strong absorption of
$\alpha$ by the reaction residue $^{116}$Cd, and the $\alpha$-clustering on the
nuclear surface can be probed clearly. We also investigate the validity of the
so-called factorization approximation that has frequently been used so far. It
is shown that the kinematics of $\alpha$ in the nuclear interior region is
significantly affected by the distortion of $^{116}$Cd, but it has no effect on
the reaction observables because of the strong absorption in that region.
|
In this note we are interested in the rich geometry of the graph of a curve
$\gamma_{a,b}: [0,1] \rightarrow \mathbb{C}$ defined as \begin{equation*}
\gamma_{a,b}(t) = \exp(2\pi i a t) + \exp(2\pi i b t), \end{equation*} in which
$a,b$ are two different positive integers. It turns out that the sum of only
two exponentials already gives rise to intriguing graphs. We determine the
symmetry group and the points of self intersection of any such graph using only
elementary arguments and describe various interesting phenomena that arise in
the study of graphs of sums of more than two exponentials.
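The rotational symmetry of the graph follows directly from the definition: shifting $t$ by $1/(b-a)$ multiplies both exponentials by the same phase factor, rotating the whole graph by the fixed angle $2\pi a/(b-a)$. A short numerical check for the illustrative pair $a=2$, $b=5$:

```python
import cmath

def gamma(a, b, t):
    """gamma_{a,b}(t) = exp(2*pi*i*a*t) + exp(2*pi*i*b*t)."""
    return cmath.exp(2j * cmath.pi * a * t) + cmath.exp(2j * cmath.pi * b * t)

a, b = 2, 5
# The curve is closed: gamma(0) = gamma(1) = 2.
assert abs(gamma(a, b, 0) - 2) < 1e-12
# Shifting t by 1/(b - a) rotates the whole graph by the fixed angle
# 2*pi*a/(b - a), since both exponentials pick up the same phase factor.
omega = cmath.exp(2j * cmath.pi * a / (b - a))
for k in range(50):
    t = k / 50
    assert abs(gamma(a, b, t + 1 / (b - a)) - omega * gamma(a, b, t)) < 1e-9
print("rotational symmetry verified for a=2, b=5")
```

For this pair the rotation has order 3, so the graph exhibits three-fold rotational symmetry.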
|
A Molecular Dynamics (MD) study of static and dynamic properties of molten
and glassy germanium dioxide is presented. The interactions between the atoms
are modelled by the classical pair potential proposed by Oeffner and Elliott
(OE) [Oeffner R D and Elliott S R 1998, Phys. Rev. B, 58, 14791]. We compare
our results to experiments and previous simulations. In addition, an ab initio
method, the so-called Car-Parrinello Molecular Dynamics (CPMD), is applied to
check the accuracy of the structural properties, as obtained by the classical
MD simulations with the OE potential. As in a similar study for SiO2, the
structure predicted by CPMD is only slightly softer than that resulting from
the classical MD. In contrast to earlier simulations, both the static structure
and dynamic properties are in very good agreement with pertinent experimental
data. MD simulations with the OE potential are also used to study the
relaxation dynamics. As previously found for SiO2, for high temperatures the
dynamics of molten GeO2 is compatible with a description in terms of mode
coupling theory.
|