We observed the flare stars AD Leonis, Wolf 424, EQ Pegasi, EV Lacertae, and
UV Ceti for nearly 135 hours. These stars were observed between 63 and 83 MHz
using the interferometry mode of the Long Wavelength Array. Given that emission
from flare stars is typically circularly polarized, we used the condition that
any significant detection present in Stokes I must also be present in Stokes V
at the same time in order for us to consider it a possible flare. Following
this, we made one marginal flare detection for the star EQ Pegasi. This flare
had a flux density of 5.91 Jy in Stokes I and 5.13 Jy in Stokes V,
corresponding to a brightness temperature of $1.75 \times 10^{16}\,(r/r_*)^{-2}$ K.
|
Left atrium shape has been shown to be an independent predictor of recurrence
after atrial fibrillation (AF) ablation. Shape-based representation is
imperative to such an estimation process, where correspondence-based
representation offers the most flexibility and ease-of-computation for
population-level shape statistics. Nonetheless, population-level shape
representations in the form of image segmentation and correspondence models
derived from cardiac MRI require significant human resources with sufficient
anatomy-specific expertise. In this paper, we propose a machine learning
approach that uses deep networks to estimate AF recurrence by predicting shape
descriptors directly from MRI images, with NO image pre-processing involved. We
also propose a novel data augmentation scheme to effectively train a deep
network in a limited training data setting. We compare this new method of
estimating shape descriptors from images with the state-of-the-art
correspondence-based shape modeling that requires image segmentation and
correspondence optimization. Results show that the proposed method and the
current state-of-the-art produce statistically similar outcomes on AF
recurrence, eliminating the need for expensive pre-processing pipelines and
associated human labor.
|
Many recent studies of the motor system are divided into two distinct
approaches: Those that investigate how motor responses are encoded in cortical
neurons' firing rate dynamics and those that study the learning rules by which
mammals and songbirds develop reliable motor responses. Computationally, the
first approach is encapsulated by reservoir computing models, which can learn
intricate motor tasks and produce internal dynamics strikingly similar to those
of motor cortical neurons, but rely on biologically unrealistic learning rules.
The more realistic learning rules developed by the second approach are often
derived for simplified, discrete tasks in contrast to the intricate dynamics
that characterize real motor responses. We bridge these two approaches to
develop a biologically realistic learning rule for reservoir computing. Our
algorithm learns simulated motor tasks on which previous reservoir computing
algorithms fail, and reproduces experimental findings including those that
relate motor learning to Parkinson's disease and its treatment.
|
We realize a two-qubit sensor designed for achieving high spectral resolution
in quantum sensing experiments. Our sensor consists of an active "sensing
qubit" and a long-lived "memory qubit", implemented by the electronic and the
nitrogen-15 nuclear spins of a nitrogen-vacancy center in diamond,
respectively. Using state storage times of up to 45 ms, we demonstrate
spectroscopy of external ac signals with a line width of 19 Hz (~2.9 ppm) and
of carbon-13 nuclear magnetic resonance (NMR) signals with a line width of 190
Hz (~74 ppm). This represents an up to 100-fold improvement in spectral
resolution compared to measurements without nuclear memory.
|
Interleaving fins can significantly increase the heat transfer by increasing
the effective area per unit base area. The fins are separated uniformly by a
gap, which is filled with a flow medium to control the heat flux. The heat flux
between the plates depends strongly on the thermal conductivity of the fin
material and the medium between them, as well as on the dimensions. In earlier
studies, an empirical fitting method was used to determine the total
effectiveness of the fins; however, this required a complete characterization
of the fins for each new set of operating conditions. In this paper, a
simplified analytical model that still preserves the main physical traits of
the problem is developed. This model reveals the dimensionless parameter group,
containing both material properties and the fin geometry, that governs the heat
transfer. Rigorous testing of the model against a numerical finite element
model shows an accuracy within 2% over a large parameter space, varying both
dimensions and material properties. Lastly, the model is put to the test
against previously measured experimental data, and good agreement is obtained.
|
We first present an overview of our previous work on the dynamics of
subgroups of automorphism groups of compact complex surfaces, together with a
selection of open problems and new classification results. Then, we study two
families of examples in depth: the first one comes from folding plane
pentagons, and the second one is a family of groups introduced by J\'er\'emy
Blanc, which exhibits interesting new dynamical features.
|
MARTY is a C++ computer algebra system specialized for High Energy Physics
that can calculate amplitudes, squared amplitudes and Wilson coefficients in a
large variety of beyond the Standard Model scenarios up to the one-loop order.
It is fully independent of any other framework and its main development
guideline is generality, in order to be adapted easily to any type of model.
The calculations are fully automated from the Lagrangian up to the generation
of the C++ code evaluating the theoretical results (numerically, depending on
the model parameters). Once a phenomenological tool chain has been set up -
from the Lagrangian to the observable analysis - it can be used in a
model-independent way, leaving model building with MARTY as the only task to
be performed by physicists. Here we present the main steps to build a general new
physics model, namely gauge group, particle content, representations,
replacements, rotations and symmetry breaking, using the example of a 2 Higgs
Doublet Model. The sample codes that are shown for this example can be easily
generalized to any Beyond the Standard Model scenario written with MARTY.
|
In this work, which is based on our previously derived theoretical framework
[1], we apply the truncated Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY)
hierarchy for ultracold bosonic systems with a fixed number of particles to two
out-of-equilibrium scenarios, namely tunneling in a double-well potential and
an interaction quench in a harmonic trap. The efficient formulation of the
theory provided in [1] allows for going to large truncation orders such that
the impact of the truncation order on the accuracy of the results can be
systematically explored. While the short-time dynamics is found to be
excellently described with controllable accuracy, significant deviations occur
on longer time scales for a sufficiently strong interaction quench or in the
tunneling scenario. These deviations are accompanied by exponential-like
instabilities leading to unphysical results. The phenomenology of these
instabilities is investigated in detail and we show that the minimal-invasive
correction algorithm of the equation of motion as proposed in [1] can indeed
stabilize the BBGKY hierarchy truncated at the second order.
|
We consider asymptotic behavior of $e^{-itH}f$ for $N$-body Schr\"odinger
operator $H=H_0+\sum_{1\le i<j\le N}V_{ij}(x)$ with long- and short-range pair
potentials $V_{ij}(x)=V_{ij}^L(x)+V_{ij}^S(x)$ $(x\in {\mathbb R}^\nu)$ such
that $\partial_x^\alpha V_{ij}^L(x)=O(|x|^{-\delta-|\alpha|})$ and
$V_{ij}^S(x)=O(|x|^{-1-\delta})$ $(|x|\to\infty)$ with $\delta>0$. Introducing
the concept of scattering spaces which classify the initial states $f$
according to the asymptotic behavior of the evolution $e^{-itH}f$, we give a
generalized decomposition theorem of the continuous spectral subspace
${\mathcal{H}}_c(H)$ of $H$. The asymptotic completeness of wave operators is
proved for some long-range pair potentials with $\delta>1/2$ by using this
decomposition theorem under some assumption on subsystem eigenfunctions.
|
Density-functional theory within a Berry-phase formulation of the dynamical
polarization is used to determine the second-order susceptibility $\chi^{(2)}$
of lithium niobate (LiNbO$_3$). Defect trapped polarons and bipolarons are
found to strongly enhance the nonlinear susceptibility of the material, in
particular if localized at Nb$_\mathrm{V}$-V$_{\mathrm{Li}}$ defect pairs. This
is essentially a consequence of the polaronic excitation resulting in
relaxation-induced gap states. The occupation of these levels leads to strongly
enhanced $\chi^{(2)}$ coefficients and allows for the spatial and transient
modification of the second-harmonic generation of macroscopic samples.
|
In an age where the distribution of information is crucial, current file
sharing solutions suffer significant deficiencies. Popular systems such as
Google Drive, torrenting and IPFS suffer issues with compatibility,
accessibility and censorship. This paper introduces DistriFS, a novel
decentralized approach tailored for efficient and large-scale distribution of
files. The architecture of DistriFS is grounded in three foundational pillars:
scalability, security, and seamless integration. The proposed server
implementation harnesses the power of Golang, ensuring near-universal
interoperability across operating systems and hardware. Moreover, the use of
the HTTP protocol eliminates the need for additional software to access the
network, ensuring compatibility across all major operating systems and
facilitating effortless downloads. The design and efficacy of DistriFS
represent a significant advancement in the realm of file distribution systems,
offering a scalable and secure alternative to current centralized and
decentralized models.
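No API details are given in the abstract; as a generic illustration of why plain-HTTP distribution requires no special client software, the sketch below verifies a downloaded payload against a content hash using only the Python standard library. The payload and hash are hypothetical, and hash-based integrity checking is a common pattern in decentralized file sharing rather than a documented DistriFS feature.

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Check that downloaded bytes match their announced content hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical usage: after fetching a file over plain HTTP, a client can
# confirm its integrity with nothing beyond the standard library.
payload = b"hello"
digest = hashlib.sha256(payload).hexdigest()
assert verify_download(payload, digest)
```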
|
Doping in the chalcopyrite Cu(In,Ga)Se2 is determined by intrinsic point
defects. In the ternary CuInSe2, both N-type and P-type conductivity can be
obtained depending on the growth conditions and stoichiometry: N-type is
obtained when grown Cu-poor, Se-poor and alkali-free. CuGaSe2, on the other
hand, is found to be always a P-type semiconductor that seems to resist all
kinds of N-type doping no matter whether it comes from native defects or
extrinsic impurities. In this contribution, we study the N-to-P transition in
Cu-poor Cu(In,Ga)Se2 single crystals as a function of the gallium content. Our
results show that Cu(In,Ga)Se2 can still be grown as an N-type semiconductor
until the gallium content reaches the critical concentration of 15-19%, where
the N-to-P transition occurs. Furthermore, trends in the Seebeck coefficient
and activation energies extracted from temperature-dependent conductivity
measurements demonstrate that the carrier concentration drops by around two
orders of magnitude near the transition concentration. Our proposed model
explains the N-to-P transition based on the differences in formation energies
of donor and acceptor defects caused by the addition of gallium.
|
We present molecular dynamics simulations of a realistic model of an
ultrathin film of BaTiO$_3$ sandwiched between short-circuited electrodes to
determine and understand the effects of film thickness, epitaxial strain and the
nature of electrodes on its ferroelectric phase transitions as a function of
temperature. We determine a full epitaxial strain-temperature phase diagram in
the presence of perfect electrodes. Even with the vanishing depolarization
field, we find that ferroelectric phase transitions to states with in-plane and
out-of-plane components of polarization exhibit dependence on thickness; it
arises from the interactions of local dipoles with their electrostatic images
in the presence of electrodes. Secondly, in the presence of relatively bad
metal electrodes which only partly compensate the surface charges and
depolarization field, a qualitatively different phase with stripe-like domains
is stabilized at low temperature.
|
Annotating images with tags is useful for indexing and retrieving images.
However, many available annotation data include missing or inaccurate
annotations. In this paper, we propose an image annotation framework which
sequentially performs tag completion and refinement. We utilize the subspace
property of data via sparse subspace clustering for tag completion. Then we
propose a novel matrix completion model for tag refinement, integrating visual
correlation, semantic correlation, and the newly studied property of complex
errors. The proposed method outperforms the state-of-the-art approaches on
multiple benchmark datasets even when they contain certain levels of annotation
noise.
|
Nested case-control (NCC) is a sampling method widely used for developing and
evaluating risk models with expensive biomarkers on large prospective cohort
studies. The biomarker values are typically obtained on a sub-cohort,
consisting of all the events and a subset of non-events. However, when the
number of events is not small, it might not be affordable to measure the
biomarkers on all of them. Due to the costs and limited availability of
bio-specimens, only a subset of the events is selected into the sub-cohort as cases.
For these "untypical" NCC studies, we propose a new weighting method for the
inverse probability weighted (IPW) estimation. We also design a perturbation
method to estimate the variance of the IPW estimator with our new weights. It
accounts for between-subject correlations induced by the sampling processes for
both cases and controls through perturbing their sampling indicator variables,
and thus, captures all the variations. Furthermore, we demonstrate,
analytically and numerically, that when cases consist of only a subset of
events, our new weight produces more efficient IPW estimators than the weight
proposed in Samuelsen (1997) for a standard NCC design. We illustrate the
estimating procedure with a study that aims to evaluate a biomarker-based risk
prediction model using the Framingham cohort study.
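The core IPW idea the proposal builds on can be illustrated with a toy example: weight each sampled subject by the inverse of its inclusion probability so that an oversampled group does not bias the estimate. This is the generic Horvitz-Thompson/Hajek sketch (probabilities and data below are made up), not the paper's new NCC weights or its perturbation variance estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(10.0, 2.0, size=10_000)

# Unequal inclusion probabilities: "events" (here, large values) are
# oversampled, loosely analogous to a nested case-control design.
p_incl = np.where(population > 10.0, 0.8, 0.2)
in_sample = rng.random(population.size) < p_incl

# Hajek-type IPW estimate: weight each sampled subject by 1 / p_incl.
w = 1.0 / p_incl[in_sample]
ipw_mean = np.sum(w * population[in_sample]) / np.sum(w)

# Ignoring the sampling design biases the estimate toward the
# oversampled group.
naive_mean = population[in_sample].mean()
```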
|
The Hubble Space Telescope has obtained some of its well-deserved impact by
producing stunning three-color (RGB) pictures from three-band imaging data.
Here we produce a new RGB representation of the $I$, $V$, and $B$ bandpass
images of the HST Ultra Deep Field (HUDF). Our representation is based on
principles set forth elsewhere (Lupton et al 2004, PASP, 116, 133--137). The
principal difference between our RGB representation of the data and the more
traditional representation provided by the Space Telescope Science Institute is
that the (necessarily) nonlinear transformation between data values and RGB
values is done in a color-preserving (i.e., hue- and saturation-preserving)
way. For example, if one of the image pixel values saturates the dynamic range
of the RGB representation, all three of the R, G, and B values in the
representation are truncated such that the hue and saturation of the pixel are
the same as they would have been if the pixel had had lower flux but the same
astronomical color. This, in effect, makes the bright parts of the
representation an informative color map, not "whited out" as they are in
traditional representations. For the HUDF, this difference is seen best in the
centers of bright galaxies, which show significant detail not visible in the
more traditional representation.
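The truncation rule described above can be sketched in a few lines: when any channel exceeds the display limit, all three channels are rescaled by the same factor, so the pixel's channel ratios (hence hue and saturation) are preserved rather than "whited out". This is an illustrative reimplementation of the idea in Lupton et al. (2004), not the authors' actual pipeline.

```python
import numpy as np

def clip_preserving_color(rgb, limit=1.0):
    """Truncate over-range pixels by a common per-pixel factor.

    rgb: array of shape (..., 3). Where the brightest channel exceeds
    `limit`, all three channels are scaled down together, so the pixel
    keeps the ratios (the "astronomical color") it had at lower flux.
    """
    brightest = rgb.max(axis=-1, keepdims=True)
    scale = np.where(brightest > limit, limit / brightest, 1.0)
    return rgb * scale

# A pixel at twice the dynamic range keeps its 4:2:1 color ratios:
print(clip_preserving_color(np.array([2.0, 1.0, 0.5])))  # -> [1.   0.5  0.25]
```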
|
It is a well known fact that the classical (``Buscher'') transformations of
T-duality do receive, in general, quantum corrections. It is interesting to
check whether classical T-duality can be exact as a quantum symmetry. The
natural starting point is a $\sigma$-model with N=4 world sheet supersymmetry.
Remarkably, we find that (owing to the fact that N=4 models with torsion are
not off-shell finite as quantum theories), the T-duality transformations for
these models get in general quantum corrections, with the only known exception
of warped products of flat submanifolds or orbifolds thereof with other
geometries.
|
On the phase diagram of a system undergoing a continuous phase transition of
the second order, three lines (hyper-surfaces) converging at the critical
point feature prominently: the ordered and disordered phases in the
thermodynamic limit, and a third line, extending into the domain of finite-size
systems, defined by the bifurcation of the distribution of the order parameter.
Unlike critical phenomena in the thermodynamic limit, which lack a known
thermodynamic potential and are instead described by conformal symmetry, in
finite-size systems near the bifurcation line an explicit Hamiltonian for the
zero mode of the order parameter is found. It bears the impress of the
universality class of the critical point in terms of the two critical
exponents: $\beta$ and $\nu$.
|
We investigate the spatial patterns of the ground state of two interacting
Bose-Einstein condensates. We consider the general case of two different atomic
species (with different mass and in different hyperfine states) trapped in a
magnetic potential whose eigenaxes can be tilted with respect to the vertical
direction, giving rise to a non trivial gravitational sag. Despite the
complicated geometry, we show that within the Thomas-Fermi approximations and
upon appropriate coordinate transformations, the equations for the density
distributions can be put in a very simple form. Starting from these expressions
we give explicit rules to classify the different spatial topologies which can
be produced, and we discuss how the behavior of the system is influenced by the
inter-atomic scattering length. We also compare explicit examples with the full
numerical Gross-Pitaevskii calculation.
|
The exact Li\'enard-Wiechert solutions for the point charge in
arbitrary motion are shown to be null fields everywhere. These are used as a
basis to introduce extended electromagnetic field equations that have null
field solutions with fractional charges that combine with absolute confining
potentials.
|
This doctoral thesis investigates the long-term evolution of the strong
magnetic fields within isolated neutron stars (NSs), the most potent magnetic
objects in the universe. Their magnetic influence extends beyond their surface
to encompass the magnetised plasma in their vicinity. The overarching magnetic
configuration significantly impacts the observable characteristics of the
highly magnetised NSs, i.e., magnetars. Conversely, the internal magnetic field
undergoes prolonged evolution spanning thousands to millions of years,
intricately linked to thermal evolution. The diverse observable phenomena
associated with NSs underscore the complex 3D nature of their magnetic
structure, thereby requiring sophisticated numerical simulations. A central
focus of this thesis involves a thorough exploration of state-of-the-art 3D
coupled magneto-thermal evolution models. This marks a pioneering achievement
as we conduct, for the first time, the most realistic 3D simulations to date,
spanning the first million years of a NS's life using the newly developed code
MATINS, which adeptly accounts for both Ohmic dissipation and Hall drift within
the NS's crust. Our simulations incorporate highly accurate
temperature-dependent microphysical calculations and adopt the star's structure
based on a realistic equation of state. To address axial singularities in 3D
simulations, we employ the cubed-sphere coordinates. We also account for
corresponding relativistic factors in the evolution equations and use the
latest envelope model from existing literature, in addition to an initial
magnetic field structure derived from proton-NS dynamo simulations. Within this
framework, we quantitatively simulate the thermal luminosity, timing
properties, and magnetic field evolution, pushing the boundaries of numerical
modeling capabilities and enabling the performance of several astrophysical
studies within this thesis.
|
The author used synchrotron x-ray reflectivity to study the ion-size effect
for alkali ions (Na$^+$, K$^+$, Rb$^+$, and Cs$^+$), with densities as high as
$4 \times 10^{18}- 7 \times 10^{18}$ m$^{-2}$, suspended above the surface of a
colloidal solution of silica nanoparticles in the field generated by the
surface electric-double layer. According to the data, large alkali ions
preferentially accumulate at the sol's surface replacing smaller ions, a
finding that qualitatively agrees with the dependence of the Kharkats-Ulstrup
single-ion electrostatic free energy on the ion's radius.
|
Given a smooth, proper curve $C$ over a discretely valued field $k$, we equip
the $k$-vector space $H^{0}(C,\omega_{C/k})$ with a canonical discrete
valuation $v_{\mathrm{can}}$ which measures how canonical forms degenerate on
regular integral models of $C$. More precisely, $v_{\mathrm{can}}$ maps a
canonical form to the minimal value of its associated weight function, as
introduced by Musta\c{t}\u{a}--Nicaise. Our main result states that
$v_{\mathrm{can}}$ computes Edixhoven's jumps of the Jacobian of $C$ when
evaluated in an orthogonal basis. As a byproduct, we deduce a short proof for
the rationality of the jumps of Jacobians. We also show how $v_{\mathrm{can}}$
and the jumps can be computed efficiently for the class of $\Delta_v$-regular
curves introduced by Dokchitser.
|
In this paper we attempt to understand the role of tin and carbon in magnetic
interactions in Mn$_3$SnC. Mn$_3$SnC exhibits a time dependent magnetic
configuration and a complex magnetic ground state with both ferromagnetic and
antiferromagnetic orders. Such a magnetic state is attributed to the presence
of distorted Mn$_6$C octahedra with long and short Mn--Mn bonds. Our studies show
that C deficiency increases the tensile strain on the Mn$_6$C octahedra which
elongates Mn--Mn bonds and strengthens ferromagnetic interactions while Sn
deficiency tends to ease out the strain resulting in shorter as well as longer
Mn--Mn bond distances in comparison with stoichiometric Mn$_3$SnC. Such a
variation strengthens both ferromagnetic and antiferromagnetic interactions.
Thus the structural strain caused by both Sn and C is responsible for the
complex magnetic ground state of Mn$_3$SnC.
|
We present a simple and efficient method to simulate three-dimensional,
complex-shaped, interacting bodies. The particle shape is represented by
Minkowski operators. A time-continuous interaction between these bodies is
derived using simple concepts of computational geometry. The model (particles +
interactions) is efficient, accurate and easy to implement, and it complies
with the conservation laws of physics. 3D simulations of hopper flow show that
the non-convexity of the particles strongly affects jamming in granular
flow.
|
We show how a bi-directional grammar can be used to specify and verbalise
answer set programs in controlled natural language. We start from a program
specification in controlled natural language and translate this specification
automatically into an executable answer set program. The resulting answer set
program can be modified following certain naming conventions and the revised
version of the program can then be verbalised in the same subset of natural
language that was used as specification language. The bi-directional grammar is
parametrised for processing and generation, deals with referring expressions,
and exploits symmetries in the data structure of the grammar rules whenever
these grammar rules need to be duplicated. We demonstrate that verbalisation
requires sentence planning in order to aggregate similar structures with the
aim to improve the readability of the generated specification. Without
modifications, the generated specification is always semantically equivalent to
the original one; our bi-directional grammar is the first one that allows for
semantic round-tripping in the context of controlled natural language
processing. This paper is under consideration for acceptance in TPLP.
|
We investigate the relationship between two properties of quantum
transformations often studied in popular subtheories of quantum theory:
covariance of the Wigner representation of the theory and the existence of a
transformation noncontextual ontological model of the theory. We consider
subtheories of quantum theory specified by a set of states, measurements and
transformations, defined specifying a group of unitaries, that map between
states (and measurements) within the subtheory. We show that if there exists a
Wigner representation of the subtheory which is covariant under the group of
unitaries defining the set of transformations then the subtheory admits of a
transformation noncontextual ontological model. We provide some concrete
arguments to conjecture that the converse statement also holds provided that
the underlying ontological model is the one given by the Wigner representation.
In addition, we investigate the relationships of covariance and transformation
noncontextuality with the existence of a quasiprobability distribution for the
theory that represents the transformations as positivity preserving maps. We
conclude that covariance implies transformation noncontextuality, which implies
positivity preservation.
|
In recent years, deep learning-based models have significantly improved the
Natural Language Processing (NLP) tasks. Specifically, the Convolutional Neural
Network (CNN), initially used for computer vision, has shown remarkable
performance for text data in various NLP problems. Most of the existing
CNN-based models use 1-dimensional convolving filters (n-gram detectors), where
each filter specialises in extracting n-gram features of a particular input
word embedding. The input word embeddings, also called the sentence matrix, are
treated as a matrix where each row is a word vector. Thus, it allows the model
to apply one-dimensional convolution and extract only n-gram based features
from a sentence matrix. These features can be termed intra-sentence n-gram
features. To the best of our knowledge, all the existing CNN models are based
on the aforementioned concept. In this paper, we present a CNN-based
architecture TextConvoNet that not only extracts the intra-sentence n-gram
features but also captures the inter-sentence n-gram features in input text
data. It uses an alternative approach for input matrix representation and
applies a two-dimensional multi-scale convolutional operation on the input. To
evaluate the performance of TextConvoNet, we perform an experimental study on
five text classification datasets. The results are evaluated by using various
performance metrics. The experimental results show that the presented
TextConvoNet outperforms state-of-the-art machine learning and deep learning
models for text classification purposes.
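The 1D n-gram convolution described above (each filter spanning the full embedding dimension and n consecutive word rows) can be sketched in numpy; this is a toy illustration of the conventional scheme, not the TextConvoNet implementation.

```python
import numpy as np

def ngram_conv1d(sentence_matrix, filt):
    """Slide an (n, d) filter down an (L, d) sentence matrix.

    Each window covers n consecutive word vectors across the full
    embedding dimension d, yielding one n-gram feature per position.
    """
    n, d = filt.shape
    assert sentence_matrix.shape[1] == d
    L = sentence_matrix.shape[0]
    return np.array([np.sum(sentence_matrix[i:i + n] * filt)
                     for i in range(L - n + 1)])

# 5 words, 3-dim embeddings, one bigram filter -> 4 bigram features
feats = ngram_conv1d(np.ones((5, 3)), np.ones((2, 3)))
print(feats)  # -> [6. 6. 6. 6.]
```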
|
An algorithm is developed and tested for the problem posed by photometric
observations of the bulge of the Milky Way. The latter subtends a non-trivial
solid angle on the sky, and we show that this permits inversion of the
projected brightness distribution under the assumption that the bulge has three
orthogonal mirror planes of specified orientation. A serious error in the
assumed orientation of the mirror planes should be detectable.
|
A future detection of the Stochastic Gravitational Wave Background (SGWB)
with GW experiments is expected to open a new window on early universe
cosmology and on the astrophysics of compact objects. In this paper we study
SGWB anisotropies, that can offer new tools to discriminate between different
sources of GWs. In particular, the cosmological SGWB inherits its anisotropies
both (i) at its production and (ii) during its propagation through our
perturbed universe. Concerning (i), we show that it typically leads to
anisotropies with order one dependence on frequency. We then compute the effect
of (ii) through a Boltzmann approach, including contributions of both
large-scale scalar and tensor linearized perturbations. We also compute for the
first time the three-point function of the SGWB energy density, which can allow
one to extract information on GW non-Gaussianity with interferometers. Finally,
we include non-linear effects associated with long wavelength scalar
fluctuations, and compute the squeezed limit of the 3-point function for the
SGWB density contrast. Such a limit satisfies a consistency relation,
conceptually similar to what is found in the literature for the case of CMB
perturbations.
|
There is no such thing as a perfect dataset. In some datasets, deep neural
networks discover underlying heuristics that allow them to take shortcuts in
the learning process, resulting in poor generalization capability. Instead of
using standard cross-entropy, we explore whether a modulated version of
cross-entropy called focal loss can constrain the model so as not to use
heuristics and improve generalization performance. Our experiments in natural
language inference show that focal loss has a regularizing impact on the
learning process, increasing accuracy on out-of-distribution data, but slightly
decreasing performance on in-distribution data. Despite the improved
out-of-distribution performance, we demonstrate the shortcomings of focal loss
and its inferiority in comparison to the performance of methods such as
unbiased focal loss and self-debiasing ensembles.
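For reference, the focal loss referred to above modulates cross-entropy by a factor (1 - p_t)^gamma, down-weighting confident, well-classified examples; a minimal numpy sketch (binary case, illustrative only):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: -(1 - p_t)^gamma * log(p_t).

    p: predicted probability of class 1; y: true label in {0, 1}.
    gamma = 0 recovers standard cross-entropy.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# A confident, correct prediction is strongly down-weighted:
ce = focal_loss(0.9, 1, gamma=0.0)  # plain cross-entropy, ~0.105
fl = focal_loss(0.9, 1, gamma=2.0)  # ~0.00105, i.e. 100x smaller
```

With gamma = 2, an example already classified with probability 0.9 contributes a hundred times less loss than under cross-entropy, which is the regularizing mechanism the experiments probe.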
|
Many speech and music analysis and processing schemes rely on an estimate of
the fundamental frequency $f_0$ of periodic signal components. Most established
schemes apply rather unspecific signal models such as sinusoidal models to the
estimation problem, which may limit time resolution and estimation accuracy.
This study proposes a novel time-domain locked-loop algorithm with low
computational effort and low memory footprint for $f_0$ estimation. The loop
control signal is directly derived from the input time signal, using a harmonic
signal model. Theoretically, this allows for a noise-robust and rapid $f_0$
estimation for periodic signals of arbitrary waveform, and without the
requirement of a prior frequency analysis. Several simulations with short
signals employing different types of periodicity and with added wide-band noise
were performed to demonstrate and evaluate the basic properties of the proposed
algorithm. Depending on the Signal-to-Noise Ratio (SNR), the estimator was
found to converge within 3-4 signal repetitions, even at SNRs close to or below
0 dB. Furthermore, it was found to follow fundamental frequency sweeps with a
delay of less than one period and to track all tones of a three-tone musical
chord signal simultaneously. Quasi-periodic sounds with shifted harmonics as
well as signals with stochastic periodicity were robustly tracked. Mean and
standard deviation of the estimation error, i.e., the difference between true
and estimated $f_0$, were at or below 1 Hz in most cases. The results suggest
that the proposed algorithm may be applicable to low-delay speech and music
analysis and processing.
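The abstract does not spell out the locked-loop update itself; for contrast, here is a sketch of the kind of "established" block-wise estimator it aims to improve on in latency: an autocorrelation-based f0 estimate that needs a whole analysis window per estimate (all parameters below are illustrative).

```python
import numpy as np

def f0_autocorr(x, fs, f_min=50.0, f_max=400.0):
    """Estimate f0 as the lag maximizing the autocorrelation."""
    r = np.correlate(x, x, mode="full")[x.size - 1:]
    lag_lo = int(fs / f_max)  # shortest period considered
    lag_hi = int(fs / f_min)  # longest period considered
    best = lag_lo + np.argmax(r[lag_lo:lag_hi + 1])
    return fs / best

fs = 8000
t = np.arange(fs // 2) / fs            # half a second of signal
x = np.sin(2 * np.pi * 100.0 * t)      # 100 Hz tone
print(f0_autocorr(x, fs))              # -> 100.0
```

Unlike the proposed loop, which tracks the input sample by sample, this baseline must buffer an entire window before producing an estimate, illustrating the time-resolution limitation mentioned above.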
|
Two results on product of compact filters are shown to be the common
principle behind a surprisingly large number of theorems.
|
With the increasing demand for practical applications of Large Language
Models (LLMs), many attention-efficient models have been developed to balance
performance and computational cost. However, the adversarial robustness of
these models remains under-explored. In this work, we design a framework to
investigate the trade-off between efficiency, performance, and adversarial
robustness of LLMs by comparing three prominent models with varying levels of
complexity and efficiency -- Transformer++, Gated Linear Attention (GLA)
Transformer, and MatMul-Free LM -- utilizing the GLUE and AdvGLUE datasets. The
AdvGLUE dataset extends the GLUE dataset with adversarial samples designed to
challenge model robustness. Our results show that while the GLA Transformer and
MatMul-Free LM achieve slightly lower accuracy on GLUE tasks, they demonstrate
higher efficiency and either superior or comparable robustness on AdvGLUE
tasks compared to Transformer++ across different attack levels. These findings
highlight the potential of simplified architectures to achieve a compelling
balance between efficiency, performance, and adversarial robustness, offering
valuable insights for applications where resource constraints and resilience to
adversarial attacks are critical.
|
We prove that every right-angled Coxeter group (RACG) is profinitely rigid
amongst all Coxeter groups. On the other hand we exhibit RACGs which have
infinite profinite genus amongst all finitely generated residually finite
groups. We also establish profinite rigidity results for graph products of
finite groups. Along the way we prove that the Higman-Thompson groups $V_{n}$
are generated by $4$ involutions, generalising a classical result of Higman for
Thompson's group $V$.
|
SuperNEMO is a next-generation double beta decay experiment based on the
successful tracking plus calorimetry design approach of the NEMO3 experiment
currently running in the Laboratoire Souterrain de Modane (LSM). SuperNEMO can
study a range of isotopes; the baseline isotopes are 82Se and possibly 150Nd.
The total isotope mass will be 100-200 kg. A sensitivity to a neutrinoless double
beta decay half-life greater than 10^26 years can be reached, which gives access
to Majorana neutrino masses of 50-100 meV. One of the main challenges of the
SuperNEMO R&D is the development of the calorimeter with an unprecedented
energy resolution of 4% FWHM at 3 MeV (Qbb value of 82Se).
|
Type Ia Supernovae (SNe Ia) form an observationally uniform class of stellar
explosions, in that more luminous objects have smaller decline-rates. This
one-parameter behavior allows SNe Ia to be calibrated as cosmological `standard
candles', and led to the discovery of an accelerating Universe. Recent
investigations, however, have revealed that the true nature of SNe Ia is more
complicated. Theoretically, it has been suggested that the initial
thermonuclear sparks are ignited at an offset from the centre of the
white-dwarf (WD) progenitor, possibly as a result of convection before the
explosion. Observationally, the diversity seen in the spectral evolution of SNe
Ia beyond the luminosity decline-rate relation is an unresolved issue. Here we
report that the spectral diversity is a consequence of random directions from
which an asymmetric explosion is viewed. Our findings suggest that the spectral
evolution diversity is no longer a concern in using SNe Ia as cosmological
standard candles. Furthermore, this indicates that ignition at an offset from
the centre of the white dwarf is a generic feature of SNe Ia.
|
The emergence of heterogeneous decentralized networks without a central
controller, such as device-to-device communication systems, has created the
need for new problem frameworks to design and analyze the performance of such
networks. As a key step towards such an analysis for general networks, this
paper examines the strategic behavior of \emph{receivers} in a Gaussian
broadcast channel (BC) and \emph{transmitters} in a multiple access channel
(MAC) with sum power constraints (sum power MAC) using the framework of
non-cooperative game theory. These signaling scenarios are modeled as
generalized Nash equilibrium problems (GNEPs) with jointly convex and coupled
constraints and the existence and uniqueness of equilibrium achieving
strategies and equilibrium utilities are characterized for both the Gaussian BC
and the sum power MAC. The relationships between Pareto-optimal boundary points
of the capacity region and the generalized Nash equilibria (GNEs) are derived
for several special cases, and in all of these cases it is shown that all the
GNEs are Pareto-optimal, demonstrating that there is no loss in efficiency when
players adopt strategic behavior in these scenarios. Several key equivalence
relations are derived and used to demonstrate a game-theoretic duality between
the Gaussian MAC and the Gaussian BC. This duality allows a parametrized
computation of the equilibria of the BC in terms of the equilibria of the MAC
and paves the way to translate several MAC results to the dual BC scenario.
|
The cross-section for the production of $b{\bar b} c{\bar c}$ quarks in
$e^+e^-$ annihilation, that proves to be at a level of $\sigma (e^+e^-
\rightarrow b{\bar b} c{\bar c})/
\sigma (e^+e^- \rightarrow b{\bar b}) \sim 10^{-2}$ for ${\sqrt s}=M_Z$, is
calculated within the framework of QCD perturbation theory. The cross sections
for the associated production of $1S$- and $2S$-wave states of $B_c$-meson in
the reaction $e^+e^- \rightarrow B_c b{\bar c}$ were calculated in the
nonrelativistic model of a heavy quarkonium. The fragmentation function of
$b\to B_c^{(*)}$ is analysed in the scaling limit. The number of
$\Lambda_{bc}$-hyperons to be expected at LEP is estimated on the basis of the
assumption of quark-hadron duality.
|
We introduce a hard-core lattice-gas model on generalized Bethe lattices and
investigate analytically and numerically its compaction behavior. If
compactified slowly, the system undergoes a first-order crystallization
transition. If compactified much faster, the system stays in a meta-stable
liquid state and undergoes a glass transition under further compaction. We show
that this behavior is induced by geometrical frustration which appears due to
the existence of short loops in the generalized Bethe lattices. We also compare
our results to numerical simulations of a three-dimensional analog of the
model.
|
We investigate semi-inclusive photon-hadron production in the color glass
condensate (CGC) framework at RHIC and the LHC energies in proton-proton (pp)
and proton-nucleus (pA) collisions. We calculate the coincidence probability
for the azimuthal correlation of photon-hadron pairs and show that the away-side
correlations have a double-peak or a single-peak structure depending on trigger
particle selection and kinematics. This feature is unique to
semi-inclusive photon-hadron production, in contrast with the analogous
measurement of double-inclusive dihadron production in pA collisions. We obtain necessary
conditions between kinematics variables for the appearance of a double-peak or
a single peak structure for the away-side photon-hadron correlations in pp and
pA collisions at forward rapidities and show that this feature is mainly
controlled by the ratio p_T^hadron/p_T^photon. Decorrelation of away-side
photon-hadron production by increasing the energy, rapidity and density, and
appearance of double-peak structure can be understood by QCD saturation
physics. We also provide predictions for the ratio of single inclusive prompt
photon to hadron production, and two-dimensional nuclear modification factor
for the semi-inclusive photon-hadron pair production at RHIC and the LHC at
forward rapidities.
|
Superfluids are distinguished from ordinary fluids by the quantized manner
the rotation is manifested in them. Precisely, quantized vortices are known to
appear in the bulk of a superfluid subject to external rotation. In this work
we study a trapped ultracold Bose gas of $N=101$ atoms in two spatial
dimensions that is stirred by a rotating beam. We use the multiconfigurational
Hartree method for bosons, that extends the mainstream mean-field theory, to
calculate the dynamics of the gas in real time. As the gas is rotated the
wavefunction of the system changes symmetry and topology. We see a series of
resonant rotations as the stirring frequency is increased. Fragmentation
accompanies the resonances and change of symmetry of the wavefunction of the
gas. We conclude that fragmentation of the gas appears hand-in-hand with
resonant absorption of energy and angular momentum from the external agent of
rotation.
|
For the last two years, from 2020 to 2021, COVID-19 has broken disease
prevention measures in many countries, including Vietnam, and negatively
impacted various aspects of human life and the social community. Besides, the
misleading information in the community and fake news about the pandemic are
also serious situations. Therefore, we present the first Vietnamese
community-based question answering dataset for developing question answering
systems for COVID-19 called UIT-ViCoV19QA. The dataset comprises 4,500
question-answer pairs collected from trusted medical sources, with at least one
answer and at most four unique paraphrased answers per question. Along with the
dataset, we set up various deep learning models as baselines to assess the
quality of our dataset and initiate the benchmark results for further research
through commonly used metrics such as BLEU, METEOR, and ROUGE-L. We also
illustrate the positive effect of having multiple paraphrased answers when
experimenting with these models, especially the Transformer, a dominant
architecture in the field of study.
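For readers unfamiliar with the metrics named above, a minimal pure-Python implementation of sentence-level ROUGE-L (the LCS-based F-score) can be sketched; whitespace tokenization here is an illustrative assumption, not the paper's exact evaluation setup:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a, b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """Sentence-level ROUGE-L F1 between whitespace-tokenized strings."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

score = rouge_l_f1("covid vaccines are safe", "the covid vaccines are safe")
```

Having multiple paraphrased reference answers per question lets an evaluator take, for example, the maximum such score over all references, which rewards valid answers phrased differently from any single reference.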
|
For any inelastic process $v_{\ell} + I \to \ell^- + F$ with $m_{\ell} = 0$,
the cross section at $\theta_{\ell} = 0$ is given by Adler's PCAC theorem.
Inclusion of the lepton mass has a dynamical effect (``PCAC-screening'') caused
by interference of spin-zero ($\pi^+$) and spin-one exchanges. This effect may
be relevant to the forward suppression reported in recent experiments.
|
In this contribution I will review some of the research currently
being pursued in Padova (mainly within the In:Theory and Strength projects),
focusing on the interdisciplinary applications of nuclear theory to several
other branches of physics, with the aim of contributing to show the centrality
of nuclear theory in the Italian scientific scenario and the prominence of this
fertile field in fostering new physics.
In particular, I will talk about: i) the recent solution of the long-standing
"electron screening puzzle", which settles a fundamental controversy in nuclear
astrophysics between the outcome of lab experiments on Earth and nuclear
reactions happening in stars; and the application of algebraic methods to very
diverse systems, such as ii) the supramolecular complex H2@C60, i.e. a diatomic
hydrogen molecule caged in a fullerene, and iii) the spectrum of hypernuclei,
i.e. systems made of a Lambda particle trapped in (heavy) nuclei.
|
Through numerical simulations of the Kuramoto equation, which displays
high-dimensional dissipative chaos, we find a quantity representing the cost
for maintenance of a spatially non-uniform structure that appears in the phase
turbulence of chemical oscillatory waves. We call this quantity the generalized
entropy production and demonstrate that its distribution function possesses a
symmetry expressed by a fluctuation theorem. We also report a numerical result
which suggests a relation between this generalized entropy production rate and
the Kolmogorov-Sinai entropy.
|
In classification problems with large output spaces (up to millions of
labels), the last layer can require an enormous amount of memory. Using sparse
connectivity would drastically reduce the memory requirements, but as we show
below, it can result in much diminished predictive performance of the model.
Fortunately, we found that this can be mitigated by introducing a penultimate
layer of intermediate size. We further demonstrate that one can constrain the
connectivity of the sparse layer to be uniform, in the sense that each output
neuron will have the exact same number of incoming connections. This allows for
efficient implementations of sparse matrix multiplication and connection
redistribution on GPU hardware. Via a custom CUDA implementation, we show that
the proposed approach can scale to datasets with 670,000 labels on a single
commodity GPU with only 4GB memory.
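The uniform-connectivity idea can be sketched in NumPy (an illustrative reimplementation, not the authors' custom CUDA kernel): giving every output neuron exactly the same number of incoming connections packs indices and weights into rectangular arrays, so the forward pass reduces to a gather plus a batched dot product.

```python
import numpy as np

def make_uniform_sparse_layer(n_in, n_out, fan_in, rng):
    """Sparse layer where every output neuron has exactly `fan_in`
    incoming connections, stored as dense (n_out, fan_in) arrays."""
    idx = np.stack([rng.choice(n_in, size=fan_in, replace=False)
                    for _ in range(n_out)])               # (n_out, fan_in)
    w = rng.standard_normal((n_out, fan_in)) / np.sqrt(fan_in)
    return idx, w

def sparse_forward(x, idx, w):
    """x: (batch, n_in) -> logits: (batch, n_out)."""
    gathered = x[:, idx]                  # gather: (batch, n_out, fan_in)
    return np.einsum('bof,of->bo', gathered, w)

rng = np.random.default_rng(0)
idx, w = make_uniform_sparse_layer(n_in=256, n_out=1000, fan_in=32, rng=rng)
x = rng.standard_normal((4, 256))
logits = sparse_forward(x, idx, w)
```

Because the arrays are rectangular, connection redistribution amounts to rewriting rows of `idx` and `w` in place, with no change to the memory layout.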
|
We use mirror symmetry, the refined holomorphic anomaly equation and
modularity properties of elliptic singularities to calculate the refined BPS
invariants of stable pairs on non-compact Calabi-Yau manifolds, based on del
Pezzo surfaces and elliptic surfaces, in particular the half K3. The BPS
numbers contribute naturally to the five-dimensional N=1 supersymmetric index
of M-theory, but they can be also interpreted in terms of the superconformal
index in six dimensions and upon dimensional reduction the generating functions
count N=2 Seiberg-Witten gauge theory instantons in four dimensions. Using the
M/F-theory uplift the additional information encoded in the spin content can be
used in an essential way to obtain information about BPS states in physical
systems associated to small instantons, tensionless strings, gauge symmetry
enhancement in F-theory by [p,q]-strings as well as M-strings.
|
In this paper, we present the experimental work done on Query Expansion (QE)
for retrieval tasks of Gujarati text documents. In information retrieval, it is
very difficult to estimate the exact user need, query expansion adds terms to
the original query, which provides more information about the user need. There
are various approaches to query expansion. In our work, manual thesaurus based
query expansion was performed to evaluate the performance of widely used
information retrieval models for Gujarati text documents. Results show that
query expansion improves the recall of text documents.
|
We introduce a class of birth-and-death Polya urns, which allow for both
sampling and removal of observations governed by an auxiliary inhomogeneous
Bernoulli process, and investigate the asymptotic behaviour of the induced
allelic partitions. By exploiting some embedded models, we show that the
asymptotic regimes exhibit a phase transition from partitions with almost
surely infinitely many blocks and independent counts, to stationary partitions
with a random number of blocks. The first regime corresponds to limits of
Ewens-type partitions and includes a result of Arratia, Barbour and Tavar\'e
(1992) as a special case. We identify the invariant and reversible measure in
the second regime, which preserves asymptotically the dependence between
counts, and is shown to be a mixture of Ewens sampling formulas, with a tilted
Negative Binomial mixing distribution on the sample size.
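For context, the Ewens-type partitions that arise as limits in the first regime can be sampled with the standard Chinese-restaurant construction; this is a sketch of the classical mechanism only, not the authors' birth-and-death urn with removals.

```python
import random

def ewens_partition(n, theta, seed=0):
    """Sample block sizes of a partition of n items from the Ewens
    sampling formula via the Chinese-restaurant construction: item
    i+1 starts a new block with probability theta/(i+theta), or joins
    an existing block with probability proportional to its size."""
    rng = random.Random(seed)
    blocks = []
    for i in range(n):
        if not blocks or rng.random() < theta / (i + theta):
            blocks.append(1)                      # open a new block
        else:
            u, acc = rng.random() * i, 0          # pick among i items
            for j, b in enumerate(blocks):
                acc += b
                if u < acc:
                    blocks[j] += 1                # join block j
                    break
    return blocks

part = ewens_partition(500, theta=1.0)
```

In this classical regime the number of blocks grows without bound (roughly like theta log n), in line with the "almost surely infinitely many blocks" limit; the second regime of the paper instead stabilizes at a random number of blocks.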
|
Actin cytoskeletal protrusions in crawling cells, or lamellipodia, exhibit
various morphological properties such as two characteristic peaks in the
distribution of filament orientation with respect to the leading edge. To
understand these properties, using the dendritic nucleation model as a basis
for cytoskeletal restructuring, a kinetic-population model with
orientational-dependent branching (birth) and capping (death) is constructed
and analyzed. Optimizing for growth yields a relation between the branch angle
and filament orientation that explains the two characteristic peaks. The model
also exhibits a subdominant population that allows for more accurate modeling
of recent measurements of filamentous actin density along the leading edge of
lamellipodia in keratocytes. Finally, we explore the relationship between
orientational and spatial organization of filamentous actin in lamellipodia and
address recent observations of a prevalence of overlapping filaments over
branched filaments---a finding that is claimed to be in contradiction with the
dendritic nucleation model.
|
Non-thermal electron acceleration via magnetic reconnection is thought to
play an important role in powering the variable X-ray emission from radiatively
inefficient accretion flows around black holes. The trans-relativistic regime
of magnetic reconnection, where the magnetization $\sigma$, defined as the
ratio of magnetic energy density to enthalpy density, is $\sim 1$, is
frequently encountered in such flows. By means of a large suite of
two-dimensional particle-in-cell simulations, we investigate electron and
proton acceleration in the trans-relativistic regime. We focus on the
dependence of the electron energy spectrum on $\sigma$ and the proton $\beta$
(i.e., the ratio of proton thermal pressure to magnetic pressure). We find that
the electron spectrum in the reconnection region is non-thermal and can be
generally modeled as a power law. At $\beta \lesssim 3 \times 10^{-3}$, the
slope, $p$, is independent of $\beta$ and it hardens with increasing $\sigma$
as $p\simeq 1.8 +0.7/\sqrt{\sigma}$. Electrons are primarily accelerated by the
non-ideal electric field at X-points, either in the initial current layer or in
current sheets generated in between merging magnetic islands. At higher values
of $\beta$, the electron power law steepens for all values of $\sigma$. At
values of $\beta$ near $\beta_{\rm max}\approx1/4\sigma$, when both electrons
and protons are relativistically hot prior to reconnection, the spectra of both
species display an additional component at high energies, containing a few
percent of particles. These particles are accelerated via a Fermi-like process
by bouncing in between the reconnection outflow and a stationary magnetic
island. We provide an empirical prescription for the dependence of the
power-law slope and the acceleration efficiency on $\beta$ and $\sigma$, which
can be used in global simulations of collisionless accretion disks.
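The empirical prescription quoted above for the low-beta regime can be written down directly; this is a sketch of the stated fitting formula only, and the full beta-dependence of the slope requires the paper's results.

```python
import numpy as np

def electron_slope(sigma):
    """Empirical power-law slope p ~= 1.8 + 0.7/sqrt(sigma) of the
    electron spectrum, valid in the low-beta regime (beta <~ 3e-3)
    quoted above; the slope hardens (decreases) as sigma grows."""
    return 1.8 + 0.7 / np.sqrt(sigma)

slopes = {s: electron_slope(s) for s in (0.1, 0.3, 1.0, 3.0)}
```

A global accretion-disk simulation could call such a function cell by cell to assign a non-thermal electron spectrum from the local sigma.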
|
Background: Recent epidemic outbreaks such as the SARS-CoV-2 pandemic and the
mpox outbreak in 2022 have demonstrated the value of genomic sequencing data
for tracking the origin and spread of pathogens. Laboratories around the globe
generated new sequences at unprecedented speed and volume and bioinformaticians
developed new tools and dashboards to analyze this wealth of data. However, a
major challenge that remains is the lack of simple and efficient approaches for
accessing and processing sequencing data.
Results: The Lightweight API for Sequences (LAPIS) facilitates rapid
retrieval and analysis of genomic sequencing data through a REST API. It
supports complex mutation- and metadata-based queries and can perform
aggregation operations on massive datasets. LAPIS is optimized for typical
questions relevant to genomic epidemiology. Built on a newly developed in-memory
database engine, it achieves high speed and throughput: between 25 January and 4
February 2023, the SARS-CoV-2 instance of LAPIS, which contains 14.5 million
sequences, processed over 20 million requests with a mean response time of 411
ms and a median response time of 1 ms. LAPIS is the core engine behind our
dashboards on genspectrum.org and we currently maintain public LAPIS instances
for SARS-CoV-2 and mpox.
Conclusions: Powered by an optimized database engine and available through a
web API, LAPIS enhances the accessibility of genomic sequencing data. It is
designed to serve as a common backend for dashboards and analyses with the
potential to be integrated into common database platforms such as GenBank.
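To make the query model concrete, here is a hedged sketch of how a client might assemble an aggregated-count request over a REST API; the base URL, endpoint path, and field names below are illustrative assumptions, not the documented LAPIS interface.

```python
import urllib.parse

# Hypothetical LAPIS-style aggregated query. The base URL, endpoint
# path, and field names are illustrative assumptions only.
BASE = "https://example.org/open/v2"

def aggregated_query(filters, group_by):
    """Build the URL of an aggregated count query: filter sequences
    by metadata/mutation predicates, then group counts by fields."""
    params = dict(filters)
    params["fields"] = ",".join(group_by)
    return BASE + "/sample/aggregated?" + urllib.parse.urlencode(params)

url = aggregated_query(
    {"country": "Switzerland", "aminoAcidMutations": "S:N501Y"},
    group_by=["date"],
)
```

A dashboard backend would issue one such request per chart and receive pre-aggregated counts, rather than downloading and processing millions of raw sequences itself.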
|
Observations of gravitational waves from inspiralling neutron star
binaries---such as GW170817---can be used to constrain the nuclear equation of
state by placing bounds on stellar tidal deformability. For slowly rotating
neutron stars, the response to a weak quadrupolar tidal field is characterized
by four internal-structure-dependent constants called "Love numbers." The tidal
Love numbers $k_2^\text{el}$ and $k_2^\text{mag}$ measure the tides raised by
the gravitoelectric and gravitomagnetic components of the applied field, and
the rotational-tidal Love numbers $\mathfrak{f}^\text{o}$ and
$\mathfrak{k}^\text{o}$ measure those raised by couplings between the applied
field and the neutron star spin. In this work we compute these four Love
numbers for perfect fluid neutron stars with realistic equations of state. We
discover (nearly) equation-of-state independent relations between the
rotational-tidal Love numbers and the moment of inertia, thereby extending the
scope of I-Love-Q universality. We find that similar relations hold among the
tidal and rotational-tidal Love numbers. These relations extend the
applications of I-Love universality in gravitational-wave astronomy. As our
findings differ from those reported in the literature, we derive general
formulas for the rotational-tidal Love numbers in post-Newtonian theory and
confirm numerically that they agree with our general-relativistic computations
in the weak-field limit.
|
In this paper, we characterize singularity of the $n$-th eigenvalue of
self-adjoint discrete Sturm-Liouville problems in any dimension. For a fixed
Sturm-Liouville equation, we completely characterize singularity of the $n$-th
eigenvalue. For a fixed boundary condition, unlike in the continuous case, the
$n$-th eigenvalue exhibits jump phenomena and we describe the singularity under
a non-degeneracy assumption. Compared with the continuous case in [8, 12], the
singular set here depends heavily on the coefficients of the Sturm-Liouville
equations. This, together with the arbitrariness of the dimension, causes difficulty
when dividing areas in layers of the considered space such that the $n$-th
eigenvalue has the same singularity in any given area. We study the singularity
by partitioning and analyzing the local coordinate systems, and provide a
Hermitian matrix which can determine the areas' division. To prove the
asymptotic behavior of the $n$-th eigenvalue, we generalize the method
developed in [21] to any dimension. Finally, by transforming the
Sturm-Liouville problem of Atkinson type in any dimension to a discrete one, we
can not only determine the number of eigenvalues, but also apply our approach
above to obtain the complete characterization of singularity of the $n$-th
eigenvalue for the Atkinson type.
|
Using exponential quadratic operators, we present a general framework for
studying the exact dynamics of system-bath interaction in which the Hamiltonian
is described by the quadratic form of bosonic operators. To demonstrate the
versatility of the approach, we study how the environment affects the squeezing
of quadrature components of the system. We further propose that the squeezing
can be enhanced when parity kicks are applied to the system.
|
We study a passenger-taxi double-ended queue with impatient passengers and
two-point matching time. Unlike the systems considered in the existing
literature, our model fully accounts for the matching time between passengers
and taxis and for the taxi capacity of the system. The objective is to obtain
the equilibrium joining strategy and the socially optimal strategy under two
different information levels, motivated by the practical setting of an airport
terminal. The theoretical results show that the passenger utility function in
the partially observable case is monotonic. Because the social welfare function
in the partially observable case has a complex form, we use a split derivation.
The equilibrium strategy and socially optimal strategy of the
observable case are threshold-type. Furthermore, some representative numerical
scenarios are used to visualize the theoretical results. The numerical
scenarios illustrate the influence of parameters on the equilibrium strategy
and socially optimal strategy under two information levels. Finally, the
optimal social welfare for the two information levels with the same parameters
is compared.
|
There have been extensive efforts in government, academia, and industry to
anticipate, forecast, and mitigate cyber attacks. A common approach is
time-series forecasting of cyber attacks based on data from network telescopes,
honeypots, and automated intrusion detection/prevention systems. This research
has uncovered key insights such as systematicity in cyber attacks. Here, we
propose an alternate perspective of this problem by performing forecasting of
attacks that are analyst-detected and -verified occurrences of malware. We call
these instances of malware cyber event data. Specifically, our dataset was
analyst-detected incidents from a large operational Computer Security Service
Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on
automated systems. Our dataset consists of weekly counts of cyber events over
approximately seven years. Since all cyber events were validated by analysts,
our dataset is unlikely to have false positives which are often endemic in
other sources of data. Further, the higher-quality data could be used for a
number of purposes, including resource allocation, estimation of security resources, and the
development of effective risk-management strategies. We used a Bayesian State
Space Model for forecasting and found that events one week ahead could be
predicted. To quantify bursts, we used a Markov model. Our findings of
systematicity in analyst-detected cyber attacks are consistent with previous
work using other sources. The advanced information provided by a forecast may
help with threat awareness by providing a probable value and range for future
cyber events one week ahead. Other potential applications for cyber event
forecasting include proactive allocation of resources and capabilities for
cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs.
Enhanced threat awareness may improve cybersecurity.
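As an illustration of one-week-ahead forecasting on weekly counts, here is a minimal sketch assuming a local-level (random-walk plus noise) state-space model with known noise variances, filtered with the Kalman recursions; the paper's actual Bayesian state-space model is not specified in the abstract.

```python
import numpy as np

def local_level_forecast(y, q=1.0, r=1.0):
    """One-step-ahead forecasts from a local-level state-space model
    via the Kalman filter. y: observed weekly counts; q, r: state and
    observation noise variances (assumed known here; in practice
    estimated). Returns the forecast for each week computed from the
    data up to the previous week only."""
    m, p = y[0], 1.0                 # initial state mean and variance
    preds = []
    for obs in y:
        preds.append(m)              # forecast before seeing obs
        p_pred = p + q               # predict step
        k = p_pred / (p_pred + r)    # Kalman gain
        m = m + k * (obs - m)        # update step
        p = (1 - k) * p_pred
    return np.array(preds)

y = np.array([5., 7., 6., 9., 8., 10., 11., 9.])
preds = local_level_forecast(y)
```

The posterior predictive variance from the same recursion would supply the "probable value and range" for the coming week that the abstract describes.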
|
We are concerned with the mean field equation with singular data on bounded
domains. Under suitable non-degeneracy conditions we prove local uniqueness and
non-degeneracy of bubbling solutions blowing up at singular points. The proof
is based on sharp estimates for bubbling solutions of singular mean field
equations and suitably defined Pohozaev-type identities.
|
This essay points to many of the interesting ramifications of Margulis'
arithmeticity theorem, the superrigidity theorem, and normal subgroup theorem.
We provide some history and background, but the main goal is to point to
interesting open questions that stem directly or indirectly from Margulis' work
and its antecedents.
|
PAPER WITHDRAWN.
The recent detection of Sgr A* in the X-ray band, together with the radio
polarization measurements conducted over the past few years, offer the best
constraints yet for understanding the nature of the emitting gas within several
Schwarzschild radii ($r_S$) of this supermassive black hole candidate at the
Galactic Center. Earlier, we showed that the sub-mm radiation from this source
may be associated with thermal synchrotron emission from an inner Keplerian
region within the circularization radius of the accreting plasma. In this
paper, we extend this analysis in a very important way, by calculating the
implied high-energy emission of Sgr A* associated with the orbiting, hot,
magnetized gas. We find that for the accretion rate inferred from the fits to
the sub-mm data, the dominant contribution to Sgr A*'s X-ray flux is due to
self-Comptonization of the radio photons, rather than from bremsstrahlung. The
latter is a two-body process, which would produce significant X-ray emission
only at much higher accretion rates. This picture leads to the testable
prediction that the physical conditions within the inner $\sim5r_S$ are
variable on a time scale of order a year. In particular, the accretion rate
$\dot M$ appears to have changed by about 15% between the sub-mm measurements
in 1996 and 1999. Given that the radio and self-Comptonized fluxes are strongly
correlated in this picture, the upcoming second generation Chandra observations
of Sgr A* may provide the direct evidence required to test this model.
|
We show that smooth maps are $C^1$-dense among $C^1$ volume preserving maps.
|
We study the ground state phase diagrams of two-photon Dicke, the
one-dimensional Jaynes-Cummings-Hubbard (JCH), and Rabi-Hubbard (RH) models
using mean field, perturbation, quantum Monte Carlo (QMC), and density matrix
renormalization group (DMRG) methods. We first compare mean field predictions
for the phase diagram of the Dicke model with exact QMC results and find
excellent agreement. The phase diagram of the JCH model is then shown to
exhibit a single Mott insulator lobe with two excitons per site, a superfluid
(SF, superradiant) phase and a large region of instability where the
Hamiltonian becomes unbounded. Unlike the one-photon model, there are no higher
Mott lobes. Also unlike the one-photon case, the SF phases above and below the
Mott are surprisingly different: Below the Mott, the SF is that of photon {\it
pairs} as opposed to above the Mott where it is SF of simple photons. The mean
field phase diagram of the RH model predicts a transition from a normal to a
superradiant phase but none is found with QMC.
|
An all-order resummation is performed for the effect of the running of the
strong coupling in the zero recoil sum rule for the axial current and for the
kinetic operator \vec\pi^2. The perturbative corrections to well-defined
objects of OPE turn out to be very moderate. The renormalization of the kinetic
operator is addressed.
|
Diluted Magnetic Semiconductors (DMSs) doped with a small concentration of
magnetic impurities that induce ferromagnetism have attracted a lot of
attention in the last few years. In particular, DMSs based on III-V and II-VI
semiconductors doped with transition metals are deeply investigated by both
theoretical and experimental scientists for their promising applications in
spintronics. In this work, we present the magnetic properties of Mn ions doped
into a semiconductor for different carrier concentrations. For the case of
Zn1-xMnxO, our calculations, carried out within a Monte Carlo study using a
Heisenberg Hamiltonian based on the RKKY interaction, yield well-converged MC
data for different carrier concentrations.
|
3D printing is a recently developed technique for single-unit production and
for structures that were previously impossible to build. The current work
presents a method to 3D print polymer bonded isotropic hard magnets with a
low-cost, end-user 3D printer. Commercially available isotropic NdFeB powder
inside a PA11 matrix is characterized, and prepared for the printing process.
An example of a printed magnet with a complex shape that was designed to
generate a specific stray field is presented, and compared with finite element
simulation solving the macroscopic Maxwell equations. For magnetic
characterization, and comparing 3D printed structures with injection molded
parts, hysteresis measurements are performed. To measure the stray field
outside the magnet, the printer is upgraded to a 3D magnetic flux density
measurement system. To skip an elaborate adjusting of the sensor, a simulation
is used to calibrate the angles, sensitivity, and the offset of the sensor.
With this setup a measurement resolution of 0.05 mm along the z-axis is
achievable. The effectiveness of our novel calibration method is shown.
With our setup we are able to print polymer bonded magnetic systems with the
freedom of having a specific complex shape with locally tailored magnetic
properties. The 3D scanning setup is easy to mount, and with our calibration
method we are able to get accurate measuring results of the stray field.
|
Proof-of-Learning (PoL) proposes that a model owner logs training checkpoints
to establish a proof of having expended the computation necessary for training.
The authors of PoL forego cryptographic approaches and trade rigorous security
guarantees for scalability to deep learning. They empirically argued the
benefit of this approach by showing how spoofing--computing a proof for a
stolen model--is as expensive as obtaining the proof honestly by training the
model. However, recent work has provided a counter-example and thus has
invalidated this observation.
In this work we demonstrate, first, that while it is true that current PoL
verification is not robust to adversaries, recent work has largely
underestimated this lack of robustness. This is because existing spoofing
strategies are either unreproducible or target weakened instantiations of
PoL--meaning they are easily thwarted by changing hyperparameters of the
verification. Instead, we introduce the first spoofing strategies that can be
reproduced across different configurations of the PoL verification and can be
done for a fraction of the cost of previous spoofing strategies. This is
possible because we identify key vulnerabilities of PoL and systematically
analyze the underlying assumptions needed for robust verification of a proof.
On the theoretical side, we show how realizing these assumptions reduces to
open problems in learning theory. We conclude that one cannot develop a provably
robust PoL verification mechanism without further understanding of optimization
in deep learning.
|
We study the dust concentration and emission in protoplanetary disk
vortices. We extend the Lyra-Lin solution for the dust concentration of a
single grain size to a power-law distribution of grain sizes $n(a) \propto
a^{-p}$. Assuming dust conservation in the disk, we find an analytic dust
surface density as a function of the grain radius. We calculate the increase of
the dust to gas mass ratio $\epsilon$ and the slope $p$ of the dust size
distribution due to grain segregation within the vortex. We apply this model to
a numerical simulation of a disk containing a persistent vortex. Due to the
accumulation of large grains towards the vortex center, $\epsilon$ increases by
a factor of 10 from the background disk value, and $p$ decreases from 3.5 to
3.0. We find the disk emission at millimeter wavelengths corresponding to
synthetic observations with ALMA and VLA. The simulated maps at 7 mm and 1 cm
show a strong azimuthal asymmetry. This happens because, at these wavelengths,
the disk becomes optically thin while the vortex remains optically thick. The
large vortex opacity is mainly due to an increase in the dust to gas mass
ratio. In addition, the change in the slope of the dust size distribution
increases the opacity by a factor of 2. We also show that the inclusion of the
dust scattering opacity substantially changes the disk images.
|
The Deep Survey instrument on the Extreme Ultraviolet Explorer satellite
(EUVE) obtained long, nearly continuous soft X-ray light curves of 5-33 days
duration for 14 Seyfert galaxies and QSOs. We present a uniform reduction of
these data, which account for a total of 231 days of observation. Several of
these light curves are well suited to a search for periodicity or QPOs in the
range of hours to days that might be expected from dynamical processes in the
inner accretion disk around ~10^8 M_sun black holes. Light curves and
periodograms of the three longest observations show features that could be
transient periods: 0.89 days in RX J0437.4-4711, 2.08 days in Ton S180, and 5.8
days in 1H 0419-577. The statistical significance of these signals is estimated
using the method of Timmer & Konig (1995), which carefully takes into account
the red-noise properties of Seyfert light curves. The result is that the
signals in RX J0437.4-4711 and Ton S180 exceed 95% confidence with respect to
red noise, while 1H 0419-577 is only 64% significant. These period values
appear unrelated to the length of the observation, which is similar in the
three cases, but they do scale roughly as the luminosity of the object, which
would be expected in a dynamical scenario if luminosity scales with black hole
mass.
|
A discontinuous generalization of the standard map, which arises naturally as
the dynamics of a periodically kicked particle in a one dimensional infinite
square well potential, is examined. The existence of competing length scales,
namely the width of the well and the wavelength of the external field,
introduces novel dynamical behaviour. Diffusion induced by deterministic chaos is
observed for weak field strengths as the length scales do not match. This is
related to an abrupt breakdown of rotationally invariant curves and in
particular KAM tori. An approximate stability theory is derived wherein the
usual standard map is a point of ``bifurcation''.
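As a hedged illustration of this class of system (not the authors' exact map), one period of a kicked particle in an infinite square well can be sketched as an instantaneous momentum kick followed by free flight folded back into the well by wall reflections. The kick strength K, field wavelength lam, well width L, and kick period T below are arbitrary choices, not values from the paper:

```python
import math

def fold(x, p, L):
    """Reflect position x back into the well [0, L], flipping momentum on each bounce."""
    while x < 0 or x > L:
        if x < 0:
            x, p = -x, -p
        else:
            x, p = 2 * L - x, -p
    return x, p

def kicked_well_step(x, p, K=0.8, lam=1.0, L=1.5, T=1.0):
    """One period: a kick from the external field, then free flight with wall reflections."""
    p = p + K * math.sin(2 * math.pi * x / lam)  # instantaneous kick
    x, p = fold(x + p * T, p, L)                 # free flight, folded into the well
    return x, p

# Iterate a single orbit; when lam != L the two length scales fail to match.
x, p = 0.3, 0.1
for _ in range(1000):
    x, p = kicked_well_step(x, p)
```

The mismatch between lam and L is the regime in which the abstract reports chaos-induced diffusion; setting lam commensurate with L restores the matched-length-scale case.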
|
The consistent histories formulation of the quantum theory of a closed system
with pure initial state defines an infinite number of incompatible consistent
sets, each of which gives a possible description of the physics. We investigate
the possibility of using the properties of the Schmidt decomposition to define
an algorithm which selects a single, physically natural, consistent set. We
explain the problems which arise, set out some possible algorithms, and explain
their properties with the aid of simple models. Though the discussion is framed
in the language of the consistent histories approach, it is intended to
highlight the difficulty in making any interpretation of quantum theory based
on decoherence into a mathematically precise theory.
|
We study unbinding of multivalent cationic ligands from oppositely charged
polymeric binding sites sparsely grafted on a flat neutral substrate. Our
molecular dynamics (MD) simulations are suggested by single-molecule studies of
protein-DNA interactions. We consider univalent salt concentrations spanning
roughly a thousandfold range, together with various concentrations of excess
ligands in solution. To reveal the ionic effects on unbinding kinetics of
spontaneous and facilitated dissociation mechanisms, we treat electrostatic
interactions both at a Debye-H\"{u}ckel (DH, or `implicit' ions, i.e., use of
an electrostatic potential with a prescribed decay length) level, as well as by
the more precise approach of considering all ionic species explicitly in the
simulations. We find that the DH approach systematically overestimates
unbinding rates, relative to the calculations where all ion pairs are present
explicitly in solution, although many aspects of the two types of calculation
are qualitatively similar. For facilitated dissociation (FD, acceleration of
unbinding by free ligands in solution) explicit ion simulations lead to
unbinding at lower free ligand concentrations. Our simulations predict a
variety of FD regimes as a function of free ligand and ion concentrations; a
particularly interesting regime is at intermediate concentrations of ligands
where non-electrostatic binding strength controls FD. We conclude that
explicit-ion electrostatic modeling is an essential component to quantitatively
tackle problems in molecular ligand dissociation, including
nucleic-acid-binding proteins.
|
A new proof is given for the correctness of the powers of two descent method
for computing discrete logarithms. The result is slightly stronger than the
original work, but more importantly we provide a unified geometric argument,
eliminating the need to analyse all possible subgroups of
$\mathrm{PGL}_2(\mathbb F_q)$. Our approach sheds new light on the role of
$\mathrm{PGL}_2$, in the hope of eventually leading to a complete proof that
discrete logarithms can be computed in quasi-polynomial time in finite fields
of fixed characteristic.
|
Cervical Cancer continues to be the leading gynecological malignancy, posing
a persistent threat to women's health on a global scale. Early screening via
cytology Whole Slide Image (WSI) diagnosis is critical to prevent the
progression of this cancer and improve survival rates, but a single
pathologist's review suffers from inevitable false negatives due to the immense
number of cells that need to be reviewed within a WSI. Though computer-aided
automated diagnostic models can serve as a strong complement for pathologists,
their effectiveness is hampered by
the paucity of extensive and detailed annotations, coupled with the limited
interpretability and robustness. These factors significantly hinder their
practical applicability and reliability in clinical settings. To tackle these
challenges, we develop an AI approach, which is a Scalable Technology for
Robust and Interpretable Diagnosis built on Extensive data (STRIDE) of cervical
cytology. STRIDE addresses the bottleneck of limited annotations by integrating
patient-level labels with a small portion of cell-level labels through an
end-to-end training strategy, facilitating scalable learning across extensive
datasets. To further improve the robustness to real-world domain shifts of
cytology slide-making and imaging, STRIDE employs training with color
adversarial samples that mimic staining and imaging variations. Lastly, to
achieve pathologist-level interpretability for trustworthiness in clinical
settings, STRIDE can generate explanatory textual descriptions that simulate
pathologists' diagnostic processes by aligning cell image features with textual
descriptions. Conducting extensive experiments and evaluations in 183
medical centers with a dataset of 341,889 WSIs and 0.1 billion cells from
cervical cytology patients, STRIDE has demonstrated a remarkable superiority
over previous state-of-the-art techniques.
|
Every database system contains a query optimizer that performs query
rewrites. Unfortunately, developing query optimizers remains a highly
challenging task. Part of the challenge comes from the intricacies and rich
features of query languages, which make reasoning about rewrite rules
difficult. In this paper, we propose a machine-checkable denotational semantics
for SQL, the de facto language for relational databases, for rigorously
validating rewrite rules. Unlike previously proposed semantics that are either
non-mechanized or cover only a small number of SQL language features, our
semantics covers all major features of SQL, including bags, correlated
subqueries, aggregation, and indexes. Our mechanized semantics, called HoTTSQL,
is based on K-Relations and homotopy type theory, where we denote relations as
mathematical functions from tuples to univalent types. We have implemented
HoTTSQL in Coq in fewer than 300 lines of code, and have proved a
wide range of SQL rewrite rules, including those from database research
literature (e.g., magic set rewrites) and real-world query optimizers (e.g.,
subquery elimination). Several of these rewrite rules have never been
previously proven correct. In addition, while query equivalence is generally
undecidable, we have implemented an automated decision procedure using HoTTSQL
for conjunctive queries: a well-studied decidable fragment of SQL that
encompasses many real-world queries.
|
Detecting Human-Object Interaction (HOI) in images is an important step
towards high-level visual comprehension. Existing work often focuses on
improving either human and object detection or interaction recognition.
However, due to the limitations of datasets, these methods tend to fit well on
frequent interactions conditioned on the detected objects while largely
ignoring the rare ones, which we refer to as the object bias problem in this
paper. In this work, we uncover the problem for the first time from two aspects:
unbalanced interaction distribution and biased model learning. To overcome the
object bias problem, we propose a novel plug-and-play Object-wise Debiasing
Memory (ODM) method for re-balancing the distribution of interactions under
detected objects. Equipped with carefully designed read and write strategies,
the proposed ODM allows rare interaction instances to be more frequently
sampled for training, thereby alleviating the object bias induced by the
unbalanced interaction distribution. We apply this method to three advanced
baselines and conduct experiments on the HICO-DET and HOI-COCO datasets. To
quantitatively study the object bias problem, we advocate a new protocol for
evaluating model performance. As demonstrated in the experimental results, our
method brings consistent and significant improvements over baselines,
especially on rare interactions under each object. In addition, when evaluating
under the conventional standard setting, our method achieves new
state-of-the-art on the two benchmarks.
|
The potential between two D4-branes at angles with partially unbroken
supersymmetry is computed, and is used to discuss the creation of a fundamental
string when two such D4-branes cross each other in M(atrix) theory. The
effective Lagrangian is shown to be anomalous at the 1-loop level, but it
can be modified by bare Chern-Simons terms to preserve the invariance under the
large gauge transformation. The resulting effective potential agrees with that
obtained from the string calculations. The result shows that a fundamental
string is created in order to cancel the repulsive force between two D4-branes
at proper angles.
|
We investigate the non-equilibrium relaxation dynamics of a one dimensional
system of interacting spinless fermions near the XXZ integrable point. We
observe two qualitatively different regimes: close to integrability and for low
energies the relaxation proceeds in two steps (prethermalization scenario),
while for large energies and/or away from integrability the dynamics develops
in a single step. When the integrability breaking parameter is below a certain
finite threshold and the energy of the system is sufficiently low the lifetime
of the metastable states increases abruptly by several orders of magnitude,
resembling the physics of glassy systems. This is reflected in a sudden jump in
the relaxation timescales. We present results for finite but large systems and
for large times compared to standard numerical methods. Our approach is based
on the construction of equations of motion for one- and two-particle
correlation functions using projection operator techniques.
|
We discuss the testing of the Standard Model of CP violation, and the search
for CP-violating effects from beyond the Standard Model, in $B$ decays. We then
focus on the quantum mechanics of the experiments on CP violation to be
performed at $B$ factories. These experiments will involve very pretty
Einstein-Podolsky-Rosen correlations. We show that the physics of these
experiments can be understood without invoking the ``collapse of the wave
function," and without the mysteries that sometimes accompany discussions of
EPR effects.
(To appear in the Proceedings of the Moriond Workshop on Electroweak
Interactions and Unified Theories, Les Arcs, France, March 1995)
|
Urbanization has a large impact on human society. Meanwhile, global warming
has become an increasingly prominent topic, since it not only plays a
significant role in daily human life but also has a huge impact on the global
climate. Against the background of global warming, the impacts of urbanization
may differ, which is the main topic of this study. Monthly air temperature data
for the period 1960-2014 from two meteorological stations in Beijing with
different underlying surfaces, urban and suburban, were analyzed. In addition,
two years with different circulation conditions, 1993 and 2010, were simulated
with weather forecasting models. We conducted experiments using the
Weather Research and Forecasting (WRF) model to investigate the impacts of
urbanization under different circulation conditions on the summer temperature
over the Beijing-Tianjin-Hebei Region (BTHR). The results indicate that the
effect of urbanization under a low-pressure system is greater than that under
a high-pressure system. This is because the difference in total cloud cover
between the urban and suburban areas was greater under the low-pressure system
in 1993. This research will not only increase our understanding of the effect
of urbanization but also provide a scientific basis for enhancing the ability
to prevent disasters and reduce damage under global warming.
|
Cross-lingual document alignment aims to identify pairs of documents in two
distinct languages that are of comparable content or translations of each
other. In this paper, we exploit the signals embedded in URLs to label web
documents at scale with an average precision of 94.5% across different language
pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify
web document pairs that are translations of each other. We release a new web
dataset consisting of over 392 million URL pairs from Common Crawl covering
documents in 8144 language pairs of which 137 pairs include English. In
addition to curating this massive dataset, we introduce baseline methods that
leverage cross-lingual representations to identify aligned documents based on
their textual content. Finally, we demonstrate the value of this parallel
documents dataset through a downstream task of mining parallel sentences and
measuring the quality of machine translations from models trained on this mined
data. Our objective in releasing this dataset is to foster new research in
cross-lingual NLP across a variety of low, medium, and high-resource languages.
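The URL signal can be illustrated with a minimal sketch: many multilingual sites embed a language code in the URL path, so stripping that segment yields a key shared by translated pages. The URL scheme, regular expression, and function names below are hypothetical illustrations, not the authors' actual mining pipeline:

```python
import re
from collections import defaultdict

# Hypothetical language-code segments that commonly appear in URL paths.
LANG_SEGMENT = re.compile(r"/(en|fr|de|es|zh)(?=/|$)")

def url_key(url):
    """Strip the language segment so translated pages share a common key."""
    match = LANG_SEGMENT.search(url)
    if not match:
        return None, None
    lang = match.group(1)
    key = LANG_SEGMENT.sub("/*", url, count=1)
    return lang, key

def align_by_url(urls):
    """Group URLs whose only difference is the embedded language code."""
    groups = defaultdict(dict)
    for url in urls:
        lang, key = url_key(url)
        if key is not None:
            groups[key][lang] = url
    # Keys seen in at least two languages are candidate document pairs.
    return {k: v for k, v in groups.items() if len(v) >= 2}

pairs = align_by_url([
    "https://example.com/en/about",
    "https://example.com/fr/about",
    "https://example.com/en/contact",
])
```

A production system would need many more URL patterns (query parameters, country-language codes, subdomains) and a precision filter, which is where the content-based baselines described above come in.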
|
Text summarization aims to compress a textual document to a short summary
while keeping salient information. Extractive approaches are widely used in
text summarization because of their fluency and efficiency. However, most
existing extractive models hardly capture inter-sentence relationships,
particularly in long documents. They also often ignore the effect of topical
information on capturing important contents. To address these issues, this
paper proposes a graph neural network (GNN)-based extractive summarization
model, enabling it to capture inter-sentence relationships efficiently via
graph-structured document representation. Moreover, our model integrates a
joint neural topic model (NTM) to discover latent topics, which can provide
document-level features for sentence selection. The experimental results
demonstrate that our model not only achieves state-of-the-art results on the
CNN/DM and NYT datasets but also considerably outperforms existing approaches
on scientific paper datasets consisting of much longer documents, indicating
its better robustness across document genres and lengths. Further
discussions show that topical information can help the model preselect salient
contents from an entire document, which interprets its effectiveness in long
document summarization.
|
If binaries consisting of two 100 Msun black holes exist, they would serve as
extraordinarily powerful gravitational-wave sources, detectable to redshifts of
z=2 with the advanced LIGO/Virgo ground-based detectors. Large uncertainties
about the evolution of massive stars preclude definitive rate predictions for
mergers of these massive black holes. We show that rates as high as hundreds of
detections per year, or as low as no detections whatsoever, are both possible.
It was thought that the only way to produce these massive binaries was via
dynamical interactions in dense stellar systems. This view has been challenged
by the recent discovery of several stars with mass above 150 Msun in the R136
region of the Large Magellanic Cloud. Current models predict that when stars of
this mass leave the main sequence, their expansion is insufficient to allow
common envelope evolution to efficiently reduce the orbital separation. The
resulting black-hole--black-hole binary remains too wide to be able to coalesce
within a Hubble time. If this assessment is correct, isolated very massive
binaries do not evolve to be gravitational-wave sources. However, other
formation channels exist. For example, the high multiplicity of massive stars,
and their common formation in relatively dense stellar associations, opens up
dynamical channels for massive black hole mergers (e.g., via Kozai cycles or
repeated binary-single interactions). We identify key physical factors that
shape the population of very massive black-hole--black-hole binaries. Advanced
gravitational-wave detectors will provide important constraints on the
formation and evolution of very massive stars.
|
Loops and cycles play an important role in computing endomorphism rings of
supersingular elliptic curves and related cryptosystems. For a supersingular
elliptic curve $E$ defined over $\mathbb{F}_{p^2}$, if an imaginary quadratic
order $O$ can be embedded in $\text{End}(E)$ and a prime $L$ splits into two
principal ideals in $O$, we construct loops or cycles in the supersingular
$L$-isogeny graph at the vertices which are next to $j(E)$ in the supersingular
$\ell$-isogeny graph where $\ell$ is a prime different from $L$. Next, we
discuss the lengths of these cycles especially for $j(E)=1728$ and $0$.
Finally, we also determine an upper bound on primes $p$ for which there are
unexpected $2$-cycles if $\ell$ does not split in $O$.
|
We consider a generalization of the abelian Higgs model with a Chern-Simons
term by modifying two terms of the usual Lagrangian. We multiply a dielectric
function with the Maxwell kinetic energy term and incorporate nonminimal
interaction by considering generalized covariant derivative. We show that for a
particular choice of the dielectric function this model admits topological
vortices satisfying Bogomol'nyi bound for which the magnetic flux is not
quantized even though the energy is quantized. Furthermore, the vortex solution
in each topological sector is infinitely degenerate.
|
The Georgi-Machacek model is one of many beyond Standard Model scenarios with
an extended scalar sector which can group under the custodial $\rm SU(2)_C$
symmetry. There are 5-plet, 3-plet and singlet Higgs bosons under the
classification of such symmetry in addition to the Standard Model Higgs boson.
Here we study the prospects for detecting the doubly-charged Higgs boson ($\rm
H_5^{\pm\pm}$) through the vector boson fusion production at the
electron-proton colliders. Specifically, we concentrate our analysis on
$\mu$-lepton pair production via the decay of a pair of same-sign W bosons. The
discovery significance is calculated as a function of the triplet vacuum
expectation value and the required luminosity.
|
Braneworld scenarios with compact extra-dimensions need the volume of the
extra space to be stabilized. Goldberger and Wise have introduced a simple
mechanism, based on the presence of a bulk scalar field, able to stabilize the
radius of the Randall-Sundrum model. Here, we transpose the same mechanism to
generic single-brane and two-brane models, with one extra dimension and
arbitrary scalar potentials in the bulk and on the branes. The single-brane
construction turns out to be always unstable, independently of the bulk and
brane potentials. In the case of two branes, we derive some generic criteria
ensuring the stabilization or destabilization of the system.
|
The tidal deformations of a planet are often considered as markers of its
inner structure. In this work, we use the tide excitations induced by the Sun
on Venus to decipher the nature of its internal layers. Using a Monte Carlo
random exploration of the space of parameters describing the thickness,
density, and viscosity of 4- or 5-layer profiles, we were able to select models
that can reproduce the observed mass, total moment of inertia, $k_2$ Love
number and expected quality factor $Q$. Each model is assumed to have
homogeneous layers with constant density, viscosity and rigidity. These models
show significant contrasts in the viscosity between the upper mantle and the
lower mantle. They also rather favor a S-free core and a slightly hotter lower
mantle consistent with previous expectations.
|
We present a mechanism that permits the parallel execution of multiple
quantum gate operations within a single long linear ion chain. Our approach is
based on large coherent forces that occur when ions are electronically excited
to long-lived Rydberg states. The presence of Rydberg ions drastically affects
the vibrational mode structure of the ion crystal giving rise to modes that are
spatially localized on isolated sub-crystals which can be individually and
independently manipulated. We theoretically discuss this Rydberg mode shaping
in an experimentally realistic setup and illustrate its power by analyzing the
fidelity of two conditional phase flip gates executed in parallel. Our scheme
highlights a possible route towards large-scale quantum computing via
vibrational mode shaping which is controlled on the single ion level.
|
In this paper, we propose a framework that enables a human teacher to shape a
robot behaviour by interactively providing it with unlabeled instructions. We
ground the meaning of instruction signals in the task-learning process, and use
them simultaneously for guiding the latter. We implement our framework as a
modular architecture, named TICS (Task-Instruction-Contingency-Shaping) that
combines different information sources: a predefined reward function, human
evaluative feedback and unlabeled instructions. This approach provides a novel
perspective for robotic task learning that lies between Reinforcement Learning
and Supervised Learning paradigms. We evaluate our framework both in simulation
and with a real robot. The experimental results demonstrate the effectiveness
of our framework in accelerating the task-learning process and in reducing the
number of required teaching signals.
|
We analyse the impact of matrix-element corrections to top decays in HERWIG
on several observables related to the jet activity in t-tbar events at the
Tevatron and at the LHC. In particular, we study the effects induced by the
higher-order corrections to top decays on the top mass reconstruction using the
recently proposed J/psi+lepton final states.
|
This thesis presents a computational theory of unsupervised language
acquisition, precisely defining procedures for learning language from ordinary
spoken or written utterances, with no explicit help from a teacher. The theory
is based heavily on concepts borrowed from machine learning and statistical
estimation. In particular, learning takes place by fitting a stochastic,
generative model of language to the evidence. Much of the thesis is devoted to
explaining conditions that must hold for this general learning strategy to
arrive at linguistically desirable grammars. The thesis introduces a variety of
technical innovations, among them a common representation for evidence and
grammars, and a learning strategy that separates the ``content'' of linguistic
parameters from their representation. Algorithms based on it suffer from few of
the search problems that have plagued other computational approaches to
language acquisition.
The theory has been tested on problems of learning vocabularies and grammars
from unsegmented text and continuous speech, and mappings between sound and
representations of meaning. It performs extremely well on various objective
criteria, acquiring knowledge that causes it to assign almost exactly the same
structure to utterances as humans do. This work has application to data
compression, language modeling, speech recognition, machine translation,
information retrieval, and other tasks that rely on either structural or
stochastic descriptions of language.
|
This paper presents a location-based service for telecom providers. Most of
the location-based services in mobile networks are introduced and deployed by
Internet companies, leaving telecom providers with just the role of the data
channel. Telecom providers should use their competitive advantages and offer
their own solutions. In this paper, we discuss sharing location information via
geo-messages. Geo messages let mobile users share location information as
signatures to standard messages (e.g., email, SMS). Rather than letting a
service constantly monitor (poll) the user's location, as most standalone
services do, or sharing location information within a social circle (social
network check-ins, etc.), the Geo Messages approach lets users share location
data on a peer-to-peer basis. Users can share their location information with
any existing messaging system, and messaging (e.g., SMS) is a traditional
service for telecom providers.
|
An extended, spinless Falicov-Kimball model in the presence of perpendicular
magnetic field is investigated employing Hartree-Fock self-consistent
mean-field theory in two dimensions. In the presence of an orbital field the
hybridization-dependence of the excitonic average ${ \Delta =<{{d_i}^\dagger}
{f_i}>}$ is modified. The exciton responds in subtly different ways for
different chosen values of the magnetic flux consistent with Hofstadter's
well-known spectrum. The excitonic average is suppressed by the application of
magnetic field. We further examine the effect of Coulomb interaction and
$f$-electron hopping on the condensation of the exciton for some rational
values of the applied magnetic field. The interband Coulomb interaction
enhances $\Delta$ exponentially, while a non-zero $f$-electron hopping reduces
it. A
strong commensurability effect of the magnetic flux on the behaviour of the
excitons is observed.
|
Years ago S. Weinberg suggested the "Quasi-Particle" method (Q-P) for
iteratively solving an integral equation, based on an expansion in terms of
sturmian functions that are eigenfunctions of the integral kernel. An
improvement of this method is presented that does not require knowledge of such
sturmian functions, but uses simpler auxiliary sturmian functions instead. This
improved Q-P method solves the integral equation iterated to second order so as
to accelerate the convergence of the iterations. Numerical examples are given
for the solution of the Lippmann-Schwinger integral equation for the scattering
of a particle from a potential with a repulsive core. An accuracy of 1:10^6 is
achieved after 14 iterations, and 1:10^10 after 20 iterations. The calculations
are carried out in configuration space for positive energies with an accuracy
of 1:10^11 by using a spectral expansion method in terms of Chebyshev
polynomials. The method can be extended to general integral kernels, and also
to solving a Schr\"odinger equation with Coulomb or non-local potentials.
|
Cellular automata are a famous model of computation, yet it is still a
challenging task to assess the computational capacity of a given automaton,
especially when it comes to showing negative results. In this paper, we focus
on studying this problem via the notion of CA relative simulation. We say that
automaton A is simulated by B if each space-time diagram of A can be, after
suitable transformations, reproduced by B.
We study affine automata - i.e., automata whose local rules are affine
mappings of vector spaces. This broad class contains the well-studied cases of
additive automata. The main result of this paper shows that (almost) every
automaton affine over a finite field F_p can only simulate affine automata over
F_p. We discuss how this general result implies, and widely surpasses,
limitations of additive automata previously proved in the literature.
We provide a formalization of the simulation notions into algebraic language
and discuss how this opens a new path to showing negative results about the
computational power of cellular automata using deeper algebraic theorems.
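As a minimal sketch (not taken from the paper), a one-dimensional affine CA over F_p applies an affine map of each cell's neighbourhood at every step; the field size and rule coefficients below are arbitrary choices:

```python
P = 5  # a prime, so arithmetic mod P is the field F_p

def affine_step(config, a=1, b=2, c=3, d=4, p=P):
    """One synchronous update of a 1-D affine CA over F_p with periodic boundary.

    The local rule is an affine map of the neighbourhood:
        x_i  <-  a*x_{i-1} + b*x_i + c*x_{i+1} + d   (mod p)
    With d == 0 this reduces to the well-studied additive case.
    """
    n = len(config)
    return [(a * config[i - 1] + b * config[i] + c * config[(i + 1) % n] + d) % p
            for i in range(n)]

# A few steps of a space-time diagram from a small initial configuration.
row = [0, 1, 0, 0, 2]
for _ in range(3):
    row = affine_step(row)
```

In the paper's terms, the main result constrains which automata can reproduce the space-time diagrams such a rule generates: an automaton affine over F_p can (almost always) only simulate other affine automata over the same field.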
|
We report the theoretical investigation of the suppression of magnetic
systematic effects in HfF$^+$ cation for the experiment to search for the
electron electric dipole moment. The g-factors for $J = 1$, $F=3/2$,
$|M_F|=3/2$ hyperfine levels of the $^3\Delta_1$ state are calculated as
functions of the external electric field. The lowest value for the difference
between the g-factors of $\Omega$-doublet levels, $\Delta g = 3 \times
10^{-6}$, is attained at the electric field 7 V/cm. The body-fixed g-factor,
$G_{\parallel}$, was obtained both within the electronic structure calculations
and with our fit of the experimental data from [H. Loh, K. C. Cossel, M. C.
Grau, K.-K. Ni, E. R. Meyer, J. L. Bohn, J. Ye, and E. A. Cornell, Science {\bf
342}, 1220 (2013)]. For the electronic structure calculations we used a
combined scheme to perform correlation calculations of HfF$^+$ which includes
both the direct 4-component all-electron and generalized relativistic effective
core potential approaches. The electron correlation effects were treated using
the coupled cluster methods. The calculated value $G_{\parallel}=0.0115$ agrees
very well with the $G_{\parallel}=0.0118$ obtained in our fitting
procedure. The calculated value $D_{\parallel}=-1.53$ a.u. of the molecule
frame dipole moment (with the origin in the center of mass) is in agreement
with the experimental value $D_{\parallel}=-1.54(1)$ a.u. [H. Loh, Ph.D.
thesis, Massachusetts Institute of Technology (2006)].
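For orientation (standard Zeeman relation in our notation, not a result of the paper): the magnetic shift of a hyperfine level with projection $M_F$ scales with its g-factor, so the systematic between the $\Omega$-doublet levels is controlled by the g-factor difference,

```latex
% Standard relation, our notation (assumption: usual sign conventions):
\Delta E_{\mathrm{Z}} = - g\, \mu_B\, B\, M_F,
% hence the doublet systematic scales with
\Delta g = g^{\,u} - g^{\,l},
% which is minimized (\Delta g = 3\times 10^{-6}) near 7 V/cm as stated above.
```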
|
We consider several previously studied online variants of bin packing and
prove new and improved lower bounds on the asymptotic competitive ratios for
them. For that, we use a method of fully adaptive constructions. In particular,
we improve the lower bound for the asymptotic competitive ratio of online
square packing significantly, raising it from roughly 1.68 to above 1.75.
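For context, the quantity being bounded is the standard asymptotic competitive ratio (generic definition, not specific to this paper):

```latex
% Standard definition, our notation: for an online algorithm A with cost
% A(\sigma) on input sequence \sigma and optimal offline cost OPT(\sigma),
R^{\infty}_{A} \;=\; \limsup_{N \to \infty}\;
\sup_{\sigma\,:\,\mathrm{OPT}(\sigma) = N} \frac{A(\sigma)}{N}.
```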
|
We investigate how to obtain various flows of K\"ahler metrics on a fixed
manifold as variations of K\"ahler reductions of a metric satisfying a given
static equation on a higher dimensional manifold. We identify static equations
that induce the geodesic equation for Mabuchi's metric, the Calabi flow,
the pseudo-Calabi flow of Chen-Zheng and the K\"ahler-Ricci flow. In the latter
case we re-derive the V-soliton equation of La Nave-Tian.
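For reference, two of the flows named above have the following standard forms (our notation, assuming the usual conventions; not equations from the paper):

```latex
% Calabi flow on Kähler potentials \varphi in a fixed class, with S the
% scalar curvature of \omega_\varphi and \underline{S} its average:
\frac{\partial \varphi}{\partial t} = S(\omega_\varphi) - \underline{S};
% Kähler--Ricci flow on the metric itself:
\frac{\partial \omega}{\partial t} = -\,\mathrm{Ric}(\omega).
```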
|
Solving the 4-d Einstein equations as evolution in time requires solving
equations of two types: the four elliptic initial data (constraint) equations,
followed by the six second order evolution equations. Analytically the
constraint equations remain solved under the action of the evolution, and one
approach is to simply monitor them ({\it unconstrained} evolution). Since
computational solution of differential equations almost inevitably introduces
errors, it is clearly "more correct" to adopt a scheme which actively
maintains the constraints by re-solving them ({\it constrained} evolution). This has
shown promise in computational settings, but the analysis of the resulting
mixed elliptic-hyperbolic method has not been completely carried out. We
present such an analysis for one method of constrained evolution, applied to a
simple vacuum system, linearized gravitational waves.
We begin with a study of the hyperbolicity of the unconstrained Einstein
equations. (Because the study of hyperbolicity deals only with the highest
derivative order in the equations, linearization loses no essential details.)
We then give explicit analytical construction of the effect of initial data
setting and constrained evolution for linearized gravitational waves. While
this is clearly a toy model with regard to constrained evolution, certain
interesting features are found which have relevance to the full nonlinear
Einstein equations.
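For reference, the four elliptic constraint equations mentioned above are, in the standard vacuum 3+1 (ADM) form (standard notation, not specific to this paper):

```latex
% Vacuum ADM constraints: spatial metric g_{ij}, extrinsic curvature K_{ij},
% trace K = g^{ij}K_{ij}, spatial covariant derivative D_j.
\mathcal{H} \;\equiv\; {}^{(3)}R + K^{2} - K_{ij}K^{ij} \;=\; 0,
\qquad\text{(Hamiltonian constraint)}
% plus three momentum constraints:
\mathcal{M}^{i} \;\equiv\; D_j\!\left(K^{ij} - g^{ij}K\right) \;=\; 0.
```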
|