Asymmetric steering is an effect whereby an inseparable bipartite system can
be found to be described by either quantum mechanics or local hidden variable
theories depending on which one of Alice or Bob makes the required
measurements. We show that, even with an inseparable bipartite system,
situations can arise where Gaussian measurements on one half are not sufficient
to answer the fundamental question of which theory gives an adequate
description and the whole system must be considered. This phenomenon is
possible because of an asymmetry in the definition of the original
Einstein-Podolsky-Rosen paradox and in this article we show theoretically that
it may be demonstrated, at least in the case where Alice and Bob can only make
Gaussian measurements, using the intracavity nonlinear coupler.
|
In this paper we construct complete simply connected minimal surfaces with a
prescribed coordinate function. Moreover, we prove that these surfaces are
dense in the space of all minimal surfaces with this coordinate function (with
the topology of the smooth convergence on compact sets).
|
Lattices that can be represented in a kagome-like form are shown to satisfy a
universal percolation criticality condition, expressed as a relation between
P_3, the probability that all three vertices in the triangle connect, and P_0,
the probability that none connect. A linear approximation for P_3(P_0) is
derived and appears to provide a rigorous upper bound for critical thresholds.
A numerically determined relation for P_3(P_0) gives thresholds for the kagome,
site-bond honeycomb, (3-12^2), and "stack-of-triangle" lattices that compare
favorably with numerical results.
|
We explore the continuum limit $a\rightarrow 0$ of meson correlation
functions at finite temperature. In detail we analyze finite volume and lattice
cut-off effects in view of possible consequences for continuum physics. We
perform calculations on quenched gauge configurations using the clover improved
Wilson fermion action. We present and discuss simulations on isotropic
$N_\sigma^3\times 16$ lattices with $N_\sigma=32,48,64,128$ and $128^3 \times
N_\tau$ lattices with $N_\tau=16,24,32,48$ corresponding to lattice spacings in
the range $0.01\,\mathrm{fm} \lesssim a \lesssim 0.031\,\mathrm{fm}$ at $T\simeq1.45T_c$. Continuum
limit extrapolations of vector meson and pseudoscalar correlators are
performed and their large distance expansion in terms of thermal moments is
introduced. We discuss consequences of this analysis for the calculation of the
electrical conductivity of the QGP at this temperature.
|
We consider the common setting where one observes probability estimates for a
large number of events, such as default risks for numerous bonds.
Unfortunately, even with unbiased estimates, selecting events corresponding to
the most extreme probabilities can result in systematically underestimating the
true level of uncertainty. We develop an empirical Bayes approach, "Excess
Certainty Adjusted Probabilities" (ECAP), using a variant of Tweedie's formula,
which updates probability estimates to correct for selection bias. ECAP is a
flexible non-parametric method, which directly estimates the score function
associated with the probability estimates, so it does not need to make any
restrictive assumptions about the prior on the true probabilities. ECAP also
works well in settings where the probability estimates are biased. We
demonstrate, through theoretical results, simulations, and an analysis of two
real-world data sets, that ECAP can provide significant improvements over the
original probability estimates.
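As a rough illustration of the kind of Tweedie-style adjustment described above (a minimal sketch, not the actual ECAP estimator), the following Python code applies Tweedie's formula on the logit scale, estimating the score function from a kernel density estimate; the noise variance s2, the simulated data, and all names are assumptions made for the example.

# Illustrative Tweedie-style correction of probability estimates (not the ECAP estimator).
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_adjust(p_hat, s2=0.25):
    """Shrink probability estimates on the logit scale using Tweedie's formula.

    Assumes logit(p_hat) ~ N(theta, s2); returns E[theta | logit(p_hat)] mapped
    back to probabilities. s2 is a hypothetical known noise variance.
    """
    z = np.log(p_hat / (1.0 - p_hat))            # logit transform
    kde = gaussian_kde(z)                         # estimate the marginal density of z
    grid = np.linspace(z.min() - 3, z.max() + 3, 2000)
    log_f = np.log(kde(grid) + 1e-300)
    score = np.gradient(log_f, grid)              # d/dz log f(z)
    z_adj = z + s2 * np.interp(z, grid, score)    # Tweedie's formula
    return 1.0 / (1.0 + np.exp(-z_adj))

# Example: noisy estimates of true probabilities concentrated near 0.1
rng = np.random.default_rng(0)
theta = rng.beta(2, 18, size=5000)
p_hat = 1 / (1 + np.exp(-(np.log(theta / (1 - theta)) + 0.5 * rng.standard_normal(5000))))
p_adj = tweedie_adjust(p_hat)
print(p_hat[:3], p_adj[:3])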
|
The kinetics of droplet and bridge formation within striped nano-capillaries
is studied when the wetting film grows via interface-limited growth. The
phenomenological time-dependent Ginzburg-Landau (TDGL)-type model with thermal
noise is used and numerically solved using the cell dynamics method. The model
is two-dimensional and consists of undersaturated vapor confined within a
nano-capillary made of two infinitely wide flat substrates. The surface of the
substrate is chemically heterogeneous, with a single lyophilic stripe that
exerts a long-range attractive potential on the vapor molecules. The dynamics
of nucleation and the subsequent growth of droplets and bridges can be
simulated and visualized. In particular, the evolution of the morphology from
droplet or bump to bridge is clearly identified. The crucial role played by the
substrate potential in the morphology of nanoscopic bridges is clarified. A
nearly temperature-independent evolution of capillary condensation is predicted
when interface-limited growth dominates. In addition, it is shown that the
dynamics of capillary condensation follows the scenario proposed by Everett and
Haynes three decades ago.
|
In the theory of digraphs, the study of cycles is a subject of great
importance and has given birth to a number of deep questions such as the
Behzad-Chartrand-Wall conjecture (1970) and its generalization, the
Caccetta-H\"{a}ggkvist conjecture (1978). Despite a lot of interest and
efforts, the progress on these remains slow and mostly restricted to the
solution of some special cases. In this note, we prove these conjectures for
digraphs whose girth is at least as large as their minimum out-degree and
without short even cycles. More generally, we prove that if a digraph has
sufficiently large girth and does not contain closed walks of certain lengths,
then the conjectures hold. The proof makes use of some of the known results on
the Caccetta-H\"{a}ggkvist conjecture, properties of direct products of
digraphs and a construction that multiplies the girth of a digraph.
|
We carry out a general analysis of the representations of the superconformal
algebras SU(2,2/N), OSp(8/4,R) and OSp(8^*/4) and give their realization in
superspace. We present a construction of their UIR's by multiplication of the
different types of massless superfields ("supersingletons"). Particular
attention is paid to the so-called "short multiplets". Representations
undergoing shortening have "protected dimension" and correspond to BPS states
in the dual supergravity theory in anti-de Sitter space. These results are
relevant for the classification of multitrace operators in boundary conformally
invariant theories as well as for the classification of AdS black holes
preserving different fractions of supersymmetry.
|
Numerical algorithms for solving problems of mathematical physics on modern
parallel computers employ various domain decomposition techniques. Domain
decomposition schemes are developed here to solve numerically initial/boundary
value problems for the Stokes system of equations in the primitive variables
pressure-velocity. Unconditionally stable schemes of domain decomposition are
based on a partition of unity for the computational domain and the corresponding
Hilbert spaces of grid functions.
|
We have shown elsewhere that the presence of mixed-culture growth of
microbial species in fermentation processes can be detected with high accuracy
by employing the wavelet transform. This is achieved because the crossings of
the different growth processes contributing to the total biomass signal appear as
singularities that are very well evidenced through their singularity cones in
the wavelet transform. However, we used very simple two-species cases. In this
work, we extend the wavelet method to a more complicated illustrative
fermentation case of three microbial species for which we employ several
wavelets with different numbers of vanishing moments in order to eliminate
possible numerical artifacts. Working in this way allows us to filter the
numerical values of the H\"older exponents more precisely. We were therefore
able to determine the characteristic H\"older exponents for the corresponding
crossing singularities of the microbial growth processes, together with their
logarithmic-scale stability ranges, up to the first decimal place in the value
of the characteristic exponents. Since calibrating mixed microbial growth by
means of the H\"older exponents could have potential industrial applications,
the dependence of the H\"older exponents on the kinetic and physical parameters
of the growth models remains a future experimental task.
|
Recent spectral observations by the Spitzer Space Telescope (SST) reveal that
some discs around young ($\sim {\rm few} \times 10^6$ yr old) stars have
remarkably sharp transitions to a low density inner region in which much of the
material has been cleared away. It has been recognized that the most plausible
mechanism for the sharp transition at a specific radius is the gravitational
influence of a massive planet. This raises the question of whether the planet
can also account for the hole extending all the way to the star. Using high
resolution numerical simulations, we show that Jupiter-mass planets drive
spiral waves which create holes on time scales $\sim 10$ times shorter than
viscous or planet migration times. We find that the theory of spiral-wave
driven accretion in viscous flows by Takeuchi et al. (1996) can be used to
provide a consistent interpretation of the simulations. In addition, although
the hole surface densities are low, they are finite, allowing mass accretion
toward the star. Our results therefore imply that massive planets can form
extended, sharply bounded spectral holes which can still accommodate
substantial mass accretion rates. The results also imply that holes are more
likely than gaps for Jupiter mass planets around solar mass stars.
|
We address the question of an appropriate choice of basis functions for the
self-consistent field (SCF) method of simulation of the N-body problem. Our
criterion is based on a comparison of the orbits found in N-body realizations
of analytical potential-density models of triaxial galaxies, in which the
potential is fitted by the SCF method using a variety of basis sets, with those
of the original models. Our tests refer to maximally triaxial Dehnen
$\gamma$-models for values of $\gamma$ in the range $0\le\gamma\le1$. When an N-body
realization of a model is fitted by the SCF method, the choice of radial basis
functions affects significantly the way the potential, forces, or derivatives
of the forces are reproduced, especially in the central regions of the system.
We find that this results in serious discrepancies in the relative amounts of
chaotic versus regular orbits, or in the distributions of the Lyapunov
characteristic exponents, as found by different basis sets. Numerical tests
include the Clutton-Brock and the Hernquist-Ostriker (HO) basis sets, as well
as a family of numerical basis sets which are `close' to the HO basis set. The
family of numerical basis sets is parametrized in terms of a quantity
$\epsilon$ which appears in the kernel functions of the Sturm-Liouville (SL)
equation defining each basis set. The HO basis set is the $\epsilon=0$ member
of the family. We demonstrate that grid solutions of the SL equation yielding
numerical basis sets introduce large errors in the variational equations of
motion. We propose a quantum-mechanical method of solution of the SL equation
which overcomes these errors. We finally give criteria for a choice of optimal
value of $\epsilon$ and calculate the latter as a function of the value of
$\gamma$.
|
In this paper we consider the generalized shift operator, generated by the
Gegenbauer differential operator $$ G =\left(x^2-1\right)^{\frac{1}{2}-\lambda}
\frac{d}{dx} \left(x^2-1\right)^{\lambda+\frac{1}{2}}\frac{d}{dx}. $$ The
maximal function ($G$-maximal function) generated by the Gegenbauer
differential operator $G$ is investigated. The $L_{p,\lambda}$-boundedness of
the $G$-maximal function is obtained. The concept of the Riesz-Gegenbauer
potential is introduced, and a theorem of Sobolev type is proved for it.
|
We show that in the framework of grand unified theory (GUT) with anomalous
$U(1)_A$ gauge symmetry, the success of the gauge coupling unification in the
minimal SU(5) GUT is naturally explained, even if the mass spectrum of
superheavy fields does not respect SU(5) symmetry. The unification scale in
most realizations of the theory becomes smaller than the usual GUT scale, which
suggests that the present level of experiments is close to that sufficient to
observe proton decay via dimension-6 operators, $p\to e+\pi$.
|
We use a grey forecast model to predict the future energy consumption of four
states in the U.S., and make some improvements to the model.
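The abstract does not specify which grey model is used; the sketch below implements the standard GM(1,1) grey forecasting model as one plausible reading, with made-up consumption numbers purely for illustration.

# Minimal GM(1,1) grey forecasting sketch (illustrative data, not the paper's).
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to the series x0 and forecast `horizon` further steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development and control coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                             # forecasts beyond the sample

# Hypothetical annual energy consumption for one state.
history = [102.0, 105.3, 109.1, 114.0, 118.2]
print(gm11_forecast(history, horizon=3))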
|
The nodes of certain minimal cubature rules are real common zeros of a set of
orthogonal polynomials of degree $n$. They often consist of a well distributed
set of points and interpolation polynomials based on them have desired
convergence behavior. We report what is known and the theory behind it by
explaining the situation when the domain of integration is a square.
|
We have determined the distance to a second eclipsing binary system (EB) in
the Large Magellanic Cloud, HV982 (~B1 IV-V + ~B1 IV-V). The measurement of the
distance -- among other properties of the system -- is based on optical
photometry and spectroscopy and space-based UV/optical spectrophotometry. The
analysis combines the ``classical'' EB study of light and radial velocity
curves, which yields the stellar masses and radii, with a new analysis of the
observed energy distribution, which yields the effective temperature,
metallicity, and reddening of the system plus the distance ``attenuation
factor'', essentially (radius/distance)^2. Combining the results gives the
distance to HV982, which is 50.2 +/- 1.2 kpc. This distance determination
consists of a detailed study of well-understood objects (B stars) in a
well-understood evolutionary phase (core H burning), and is free of the biases
and uncertainties that plague various other techniques. After correcting for
the location of HV982, we find an implied distance to the optical center of the
LMC's bar of d(LMC) = 50.7 +/- 1.2 kpc. This result differs by nearly 5 kpc
from our earlier result for the EB HV2274, which implies a bar distance of 45.9
kpc. These results may reflect either marginally compatible measures of a
unique LMC distance or, alternatively, suggest a significant depth to the
stellar distribution in the LMC. Some evidence for this latter hypothesis is
discussed.
|
The paper analyzes security aspects of practical entanglement-based quantum
key distribution (QKD), namely, BBM92 or entanglement-based BB84 protocol.
Similar to prepare-and-measure QKD protocols, practical implementations of the
entanglement-based QKD have to rely upon non-ideal photon sources. A typical
solution for entanglement generation is the spontaneous parametric
down-conversion. However, this process creates not only single photon pairs,
but also quantum states with more than two photons, which potentially may lead
to security deterioration. We show that this effect does not impair the
security of entanglement-based QKD systems. We also review the available
security proofs and show that the properties of the entanglement source do not
lead to security degradation.
|
We have studied the atomic ordering of B-site transition metals and magnetic
properties in the pulsed-laser deposited films of La2CrFeO6 (LCFO) and La2VMnO6
(LVMO), whose bulk materials are known to be single perovskites with random
distribution of the B-site cations. Despite similar ionic characters of
constituent transition metals in each compound, the maximum B-site order
attained was surprisingly high, ~90% for LCFO and ~80% for LVMO, suggesting a
significant role of epitaxial stabilization in the spontaneous ordering
process. Magnetization and valence state characterizations revealed that the
magnetic ground state of both compounds was coincidentally ferrimagnetic with a
saturation magnetization of ~2 $\mu_B$ per formula unit, unlike those predicted
theoretically. In addition, they were found to be insulating with optical band
gaps of 1.6 eV and 0.9 eV for LCFO and LVMO, respectively. Our results present
a wide opportunity to explore novel magnetic properties of binary
transition-metal perovskites upon epitaxial stabilization of the ordered phase.
|
We use Ilmanen's elliptic regularization to prove that for an initially
smooth mean convex hypersurface in Euclidean n-space moving by mean curvature
flow, the surface is very nearly convex in a spacetime neighborhood of every
singularity. Previously this was known only (i) for n < 7, and (ii) for
arbitrary n up to the first singular time.
|
This research aims to further understanding in the field of continuous
authentication using behavioral biometrics. We are contributing a novel dataset
that encompasses the gesture data of 15 users playing Minecraft with a Samsung
Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, namely Random Forest (RF), K-Nearest
Neighbors (KNN), and Support Vector Classifier (SVC), to determine the
authenticity of specific user actions. Our most robust model was SVC, which
achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed
to make it a viable option for authentication systems.
|
The translation of pronouns presents a special challenge to machine
translation to this day, since it often requires context outside the current
sentence. Recent work on models that have access to information across sentence
boundaries has seen only moderate improvements in terms of automatic evaluation
metrics such as BLEU. However, metrics that quantify the overall translation
quality are ill-equipped to measure gains from additional context. We argue
that a different kind of evaluation is needed to assess how well models
translate inter-sentential phenomena such as pronouns. This paper therefore
presents a test suite of contrastive translations focused specifically on the
translation of pronouns. Furthermore, we perform experiments with several
context-aware models. We show that, while gains in BLEU are moderate for those
systems, they outperform baselines by a large margin in terms of accuracy on
our contrastive test set. Our experiments also show the effectiveness of
parameter tying for multi-encoder architectures.
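A contrastive test suite of this kind is typically scored by checking whether the model assigns a higher probability to the reference translation than to each minimally different contrastive variant; the sketch below shows that scoring loop under the assumption of a user-supplied score function (hypothetical), independent of any particular toolkit or of the paper's actual data.

# Scoring a contrastive test set: an example counts as correct when the model scores
# the reference translation higher than every contrastive variant.
from typing import Callable, List, Tuple

Example = Tuple[str, str, List[str]]  # (source, reference, contrastive variants)

def contrastive_accuracy(examples: List[Example],
                         score: Callable[[str, str], float]) -> float:
    """`score(src, tgt)` is assumed to return the model's log-probability of tgt given src."""
    correct = 0
    for src, ref, variants in examples:
        ref_score = score(src, ref)
        if all(ref_score > score(src, v) for v in variants):
            correct += 1
    return correct / len(examples)

# Toy usage with a dummy scorer (a real setup would query an NMT model here).
toy = [("Sie schläft.", "She is sleeping.", ["He is sleeping.", "It is sleeping."])]
dummy_score = lambda src, tgt: -len(tgt) + (5.0 if tgt.startswith("She") else 0.0)
print(contrastive_accuracy(toy, dummy_score))  # 1.0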
|
We study locally Cohen-Macaulay curves of low degree in the Segre threefold
with Picard number three and investigate the irreducible and connected
components, respectively, of their Hilbert scheme. We also discuss the
irreducibility of some moduli spaces of purely one-dimensional stable sheaves
and apply a similar argument to the Segre threefold with Picard number two.
|
We introduce a notion of a weak Poisson structure on a manifold $M$ modeled
on a locally convex space. This is done by specifying a Poisson bracket on a
subalgebra $\mathcal{A} \subseteq C^\infty(M)$ which has to satisfy a non-degeneracy
condition (the differentials of elements of $\mathcal{A}$ separate tangent vectors) and
we postulate the existence of smooth Hamiltonian vector fields. Motivated by
applications to Hamiltonian actions, we focus on affine Poisson spaces which
include in particular the linear and affine Poisson structures on duals of
locally convex Lie algebras. As an interesting byproduct of our approach, we
can associate to an invariant symmetric bilinear form $\kappa$ on a Lie algebra
$\mathfrak{g}$ and a $\kappa$-skew-symmetric derivation $D$ a weak affine Poisson
structure on $\mathfrak{g}$ itself. This leads naturally to a concept of a Hamiltonian
$G$-action on a weak Poisson manifold with a $\mathfrak{g}$-valued momentum map and hence
to a generalization of quasi-Hamiltonian group actions.
|
New stable particles with fairly low masses could exist if the coupling to
the Standard Model is weak, and with suitable parameters it might be possible
to produce them at the LHC. Here we study a selection of models with the new
particles being charged under a new gauge group, either U(1) or SU(N). In the
Abelian case there will be radiation of $\gamma_v$'s, which decay back into the SM.
In the non-Abelian case the particles will undergo hadronization into meson-like
states $\pi_v/\rho_v$ that subsequently decay. We consider three different
scenarios for interaction between the new sector and the SM sector and perform
simulations using a Hidden Valley model previously implemented in PYTHIA. In
this study we illustrate how one can distinguish the different models and
measure different parameters of the models under conditions like those at the
LHC.
|
The Banzhaf index, Shapley-Shubik index and other voting power indices
measure the importance of a player in a coalitional game. We consider a simple
coalitional game called the spanning connectivity game (SCG) based on an
undirected, unweighted multigraph, where edges are players. We examine the
computational complexity of computing the voting power indices of edges in the
SCG. It is shown that computing Banzhaf values and Shapley-Shubik indices is
#P-complete for SCGs. Interestingly, Holler indices and Deegan-Packel indices
can be computed in polynomial time. Among other results, it is proved that
Banzhaf indices can be computed in polynomial time for graphs with bounded
treewidth. It is also shown that for any reasonable representation of a simple
game, a polynomial time algorithm to compute the Shapley-Shubik indices implies
a polynomial time algorithm to compute the Banzhaf indices. As a corollary,
computing the Shapley value is #P-complete for simple games represented by the
set of minimal winning coalitions, Threshold Network Flow Games, Vertex
Connectivity Games and Coalitional Skill Games.
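To make the setting concrete, the sketch below computes Banzhaf values for the edges of a small multigraph by brute-force enumeration of edge coalitions, where a coalition wins if its edges connect all vertices; this exhaustive approach is only feasible for tiny graphs and is not one of the paper's polynomial-time algorithms.

# Brute-force Banzhaf values for edges in a spanning connectivity game (tiny graphs only).
from itertools import combinations

def connects_all(n_vertices, edges):
    """Union-find check: do these edges connect all n_vertices?"""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n_vertices)}) == 1

def banzhaf_values(n_vertices, edges):
    m = len(edges)
    swings = [0] * m
    for i in range(m):
        rest = [j for j in range(m) if j != i]
        for r in range(len(rest) + 1):
            for coalition in combinations(rest, r):
                without = [edges[j] for j in coalition]
                # edge i is a swing if it turns a losing coalition into a winning one
                if not connects_all(n_vertices, without) and \
                   connects_all(n_vertices, without + [edges[i]]):
                    swings[i] += 1
    return [s / 2 ** (m - 1) for s in swings]

# Triangle with one extra parallel edge between vertices 0 and 1 (a multigraph).
edges = [(0, 1), (0, 1), (1, 2), (0, 2)]
print(banzhaf_values(3, edges))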
|
We present a formalism for performance forecasting and optimization of future
cosmic microwave background (CMB) experiments. We implement it in the context
of nearly full sky, multifrequency, B-mode polarization observations,
incorporating statistical uncertainties due to the CMB sky statistics,
instrumental noise, as well as the presence of foreground signals. We model
the effects of subtracting these using a parametric maximum likelihood
technique and optimize the instrumental configuration with predefined or
arbitrary observational frequency channels, constraining either a total number
of detectors or a focal plane area. We showcase the proposed formalism by
applying it to two cases of experimental setups based on the CMBpol and COrE
mission concepts looked at as dedicated B-mode experiments. We find that, if
the models of the foregrounds available at the time of the optimization are
sufficiently precise, the procedure can help to either improve the potential
scientific outcome of the experiment by a factor of a few, while allowing one
to avoid excessive hardware complexity, or simplify the instrument design
without compromising its science goals. However, our analysis also shows that
even if the available foreground models are not considered to be sufficiently
reliable, the proposed procedure can guide a design of more robust experimental
setups. While better suited to cope with a plausible, greater complexity of the
foregrounds than that foreseen by the models, these setups could ensure science
results close to the best achievable, should the models be found to be correct.
|
Irradiation of a material with strong light leads to numerous non-linear
effects that are essential for understanding the physics of excited states of
the system and for optoelectronics. Here, we study the non-linear thermoelectric
effect due to the electric and thermal fields applied on a non-centrosymmetric
system. The phenomenon arises on the Fermi surface with the transitions of
electrons from valence to conduction bands. We derive the formalism to
investigate these effects and find that the non-linearity in these effects,
namely the non-linear Seebeck and non-linear Peltier effects, depends on the
ratio of the non-linear to the linear conductivities. The theory is tested for
a hexagonally warped and gapped topological insulator. Results show an
enhancement of the longitudinal and Hall effects with increasing warping
strength, while the opposite behavior is found with the surface gap.
|
We study fully convex polygons with a given area, and variable perimeter
length on square and hexagonal lattices. We attach a weight t^m to a convex
polygon of perimeter m and show that the sum of weights of all polygons with a
fixed area s varies as s^{-theta_{conv}} exp[K s^(1/2)] for large s and t less
than a critical threshold t_c, where K is a t-dependent constant, and
theta_{conv} is a critical exponent which does not change with t. We find
theta_{conv} is 1/4 for the square lattice, but -1/4 for the hexagonal lattice.
The reason for this unexpected non-universality of theta_{conv} is traced to
the existence of sharp corners in the asymptotic shape of these polygons.
|
Training features used to analyse physical processes are often highly
correlated, and determining which ones are most important for the classification
is a non-trivial task. For the use case of a search for a top-quark pair
produced in association with a Higgs boson decaying to bottom-quarks at the
LHC, we compare feature ranking methods for a classification BDT. Ranking
methods, such as the BDT Selection Frequency commonly used in High Energy
Physics and the Permutational Performance, are compared with the
computationally expensive Iterative Addition and Iterative Removal procedures;
the latter was found to be the most performant.
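Permutation-based ranking of the kind mentioned above can be reproduced with standard tooling; the sketch below contrasts a gradient-boosted classifier's built-in importances with scikit-learn's permutation importance on synthetic, correlated features (illustrative only, not the paper's ttH(bb) analysis or its BDT Selection Frequency metric).

# Comparing a built-in feature ranking with permutation importance on correlated features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           n_redundant=3, random_state=0)  # redundant => correlated features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

builtin_rank = np.argsort(clf.feature_importances_)[::-1]
perm = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

print("built-in ranking:   ", builtin_rank)
print("permutation ranking:", perm_rank)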
|
The duration of a starburst is a fundamental parameter affecting the
evolution of galaxies yet, to date, observational constraints on the durations
of starbursts are not well established. Here we study the recent star formation
histories (SFHs) of three nearby dwarf galaxies to rigorously quantify the
duration of their starburst events using a uniform and consistent approach. We
find that the bursts range from ~200 to ~400 Myr in duration, resolving the
tension between the shorter timescales often derived observationally and the
longer timescales derived from dynamical arguments. If these three starbursts
are typical of starbursts in dwarf galaxies, then the short timescales (3 - 10
Myr) associated with starbursts in previous studies are best understood as
"flickering" events which are simply small components of the larger starburst.
In this sample of three nearby dwarfs, the bursts are not localized events. All
three systems show bursting levels of star formation in regions of both high
and low stellar density. The enhanced star formation moves around the galaxy
during the bursts and covers a large fraction of the area of the galaxy. These
massive, long duration bursts can significantly affect the structure, dynamics,
and chemical evolution of the host galaxy and can be the progenitors of
"superwinds" that drive much of the recently chemically enriched material from
the galaxy into the intergalactic medium.
|
We compute the leading chiral-logarithmic corrections to the S parameter in
the four-site Higgsless model. In addition to the usual electroweak gauge
bosons of the Standard Model, this model contains two sets of heavy charged and
neutral gauge bosons. In the continuum limit, the latter gauge bosons can be
identified with the first excited Kaluza-Klein states of the W^\pm and Z bosons
of a warped extra-dimensional model with an SU(2)_L \times SU(2)_R \times
U(1)_X bulk gauge symmetry. We consider delocalized fermions and show that the
delocalization parameter must be considerably tuned from its tree-level ideal
value in order to reconcile experimental constraints with the one-loop results.
Hence, the delocalization of fermions does not solve the problem of large
contributions to the S parameter in this class of theories and significant
contributions to S can potentially occur at one-loop.
|
Recommender systems are ubiquitous yet often difficult for users to control,
and adjust if recommendation quality is poor. This has motivated conversational
recommender systems (CRSs), with control provided through natural language
feedback. However, as with most application domains, building robust CRSs
requires training data that reflects system usage$\unicode{x2014}$here
conversations with user utterances paired with items that cover a wide range of
preferences. This has proved challenging to collect scalably using conventional
methods. We address the question of whether it can be generated synthetically,
building on recent advances in natural language. We evaluate in the setting of
item set recommendation, noting the increasing attention to this task motivated
by use cases like music, news, and recipe recommendation. We present
TalkTheWalk, which synthesizes realistic high-quality conversational data by
leveraging domain expertise encoded in widely available curated item
collections, generating a sequence of hypothetical yet plausible item sets,
then using a language model to produce corresponding user utterances. We
generate over one million diverse playlist curation conversations in the music
domain, and show these contain consistent utterances with relevant item sets
nearly matching the quality of an existing but small human-collected dataset
for this task. We demonstrate the utility of the generated synthetic dataset on
a conversational item retrieval task and show that it improves over both
unsupervised baselines and systems trained on a real dataset.
|
In this work we experimentally implement a deterministic transfer of a
generic qubit initially encoded in the orbital angular momentum of a single
photon to its polarization. Such transfer of quantum information, completely
reversible, has been implemented adopting a electrically tunable q-plate device
and a Sagnac interferomenter with a Dove's prism. The adopted scheme exhibits a
high fidelity and low losses.
|
We review some aspects of multiple interactions in High Energy QCD; we
discuss in particular AGK rules and present some results concerning multiple
interactions in the context of jet production.
|
In this article we give the generalized triangle Ramsey numbers R(K3,G) of 12
005 158 of the 12 005 168 graphs of order 10. There are 10 graphs remaining for
which we could not determine the Ramsey number. Most likely these graphs need
approaches focusing on each individual graph in order to determine their
triangle Ramsey number. The results were obtained by combining new
computational and theoretical results. We also describe an optimized algorithm
for the generation of all maximal triangle-free graphs and triangle Ramsey
graphs. All Ramsey numbers up to 30 were computed by our implementation of this
algorithm. We also prove some theoretical results that are applied to determine
several triangle Ramsey numbers larger than 30. As not only the number of
graphs but also the difficulty of determining Ramsey numbers increases very
fast, we consider it very likely that the table of all triangle Ramsey numbers
for graphs of order 10 is the last complete table that can possibly be
determined for a very long time.
|
We revisit the application of Shelah's Revised GCH Theorem \cite{SheRGCH} to
diamond. We also formulate a generalization of the theorem and prove a small
fragment of it. Finally we consider another application of the theorem, to
covering numbers of the form cov(-, -, -, $\omega$).
|
The space of monic centered cubic polynomials with marked critical points is
isomorphic to C^2. For each n>0, the locus Sn formed by all polynomials with a
specified critical point periodic of exact period n forms an affine algebraic
set. We prove that Sn is irreducible, thus giving an affirmative answer to a
question posed by Milnor. (This manuscript has been withdrawn)
|
New arguments supporting the reality of large-scale fluctuations in the
density of the visible matter in deep galaxy surveys are presented. A
statistical analysis of the radial distributions of galaxies in the COSMOS and
HDF-N deep fields is presented. Independent spectral and photometric surveys
exist for each field, carried out in different wavelength ranges and using
different observing methods. Catalogs of photometric redshifts in the optical
(COSMOS-Zphot) and infrared (UltraVISTA) were used for the COSMOS field in the
redshift interval $0.1 < z < 3.5$, as well as the zCOSMOS (10kZ) spectroscopic
survey and the XMM-COSMOS and ALHAMBRA-F4 photometric redshift surveys. The
HDFN-Zphot and ALHAMBRA-F5 catalogs of photometric redshifts were used for the
HDF-N field. The Pearson correlation coefficient for the fluctuations in the
numbers of galaxies obtained for independent surveys of the same deep field
reaches $R = 0.70 \pm 0.16$. The presence of this positive correlation supports
the reality of fluctuations in the density of visible matter with sizes of up
to 1 000 Mpc and amplitudes of up to 20% at redshifts $z \sim 2$. The absence
of correlations between the fluctuations in different fields (the correlation
coefficient between COSMOS and HDF-N is $R = -0.20 \pm 0.31$) testifies to the
independence of structures visible in different directions on the celestial
sphere. This also indicates an absence of any influence from universal
systematic errors (such as "spectral voids"), which could imitate the detection
of correlated structures.
|
This paper investigates the energy harvested from the flutter of a plate in
an axial flow by making use of piezoelectric materials. The equations for
fully-coupled linear dynamics of the fluid-solid and electrical systems are
derived. The continuous limit is then considered, when the characteristic
length of the plate's deformations is large compared to the piezoelectric
patches' length. The linear stability analysis of the coupled system is
addressed from both a local and global point of view. Piezoelectric energy
harvesting adds rigidity and damping to the motion of the flexible plate, and
destabilization by dissipation is observed for negative energy waves
propagating in the medium. This result is confirmed in the global analysis of
fluttering modes of a finite-length plate. It is finally observed that waves or
modes destabilized by piezoelectric coupling maximize the energy conversion
efficiency.
|
We present the performance of Li-based compounds used as scintillating
bolometers for rare decay studies such as double-beta decay and direct dark
matter investigations. The compounds are tested in a dilution refrigerator
installed in the underground laboratory of Laboratori Nazionali del Gran Sasso
(Italy). Low temperature scintillating properties are investigated by means of
different radioactive sources, and the radio-purity levels of internal
contaminations are estimated in view of possible employment in next-generation
experiments.
|
A tetrad of sets whose elements are interbeat time series has been obtained
from the Physionet-MIT-BIH databank, corresponding to the following failures of
the human heart: Obstructive Sleep Apnea, Congestive Heart Failure, and Atrial
Fibrillation. Those time series have been analyzed statistically using an
already known technique based on the Wavelet and Hilbert Transforms. That
technique has been applied to the interbeat time series of 87 patients, in
order to find out the dynamics of the heart. The length of the time series
varies from about 7 to 24 h, while the wavelets selected for this study are of
the Daubechies, Biorthogonal, and Gaussian types. The analysis has been done
for the complete set of scales ranging from 1 to 128 heartbeats. Choosing the
Biorthogonal wavelet bior3.1, it is observed: (a) that the time series does not
have to be cut into shorter periods in order to obtain the collapse of the
data, and (b) an analytical, universal behavior of the data for the first and
second diseases, but not for the third.
|
Bulk magnetic order in the two-dimensional nickelate La4Ni3O8 with Ni1+/Ni2+
(d9/d8), isoelectronic with superconducting cuprates, is demonstrated
experimentally and theoretically. Magnetization, specific heat, and 139La NMR
give evidence of a transition at 105 K to an antiferromagnetic state.
Theoretical DFT calculations relate the transition to a nesting instability of
the Fermi surface with ordering wave-vector Q = [1/3, 1/3, 0].
|
A quasi-complementary sequence set (QCSS) refers to a set of two-dimensional
matrices with low non-trivial aperiodic auto- and cross- correlation sums. For
multicarrier code-division multiple-access applications, the availability of
large QCSSs with low correlation sums is desirable. The generalized Levenshtein
bound (GLB) is a lower bound on the maximum aperiodic correlation sum of QCSSs.
The bounding expression of GLB is a fractional quadratic function of a weight
vector $\mathbf{w}$ and is expressed in terms of three additional parameters
associated with QCSS: the set size $K$, the number of channels $M$, and the
sequence length $N$. It is known that a tighter GLB (compared to the Welch
bound) is possible only if the condition $M\geq2$ and $K\geq \overline{K}+1$,
where $\overline{K}$ is a certain function of $M$ and $N$, is satisfied. A
challenging research problem is to determine if there exists a weight vector
which gives rise to a tighter GLB for \textit{all} (not just \textit{some})
$K\geq \overline{K}+1$ and $M\geq2$, especially for large $N$, i.e., the
condition is {asymptotically} both necessary and sufficient. To achieve this,
we \textit{analytically} optimize the GLB which is (in general) non-convex as
the numerator term is an indefinite quadratic function of the weight vector.
Our key idea is to apply the frequency domain decomposition of the circulant
matrix (in the numerator term) to convert the non-convex problem into a convex
one. Following this optimization approach, we derive a new weight vector
meeting the aforementioned objective and prove that it is a local minimizer of
the GLB under certain conditions.
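The key step described above, diagonalizing a circulant matrix in the frequency domain, can be checked numerically; the sketch below verifies that the eigenvalues of a circulant matrix are the DFT of its first column and that the associated quadratic form becomes separable in the Fourier basis, which is what the convexification relies on (a generic illustration, not the paper's derivation).

# Frequency-domain decomposition of a circulant matrix: eigenvalues = FFT of first column.
import numpy as np
from scipy.linalg import circulant

n = 8
rng = np.random.default_rng(1)
c = rng.standard_normal(n)
C = circulant(c)                       # C[j, k] = c[(j - k) mod n]

lam = np.fft.fft(c)                    # candidate eigenvalues
for l in range(n):
    v = np.exp(2j * np.pi * l * np.arange(n) / n)   # Fourier eigenvector
    assert np.allclose(C @ v, lam[l] * v)

# For a real weight vector w, the quadratic form separates in the Fourier basis:
# w @ C @ w = (1/n) * sum_l Re(lam[l]) * |fft(w)[l]|^2
w = rng.standard_normal(n)
assert np.isclose(w @ C @ w,
                  (np.real(lam) * np.abs(np.fft.fft(w)) ** 2).sum() / n)
print("circulant diagonalized by the DFT basis")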
|
PyTorch has ascended as a premier machine learning framework, yet it lacks a
native and comprehensive library for decision and control tasks suitable for
large development teams dealing with complex real-world data and environments.
To address this issue, we propose TorchRL, a generalistic control library for
PyTorch that provides well-integrated, yet standalone components. We introduce
a new and flexible PyTorch primitive, the TensorDict, which facilitates
streamlined algorithm development across the many branches of Reinforcement
Learning (RL) and control. We provide a detailed description of the building
blocks and an extensive overview of the library across domains and tasks.
Finally, we experimentally demonstrate its reliability and flexibility and show
comparative benchmarks to demonstrate its computational efficiency. TorchRL
fosters long-term support and its code is open-sourced on GitHub for greater
reproducibility and collaboration within the research community.
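A minimal sketch of the TensorDict primitive described above, assuming the tensordict package (installed alongside torchrl) is available; it only shows basic construction, batched indexing, and device movement, not a full RL training loop.

# Minimal TensorDict usage sketch (assumes `pip install torchrl tensordict`).
import torch
from tensordict import TensorDict

batch = 4
td = TensorDict(
    {
        "observation": torch.randn(batch, 3),
        "action": torch.randint(0, 2, (batch, 1)),
        "reward": torch.zeros(batch, 1),
    },
    batch_size=[batch],
)

print(td["observation"].shape)    # torch.Size([4, 3])
sub = td[:2]                      # slicing acts on the shared batch dimension
td_cpu = td.to("cpu")             # move every entry at once
print(sub.batch_size, td_cpu["reward"].device)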
|
The Lov\'{a}sz Local Lemma (LLL) is a powerful tool in probabilistic
combinatorics which can be used to establish the existence of objects that
satisfy certain properties. The breakthrough paper of Moser and Tardos and
follow-up works revealed that the LLL has intimate connections with a class of
stochastic local search algorithms for finding such desirable objects. In
particular, it can be seen as a sufficient condition for this type of
algorithms to converge fast.
Besides conditions for existence of and fast convergence to desirable
objects, one may naturally ask further questions regarding properties of these
algorithms. For instance, "are they parallelizable?", "how many solutions can
they output?", "what is the expected "weight" of a solution?", etc. These
questions and more have been answered for a class of LLL-inspired algorithms
called commutative. In this paper we introduce a new, very natural and more
general notion of commutativity (essentially matrix commutativity) which allows
us to show a number of new refined properties of LLL-inspired local search
algorithms with significantly simpler proofs.
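For readers unfamiliar with the algorithms in question, the sketch below is the basic Moser-Tardos resampling procedure for k-SAT, repeatedly resampling the variables of a violated clause, which is the prototypical LLL-inspired stochastic local search the abstract refers to (a generic illustration, not the paper's commutativity framework).

# Moser-Tardos resampling for k-SAT: resample the variables of any violated clause.
import random

def moser_tardos(n_vars, clauses, seed=0):
    """clauses: list of clauses, each a list of signed literals (+/-(i+1) for variable i)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        # A clause is violated when every literal in it evaluates to False.
        return all(assign[abs(l) - 1] != (l > 0) for l in clause)

    steps = 0
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign, steps
        clause = rng.choice(bad)
        for l in clause:                       # resample only the bad event's variables
            assign[abs(l) - 1] = rng.random() < 0.5
        steps += 1

# Tiny 3-SAT instance: (x1 or x2 or not x3) and (not x1 or x3 or x4)
assignment, steps = moser_tardos(4, [[1, 2, -3], [-1, 3, 4]])
print(assignment, "found after", steps, "resampling steps")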
|
Corporate mail services are designed to perform better than public mail
services. Fast mail delivery, large file transfers as attachments, high-level
spam and virus protection, and a commercial-advertisement-free environment are
some of the advantages worth mentioning. But these mail services are a frequent
target of hackers and spammers. Distributed Denial of Service (DDoS) attacks
are becoming more common and sophisticated. Researchers have proposed various
solutions to DDoS attacks. Can we stop these kinds of attacks with available
technology? These days, DDoS attacks through spam have increased and disturbed
the mail services of various organizations. Spam penetrates all the filters to
establish DDoS attacks, which causes serious problems for users and their data.
In this paper we propose a novel approach to defend against DDoS attacks caused
by spam mails. This approach is a combination of fine tuning of source filters,
content filters, strictly implementing mail policies, educating users, network
monitoring, and logical solutions to the ongoing attack. We have
conducted several experiments in corporate mail services; the results show that
this approach is highly effective to prevent DDoS attack caused by spam. The
novel defense mechanism reduced the incoming spam traffic by 60% and repelled
many DDoS attacks caused by spam.
|
Recently, no-go theorems for the existence of solitonic solutions in
Einstein-Maxwell-scalar (EMS) models have been established in arXiv:1902.07721.
Here we discuss how these theorems can be circumvented by a specific class of
non-minimal coupling functions between a real, canonical scalar field and the
electromagnetic field. When the non-minimal coupling function diverges in a
specific way near the location of a point charge, it regularises all physical
quantities yielding an everywhere regular, localised lump of energy. Such
solutions are possible even in flat spacetime Maxwell-scalar models, wherein
the model is fully integrable in the spherical sector, and exact solutions can
be obtained, yielding an explicit mechanism to de-singularise the Coulomb
field. Considering their gravitational backreaction, the corresponding
(numerical) EMS solitons provide a simple example of self-gravitating,
localised energy lumps.
|
Motivated by a recent experiment reporting on the possible application of
graphene as sensors, we calculate transport properties of 2D graphene
monolayers in the presence of adsorbed molecules. We find that the adsorbed
molecules, acting as compensators that partially neutralize the random charged
impurity centers in the substrate, enhance the graphene mobility without much
change in the carrier density. We predict that subsequent field-effect
measurements should preserve this higher mobility for both electrons and holes,
but with a voltage induced electron-hole asymmetry that depends on whether the
adsorbed molecule was an electron or hole donor in the compensation process. We
also calculate the low density magnetoresistance and find good quantitative
agreement with experimental results.
|
We report an experimental study of the time dependence of the resistivity and
magnetization of charge-ordered La$_{0.5}$Ca$_{0.5}$MnO$_{3}$ under different
thermal and magnetic field conditions. A relaxation with a stretched
exponential time dependence has been observed at temperatures below the charge
ordering temperature. A model using a hierarchical distribution of relaxation
times can explain the data.
|
An intriguing coincidence between the partition function of super Yang-Mills
theory and correlation functions of 2d Toda system has been heavily studied
recently. While the partition function of gauge theory was explored by
Nekrasov, the correlation functions of Toda equation have not been completely
understood. In this paper, we study the latter in the form of Dotsenko-Fateev
integral and reduce it in the form of Selberg integral of several Jack
polynomials. We conjecture a formula for such Selberg average which satisfies
some consistency conditions and show that it reproduces the SU(N) version of
AGT conjecture.
|
Let $p$ be an odd prime and let $a,b\in\mathbb Z$ with $p\nmid ab$. In this
paper we mainly evaluate
$$T_p^{(\delta)}(a,b,x):=\det\left[x+\tan\pi\frac{aj^2+bk^2}p\right]_{\delta\le
j,k\le (p-1)/2}\ \ (\delta=0,1).$$ For example, in the case $p\equiv3\pmod4$ we
show that $T_p^{(1)}(a,b,0)=0$ and $$T_p^{(0)}(a,b,x)=\begin{cases}
2^{(p-1)/2}p^{(p+1)/4}&\text{if}\ (\frac{ab}p)=1, \\p^{(p+1)/4}&\text{if}\
(\frac{ab}p)=-1,\end{cases}$$ where $(\frac{\cdot}p)$ is the Legendre symbol.
When $(\frac{-ab}p)=-1$, we also evaluate the determinant
$\det[x+\cot\pi\frac{aj^2+bk^2}p]_{1\le j,k\le(p-1)/2}.$ In addition, we pose
several conjectures, one of which states that for any prime $p\equiv3\pmod4$
there is an integer $x_p\equiv1\pmod p$ such that
$$\det\left[\sec2\pi\frac{(j-k)^2}p\right]_{0\le j,k\le
p-1}=-p^{(p+3)/2}x_p^2.$$
|
This paper presents a measurement of the production cross-section of a $Z$
boson in association with $b$-jets, in proton-proton collisions at $\sqrt{s} =
13$ TeV with the ATLAS experiment at the Large Hadron Collider using data
corresponding to an integrated luminosity of 35.6 fb$^{-1}$. Inclusive and
differential cross-sections are measured for events containing a $Z$ boson
decaying into electrons or muons and produced in association with at least one
or at least two $b$-jets with transverse momentum $p_\textrm{T}>$ 20 GeV and
rapidity $|y| < 2.5$. Predictions from several Monte Carlo generators based on
leading-order (LO) or next-to-leading-order (NLO) matrix elements interfaced
with a parton-shower simulation and testing different flavour schemes for the
choice of initial-state partons are compared with measured cross-sections. The
5-flavour number scheme predictions at NLO accuracy agree better with data than
4-flavour number scheme ones. The 4-flavour number scheme predictions
underestimate data in events with at least one b-jet.
|
We seek to find normative criteria of adequacy for nonmonotonic logic similar
to the criterion of validity for deductive logic. Rather than stipulating that
the conclusion of an inference be true in all models in which the premises are
true, we require that the conclusion of a nonmonotonic inference be true in
``almost all'' models of a certain sort in which the premises are true. This
``certain sort'' specification picks out the models that are relevant to the
inference, taking into account factors such as specificity and vagueness, and
previous inferences. The frequencies characterizing the relevant models reflect
known frequencies in our actual world. The criteria of adequacy for a default
inference can be extended by thresholding to criteria of adequacy for an
extension. We show that this avoids the implausibilities that might otherwise
result from the chaining of default inferences. The model proportions, when
construed in terms of frequencies, provide a verifiable grounding of default
rules, and can become the basis for generating default rules from statistics.
|
Heterostructures consisting of a cuprate superconductor YBa2Cu3O7-x and a
ruthenate/manganite (SrRuO3/La0.7Sr0.3MnO3) spin valve have been studied by
SQUID magnetometry, ferromagnetic resonances and neutron reflectometry. It was
shown that, due to the magnetic proximity effect, a magnetic moment is induced
in the superconducting part of the heterostructure while, at the same time, the
magnetic moment is suppressed in the ferromagnetic spin valve. The experimental
value of the magnetization induced in the superconductor is of the same order
of magnitude as calculations based on the induced magnetic moment
of Cu atoms due to orbital reconstruction at the superconductor-ferromagnetic
interface. It corresponds also to the model that takes into account the change
in the density of states at a distance of order of the coherence length in the
superconductor. The experimentally obtained characteristic length of
penetration of the magnetic moment into superconductor exceeds the coherence
length of the cuprate superconductor. This fact points to the dominance of the
mechanism of the induced magnetic moment of Cu atoms due to orbital
reconstruction.
|
It is shown that the "twin paradox" arises from comparing unlike entities,
namely perceived intervals with eigenintervals. When this lacuna is closed, it
is seen that there is no twin paradox and that eigentime can serve as the
independent variable for mechanics in Special Relativity.
|
We present a mesoscale representation of near-contact interactions between
colliding droplets which permits reaching the scale of full microfluidic
devices, where such droplets are produced. The method is demonstrated for the
case of colliding droplets and the formation of soft flowing crystals in
flow-focussing microfluidic devices. This model may open up the possibility of
multiscale simulation of microfluidic devices for the production of new
droplet/bubble-based mesoscale porous materials.
|
The electromagnetic form factors of the exotic baryons are calculated in the
framework of the relativistic quark model at small and intermediate momentum
transfer. The charge radii of the E+++ baryons are determined.
|
Let $\Gamma$ be a nondegenerate geodesic in a compact Riemannian manifold
$M$. We prove the existence of a partial foliation of a neighbourhood of
$\Gamma$ by CMC surfaces which are small perturbations of the geodesic tubes
about $\Gamma$. There are gaps in this foliation, which correspond to a
bifurcation phenomenon. Conversely, we also prove, under certain restrictions,
that the existence of a partial CMC foliation of this type about a submanifold
$\Gamma$ of any dimension implies that $\Gamma$ is minimal.
|
Michael Barnsley introduced a family of fractal sets which are repellers of
piecewise affine systems. The study of these fractals was motivated by certain
problems that arose in fractal image compression, but the results we obtained
can be applied to the computation of the Hausdorff dimension of the graph of
some functions, like generalized Takagi functions and fractal interpolation
functions.
In this paper we introduce this class of fractals and present the tools from
one-dimensional dynamics and nonconformal fractal theory that are needed to
investigate them. This is the first part of a series of two papers. The
continuation will contain more proofs and will apply the tools introduced here
to study some fractal function graphs.
|
The emergence of Connected and Automated Vehicles (CAVs) promises better
traffic mobility for future transportation systems. Existing research has mostly
focused on fully autonomous scenarios, while the potential of CAV control at a
mixed traffic intersection where human-driven vehicles (HDVs) also exist has
been less explored. This paper proposes a notion of "1+n" mixed platoon,
consisting of one leading CAV and n following HDVs, and formulates a
platoon-based optimal control framework for CAV control at a signalized
intersection. Based on the linearized dynamics model of the "1+n" mixed
platoon, fundamental properties including stability and controllability are
analyzed rigorously. Then, a constrained optimal control
framework is established, aiming at improving the global traffic efficiency and
fuel consumption at the intersection via direct control of the CAV. A
hierarchical event-triggered algorithm is also designed for practical
implementation of the optimal control method between adjacent mixed platoons
when approaching the intersection. Extensive numerical simulations at multiple
traffic volumes and market penetration rates validate the greater benefits of
the mixed platoon based method, compared with traditional trajectory
optimization methods for one single CAV.
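The stability and controllability analysis mentioned above can be illustrated with a toy linearized car-following model; the sketch below builds an assumed error-state model for a "1+1" platoon (one CAV input, one HDV follower with hypothetical feedback gains) and checks controllability via the Kalman rank condition. The parameter values and the particular linearization are assumptions for the example, not the paper's model.

# Kalman rank test for a toy linearized "1+n" mixed platoon (here n = 1), illustrative only.
import numpy as np

# Assumed HDV feedback gains (linearized car-following): spacing gain and relative-speed gain.
alpha, beta = 0.6, 0.9

# State: [spacing error, HDV velocity error]; input: CAV velocity deviation u.
# d(spacing)/dt = u - v_hdv ;  d(v_hdv)/dt = alpha * spacing + beta * (u - v_hdv)
A = np.array([[0.0, -1.0],
              [alpha, -beta]])
B = np.array([[1.0],
              [beta]])

def controllable(A, B):
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

print("controllable:", controllable(A, B))
print("open-loop eigenvalues:", np.linalg.eigvals(A))  # negative real parts => stable equilibrium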
|
The temporal shape of a pulse in transcranial magnetic stimulation (TMS)
influences which neuron populations are activated preferentially as well as the
strength and even direction of neuromodulation effects. Furthermore, various
pulse shapes differ in their efficiency, coil heating, sensory perception, and
clicking sound. However, the available TMS pulse shape repertoire is still
limited to a few pulses with sinusoidal or near-rectangular shapes. Monophasic
pulses, though found to be more selective and stronger in neuromodulation, are
generated inefficiently and therefore only available in simple low-frequency
repetitive protocols. Despite strong interest in exploiting the temporal effects
of TMS pulse shapes and pulse sequences, waveform control is relatively
inflexible and only possible parametrically within certain limits. Previously
proposed approaches for flexible pulse shape control, such as through power
electronic inverters, have significant limitations: Existing semiconductor
switches can fail under the immense electrical stress associated with free
pulse shaping, and most conventional power inverter topologies are incapable of
generating smooth electric fields or existing pulse shapes. Leveraging
intensive preliminary work on modular power electronics, we present a modular
pulse synthesizer (MPS) technology that can, for the first time, flexibly
generate high-power TMS pulses with user-defined electric field shape as well
as rapid sequences of pulses with high output quality. The circuit topology
breaks the problem of simultaneous high power and switching speed into smaller,
manageable portions. MPS TMS can synthesize practically any pulse shape,
including conventional ones, with fine quantization of the induced electric
field.
|
We describe the centralizer of irreducible representations from a finitely
generated group $\Gamma$ to $PSL(p,\mathbb{C})$ where $p$ is a prime number.
This leads to a description of the singular locus (the set of conjugacy classes
of representations whose centralizer strictly contains the center of the
ambient group) of the irreducible part of the character variety
$\chi^i(\Gamma,PSL(p,\mathbb{C}))$. When $\Gamma$ is a free group of rank
$l\geq 2$ or the fundamental group of a closed Riemann surface of genus $g\geq
2$, we give a complete description of this locus and prove that this locus is
exactly the set of algebraic singularities of the irreducible part of the
character variety.
|
We propose dynamic sampled stochastic approximation (SA) methods for
stochastic optimization with a heavy-tailed distribution (with finite 2nd
moment). The objective is the sum of a smooth convex function with a convex
regularizer. Typically, an oracle with an upper bound $\sigma^2$ on its
variance (OUBV) is assumed. In contrast, we assume an oracle with
\emph{multiplicative noise}. This rarely addressed setup is more aggressive but
realistic, where the variance may not be bounded. Our methods achieve optimal
iteration complexity and (near) optimal oracle complexity. For the smooth
convex class, we use an accelerated SA method \`a la FISTA which achieves, given
tolerance $\epsilon>0$, the optimal iteration complexity of
$\mathcal{O}(\epsilon^{-\frac{1}{2}})$ with a near-optimal oracle complexity of
$\mathcal{O}(\epsilon^{-2})[\ln(\epsilon^{-\frac{1}{2}})]^2$. This improves
upon Ghadimi and Lan [\emph{Math. Program.}, 156:59-99, 2016], where an OUBV is
assumed. For the strongly convex class, our method achieves optimal
iteration complexity of $\mathcal{O}(\ln(\epsilon^{-1}))$ and optimal oracle
complexity of $\mathcal{O}(\epsilon^{-1})$. This improves upon Byrd et al.
[\emph{Math. Program.}, 134:127-155, 2012], where an OUBV is assumed. In
terms of variance, our bounds are local: they depend on variances
$\sigma(x^*)^2$ at solutions $x^*$ and the per unit distance multiplicative
variance $\sigma^2_L$. For the smooth convex class, there exist policies such
that our bounds resemble those obtained if it was assumed an OUBV with
$\sigma^2:=\sigma(x^*)^2$. For the strongly convex class, such a property is
obtained exactly if the condition number is estimated, or in the limit for
better-conditioned problems or for larger initial batch sizes. In any case, if
an OUBV is assumed, our bounds are much sharper since typically
$\max\{\sigma(x^*)^2,\sigma_L^2\}\ll\sigma^2$.
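For context, an accelerated method "à la FISTA" follows the standard proximal-gradient template sketched below, shown here for an l1-regularized least-squares problem with deterministic gradients and a fixed step size; the paper's dynamic mini-batch sampling of stochastic gradients is not reproduced, and the data are made up.

# Standard FISTA template for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (deterministic gradients).
import numpy as np

def fista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)  # prox of lam*||.||_1
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft(z - grad / L, lam / L)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.05 * rng.standard_normal(100)
print(np.round(fista(A, b, lam=2.0)[:5], 2))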
|
The very neutron-rich oxygen isotopes 25O and 26O are investigated
experimentally and theoretically. In this first R3B-LAND experiment, the
unbound states are populated at GSI via proton-knockout reactions from 26F and
27F at relativistic energies around 450 MeV/nucleon. From the kinematically
complete measurement of the decay into 24O plus one or two neutrons, the 25O
ground-state energy and lifetime are determined, and upper limits for the 26O
ground state are extracted. In addition, the results provide evidence for an
excited state in 26O at around 4 MeV. The experimental findings are compared
to theoretical shell-model calculations based on chiral two- and three-nucleon
(3N) forces, including for the first time residual 3N forces, which are shown
to be amplified as valence neutrons are added.
|
Advances in 3D generation have facilitated sequential 3D model generation
(a.k.a. 4D generation), yet its application to animatable objects with large
motion remains scarce. Our work proposes AnimatableDreamer, a text-to-4D
generation framework capable of generating diverse categories of non-rigid
objects on skeletons extracted from a monocular video. At its core,
AnimatableDreamer is equipped with our novel optimization design dubbed
Canonical Score Distillation (CSD), which lifts 2D diffusion for temporal
consistent 4D generation. CSD, designed from a score gradient perspective,
generates a canonical model with warp-robustness across different
articulations. Notably, it also enhances the authenticity of bones and skinning
by integrating inductive priors from a diffusion model. Furthermore, with
multi-view distillation, CSD infers invisible regions, thereby improving the
fidelity of monocular non-rigid reconstruction. Extensive experiments
demonstrate the capability of our method in generating high-flexibility
text-guided 3D models from monocular video, while also showing improved
reconstruction performance over existing non-rigid reconstruction methods.
|
The focus of this work is on designing influencing strategies to shape the
collective opinion of a network of individuals. We consider a variant of the
voter model where opinions evolve in one of two ways. In the absence of
external influence, opinions evolve via interactions between individuals in the
network, while, in the presence of external influence, opinions shift in the
direction preferred by the influencer. We focus on a finite time horizon; an
influencing strategy is characterized by when it exerts influence within this
horizon, given its budget constraints. Prior work on this opinion dynamics
model assumes that individuals take into account the opinion of all individuals
in the network. We generalize this and consider the setting where the opinion
evolution of an individual depends on a limited collection of opinions from the
network. We characterize the nature of optimal influencing strategies as a
function of the way in which this collection of opinions is formed.
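As a toy illustration of this setting, the sketch below simulates one
plausible variant of such dynamics: each update uses only a limited collection
of k sampled opinions, and at externally influenced time steps the selected
agent adopts the influencer-preferred opinion with some probability. The
specific update rule, parameters, and function names are assumptions for
illustration, not the model analyzed in the paper.

import random

def simulate(n=200, k=5, horizon=10_000, influence_times=frozenset(),
             p_influence=0.8, seed=1):
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n)]
    for t in range(horizon):
        i = rng.randrange(n)                      # agent selected for an update
        if t in influence_times:
            # external influence: shift toward the influencer's preferred opinion (1)
            if rng.random() < p_influence:
                opinions[i] = 1
        else:
            # no influence: copy one opinion from a limited collection of k samples
            sample = [opinions[rng.randrange(n)] for _ in range(k)]
            opinions[i] = rng.choice(sample)
    return sum(opinions) / n                      # final fraction holding opinion 1

# Compare spending the same influence budget early versus late in the horizon.
early = simulate(influence_times=frozenset(range(0, 2_000)))
late = simulate(influence_times=frozenset(range(8_000, 10_000)))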
|
This work presents a nonparametric statistical test, $S$-maup, to measure the
sensitivity of a spatially intensive variable to the effects of the Modifiable
Areal Unit Problem (MAUP). $S$-maup is the first statistic of its type and
focuses on determining how much the distribution of the variable, at its
highest level of spatial disaggregation, will change when it is spatially
aggregated. Through a computational experiment, we obtain the basis for the
design of the statistical test under the null hypothesis of non-sensitivity to
MAUP. We performed a simulation study to approximate the empirical
distribution of the test statistic, obtain its critical values, and compute
its power and size. The results indicate that the power of the test improves
as the sample size (number of areas) grows and that, in general, its size
decreases with increasing sample size. Finally, an empirical application
is made using the Mincer equation in South Africa.
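The S-maup statistic itself is not reproduced here; as a toy illustration of
what sensitivity to spatial aggregation means, the sketch below compares the
distribution of a spatially intensive variable on a fine grid with its
distribution after aggregation into coarser areas, using a generic two-sample
summary. The data, grid, and summary statistic are hypothetical stand-ins.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
fine = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))     # fine-resolution variable
coarse = fine.reshape(16, 4, 16, 4).mean(axis=(1, 3))     # average 4x4 blocks of cells

# A generic (non-S-maup) measure of how much aggregation changed the distribution.
stat, p_value = ks_2samp(fine.ravel(), coarse.ravel())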
|
This paper examines the safety performance of the Waymo Driver, an SAE level
4 automated driving system (ADS) used in a rider-only (RO) ride-hailing
application without a human driver, either in the vehicle or remotely. ADS
crash data was derived from NHTSA's Standing General Order (SGO) reporting over
7.14 million RO miles through the end of October 2023 in Phoenix, AZ, San
Francisco, CA, and Los Angeles, CA. When considering all locations together,
the any-injury-reported crashed vehicle rate was 0.41 incidents per million
miles (IPMM) for the ADS vs 2.80 IPMM for the human benchmark, an 85% reduction
or a human crash rate that is 6.7 times higher than the ADS rate.
Police-reported crashed vehicle rates for all locations together were 2.1 IPMM
for the ADS vs. 4.68 IPMM for the human benchmark, a 55% reduction or a human
crash rate that was 2.2 times higher than the ADS rate. Police-reported and
any-injury-reported crashed vehicle rate reductions for the ADS were
statistically significant when compared in San Francisco and Phoenix, as well
as combined across all locations. The any-property-damage-or-injury
comparison showed a statistically significant decrease in 3 comparisons, but
non-significant results against 3 other benchmarks. Given imprecision in the
benchmark estimate and multiple potential sources of underreporting biasing the
benchmarks, caution should be taken when interpreting the results of the
any-property-damage-or-injury comparison. Together, these crash-rate results
should
be interpreted as a directional and continuous confidence growth indicator,
together with other methodologies, in a safety case approach.
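The reported reductions and rate ratios follow directly from the
per-million-mile rates quoted above; a minimal check is sketched below. Small
differences from the published ratios (e.g. 6.7x) reflect rounding of the
quoted rates.

def compare(ads_ipmm, human_ipmm):
    reduction = 1.0 - ads_ipmm / human_ipmm    # fractional crash-rate reduction for the ADS
    ratio = human_ipmm / ads_ipmm              # how much higher the human rate is
    return reduction, ratio

any_injury = compare(0.41, 2.80)    # ~(0.85, 6.8): an ~85% reduction
police_rep = compare(2.10, 4.68)    # ~(0.55, 2.2): a ~55% reduction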
|
We present the results of neutral hydrogen (HI) observations of the NGC 5044
and NGC 1052 groups, as part of a GEMS (Group Evolution Multiwavelength Study)
investigation into the formation and evolution of galaxies in nearby groups.
Two new group members have been discovered during a wide-field HI imaging
survey conducted using the ATNF Parkes telescope. These results, as well as
those from followup HI synthesis and optical imaging, are presented here.
J1320-1427, a new member of the NGC 5044 Group, has an HI mass of
M_HI=1.05e9Msun and M_HI/L_B=1.65 Msun/Lsun, with a radial velocity of
v=2750km/s. The optical galaxy is characterised by two regions of star
formation, surrounded by an extended, diffuse halo. J0249-0806, the new member
of the NGC 1052 Group, has M_HI=5.4e8Msun, M_HI/L_R=1.13 Msun/Lsun and
v=1450km/s. The optical image reveals a low surface brightness galaxy. We
interpret both of these galaxies as irregular type, with J0249-0806 possibly
undergoing first infall into the NGC 1052 group.
|
Standard registration algorithms need to be applied independently to each
surface to be registered, following careful pre-processing and hand-tuning.
Recently, learning-based approaches have emerged that reduce the registration
of new scans to running inference with a previously-trained model. In this
paper, we cast the registration task as a surface-to-surface translation
problem, and design a model to reliably capture the latent geometric
information directly from raw 3D face scans. We introduce Shape-My-Face (SMF),
a powerful encoder-decoder architecture based on an improved point cloud
encoder, a novel visual attention mechanism, graph convolutional decoders with
skip connections, and a specialized mouth model that we smoothly integrate with
the mesh convolutions. Compared to the previous state-of-the-art learning
algorithms for non-rigid registration of face scans, SMF only requires the raw
data to be rigidly aligned (with scaling) with a pre-defined face template.
Additionally, our model provides topologically-sound meshes with minimal
supervision, offers faster training time, has orders of magnitude fewer
trainable parameters, is more robust to noise, and can generalize to previously
unseen datasets. We extensively evaluate the quality of our registrations on
diverse data. We demonstrate the robustness and generalizability of our model
with in-the-wild face scans across different modalities, sensor types, and
resolutions. Finally, we show that, by learning to register scans, SMF produces
a hybrid linear and non-linear morphable model. Manipulation of the latent
space of SMF allows for shape generation, and morphing applications such as
expression transfer in-the-wild. We train SMF on a dataset of human faces
comprising 9 large-scale databases on commodity hardware.
|
The CMS beam and radiation monitoring subsystem BCM1F (Fast Beam Condition
Monitor) consists of 8 individual diamond sensors situated around the beam pipe
within the pixel detector volume, for the purpose of fast bunch-by-bunch
monitoring of beam background and collision products. In addition, effort is
ongoing to use BCM1F as an online luminosity monitor. BCM1F runs whenever
there is beam in the LHC, and its data acquisition is independent of that of
the CMS detector; hence it delivers luminosity measurements even when
CMS is not taking data. A report is given on the performance of BCM1F during
LHC run I, including results of the van der Meer scan and on-line luminosity
monitoring done in 2012. In order to match the requirements due to higher
luminosity and 25 ns bunch spacing, several changes to the system must be
implemented during the upcoming shutdown, including upgraded electronics and
precise gain monitoring. First results from Run II preparation are shown.
|
The Schelling model of segregation looks to explain the way in which a
population of agents or particles of two types may come to organise itself into
large homogeneous clusters, and can be seen as a variant of the Ising model in
which the system is subjected to rapid cooling. While the model has been very
extensively studied, the unperturbed (noiseless) version has largely resisted
rigorous analysis, with most results in the literature pertaining to versions
of the model in which noise is introduced into the dynamics so as to make it
amenable to standard techniques from statistical mechanics or stochastic
evolutionary game theory. We rigorously analyse the one-dimensional version of
the model in which one of the two types is in the minority, and establish
various forms of threshold behaviour. Our results are in sharp contrast with
the case when the distribution of the two types is uniform (i.e. each agent has
equal chance of being of each type in the initial configuration), which was
studied by Brandt, Immorlica, Kamath, and Kleinberg.
|
Superconductivity develops in bulk doped SrTiO$_3$ and at the
LaAlO$_3$/SrTiO$_3$ interface with a dome-shaped density dependence of the
critical temperature $T_c$, despite different dimensionalities and geometries.
We propose that the $T_c$ dome of LaAlO$_3$/SrTiO$_3$ is a shape resonance due
to quantum confinement of superconducting bulk SrTiO$_3$. We substantiate this
interpretation by comparing the exact solutions of a three-dimensional and
quasi-two-dimensional two-band BCS gap equation. This comparison highlights the
role of heavy bands for $T_c$ in both geometries. For bulk SrTiO$_3$, we
extract the density dependence of the pairing interaction from the fit to
experimental data. We apply quantum confinement in a square potential well of
finite depth and calculate $T_c$ in the confined configuration. We compare the
calculated $T_c$ to transport experiments and provide an explanation as to why
the optimal $T_c$'s are so close to each other in two-dimensional interfaces
and the three-dimensional bulk material.
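For reference, a standard weak-coupling two-band BCS gap equation of the kind
referred to above can be written in the generic form below; the paper's
precise interaction matrix, its density dependence, and the confined-geometry
version may differ.

$$\Delta_i(T)=\sum_{j=1,2}\lambda_{ij}\int_0^{\hbar\omega_c} d\xi\,
\frac{\Delta_j(T)}{\sqrt{\xi^2+\Delta_j^2(T)}}
\tanh\!\left(\frac{\sqrt{\xi^2+\Delta_j^2(T)}}{2k_BT}\right),$$

where $\lambda_{ij}$ are dimensionless pairing couplings, $\hbar\omega_c$ is
the pairing energy cutoff, and $T_c$ is the temperature at which the
nontrivial solution $(\Delta_1,\Delta_2)$ vanishes.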
|
We study homogenization of a boundary obstacle problem on $ C^{1,\alpha} $
domain $D$ for some elliptic equations with uniformly elliptic coefficient
matrices $\gamma$. For any $ \epsilon\in\mathbb{R}_+$, $\partial D=\Gamma \cup
\Sigma$, $\Gamma \cap \Sigma=\emptyset $ and $ S_{\epsilon}\subset \Sigma $
with suitable assumptions, we prove that as $\epsilon$ tends to zero, the
energy minimizer $ u^{\epsilon} $ of $ \int_{D} |\gamma\nabla u|^{2} dx $,
subject to $ u\geq \varphi $ on $ S_{\epsilon} $, up to a subsequence,
converges weakly in $ H^{1}(D) $ to $ \widetilde{u} $ which minimizes the
energy functional $\int_{D}|\gamma\nabla u|^{2}+\int_{\Sigma}
(u-\varphi)^{2}_{-}\mu(x) dS_{x}$, where $\mu(x)$ depends on the structure of
$S_{\epsilon}$ and $ \varphi $ is any given function in
$C^{\infty}(\overline{D})$.
|
We provide a novel method for constructing asymptotics (to arbitrary
accuracy) for the number of directed graphs that realize a fixed bidegree
sequence $d = a \times b$ with maximum degree $d_{max}=O(S^{\frac{1}{2}-\tau})$
for an arbitrarily small positive number $\tau$, where $S$ is the number of edges
specified by $d$. Our approach is based on two key steps, graph partitioning
and degree preserving switches. The former idea allows us to relate enumeration
results for given sequences to those for sequences that are especially easy to
handle, while the latter facilitates expansions based on numbers of shared
neighbors of pairs of nodes. While we focus primarily on directed graphs
allowing loops, our results can be extended to other cases, including bipartite
graphs, as well as directed and undirected graphs without loops. In addition,
we can relax the constraint that $d_{max} = O(S^{\frac{1}{2}-\tau})$ and
replace it with $a_{max} b_{max} = O(S^{1-\tau})$, where $a_{max}$ and
$b_{max}$ are the maximum values for $a$ and $b$ respectively. The previous
best results, from Greenhill et al., only allow for $d_{max} =
o(S^{\frac{1}{3}})$ or alternatively $a_{max} b_{max} = o(S^{\frac{2}{3}})$.
Since in many real world networks, $d_{max}$ scales larger than
$o(S^{\frac{1}{3}})$, we expect that this work will be helpful for various
applications.
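Degree preserving switches, one of the two key steps named above, replace a
pair of directed edges (a->b, c->d) by (a->d, c->b), leaving every in- and
out-degree unchanged. A minimal sketch of the operation follows; it is purely
illustrative and not the enumeration machinery itself.

import random

def degree_preserving_switch(edges, rng=random):
    # edges: list of directed edges (source, target); loops and repeats allowed.
    i, j = rng.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[i], edges[j]
    edges[i], edges[j] = (a, d), (c, b)   # out-degrees of a, c and in-degrees of b, d unchanged
    return edges

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
for _ in range(10):
    degree_preserving_switch(edges)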
|
McVittie spacetimes represent an embedding of the Schwarzschild field in
isotropic cosmological backgrounds. Depending on the scale factor of the
background, the resulting spacetime may contain black and white hole horizons,
as well as other interesting boundary features. In order to further clarify the
nature of these spacetimes, we address this question: do there exist bound
particle and photon orbits in McVittie spacetimes? Considering first circular
photon orbits, we obtain an explicit characterization of all McVittie
spacetimes for which such orbits exist: there is a 2-parameter class of such
spacetimes, and so the existence of a circular photon orbit is a highly
specialised feature of a McVittie spacetime. However, we prove that in two
large classes of McVittie spacetimes, there are bound particle and photon
orbits: future-complete non-radial timelike and null geodesics along which the
areal radius $r$ has a finite upper bound. These geodesics are asymptotic at
large times to circular orbits of a corresponding Schwarzschild or
Schwarzschild-de Sitter spacetime. The existence of these geodesics lays the
foundations for and shows the theoretical possibility of the formation of
accretion disks in McVittie spacetimes. We also summarize and extend some
previous results on the global structure of McVittie spacetimes. The results on
bound orbits are established using centre manifold and other techniques from
the theory of dynamical systems.
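For orientation, the McVittie line element in isotropic coordinates is
commonly written as

$$ds^2=-\left(\frac{1-\mu}{1+\mu}\right)^2 dt^2
+(1+\mu)^4\,a^2(t)\left(dr^2+r^2\,d\Omega^2\right),
\qquad \mu(t,r)=\frac{m}{2\,a(t)\,r},$$

which reduces to the Schwarzschild metric in isotropic coordinates for
$a\equiv 1$ and to a spatially flat FLRW metric for $m=0$; the scale factor
$a(t)$ controls which horizons and boundary features arise.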
|
We present a comparison between the 2001 XMM-Newton and 2005 Suzaku
observations of the quasar, PG1211+143 at z=0.0809. Variability is observed in
the 7 keV iron K-shell absorption line (at 7.6 keV in the quasar frame), which
is significantly weaker in 2005 than during the 2001 XMM-Newton observation.
From a recombination timescale of <4 years, this implies an absorber density
n>0.004 particles/cm3, while the absorber column is 5e22 < N_H < 1e24
particles/cm2. Thus the sizescale of the absorber is too compact (pc scale) and
the surface brightness of the dense gas too high (by 9-10 orders of magnitude)
to arise from local hot gas, such as the local bubble, group or Warm/Hot
Intergalactic Medium (WHIM), as suggested by McKernan et al. (2004, 2005).
Instead the iron K-shell absorption must be associated with an AGN outflow with
mildly relativistic velocities. Finally we show that the association of the
absorption in PG1211+143 with local hot gas is simply a coincidence: the
comparison between the recession and iron K absorber outflow velocities in
other AGN does not reveal a one to one kinematic correlation.
|
The purpose of this document is to provide a brief overview of open
consultation approaches in the current, international setting and propose a
role for Information Technologies (IT) as a disruptive force in this setting.
|
Particle--anti-particle interpretation under spatially inhomogeneous external
fields within the framework of quantum field theory is a nontrivial problem. In
this paper, we focus on the two interpretations established in [Phys. Rev. D
93, 045002 (2016)] and [Prog. Theor. Exp. Phys. 2022, 073B02 (2022)], both of
which give consistent results of vacuum instability and pair production. To
shed light on their differences, a pair production under a potential step
assisted by a weak and oscillating electric field is discussed. It is shown
that the potential step and the oscillating field, each insufficient for vacuum
decay, can produce pairs when combined. In addition, the two pictures give rise
to quantitative differences in the number of created pairs at the second-order
perturbation of the oscillating field. It might provide a clue to investigate
the correct particle--anti-particle interpretation by comparing the result with
numerical simulations or experiments.
|
In the present paper, we investigate the structural, thermodynamic, dynamic,
elastic, and electronic properties of doped 2D diamond C$_4$X$_2$ (X = B or N)
nanosheets in both AA$'$A$''$ and ABC stacking configurations, by
first-principles calculations. Those systems are composed of 3 diamond-like
graphene sheets, with an undoped graphene layer between two 50% doped ones. Our
results, based on the analysis of ab-initio molecular dynamics simulations,
phonon dispersion spectra, and Born's criteria for mechanical stability,
revealed that all four structures are stable. Additionally, their standard
enthalpy of formation values are similar to that of pristine 2D diamond,
recently synthesized by compressing three graphene layers. The C$_4$X$_2$ (X =
B or N) systems exhibit high elastic constant values and stiffness comparable
to those of diamond. The C$_4$N$_2$ nanosheets present wide indirect band gaps that
could be advantageous for applications similar to the ones of the hexagonal
boron nitride (h-BN), such as a substrate for high-mobility 2D devices. On the
other hand, the C$_4$B$_2$ systems are semiconductors with direct band gaps, in
the 1.6 - 2.0 eV range, and small effective masses, which are characteristics
that may be favorable to high carrier mobility and optoelectronics
applications.
|
Complex objects usually carry multiple labels and can be represented by
multiple modal representations; e.g., complex articles contain text and
image information as well as multiple annotations. Previous methods assume that
the homogeneous multi-modal data are consistent, while in real applications
the raw data are disordered; e.g., an article consists of a variable number
of inconsistent text and image instances. Therefore, Multi-modal Multi-instance
Multi-label (M3) learning provides a framework for handling such tasks and has
exhibited excellent performance. However, M3 learning faces two main
challenges: 1) how to effectively utilize label correlation; 2) how to take
advantage of multi-modal learning to process unlabeled instances. To solve
these problems, we first propose a novel Multi-modal Multi-instance Multi-label
Deep Network (M3DN), which considers M3 learning in an end-to-end multi-modal
deep network and utilizes consistency principle among different modal bag-level
predictions. Based on the M3DN, we learn the latent ground label metric with
the optimal transport. Moreover, we introduce the extrinsic unlabeled
multi-modal multi-instance data, and propose the M3DNS, which considers the
instance-level auto-encoder for single modality and modified bag-level optimal
transport to strengthen the consistency among modalities. Thereby, M3DNS can
better predict labels and exploit label correlation simultaneously. Experiments
on benchmark datasets and real world WKG Game-Hub dataset validate the
effectiveness of the proposed methods.
|
We investigated the thermal evolution of the magnetic properties of MnAs
epitaxial films grown on GaAs(001) during the coexistence of
hexagonal/orthorhombic phases using polarized resonant (magnetic) soft X-ray
scattering and magnetic force microscopy. The results of the diffuse satellite
X-ray peaks were compared to those obtained by magnetic force microscopy and
suggest a reorientation of ferromagnetic terraces as temperature rises. By
measuring hysteresis loops at these peaks we show that this reorientation is
common to all ferromagnetic terraces. The reorientation is explained by a
simple model based on the shape anisotropy energy. Demagnetizing factors were
calculated for different configurations suggested by the magnetic images. We
noted that the magnetic moments flip from an in-plane mono-domain orientation
at lower temperatures to a three-domain out-of-plane configuration at higher
temperatures. The transition was observed when the ferromagnetic stripe width L
is equal to 2.9 times the film thickness d. This is in good agreement with the
expected theoretical value of L = 2.6d.
|
In a previous article we have introduced an operator representing the
three-dimensional scalar curvature in loop quantum gravity. In this article we
examine the new curvature operator in the setting of quantum-reduced loop
gravity. We derive the explicit form of the curvature operator as an operator
on the Hilbert space of the quantum-reduced model. As a simple practical
example, we study the expectation values of the operator with respect to basis
states of the reduced Hilbert space.
|
We introduce the controllable graph generation problem, formulated as
controlling graph attributes during the generative process to produce desired
graphs with understandable structures. Using a transparent and straightforward
Markov model to guide this generative process, practitioners can shape and
understand the generated graphs. We propose ${\rm S{\small HADOW}C{\small
AST}}$, a generative model capable of controlling graph generation while
retaining the original graph's intrinsic properties. The proposed model is
based on a conditional generative adversarial network. Given an observed graph
and some user-specified Markov model parameters, ${\rm S{\small HADOW}C{\small
AST}}$ controls the conditions to generate desired graphs. Comprehensive
experiments on three real-world network datasets demonstrate our model's
competitive performance in the graph generation task. Furthermore, we show its
effective controllability by directing ${\rm S{\small HADOW}C{\small AST}}$ to
generate hypothetical scenarios with different graph structures.
|
We present a generalized Drude analysis of the in-plane optical conductivity
$\sigma_{ab}$($T$,$\omega$) in cuprates taking into account the effects of
in-plane anisotropy. A simple ansatz for the scattering rate
$\Gamma$($T$,$\omega$), that includes anisotropy, a quadratic frequency
dependence and saturation at the Mott-Ioffe-Regel limit, is able to reproduce
recent normal state data on an optimally doped cuprate over a wide frequency
range. We highlight the potential importance of including anisotropy in the
full expression for $\sigma_{ab}$($T$,$\omega$) and challenge previous
determinations of $\Gamma$($\omega$) in which anisotropy was neglected and
$\Gamma$($\omega$) was indicated to be strictly linear in frequency over a wide
frequency range. Possible implications of our findings for understanding
thermodynamic properties and self-energy effects in high-$T_c$ cuprates will
also be discussed.
|
We present several sharp upper bounds and some extensions for product
operators. Among other inequalities, it is shown that if certain non-negative
continuous functions satisfy suitable conditions, then a corresponding bound
holds for all non-negative operator monotone decreasing functions; an
application of this inequality is also given.
|
In this paper we prove the optimal $L^p$-solvability of nonlocal parabolic
equations with spatially dependent and non-smooth kernels.
|
We consider temperature-induced melting of a Wigner solid in one dimensional
(1D) and two dimensional (2D) lattices of electrons interacting via the
long-range Coulomb interaction in the presence of strong disorder arising from
charged impurities in the system. The system simulates semiconductor-based 2D
electron layers where Wigner crystallization is often claimed to be observed
experimentally. Using exact diagonalization and utilizing the inverse
participation ratio as well as conductance to distinguish between the localized
insulating solid phase and the extended metallic liquid phase, we find that the
effective melting temperature may be strongly enhanced by disorder since the
disordered crystal typically could be in a localized glassy state incorporating
the combined nonperturbative physics of both Anderson localization and Wigner
crystallization. This disorder-induced enhancement of the melting temperature
may explain why experiments often manage to observe insulating disorder-pinned
Wigner solids in spite of the experimental temperature being decisively far
above the theoretical melting temperature of the pristine Wigner crystal phase
in many cases.
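For reference, the inverse participation ratio used above as a localization
diagnostic is, for a normalized eigenstate $\psi$ on $N$ sites,
$\mathrm{IPR}=\sum_i|\psi_i|^4$, of order $1/N$ for extended states and $O(1)$
for localized ones. A generic single-particle sketch is given below; the
Anderson-type Hamiltonian is only a stand-in, since the paper diagonalizes
interacting, long-range Coulomb systems.

import numpy as np

rng = np.random.default_rng(0)
N, W = 500, 3.0
H = np.diag(W * (rng.random(N) - 0.5))                           # random on-site disorder
H += np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)  # nearest-neighbor hopping
energies, states = np.linalg.eigh(H)                             # columns are normalized eigenvectors
ipr = np.sum(np.abs(states) ** 4, axis=0)                        # one IPR value per eigenstate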
|
Conformal predictors, introduced by Vovk et al. (2005), serve to build
prediction intervals by exploiting a notion of conformity of the new data point
with previously observed data. In the present paper, we propose a novel method
for constructing prediction intervals for the response variable in multivariate
linear models. The main emphasis is on sparse linear models, where only a few
of the covariates have a significant influence on the response variable even if
their number is very large. Our approach is based on combining the principle of
conformal prediction with the $\ell_1$ penalized least squares estimator
(LASSO). The resulting confidence set depends on a parameter $\epsilon>0$ and
has a coverage probability larger than or equal to $1-\epsilon$. The numerical
experiments reported in the paper show that the length of the confidence set is
small. Furthermore, as a by-product of the proposed approach, we provide a
data-driven procedure for choosing the LASSO penalty. The selection power of
the method is illustrated on simulated data.
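As a concrete illustration of combining conformal prediction with the LASSO,
the sketch below uses the simple split-conformal variant on simulated sparse
data; the paper's construction (and its data-driven penalty choice) may
differ, so this is only an assumed stand-in.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, eps = 400, 200, 0.1
beta = np.zeros(p); beta[:5] = 2.0                    # sparse truth: 5 active covariates
X = rng.normal(size=(n, p)); y = X @ beta + rng.normal(size=n)

train, calib = np.arange(0, n // 2), np.arange(n // 2, n)
model = Lasso(alpha=0.1).fit(X[train], y[train])
scores = np.abs(y[calib] - model.predict(X[calib]))   # conformity scores on calibration data
k = int(np.ceil((1 - eps) * (len(calib) + 1)))        # finite-sample quantile index
q = np.sort(scores)[k - 1]

x_new = rng.normal(size=(1, p))
pred = model.predict(x_new)[0]
interval = (pred - q, pred + q)                       # coverage >= 1 - eps under exchangeability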
|
The essence of the gravitomagnetic clock effect is properly defined showing
that its origin is in the topology of world lines with closed space
projections. It is shown that, in weak field approximation and for a
spherically symmetric central body, the loss of synchrony between two clocks
counter-rotating along a circular geodesic is proportional to the angular
momentum of the source of the gravitational field. Numerical estimates are
presented for objects within the solar system. The least unfavorable situation
is found around Jupiter.
|
This paper proposes a signature scheme where the signatures are generated by
the cooperation of a number of people from a given group of senders and the
signatures are verified by a certain number of people from the group of
recipients. Shamir's threshold scheme and Schnorr's signature scheme are used
to realize the proposed scheme.
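A minimal sketch of the Shamir $(t,n)$ threshold step referred to above is
given below, over a fixed prime field and without the Schnorr signing layer;
it illustrates secret sharing and reconstruction only, not the proposed group
signature protocol.

import random

P = 2**61 - 1                                         # prime modulus (demo size only)

def share(secret, t, n, rng=random):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):                                         # degree-(t-1) polynomial mod P
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):                              # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789           # any 3 of the 5 shares suffice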
|
Dielectric relaxation has been investigated within the framework of a
modified mean field theory, in which the dielectric response of an arbitrary
condensed matter system to the applied electric field is assumed to consist of
two parts, a collective response and a slowly fluctuating response; the former
corresponds to the cooperative response of the crystalline or noncrystalline
structures composed of the atoms or molecules held together by normal chemical
bonds and the latter represents the slow response of the strongly correlated
high-temperature structure precursors or a partially ordered nematic phase.
These two dielectric responses are not independent of each other but rather
constitute a dynamic hierarchy, in which the slowly fluctuating response is
constrained by the collective response. It then becomes clear that the
dielectric relaxation of the system is actually a specific characteristic
relaxation process modulated by the slow relaxation of the nematic phase and
its corresponding relaxation relationship should be regarded as the universal
dielectric relaxation law. Furthermore, we have shown that seemingly different
relaxation relationships, such as the Debye relaxation law, the Cole-Cole
equation, the Cole-Davidson equation, the Havriliak-Negami relaxation, the
Kohlrausch-Williams-Watts function, Jonscher's universal dielectric relaxation
law, etc., are only variants of this universal law under certain circumstances.
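For concreteness, several of the empirical forms listed above are
conventionally written as special cases of the Havriliak-Negami expression
(standard notation; the modulated-relaxation law derived in the paper is not
reproduced here):

$$\varepsilon^*(\omega)=\varepsilon_\infty
+\frac{\Delta\varepsilon}{\left[1+(i\omega\tau)^{\alpha}\right]^{\beta}},
\qquad 0<\alpha\le 1,\ 0<\beta\le 1,$$

which reduces to the Debye law for $\alpha=\beta=1$, to the Cole-Cole equation
for $\beta=1$, and to the Cole-Davidson equation for $\alpha=1$.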
|
Adaptive perturbation is a new method for perturbatively computing the
eigenvalues and eigenstates of quantum mechanical Hamiltonians that are widely
believed not to be solvable by such methods. The novel feature of adaptive
perturbation theory is that it decomposes a given Hamiltonian, $H$, into an
unperturbed part and a perturbation in a way which extracts the leading
non-perturbative behavior of the problem exactly. In this talk I will introduce
the method in the context of the pure anharmonic oscillator and then apply it
to the case of tunneling between symmetric minima. After that, I will show how
this method can be applied to field theory. In that discussion I will show how
one can non-perturbatively extract the structure of mass, wavefunction and
coupling constant renormalization.
|
In the last decade, detecting spin dynamics at the atomic scale has been
enabled by combining techniques like electron spin resonance (ESR) or
pump-probe spectroscopy with scanning tunneling microscopy (STM). Here, we
demonstrate an ultra-high vacuum (UHV) STM operational at milliKelvin (mK) and
in a vector magnetic field capable of both ESR and pump-probe spectroscopy. By
implementing GHz compatible cabling, we achieve appreciable RF amplitudes at
the junction while maintaining mK base temperature. We demonstrate the
successful operation of our setup by utilizing two experimental ESR modes
(frequency sweep and magnetic field sweep) on an individual TiH molecule on
MgO/Ag(100) and extract the effective g-factor. We trace the ESR transitions
down to the MHz range, an unprecedentedly low frequency band enabled by the mK base
temperature. We also implement an all-electrical pump-probe scheme based on
waveform sequencing suited for studying dynamics down to the nanosecond range.
We benchmark our system by detecting the spin relaxation time T1 of individual
Fe atoms on MgO/Ag(100) and observe a relaxation time that depends on the
field strength and orientation.
|
When a suspension freezes, a compacted particle layer builds up at the
solidification front with noticeable implications on the freezing process. In a
directional solidification experiment of monodispersed suspensions in thin
samples, we evidence a link between the thickness of this layer and the sample
depth. We attribute it to an inhomogeneity of particle density induced by the
sample plates. A mechanical model enables us to relate it to the layer
thickness with a dependency on the sample depth and to select the distribution
of particle density that yields the best fit to our data. This distribution
involves an influence length of sample plates of about nine particle diameters.
These results clarify the implications of boundaries on suspension freezing.
They may be useful for modeling polydisperse suspensions, since large particles
could play the role of smooth boundaries with respect to small ones.
|
L\'evy walks are continuous time random walks with spatio-temporal coupling
of jump lengths and waiting times, often used to model superdiffusive spreading
processes such as animals searching for food, tracer motion in weakly chaotic
systems, or even the dynamics in quantum systems such as cold atoms. In the
simplest version L\'evy walks move with a finite speed. Here, we present an
extension of the L\'evy walk scenario for the case when external force fields
influence the motion. The resulting motion is a combination of the response to
the deterministic force acting on the particle, changing its velocity according
to the principle of total energy conservation, and random velocity reversals
governed by the distribution of waiting times. Because the motion stays
conservative, that is, on a constant-energy surface, our scenario is
fundamentally different from thermal motion in the same external potentials. In
particular, we present results for the velocity and position distributions for
single well potentials of different steepness. The observed dynamics with its
continuous velocity changes enriches the theory of L\'evy walk processes and
will be of use in a variety of systems, for which the particles are externally
confined.
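The sketch below gives one plausible reading of the described dynamics as a
toy simulation: waiting times drawn from a power law, deterministic
conservative motion in a single-well potential integrated between renewal
events, and a random velocity reversal at each renewal. The potential, the
parameters, and the reversal rule are assumptions; the authors' exact model
may differ.

import numpy as np

rng = np.random.default_rng(0)

def force(x):
    return -x**3                                   # F = -dV/dx for the single well V(x) = x^4/4

def levy_walk(x0=0.0, v0=1.0, alpha=1.5, t_total=200.0, dt=1e-3):
    x, v, t = x0, v0, 0.0
    traj = [(t, x, v)]
    while t < t_total:
        tau = rng.pareto(alpha) + 1.0              # heavy-tailed waiting time
        for _ in range(int(tau / dt)):             # leapfrog: energy approximately conserved
            v += 0.5 * dt * force(x)
            x += dt * v
            v += 0.5 * dt * force(x)
        t += tau
        if rng.random() < 0.5:
            v = -v                                 # random velocity reversal at the renewal
        traj.append((t, x, v))
    return np.array(traj)

trajectory = levy_walk()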
|
In this paper we discuss how the peculiar properties of twisted lattice QCD
at maximal twist can be employed to set up a consistent computational scheme in
which, despite the explicit breaking of chiral symmetry induced by the presence
of the Wilson and mass terms in the action, it is possible to completely bypass
the problem of wrong chirality and parity mixings in the computation of the
CP-conserving matrix elements of the $\Delta S=1,2$ effective weak Hamiltonian
and at the same time have a positive determinant for non-degenerate quarks as
well as full O($a$) improvement in on-shell quantities with no need of
improving the lattice action and the operators.
|
In this paper, we study the uniqueness of the direct decomposition of a toric
manifold. We first observe that the direct decomposition of a toric manifold as
\emph{algebraic varieties} is unique up to order of the factors. An
algebraically indecomposable toric manifold may nevertheless decompose as a
smooth manifold, and no criterion is known for two toric manifolds to be diffeomorphic,
so the unique decomposition problem for toric manifolds as \emph{smooth
manifolds} is highly nontrivial, and nothing seems to be known about it so far.
We prove that the answer is affirmative if the complex dimension of each
factor in the decomposition is less than or equal to two. A similar argument
shows that the direct decomposition of a smooth manifold into copies of
$\mathbb{C}P^1$ and simply connected closed smooth 4-manifolds with smooth
actions of $(S^1)^2$ is unique up to order of the factors.
|
Due to its semantic succinctness and novelty of expression, poetry is a great
test bed for semantic change analysis. However, so far there is a scarcity of
large diachronic corpora. Here, we provide a large corpus of German poetry
which consists of about 75k poems with more than 11 million tokens, with poems
ranging from the 16th to early 20th century. We then track semantic change in
this corpus by investigating the rise of tropes (`love is magic') over time and
detecting change points of meaning, which we find to occur particularly within
the German Romantic period. Additionally, through self-similarity, we
reconstruct literary periods and find evidence that the law of linear semantic
change also applies to poetry.
|