We study the dynamics of the discrete bicycle (Darboux, B\"acklund)
transformation of polygons in n-dimensional Euclidean space. This
transformation is a discretization of the continuous bicycle transformation,
recently studied by Foote, Levi, and Tabachnikov. We prove that the respective
monodromy is a Moebius transformation. Working toward establishing complete
integrability of the discrete bicycle transformation, we describe the monodromy
integrals and prove the Bianchi permutability property. We show that the
discrete bicycle transformation commutes with the recutting of polygons, a
discrete dynamical system, previously studied by V. Adler. We show that a
certain center, associated with a polygon and discovered by Adler, is preserved
under the discrete bicycle transformation. As a case study, we give a complete
description of the dynamics of the discrete bicycle transformation on plane
quadrilaterals.
|
We explicitly identify the algebra generated by symplectic Fourier-Deligne
transforms (i.e. convolution with Kazhdan-Laumon sheaves) acting on the
Grothendieck group of perverse sheaves on the basic affine space $G/U$,
answering a question originally raised by A. Polishchuk. We show it is
isomorphic to a distinguished subalgebra, studied by I. Marin, of the
generalized algebra of braids and ties (defined in Type $A$ by F. Aicardi and
J. Juyumaya and generalized to all types by Marin), providing a connection
between geometric representation theory and an algebra defined in the context
of knot theory. Our geometric interpretation of this algebra entails some
algebraic consequences: we obtain a short and type-independent geometric proof
of the braid relations for Juyumaya's generators of the Yokonuma-Hecke algebra
(previously proved case-by-case in types $A, D, E$ by Juyumaya and separately
for types $B, C, F_4, G_2$ by Juyumaya and S. S. Kannan), a natural candidate
for an analogue of a Kazhdan-Lusztig basis, and finally an explicit formula for
the dimension of Marin's algebra in Type $A_n$ (previously only known for $n
\leq 4$).
|
Algorithmic contract design is a new frontier in the intersection of
economics and computation, with combinatorial contracts being a core problem in
this domain. A central model within combinatorial contracts explores a setting
where a principal delegates the execution of a task, which can either succeed
or fail, to an agent. The agent can choose any subset among a given set of
costly actions, where every subset is associated with a success probability.
The principal incentivizes the agent through a contract that specifies the
payment upon success of the task.
A natural setting of interest is one with submodular success probabilities.
It is known that finding the optimal contract for the principal is
$\mathsf{NP}$-hard, but the hardness result is derived from the hardness of
demand queries. A major open problem is whether the hardness arises solely from
the hardness of demand queries, or if the complexity lies within the optimal
contract problem itself. In other words: does the problem retain its hardness,
even when provided access to a demand oracle? We resolve this question in the
affirmative, showing that any algorithm that computes the optimal contract for
submodular success probabilities requires an exponential number of demand
queries, thus settling the query complexity problem.
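For context, a demand query in this setting can be formalized as follows (notation ours, not the authors'): given a price vector, the oracle returns an action set maximizing value minus prices, and under a contract paying $\alpha$ on success the agent's best response is exactly such a demand set at prices $c_i/\alpha$.

```latex
% Demand oracle for a success-probability function f and prices p (our notation):
\[
  \mathrm{DEM}(p) \;\in\; \operatorname*{arg\,max}_{S \subseteq [n]}
  \Bigl( f(S) - \sum_{i \in S} p_i \Bigr),
  \qquad
  \operatorname*{arg\,max}_{S} \bigl( \alpha f(S) - c(S) \bigr)
  \;=\; \mathrm{DEM}(c/\alpha).
\]
```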
|
The Shiga toxins comprise a family of related protein toxins secreted by
certain types of bacteria. Shigella dysenteriae, some strains of Escherichia
coli, and other bacteria can express toxins that cause serious complications
during infection. Shiga toxin and the closely related Shiga-like toxins
represent a group of very similar cytotoxins that may play an important role in
diarrheal disease and hemolytic-uremic syndrome. Outbreaks caused by these
toxins have raised serious public health crises and caused economic losses.
These toxins have the same biologic activities and, according to recent studies, also
share the same binding receptor, globotriosyl ceramide (Gb3). Rapid detection
of food contamination is therefore relevant for the containment of food-borne
pathogens. The conventional methods to detect pathogens, such as
microbiological and biochemical identification are time-consuming and
laborious. The immunological or nucleic acid-based techniques require extensive
sample preparation and are not amenable to miniaturization for on-site
detection. Rapid, simple, and sensitive identification techniques that can be
deployed in the field with minimally sophisticated instrumentation are
therefore needed. Biosensors have shown tremendous
promise to overcome these limitations and are being aggressively studied to
provide rapid, reliable and sensitive detection platforms for such
applications.
|
The squared mass of a complex scalar field is dynamically driven negative by
its O(2)-invariant coupling to a real field slowly rolling down
a quadratic potential. The emergence of gapless excitations is studied in real
time simulations after spinodal instability occurs. Careful tests demonstrate
that the Goldstone modes appear almost instantly after the symmetry breaking is
over, much before thermal equilibrium is established.
|
In a 331 model in which the lepton masses arise from a scalar sextet, it is
possible to spontaneously break a global symmetry, implying a pseudoscalar
majoron-like Goldstone boson. This majoron does not mix with any other scalar
fields, and for this reason it does not couple, at tree level, to either the
charged leptons or the quarks. Moreover, its interaction with neutrinos is
diagonal. We also argue that there is a set of parameters for which the model
can be consistent with the invisible Z^0 width, and that heavy neutrinos can
decay sufficiently rapidly by majoron emission, having a lifetime shorter than
the age of the universe.
|
In this paper we consider a multidimensional semilinear reaction-diffusion
equation and obtain, at any arbitrary time, an approximate controllability
result between nonnegative states, using the reaction coefficient as the
control term, that is, via multiplicative controls.
|
We develop the formalism for computing gravitational corrections to vacuum
decay from de Sitter space as a sub-Planckian perturbative expansion.
Non-minimal coupling to gravity can be encoded in an effective potential. The
Coleman bounce continuously deforms into the Hawking-Moss bounce, until they
coincide for a critical value of the Hubble constant. As an application, we
reconsider the decay of the electroweak Higgs vacuum during inflation. Our
vacuum decay computation reproduces and improves bounds on the maximal
inflationary Hubble scale previously computed through statistical techniques.
|
Temporal networks are commonly used to represent systems where connections
between elements are active only for restricted periods of time, such as
networks of telecommunication, neural signal processing, biochemical reactions
and human social interactions. We introduce the framework of temporal motifs to
study the mesoscale topological-temporal structure of temporal networks in
which the events of nodes do not overlap in time. Temporal motifs are classes
of similar event sequences, where the similarity refers not only to topology
but also to the temporal order of the events. We provide a mapping from event
sequences to colored directed graphs that enables an efficient algorithm for
identifying temporal motifs. We discuss some aspects of temporal motifs,
including causality and null models, and present basic statistics of temporal
motifs in a large mobile call network.
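To make the mapping concrete, here is a minimal Python sketch (ours, not the authors' implementation) of turning an event sequence into a canonical key in which only the topology and the temporal order of events survive; a full implementation would additionally need canonical graph labeling and the connectivity conditions defining valid motifs.

```python
# Minimal sketch (our illustration): map a sequence of events (u, v, t) to a
# key in which node identities are forgotten and only topology plus temporal
# order of events remain.

def temporal_motif_key(events):
    """events: list of (source, target, timestamp) tuples."""
    # Sort events by time; ties would need extra care in a full implementation.
    ordered = sorted(events, key=lambda e: e[2])
    relabel = {}                      # original node id -> canonical id
    key = []
    for rank, (u, v, _) in enumerate(ordered):
        for node in (u, v):
            if node not in relabel:
                relabel[node] = len(relabel)
        # Each event becomes a directed edge "colored" by its temporal rank.
        key.append((relabel[u], relabel[v], rank))
    return tuple(key)

# Two event sequences with the same key realize the same temporal motif class:
print(temporal_motif_key([("a", "b", 1.0), ("b", "c", 2.5)]))
print(temporal_motif_key([("x", "y", 3.0), ("y", "z", 7.0)]))  # same key
```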
|
We establish connections between the concepts of Noetherian, regular
coherent, and regular n-coherent categories for Z-linear categories with
finitely many objects and the corresponding notions for unital rings. These
connections enable us to obtain a negative K-theory vanishing result, a
fundamental theorem, and a homotopy invariance result for the K-theory of
Z-linear categories.
|
We study the interaction-driven localization transition, which a recent
experiment in Ga_{1-x}Mn_xAs has shown to come along with multifractal
behavior of the local density of states (LDoS) and the intriguing persistence
of critical correlations close to the Fermi level. We show that the bulk of
these phenomena can be understood within a Hartree-Fock treatment of
disordered, Coulomb-interacting spinless fermions. A scaling analysis of the
LDoS correlation demonstrates multifractality with correlation dimension
d_2=1.57, which is significantly larger than at a non-interacting Anderson
transition. At the interaction-driven transition the states at the Fermi level
become critical, while the bulk of the spectrum remains delocalized up to
substantially stronger interactions. The mobility edge stays close to the Fermi
energy in a wide range of disorder strength, as the interaction strength is
further increased. The localization transition is concomitant with the
quantum-to-classical crossover in the shape of the pseudo-gap in the tunneling
density of states, and with the proliferation of metastable HF solutions that
suggest the onset of a glassy regime with poor screening properties.
|
For suitable bounded hyperconvex sets $\Omega$ in $\mathbb{C}^N$, in
particular the ball or the polydisk, we give estimates for the approximation
numbers of composition operators $C_\phi \colon H^2 (\Omega) \to H^2 (\Omega)$
when $\phi (\Omega)$ is relatively compact in $\Omega$, involving the
Monge-Amp\`ere capacity of $\phi (\Omega)$.
|
We introduce a Hom-type generalization of quantum groups, called
quasi-triangular Hom-bialgebras. They are non-associative and non-coassociative
analogues of Drinfel'd's quasi-triangular bialgebras, in which the
non-(co)associativity is controlled by a twisting map. A family of
quasi-triangular Hom-bialgebras can be constructed from any quasi-triangular
bialgebra, such as Drinfel'd's quantum enveloping algebras. Each
quasi-triangular Hom-bialgebra comes with a solution of the quantum
Hom-Yang-Baxter equation, which is a non-associative version of the quantum
Yang-Baxter equation. Solutions of the Hom-Yang-Baxter equation can be obtained
from modules of suitable quasi-triangular Hom-bialgebras.
|
The stability of the zero solution of a nonlinear Caputo fractional
differential equation with noninstantaneous impulses is studied using Lyapunov
like functions. The novelty of this paper is based on the new definition of the
derivative of a Lyapunov like function along the given noninstantaneous
impulsive fractional differential equations. On one side, this definition is a
natural generalization of the Caputo fractional Dini derivative of a function;
on the other side, it allows the assumption on Lyapunov functions to be
weakened to continuity. Appropriate examples illustrate the natural
relationship between the defined derivative of Lyapunov functions and the
Caputo derivative. Several sufficient conditions for uniform stability and
asymptotic uniform stability of the zero solution, based on the new definition
of the derivative of Lyapunov functions, are established. Some examples are given to
illustrate the results.
|
This article discusses some characteristic properties of global models,
particularly for the application of prediction, such as the approximation
property, the interpolation property, and the transmission property.
|
Economists are often interested in the mechanisms by which a particular
treatment affects an outcome. This paper develops tests for the ``sharp null of
full mediation'' that the treatment $D$ operates on the outcome $Y$ only
through a particular conjectured mechanism (or set of mechanisms) $M$. A key
observation is that if $D$ is randomly assigned and has a monotone effect on
$M$, then $D$ is a valid instrumental variable for the local average treatment
effect (LATE) of $M$ on $Y$. Existing tools for testing the validity of the
LATE assumptions can thus be used to test the sharp null of full mediation when
$M$ and $D$ are binary. We develop a more general framework that allows one to
test whether the effect of $D$ on $Y$ is fully explained by a potentially
multi-valued and multi-dimensional set of mechanisms $M$, allowing for
relaxations of the monotonicity assumption. We further provide methods for
lower-bounding the size of the alternative mechanisms when the sharp null is
rejected. An advantage of our approach relative to existing tools for mediation
analysis is that it does not require stringent assumptions about how $M$ is
assigned; on the other hand, our approach helps to answer different questions
than traditional mediation analysis by focusing on the sharp null rather than
estimating average direct and indirect effects. We illustrate the usefulness of
the testable implications in two empirical applications.
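In the binary case, the testable implication can be checked directly from sample frequencies. The sketch below (our illustration; names hypothetical) computes the largest violation of the Kitagawa-type instrument-validity inequalities that must hold under the sharp null; a formal test would assess this statistic against sampling noise.

```python
import numpy as np

def late_validity_violations(D, M, Y):
    """D, M: binary arrays (instrument and mechanism); Y: discrete outcomes.
    Returns the largest violation of the inequalities implied by the sharp
    null of full mediation (with monotone M); values > 0 beyond noise are
    evidence against the null."""
    viol = []
    for y in np.unique(Y):
        # P(Y=y, M=1 | D=1) must be >= P(Y=y, M=1 | D=0):
        p1 = np.mean((Y == y) & (M == 1) & (D == 1)) / np.mean(D == 1)
        p0 = np.mean((Y == y) & (M == 1) & (D == 0)) / np.mean(D == 0)
        viol.append(p0 - p1)
        # P(Y=y, M=0 | D=0) must be >= P(Y=y, M=0 | D=1):
        q1 = np.mean((Y == y) & (M == 0) & (D == 1)) / np.mean(D == 1)
        q0 = np.mean((Y == y) & (M == 0) & (D == 0)) / np.mean(D == 0)
        viol.append(q1 - q0)
    return max(viol)
```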
|
Let $V$ be an algebraic variety defined over $\mathbb R$, and $V_{top}$ the
space of its complex points. We compare the algebraic Witt group $W(V)$ of
symmetric bilinear forms on vector bundles over $V$, with the topological Witt
group $WR(V_{top})$ of symmetric forms on Real vector bundles over $V_{top}$ in
the sense of Atiyah, especially when $V$ is 2-dimensional. To do so, we develop
topological tools to calculate $WR(V_{top})$, and to measure the difference
between $W(V)$ and $WR(V_{top})$.
|
It is frequently suggested that predictions made by game theory could be
improved by considering computational restrictions when modeling agents. Under
the supposition that players in a game may desire to balance maximization of
payoff with minimization of strategy complexity, Rubinstein and co-authors
studied forms of Nash equilibrium where strategies are maximally simplified in
that no strategy can be further simplified without sacrificing payoff. Inspired
by this line of work, we introduce a notion of equilibrium whereby strategies
are also maximally simplified, but with respect to a simplification procedure
that is more careful in that a player will not simplify if the simplification
incents other players to deviate. We study such equilibria in two-player
machine games in which players choose finite automata that succinctly represent
strategies for repeated games; in this context, we present techniques for
establishing that an outcome is at equilibrium and present results on the
structure of equilibria.
|
Intrinsically faint comets in nearly-parabolic orbits with perihelion
distances much smaller than 1 AU exhibit strong propensity for suddenly
disintegrating at a time not long before perihelion, as shown by Bortle (1991).
Evidence from available observations of such comets suggests that the
disintegration event usually begins with an outburst and that the debris is
typically a massive cloud of dust grains that survives over a limited period of
time. Recent CCD observations revealed, however, that a sizable fragment could
also survive, resembling a devolatilized aggregate of loosely-bound dust
grains that may have exotic shape, peculiar rotational properties, and
extremely high porosity, all acquired in the course of the disintegration
event. Given that the brightness of 1I/`Oumuamua's parent could not possibly
equal or exceed the Bortle survival limit, there are reasons to believe that it
suffered the same fate as do the frail comets. The post-perihelion observations
then do not refer to the object that was entering the inner Solar System in
early 2017, as is tacitly assumed, but to its debris. Comparison with C/2017 S3
and C/2010 X1 suggests that, as a monstrous fluffy dust aggregate released in
the recent explosive event, `Oumuamua should be of strongly irregular shape,
tumbling, not outgassing, and subjected to effects of solar radiation pressure,
consistent with observation. The unknown timing of the disintegration event may
compromise studies of the parent's home stellar system. Limited search for
possible images of the object to constrain the time of the (probably minor)
outburst is recommended.
|
Starting with the braided quantum group $\operatorname{SU}_q(2)$ for a
complex deformation parameter $q$ we perform the construction of the quotient
$\operatorname{SU}_q(2)/\mathbb{T}$ which serves as a model of a quantum
sphere. Then we follow the reasoning of Podle\'{s} who for real $q$ classified
quantum spaces with the action of $\operatorname{SU}_q(2)$ with appropriate
spectral properties. These properties can also be expressed in the context of
the braided quantum $\operatorname{SU}_q(2)$ (with complex $q$) and we find
that they lead to precisely the same family of quantum spaces as found by
Podle\'{s} for the real parameter $|q|$.
|
Quantum annealing has great promise in leveraging quantum mechanics to solve
combinatorial optimisation problems. However, to realize this promise to its
fullest extent we must appropriately leverage the underlying physics. In this
spirit, I examine how the well known tendency of quantum annealers to seek
solutions where more quantum fluctuations are allowed can be used to trade off
optimality of the solution to a synthetic problem for the ability to have a
more flexible solution, where some variables can be changed at little or no
cost. I demonstrate this tradeoff experimentally using the reverse annealing
feature of a D-Wave Systems QPU, both for problems composed of all binary
variables, and those containing some higher-than-binary discrete variables. I
further demonstrate how local controls on the qubits can be used to control the
levels of fluctuations and guide the search. I discuss places where leveraging
this tradeoff could be practically important, namely in hybrid algorithms where
some penalties cannot be directly implemented on the annealer and provide some
proof-of-concept evidence of how these algorithms could work.
|
We propose a conceptual distinction between hard and soft realizations of
deconfinement from nuclear to quark matter. In the high density region of Hard
Deconfinement the repulsive hard cores of baryons overlap each other and bulk
thermodynamics is dominated by the core properties that can be experimentally
accessed in high-energy scattering experiments. We find that the equation of
state estimated from a single baryon core is fairly consistent with those
empirically known from neutron star phenomenology. We next discuss a novel
concept of Soft Deconfinement, characterized by quantum percolation of quark
wave-functions, at densities lower than the threshold for Hard Deconfinement.
We make a brief review of quantum percolation in the context of nuclear and
quark matter and illustrate a possible scenario of quark deconfinement at high
baryon densities.
|
Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in
generating high-quality images based on simple text descriptions. However, as
is common with many Deep Learning (DL) models, DMs are subject to a lack of
robustness. While there are attempts to evaluate the robustness of T2I DMs as a
binary or worst-case problem, they cannot answer how robust the model is in
general whenever an adversarial example (AE) can be found. In this study, we
first introduce a probabilistic notion of T2I DMs' robustness; and then
establish an efficient framework, ProTIP, to evaluate it with statistical
guarantees. The main challenges stem from: i) the high computational cost of
the generation process; and ii) determining if a perturbed input is an AE
involves comparing two output distributions, which is fundamentally harder
compared to other DL tasks like classification where an AE is identified upon
misprediction of labels. To tackle the challenges, we employ sequential
analysis with efficacy and futility early stopping rules in the statistical
testing for identifying AEs, and adaptive concentration inequalities to
dynamically determine the "just-right" number of stochastic perturbations
whenever the verification target is met. Empirical experiments validate the
effectiveness and efficiency of ProTIP over common T2I DMs. Finally, we
demonstrate an application of ProTIP to rank commonly used defence methods.
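As a rough illustration of the statistical core (ours; ProTIP's actual stopping rules and inequalities differ in detail), the following sketch estimates the probability that a random perturbation yields an AE, stopping early once an anytime-valid Hoeffding-style interval clears a robustness threshold:

```python
# Hedged sketch (our simplification, not the ProTIP code): estimate
# p = Pr[perturbed input is an AE] by sampling, stopping as soon as an
# adaptive confidence interval separates p from a threshold.
import math, random

def sequential_robustness(is_ae, threshold, delta=0.05, max_n=10_000):
    """is_ae: callable drawing a random perturbation, returning True if AE."""
    hits = 0
    for n in range(1, max_n + 1):
        hits += bool(is_ae())
        p_hat = hits / n
        # Anytime-valid Hoeffding-style radius via a union bound over n;
        # the interval shrinks as samples accrue.
        eps = math.sqrt(math.log(2 * n * (n + 1) / delta) / (2 * n))
        if p_hat - eps > threshold:   # efficacy stop: model is non-robust
            return "not robust", p_hat, n
        if p_hat + eps < threshold:   # futility stop: robust at this level
            return "robust", p_hat, n
    return "undecided", p_hat, max_n

# Toy usage with a simulated 2% AE rate:
print(sequential_robustness(lambda: random.random() < 0.02, threshold=0.10))
```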
|
We investigate the dynamics of highly polydispersed finite granular chains.
From the spatio-spectral properties of small vibrations, we identify which
particular single-particle displacements lead to energy localization. Then, we
address a fundamental question: Do granular nonlinearities lead to chaotic
dynamics and if so, does chaos destroy this energy localization? Our numerical
simulations show that for moderate nonlinearities, although the overall system
behaves chaotically, it can exhibit long lasting energy localization for
particular single particle excitations. On the other hand, for sufficiently
strong nonlinearities, connected with contact breaking, the granular chain
reaches energy equipartition and an equilibrium chaotic state, independent of
the initial position excitation.
|
It is now recognised that the traditional method of calculating the LSR
fails. We find an improved estimate of the LSR by making use of the larger and
more accurate database provided by XHIP and repeating our preferred analysis
from Francis & Anderson (2009a). We confirm an unexpected high value of $U_0$
by calculating the mean for stars with orbits sufficiently inclined to the
Galactic plane that they do not participate in bulk streaming motions. Our best
estimate of the solar motion with respect to the LSR is $(U_0, V_0, W_0) =
(14.1 \pm 1.1, 14.6 \pm 0.4, 6.9 \pm 0.1)$ km s$^{-1}$.
|
As compared to a large spectrum of performance optimizations, relatively
little effort has been dedicated to optimize other aspects of embedded
applications such as memory space requirements, power, real-time
predictability, and reliability. In particular, many modern embedded systems
operate under tight memory space constraints. One way of satisfying these
constraints is to compress executable code and data as much as possible. While
research on code compression has studied efficient hardware- and software-based
compression strategies, many of these techniques do not take application behavior into
account, that is, the same compression/decompression strategy is used
irrespective of the application being optimized. This paper presents a code
compression strategy based on control flow graph (CFG) representation of the
embedded program. The idea is to start with a memory image wherein all basic
blocks are compressed, and decompress only the blocks that are predicted to be
needed in the near future. When the current access to a basic block is over,
our approach also decides the point at which the block could be compressed. We
propose several compression and decompression strategies that try to reduce
memory requirements without excessively increasing the original instruction
cycle counts.
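A toy sketch of the proposed management policy (ours; the paper targets embedded memory images, not Python objects): blocks live compressed, a block plus its CFG-predicted successors are decompressed on access, and a block is recompressed (here simply evicted) at the release point.

```python
import zlib

class CompressedImage:
    def __init__(self, blocks, cfg):
        """blocks: {block_id: code_bytes}; cfg: {block_id: [successor ids]}."""
        self.store = {b: zlib.compress(code) for b, code in blocks.items()}
        self.cfg = cfg
        self.resident = {}          # currently decompressed blocks

    def fetch(self, block):
        if block not in self.resident:
            self.resident[block] = zlib.decompress(self.store[block])
        # Prediction heuristic: eagerly decompress the likely successors.
        for succ in self.cfg.get(block, []):
            if succ not in self.resident:
                self.resident[succ] = zlib.decompress(self.store[succ])
        return self.resident[block]

    def release(self, block):
        # The point at which the block "could be compressed" again.
        self.resident.pop(block, None)
```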
|
A novel framework for closed-loop control of turbulent flows is tested in an
experimental mixing layer flow. This framework, called Machine Learning Control
(MLC), provides a model-free method of searching for the best function, to be
used as a control law in closed-loop flow control. MLC is based on genetic
programming, a function optimization method of machine learning. In this
article, MLC is benchmarked against classical open-loop actuation of the mixing
layer. Results show that this method is capable of producing sensor-based
control laws which can rival or surpass the best open-loop forcing, and be
robust to changing flow conditions. Additionally, MLC can detect non-linear
mechanisms present in the controlled plant, and exploit them to find a better
type of actuation than the best periodic forcing.
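The following is a drastically simplified caricature (ours) of the genetic-programming search: candidate sensor-based control laws are scored by a cost functional on a placeholder plant, with selection plus random re-seeding standing in for full expression-tree crossover and mutation.

```python
import random, math

OPS = [lambda a, b: a + b, lambda a, b: a * b, lambda a, b: math.tanh(a - b)]

def random_law():
    # A candidate control law b = K(s) built from random gains and operators.
    op = random.choice(OPS)
    g1, g2 = random.uniform(-1, 1), random.uniform(-1, 1)
    return lambda s: op(g1 * s[0], g2 * s[1])   # s: sensor readings

def cost(K):
    # Placeholder plant: penalize deviation of a noisy 2-sensor signal.
    J, s = 0.0, [0.5, -0.2]
    for _ in range(100):
        b = K(s)                                 # actuation from control law
        s = [0.9 * s[0] - 0.1 * b + random.gauss(0, 0.01),
             0.9 * s[1] + 0.1 * b + random.gauss(0, 0.01)]
        J += s[0] ** 2 + s[1] ** 2 + 1e-3 * b ** 2
    return J

population = [random_law() for _ in range(50)]
for generation in range(20):
    population.sort(key=cost)                    # selection
    population = population[:25] + [random_law() for _ in range(25)]
print("best cost:", cost(population[0]))
```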
|
We propose a framework for sensitivity analysis of linear programs (LPs) in
minimization form, allowing for simultaneous perturbations in the objective
coefficients and right-hand sides, where the perturbations are modeled in a
compact, convex uncertainty set. This framework unifies and extends multiple
approaches for LP sensitivity analysis in the literature and has close ties to
worst-case linear optimization and two-stage adaptive optimization. We define
the minimum (best-case) and maximum (worst-case) LP optimal values, p- and p+,
over the uncertainty set, and we discuss issues of finiteness, attainability,
and computational complexity. While p- and p+ are difficult to compute in
general, we prove that they equal the optimal values of two separate, but
related, copositive programs. We then develop tight, tractable conic
relaxations to provide lower and upper bounds on p- and p+, respectively. We
also develop techniques to assess the quality of the bounds, and we validate
our approach computationally on several examples from--and inspired by--the
literature. We find that the bounds on p- and p+ are very strong in practice
and, in particular, are at least as strong as known results for specific cases
from the literature.
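In our notation (not necessarily the paper's), for a nominal LP with data $(c, b)$ and optimal value $v(c, b)$, the two quantities are simply the extremes of the optimal value over the uncertainty set $U$:

```latex
% v(c', b') = optimal value of  min { c'^T x : A x \ge b', x \ge 0 }  (our notation)
\[
  p^{-} \;=\; \min_{(\Delta c,\,\Delta b)\in U} v(c+\Delta c,\; b+\Delta b),
  \qquad
  p^{+} \;=\; \max_{(\Delta c,\,\Delta b)\in U} v(c+\Delta c,\; b+\Delta b).
\]
```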
|
We describe an exercise of using Big Data to predict the Michigan Consumer
Sentiment Index, a widely used indicator of the state of confidence in the US
economy. We carry out the exercise from a pure ex ante perspective. We use the
methodology of algorithmic text analysis of an archive of brokers' reports over
the period June 2010 through June 2013. The search is directed by the
social-psychological theory of agent behaviour, namely conviction narrative
theory. We compare one month ahead forecasts generated this way over a 15 month
period with the forecasts reported for the consensus predictions of Wall Street
economists. The former give much more accurate predictions, getting the
direction of change correct on 12 of the 15 occasions compared to only 7 for
the consensus predictions. We show that the approach retains significant
predictive power even over a four month ahead horizon.
|
Optical turbulence modelling and simulation are crucial for developing
astronomical ground-based instruments, laser communication, laser metrology, or
any application where light propagates through a turbulent medium. In the
context of spectrum-based optical turbulence Monte-Carlo simulations, we
present an alternative approach to the methods based on the Fast Fourier
Transform (FFT) using a quasi-random frequency sampling heuristic. This
approach provides complete control over the spectral information expressed in
the simulated measurable quantities, without the drawbacks encountered with FFT-based
methods such as high-frequency aliasing, low-frequency under-sampling, and
static sampling statistics. The method's heuristics, implementation, and an
application example from the study of differential piston fluctuations are
discussed.
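A minimal one-dimensional sketch of the idea (ours; the paper's heuristic and weighting differ): synthesize a random process with a prescribed power spectrum by summing cosines at quasi-randomly sampled frequencies, so the frequency support is not tied to an FFT grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(x, spectrum, f_lo, f_hi, n_modes=2048):
    # Log-uniform frequency sampling covers low frequencies that an FFT grid
    # of the same size would miss; random draws avoid static sampling stats.
    f = np.exp(rng.uniform(np.log(f_lo), np.log(f_hi), n_modes))
    df = f * (np.log(f_hi) - np.log(f_lo)) / n_modes   # importance weights
    amp = np.sqrt(2.0 * spectrum(f) * df)              # variance-preserving
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    return (amp * np.cos(2 * np.pi * np.outer(x, f) + phase)).sum(axis=1)

# Example: Kolmogorov-like power law S(f) ~ f^(-8/3) over four decades.
x = np.linspace(0.0, 10.0, 1000)
u = synthesize(x, lambda f: f ** (-8.0 / 3.0), f_lo=1e-3, f_hi=10.0)
```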
|
The Dark Energy Survey (DES; operations 2009-2015) will address the nature of
dark energy using four independent and complementary techniques: (1) a galaxy
cluster survey over 4000 deg2 in collaboration with the South Pole Telescope
Sunyaev-Zel'dovich effect mapping experiment, (2) a cosmic shear measurement
over 5000 deg2, (3) a galaxy angular clustering measurement within redshift
shells to redshift=1.35, and (4) distance measurements to 1900 supernovae Ia.
The DES will produce 200 TB of raw data in four bands. These data will be
processed into science-ready images and catalogs and co-added into deeper,
higher quality images and catalogs. In total, the DES dataset will exceed 1 PB,
including a 100 TB catalog database that will serve as a key science analysis
tool for the astronomy/cosmology community. The data rate, volume, and duration
of the survey require a new type of data management (DM) system that (1) offers
a high degree of automation and robustness and (2) leverages the existing high
performance computing infrastructure to meet the project's DM targets. The DES
DM system consists of (1) a grid-enabled, flexible and scalable middleware
developed at NCSA for the broader scientific community, (2) astronomy modules
that build upon community software, and (3) a DES archive to support automated
processing and to serve DES catalogs and images to the collaboration and the
public. In the recent DES Data Challenge 1 we deployed and tested the first
version of the DES DM system, successfully reducing 700 GB of raw simulated
images into 5 TB of reduced data products and cataloguing 50 million objects
with calibrated astrometry and photometry.
|
It is demonstrated that the ionization events in the vicinity of a small
floating grain can increase the ion flux to its surface. In this respect the
effect of electron impact ionization is fully analogous to that of the
ion-neutral resonant charge exchange collisions. Both processes create slow
ions which cannot overcome the grain's electrical attraction and eventually fall onto its
surface. The relative importance of ionization and ion-neutral collisions is
roughly given by the ratio of the corresponding frequencies. We have evaluated
this ratio for neon and argon plasmas to demonstrate that ionization enhanced
ion collection can indeed be an important factor affecting grain charging in
realistic experimental conditions.
|
We obtain a Beale-Kato-Majda-type criterion with optimal frequency and
temporal localization for the 3D Navier-Stokes equations. Compared to previous
results our condition only requires the control of Fourier modes below a
critical frequency, whose value is explicit in terms of time scales. As
applications it yields a strongly frequency-localized condition for regularity
in the space $B^{-1}_{\infty,\infty}$ and also a lower bound on the decay
rate of $L^p$ norms, $2\leq p <3$, for possible blowup solutions. The proof
relies on new estimates for the cutoff dissipation and energy at small time
scales which might be of independent interest.
|
Monolithic applications used to be considered the standard for software
development. However, due to the rapid evolution of technology and the
increasing demand for scalability and flexibility, these applications have
become increasingly inadequate for contemporary environments. In response to
these challenges, developers have begun to adopt a microservice (MS)
architecture, which offers a modular approach to software creation. However,
this transition requires rethinking the enabling system to meet the new
requirements. Two MS architectures can be deployed: a centralized or
decentralized architecture. Based on the requirements of the application's
users, a centralized authorization management architecture was chosen. The
purpose of this study is to explain the migration from a Role-Based Access
Control (RBAC) authorization system to a centralized microservice authorization
architecture. The migration is carried out in two stages: 1) creation of an
authorization microservice, and 2) abandonment of RBAC.
|
A homotopy commutative algebra, or $C_{\infty}$-algebra, is defined via the
Tornike Kadeishvili homotopy transfer theorem on the vector space generated by
the set of Young tableaux with self-conjugated Young diagrams. We prove that
this $C_{\infty}$-algebra is generated in degree 1 by the binary and the
ternary operations.
|
The Galactic Halo is a key target for indirect dark matter detection. The
High Altitude Water Cherenkov (HAWC) observatory is a high-energy (~300 GeV to
>100 TeV) gamma-ray detector located in central Mexico. HAWC operates via the
water Cherenkov technique and has both a wide field of view of 2 sr and a >95%
duty cycle, making it ideal for analyses of highly extended sources. We made
use of these properties of HAWC and a new background-estimation technique
optimized for extended sources to probe a large region of the Galactic Halo for
dark matter signals. With this approach, we set improved constraints on dark
matter annihilation and decay between masses of 10 and 100 TeV. Due to the
large spatial extent of the HAWC field of view, these constraints are robust
against uncertainties in the Galactic dark matter spatial profile.
|
The generation of input files for density functional theory (DFT) programs
must often be done manually by researchers. If one wishes to produce
maximally localized Wannier functions (MLWFs), the calculation involves
several separate files that must be formatted correctly in order for the
programs to work properly. Many of the inputs are repeated throughout the files
and can be easily automated. In this work, a program is presented to generate
all of the input files needed to produce Wannier functions with Wannier90
starting from open source DFT programs such as Quantum Espresso, Abinit, and
Siesta. In addition, the input files for WannierTools are also included for
those who wish to produce surface Green's functions for the generation of
surface-state bands. The program presented allows users new to DFT to use
these codes with minimal understanding of the parameters needed to produce good
results; in addition, it allows advanced DFT users to employ it for
high-throughput Wannier calculations.
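As a flavor of the kind of automation described (our sketch, not the released program), the following emits a minimal Wannier90 .win file from a few structural parameters; a real run would also need num_bands, a kpoints block matching mp_grid, and the DFT-side interface files.

```python
def write_win(path, num_wann, cell, atoms, mp_grid, projections):
    """cell: 3x3 lattice vectors (Angstrom); atoms: [(symbol, frac_coords)]."""
    lines = [f"num_wann = {num_wann}", "", "begin unit_cell_cart"]
    lines += ["  {:12.8f} {:12.8f} {:12.8f}".format(*v) for v in cell]
    lines += ["end unit_cell_cart", "", "begin atoms_frac"]
    lines += ["  {} {:10.6f} {:10.6f} {:10.6f}".format(s, *xyz) for s, xyz in atoms]
    lines += ["end atoms_frac", "", "mp_grid = {} {} {}".format(*mp_grid), ""]
    lines += ["begin projections"] + [f"  {p}" for p in projections]
    lines += ["end projections"]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Toy example: bulk silicon with sp3 projections.
write_win("si.win", num_wann=8,
          cell=[[0.0, 2.715, 2.715], [2.715, 0.0, 2.715], [2.715, 2.715, 0.0]],
          atoms=[("Si", (0.0, 0.0, 0.0)), ("Si", (0.25, 0.25, 0.25))],
          mp_grid=(4, 4, 4), projections=["Si : sp3"])
```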
|
We investigate the quasibound states of charged massive scalar fields in the
Kerr-Newman black hole spacetime using a recently developed approach based on
the polynomial conditions of the Heun functions. We calculate the
resonant frequencies related to the spectrum of quasibound states, as well as
its corresponding angular and radial wave eigenfunctions. We also analyze the
instability of the system. These results are particularized to the cases of
Schwarzschild and Kerr black holes. Additionally, we compare our analytical
results with the numerical ones known in the literature. Finally, we apply the
obtained results to compute the characteristic times of growth and decay of
bosonic particles around a supermassive black hole situated at the center of
the M87 galaxy.
|
Nanoelectromechanical systems are characterized by an intimate connection
between electronic and mechanical degrees of freedom. Due to the nanoscopic
scale, current flowing through the system noticeably impacts the vibrational
dynamics of the device, complementing the effect of the vibrational modes on
the electronic dynamics. We employ the scattering matrix approach to quantum
transport to develop a unified theory of nanoelectromechanical systems out of
equilibrium. For a slow mechanical mode, the current can be obtained from the
Landauer-B\"uttiker formula in the strictly adiabatic limit. The leading
correction to the adiabatic limit reduces to Brouwer's formula for the current
of a quantum pump in the absence of a bias voltage. The principal results of
the present paper are scattering matrix expressions for the current-induced
forces acting on the mechanical degrees of freedom. These forces control the
Langevin dynamics of the mechanical modes. Specifically, we derive expressions
for the (typically nonconservative) mean force, for the (possibly negative)
damping force, an effective "Lorentz" force which exists even for time reversal
invariant systems, and the fluctuating Langevin force originating from Nyquist
and shot noise of the current flow. We apply our general formalism to several
simple models which illustrate the peculiar nature of the current-induced
forces. Specifically, we find that in out of equilibrium situations the current
induced forces can destabilize the mechanical vibrations and cause limit-cycle
dynamics.
|
Brownian motion of free particles on curved surfaces is studied by means of
the Langevin equation written in Riemann normal coordinates. In the diffusive
regime we find the same physical behavior as the one described by the diffusion
equation on curved manifolds [J. Stat. Mech. (2010) P08006]. Therefore, we use
the latter in order to analytically investigate the whole diffusive dynamics in
compact geometries, namely, the circle and the sphere. Our findings are
corroborated by means of Brownian dynamics computer simulations based on a
heuristic adaptation of the Ermak-McCammon algorithm to the Langevin equation
along the curves, as well as on the standard algorithm, but for particles
subjected to an external harmonic potential, deep and narrow, that possesses a
"Mexican hat" shape, whose minima define the desired surface. The short-time
diffusive dynamics is found to occur on the tangential plane. Moreover, at long
times in compact geometries, the mean-square displacement moves towards a
saturation value given only by the geometrical properties of the surface.
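A heuristic sketch in the spirit of the adapted algorithm (ours, purely illustrative): an overdamped Brownian step on the unit sphere is taken in the local tangent plane and then projected back onto the surface, so short times reproduce flat diffusion while the geodesic mean-square displacement saturates at long times.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere_step(r, D, dt):
    """r: position on the unit sphere; D: diffusion coefficient."""
    xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), 3)
    xi -= np.dot(xi, r) * r                 # keep the kick tangential
    r_new = r + xi
    return r_new / np.linalg.norm(r_new)    # project back onto the sphere

r0 = np.array([0.0, 0.0, 1.0])
r = r0.copy()
for _ in range(10_000):
    r = sphere_step(r, D=1.0, dt=1e-4)
# Geodesic displacement is bounded by pi on the compact surface.
print("geodesic displacement:", np.arccos(np.clip(np.dot(r0, r), -1, 1)))
```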
|
One of the widespread solutions for non-rigid tracking has a nested-loop
structure: with Gauss-Newton to minimize a tracking objective in the outer
loop, and Preconditioned Conjugate Gradient (PCG) to solve a sparse linear
system in the inner loop. In this paper, we employ learnable optimizations to
improve tracking robustness and speed up solver convergence. First, we upgrade
the tracking objective by integrating an alignment data term on deep features
which are learned end-to-end through CNN. The new tracking objective can
capture the global deformation, which helps Gauss-Newton escape local
minima, leading to robust tracking of large non-rigid motions. Second, we
bridge the gap between the preconditioning technique and learning method by
introducing a ConditionNet which is trained to generate a preconditioner such
that PCG can converge within a small number of steps. Experimental results
indicate that the proposed learning method converges faster than the original
PCG by a large margin.
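For reference, the inner-loop solver being accelerated is standard preconditioned conjugate gradient; in the sketch below (ours), the learned preconditioner would simply replace the hand-crafted apply_Minv argument (here Jacobi, as a stand-in for ConditionNet's output).

```python
import numpy as np

def pcg(A, b, apply_Minv, x0=None, tol=1e-8, max_iter=100):
    """Solve A x = b for symmetric positive-definite A with preconditioning."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, apply_Minv=lambda r: r / np.diag(A))  # Jacobi preconditioner
print(x)  # ~ [0.0909, 0.6364]
```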
|
The purpose of this paper is to study optimal control of conditional
McKean-Vlasov (mean-field) stochastic differential equations with jumps
(conditional McKean-Vlasov jump diffusions, for short). To this end, we first
prove a stochastic Fokker-Planck equation for the conditional law of the
solution of such equations. Combining this equation with the original state
equation, we obtain a Markovian system for the state and its conditional law.
Furthermore, we apply this to formulate a Hamilton-Jacobi-Bellman (HJB)
equation for the optimal control of conditional McKean-Vlasov jump diffusions.
Then we study the situation when the law is absolutely continuous with respect
to Lebesgue measure. In that case the Fokker-Planck equation reduces to a
stochastic partial differential equation (SPDE) for the Radon-Nikodym
derivative of the conditional law. Finally we apply these results to solve
explicitly the following problems: -Linear-quadratic optimal control of
conditional stochastic McKean-Vlasov jump diffusions. -Optimal consumption from
a cash flow modelled as a conditional stochastic McKean-Vlasov differential
equation with jumps.
|
First-order convergence in time and space is proved for a fully discrete
semi-implicit finite element method for the two-dimensional Navier--Stokes
equations with $L^2$ initial data in convex polygonal domains, without extra
regularity assumptions or grid-ratio conditions. The proof utilises the
smoothing properties of the Navier--Stokes equations, an appropriate duality
argument, and the smallness of the numerical solution in the discrete
$L^2(0,t_m;H^1)$ norm when $t_m$ is smaller than some constant. Numerical
examples are provided to support the theoretical analysis.
|
Doppler shifts of the Fe I spectral line at lambda5250 Angstroms from the
full solar disk obtained over the period 1986 to 2009 are analyzed to determine
the circulation velocity of the solar surface along meridional planes.
Simultaneous measurements of the Zeeman splitting of this line are used to
obtain measurements of the solar magnetic field that are used to select low
field points and impose corrections for the magnetically induced Doppler shift.
The data utilized is from a new reduction that preserves the full spatial
resolution of the original observations so that the circulation flow can be
followed to latitudes of 80 degrees N/S. The deduced meridional flow is shown
to differ from the circulation velocities derived from magnetic pattern
movements. A reversed circulation pattern is seen in polar regions for three
successive solar minima. A surge in circulation velocity at low latitudes is
seen during the rising phases of cycles 22 and 23.
|
Gravitational spin-orbit interactions induce a relativistic capillary effect
along open magnetic flux-tubes, that join the event horizon of a spinning black
hole to infinity. It launches a leptonic outflow from electron-positron pairs
created near the black hole, which terminates in an ultra-relativistic Alfv\'en
wave. Upstream to infinity, it maintains a clean linear accelerator for baryons
picked-up from an ionized ambient environment. We apply it to the origin of
UHECRs and to spectral energy correlations in cosmological gamma-ray bursts.
The former is identified with the Fermi-level of the black hole event horizon,
the latter with a correlation $E_pT_{90}^{1/2}\simeq E_\gamma$ in HETE-II and
Swift data.
|
The mixing-induced CP asymmetries in $B_d \to J/\psi K_S$ and $B_s \to J/\psi
\phi$ are essential to detect or constrain new physics in the $B_d\! -
\overline{\!B}{}_d$ and $B_s\! - \overline{\!B}{}_s$ mixing amplitudes,
respectively. To this end one must control the penguin contributions to the
decay amplitudes, which affect the extraction of fundamental CP phases from the
measured CP asymmetries. Although the "penguin pollution" is doubly
Cabibbo-suppressed, it could compete in size with current experimental errors.
In this talk I present a calculation of the penguin contributions treating QCD
effects with soft-collinear factorisation and compare method and results with
the alternative approach employing flavour-SU(3) symmetry. As a novel feature,
I present results for the penguin pollution in $b\to c\overline c d$ modes.
|
A strategy is proposed to excite particles from a Fermi sea in a noise-free
fashion by electromagnetic pulses with realistic parameters. We show that by
using quantized pulses of simple form one can suppress the particle-hole pairs
which are created by a generic excitation. The resulting many-body states are
characterized by one or several particles excited above the Fermi surface
accompanied by no disturbance below it. These excitations carry charge which is
integer for noninteracting electron gas and fractional for Luttinger liquid.
The operator algebra describing these excitations is derived, and a method of
their detection which relies on noise measurement is proposed.
|
Using the circle method, we count integer points on complete intersections in
biprojective space in boxes of different side length, provided the number of
variables is large enough depending on the degree of the defining equations and
certain loci related to the singular locus. Having established these
asymptotics we deduce asymptotic formulas for rational points on such varieties
with respect to the anticanonical height function. In particular, we establish
a conjecture of Manin for certain smooth hypersurfaces in biprojective space of
sufficiently large dimension.
|
In this paper, the reaction of electron-positron annihilation into
$\Lambda_c^+\bar{\Lambda}_c^-$ is investigated. The
$\Lambda_c^+\bar{\Lambda}_c^-$ scattering amplitudes are obtained by solving
the Lippmann-Schwinger equation. The contact, annihilation, and two
pseudoscalar-exchange potentials are taken into account in the spirit of the
chiral effective field theory. The amplitudes of $e^+e^-\to
\Lambda_c^+\bar{\Lambda}_c^-$ are constructed by the distorted wave Born
approximation method, with the final state interactions of the
$\Lambda_c^+\bar{\Lambda}_c^-$ re-scattering implemented. By fitting to the
experimental data, the unknown couplings are fixed, and high-quality solutions
are obtained. With these amplitudes, the individual electromagnetic form
factors in the timelike region, $G_E^{\Lambda_c}$, $G_M^{\Lambda_c}$, and their
ratio, $G_E^{\Lambda_c}/G_M^{\Lambda_c}$, are extracted. Both modulus and
phases are predicted. These individual electromagnetic form factors reveal new
insights into the properties of the $\Lambda_c$. The separated contributions of
the Born term, contact, annihilation, as well as the two pseudoscalar exchange
potentials to the electromagnetic form factors are isolated. It is found that
the Born term dominates the whole energy region. The contact term plays a
crucial role in the enhancement near the threshold, and the annihilation term
is essential in generating the fluctuation of the electromagnetic form factors.
|
In this note, we prove that under some conditions, certain products of
integers related to Gauss factorials are always quadratic residues.
|
This paper compares the Anderson-Darling and some Eicker-Jaeschke statistics
to the classical unweighted Kolmogorov-Smirnov statistic. The goal is to
provide a quantitative comparison of such tests and to study real possibilities
of using them to detect departures from the hypothesized distribution that
occur in the tails. This contribution covers the case when under the
alternative a moderately large portion of probability mass is allocated towards
the tails. It is demonstrated that the approach allows for tractable, analytic
comparison between the given test and the benchmark, and for reliable
quantitative evaluation of weighted statistics. Finite sample results
illustrate the proposed approach and confirm the theoretical findings. In the
course of the investigation we also prove that a slight and natural
modification of the solution proposed by Borovkov and Sycheva (1968) leads to a
statistic which is a member of Eicker-Jaeschke class and can be considered an
attractive competitor of the very popular supremum-type Anderson-Darling
statistic.
|
We have investigated the field-induced changes in both the magnetization and
the polarization in a ferromagnet/insulator/ferroelectric (FM/I/FE) multilayer,
following both the Stoner-Wohlfarth (SW) model and the Landau theory. It is
found that, with the stresses introduced in the FM/I/FE structure by the
fields, both the magnetization and the polarization states can be significantly
modified, and their combination can realize multiple states. These results
demonstrate the feasibility of combining spintronics and ferroelectrics into
multiferroictronics.
|
The search for unconventional magnetic and nonmagnetic states is a major topic
in the study of frustrated magnetism. Canonical examples of those states
include various spin liquids and spin nematics. However, discerning their
existence and the correct characterization is usually challenging. Here we
introduce a machine-learning protocol that can identify general nematic order
and its order parameter from seemingly featureless spin configurations, thus
providing comprehensive insight on the presence or absence of hidden orders. We
demonstrate the capabilities of our method by extracting the analytical form of
nematic order parameter tensors up to rank 6. This may prove useful in the
search for novel spin states and for ruling out spurious spin liquid
candidates.
|
We investigate mucosalivary dispersal and deposition on horizontal surfaces
corresponding to human exhalations with physical experiments under still-air
conditions. Synthetic fluorescence tagged sprays with size and speed
distributions comparable to human sneezes are observed with high-speed imaging.
We show that while some larger droplets follow parabolic trajectories, smaller
droplets stay aloft for several seconds and settle slowly with speeds
consistent with a buoyant cloud dynamics model. The net deposition distribution
is observed to become correspondingly broader as the source height $H$ is
increased, ranging from sitting at a table to standing upright. We find that
the deposited mucosaliva decays exponentially in front of the source, after
peaking at distance $x = 0.71$\,m when $H = 0.5$\,m, and $x = 0.56$\,m when
$H=1.5$\,m, with standard deviations $\approx 0.5$\,m. Greater than 99\% of the
mucosaliva is deposited within $x = 2$\,m, with faster landing times {\em
further} from the source. We then demonstrate that a standard nose and mouth
mask reduces the mucosaliva dispersed by a factor of at least a hundred
compared to the peaks recorded when unmasked.
|
We experimentally demonstrate a record net capacity per wavelength of
1.23~Tb/s over a single silicon-on-insulator (SOI) multimode waveguide for
optical interconnects employing on-chip mode-division multiplexing and
11$\times$11 multiple-in-multiple-out (MIMO) digital signal processing.
|
We report a new analytical method for solution of a wide class of
second-order differential equations with eigenvalues replaced by arbitrary
functions. Such classes of problems occur frequently in Quantum Mechanics and
Optics. This approach is based on the extension of the previously reported
differential transfer matrix method with modified basis functions. Applications
of the method to boundary value and initial value problems are illustrated
through several examples.
|
We present a pair of 3-d magnetohydrodynamical simulations of intermittent
jets from a central active galactic nucleus (AGN) in a galaxy cluster extracted
from a high resolution cosmological simulation. The selected cluster was chosen
as an apparently relatively relaxed system, not having undergone a major merger
in almost 7 Gyr. Despite this characterization and history, the intra-cluster
medium (ICM) contains quite active "weather". We explore the effects of this
ICM weather on the morphological evolution of the AGN jets and lobes. The
orientation of the jets is different in the two simulations so that they probe
different aspects of the ICM structure and dynamics. We find that even for this
cluster that can be characterized as relaxed by an observational standard, the
large-scale, bulk ICM motions can significantly distort the jets and lobes.
Synthetic X-ray observations of the simulations show that the jets produce
complex cavity systems, while synthetic radio observations reveal bending of
the jets and lobes similar to wide-angle tail (WAT) radio sources. The jets are
cycled on and off with a 26 Myr period using a 50% duty cycle. This leads to
morphological features similar to those in "double-double" radio galaxies.
While the jet and ICM magnetic fields are generally too weak in the simulations
to play a major role in the dynamics, Maxwell stresses can still become locally
significant.
|
In this paper, we prove the non-vanishing conjecture for cotangent bundles on
isotrivial elliptic surfaces. Combined with the result by H\"{o}ring and
Peternell, it completely solves the question for surfaces with Kodaira
dimension at most $1$.
|
We theoretically study the transport properties in the T-shaped
double-quantum-dot structure, by introducing the Majorana bound state (MBS) to
couple to the dot in the main channel. It is found that the side-coupled dot
governs the effect of the MBS on the transport behavior. When its level is
aligned with the energy zero point, the MBS contributes little to the
conductance spectrum. Otherwise, the linear conductance changes notably
depending on the inter-MBS coupling. In the case of a Majorana
zero mode, the linear conductance remains equal to $e^2\over 2h$ when the
level of the side-coupled dot departs from the energy zero point. However, the
linear conductance is always analogous to the MBS-absent case once the
inter-MBS coupling comes into play. These findings provide new information
about the interplay between the MBSs and electron states in the quantum dots.
|
The combination of two basic types of synchronization, anticipated and
isochronous, is investigated numerically in coupled semiconductor lasers. This
combination can yield synchronization of good quality. We study the
dependence of the lag time between the two lasers and
the synchronization quality on the converse coupling retardation time
$\tau_{c21}$. When $\tau_{c21}$ is close to the difference of external cavity
round trip time $\tau$ and coupling retardation time $\tau_{c12}$, the
combination of anticipated and isochronous synchronization may produce a better
synchronization, with a lag time proportional to $\tau_{c21}$. When
$\tau_{c21}$ is largely different from $\tau-\tau_{c12}$, the combination is
noneffective and even negative in some cases, with a lag time independent of
$\tau_{c21}$.
|
Face synthesis has been a fascinating yet challenging problem in computer
vision and machine learning. Its main research effort is to design algorithms
that generate photo-realistic face images from a given semantic domain. It is
a crucial preprocessing step for mainstream face recognition approaches and an
excellent test of an AI's ability to model complicated probability distributions. In
this paper, we provide a comprehensive review of typical face synthesis works
that involve traditional methods as well as advanced deep learning approaches.
Particularly, Generative Adversarial Net (GAN) is highlighted to generate
photo-realistic and identity-preserving results. Furthermore, the publicly
available databases and evaluation metrics are introduced in detail. We end
the review by discussing unsolved difficulties and promising directions for
future research.
|
Recent advances in bottom-up growth are giving rise to a range of new
two-dimensional nanostructures. Hall effect measurements play an important role
in their electrical characterization. However, size constraints can lead to
device geometries that deviate significantly from the ideal of elongated Hall
bars with currentless contacts. Many devices using these new materials have a
low aspect ratio and feature metal Hall probes that overlap with the
semiconductor channel. This can lead to a significant distortion of the current
flow. We present experimental data from InAs 2D nanofin devices with different
Hall probe geometries to study the influence of Hall probe length and width. We
use finite-element simulations to further understand the implications of these
aspects and expand the scope to contact resistance and sample aspect ratios.
Our key finding is that invasive probes lead to a significant underestimation
in the measured Hall voltage, typically of the order of 40-80%. This in turn
leads to a subsequent proportional overestimation of carrier concentration and
an underestimation of mobility.
|
In this paper, we introduce the generalized convex function on fractal sets of
the real line and study its properties. Based on these properties, we establish
the generalized Jensen inequality and the generalized Hermite-Hadamard
inequality. Furthermore, some applications are given.
|
With the rise of distributed computing technologies, video big data analytics
in the cloud has attracted the attention of researchers and practitioners.
Current technology and market trends demand an efficient framework for video
big data analytics. However, existing work is too limited to provide a full
architecture for video big data analytics in the cloud, covering the management
and analysis of video big data along with the associated challenges and
opportunities. This study
proposes a service-oriented layered reference architecture for intelligent
video big data analytics in the cloud. Finally, we identify and articulate
several open research issues and challenges, which have been raised by the
deployment of big data technologies in the cloud for video big data analytics.
This paper surveys the research studies and technologies advancing video
analysis in the era of big data and cloud computing. To the best of our
knowledge, this is the first study to present a generalized view of video big
data analytics in the cloud.
|
The binding energy and wavefunctions of two-dimensional indirect biexcitons
are studied analytically and numerically. It is proven that stable biexcitons
exist only when the distance between electron and hole layers is smaller than a
certain critical threshold. Numerical results for the biexciton binding
energies are obtained using the stochastic variational method and compared with
the analytical asymptotics. The threshold interlayer separation and its
uncertainty are estimated. The results are compared with those obtained by
other techniques, in particular, the diffusion Monte-Carlo method and the
Born-Oppenheimer approximation.
|
Let $A$ be a Noetherian domain and $R$ be a finitely generated $A$-algebra.
We study several features regarding the generic freeness over $A$ of an
$R$-module. For an ideal $I \subset R$, we show that the local cohomology
modules ${\rm H}_I^i(R)$ are generically free over $A$ under certain settings
where $R$ is a smooth $A$-algebra. By utilizing the theory of Gr\"obner bases
over arbitrary Noetherian rings, we provide an effective method to make
explicit the generic freeness over $A$ of a finitely generated $R$-module.
|
In this paper, we first establish regularity of the heat flow of biharmonic
maps into the unit sphere $S^L\subset\mathbb R^{L+1}$ under a smallness
condition of renormalized total energy. For the class of such solutions to the
heat flow of biharmonic maps, we prove the properties of uniqueness, convexity
of hessian energy, and unique limit at time infinity. We establish both
regularity and uniqueness for the class of weak solutions $u$ to the heat flow
of biharmonic maps into any compact Riemannian manifold $N$ without boundary
such that $\nabla^2 u\in L^q_tL^p_x$ for some $p>n/2$ and $q>2$ satisfying
(1.13).
|
Hereditary coreflective subcategories of an epireflective subcategory A of
Top such that I_2\notin A (here I_2 is the 2-point indiscrete space) were
studied in [C]. It was shown that a coreflective subcategory B of A is
hereditary (closed under the formation of subspaces) if and only if it is
closed under the formation of prime factors. The main problem studied in this
paper is the question whether this claim remains true if we study the (more
general) subcategories of A which are closed under topological sums and
quotients in A instead of the coreflective subcategories of A. We show that
this is true if A \subseteq Haus or under some reasonable conditions on B.
E.g., this holds if B contains a prime space, a space which is not locally
connected, a totally disconnected space, or a non-discrete Hausdorff space. We
also touch on other questions related to such subclasses of A. We introduce a
method extending the results from the case of non-bireflective subcategories
(which was studied in [C]) to arbitrary epireflective subcategories of Top. We
also prove some new facts about the lattice of coreflective subcategories of
Top and ZD.
[C] J. \v{C}in\v{c}ura: Heredity and coreflective subcategories of the
category of topological spaces. Appl. Categ. Structures 9, 131-138 (2001)
|
Kitaev's compass model on the honeycomb lattice realizes a spin liquid whose
emergent excitations are dispersive Majorana fermions and static Z_2 gauge
fluxes. We discuss the proper selection of physical states for finite-size
simulations in the Majorana representation, based on a recent paper by
Pedrocchi, Chesi, and Loss [Phys. Rev. B 84, 165414 (2011)]. Certain physical
observables acquire large finite-size effects, in particular if the ground
state is not fermion-free, which we prove to generally apply to the system in
the gapless phase and with periodic boundary conditions. To illustrate our
findings, we compute the static and dynamic spin susceptibilities for
finite-size systems. Specifically, we consider random-bond disorder (which
preserves the solubility of the model), calculate the distribution of local
flux gaps, and extract the NMR lineshape. We also predict a transition to a
random-flux state with increasing disorder.
|
We present the results of testing a new technique for stochastic noise
reduction in the calculation of propagators by implementing it in OpenQ*D for
two ensembles with O(a) improved Wilson fermion action, with periodic boundary
conditions and pion masses of 437 MeV and 331 MeV, for the connected vector and
pseudoscalar correlators. We find that the technique yields no speedup compared
to traditional methods, owing to the failure of its underlying assumption that
the spectra of the spatial Laplacian and Dirac operators are sufficiently
similar for the technique's purposes.
|
We look at the rate of growth of the partial quotients of the infinite
continued fraction expansion of an irrational number relative to the rate of
approximation of the number by its convergents. In non-generic cases the
Hausdorff dimension of some exceptional sets is computed.
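For concreteness, here is a minimal numerical sketch (illustrative only, no part of the dimension-theoretic arguments) of the objects involved: partial quotients produced by the Gauss map, and convergents built from the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$.

```python
from fractions import Fraction
import math

def partial_quotients(x: float, n: int) -> list[int]:
    """First n partial quotients of the continued fraction of x.
    (Floating point limits accuracy to the first dozen or so terms.)"""
    a = []
    for _ in range(n):
        q = math.floor(x)
        a.append(q)
        if x == q:
            break
        x = 1.0 / (x - q)   # Gauss map
    return a

def convergents(a: list[int]) -> list[Fraction]:
    """Convergents p_k/q_k via the standard recurrence."""
    p_prev, p, q_prev, q = 1, a[0], 0, 1
    cs = [Fraction(p, q)]
    for ak in a[1:]:
        p, p_prev = ak * p + p_prev, p
        q, q_prev = ak * q + q_prev, q
        cs.append(Fraction(p, q))
    return cs

# sqrt(2) = [1; 2, 2, 2, ...]; the error |x - p_k/q_k| is < 1/(q_k q_{k+1}).
for c in convergents(partial_quotients(math.sqrt(2), 8)):
    print(c, abs(math.sqrt(2) - c))
```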
|
We present the Kepler photometric light-variation analysis of the late-type
double-lined binary system V568 Lyr that is in the field of the high
metallicity old open cluster NGC 6791. The radial velocity and the high-quality
short-cadence light curve of the system are analysed simultaneously. The
masses, radii and luminosities of the component stars are $M_1 =
1.0886\pm0.0031\, M_{\odot}$, $M_2 = 0.8292 \pm 0.0026\, M_{\odot}$, $R_1 =
1.4203\pm 0.0058\, R_{\odot}$, $R_2 = 0.7997 \pm 0.0015\, R_{\odot}$, $L_1 =
1.85\pm 0.15\, L_{\odot}$, $L_2 = 0.292 \pm 0.018\, L_{\odot}$ and their
separation is $a = 31.060 \pm 0.002\, R_{\odot}$. The distance to NGC 6791 is
determined to be $4.260\pm 0.290\,$kpc by analysis of this binary system. We
fit the components of this well-detached binary system with evolution models
made with the Cambridge STARS and TWIN codes to test low-mass binary star
evolution. We find a good fit with a metallicity of $Z = 0.04$ and an age of
$7.704\,$Gyr. The standard tidal dissipation included in TWIN is insufficient
to arrive at the observed circular orbit unless the orbit formed rather
circular to begin with.
|
Spin-pumping across ferromagnet/superconductor (F/S) interfaces has attracted
much attention lately. Yet the focus has been mainly on s-wave
superconductor-based systems, whereas (high-temperature) d-wave superconductors
such as YBa2Cu3O7-d (YBCO) have received scarce attention despite their
fundamental and technological interest. Here we use wideband ferromagnetic
resonance to study spin-pumping effects in bilayers that combine a soft
metallic Ni80Fe20 (Py) ferromagnet and YBCO. We evaluate the spin conductance
in YBCO by analyzing the magnetization dynamics in Py. We find that the Gilbert
damping exhibits a drastic drop as the heterostructures are cooled across the
normal-superconducting transition and then, depending on the S/F interface
morphology, either stays constant or shows a strong upturn. This unique
behavior is explained by considering the quasiparticle density of states at
the YBCO surface, and is a direct consequence of zero-gap nodes for particular
directions in the momentum space. Besides showing the fingerprint of d-wave
superconductivity in spin-pumping, our results demonstrate the potential of
high-temperature superconductors for fine tuning of the magnetization dynamics
in ferromagnets using k-space degrees of freedom of d-wave/F interfaces.
|
We present two uniqueness results for the inverse problem of determining an
index of refraction by the corresponding acoustic far-field measurement encoded
into the scattering amplitude. The first one is a local uniqueness in
determining a variable index of refraction by the fixed incident-direction
scattering amplitude. The inverse problem is formally posed with such
measurement data. The second one is a global uniqueness in determining a
constant refractive index by a single far-field measurement. The arguments are
based on the study of certain nonlinear and non-selfadjoint interior
transmission eigenvalue problems.
|
On a polarized compact symplectic manifold endowed with an action of a
compact Lie group, in analogy with geometric invariant theory, one can define
the space of invariant functions of degree k. A central statement in symplectic
geometry, the "quantization commutes with reduction" hypothesis, is equivalent to
saying that the dimension of these invariant functions depends polynomially on
k. This statement was proved by Meinrenken and Sjamaar under positivity
conditions. In this paper, we give a new proof of this polynomiality property.
The proof is based on a study of the Atiyah-Bott fixed point formula from the
point of view of the theory of partition functions, and a technique for
localizing positivity.
|
We present the heavy-to-light form factors with two different non-vanishing
masses at next-to-next-to-leading order and study their expansion in the small
mass. The leading term of this small-mass expansion leads to a factorized
expression for the form factor. The presence of a second mass results in a new
feature, in that the soft contribution develops a factorization anomaly. This
cancels with the corresponding anomaly in the collinear contribution. With the
generalized factorization presented here, it is possible to obtain the leading
small-mass terms for processes with large masses, such as muon-electron
scattering, from the corresponding massless amplitude and the soft
contribution.
|
We consider an extension of the Standard Model within the framework of
Noncommutative Geometry. The model is based on an older model [St09] which
extends the Standard Model by new fermions, a new U(1)-gauge group and,
crucially, a new scalar field which couples to the Higgs field. This new scalar
field makes it possible to lower the Higgs mass from ~170 GeV, as predicted by
the Spectral Action for the Standard Model, to a value of 120-130 GeV. The
shortcoming of the previous model lay in its inability to meet all the
constraints on the gauge couplings implied by the Spectral Action. These
shortcomings are cured in the present model which also features a "dark sector"
containing fermions and scalar particles.
|
We develop a general theory for the goodness-of-fit test to non-linear
models. In particular, we assume that the observations are noisy samples of a
submanifold defined by a sufficiently smooth non-linear map. The observation
noise is additive Gaussian. Our main result shows that the
"residual" of the model fit, by solving a non-linear least-square problem,
follows a (possibly noncentral) $\chi^2$ distribution. The parameters of the
$\chi^2$ distribution are related to the model order and dimension of the
problem. We further present a method to select the model orders sequentially.
We demonstrate the broad application of the general theory in machine learning
and signal processing, including determining the rank of low-rank (possibly
complex-valued) matrices and tensors from noisy, partial, or indirect
observations, determining the number of sources in signal demixing, and
potential applications in determining the number of hidden nodes in neural
networks.
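A minimal numerical sketch of the residual test on a toy problem (the model, noise level, and sample size here are arbitrary choices, not the settings of this work): after a nonlinear least-squares fit of a $d$-parameter smooth model to $n$ noisy samples, the normalized residual should fall in the bulk of a $\chi^2$ distribution with roughly $n - d$ degrees of freedom.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import chi2

rng = np.random.default_rng(0)
sigma = 0.05
t = np.linspace(0.0, 1.0, 50)            # n = 50 observations
theta_true = np.array([1.3, 0.7])        # d = 2 parameters

def model(theta):
    # a smooth non-linear map from parameters to observations
    return theta[0] * np.sin(2.0 * np.pi * theta[1] * t)

y = model(theta_true) + sigma * rng.normal(size=t.size)

fit = least_squares(lambda th: (y - model(th)) / sigma, x0=[1.0, 0.5])
rss = np.sum(fit.fun ** 2)               # normalized residual sum of squares
dof = t.size - theta_true.size           # approximately n - d
print(f"residual = {rss:.1f}, chi2 90% interval = "
      f"[{chi2.ppf(0.05, dof):.1f}, {chi2.ppf(0.95, dof):.1f}]")
```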
|
Using the infrared-renormalon approach, we obtain constraints on the
next-to-leading order non-singlet polarised parton densities. The advocated
feature follows from an effect revealed in the next-to-leading order fits to
the data for the asymmetry of polarised lepton-nucleon scattering, which result
in the approximate nullification of the $1/Q^2$-correction to $A_1^N(x,Q^2)$.
|
This paper presents a data-driven approach to learning vision-based
collective behavior from a simple flocking algorithm. We simulate a swarm of
quadrotor drones and formulate the controller as a regression problem in which
we generate 3D velocity commands directly from raw camera images. The dataset
is created by simultaneously acquiring omnidirectional images and computing the
corresponding control command from the flocking algorithm. We show that a
convolutional neural network trained on the visual inputs of the drone can
learn not only robust collision avoidance but also coherence of the flock in a
sample-efficient manner. The neural controller effectively learns to localize
other agents in the visual input, which we show by visualizing the regions with
the most influence on the motion of an agent. This weakly supervised saliency
map can be computed efficiently and may be used as a prior for subsequent
detection and relative localization of other agents. We remove the dependence
on sharing positions among flock members by taking only local visual
information into account for control. Our work can therefore be seen as the
first step towards a fully decentralized, vision-based flock without the need
for communication or visual markers to aid detection of other agents.
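To make the regression formulation concrete, here is a minimal PyTorch sketch; the architecture, image size, and loss are placeholders rather than the network used in this work. It maps a raw camera image to a 3D velocity command, supervised by the command the flocking algorithm produced for the same instant.

```python
import torch
import torch.nn as nn

class VelocityRegressor(nn.Module):
    """Toy CNN: camera image in, 3D velocity command out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

net = VelocityRegressor()
imgs = torch.randn(8, 3, 128, 128)   # batch of camera frames
targets = torch.randn(8, 3)          # commands from the flocking algorithm
loss = nn.functional.mse_loss(net(imgs), targets)
loss.backward()
```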
|
We have presented a complete description of classical dynamics generated by
the Hamiltonian of quadrupole nuclear oscillations and identified those
peculiarities of quantum dynamics that can be interpreted as quantum
manifestations of classical stochasticity. Particular attention has been given
to investigation of classical dynamics in the potential energy surface with a
few local minima. A new technique is suggested for determination of the
critical energy of the transition to chaos. It is simpler than criteria of
transition to chaos connected with one or another version of overlap resonances
criterion. We have numerically demonstrated that for a potential with a
localized unstable region, motion becomes regular again at high energy, i.e. a
regularity-chaos-regularity (R-C-R) transition takes place for these
potentials. The variations of the statistical properties of the energy
spectrum in the process of the (R-C-R) transition have been studied in detail.
We proved that the type of the classical motion is correlated with the
structure of the eigenfunctions of highly excited states in the (R-C-R)
transition. Shell
structure destruction induced by the increase of nonintegrable perturbation was
analyzed.
|
We reconstruct $f(T)$ theories from three different holographic dark energy
models in different time durations. For the HDE model, the dark energy
dominated era with a new setup is chosen for reconstruction, and the
radiation dominated era is chosen when the involved model changes to NADE.
For the RDE model, radiation, matter and dark energy dominated time durations
are all investigated. We also investigate the limitation which prevents an
arbitrary choice of the time duration for reconstruction in HDE and NADE, and
find that an improved boundary condition is needed for a more precise
reconstruction of $f(T)$ theory.
|
We address the problem of video moment localization with natural language,
i.e. localizing a video segment described by a natural language sentence. While
most prior work focuses on grounding the query as a whole, temporal
dependencies and reasoning between events within the text are not fully
considered. In this paper, we propose a novel Temporal Compositional Modular
Network (TCMN) where a tree attention network first automatically decomposes a
sentence into three descriptions with respect to the main event, context event
and temporal signal. Two modules are then utilized to measure the visual
similarity and location similarity between each segment and the decomposed
descriptions. Moreover, since the main event and context event may rely on
different modalities (RGB or optical flow), we use late fusion to form an
ensemble of four models, where each model is independently trained by one
combination of the visual input. Experiments show that our model outperforms
the state-of-the-art methods on the TEMPO dataset.
|
In this article we prove an existence theorem for coincidence points of
mappings in Banach spaces. This theorem generalizes the Kantorovich fixed point
theorem.
|
This chapter explores the foundational concept of robustness in Machine
Learning (ML) and its integral role in establishing trustworthiness in
Artificial Intelligence (AI) systems. The discussion begins with a detailed
definition of robustness, portraying it as the ability of ML models to maintain
stable performance across varied and unexpected environmental conditions. ML
robustness is dissected through several lenses: its complementarity with
generalizability; its status as a requirement for trustworthy AI; its
adversarial vs non-adversarial aspects; its quantitative metrics; and its
indicators such as reproducibility and explainability. The chapter delves into
the factors that impede robustness, such as data bias, model complexity, and
the pitfalls of underspecified ML pipelines. It surveys key techniques for
robustness assessment from a broad perspective, including adversarial attacks,
encompassing both digital and physical realms. It covers non-adversarial data
shifts and nuances of Deep Learning (DL) software testing methodologies. The
discussion progresses to explore amelioration strategies for bolstering
robustness, starting with data-centric approaches like debiasing and
augmentation. Further examination includes a variety of model-centric methods
such as transfer learning, adversarial training, and randomized smoothing.
Lastly, post-training methods are discussed, including ensemble techniques,
pruning, and model repairs, emerging as cost-effective strategies to make
models more resilient against the unpredictable. This chapter underscores the
ongoing challenges and limitations in estimating and achieving ML robustness by
existing approaches. It offers insights and directions for future research on
this crucial concept, as a prerequisite for trustworthy AI systems.
|
This article proposes a communication-efficient decentralized deep learning
algorithm, coined layer-wise federated group ADMM (L-FGADMM). To minimize an
empirical risk, every worker in L-FGADMM periodically communicates with two
neighbors, in which the periods are separately adjusted for different layers of
its deep neural network. A constrained optimization problem for this setting is
formulated and solved using the stochastic version of GADMM proposed in our
prior work. Numerical evaluations show that by less frequently exchanging the
largest layer, L-FGADMM can significantly reduce the communication cost,
without compromising the convergence speed. Surprisingly, despite less
exchanged information and decentralized operations, intermittently skipping the
largest layer consensus in L-FGADMM creates a regularizing effect, thereby
achieving the test accuracy as high as federated learning (FL), a baseline
method with the entire layer consensus by the aid of a central entity.
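A toy sketch of the layer-wise schedule (hypothetical layer sizes and periods, with plain neighbor averaging standing in for the actual stochastic GADMM update): each layer is exchanged with the two ring neighbors only when its own period elapses, so the largest layer communicates least often.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 4                                    # workers on a ring
shapes = [(256, 256), (64, 64), (10,)]   # layer shapes; first is "largest"
T = [8, 2, 2]                            # communication period per layer

params = [[rng.normal(size=s) for s in shapes] for _ in range(W)]

for t in range(1, 33):
    for w in range(W):                   # local update placeholder
        for l, s in enumerate(shapes):
            params[w][l] -= 0.01 * rng.normal(size=s)
    for l in range(len(shapes)):         # periodic layer-wise exchange
        if t % T[l] == 0:
            new = [(params[(w - 1) % W][l] + params[w][l]
                    + params[(w + 1) % W][l]) / 3.0 for w in range(W)]
            for w in range(W):
                params[w][l] = new[w]
```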
|
Neural text-to-speech synthesis (NTTS) models have shown significant progress
in generating high-quality speech, however they require a large quantity of
training data. This makes creating models for multiple styles expensive and
time-consuming. In this paper, different styles of speech are analysed based
on prosodic variations; from this, a model is proposed to synthesise speech in
the style of a newscaster with just a few hours of supplementary data. We pose the
problem of synthesising in a target style using limited data as that of
creating a bi-style model that can synthesise both neutral-style and
newscaster-style speech via a one-hot vector which factorises the two styles.
We also propose conditioning the model on contextual word embeddings, and
extensively evaluate it against neutral NTTS, and neutral concatenative-based
synthesis. This model closes the gap in perceived style-appropriateness between
natural recordings for newscaster-style of speech, and neutral speech synthesis
by approximately two-thirds.
|
We study hysteresis in the random-field Ising model with an asymmetric
distribution of quenched fields, in the limit of low disorder in two and three
dimensions. We relate the spin flip process to bootstrap percolation, and show
that the characteristic length for self-averaging $L^*$ increases as
$\exp(\exp(J/\Delta))$ in 2d, and as $\exp(\exp(\exp(J/\Delta)))$ in 3d, for
disorder strength $\Delta$ much less than the exchange coupling $J$. For
system size $1 \ll L < L^*$, the coercive field $h_{coer}$ varies as $2J -
\Delta \ln \ln L$ for
the square lattice, and as $2J - \Delta \ln \ln \ln L$ on the cubic lattice.
Its limiting value is 0 for L tending to infinity, both for square and cubic
lattices. For lattices with coordination number 3, the limiting magnetization
shows no jump, and $h_{coer}$ tends to J.
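To see how slowly these finite-size corrections decay, a quick numerical illustration (with arbitrary example values $J = 1$ and $\Delta = 0.2$):

```python
import math

J, Delta = 1.0, 0.2
for L in (1e2, 1e4, 1e8, 1e16):
    h_sq = 2 * J - Delta * math.log(math.log(L))              # square lattice
    h_cub = 2 * J - Delta * math.log(math.log(math.log(L)))   # cubic lattice
    print(f"L = {L:.0e}: h_coer = {h_sq:.3f} (square), {h_cub:.3f} (cubic)")
```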
|
We present a joint estimate of the stellar/dark matter mass fraction in lens
galaxies and the average size of the accretion disk of lensed quasars from
microlensing measurements of 27 quasar image pairs seen through 19 lens
galaxies. The Bayesian estimate for the fraction of the surface mass density in
the form of stars is $\alpha=0.21\pm0.14$ near the Einstein radius of the
lenses ($\sim 1 - 2$ effective radii). The estimate for the average accretion
disk size is $R_{1/2}=7.9^{+3.8}_{-2.6}\sqrt{M/0.3M_\sun}$ light days. The
fraction of mass in stars at these radii is significantly larger than previous
estimates from microlensing studies assuming quasars were point-like. The
corresponding local dark matter fraction of 79\% is in good agreement with
other estimates based on strong lensing or kinematics. The size of the
accretion disk inferred in the present study is slightly larger than previous
estimates.
|
The ground state of colloidal magnetic particles in a modulated channel is
investigated as a function of the tilt angle of an applied magnetic field. The
particles are confined by a parabolic potential in the transversal direction
while in the axial direction a periodic substrate potential is present. By
using Monte Carlo (MC) simulations, we construct a phase diagram for the
different crystal structures as a function of the magnetic field orientation,
strength of the modulated potential and the commensurability factor of the
system. Interestingly, we found first and second order phase transitions
between different crystal structures, which can be manipulated by the
orientation of the external magnetic field. A re-entrant behavior is found
between two- and four-chain configurations, with continuous second order
transitions. Novel configurations are found, consisting of frozen-in solitons.
By changing the orientation and/or strength of the magnetic field and/or the
strength and the spatial frequency of the periodic substrate potential, the
system transits through different phases.
|
The Internet of Things (IoT) is seen as a novel technical paradigm aimed at
enabling connectivity between billions of interconnected devices all around the
world. IoT is being deployed in various domains, such as smart healthcare,
traffic surveillance, smart homes, smart cities, and various industries. IoT's
main functionality includes sensing the surrounding environment, collecting
data from the surrounding, and transmitting those data to the remote data
centers or the cloud. This sharing of vast volumes of data between billions of
IoT devices generates a large energy demand and increases energy wastage in the
form of heat. The Green IoT envisages reducing the energy consumption of IoT
devices and keeping the environment safe and clean. Inspired by achieving a
sustainable next-generation IoT ecosystem and guiding us toward making a
healthy green planet, we first offer an overview of Green IoT (GIoT), and then
the challenges and the future directions regarding the GIoT are presented in
our study.
|
The Tor anonymity network is difficult to measure because, if not done
carefully, measurements could risk the privacy (and potentially the safety) of
the network's users. Recent work has proposed the use of differential privacy
and secure aggregation techniques to safely measure Tor, and preliminary
proof-of-concept prototype tools have been developed in order to demonstrate
the utility of these techniques. In this work, we significantly enhance two
such tools--PrivCount and Private Set-Union Cardinality--in order to support
the safe exploration of new types of Tor usage behavior that have never before
been measured. Using the enhanced tools, we conduct a detailed measurement
study of Tor covering three major aspects of Tor usage: how many users connect
to Tor and from where do they connect, with which destinations do users most
frequently communicate, and how many onion services exist and how are they
used. Our findings include that Tor has ~8 million daily users (a factor of
four more than previously believed) while Tor user IPs turn over almost twice
in a 4 day period. We also find that ~40% of the sites accessed over Tor have a
torproject.org domain name, ~10% of the sites have an amazon.com domain name,
and ~80% of the sites have a domain name that is included in the Alexa top 1
million sites list. Finally, we find that ~90% of lookups for onion addresses
are invalid, and more than 90% of attempted connections to onion services fail.
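As a flavor of the privacy machinery involved, here is a generic Laplace-mechanism sketch of differentially private counting; it is not PrivCount's actual protocol, which additionally distributes trust across measurement nodes via secure aggregation.

```python
import numpy as np

rng = np.random.default_rng(7)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

daily_users = 8_000_000          # hypothetical aggregate to be released
print(dp_count(daily_users, epsilon=0.3))
```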
|
Generative models have shown great promise in synthesizing photorealistic 3D
objects, but they require large amounts of training data. We introduce SinGRAF,
a 3D-aware generative model that is trained with a few input images of a single
scene. Once trained, SinGRAF generates different realizations of this 3D scene
that preserve the appearance of the input while varying scene layout. For this
purpose, we build on recent progress in 3D GAN architectures and introduce a
novel progressive-scale patch discrimination approach during training. With
several experiments, we demonstrate that the results produced by SinGRAF
outperform the closest related works in both quality and diversity by a large
margin.
|
Materials that are lightweight yet exhibit superior mechanical properties are
of compelling importance for several technological applications that range from
aircrafts to household appliances. Lightweight materials allow energy saving
and reduce the amount of resources required for manufacturing. Researchers have
expended significant efforts in the quest for such materials, which require new
concepts in both tailoring material microstructure as well as structural
design. Architectured materials, which take advantage of new engineering
paradigms, have recently emerged as an exciting avenue to create bespoke
combinations of desired macroscopic material responses. In some instances,
rather unique structures have emerged from advanced geometrical concepts (e.g.
gyroids, Menger cubes, or origami/kirigami-based structures), while in others
innovation has emerged from mimicking nature in bio-inspired materials (e.g.
honeycomb structures, nacre, fish scales etc.). Beyond design, additive
manufacturing has enabled the facile fabrication of complex geometrical and
bio-inspired architectures, using computer aided design models. The combination
of simulations and experiments on these structures has led to an enhancement of
mechanical properties, including strength, stiffness and toughness. In this
review, we provide a perspective on topologically engineered architectured
materials that exhibit optimal mechanical behaviour and can be readily printed
using additive manufacturing.
|
Current state-of-the-art methods cast monocular 3D human pose estimation as a
learning problem by training neural networks on large data sets of images and
corresponding skeleton poses. In contrast, we propose an approach that can
exploit small annotated data sets by fine-tuning networks pre-trained via
self-supervised learning on (large) unlabeled data sets. To drive such networks
towards supporting 3D pose estimation during the pre-training step, we
introduce a novel self-supervised feature learning task designed to focus on
the 3D structure in an image. We exploit images extracted from videos captured
with a multi-view camera system. The task is to classify whether two images
depict two views of the same scene up to a rigid transformation. In a
multi-view data set, where objects deform in a non-rigid manner, a rigid
transformation occurs only between two views taken at the exact same time,
i.e., when they are synchronized. We demonstrate the effectiveness of the
synchronization task on the Human3.6M data set and achieve state-of-the-art
results in 3D human pose estimation.
|
The effect of massive neutrinos on the evolution of the early type galaxies
(ETGs) in size ($R_{e}$) and stellar mass ($M_{\star}$) is explored by tracing
the merging history of galaxy progenitors with the help of the robust
semi-analytic prescriptions. We show that as the presence of massive neutrinos
plays a role of enhancing the mean merger rate per halo, the high-$z$
progenitors of a descendant galaxy with fixed mass evolve much more rapidly in
size for a $\Lambda$MDM ($\Lambda$CDM + massive neutrinos) model than for the
$\Lambda$CDM case. The mass-normalized size evolution of the progenitor
galaxies, $R_{e}[M_{\star}/(10^{11}M_{\odot})]^{-0.57}\propto (1+z)^{-\beta}$,
is found to be quite steep with the power-law index of $\beta\sim 1.5$ when the
neutrino mass fraction is $f_{\nu}=0.05$, while it is $\beta\sim 1$ when
$f_{\nu}=0$. It is concluded that if the presence and role of massive neutrinos
are properly taken into account, it may explain away the anomalous compactness
of the high-$z$ ETGs compared with the local ellipticals with similar stellar
masses.
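As a quick illustration of what the two quoted slopes imply, at $z = 2$ the ratio of mass-normalized sizes predicted by the two power laws is
\[
\frac{(1+z)^{-1.5}}{(1+z)^{-1}}\bigg|_{z=2} = 3^{-0.5} \approx 0.58,
\]
i.e. the $f_{\nu}=0.05$ scaling makes fixed-mass progenitors roughly 40% more compact than the $f_{\nu}=0$ scaling at that redshift.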
|
Sparsely-activated Mixture-of-Experts (MoE) architecture has increasingly
been adopted to further scale large language models (LLMs) due to its
sub-linear scaling for computation costs. However, frequent failures still pose
significant challenges as training scales. The cost of even a single failure is
significant, as all GPUs need to wait idle until the failure is resolved,
potentially losing considerable training progress as training has to restart
from checkpoints. Existing solutions for efficient fault-tolerant training
either lack elasticity or rely on building resiliency into pipeline
parallelism, which cannot be applied to MoE models due to the expert
parallelism strategy adopted by the MoE architecture.
We present Lazarus, a system for resilient and elastic training of MoE
models. Lazarus adaptively allocates expert replicas to address the inherent
imbalance in expert workload and speeds up training, while a provably optimal
expert placement algorithm is developed to maximize the probability of recovery
upon failures. Through adaptive expert placement and a flexible token
dispatcher, Lazarus can also fully utilize all available nodes after failures,
leaving no GPU idle. Our evaluation shows that Lazarus outperforms existing MoE
training systems by up to 5.7x under frequent node failures and 3.4x on a real
spot instance trace.
|
We present transport measurements on quantum dots of sizes 45, 60 and 80 nm
etched with an Ar/O2-plasma into a single graphene sheet, allowing a size
comparison avoiding effects from different graphene flakes. The transport gaps
and addition energies increase with decreasing dot size, as expected, and
display a strong correlation, suggesting the same physical origin for both,
i.e. disorder-induced localization in presence of a small confinement gap. Gate
capacitance measurements indicate that the dot charges are located in the
narrow device region as intended. A dominant role of disorder is further
substantiated by the gate dependence and the magnetic field behavior, allowing
only approximate identification of the electron-hole crossover and spin filling
sequences. Finally, we extract a g-factor consistent with g=2 within the error
bars.
|
The difficulty of an entity matching task depends on a combination of
multiple factors such as the amount of corner-case pairs, the fraction of
entities in the test set that have not been seen during training, and the size
of the development set. Current entity matching benchmarks usually represent
single points in the space along such dimensions or they provide for the
evaluation of matching methods along a single dimension, for instance the
amount of training data. This paper presents WDC Products, an entity matching
benchmark which provides for the systematic evaluation of matching systems
along combinations of three dimensions while relying on real-world data. The
three dimensions are (i) amount of corner-cases, (ii) generalization to unseen
entities, and (iii) development set size (training set plus validation set).
Generalization to unseen entities is a dimension not covered by any of the
existing English-language benchmarks yet but is crucial for evaluating the
robustness of entity matching systems. Instead of learning how to match entity
pairs, entity matching can also be formulated as a multi-class classification
task that requires the matcher to recognize individual entities. WDC Products
is the first benchmark that provides a pair-wise and a multi-class formulation
of the same tasks. We evaluate WDC Products using several state-of-the-art
matching systems, including Ditto, HierGAT, and R-SupCon. The evaluation shows
that all matching systems struggle with unseen entities to varying degrees. It
also shows that for entity matching contrastive learning is more training data
efficient compared to cross-encoders.
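To make the two formulations concrete, here is a schematic sketch with toy records (not WDC Products' actual schema): the same labeled offers feed either a pair-wise matcher or a multi-class entity classifier.

```python
offers = [
    ("offer_1", "entity_A"), ("offer_2", "entity_A"),
    ("offer_3", "entity_B"), ("offer_4", "entity_C"),
]

# Pair-wise formulation: label every offer pair as match / non-match.
pairs = [(a, b, int(ea == eb))
         for i, (a, ea) in enumerate(offers)
         for b, eb in offers[i + 1:]]

# Multi-class formulation: predict the entity id of each offer directly.
classes = sorted({e for _, e in offers})
multiclass = [(o, classes.index(e)) for o, e in offers]

print(pairs)
print(multiclass)
```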
|
We study residue formulas for push-forward in K-theory of homogeneous spaces.
First we review formulas for classical groups, which we derive from a formula
for the classical Grassmannian case. Next we consider the homogeneous spaces
for G2. One of them embeds in the Grassmannian Gr(2,7). We find its fundamental
class in the equivariant K-theory and obtain the residue formula for the
push-forward. This formula is valid for G2/B as well.
|