Let $G$ be a finite non-abelian group and ${\Gamma}_{nc}(G)$ be its
non-commuting graph. In this paper, we compute the spectrum and energy of
${\Gamma}_{nc}(G)$ for certain classes of finite groups. As a consequence of
our results we construct infinite families of integral complete $r$-partite
graphs. We compare energy and Laplacian energy (denoted by
$E({\Gamma}_{nc}(G))$ and $LE({\Gamma}_{nc}(G))$ respectively) of
${\Gamma}_{nc}(G)$ and conclude that $E({\Gamma}_{nc}(G)) \leq
LE({\Gamma}_{nc}(G))$ for those groups except for some non-abelian groups of
order $pq$. This shows that the conjecture posed in [Gutman, I., Abreu, N. M.
M., Vinagre, C. T. M., Bonifácio, A. S. and Radenković, S., Relation between
energy and Laplacian energy, MATCH Commun. Math. Comput. Chem., 59: 343--354,
(2008)] does not hold for non-commuting graphs of certain finite groups, which
also produces new families of counterexamples to the above-mentioned
conjecture.
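As a quick illustration of the two quantities being compared: the energy is the sum of the absolute adjacency eigenvalues, while the Laplacian energy is $\sum_i |\mu_i - 2m/n|$ over the Laplacian eigenvalues. A minimal sketch (assuming NumPy), using the complete graph $K_3$, the simplest complete multipartite graph:

```python
import numpy as np

def energy(A):
    # Graph energy: sum of absolute values of the adjacency eigenvalues.
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

def laplacian_energy(A):
    # Laplacian energy: sum of |mu_i - 2m/n| over the Laplacian eigenvalues,
    # where m is the number of edges and n the number of vertices.
    n = A.shape[0]
    m = A.sum() / 2.0
    L = np.diag(A.sum(axis=1)) - A
    return float(np.sum(np.abs(np.linalg.eigvalsh(L) - 2.0 * m / n)))

# Adjacency matrix of the complete graph K_3.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
print(energy(A), laplacian_energy(A))  # both approximately 4.0 for K_3
```

For $K_n$ one gets $E = LE = 2(n-1)$, so the inequality $E \leq LE$ holds with equality; the counterexamples described above instead arise from certain non-abelian groups of order $pq$.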
|
We find a new class of theories of massive gravity with five propagating
degrees of freedom where only rotations are preserved. Our results are based on
a non-perturbative and background-independent Hamiltonian analysis. In these
theories the weak field approximation is well behaved and the static
gravitational potential is typically screened \`a la Yukawa at large distances,
while at short distances no vDVZ discontinuity is found and there is no need to
rely on nonlinear effects to pass the solar system tests. The effective field
theory analysis shows that the ultraviolet cutoff is (m M_PL)^1/2 ~ 1/\mu m,
the highest possible. Thus, these theories can be studied in the weak-field regime
at all the phenomenologically interesting scales, and are candidates for a
calculable large-distance modified gravity.
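For reference, the Yukawa screening alluded to is the standard static potential of a theory with a massive mediator; schematically, with $m$ the graviton mass,

```latex
\Phi(r) \,\simeq\, -\frac{G M}{r}\, e^{-m r},
```

so Newtonian behaviour is recovered at $r \ll 1/m$ and the potential is exponentially suppressed beyond the graviton's Compton wavelength.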
|
The formation of massive planetary or brown dwarf companions at large
projected separations from their host star is not yet well understood. In order
to put constraints on formation scenarios we search for signatures in the orbit
dynamics of the systems. We are specifically interested in the eccentricities
and inclinations since those parameters might tell us about the dynamic history
of the systems and where to look for additional low-mass sub-stellar
companions. For this purpose we utilized VLT/NACO to take several
well-calibrated high-resolution images of six target systems and analyzed them together
with available literature data points of those systems as well as Hubble Space
Telescope archival data. We used a statistical Least-Squares Monte-Carlo
approach to constrain the orbit elements of all systems that showed significant
differential motion of the primary star and companion. We show for the first
time that the GQ Lup system shows significant change in both separation and
position angle. Our analysis yields best fitting orbits for this system, which
are eccentric (e between 0.21 and 0.69), but cannot rule out circular orbits
at high inclinations. Given our astrometry we discuss formation scenarios of
the GQ Lup system. In addition, we detected an even fainter new companion
candidate to GQ Lup, which is most likely a background object. We also updated
the orbit constraints of the PZ Tel system, confirming that the companion is on
a highly eccentric orbit with e > 0.62. Finally, we show with high
significance that no orbital motion is observed in the cases of the DH
Tau, HD 203030 and 1RXS J160929.1-210524 systems, and give the most precise
relative astrometric measurement of the UScoCTIO 108 system to date.
|
Spitzer observations of extended dust in two optically normal elliptical
galaxies provide a new confirmation of buoyant feedback outflow in the hot gas
atmospheres around these galaxies. AGN feedback energy is required to prevent
wholesale cooling and star formation in these group-centered galaxies. In NGC
5044 we observe interstellar (presumably PAH) emission at 8 microns out to
about 5 kpc. Both NGC 5044 and NGC 4636 have extended 70 micron emission from cold
dust exceeding that expected from stellar mass loss. The sputtering lifetime of
this extended dust in the ~1 keV interstellar gas, ~10^7 yrs, establishes the
time when the dust first entered the hot gas. Evidently the extended dust
originated in dusty disks or clouds, commonly observed in elliptical galaxy
cores, that were disrupted, heated and buoyantly transported outward. The
surviving central dust in NGC 5044 and 4636 has been disrupted into many small
filaments. It is remarkable that the asymmetrically extended 8 micron emission
in NGC 5044 is spatially coincident with Halpha+[NII] emission from warm gas. A
calculation shows that dust-assisted cooling in buoyant hot gas moving out from
the galactic core can occur within a few kpc in ~10^7 yrs, explaining the
optical line emission observed. The X-ray images of both galaxies are
disturbed. All timescales for transient activity - restoration of equilibrium
and buoyant transport in the hot gas, dynamics of surviving dust fragments, and
dust sputtering - are consistent with a central release of feedback energy in
both galaxies about 10^7 yrs ago.
|
In this article we discuss some numerical aspects of the mirror conjecture. For
any 3-dimensional Calabi-Yau manifold, the author introduces a generalization of
the Casson invariant known in 3-dimensional geometry, called the
Casson-Donaldson invariant. In the framework of the mirror relationship it
corresponds to the number of SpLag cycles which are Bohr-Sommerfeld with
respect to the given polarization. To compute the Casson-Donaldson invariant,
the author uses the degeneration principle, well known in classical algebraic
geometry. By it, when the given Calabi-Yau manifold is deformed to a pair of
quasi-Fano manifolds glued along a K3 surface, one can compute the
invariant in terms of the "flag geometry" of the pairs (quasi-Fano, K3 surface).
|
In this work, we propose MUSTACHE, a new page cache replacement algorithm
whose logic is learned from observed memory access requests rather than fixed
like existing policies. We formulate the page request prediction problem as a
categorical time series forecasting task. Then, our method queries the learned
page request forecaster to obtain the next $k$ predicted page memory references
to better approximate the optimal B\'el\'ady's replacement algorithm. We
implement several forecasting techniques using advanced deep learning
architectures and integrate the best-performing one into an existing
open-source cache simulator. Experiments run on benchmark datasets show that
MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU),
improving the cache hit ratio by 1.9% and reducing the number of reads/writes
required to handle cache misses by 18.4% and 10.3%, respectively.
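Bélády's optimal policy evicts the page whose next use lies farthest in the future; MUSTACHE replaces that oracle with the forecaster's next $k$ predicted references. A minimal sketch of the resulting eviction step (the function name and interface are our illustration, not the paper's implementation):

```python
def evict_victim(cache, predicted_refs):
    """Pick the cached page whose next predicted use is farthest away
    (Belady's rule applied to a forecast). `predicted_refs` holds the
    next k page references returned by the learned forecaster; pages
    never predicted again are evicted first."""
    def next_use(page):
        try:
            return predicted_refs.index(page)
        except ValueError:
            return float('inf')  # not predicted again -> ideal victim
    return max(cache, key=next_use)

# Cache holds pages {1, 2, 3}; the forecaster predicts [2, 1, 2, 4, 1].
print(evict_victim({1, 2, 3}, [2, 1, 2, 4, 1]))  # -> 3 (never reused)
```

The quality of the approximation then rests entirely on the accuracy of the categorical time-series forecaster.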
|
In higher order calculations a number of new technical problems arise: one
needs diagrams in arbitrary dimension in order to obtain their needed
$\epsilon$-expansion, zero Gram determinants appear, and renormalization produces
diagrams with `dots' on the lines, i.e. higher powers of scalar
propagators. None of these problems can be handled by the `standard'
Passarino-Veltman approach: what is needed for higher loops is simply not
available there. We demonstrate how our method solves these problems.
|
We propose a simple phenomenological model for wet granular media that takes
into account many-particle interactions through liquid in the funicular state as
well as the two-body cohesive force from a liquid bridge in the pendular state.
the wet granular media with small liquid content, liquid forms a bridge at each
contact point, which induces two-body cohesive force due to the surface
tension. As the liquid content increases, some liquid bridges merge, and more
than two grains interact through a single liquid cluster. In our model, the
cohesive force acts between the grains connected by a liquid-gas interface. As
the liquid content increases, the number of grains that interact through the
liquid increases, but the liquid-gas interface may decrease when liquid
clusters are formed. Due to this competition, our model shows that the shear
stress has a maximum as a function of the liquid content.
|
We introduce a setting based on the one-dimensional (1D) nonlinear
Schroedinger equation (NLSE) with the self-focusing (SF) cubic term modulated
by a singular function of the coordinate, |x|^{-a}. It may be additionally
combined with the uniform self-defocusing (SDF) nonlinear background, and with
a similar singular repulsive linear potential. The setting, which can be
implemented in optics and BEC, aims to extend the general analysis of the
existence and stability of solitons in NLSEs. Results for fundamental solitons
are obtained analytically and verified numerically. The solitons feature a
quasi-cuspon shape, with the second derivative diverging at the center, and are
stable in the entire existence range, which is 0 < a < 1. Dipole (odd) solitons
are found too. They are unstable in the infinite domain, but stable in the
semi-infinite one. In the presence of the SDF background, there are two
subfamilies of fundamental solitons, one stable and one unstable, which exist
together above a threshold value of the norm (total power of the soliton). The
system which additionally includes the singular repulsive linear potential
emulates solitons in a uniform space of the fractional dimension, 0 < D < 1. A
two-dimensional extension of the system, based on the quadratic nonlinearity,
is formulated too.
|
We study a \emph{Plurality-Consensus} process in which each of $n$ anonymous
agents of a communication network initially supports an opinion (a color chosen
from a finite set $[k]$). Then, in every (synchronous) round, each agent can
revise its color according to the opinions currently held by a random sample of
its neighbors. It is assumed that the initial color configuration exhibits a
sufficiently large \emph{bias} $s$ towards a fixed plurality color, that is,
the number of nodes supporting the plurality color exceeds the number of nodes
supporting any other color by $s$ additional nodes. The goal is for the
process to converge to the \emph{stable} configuration in which all nodes
support the initial plurality. We consider a basic model in which the network
is a clique and the update rule (called here the \emph{3-majority dynamics}) of
the process is the following: each agent looks at the colors of three random
neighbors and then applies the majority rule (breaking ties uniformly).
We prove that the process converges in time $\mathcal{O}( \min\{ k, (n/\log
n)^{1/3} \} \, \log n )$ with high probability, provided that $s \geqslant c
\sqrt{ \min\{ 2k, (n/\log n)^{1/3} \}\, n \log n}$.
We then prove that our upper bound above is tight as long as $k \leqslant
(n/\log n)^{1/4}$. This fact implies an exponential time-gap between the
plurality-consensus process and the \emph{median} process studied by Doerr et
al. in [ACM SPAA'11].
A natural question is whether looking at more (than three) random neighbors
can significantly speed up the process. We provide a negative answer to this
question: In particular, we show that samples of polylogarithmic size can speed
up the process by a polylogarithmic factor only.
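A direct simulation of the 3-majority dynamics on the clique is straightforward (a sketch of ours; for simplicity, agents sample uniformly with replacement, possibly including themselves):

```python
import random

def three_majority_round(colors, rng):
    """One synchronous round of the 3-majority dynamics on a clique:
    each agent looks at the colors of three agents chosen uniformly at
    random and adopts the majority color, breaking ties uniformly."""
    n = len(colors)
    new_colors = []
    for _ in range(n):
        sample = [colors[rng.randrange(n)] for _ in range(3)]
        counts = {}
        for c in sample:
            counts[c] = counts.get(c, 0) + 1
        top = max(counts.values())
        new_colors.append(rng.choice([c for c, v in counts.items() if v == top]))
    return new_colors

rng = random.Random(0)
colors = [0] * 70 + [1] * 30  # bias s = 40 towards plurality color 0
for _ in range(50):
    colors = three_majority_round(colors, rng)
print(set(colors))  # with high probability the plurality color 0 wins
```

With a bias of this size relative to $n = 100$ and $k = 2$, the theorem above guarantees fast convergence to the plurality color with high probability.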
|
A common model of the explosion mechanism of Type Ia supernovae is based on a
delayed detonation of a white dwarf. A variety of models differ primarily in
the method by which the deflagration leads to a detonation. A common feature of
the models, however, is that all of them involve the propagation of the
detonation through a white dwarf that is either expanding or contracting, where
the stellar internal velocity profile depends on both time and space. In this
work, we investigate the effects of the pre-detonation stellar internal
velocity profile and the post-detonation velocity of expansion on the
production of alpha-particle nuclei, including Ni56, which are the primary
nuclei produced by the detonation wave. We perform one-dimensional hydrodynamic
simulations of the explosion phase of the white dwarf for center and off-center
detonations with five different stellar velocity profiles at the onset of the
detonation. We observe two distinct post-detonation expansion phases:
rarefaction and bulk expansion. Almost all the burning to Ni56 occurs only in
the rarefaction phase, and its expansion time scale is influenced by
pre-existing flow structure in the star, in particular by the pre-detonation
stellar velocity profile. We find that the mass fractions of the alpha-particle
nuclei, including Ni56, are tight functions of the empirical physical parameter
rho_up/v_down, where rho_up is the mass density immediately upstream of the
detonation wave front and v_down is the velocity of the flow immediately
downstream of the detonation wave front. We also find that v_down depends on
the pre-detonation flow velocity. We conclude that the properties of the
pre-existing flow, in particular the internal stellar velocity profile,
influence the final isotopic composition of burned matter produced by the
detonation.
|
Recent advances in large pretrained language models have increased attention
to zero-shot text classification. In particular, models fine-tuned on natural
language inference datasets have been widely adopted as zero-shot classifiers
due to their promising results and off-the-shelf availability. However, the
fact that such models are unfamiliar with the target task can lead to
instability and performance issues. We propose a plug-and-play method to bridge
this gap using a simple self-training approach, requiring only the class names
along with an unlabeled dataset, and without the need for domain expertise or
trial and error. We show that fine-tuning the zero-shot classifier on its most
confident predictions leads to significant performance gains across a wide
range of text classification tasks, presumably since self-training adapts the
zero-shot model to the task at hand.
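The selection step at the core of this self-training loop, keeping the classifier's most confident zero-shot predictions as pseudo-labels, can be sketched as follows (the interface is illustrative; the actual fine-tuning of the NLI model is not shown):

```python
def select_confident(texts, class_probs, top_k):
    """Keep the `top_k` examples the zero-shot classifier is most
    confident about, paired with the predicted class as a pseudo-label.
    `class_probs` is one dict of class -> probability per text, assumed
    to come from the zero-shot model."""
    scored = []
    for text, probs in zip(texts, class_probs):
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        scored.append((conf, text, label))
    scored.sort(reverse=True)  # most confident first
    return [(text, label) for _, text, label in scored[:top_k]]

class_probs = [{"pos": 0.90, "neg": 0.10},
               {"pos": 0.55, "neg": 0.45},
               {"pos": 0.20, "neg": 0.80}]
pseudo = select_confident(["a", "b", "c"], class_probs, top_k=2)
print(pseudo)  # -> [('a', 'pos'), ('c', 'neg')]
```

These pseudo-labeled pairs would then be used to fine-tune the same classifier, adapting it to the target task.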
|
Dispersive shock waves in thermal optical media belong to the third-order
nonlinear phenomena, whose intrinsic irreversibility is described by time
asymmetric quantum mechanics. Recent studies demonstrated that nonlocal wave
breaking evolves in an exponentially decaying dynamics ruled by the reversed
harmonic oscillator, namely, the simplest irreversible quantum system in the
rigged Hilbert spaces. The generalization of this theory to more complex
scenarios is still an open question. In this work, we use a thermal third-order
medium with an unprecedented giant Kerr coefficient, the M-Cresol/Nylon mixed
solution, to access an extremely nonlinear, highly nonlocal regime and realize
anisotropic shock waves. We prove that a superposition of the Gamow vectors in
an ad hoc rigged Hilbert space describes the nonlinear beam propagation beyond
the shock point. Specifically, the resulting rigged Hilbert space is a
tensorial product between the reversed and the standard harmonic oscillators
spaces. The anisotropy arises from the interaction of trapping and
antitrapping potentials in perpendicular directions. Our work opens the way to
a complete description of novel intriguing shock phenomena, and those mediated
by extreme nonlinearities.
|
We study generating functions in the context of Rota-Baxter algebras. We show
that exponential generating functions can be naturally viewed as a very special
case of complete free commutative Rota-Baxter algebras. This allows us to use
free Rota-Baxter algebras to give a broad class of algebraic structures in
which generalizations of generating functions can be studied. We generalize the
product formula and composition formula for exponential power series. We also
give generating functions both for known number families such as Stirling
numbers of the second kind and partition numbers, and for new number families
such as those from not necessarily disjoint partitions and partitions of
multisets.
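For orientation, the product formula being generalized is the binomial convolution of exponential generating functions: if $f(x)=\sum_{n\geq 0} a_n x^n/n!$ and $g(x)=\sum_{n\geq 0} b_n x^n/n!$, then

```latex
f(x)\, g(x) \;=\; \sum_{n \geq 0} \left( \sum_{k=0}^{n} \binom{n}{k} a_k b_{n-k} \right) \frac{x^n}{n!} .
```

It is this convolution structure that survives, in generalized form, in the Rota-Baxter setting.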
|
We consider spinless electrons in two dimensions with the bare spectrum
$\epsilon({\bf p})=|p_x|+|p_y|$. In momentum space, the interactions among
electrons have a finite range $q_0$, which is small compared to the Fermi
momentum. A golden rule calculation of the electron lifetime indicates a
breakdown of the Landau Fermi liquid in the model. At the one-loop level of
perturbation theory, we show that the density wave and the superconducting
instabilities cancel each other and there is no symmetry breaking. We solve the
model via bosonization; the excitation spectrum is found to consist of gapless
bosonic modes as in a one-dimensional Luttinger liquid.
|
More than thirty years ago, Brooks and Buser-Sarnak constructed sequences of
closed hyperbolic surfaces with logarithmic systolic growth in the genus.
Recently, Liu and Petri showed that such a logarithmic systolic lower bound holds
for every genus (not merely for genera in some infinite sequence) using random
surfaces. In this article, we show a similar result through a more direct
approach relying on the original Brooks/Buser-Sarnak surfaces.
|
Faddeev-Yakubovsky equations are solved numerically for the 4He tetramer and
trimer states using realistic helium-helium interaction models. We describe the
properties of the ground and excited states, and we discuss with special
emphasis the low-energy 4He-4He3 scattering.
|
We introduce various measures of forward classical communication for
bipartite quantum channels. Since a point-to-point channel is a special case of
a bipartite channel, the measures reduce to measures of classical communication
for point-to-point channels. As it turns out, these reduced measures have been
reported in prior work of Wang et al. on bounding the classical capacity of a
quantum channel. As applications, we show that the measures are upper bounds on
the forward classical capacity of a bipartite channel. The reduced measures are
upper bounds on the classical capacity of a point-to-point quantum channel
assisted by a classical feedback channel. Some of the various measures can be
computed by semi-definite programming.
|
A sliding window algorithm receives a stream of symbols and has to output at
each time instant a certain value which only depends on the last $n$ symbols.
If the algorithm is randomized, then at each time instant it produces an
incorrect output with probability at most $\epsilon$, which is a constant error
bound. This work proposes a more relaxed definition of correctness which is
parameterized by the error bound $\epsilon$ and the failure ratio $\phi$: A
randomized sliding window algorithm is required to err with probability at most
$\epsilon$ at a portion of $1-\phi$ of all time instants of an input stream.
This work continues the investigation of sliding window algorithms for regular
languages. In previous works a trichotomy theorem was shown for deterministic
algorithms: the optimal space complexity is either constant, logarithmic or
linear in the window size. The main results of this paper concern three
natural settings (randomized algorithms with failure ratio zero and
randomized/deterministic algorithms with bounded failure ratio) and provide
natural language theoretic characterizations of the space complexity classes.
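As a concrete illustration of the logarithmic class in the deterministic trichotomy (our example, not the paper's), testing whether the window of the last $n$ symbols contains an `a` needs only the timestamp of the most recent `a`, i.e. $O(\log n)$ bits instead of the full window:

```python
class SlidingWindowContainsA:
    """Deterministic sliding window tester for "the last n symbols
    contain an 'a'", using O(log n) bits of state: a step counter and
    the time of the most recently seen 'a'."""

    def __init__(self, n):
        self.n = n
        self.t = 0
        self.last_a = None  # time of the most recent 'a', if any

    def read(self, symbol):
        self.t += 1
        if symbol == 'a':
            self.last_a = self.t
        # An 'a' is inside the window iff it occurred within the last n steps.
        return self.last_a is not None and self.t - self.last_a < self.n

sw = SlidingWindowContainsA(3)
print([sw.read(c) for c in "abbb"])  # -> [True, True, True, False]
```

After the fourth symbol the window is "bbb", so the tester correctly reports False without ever storing the window itself.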
|
Despite the recent progress in genome sequencing and assembly, many of the
currently available assembled genomes come in a draft form. Such draft genomes
consist of a large number of genomic fragments (scaffolds), whose order and/or
orientation (i.e., strand) in the genome are unknown. There exist various
scaffold assembly methods, which attempt to determine the order and orientation
of scaffolds along the genome chromosomes. Some of these methods (e.g., based
on FISH physical mapping, chromatin conformation capture, etc.) can infer the
order of scaffolds, but not necessarily their orientation. This leads to a
special case of the scaffold orientation problem (i.e., deducing the
orientation of each scaffold) with a known order of the scaffolds.
We address the problem of orienting ordered scaffolds (OOS) as an optimization
problem based on given weighted orientations of scaffolds and their pairs
(e.g., coming from pair-end sequencing reads, long reads, or homologous
relations). We formalize this problem using the notion of a scaffold graph (i.e., a
graph, where vertices correspond to the assembled contigs or scaffolds and
edges represent connections between them). We prove that this problem is
NP-hard, and present a polynomial-time algorithm for solving its special case,
where orientation of each scaffold is imposed relatively to at most two other
scaffolds. We further develop an FPT algorithm for the general case of the OOS
problem.
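For small instances the optimization admits a brute-force baseline: try every orientation assignment and score it against the weighted evidence. This is an illustrative encoding of ours (exponential in the number of scaffolds, consistent with the NP-hardness of the general problem); the paper's algorithms are of course more refined:

```python
from itertools import product

def best_orientations(n, pair_weights):
    """Brute-force the ordered-scaffold orientation problem: choose an
    orientation (0 = forward, 1 = reverse) for each of n scaffolds,
    maximizing the total weight of the supported evidence.
    `pair_weights` maps ((i, oi), (j, oj)) -> weight of evidence that
    scaffolds i and j carry orientations oi and oj."""
    best, best_score = None, float('-inf')
    for orient in product((0, 1), repeat=n):
        score = sum(wt for ((i, oi), (j, oj)), wt in pair_weights.items()
                    if orient[i] == oi and orient[j] == oj)
        if score > best_score:
            best, best_score = orient, score
    return best, best_score

# Two scaffolds; strong evidence for (forward, reverse), weak for (forward, forward).
evidence = {((0, 0), (1, 1)): 5.0, ((0, 0), (1, 0)): 1.0}
print(best_orientations(2, evidence))  # -> ((0, 1), 5.0)
```

The polynomial-time and FPT algorithms mentioned above avoid this exhaustive enumeration by exploiting the bounded structure of the scaffold graph.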
|
Counterfactual explanations and adversarial attacks have a related goal:
flipping output labels with minimal perturbations regardless of their
characteristics. Yet, adversarial attacks cannot be used directly in a
counterfactual explanation perspective, as such perturbations are perceived as
noise and not as actionable and understandable image modifications. Building on
the robust learning literature, this paper proposes an elegant method to turn
adversarial attacks into semantically meaningful perturbations, without
modifying the classifiers to explain. The proposed approach hypothesizes that
Denoising Diffusion Probabilistic Models are excellent regularizers for
avoiding high-frequency and out-of-distribution perturbations when generating
adversarial attacks. The paper's key idea is to build the attacks through a
diffusion model, which polishes them. This allows studying the target model
regardless of its robustification level. Extensive experimentation shows the
advantages of our counterfactual explanation approach over the current
state of the art in multiple testbeds.
|
Jointly training a speech enhancement (SE) front-end and an automatic speech
recognition (ASR) back-end has been investigated as a way to mitigate the
influence of \emph{processing distortion} generated by single-channel SE on
ASR. In this paper, we investigate the effect of such joint training on the
signal-level characteristics of the enhanced signals from the viewpoint of the
decomposed noise and artifact errors. The experimental analyses provide two
novel findings: 1) ASR-level training of the SE front-end reduces the artifact
errors while increasing the noise errors, and 2) simply interpolating the
enhanced and observed signals, which achieves a similar effect of reducing
artifacts and increasing noise, improves ASR performance without jointly
modifying the SE and ASR modules, even for a strong ASR back-end using a WavLM
feature extractor. Our findings provide a better understanding of the effect of
joint training and a novel insight for designing an ASR-agnostic SE front-end.
|
We show that there is an affine Schubert variety in the infinite-dimensional
partial flag variety (associated to the two-step parabolic subgroup of the
Kac-Moody group {\hat SL(n)}, corresponding to omitting {\alpha}_0,{\alpha}_d)
which is a natural compactification of the cotangent bundle to the Grassmann
variety.
|
We present a novel method, SALAD, for the challenging vision task of adapting
a pre-trained "source" domain network to a "target" domain, with a small budget
for annotation in the "target" domain and a shift in the label space. Further,
the task assumes that the source data is not available for adaptation, due to
privacy concerns or otherwise. We postulate that such systems need to jointly
optimize the dual task of (i) selecting a fixed number of samples from the target
domain for annotation and (ii) transfer of knowledge from the pre-trained
network to the target domain. To do this, SALAD consists of a novel Guided
Attention Transfer Network (GATN) and an active learning function, HAL. The
GATN enables feature distillation from pre-trained network to the target
network, complemented with the target samples mined by HAL using
transfer-ability and uncertainty criteria. SALAD has three key benefits: (i) it
is task-agnostic, and can be applied across various visual tasks such as
classification, segmentation and detection; (ii) it can handle shifts in output
label space from the pre-trained source network to the target domain; (iii) it
does not require access to source data for adaptation. We conduct extensive
experiments across 3 visual tasks, viz. digits classification (MNIST, SVHN,
VISDA), synthetic (GTA5) to real (CityScapes) image segmentation, and document
layout detection (PubLayNet to DSSE). We show that our source-free approach,
SALAD, results in an improvement of 0.5%-31.3% (across datasets and tasks) over
prior adaptation methods that assume access to large amounts of annotated
source data for adaptation.
|
In this work we analyze how effects of finite size may modify the
thermodynamics of a system of strongly interacting fermions that we model using
an effective field theory with four-point interactions at finite temperature
and density, and we look in detail at the case of a confining two-layer system. We
compute the thermodynamic potential in the large-$N$ and mean-field
approximations and adopt a zeta-function regularization scheme to regulate the
divergences. Explicit expansions are obtained in different regimes of
temperature and separation. The analytic structure of the potential is
carefully analyzed and relevant integral and series representations for the
various expressions involved are obtained. Several known results are obtained
as limiting cases of the general results. We numerically implement the formalism and
compute the thermodynamic potential, the critical temperature and the fermion
condensate showing that effects of finite size tend to shift the critical
points and the order of the transitions. The present discussion may be of some
relevance for the study of the Casimir effect between strongly coupled
fermionic materials with inter-layer interactions.
|
With the rapid development of smart mobile devices, the car-hailing platforms
(e.g., Uber or Lyft) have attracted much attention from both academia and
industry. In this paper, we consider an important dynamic car-hailing
problem, namely \textit{maximum revenue vehicle dispatching} (MRVD), in which
rider requests dynamically arrive and drivers need to serve as many riders as
possible such that the entire revenue of the platform is maximized. We prove
that the MRVD problem is NP-hard and intractable. In addition, the dynamic
car-hailing platforms have no information about future riders, which makes the
problem even harder. To handle the MRVD problem, we propose a queueing-based
vehicle dispatching framework, which first uses existing machine learning
algorithms to predict the future vehicle demand of each region, then estimates
the idle time periods of drivers through a queueing model for each region. With
the information of the predicted vehicle demands and estimated idle time
periods of drivers, we propose two batch-based vehicle dispatching algorithms
to efficiently assign suitable drivers to riders such that the expected overall
revenue of the platform is maximized during each batch processing. Through
extensive experiments, we demonstrate the efficiency and effectiveness of our
proposed approaches over both real and synthetic datasets.
|
We propose various methods for combining or amalgamating propositional
languages and deductive systems. We make heavy use of quantales and quantale
modules in the wake of previous works by the present and other authors. We also
describe quite extensively the relationships among the algebraic and
order-theoretic constructions and the corresponding ones based on a purely
logical approach.
|
In this Letter we derive the gravity field equations by varying the action
for an ultraviolet complete quantum gravity. Then we consider the case of a
static source term and we determine an exact black hole solution. As a result
we find a regular spacetime geometry: in place of the conventional curvature
singularity, extreme energy fluctuations of the gravitational field at small
length scales provide an effective cosmological constant in a region locally
described in terms of a de Sitter space. We show that the new metric coincides
with the noncommutative geometry inspired Schwarzschild black hole. Indeed, we
show that the ultraviolet complete quantum gravity, generated by ordinary
matter, is the dual theory of ordinary Einstein gravity coupled to a
noncommutative smeared matter. In other words we obtain further insights about
that quantum gravity mechanism which improves Einstein gravity in the vicinity
of curvature singularities. This corroborates all the existing literature in
the physics and phenomenology of noncommutative black holes.
|
Recent works demonstrate that early layers in a neural network contain useful
information for prediction. Inspired by this, we show that extending
temperature scaling across all layers improves both calibration and accuracy.
We call this procedure "layer-stack temperature scaling" (LATES). Informally,
LATES grants each layer a weighted vote during inference. We evaluate it on
five popular convolutional neural network architectures both in- and
out-of-distribution and observe a consistent improvement over temperature
scaling in terms of accuracy, calibration, and AUC. All conclusions are
supported by comprehensive statistical analyses. Since LATES neither retrains
the architecture nor introduces many more parameters, its advantages can be
reaped without requiring additional data beyond what is used in temperature
scaling. Finally, we show that combining LATES with Monte Carlo Dropout matches
state-of-the-art results on CIFAR10/100.
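The weighted vote underlying LATES can be sketched as follows (a simplification under our assumptions: each layer is given a classifier head, a temperature, and a vote weight, all of which would be fit on held-out data; this is not the authors' implementation):

```python
import math

def softmax(logits, temperature):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def lates_predict(per_layer_logits, temps, weights):
    """Combine temperature-scaled per-layer predictions by a weighted
    vote: probabilities from each layer's head are scaled by its own
    temperature and mixed according to its vote weight."""
    n_classes = len(per_layer_logits[0])
    combined = [0.0] * n_classes
    for logits, temperature, weight in zip(per_layer_logits, temps, weights):
        probs = softmax(logits, temperature)
        for c in range(n_classes):
            combined[c] += weight * probs[c]
    return combined

# Two layers voting over two classes; vote weights sum to one.
out = lates_predict([[2.0, 0.0], [0.0, 1.0]], temps=[1.0, 1.0], weights=[0.5, 0.5])
print(out)  # a valid probability vector favouring class 0
```

Since only the temperatures and vote weights are learned, the base architecture itself is never retrained, which is what keeps the method's data requirements at the level of ordinary temperature scaling.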
|
The typical scalar field theory has a cosmological constant problem. We
propose a generic mechanism by which this problem is avoided at tree level by
embedding the theory into a larger theory. The metric and the scalar field
coupling constants in the original theory do not need to be fine-tuned, while
the extra scalar field parameters and the metric associated with the extended
theory are fine-tuned dynamically. Hence, no fine-tuning of parameters in the
full Lagrangian is needed for the vacuum energy in the new physical system to
vanish at tree level. The cosmological constant problem can be solved if the
method can be extended to quantum loops.
|
KIC 8560861 (HD 183648) is a marginally eccentric (e=0.05) eclipsing binary
with an orbital period of P_orb=31.973d, exhibiting mmag amplitude pulsations
on time scales of a few days. We present the results of the complex analysis of
high and medium-resolution spectroscopic data and Kepler Q0 -- Q16 long cadence
photometry. The iterative combination of spectral disentangling, atmospheric
analysis, radial velocity and eclipse timing variation studies, separation of
pulsational features of the light curve, and binary light curve analysis led to
the accurate determination of the fundamental stellar parameters. We found that
the binary is composed of two main sequence stars with an age of 0.9+-0.2 Gyr,
having masses, radii and temperatures of M_1=1.93+-0.12 M_sun, R_1=3.30+-0.07
R_sun, T_eff1=7650+-100 K for the primary, and M_2=1.06+-0.08 M_sun,
R_2=1.11+-0.03 R_sun, T_eff2=6450+-100 K for the secondary. After subtracting
the binary model, we found three independent frequencies, two of which are
separated by twice the orbital frequency. We also found an enigmatic half
orbital period sinusoidal variation that we attribute to an anomalous
ellipsoidal effect. Both of these observations indicate that tidal effects are
strongly influencing the luminosity variations of HD 183648. The analysis of
the eclipse timing variations revealed both a parabolic trend, and apsidal
motion with a period of (P_apse)_obs=10,400+-3,000 y, which is three times
faster than what is theoretically expected. These findings might indicate the
presence of a distant, unseen companion.
|
Bayesian inference in state-space models is challenging due to
high-dimensional state trajectories. A viable approach is particle Markov chain
Monte Carlo, combining MCMC and sequential Monte Carlo to form "exact
approximations" to otherwise intractable MCMC methods. The performance of the
approximation is limited to that of the exact method. We focus on particle
Gibbs and particle Gibbs with ancestor sampling, improving their performance
beyond that of the underlying Gibbs sampler (which they approximate) by
marginalizing out one or more parameters. This is possible when the parameter
prior is conjugate to the complete data likelihood. Marginalization yields a
non-Markovian model for inference, but we show that, in contrast to the general
case, this method still scales linearly in time. While marginalization can be
cumbersome to implement, recent advances in probabilistic programming have
enabled its automation. We demonstrate how the marginalized methods are viable
as efficient inference backends in probabilistic programming, and illustrate
this with examples in ecology and epidemiology.
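The sequential Monte Carlo building block underlying particle Gibbs can be illustrated with a minimal bootstrap particle filter for a linear-Gaussian state-space model. The model, parameter values, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bootstrap_pf(y, phi=0.9, q=1.0, r=1.0, n_particles=500, seed=0):
    """Bootstrap particle filter for the toy model
       x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns an unbiased estimate of log p(y), the quantity that particle
    MCMC embeds inside an otherwise intractable MCMC kernel."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(q / (1 - phi**2)), n_particles)  # stationary init
    loglik = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, np.sqrt(q), n_particles)   # propagate
        logw = -0.5 * ((yt - x) ** 2 / r + np.log(2 * np.pi * r))
        m = logw.max()                                           # log-sum-exp
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik
```

Particle Gibbs additionally conditions each such sweep on a retained reference trajectory; marginalizing conjugate parameters, as in the abstract, modifies the weights but keeps this overall structure.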
|
We carried out a finite-size scaling analysis of the restricted
solid-on-solid version of a recently introduced growth model that exhibits a
roughening transition accompanied by spontaneous symmetry breaking. The dynamic
critical exponent of the model was calculated and found to be consistent with
the universality class of the directed percolation process in a symmetry-broken
phase with a crossover to Kardar-Parisi-Zhang behavior in a rough phase. The
order parameter of the roughening transition together with the string order
parameter was calculated, and we found that the flat, gapped phase is
disordered with an antiferromagnetic spin-fluid structure of kinks, although
strongly dominated by the completely flat configuration without kinks. A
possible interesting extension of the model is mentioned.
|
We present our effort to create a large Multi-Layered representational
repository of Linguistic Code-Switched Arabic data. The process involves
developing clear annotation standards and guidelines, streamlining the
annotation process, and implementing quality control measures. We used two main
protocols for annotation: in-lab gold annotations and crowdsourced
annotations. We developed a web-based annotation tool to facilitate the
management of the annotation process. The current version of the repository
contains a total of 886,252 tokens, each tagged with one of sixteen
code-switching tags. The data exhibits code-switching between Modern Standard
Arabic and Egyptian Dialectal Arabic representing three data genres: Tweets,
commentaries, and discussion fora. The overall Inter-Annotator Agreement is
93.1%.
|
A bistable nonlinear energy sink conceived to mitigate the vibrations of host
structural systems is considered in this paper. The hosting structure consists
of two coupled symmetric linear oscillators (LOs) and the nonlinear energy sink
(NES) is connected to one of them. The peculiar nonlinear dynamics of the
resulting three-degree-of-freedom system is analytically described by means of
its slow invariant manifold derived from a suitable rescaling, coupled with a
harmonic balance procedure, applied to the governing equations transformed in
modal coordinates. On the basis of the first-order reduced model, the absorber
is tuned and optimized to mitigate both modes for a broad range of impulsive
load magnitudes applied to the LOs. On the one hand, for low-amplitude,
in-well, oscillations, the parameters governing the bistable NES are tuned in
order to make it functioning as a linear tuned mass damper (TMD); on the other,
for high-amplitude, cross-well, oscillations, the absorber is optimized on the
basis of the invariant manifolds features. The analytically predicted
performance of the resulting tuned bistable nonlinear energy sink (TBNES) is
numerically validated in terms of dissipation time; the absorption capabilities
are eventually compared with either a TMD or a purely cubic NES. It is shown
that, for a wide range of impulse amplitudes, the TBNES allows the most
efficient absorption even for the detuned mode, where a single TMD cannot be
effective.
|
The human ear canal couples the external sound field to the eardrum and the
solid parts of the middle ear. Therefore, knowledge of the acoustic impedance
of the human ear is widely used in the industry to develop audio devices such
as smartphones, headsets, and hearing aids. In this study, acoustic impedance
measurements in the ear canals of 32 adult subjects are presented. Wideband
measurement techniques developed specifically for this purpose enable impedance
measurements to be obtained in the full audio band up to 20 kHz. Full ear canal
geometries of all subjects are also available from a first-of-its-kind in
vivo magnetic resonance imaging study of the human outer ear. These ear
canal geometries are used to obtain individual ear moulds of all subjects and
to process the data. By utilizing a theoretical Webster's horn description, the
measured impedance is propagated in each ear canal to a common theoretical
reference plane across all subjects. At this plane the mean human impedance and
standard deviation of the population is found. The results are further
demographically divided by gender and age and compared to a widely used ear
simulator (the IEC711 coupler).
|
In this paper we have considered the possibility that the Standard Model, and
its minimal extension with the addition of singlets, merges with a high-scale
supersymmetric theory at a scale satisfying the Veltman condition and therefore
with no sensitivity to the cutoff. The matching of the Standard Model is
achieved at Planckian scales. In its complex singlet extension the matching
scale depends on the strength of the coupling between the singlet and Higgs
fields. For order one values of the coupling, still in the perturbative region,
the matching scale can be located in the TeV ballpark. Even in the absence of
quadratic divergences there remains a finite adjustment of the parameters in
the high-energy theory which should guarantee that the Higgs and the singlets
in the low-energy theory are kept light. This fine-tuning (unrelated to
quadratic divergences) is the entire responsibility of the ultraviolet theory
and remains as the missing ingredient to provide a full solution to the
hierarchy problem.
|
The determination and classification of fixed points of large Boolean
networks is addressed as a constraint satisfaction problem. We develop a
general simplification scheme that, removing all those variables and functions
belonging to trivial logical cascades, returns the computational core of the
network. The onset of an easy-to-complex regulatory phase is introduced as a
function of the parameters of the model, identifying both theoretically and
algorithmically the relevant regulatory variables.
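For intuition, the fixed points of a small Boolean network can be enumerated by brute force; the three-node network below is a toy assumption, and the constraint-satisfaction machinery described above is precisely what replaces such exhaustive search for large networks:

```python
from itertools import product

def fixed_points(update_fns):
    """Enumerate fixed points of a Boolean network exhaustively.
    update_fns[i] maps the full state tuple to node i's next value.
    Feasible only for small n; the paper's constraint-satisfaction
    approach is what makes large networks tractable."""
    n = len(update_fns)
    return [s for s in product((0, 1), repeat=n)
            if all(f(s) == s[i] for i, f in enumerate(update_fns))]

# Toy 3-node network: x0 <- x1 AND x2,  x1 <- x0 OR x2,  x2 <- x2 (frozen).
fns = [lambda s: s[1] & s[2], lambda s: s[0] | s[2], lambda s: s[2]]
```

Here the frozen node `x2` is a one-node "trivial cascade": fixing its value logically determines the rest, which is the kind of simplification the paper's scheme exploits.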
|
We propose a machine learning approach aiming at reducing Bond Graphs. The
output of the machine learning is a hybrid modeling that contains a reduced
Bond Graph coupled to a simple artificial neural network. The proposed coupling
enables knowledge continuity in machine learning. In this paper, a neural
network is obtained by a linear calibration procedure. We propose a method that
contains two training steps. First, the method selects the components of the
original Bond Graph that are kept in the Reduced Bond Graph. Secondly, the
method builds an artificial neural network that supplements the reduced Bond
Graph. Because the output of the machine learning is a hybrid model, not solely
data, it becomes difficult to use a usual Backpropagation Through Time to
calibrate the weights of the neural network. So, in a first attempt, a very
simple neural network is proposed by following a model reduction approach. We
consider the modeling of the automotive cabins thermal behavior. The data used
for the training step are obtained via solutions of differential algebraic
equations by using a design of experiment. Simple cooling simulations are run
during the training step. We show a simulation speed-up when the reduced bond
graph is used to simulate the driving cycle of the WLTP vehicles homologation
procedure, while preserving accuracy on output variables. The variables of the
original Bond Graph are split into a set of primary variables, a set of
secondary variables and a set of tertiary variables. The reduced bond graph
contains all the primary variables, but none of the tertiary variables.
Secondary variables are coupled to primary ones via an artificial neural
network. We discuss the extension of this coupling approach to more complex
artificial neural networks.
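The "neural network obtained by a linear calibration procedure" can be read as a linear map fitted by least squares from primary to secondary variables. The sketch below uses synthetic data and hypothetical names (`calibrate_linear_map`, `primary`, `secondary`), not the authors' cabin-model variables:

```python
import numpy as np

def calibrate_linear_map(primary, secondary):
    """Fit secondary = primary @ W.T + b by ordinary least squares: the
    simplest 'neural network', a single linear layer calibrated in closed
    form. Returns weights W and bias b."""
    X = np.hstack([primary, np.ones((primary.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, secondary, rcond=None)
    return coef[:-1].T, coef[-1]

# Synthetic data: two primary variables drive one secondary variable linearly.
rng = np.random.default_rng(0)
primary = rng.normal(size=(200, 2))
secondary = primary @ np.array([[1.5], [-0.5]]) + 2.0
W, b = calibrate_linear_map(primary, secondary)
```

In the reduced model, such a map predicts the secondary variables from the primary ones carried by the reduced Bond Graph, avoiding backpropagation through time entirely.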
|
The diffusion Monte Carlo method with symmetry-based state selection is used
to calculate the quantum energy states of H$_2^+$ confined into potential
barriers of atomic dimensions (a model for these ions in solids). Special
solutions are employed permitting one to obtain satisfactory results with
rather simple native code. As a test case, $^2\Pi_u$ and $^2\Pi_g$ states of
H$_2^+$ ions under spherical confinement are considered. The results are
interpreted using the correlation of H$_2^+$ states to atomic orbitals of H
atoms lying on the confining surface and perturbation calculations. The method
is straightforwardly applied to cavities of any shape and different hydrogen
plasma species (at least one-electron ones, including H) for future studies
with real crystal symmetries.
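A bare-bones, unguided diffusion Monte Carlo loop for the 1-D harmonic oscillator (a textbook stand-in, not the paper's confined H$_2^+$ system or code) illustrates the method's core diffuse-and-branch structure; the population-control feedback and the estimator used here are common illustrative choices:

```python
import numpy as np

def dmc_harmonic(n_walkers=4000, n_steps=1000, dt=0.01, seed=0):
    """Diffusion Monte Carlo for V(x) = x^2 / 2 (exact E0 = 1/2).

    Walkers diffuse, then branch with weight exp(-(V - E_ref) dt); E_ref is
    steered to keep the population near n_walkers. The walker density
    converges to the ground-state wavefunction. We return the average of V
    over the equilibrated walkers which, for this particular potential,
    equals E0 (in general one uses the growth or mixed estimator)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_walkers)
    e_ref, samples = 0.5, []
    for step in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)        # free diffusion
        v = 0.5 * x**2
        mult = (np.exp(-(v - e_ref) * dt) + rng.random(x.size)).astype(int)
        x = np.repeat(x, mult)                                    # birth/death
        if x.size == 0:                                           # extinction guard
            x = rng.normal(size=n_walkers)
        e_ref = 0.5 * np.mean(x**2) + (1.0 - x.size / n_walkers) / dt
        if step >= n_steps // 2:
            samples.append(0.5 * np.mean(x**2))
    return float(np.mean(samples))
```

The paper's symmetry-based state selection would additionally constrain the walker distribution (e.g. by nodal restrictions) to project onto excited states such as $^2\Pi_u$ and $^2\Pi_g$.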
|
This paper presents a new numerical approach to the study of non-periodicity
in signals, which can complement the maximal Lyapunov exponent method for
determining chaos transitions of a given dynamical system. The proposed
technique is based on the continuous wavelet transform and the wavelet
multiresolution analysis. A new parameter, the \textit{scale index}, is
introduced and interpreted as a measure of the degree of the signal's
non-periodicity. This methodology is successfully applied to three classical
dynamical systems: the Bonhoeffer-van der Pol oscillator, the logistic map, and
the H\'enon map.
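A rough numerical sketch of the scale-index idea: build a scalogram with a hand-rolled Ricker wavelet and take the ratio of the minimum to the maximum scalogram energy beyond the energy peak, which is small for periodic signals and larger for non-periodic ones. The wavelet choice and normalization here are assumptions; the paper's definition via the continuous wavelet transform may differ in detail:

```python
import numpy as np

def ricker(t, s):
    """Ricker (Mexican-hat) wavelet at scale s."""
    a = t / s
    return (1.0 - a**2) * np.exp(-(a**2) / 2.0)

def scale_index(signal, scales):
    """Ratio of minimum to maximum scalogram energy past the energy peak:
    close to 0 for periodic signals (energy concentrated at one scale),
    larger for non-periodic ones."""
    t = np.arange(-128, 129)
    energies = np.array([
        np.sqrt(np.mean(np.convolve(signal, ricker(t, s), mode="valid")**2))
        / np.sqrt(s)
        for s in scales])
    smax = energies.argmax()
    return energies[smax:].min() / energies[smax]
```

For a pure sinusoid the scalogram energy collapses at scales past its dominant one, driving the index toward zero, while broadband (chaotic or noisy) signals keep energy at all scales.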
|
Critical behaviors of quark-hadron phase transition in high-energy heavy-ion
collisions are investigated with the aim of identifying hadronic observables.
The surface of the plasma cylinder is mapped onto a 2D lattice. The Ising model
is used to simulate configurations corresponding to cross-over transitions in
accordance with the findings of QCD lattice gauge theory. Hadrons are formed in
clusters of all sizes. Various measures are examined to quantify the
fluctuations of the cluster sizes and of the voids among the clusters. The
canonical power-law behaviors near the critical temperature are found for
appropriately chosen measures. Since the temperature is not directly
observable, attention is given to the problem of finding observable measures.
It is demonstrated that for the measures considered the dependence on the
final-state randomization is weak. Thus the critical behavior of the measures
proposed is likely to survive the scattering effect of the hadron gas in the
final state.
|
We prove that the Brauer-Picard group of Morita autoequivalences of each of
the three fusion categories which arise as an even part of the Asaeda-Haagerup
subfactor or of its index 2 extension is the Klein four-group. We describe the
36 bimodule categories which occur in the full subgroupoid of the Brauer-Picard
groupoid on these three fusion categories. We also classify all irreducible
subfactors both of whose even parts are among these categories, of which there
are 111 up to isomorphism of the planar algebra (76 up to duality). Although we
identify the entire Brauer-Picard group, there may be additional fusion
categories in the groupoid. We prove a partial classification of possible
additional fusion categories Morita equivalent to the Asaeda-Haagerup fusion
categories and make some conjectures about their existence; we hope to address
these conjectures in future work.
|
Robust visual place recognition (VPR) requires scene representations that are
invariant to various environmental challenges such as seasonal changes and
variations due to ambient lighting conditions during day and night. Moreover, a
practical VPR system necessitates compact representations of environmental
features. To satisfy these requirements, in this paper we suggest a
modification to the existing pipeline of VPR systems to incorporate supervised
hashing. The modified system learns (in a supervised setting) compact binary
codes from image feature descriptors. These binary codes absorb robustness to
the visual variations encountered during the training phase, thereby making
the system adaptive to severe environmental changes. Also, incorporating
supervised hashing makes VPR computationally more efficient and easy to
implement on simple hardware. This is because binary embeddings can be learned
over simple-to-compute features and the distance computation is also in the
low-dimensional Hamming space of binary codes. We have performed experiments on
several challenging data sets covering seasonal, illumination and viewpoint
variations. We also compare two widely used supervised hashing methods of
CCAITQ and MLH and show that this new pipeline outperforms or closely matches
the state-of-the-art deep learning VPR methods that are based on
high-dimensional features extracted from pre-trained deep convolutional neural
networks.
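The computational payoff of binary codes comes from cheap Hamming-space matching; a minimal numpy sketch, independent of the CCAITQ/MLH training step and using made-up example codes:

```python
import numpy as np

def pack_codes(bits):
    """Pack an (n, d) array of 0/1 code bits into bytes (d a multiple of 8)."""
    return np.packbits(np.asarray(bits, dtype=np.uint8), axis=1)

def hamming_distances(query, db):
    """Hamming distances from one packed query code to all packed db codes."""
    return np.unpackbits(np.bitwise_xor(db, query), axis=1).sum(axis=1)

codes = pack_codes([[1, 0, 1, 1, 0, 0, 1, 0],
                    [1, 0, 1, 1, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 1, 0, 1]])
dists = hamming_distances(codes[0], codes)
```

Packing stores each 64-bit code in 8 bytes, and the XOR-plus-popcount distance is what makes retrieval fast on modest hardware.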
|
It is well-known that solutions to the basic problem in the calculus of
variations may fail to be Lipschitz continuous when the Lagrangian depends on
t. Similarly, for viscosity solutions to time-dependent Hamilton-Jacobi
equations one cannot expect Lipschitz bounds to hold uniformly with respect to
the regularity of coefficients. This phenomenon raises the question whether
such solutions satisfy uniform estimates in some weaker norm. We will show that
this is the case for a suitable H\"older norm, obtaining uniform estimates in
(x,t) for solutions to first and second order Hamilton-Jacobi equations. Our
results apply to degenerate parabolic equations and require superlinear growth
at infinity, in the gradient variables, of the Hamiltonian. Proofs are based on
comparison arguments and representation formulas for viscosity solutions, as
well as weak reverse H\"older inequalities.
|
Let $n_1$ and $n_2$ be two distinct primes with
$\mathrm{gcd}(n_1-1,n_2-1)=4$. In this paper, we compute the autocorrelation
values of a generalized cyclotomic sequence of order $4$. Our results show that
this sequence can have very good autocorrelation properties. We determine the
linear complexity and minimal polynomial of the generalized cyclotomic sequence
over $\mathrm{GF}(q)$ where $q=p^m$ and $p$ is an odd prime. Our results show
that this sequence possesses large linear complexity. So, the sequence can be
used in many domains such as cryptography and coding theory. We employ this
sequence of order $4$ to construct several classes of cyclic codes over
$\mathrm{GF}(q)$ with length $n_1n_2$. We also obtain the lower bounds on the
minimum distance of these cyclic codes.
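The quantity studied above is the standard periodic autocorrelation of a $\pm1$ sequence; a short computational sketch with a toy length-4 perfect sequence (not the order-4 cyclotomic construction itself):

```python
import numpy as np

def periodic_autocorrelation(s):
    """C(tau) = sum_t s[t] * s[(t + tau) mod N] for tau = 0..N-1,
    for a +/-1 sequence s of period N."""
    s = np.asarray(s)
    return np.array([np.sum(s * np.roll(s, -tau)) for tau in range(s.size)])

# Length-4 'perfect' sequence: peak N at shift 0, zero at all other shifts.
C = periodic_autocorrelation([1, 1, 1, -1])
```

"Good autocorrelation" means the off-peak values $C(\tau)$, $\tau \neq 0$, are small in magnitude relative to the peak $C(0) = N$, which is what makes such sequences useful in cryptography and communications.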
|
We extend the N-Intertwined Mean-Field Approximation (NIMFA) for the
Susceptible-Infectious-Susceptible (SIS) epidemiological process to
time-varying networks. Processes on time-varying networks are often analysed
under the assumption that the process and network evolution happen on different
timescales. This approximation is called timescale separation. We investigate
timescale separation between disease spreading and topology updates of the
network. We introduce the transition times $\mathrm{\underline{T}}(r)$ and
$\mathrm{\overline{T}}(r)$ as the boundaries between the intermediate regime
and the annealed (fast changing network) and quenched (static network) regimes,
respectively, for a fixed accuracy tolerance $r$. By analysing the convergence
of static NIMFA processes, we analytically derive upper and lower bounds for
$\mathrm{\overline{T}}(r)$. Our results provide bounds on the time of
convergence to the steady state of the static NIMFA SIS process. We show that,
under our assumptions, the upper-transition time $\mathrm{\overline{T}}(r)$ is
almost entirely determined by the basic reproduction number $R_0$ of the
network. The value of the upper-transition time $\mathrm{\overline{T}}(r)$
around the epidemic threshold is large, which agrees with the current
understanding that some real-world epidemics cannot be approximated with the
aforementioned timescale separation.
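The static NIMFA equations referenced above can be integrated directly; the sketch below uses a complete graph, where the steady-state infection probability has the closed form $1-\delta/(\beta(N-1))$ above the epidemic threshold (parameter values are illustrative):

```python
import numpy as np

def nimfa_sis(A, beta, delta, v0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of the static NIMFA SIS equations
       dv_i/dt = -delta * v_i + beta * (1 - v_i) * sum_j A_ij v_j,
    where v_i approximates node i's infection probability."""
    v = np.array(v0, dtype=float)
    for _ in range(int(t_end / dt)):
        v = v + dt * (-delta * v + beta * (1.0 - v) * (A @ v))
    return v

# Complete graph: above the threshold R0 = beta*(N-1)/delta > 1, the NIMFA
# steady state is v* = 1 - delta / (beta * (N - 1)) at every node.
N = 5
A = np.ones((N, N)) - np.eye(N)
v_inf = nimfa_sis(A, beta=1.0, delta=2.0, v0=0.9 * np.ones(N))
```

The transition times bound how slowly such trajectories approach the steady state; near the threshold $R_0 \approx 1$ the decay rate vanishes and convergence becomes arbitrarily slow.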
|
The pre-inflationary evolution of the universe describes the beginning of the
expansion from a static initial state, such that the Hubble parameter is
initially zero, but increases to an asymptotic constant value, in which it
could achieve a de Sitter (inflationary) expansion. The expansion is driven by
a background phantom field. The back-reaction effects at this moment should
describe vacuum geometrical excitations, which are studied in detail in this
work using Relativistic Quantum Geometry.
|
We consider the problem of partitioning a line segment into two subsets, so
that $n$ finite measures all have the same ratio of values for the subsets.
Letting $\alpha\in[0,1]$ denote the desired ratio, this generalises the
PPA-complete consensus-halving problem, in which $\alpha=\frac{1}{2}$.
Stromquist and Woodall showed that for any $\alpha$, there exists a solution
using $2n$ cuts of the segment. They also showed that if $\alpha$ is
irrational, that upper bound is almost optimal. In this work, we elaborate on
the bounds for rational values of $\alpha$. For $\alpha = \frac{\ell}{k}$, we show a
lower bound of $\frac{k-1}{k} \cdot 2n - O(1)$ cuts; we also obtain almost
matching upper bounds for a large subset of rational $\alpha$. On the
computational side, we explore how the complexity depends on the number of cuts
available. More specifically,
1. when the minimal number of cuts is required for each instance, the
problem is NP-hard for any $\alpha$;
2. for a large subset of rational $\alpha = \frac{\ell}{k}$, when
$\frac{k-1}{k} \cdot 2n$ cuts are available, the problem is in PPA-$k$ under
Turing reduction;
3. when $2n$ cuts are allowed, the problem belongs to PPA for any $\alpha$;
more generally, the problem belongs to PPA-$p$ for any prime $p$ if $2(p-1)\cdot
\frac{\lceil p/2 \rceil}{\lfloor p/2 \rfloor} \cdot n$ cuts are available.
|
We propose a Gribov-Zwanziger type action for the Landau-DeWitt gauge that
preserves, for any gauge group, the invariance under background gauge
transformations. At zero temperature, and to one-loop accuracy, the model can
be related to the Gribov no-pole condition. We apply the model to the
deconfinement transition in SU(2) and SU(3) Yang-Mills theories and compare the
predictions obtained with a single or with various (color dependent) Gribov
parameters that can be introduced in the action without jeopardizing its
background gauge invariance. The Gribov parameters associated to color
directions orthogonal to the background can become negative, while keeping the
background effective potential real. In some cases, the proper analysis of the
transition requires the potential to be resolved in those regions.
|
We propose a generalisation of the Weak Gravity Conjecture in de Sitter space
by studying charged black holes and comparing the gravitational force with an
abelian gauge force. Using the same condition as in flat space, namely the absence of
black-hole remnants, one finds that for a given mass $m$ there should be a
state with a charge $q$ bigger than a minimal value $q_{\rm min}(m,l)$,
depending on the mass and the de Sitter radius $l$, in Planck units. In the
large radius flat space limit (large $l$), $q_{\rm min}\to m$ leading to the
known result $q>m/\sqrt{2}$, while in the highly curved case (small $l$)
$q_{\rm min}$ behaves as $\sqrt{ml}$. We also discuss the example of the gauged
R-symmetry in $N=1$ supergravity.
|
Under inhomogeneous flow, dense suspensions exhibit complex behaviour that
violates the conventional homogeneous rheology. Specifically, one finds flowing
regions with a macroscopic friction coefficient below the yielding criterion,
and volume fraction above the jamming criterion. We demonstrate the underlying
physics by incorporating shear rate fluctuations into a recently proposed
tensor model for the microstructure and stress, and applying the model to an
inhomogeneous flow problem. The model predictions agree qualitatively with
particle-based simulations.
|
The BESIII collaboration here reports the first observation of polarized
$\Lambda$ and $\bar{\Lambda}$ hyperons produced in two different processes: i)
the resonant $e^+e^- \to J/\psi\to\Lambda\bar{\Lambda}$, using a data sample of
1.31 $\times$ 10$^9$ $J/\psi$ events and ii) the non-resonant $e^+e^-\to
\gamma^* \to \Lambda\bar{\Lambda}$, using a 66.9 pb$^{-1}$ data sample
collected at $\sqrt{s} =$ 2.396 GeV. In $e^+e^-\to
J/\psi\to\Lambda\bar{\Lambda}$, the phase between the electric and the magnetic
amplitude is measured for the first time to be $42.3^{\mathrm{o}} \pm
0.6^{\mathrm{o}} \pm 0.5^{\mathrm{o}}$. The multi-dimensional analysis enables
a model-independent measurement of the decay parameters for $\Lambda\to p\pi^-$
($\alpha_-$), $\bar{\Lambda}\to\bar{p}\pi^+$ ($\alpha_+$) and
$\bar{\Lambda}\to\bar{n}\pi^0$ ($\bar{\alpha}_0$). The obtained value
$\alpha_-=0.750\pm0.009\pm0.004$ differs by $\sim$5$\sigma$ from the PDG value.
This value, together with the measurement $\alpha_+=-0.758\pm0.010\pm0.007$,
allows for the most precise test of CP violation in $\Lambda$ decays so far:
$A_{CP} = (\alpha_- + \alpha_+)/(\alpha_- - \alpha_+) =
-0.006\pm0.012\pm0.007$. The decay asymmetry $\bar{\alpha}_0 =
-0.692\pm0.016\pm0.006$ is measured for the first time. The $e^+e^- \to
\Lambda\bar{\Lambda}$ reaction at $\sqrt{s} =$ 2.396 GeV enables a first
complete measurement of the time-like electric and magnetic form factor of any
baryon, of the modulus of the ratio $R=|G_E/G_M|$ and of the relative phase
$\Delta\Phi=\Phi_E-\Phi_M$. With the decay asymmetry parameters from the
$J/\psi$ data, the obtained values are $R=0.96\pm0.14\pm0.02$ and $\Delta\Phi =
37^{\mathrm{o}} \pm 12^{\mathrm{o}} \pm 6^{\mathrm{o}}$. In addition, the cross
section has been measured with unprecedented precision to be $\sigma = 119.0\pm
5.3\pm5.1$ pb, which corresponds to an effective form factor of $|G|=0.123 \pm
0.003 \pm 0.003$.
|
We prove that semialgebraic sets of rectangular matrices of a fixed rank, of
skew-symmetric matrices of a fixed rank and of real symmetric matrices whose
eigenvalues have prescribed multiplicities are minimal submanifolds of the
space of real matrices of a given size.
|
Thanks to the Big Data revolution and increasing computing capacities,
Artificial Intelligence (AI) has made an impressive comeback over the past few
years and is now omnipresent in both research and industry. The creative
sectors have always been early adopters of AI technologies and this continues
to be the case. As a matter of fact, recent technological developments keep
pushing the boundaries of intelligent systems in creative applications: the
critically acclaimed movie "Sunspring", released in 2016, was entirely written
by AI technology, and "Hello World", the first-ever music album produced
using AI, was released this year. Simultaneously, the exploratory
nature of the creative process is raising important technical challenges for AI
such as the ability for AI-powered techniques to be accurate under limited data
resources, as opposed to the conventional "Big Data" approach, or the ability
to process, analyse and match data from multiple modalities (text, sound,
images, etc.) at the same time. The purpose of this white paper is to
understand future technological advances in AI and their growing impact on
creative industries. This paper addresses the following questions: Where does
AI operate in creative Industries? What is its operative role? How will AI
transform creative industries in the next ten years? This white paper aims to
provide a realistic perspective of the scope of AI actions in creative
industries, proposes a vision of how this technology could contribute to
research and development works in such context, and identifies research and
development challenges.
|
We study the Zak transform of totally positive (TP) functions. We use the
convergence of the Zak transform of TP functions of finite type to prove that
the Zak transforms of all TP functions without Gaussian factor in the Fourier
transform have only one zero in their fundamental domain of quasi-periodicity.
Our proof is based on complex analysis, especially the Theorem of Hurwitz and
some real analytic arguments, where we use the connection of TP functions of
finite type and exponential B-splines.
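Concretely, with the convention $Zf(x,w)=\sum_k f(x+k)e^{2\pi i k w}$ (conventions vary across the literature), the quasi-periodicity that defines the fundamental domain can be checked numerically for a Gaussian, a prototypical totally positive function:

```python
import cmath, math

def zak(f, x, w, K=30):
    """Truncated Zak transform: sum_{|k| <= K} f(x + k) e^{2 pi i k w}.
    For rapidly decaying f (e.g. a Gaussian), modest K suffices."""
    return sum(f(x + k) * cmath.exp(2j * math.pi * k * w)
               for k in range(-K, K + 1))

gauss = lambda t: math.exp(-math.pi * t * t)  # a Gaussian is totally positive

# Quasi-periodicity on the fundamental domain:
#   Zf(x, w + 1) = Zf(x, w)   and   Zf(x + 1, w) = e^{-2 pi i w} Zf(x, w).
```

It is this quasi-periodicity that forces the Zak transform to vanish somewhere in the fundamental domain; the theorem above pins down that there is exactly one such zero for the TP functions considered.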
|
We construct new examples of cubic polynomials with a parabolic fixed point
that cannot be approximated by Misiurewicz polynomials. In particular, such
parameters admit maximal bifurcations, but do not belong to the support of the
bifurcation measure.
|
We prove that there exists an entire function for which every complex number
is an asymptotic value and whose growth is arbitrarily slow subject only to the
necessary condition that the function is of infinite order.
|
We present an alternative way to calculate the screening of the static
potential between two charges in (non)abelian gauge theories at high
temperatures. Instead of a loop expansion of a gauge boson self-energy, we
evaluate the energy shift of the vacuum to order e^2 after applying an external
static magnetic field and extract a temperature- and momentum-dependent
dielectric permittivity. The Hard Thermal Loop (HTL) gluon and photon Debye
masses are recovered from the lowest lying Landau levels of the perturbed
vacuum. In QED, the complete calculation exhibits an interesting cancellation
of terms, resulting in a logarithmic running alpha(T). In QCD, a Landau pole in
alpha_s arises in the infrared from the sign of the gluon contribution, as in
more sophisticated thermal renormalization group calculations.
|
Understanding human activity is very challenging even with the recently
developed 3D/depth sensors. To solve this problem, this work investigates a
novel deep structured model, which adaptively decomposes an activity instance
into temporal parts using the convolutional neural networks (CNNs). Our model
advances the traditional deep learning approaches in two aspects. First, we
incorporate latent temporal structure into the deep model, accounting for large
temporal variations of diverse human activities. In particular, we utilize the
latent variables to decompose the input activity into a number of temporally
segmented sub-activities, and accordingly feed them into the parts (i.e.
sub-networks) of the deep architecture. Second, we incorporate a radius-margin
bound as a regularization term into our deep model, which effectively improves
the generalization performance for classification. For model training, we
propose a principled learning algorithm that iteratively (i) discovers the
optimal latent variables (i.e. the ways of activity decomposition) for all
training instances, (ii) updates the classifiers based on the generated
features, and (iii) updates the parameters of multi-layer neural networks. In
the experiments, our approach is validated on several complex scenarios for
human activity recognition and demonstrates superior performance over other
state-of-the-art approaches.
|
The instability of an atomic clock is characterized by the Allan variance, a
measure widely used to describe the noise of frequency standards. We provide an
explicit method to find the ultimate bound on the Allan variance of an atomic
clock in the most general scenario where N atoms are prepared in an arbitrarily
entangled state and arbitrary measurement and feedback are allowed, including
those exploiting coherences between succeeding interrogation steps. While the
method is rigorous and general, it becomes numerically challenging for large N
and long averaging times.
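For reference, the classical (non-overlapping) Allan variance estimator that such bounds are benchmarked against has a one-line form; this sketch is the standard estimator, not the paper's optimization method:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance at averaging time m*tau0 from
    fractional-frequency samples y:
      sigma_y^2(m * tau0) = 0.5 * mean((ybar_{k+1} - ybar_k)^2),
    where ybar are averages of m consecutive samples."""
    y = np.asarray(y, dtype=float)
    n = y.size // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)
```

The ultimate bound of the paper lower-bounds this quantity over all N-atom interrogation strategies, entangled input states, and feedback protocols.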
|
Nematicity and magnetism are two key features in Fe-based superconductors,
and their interplay is one of the most important unsolved problems. In FeSe,
the magnetic order is absent below the structural transition temperature
$T_{str}=90$K, in stark contrast to other families, where magnetism emerges
slightly below $T_{str}$. To understand this striking material dependence, we
investigate the spin-fluctuation-mediated orbital order ($n_{xz}\neq n_{yz}$)
by focusing on the orbital-spin interplay driven by the strong-coupling effect,
called the vertex correction. This orbital-spin interplay is very strong in
FeSe because of the small ratio between the Hund's and Coulomb interactions
($\bar{J}/\bar{U}$) and large $d_{xz},d_{yz}$-orbitals weight at the Fermi
level. For this reason, in the FeSe model, the orbital order is established
even though the spin fluctuations are very weak, so magnetism is absent below
$T_{str}$. In contrast, in the LaFeAsO model, the magnetic order
appears just below $T_{str}$ both experimentally and theoretically. Thus, the
orbital-spin interplay due to the vertex correction is the key ingredient in
understanding the rich phase diagram with nematicity and magnetism in Fe-based
superconductors in a unified way.
|
The design of multi-stable RNA molecules has important applications in
biology, medicine, and biotechnology. Synthetic design approaches profit
strongly from effective in-silico methods, which can tremendously impact their
cost and feasibility. We revisit a central ingredient of most in-silico design
methods: the sampling of sequences for the design of multi-target structures,
possibly including pseudoknots. For this task, we present an efficient, tree
decomposition-based sampling algorithm. Our fixed-parameter tractable approach
is underpinned by establishing the #P-hardness of uniform sampling. Modeling the
problem as a constraint network, our program supports generic
Boltzmann-weighted sampling for arbitrary additive RNA energy models; this
enables the generation of RNA sequences meeting specific goals like expected
free energies or GC-content. Finally, we empirically study general properties
of the approach and generate biologically relevant multi-target
Boltzmann-weighted designs for a common design benchmark. Generating seed
sequences with our program, we demonstrate significant improvements over the
previously best multi-target sampling strategy (uniform sampling). Our software
is freely available at: https://github.com/yannponty/RNARedPrint .
|
We prove new necessary and sufficient conditions to carry out a compact
linearization approach for a general class of binary quadratic problems subject
to assignment constraints, as proposed by Liberti in 2007. The new
conditions resolve inconsistencies that can occur when the original method is
used. We also present a mixed-integer linear program to compute a
minimally-sized linearization. When all the assignment constraints have
non-overlapping variable support, this program is shown to have a totally
unimodular constraint matrix. Finally, we give a polynomial-time combinatorial
algorithm that is exact in this case and can still be used as a heuristic
otherwise.
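Any such linearization builds on the standard device of replacing a binary product $x_i x_j$ by a continuous variable $y$ constrained linearly; a brute-force check of that device (the compact scheme's contribution is to need far fewer such $y$ variables by exploiting the assignment constraints):

```python
from itertools import product

def feasible_y(xi, xj):
    """Values y in {0, 1} satisfying the standard linearization constraints
    y <= xi, y <= xj, y >= xi + xj - 1 (and y >= 0) for fixed binaries."""
    return [y for y in (0, 1) if y <= xi and y <= xj and y >= xi + xj - 1]

# The constraints force y = xi * xj for every binary assignment.
table = {(xi, xj): feasible_y(xi, xj) for xi, xj in product((0, 1), repeat=2)}
```

Because the feasible set pins $y$ to the product exactly, the quadratic objective can be rewritten linearly in the $y$ variables, which is what the minimally-sized linearization MILP then economizes.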
|
We consider inference on a scalar regression coefficient under a constraint
on the magnitude of the control coefficients. A class of estimators based on a
regularized propensity score regression is shown to exactly solve a tradeoff
between worst-case bias and variance. We derive confidence intervals (CIs)
based on these estimators that are bias-aware: they account for the possible
bias of the estimator. Under homoskedastic Gaussian errors, these estimators
and CIs are near-optimal in finite samples for MSE and CI length. We also
provide conditions for asymptotic validity of the CI with unknown and possibly
heteroskedastic error distribution, and derive novel optimal rates of
convergence under high-dimensional asymptotics that allow the number of
regressors to increase more quickly than the number of observations. Extensive
simulations and an empirical application illustrate the performance of our
methods.
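The bias-aware idea can be sketched numerically: given a standard error and a worst-case bias bound B, replace the usual normal critical value with the 1-alpha quantile of |N(B/se, 1)|, so the interval covers under the worst-case bias. The quantile is found by bisection; the estimate, standard error, and bias bound below are invented for illustration:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cv_bias_aware(t, alpha=0.05):
    """1 - alpha quantile of |N(t, 1)|, found by bisection.

    t is the worst-case bias measured in standard errors; t = 0 recovers
    the usual two-sided normal critical value (about 1.96 at alpha=0.05).
    """
    lo, hi = 0.0, t + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # Coverage P(|N(t,1)| <= c) is increasing in c.
        if norm_cdf(mid - t) - norm_cdf(-mid - t) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return hi

estimate, se, bias_bound = 1.2, 0.3, 0.15    # illustrative numbers
c = cv_bias_aware(bias_bound / se)
ci = (estimate - c * se, estimate + c * se)  # bias-aware CI
```

The larger the bias bound relative to the standard error, the wider the critical value, which is exactly the widening that makes the interval honest about possible bias.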
|
The fabrication and experimental characterization of a thermal flow meter,
capable of detecting and measuring two independent gas flows with a single
chip, is described. The device is based on a 4 x 4 mm^2 silicon chip, where a
series of differential micro-anemometers have been integrated together with
standard electronic components by means of postprocessing techniques. The
innovative aspect of the sensor is the use of a plastic adapter, thermally
bonded to the chip, to convey the gas flow only to the areas where the sensors
are located. The use of this inexpensive packaging procedure to include
different sensing structures in distinct flow channels is demonstrated.
|
Inflationary models predict a definite, model independent, angular dependence
for the three-point correlation function of $\Delta T/T$ at large angles
(greater than $\sim 1^\circ$) which we calculate. The overall amplitude is
model dependent and generically unobservably small, but may be large in some
specific models. We compare our results with other models of nongaussian
fluctuations.
|
Measurements are presented of the polarisation of W+W- boson pairs produced
in e+e- collisions, and of CP-violating WWZ and WWGamma trilinear gauge
couplings. The data were recorded by the OPAL experiment at LEP during 1998,
where a total integrated luminosity of 183 pb^-1 was obtained at a
centre-of-mass energy of 189 GeV. The measurements are performed through a spin
density matrix analysis of the W boson decay products. The fraction of W bosons
produced with longitudinal polarisation was found to be sigma_L/sigma_total =
(21.0 +- 3.3 +- 1.6)% where the first error is statistical and the second
systematic. The joint W boson pair production fractions were found to be
sigma_TT/sigma_total = (78.1 +- 9.0 +- 3.2) %, sigma_LL/sigma_total = (20.1 +-
7.2 +- 1.8) % and sigma_TL/sigma_total = (1.8 +- 14.7 +- 3.8) %. In the
CP-violating trilinear gauge coupling sector we find kappa_z = -0.20 +0.10
-0.07, g^z_4 = -0.02 +0.32 -0.33 and lambda_z = -0.18 +0.24 -0.16, where errors
include both statistical and systematic uncertainties. In each case the
coupling is determined with all other couplings set to their Standard Model
values except those related to the measured coupling via SU(2)_LxU(1)_Y
symmetry. These results are consistent with Standard Model expectations.
|
Let $B_{\alpha}^{p}$ be the space of $f$ holomorphic in the unit ball of
$\Bbb C^n$ such that $(1-|z|^2)^\alpha f(z) \in L^p$, where $0<p\leq\infty$,
$\alpha\geq -1/p$ (weighted Bergman space). In this paper we study the
interpolating sequences for various $B_{\alpha}^{p}$. The limiting cases
$\alpha=-1/p$ and $p=\infty$ are respectively the Hardy spaces $H^p$ and
$A^{-\alpha}$, the holomorphic functions with polynomial growth of order
$\alpha$, which have generated particular interest.
In \S 1 we first collect some definitions and well-known facts about weighted
Bergman spaces and then introduce the natural interpolation problem, along with
some basic properties. In \S 2 we describe in terms of $\alpha$ and $p$ the
inclusions between $B_{\alpha}^{p}$ spaces, and in \S 3 we show that most of
these inclusions also hold for the corresponding spaces of interpolating
sequences. \S 4 is devoted to sufficient conditions for a sequence to be
$B_{\alpha}^{p}$-interpolating, expressed in the same terms as the conditions
given in previous works of Thomas for the Hardy spaces and Massaneda for
$A^{-\alpha}$. In particular we show, under some restrictions on $\alpha$ and
$p$, that finite unions of $B_{\alpha}^{p}$-interpolating sequences coincide
with finite unions of separated sequences.
In his article in Inventiones, Seip implicitly gives a characterization of
interpolating sequences for all weighted Bergman spaces in the disk. We spell
out the details for the reader's convenience in an appendix (\S 5).
|
In this paper, we focus on temperature-aware Monolithic 3D (Mono3D) deep
neural network (DNN) inference accelerators for biomedical applications. We
develop an optimizer that tunes aspect ratios and footprint of the accelerator
under user-defined performance and thermal constraints, and generates
near-optimal configurations. Using the proposed Mono3D optimizer, we
demonstrate up to 61% improvement in energy efficiency for biomedical
applications over a performance-optimized accelerator.
|
We study a hybrid quantum system consisting of spin ensembles and
superconducting flux qubits, where each spin ensemble is realized using the
nitrogen-vacancy centers in a diamond crystal and the nearest-neighbor spin
ensembles are effectively coupled via a flux qubit. We show that the coupling
strengths between flux qubits and spin ensembles can reach the strong and even
ultrastrong coupling regimes by either engineering the hybrid structure in
advance or tuning the excitation frequencies of spin ensembles via external
magnetic fields. When extending the hybrid structure to an array with equal
coupling strengths, we find that in the strong-coupling regime, the hybrid
array is reduced to a tight-binding model of a one-dimensional bosonic lattice.
In the ultrastrong-coupling regime, it exhibits quasiparticle excitations
separated from the ground state by an energy gap. Moreover, these quasiparticle
excitations and the ground state are stable under a certain condition that is
tunable via the external magnetic field. This may provide an experimentally
accessible method to probe the instability of the system.
|
This work presents a wavelet analysis of 14 Kepler white dwarf stars, in order
to confirm their photometric variability behavior and to search for
periodicities in these targets. From the observed Kepler light curves we
obtained the wavelet local and global power spectra. Through this procedure,
one can perform a richly detailed analysis in the time-frequency domain and
thus obtain a new perspective on the time evolution of the periodicities present in
these stars. We identified a photometric variability behavior in ten white
dwarfs, corresponding to period variations of ~ 2 h to 18 days: among these
stars, three are new candidates and seven, earlier identified from other
studies, are confirmed.
|
We report in this paper what is to our knowledge the first observation of a
time-resolved diffusing wave spectroscopy signal recorded by transillumination
through a thick turbid medium: the DWS signal is measured for a fixed photon
transit time, which opens the possibility of improving the spatial resolution.
This technique could find biomedical applications, especially in mammography.
|
The software of robotic assistants needs to be verified, to ensure its safety
and functional correctness. Testing in simulation allows a high degree of
realism in the verification. However, generating tests that cover both
interesting foreseen and unforeseen scenarios in human-robot interaction (HRI)
tasks, while executing most of the code, remains a challenge. We propose the
use of belief-desire-intention (BDI) agents in the test environment, to
increase the level of realism and human-like stimulation of simulated robots.
Artificial intelligence, such as agent theory, can be exploited for more
intelligent test generation. An automated testbench was implemented for a
simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table
assembly task between a humanoid robot and a person. Requirements were verified
for this task, and some unexpected design issues were discovered, leading to
possible code improvements. Our results highlight the practicality of BDI
agents to automatically generate valid and human-like tests to get high code
coverage, compared to hand-written directed tests, pseudorandom generation, and
other variants of model-based test generation. Also, BDI agents allow the
coverage of combined behaviours of the HRI system with more ease than writing
temporal logic properties for model checking.
|
Recent papers published in the last few years have contributed to resolving the
enigma of the hypothetical Be nature of the hot pulsating star $\beta$ Cep. This star
shows variable emission in the H$\alpha$ line, typical for Be stars, but its
projected rotational velocity is very much lower than the critical limit,
contrary to what is expected for a typical Be star. The emission has been
attributed to the secondary component of the $\beta$ Cep spectroscopic binary
system.
In this paper, using both our own and archived spectra, we attempted to recover
the H$\alpha$ profile of the secondary component and to analyze its behavior
with time for a long period. To accomplish this task, we first derived the
atmospheric parameters of the primary: T$_{\rm eff}$ = 24000 $\pm$ 250 K and
$\log g$ = 3.91 $\pm$ 0.10, then we used these values to compute its synthetic
H$\alpha$ profile and finally we reconstructed the secondary's profile
disentangling the observed one.
The secondary's H$\alpha$ profile shows the typical two-peaked emission of a
Be star with strong variability. We also analyzed the behavior versus time of
several line parameters: equivalent width, V/R, FWHM, peak separation, and
radial velocity of the central depression.
The projected rotational velocity ($v \sin i$) of the secondary and the size
of the surrounding equatorial disk have also been estimated.
|
A content recommender system or a recommendation system represents a subclass
of information filtering systems which seeks to predict the user preferences,
i.e. the content that would be most likely positively "rated" by the user.
Nowadays, the recommender systems of OpenCourseWare (OCW) platforms typically
generate a list of recommendations in one of two ways: through content-based
filtering or through user-based collaborative filtering (CF). In this
paper, the conceptual design of a content recommendation module is presented,
which is capable of proposing the related decks (presentations, educational
material, etc.) to the user having in mind past user activities, preferences,
type and content similarity, etc. It particularly analyses suitable techniques
for implementation of the user-based CF approach and user-related features that
are relevant for the content evaluation. The proposed approach also envisages a
hybrid recommendation system as a combination of user-based and content-based
approaches in order to provide a holistic and efficient solution for content
recommendation. Finally, for evaluation and testing purposes, a designated
content recommendation module was implemented as part of the SlideWiki
authoring OCW platform.
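The user-based CF step that such a module builds on can be sketched with a toy rating matrix: predict an unseen deck's score as a similarity-weighted average of the ratings given by similar users. The matrix, the cosine similarity, and the treatment of zeros as "unrated" are illustrative assumptions, not the SlideWiki implementation:

```python
import numpy as np

# users x decks rating matrix; 0 means "not rated" (toy data)
R = np.array([[5, 4, 0, 1],
              [4, 5, 0, 1],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def predict(u, d):
    """Predicted rating of deck d for user u via user-based CF."""
    weights = []
    for v in range(R.shape[0]):
        if v == u or R[v, d] == 0:
            continue                      # skip self and non-raters
        sim = float(R[u] @ R[v]) / float(
            np.linalg.norm(R[u]) * np.linalg.norm(R[v]))
        weights.append((sim, R[v, d]))
    if not weights:
        return 0.0
    # Similarity-weighted average of the neighbours' ratings.
    return sum(s * r for s, r in weights) / sum(s for s, _ in weights)

score = predict(0, 2)   # user 0 never rated deck 2
```

A hybrid system, as envisaged in the abstract, would blend this score with a content-similarity score for the same deck.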
|
A central issue of Mott physics, with symmetries being fully retained in the
spin background, concerns the charge excitation. In a two-leg spin ladder with
spin gap, an injected hole can exhibit either a Bloch wave or a density wave by
tuning the ladder anisotropy through a `quantum critical point' (QCP). The
nature of such a QCP has been a subject of recent studies by density matrix
renormalization group (DMRG). In this paper, we reexamine the ground state of
the one doped hole, and show that a two-component structure is present in the
density wave regime in contrast to the single component in the Bloch wave
regime. In the former, the density wave itself is still contributed by a
standing-wave-like component characterized by a quasiparticle spectral weight
$Z$ in a finite-size system. But there is an additional charge incoherent
component emerging, which intrinsically breaks the translational symmetry
associated with the density wave. The partial momentum is carried away by
neutral spin excitations. Such an incoherent part does not manifest in the
single-particle spectral function, directly probed by the angle-resolved
photoemission spectroscopy (ARPES) measurement, however it is demonstrated in
the momentum distribution function. The Landau's one-to-one correspondence
hypothesis for a Fermi liquid breaks down here. The microscopic origin of this
density wave state as an intrinsic manifestation of the doped Mott physics will
be also discussed.
|
We calculate the neutral pion photoproduction on the proton near threshold in
covariant baryon chiral perturbation theory, including the $\Delta(1232)$
resonance as an explicit degree of freedom, up to chiral order $p^{7/2}$ in the
$\delta$ counting. We compare our results with recent low-energy data from the
Mainz Microtron for angular distributions and photon asymmetries. The
convergence of the chiral series of the covariant approach is found to improve
substantially with the inclusion of the $\Delta(1232)$ resonance.
|
The aim of this paper is to introduce a new learning procedure for neural
networks and to demonstrate that it works well enough on a few small problems
to be worth further investigation. The Forward-Forward algorithm replaces the
forward and backward passes of backpropagation by two forward passes, one with
positive (i.e. real) data and the other with negative data which could be
generated by the network itself. Each layer has its own objective function
which is simply to have high goodness for positive data and low goodness for
negative data. The sum of the squared activities in a layer can be used as the
goodness but there are many other possibilities, including minus the sum of the
squared activities. If the positive and negative passes could be separated in
time, the negative passes could be done offline, which would make the learning
much simpler in the positive pass and allow video to be pipelined through the
network without ever storing activities or stopping to propagate derivatives.
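The layer-local objective described above can be sketched in a few lines of NumPy. This is an illustrative toy (a single ReLU layer, sum-of-squares goodness, logistic loss against a fixed threshold; the sizes, threshold, learning rate, and synthetic data are my assumptions, not a reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))  # one layer's weights
THRESHOLD, LR = 2.0, 0.03                # illustrative values

def goodness(h):
    # Goodness of a layer = sum of its squared activities.
    return np.sum(h * h, axis=-1)

def ff_step(x_pos, x_neg):
    """One Forward-Forward update: two forward passes, no backward pass.

    The positive pass pushes goodness above THRESHOLD, the negative pass
    pushes it below; the layer trains only on this local objective.
    """
    global W
    for x, target in ((x_pos, 1.0), (x_neg, 0.0)):
        h = np.maximum(x @ W, 0.0)                       # ReLU activities
        p = 1.0 / (1.0 + np.exp(-(goodness(h) - THRESHOLD)))
        # Gradient ascent on the logistic log-likelihood of "is positive".
        W += LR * x.T @ ((target - p)[:, None] * 2.0 * h) / len(x)

x_pos = rng.normal(loc=0.5, size=(32, 16))   # "real" data (toy)
x_neg = rng.normal(loc=-0.5, size=(32, 16))  # "negative" data (toy)
g_before = goodness(np.maximum(x_pos @ W, 0.0)).mean()
for _ in range(50):
    ff_step(x_pos, x_neg)
g_after = goodness(np.maximum(x_pos @ W, 0.0)).mean()
```

After training, mean goodness on the positive data rises above its initial value, which is the only training signal the layer ever receives.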
|
In clustering problems, a central decision-maker is given a complete metric
graph over vertices and must provide a clustering of vertices that minimizes
some objective function. In fair clustering problems, vertices are endowed with
a color (e.g., membership in a group), and the features of a valid clustering
might also include the representation of colors in that clustering. Prior work
in fair clustering assumes complete knowledge of group membership. In this
paper, we generalize prior work by assuming imperfect knowledge of group
membership through probabilistic assignments. We present clustering algorithms
in this more general setting with approximation ratio guarantees. We also
address the problem of "metric membership", where different groups have a
notion of order and distance. Experiments are conducted using our proposed
algorithms as well as baselines to validate our approach and also surface
nuanced concerns when group membership is not known deterministically.
|
We rewrite the time-dependent Schr\"odinger equation using only three
dimensional vector algebra, without introducing any complex numbers. We
show that this equation leads to the same conclusions as the "complex
version" for the hydrogen atom and the harmonic oscillator. We also show
that this equation can be written as a Maxwell-Amp\`ere equation.
|
Within particle physics itself, Gauguin's questions may be interpreted as: P1
- What is the status of the Standard Model? P2 - What physics may lie beyond
the Standard Model? P3 - What is the `Theory of Everything'? Gauguin's
questions may also be asked within a cosmological context: C1 - What were the
early stages of the Big Bang? C2 - What is the material content of the Universe
today? C3 - What is the future of the Universe? In this talk I preview many of
the topics to be discussed in the plenary sessions of this conference,
highlighting how they bear on these fundamental questions.
|
The study of human mobility patterns is of both theoretical and practical
value in many respects. For long-distance travel, a few research endeavors
have shown that the displacements of human trips follow a power-law
distribution. However, controversy remains over the scaling law of
human mobility in intra-urban areas. In this work we focus on the mobility
pattern of taxi passengers by examining five datasets from the three
metropolises of New York, Dalian and Nanjing. Through statistical analysis, we
find that the lognormal distribution with a power-law tail can best approximate
both the displacement and the duration time of taxi trips, as well as the
vacant time of taxicabs, in all the examined cities. The universality of
scaling law of human mobility is subsequently discussed, in accordance with the
data analytics.
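The fitted form can be sketched as a lognormal body stitched continuously to a power-law tail at a crossover x_c; the sketch below normalizes the density numerically and reads off the tail mass. All parameter values are invented for illustration, not the values fitted to the taxi data:

```python
import numpy as np

MU, SIGMA, X_C, BETA = 1.0, 0.8, 10.0, 3.0   # illustrative, not fitted

def lognormal_pdf(x):
    x = np.asarray(x, dtype=float)
    return np.exp(-(np.log(x) - MU) ** 2 / (2 * SIGMA ** 2)) / (
        x * SIGMA * np.sqrt(2 * np.pi))

# Continuity at the crossover fixes the tail prefactor C.
C = float(lognormal_pdf(X_C)) * X_C ** BETA

def pdf(x):
    """Lognormal body for x < X_C, power-law tail C * x^-BETA beyond."""
    x = np.asarray(x, dtype=float)
    return np.where(x < X_C, lognormal_pdf(x), C * x ** -BETA)

# Normalize numerically on a wide grid (plain Riemann sum).
xs = np.linspace(1e-3, 1e4, 200_001)
dx = xs[1] - xs[0]
Z = float(np.sum(pdf(xs)) * dx)
tail_mass = float(np.sum(pdf(xs[xs >= X_C])) * dx) / Z
```

With these toy parameters the density is close to normalized already, and only a few percent of the probability mass sits in the power-law tail; a fit to real trip data would estimate MU, SIGMA, X_C and BETA jointly.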
|
Backpropagation through time (BPTT) is a technique for updating the trainable
parameters of recurrent neural networks (RNNs). Several variants of the
algorithm have been proposed, including Nth Ordered Approximations
and Truncated-BPTT. These methods approximate the backpropagation gradients
under the assumption that the RNN only utilises short-term dependencies. This
is an acceptable assumption to make for the current state of artificial neural
networks. As RNNs become more advanced, a shift towards influence by long-term
dependencies is likely. Thus, a new method for backpropagation is required. We
propose using the 'discrete forward sensitivity equation' and a variant of it
for single and multiple interacting recurrent loops, respectively. This solution
is exact and also allows the network's parameters to vary between
subsequent steps; however, it does require the computation of a Jacobian.
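For a scalar toy RNN the forward-sensitivity idea can be made concrete: propagate the sensitivity s_t = dh_t/dw alongside the state instead of backpropagating through time. The recurrence h_t = tanh(w*h_{t-1} + u*x_t), the inputs, and the parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import math

def run(w, u, xs):
    """Run the toy RNN and its forward sensitivity s_t = dh_t/dw."""
    h, s = 0.0, 0.0
    for x in xs:
        a = w * h + u * x
        da = 1.0 - math.tanh(a) ** 2      # tanh'(a)
        s = da * (h + w * s)              # forward sensitivity recursion
        h = math.tanh(a)
    return h, s

xs = [0.3, -0.2, 0.5, 0.1]
w, u = 0.7, 1.1
h, dh_dw = run(w, u, xs)

# Sanity check against a central finite difference.
eps = 1e-6
fd = (run(w + eps, u, xs)[0] - run(w - eps, u, xs)[0]) / (2 * eps)
```

In the scalar case the "Jacobian" mentioned in the abstract collapses to the single factor tanh'(a); for vector states it becomes a full state-Jacobian carried forward at each step.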
|
We study the tensor product $W$ of any number of "elementary" irreducible
modules $V_1,...,V_k$ over the Yangian of the general linear Lie algebra. Each
of these modules is determined by a skew Young diagram and a complex parameter.
For any indices $i,j=1,...,k$ there is a canonical non-zero intertwining
operator $A_{ij}$ between the tensor products $V_i\otimes V_j$ and $V_j\otimes
V_i$. This operator is defined up to a scalar multiplier. We show that the
tensor product $W$ is irreducible, if and only if all operators $A_{ij}$ with
$i<j$ are invertible. This implies that the Yangian module $W$ is irreducible,
if and only if all pairwise tensor products $V_i\otimes V_j$ with $i<j$ are
irreducible. We also introduce the notion of a Durfee rank of a skew Young
diagram. For an ordinary Young diagram, this is the length of its main
diagonal.
|
The vast majority of well studied giant-planet systems, including the Solar
System, are nearly coplanar which implies dissipation within a primordial gas
disk. However, intrinsic instability may lead to planet-planet scattering,
which often produces non-coplanar, eccentric orbits. Planet scattering theories
have been developed to explain observed high eccentricity systems and also hot
Jupiters; thus far their predictions for mutual inclination (I) have barely
been tested. Here we characterize a highly mutually-inclined (I ~ 15-60
degrees), moderately eccentric (e >~ 0.1) giant planet system: Kepler-108. This
system consists of two approximately Saturn-mass planets with periods of ~49
and ~190 days around a star with a wide (~300AU) binary companion in an orbital
configuration inconsistent with a purely disk migration origin.
|
This paper examines how the circumgalactic medium (CGM) evolves as a function
of time by comparing results from different absorption-line surveys that have
been conducted in the vicinities of galaxies at different redshifts. Despite
very different star formation properties of the galaxies considered in these
separate studies and different intergalactic radiation fields at redshifts
between z~2.2 and z~0, I show that both the spatial extent and mean absorption
equivalent width of the CGM around galaxies of comparable mass have changed
little over this cosmic time interval.
|
Let G be a connected, simply connected Poisson-Lie group with quasitriangular
Lie bialgebra g. An explicit description of the double D(g) is given, together
with the embeddings of g and g^*. This description is then used to provide a
construction of the double D(G). The aim of this work is to describe D(G) in
sufficient detail to be able to apply the procedures of Semenov-Tian-Shansky
and Drinfeld for the classification of symplectic leaves and Poisson
homogeneous spaces for Poisson-Lie groups.
|
We present a comprehensive study of the static properties of a mobile
impurity interacting with a bath with a few particles trapped in a
one-dimensional harmonic trap. We consider baths with either identical bosons
or distinguishable particles and we focus on the limiting case where the bath
is non-interacting. We provide numerical results for the energy spectra and
density profiles by means of the exact diagonalization of the Hamiltonian, and
find that these systems show non-trivial solutions, even in the limit of
infinite repulsion. A detailed physical interpretation is provided for the
lowest energy states. In particular, we find a seemingly universal transition
from the impurity being localized in the center of the trap to being expelled
outside the majority cloud. We also develop an analytical ansatz and a
mean-field solution to compare them with our numerical results in limiting
configurations.
|
The transiting exoplanet WASP-18b was discovered in 2008 by the Wide Angle
Search for Planets (WASP) project. The Spitzer Exoplanet Target of Opportunity
Program observed secondary eclipses of WASP-18b using Spitzer's Infrared Array
Camera (IRAC) in the 3.6 micron and 5.8 micron bands on 2008 December 20, and
in the 4.5 micron and 8.0 micron bands on 2008 December 24. We report eclipse
depths of 0.30 +/- 0.02%, 0.39 +/- 0.02%, 0.37 +/- 0.03%, 0.41 +/- 0.02%, and
brightness temperatures of 3100 +/- 90, 3310 +/- 130, 3080 +/- 140 and 3120 +/-
110 K in order of increasing wavelength. WASP-18b is one of the hottest planets
yet discovered - as hot as an M-class star. The planet's pressure-temperature
profile most likely features a thermal inversion. The observations also require
WASP-18b to have near-zero albedo and almost no redistribution of energy from
the day side to the night side of the planet.
|
Last-mile routing refers to the final step in a supply chain, delivering
packages from a depot station to the homes of customers. At the level of a
single van driver, the task is a traveling salesman problem. But the choice of
route may be constrained by warehouse sorting operations, van-loading
processes, driver preferences, and other considerations, rather than a
straightforward minimization of tour length. We propose a simple and efficient
penalty-based local-search algorithm for route optimization in the presence of
such constraints, adopting a technique developed by Helsgaun to extend the LKH
traveling salesman problem code to general vehicle-routing models. We apply his
technique to handle combinations of constraints obtained from an analysis of
historical routing data, enforcing properties that are desired in high-quality
solutions. Our code is available under the open-source MIT license. An earlier
version of the code received the $100,000 top prize in the Amazon Last Mile
Routing Research Challenge organized in 2021.
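The penalty idea can be illustrated with a toy 2-opt search: constraint violations are priced into the objective, so ordinary improving moves steer the tour toward feasibility. The random instance, the single precedence constraint, and the penalty weight below are invented for illustration; this is not the LKH-based code itself:

```python
import math
import random

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(12)]  # toy stops
A_BEFORE_B = (2, 7)   # invented side constraint: visit stop 2 before stop 7
PENALTY = 10.0        # larger than any possible 2-opt gain on this instance

def cost(tour):
    """Closed-tour length plus a penalty if the constraint is violated."""
    length = sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    a, b = A_BEFORE_B
    return length + (PENALTY if tour.index(a) > tour.index(b) else 0.0)

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # 2-opt move
                if cost(cand) < cost(tour) - 1e-12:
                    tour, improved = cand, True
    return tour

best = two_opt(list(range(12)))
```

Because the penalty exceeds any single-move gain, the local search never trades feasibility for length; real penalty-based solvers tune such weights per constraint class.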
|
We provide new general methods in the calculus of variations for the
anisotropic Plateau problem in arbitrary dimension and codimension. A new
direct proof of Almgren's 1968 existence result is presented; namely, we
produce from a class of competing "surfaces," which span a given bounding set
in some ambient space, one with minimal anisotropically weighted area. In
particular, rectifiability of a candidate minimizer is proved without the
assumption of quasiminimality. Our ambient spaces are a class of Lipschitz
neighborhood retracts which includes manifolds with boundary and manifolds with
certain singularities. Our competing surfaces are rectifiable sets which
satisfy any combination of general homological, cohomological or homotopical
spanning conditions. An axiomatic spanning criterion is also provided. Our
boundaries are permitted to be arbitrary closed subsets of the ambient space,
providing a good setting for surfaces with sliding boundaries.
|
Spectral Graph Convolutional Networks (GCNs) are a generalization of
convolutional networks to learning on graph-structured data. Applications of
spectral GCNs have been successful, but limited to a few problems where the
graph is fixed, such as shape correspondence and node classification. In this
work, we address this limitation by revisiting a particular family of spectral
graph networks, Chebyshev GCNs, showing its efficacy in solving graph
classification tasks with a variable graph structure and size. Chebyshev GCNs
restrict graphs to have at most one edge between any pair of nodes. To this
end, we propose a novel multigraph network that learns from multi-relational
graphs. We model learned edges with abstract meaning and experiment with
different ways to fuse the representations extracted from annotated and learned
edges, achieving competitive results on a variety of chemical classification
benchmarks.
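The core operation of a Chebyshev GCN layer can be sketched directly: features are filtered with Chebyshev polynomials T_k of the rescaled normalized Laplacian, built with the usual three-term recursion. The 4-cycle graph, the feature sizes, K = 3, and the lambda_max ~ 2 rescaling are illustrative assumptions:

```python
import numpy as np

def cheb_conv(A, X, Theta):
    """Chebyshev graph convolution: sum_k T_k(L_tilde) X Theta[k].

    A: (n, n) adjacency, X: (n, f_in) features, Theta: (K, f_in, f_out).
    """
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    L_tilde = L - np.eye(n)          # rescale, assuming lambda_max ~ 2
    Tx_prev, Tx = X, L_tilde @ X     # T_0(L)X and T_1(L)X
    out = Tx_prev @ Theta[0]
    if len(Theta) > 1:
        out = out + Tx @ Theta[1]
    for k in range(2, len(Theta)):
        # Chebyshev recursion: T_k = 2 L_tilde T_{k-1} - T_{k-2}.
        Tx_prev, Tx = Tx, 2.0 * (L_tilde @ Tx) - Tx_prev
        out = out + Tx @ Theta[k]
    return out

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # a 4-cycle (illustrative)
X = rng.normal(size=(4, 3))
Theta = rng.normal(size=(3, 3, 2))          # K = 3 filter coefficients
Y = cheb_conv(A, X, Theta)
```

The multigraph extension in the abstract amounts to running such filters over several adjacency matrices (annotated and learned edges) and fusing the resulting feature maps.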
|
For a prime $p\equiv 3\pmod 4$ and a positive integer $t$, let $q=p^{2t}$.
The Peisert graph of order $q$ is the graph with vertex set $\mathbb{F}_q$ such
that $ab$ is an edge if $a-b\in\langle g^4\rangle\cup g\langle g^4\rangle$,
where $g$ is a primitive element of $\mathbb{F}_q$. In this paper, we construct
a similar graph with vertex set as the commutative ring $\mathbb{Z}_n$ for
suitable $n$, which we call \textit{Peisert-like} graph and denote by
$G^\ast(n)$. Owing to the need for cyclicity of the group of units of
$\mathbb{Z}_n$, we consider $n=p^\alpha$ or $2p^\alpha$, where $p\equiv 1\pmod
4$ is a prime and $\alpha$ is a positive integer. For primes $p\equiv 1\pmod
8$, we compute the number of triangles in the graph $G^\ast(p^{\alpha})$ by
evaluating certain character sums. Next, we study cliques of order 4 in
$G^\ast(p^{\alpha})$. To find the number of cliques of order $4$ in
$G^\ast(p^{\alpha})$, we first introduce hypergeometric functions containing
Dirichlet characters as arguments, and then express the number of cliques of
order $4$ in $G^\ast(p^{\alpha})$ in terms of these hypergeometric functions.
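For small moduli the definitions above are easy to check by brute force. The sketch below builds the graph for n = p = 17 (so p = 1 mod 8) with primitive root g = 3 and counts triangles directly; such a brute-force count is a handy sanity check for the character-sum formulas (the prime case only, not the full Z_{p^alpha} construction):

```python
from itertools import combinations

def peisert_like_edges(p, g):
    """Edge set of the Peisert-like graph on Z_p (p prime, p = 1 mod 8)."""
    quartic = {pow(g, 4 * k, p) for k in range((p - 1) // 4)}   # <g^4>
    S = quartic | {(g * q) % p for q in quartic}                # <g^4> u g<g^4>
    # S = -S here, so "a - b in S" defines an undirected graph.
    assert all((p - s) % p in S for s in S)
    return {frozenset((a, b)) for a in range(p) for b in range(p)
            if a != b and (a - b) % p in S}

def count_triangles(p, edges):
    return sum(1 for t in combinations(range(p), 3)
               if all(frozenset(e) in edges for e in combinations(t, 2)))

edges = peisert_like_edges(17, 3)   # g = 3 is a primitive root mod 17
n_triangles = count_triangles(17, edges)
```

The graph is a circulant, so every vertex lies on the same number of triangles and the total count is divisible by p/gcd(p, 3) = 17 here.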
|
In a typical video conferencing setup, it is hard to maintain eye contact
during a call since it requires looking into the camera rather than the
display. We propose an eye contact correction model that restores the eye
contact regardless of the relative position of the camera and display. Unlike
previous solutions, our model redirects the gaze from an arbitrary direction to
the center without requiring a redirection angle or camera/display/user
geometry as inputs. We use a deep convolutional neural network that inputs a
monocular image and produces a vector field and a brightness map to correct the
gaze. We train this model in a bi-directional way on a large set of
synthetically generated photorealistic images with perfect labels. The learned
model is a robust eye contact corrector which also predicts the input gaze
implicitly at no additional cost. Our system is primarily designed to improve
the quality of video conferencing experience. Therefore, we use a set of
control mechanisms to prevent creepy results and to ensure a smooth and natural
video conferencing experience. The entire eye contact correction system runs
end-to-end in real-time on a commodity CPU and does not require any dedicated
hardware, making our solution feasible for a variety of devices.
|
Constructing physical models of living cells and tissues is an extremely
challenging task because of the high complexities of both intra- and
intercellular processes. In addition, the force that a single cell generates
vanishes in total due to the law of action and reaction. The typical mechanics
of cell crawling involve periodic changes in the cell shape and in the adhesion
characteristics of the cell to the substrate. However, the basic physical
mechanisms by which a single cell coordinates these processes cooperatively to
achieve autonomous migration are not yet well understood. To obtain a clearer
grasp of how the intracellular force is converted to directional motion, we
develop a basic mechanochemical model of a crawling cell based on subcellular
elements with the focus on the dependence of the protrusion and contraction as
well as the adhesion and deadhesion processes on intracellular biochemical
signals. By introducing reaction-diffusion equations that reproduce traveling
waves of local chemical concentrations, we clarify that the chemical dependence
of the cell-substrate adhesion dynamics determines the crawling direction and
distance with one chemical wave. Finally, we also perform multipole analysis of
the traction force to compare it with the experimental results. To our
knowledge, our present work is the first study that accomplishes fully
force-free migration utilizing intracellular chemical reactions. Although the
detailed mechanisms of actual cells are far more complicated than our simple
model, we believe that this mechanochemical model is a good prototype for more
realistic models.
|
Let $F$ be a number field, $\pi$ either a unitary cuspidal automorphic
representation of $\mathrm{GL}(2)/F$ or a unitary Eisenstein series, and $\chi$
a unitary Hecke character of analytic conductor $C(\chi).$ We develop a
regularized relative trace formula to prove a refined hybrid subconvex bound
for $L(1/2,\pi\times\chi).$ In particular, we obtain the Burgess subconvex
bound \begin{align*}
L(1/2,\pi\times\chi)\ll_{\pi,F,\varepsilon}C(\chi)^{\frac{1}{2}-\frac{1}{8}+\varepsilon},
\end{align*} where the implied constant depends on $\pi,$ $F$ and
$\varepsilon.$
|
The global-in-time existence of weak solutions to the barotropic compressible
quantum Navier-Stokes equations with damping is proved for large data in three
dimensional space. The model consists of the compressible Navier-Stokes
equations with degenerate viscosity, and a nonlinear third-order differential
operator, with the quantum Bohm potential, and the damping terms. The global
weak solutions to such a system are obtained by the Faedo-Galerkin method and
a compactness argument. This system is also an important approximation
of the compressible Navier-Stokes equations, and it should help in proving
the existence of global weak solutions to the compressible Navier-Stokes
equations with degenerate viscosity in three dimensional space.
|
In the first partial result toward Steinberg's now-disproved three coloring
conjecture, Abbott and Zhou used a counting argument to show that every planar
graph without cycles of lengths 4 through 11 is 3-colorable. Implicit in their
proof is a fact about plane graphs: in any plane graph of minimum degree 3, if
no two triangles share an edge, then triangles make up strictly less than 2/3
of the faces. We show how this result, combined with Kostochka and Yancey's
resolution of Ore's conjecture for k = 4, implies that every planar graph
without cycles of lengths 4 through 8 is 3-colorable.
|
A possible model of twin high-frequency QPOs (HF QPOs) of microquasars is
examined. The disk is assumed to have global magnetic fields and to be deformed
with a two-armed pattern. In this deformed disk, a set consisting of a
two-armed ($m=2$) vertical p-mode oscillation and an axisymmetric ($m=0$)
g-mode oscillation is considered. They resonantly interact through the disk
deformation when their
frequencies are the same. This resonant interaction amplifies the set of the
above oscillations in the case where these two oscillations have wave energies
of opposite signs. These oscillations are assumed to be excited most
efficiently in the case where the radial group velocities of these two waves
vanish at the same place. The above set of oscillations is not unique,
depending on the node number, $n$, of oscillations in the vertical direction.
We consider that the basic two sets of oscillations correspond to the twin
QPOs. The frequencies of these oscillations depend on disk parameters such as
strength of magnetic fields. For observational mass ranges of GRS 1915+105, GRO
J1655-40, XTE J1550-564, and H1743-322, spins of these sources are estimated.
High spins of these sources can be described if the disks have weak poloidal
magnetic fields as well as toroidal magnetic fields of moderate strength. In
this model the 3 : 2 frequency ratio of high-frequency QPOs is not related to
their excitation, but occurs by chance.
|