This is the first of a three-part paper providing full details for our
previous announcement in Pr\'epublications Orsay 2007-16, arXiv:0711.3579. Here
we prove the results stated in the title.
|
The innermost parsec around Sgr A* has been found to play host to two discs
or streamers of O and W-R stars. They are misaligned by an angle approaching 90
degrees. That the stars are approximately coeval indicates that they formed in
the same event rather than independently. We have performed SPH simulations of
the infall of a single prolate cloud towards a massive black hole. As the cloud
is disrupted, the large spread in angular momentum can, if conditions allow,
lead to the creation of misaligned gas discs. In turn, stars may form within
those discs. We are now investigating the origins of these clouds in the
Galactic Centre (GC) region.
|
Let $G$ be a finite group. There is a natural Galois correspondence between
the permutation groups containing $G$ as a regular subgroup, and the Schur
rings (S-rings) over~$G$. The problem we deal with in this paper is to
characterize those S-rings that are closed under this correspondence, when the
group $G$ is cyclic (the schurity problem for circulant S-rings). It is proved
that up to a natural reduction, the characteristic property of such an S-ring
is to be a certain algebraic fusion of its coset closure introduced and studied
in the paper. Based on this characterization, we show that the schurity problem is equivalent to the consistency of a modular linear system associated with the circulant S-ring under consideration. As a byproduct, we show that a circulant
S-ring is Galois closed if and only if so is its dual.
|
We study a one-dimensional Anderson model in which one site interacts with a
detector monitoring the occupation of that site. We demonstrate that such an
interaction, no matter how weak, leads to total delocalization of the Anderson
model, and we discuss the experimental consequences.
|
Data grid replication is an effective method to achieve efficient and fault-tolerant data access while reducing access latency and bandwidth consumption in grids. Since storage is limited, a replica should be created at the best site. Evaluating previously suggested algorithms, we find that blindly creating replicas at different sites after each request can improve response time. In practice, however, most of the created replicas will never be used, and existing grid resources are wasted on their creation. In this paper, we propose a new dynamic replication algorithm called Predictive Fuzzy Replication (PFR). PFR not only redefines the Balanced Ant Colony Optimization (BACO) algorithm, originally used for job scheduling in grids, but also uses it to place replicas at appropriate sites in the data grid. The new algorithm considers the usage history of files, file sizes, the level of the sites, and the free space available for replication; it tries to predict future needs and pre-replicates files at the most suitable resources, or decides which replica should be deleted when there is not enough space for replication. The algorithm also considers the files related to a replicated file and replicates them according to their own history. PFR is more efficient than the Cascading method, one of the established algorithms for the optimized use of existing replicas.
|
Partial transposition of the state operator is a well-known tool to detect quantum correlations between two parts of a composite system. In this letter, the global partial transpose (GPT) is linked to underlying, conceptually multipartite structures in a state - the negativity fonts. If K-way negativity fonts with nonzero determinants exist, then selective partial transposition of a pure state, involving K of the N qubits (K ≤ N), yields an operator with negative eigenvalues, identifying K-body correlations in the state. Expansion of the GPT in terms of K-way partially transposed (KPT) operators reveals the nature of intricate intrinsic correlations in the state. Classification criteria for multipartite entangled states, based on the underlying structure of the global partial transpose of the canonical state, are proposed. The number of N-partite entanglement types for an N-qubit system is found to be 2^{N-1}-N+2, while the number of major entanglement classes is 2^{N-1}-1. Major classes for three- and four-qubit states are listed. Subclasses are determined by the number and type of negativity fonts in the canonical state.
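The two counting formulas above are easy to check numerically; a minimal Python sketch, with the formulas taken verbatim from the text:

```python
# Counting formulas quoted above: for an N-qubit system,
#   number of N-partite entanglement types = 2^(N-1) - N + 2
#   number of major entanglement classes   = 2^(N-1) - 1
def entanglement_counts(n_qubits: int) -> tuple[int, int]:
    types = 2 ** (n_qubits - 1) - n_qubits + 2
    major_classes = 2 ** (n_qubits - 1) - 1
    return types, major_classes

for n in (3, 4, 5):
    t, c = entanglement_counts(n)
    print(f"N={n}: {t} N-partite entanglement types, {c} major classes")
# N=3 -> 3 types, 3 major classes; N=4 -> 6 types, 7 major classes
```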
|
Bethe ansatz equations have been proposed for the asymptotic spectral problem
of AdS_4/CFT_3. This proposal assumes integrability, but the previous
verification of weak-coupling integrability covered only the su(4) sector of
the ABJM gauge theory. Here we derive the complete planar two-loop dilatation
generator of N=6 superconformal Chern-Simons theory from osp(6|4)
superconformal symmetry. For the osp(4|2) sector, we prove integrability
through a Yangian construction. We argue that integrability extends to the full
planar two-loop dilatation generator, confirming the applicability of the Bethe
equations at weak coupling. Further confirmation follows from an analytic
computation of the two-loop twist-one spectrum.
|
We consider the elementary radiative-correction terms in loop quantum
gravity. These are a two-vertex "elementary bubble" and a five-vertex "ball";
they correspond to the one-loop self-energy and the one-loop vertex correction
of ordinary quantum field theory. We compute their naive degree of (infrared)
divergence.
|
In this paper, we examine a ready-to-use, robust, and computationally fast fixed-size memory pool manager with no loops and no memory overhead that is highly suited to time-critical systems such as games. The algorithm achieves this by exploiting the unused memory slots for bookkeeping, in combination with a trouble-free indexing scheme. We explain how it works with straightforward step-by-step examples. Furthermore, we
compare just how much faster the memory pool manager is when compared with a
system allocator (e.g., malloc) over a range of allocations and sizes.
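The bookkeeping trick described above (storing the free-list links inside the unused slots themselves) can be sketched as follows. This is an illustrative Python model of the idea, not the paper's implementation, which would normally operate on raw memory in C/C++:

```python
class FixedPool:
    """Fixed-size pool: each free slot stores the index of the next free
    slot, so no separate free list is needed (zero memory overhead)."""
    def __init__(self, num_slots: int):
        self.slots = [None] * num_slots   # stands in for a raw memory block
        self.free_head = 0                # index of the first free slot
        self.num_initialized = 0          # lazy init: slots linked on demand

    def allocate(self) -> int:
        if self.num_initialized < len(self.slots):
            # Lazily link the next untouched slot into the free list.
            self.slots[self.num_initialized] = self.num_initialized + 1
            self.num_initialized += 1
        if self.free_head >= len(self.slots):
            raise MemoryError("pool exhausted")
        index = self.free_head
        self.free_head = self.slots[index]  # pop the head of the free list
        return index

    def free(self, index: int) -> None:
        self.slots[index] = self.free_head  # push the slot back on the list
        self.free_head = index

pool = FixedPool(4)
a, b = pool.allocate(), pool.allocate()
pool.free(a)
assert pool.allocate() == a  # a freed slot is reused first
```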
|
We derive a new version of SU(3) non-Abelian Stokes theorem by making use of
the coherent state representation on the coset space $SU(3)/(U(1)\times
U(1))=F_2$, the flag space. Then we outline a derivation of the area law of the
Wilson loop in SU(3) Yang-Mills theory in the maximal Abelian gauge (The
detailed exposition will be given in a forthcoming article). This derivation is
performed by combining the non-Abelian Stokes theorem with the reformulation of
the Yang-Mills theory as a perturbative deformation of a topological field
theory recently proposed by one of the authors. Within this framework, we show
that the fundamental quark is confined even if $G=SU(3)$ is broken by partial
gauge fixing into $H=U(2)$ just as $G$ is broken to $H=U(1) \times U(1)$. An
origin of the area law is related to the geometric phase of the Wilczek-Zee
holonomy for U(2). Abelian dominance is an immediate byproduct of these results, and the magnetic monopole plays the dominant role in this derivation.
|
We construct actions for (p,0)- and (p,1)- supersymmetric, 1 <= p <= 4,
two-dimensional gauge theories coupled to non-linear sigma model matter with a
Wess-Zumino term. We derive the scalar potential for a large class of these
models. We then show that the Euclidean actions of the (2,0) and
(4,0)-supersymmetric models without Wess-Zumino terms are bounded by
topological charges which involve the equivariant extensions of the Kahler
forms of the sigma model target spaces evaluated on the two-dimensional
spacetime. We give similar bounds for Euclidean actions of appropriate gauge
theories coupled to non-linear sigma model matter in higher spacetime
dimensions which now involve the equivariant extensions of the Kahler forms of
the sigma model target spaces and the second Chern character of gauge fields.
The BPS configurations are generalisations of abelian and non-abelian vortices.
|
The scattering amplitude of polarized nucleons has been found within the framework of the Klein-Gordon equation with a phenomenological spin-orbit potential. It has a Glauber-type representation. The differential cross sections of polarized nucleons are considered and discussed. The Yukawa potential is applied to this problem to determine the polarization of high-energy scattered nucleons.
|
Given an existing system learned from previous source domains, it is desirable in some applications to adapt the system to new domains without accessing, and without forgetting, all the previous domains. This problem is known as domain
expansion. Unlike traditional domain adaptation in which the target domain is
the domain defined by new data, in domain expansion the target domain is formed
jointly by the source domains and the new domain (hence, domain expansion) and
the label function to be learned must work for the expanded domain.
Specifically, this paper presents a method for unsupervised multi-source domain
expansion (UMSDE) where only the pre-learned models of the source domains and
unlabelled new domain data are available. We propose to use the predicted class
probability of the unlabelled data in the new domain produced by different
source models to jointly mitigate the biases among domains, exploit the
discriminative information in the new domain, and preserve the performance in
the source domains. Experimental results on the VLCS, ImageCLEF_DA and PACS
datasets have verified the effectiveness of the proposed method.
|
According to Jerne's idiotypic network hypothesis, the adaptive immune system
is regulated by interactions between the variable regions of antibodies, B
cells, and T cells [1]. The symmetrical immune network theory [2,3] is based on
Jerne's hypothesis, and provides a basis for understanding many of the
phenomena of adaptive immunity. The theory includes the postulate that the
repertoire of serum IgG molecules is regulated by T cells, with the result that
IgG molecules express V region determinants that mimic V region determinants
present on suppressor T cells. In this paper we describe rapid binding between
purified murine serum IgG of H-2b and H-2d mice and serum IgG from the same
strain and from MHC-matched mice, but not between serum IgG preparations of
mice with different MHC genes. We interpret this surprising finding in terms of
a model in which IgG molecules are selected to have both anti-anti-(self MHC
class II) and anti-anti-anti-(self MHC class II) specificity.
|
We study classical and quantum aspects of D=4, N=2 BPS black holes for T_2 compactification of D=6, N=1 heterotic string vacua. We extend dynamical relaxation phenomena of moduli fields to backgrounds consisting of a BPS soliton or a black hole, and provide a simpler but more general derivation of the Ferrara-Kallosh extremized black hole mass and entropy. We study quantum effects on the BPS black hole mass spectra and on their dynamical relaxation. We show that, despite the non-renormalizability of the string effective supergravity, quantum effects modify the BPS mass spectra only through coupling constant and moduli field renormalizations. Based on target-space duality, we establish a perturbative non-renormalization theorem and obtain the exact BPS black hole mass and entropy in terms of the renormalized string loop-counting parameter and renormalized moduli fields. We show that a similar conclusion holds, in the large T_2 limit, for the leading non-perturbative correction. We finally discuss implications for type-I and type-IIA Calabi-Yau black holes.
|
We optimize the running time of primal-dual algorithms for solving convex optimization problems under affine equality constraints by optimizing their stopping criteria, that is, terminating the algorithm earlier with fewer iterations. We study the relations between four stopping criteria and show under which conditions they accurately detect optimal solutions: the uncomputable one, the ''Optimality gap and Feasibility error'', and the computable ones, the ''Karush-Kuhn-Tucker error'', the ''Projected Duality Gap'', and the ''Smoothed Duality Gap''. Assuming metric sub-regularity or a quadratic error
bound, we establish that all of the computable criteria provide practical upper
bounds for the optimality gap, and approximate it effectively. Furthermore, we
establish comparability between some of the computable criteria under certain
conditions. Numerical experiments on basis pursuit, and quadratic programs
with(out) non-negative weights corroborate these findings and show the superior
stability of the smoothed duality gap over the rest.
|
Meta-learning can extract an inductive bias from previous learning experience
and assist the training of new tasks. It is often realized through optimizing a
meta-model with the evaluation loss of task-specific solvers. Most existing
algorithms sample non-overlapping $\mathit{support}$ sets and $\mathit{query}$
sets to train and evaluate the solvers respectively due to simplicity
($\mathcal{S}$/$\mathcal{Q}$ protocol). Different from
$\mathcal{S}$/$\mathcal{Q}$ protocol, we can also evaluate a task-specific
solver by comparing it to a target model $\mathcal{T}$, which is the optimal
model for this task or a model that behaves well enough on this task
($\mathcal{S}$/$\mathcal{T}$ protocol). Although under-explored, the $\mathcal{S}$/$\mathcal{T}$ protocol has unique advantages, such as offering more informative supervision, but it is computationally expensive. This paper looks into this special evaluation method and takes a step towards putting it into practice. We find that, with a small ratio of tasks armed with target models, classic meta-learning algorithms can be improved substantially without consuming many resources. We empirically verify the effectiveness of
$\mathcal{S}$/$\mathcal{T}$ protocol in a typical application of meta-learning,
$\mathit{i.e.}$, few-shot learning. In detail, after constructing target models
by fine-tuning the pre-trained network on those hard tasks, we match the
task-specific solvers and target models via knowledge distillation.
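The matching step mentioned last, distilling a target model into a task-specific solver, can be illustrated with a minimal sketch; the temperature, loss weighting, and combination with a hard-label term here are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def distillation_loss(solver_logits, target_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-label KL against the target model plus hard-label CE.
    `temperature` and `alpha` are illustrative hyperparameters."""
    soft = F.kl_div(
        F.log_softmax(solver_logits / temperature, dim=-1),
        F.softmax(target_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(solver_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage: solver_logits come from the task-specific solver and target_logits
# from the fine-tuned target model, both evaluated on the same task inputs.
```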
|
This work presents our approach to train a neural network to detect
hate-speech texts in Hindi and Bengali. We also explore how transfer learning
can be applied to learning these languages, given that they have the same
origin and thus are similar to some extent. Even though the whole experiment
was conducted with low computational power, the obtained result is comparable
to the results of other, more expensive, models. Furthermore, since the
training data in use is relatively small and the two languages are almost
entirely unknown to us, this work can be generalized as an effort to demystify
lost or alien languages that no human is capable of understanding.
|
Angular momentum loss by the plasma wind is considered a universal feature
of isolated neutron stars including magnetars. The wind nebulae powered by
magnetars allow us to compare the wind properties and the spin-evolution of
magnetars with those of rotation-powered pulsars (RPPs). In this paper, we
construct a broadband emission model of magnetar wind nebulae (MWNe). The model
is similar to past studies of young pulsar wind nebulae (PWNe) around RPPs, but
is modified for the application to MWNe that have far less observational
information than the young PWNe. We apply the model to the MWN around the
youngest ($\sim$1 kyr) magnetar 1E 1547.0-5408, which has the largest spin-down power $L_{\rm spin}$ among all the magnetars. Even so, the MWN is faint, because $L_{\rm spin}$ of 1E 1547.0-5408 is low compared with that of the young RPPs. Since most of the parameters are not well constrained by an X-ray flux upper limit of the MWN alone, we adopt the model parameters from the young PWN Kes 75 around PSR J1846-0258, a peculiar RPP showing magnetar-like behaviors. The model predicts a $\gamma$-ray flux that would be detectable in a future TeV $\gamma$-ray observation by {\it CTA}. The MWN spectrum does not allow us to test the hypothesis that 1E 1547.0-5408 had a millisecond period at birth, because the particles injected in the early phase of evolution suffer severe adiabatic and synchrotron losses. Further observational and theoretical studies of the wind nebulae around magnetars are required to constrain the wind and spin-down properties of magnetars.
|
For any primitive proper substitution \sigma, we give explicit constructions
of countably many pairwise non-isomorphic substitution dynamical systems
{(X_{\zeta_n}, T_{\zeta_n})}_{n=1}^{\infty} such that they all are (strong)
orbit equivalent to (X_{\sigma}, T_{\sigma}). We show that the complexities of the substitution dynamical systems {(X_{\zeta_n}, T_{\zeta_n})} are essentially different, which prevents them from being isomorphic. Given a primitive (not
necessarily proper) substitution \tau, we find a stationary simple properly
ordered Bratteli diagram with the least possible number of vertices such that
the corresponding Bratteli-Vershik system is orbit equivalent to (X_{\tau},
T_{\tau}).
|
This investigation examined the relationships among scene complexity,
workload, presence, and cybersickness in virtual reality (VR) environments.
Numerous factors can influence the overall VR experience, and existing research
on this matter is not yet conclusive, warranting further investigation. In this
between-subjects experimental setup, 44 participants engaged in the Pendulum
Chair game, with half exposed to a simple scene with lower optic flow and lower
familiarity, and the remaining half to a complex scene characterized by higher
optic flow and greater familiarity. The study measured the dependent variables
workload, presence, and cybersickness and analyzed their correlations.
Equivalence testing was also used to compare the simple and complex
environments. Results revealed that, despite the visible differences between the environments, a statistically significant equivalence was observed between the simple and complex scenes, within 10% of the maximum possible value for workload and presence and within 13.6% of the maximum SSQ value.
Additionally, a moderate, negative correlation emerged between workload and SSQ
scores. The findings suggest two key points: (1) the nature of the task can
mitigate the impact of scene complexity factors such as optic flow and
familiarity, and (2) the correlation between workload and cybersickness may
vary, showing either a positive or negative relationship.
|
This paper studies the problem of optimal switching for a one-dimensional diffusion, which may be regarded as a sequential optimal stopping problem with changes of regimes. The resulting dynamic programming principle leads to a system of variational inequalities, and the state space is divided into continuation regions and switching regions. By means of a viscosity solutions approach, we prove the smooth-fit $C^1$ property of the value functions.
|
Following our first article, we continue to investigate ultrametric modules over a ring of twisted polynomials of the form $[K;\varphi]$, where $\varphi$ is a ring endomorphism of $K$. The main motivation comes from the theory of valued difference fields (including characteristic $p>0$ valued fields equipped with the Frobenius endomorphism). We introduce the class of modules that we call affinely maximal and residually divisible, and we prove (relative) quantifier-elimination results. Ax-Kochen-Ershov type theorems follow. As an application, we axiomatize, as a valued module, any ultraproduct of algebraically closed valued fields $(\mathbb{F}_{p^n}(t)^{alg})_{n\in \mathbb{N}}$, of fixed characteristic $p>0$, each equipped with the morphism $x\mapsto x^{p^n}$ and with the $t$-adic valuation.
|
We present a pseudo-Newtonian potential for accretion disk modeling around rotating black holes. This potential can describe the general relativistic effects on an accretion disk. As the proper inclusion of rotation is very important at the inner edge of the disk, the potential is derived from the Kerr metric. It can reproduce all the essential properties of general relativity within a 10% error, even for rapidly rotating black holes.
|
Here we introduce a variation of the trap model of glasses based on softness, a local structural variable identified by machine learning in supercooled liquids. Softness is a particle-based quantity that reflects the local
structural environment of a particle and characterizes the energy barrier for
the particle to rearrange. As in the trap model, we treat each particle's
softness, and hence energy barrier, as evolving independently. We show that
such a model reproduces many qualitative features of softness, and therefore
makes qualitatively reasonable predictions of behaviors such as the dependence
of fragility on density in a model supercooled liquid. We also show failures of
this simple model, indicating features of the dynamics of softness that may
only be explained by correlations.
|
G\'{e}rard Watts predicted a formula for the probability in percolation that
there is both a left--right and an up--down crossing, which was later proved by
Julien Dub\'{e}dat. Here we present a simpler proof due to Oded Schramm, which
builds on Cardy's formula in a conceptually appealing way: the triple
derivative of Cardy's formula is the sum of two multi-arm densities. The
relative sizes of the two terms are computed with Girsanov conditioning. The
triple integral of one of the terms is equivalent to Watts' formula. For the
relevant calculations, we present and annotate Schramm's original (and
remarkably elegant) Mathematica code.
|
We developed a unified mesoscopic transport model for graphene nanoribbons,
which combines the non-equilibrium Green's function (NEGF) formalism with the
real-space {\pi}-orbital model. Based on this model, we probe the spatial
distributions of electrons under a magnetic field, in order to obtain insights
into the various signature Hall effects in disordered armchair graphene
nanoribbons (AGNR). In the presence of a uniform perpendicular magnetic field
(B\perp-field), a perfect AGNR shows three distinct spatial current profiles at
equilibrium, depending on its width. Under non-equilibrium conditions (i.e. in
the presence of an applied bias), the net electron flow is restricted to the
edges and occurs in opposite directions depending on whether the Fermi level
lies within the valence or conduction band. For electrons at energy level below
the conduction window, the B\perp-field gives rise to local electron flux
circulation, although the global flux is zero. Our study also reveals the
suppression of electron backscattering as a result of the edge transport induced by the B\perp-field. This phenomenon can potentially mitigate the undesired effects of disorder, such as bulk and edge vacancies, on the transport properties of AGNRs. Lastly, we show that the effect of the B\perp-field on electronic transport is less significant in multimode than in single-mode electron transport.
|
Models for genome-wide prediction and association studies usually target a
single phenotypic trait. However, in animal and plant genetics it is common to
record information on multiple phenotypes for each individual that will be
genotyped. Modeling traits individually disregards the fact that they are most
likely associated due to pleiotropy and shared biological basis, thus providing
only a partial, confounded view of genetic effects and phenotypic interactions.
In this paper we use data from a Multiparent Advanced Generation Inter-Cross
(MAGIC) winter wheat population to explore Bayesian networks as a convenient
and interpretable framework for the simultaneous modeling of multiple
quantitative traits. We show that they are equivalent to multivariate genetic
best linear unbiased prediction (GBLUP), and that they are competitive with
single-trait elastic net and single-trait GBLUP in predictive performance.
Finally, we discuss their relationship with other additive-effects models and
their advantages in inference and interpretation. MAGIC populations provide an
ideal setting for this kind of investigation because the very low population
structure and large sample size result in predictive models with good power and
limited confounding due to relatedness.
|
We establish the existence of positive solutions for a system of coupled fourth-order partial differential equations on a bounded domain $\Omega \subset \mathbb{R}^n$:
\begin{align*}
\left\{\begin{array}{l}
\Delta^2 u_1 + \beta_1 \Delta u_1 - \alpha_1 u_1 = f_1(x, u_1, u_2),\\
\Delta^2 u_2 + \beta_2 \Delta u_2 - \alpha_2 u_2 = f_2(x, u_1, u_2),
\end{array}\right. \qquad x \in \Omega,
\end{align*}
subject to homogeneous Navier boundary conditions, where
the functions $f_1,f_2 : \Omega\times [0,\infty)\times [0,\infty) \rightarrow
[0,\infty)$ are continuous, and $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$ are
real parameters satisfying certain constraints related to the eigenvalues of
the associated Laplace operator.
|
Dark matter particles annihilating into Standard Model fermions may be able
to explain the recent observation of a gamma-ray excess in the direction of the
Galactic Center. Recently, a hidden photon model has been proposed to explain
this signal. Supplementing this model with a dipole moment operator and a small
dark sector mass splitting allows a large cross section to a photon line while
avoiding direct detection and other constraints. Comparing the line and
continuum cross sections, we find that the line is suppressed only by the
relative scales and couplings. Given current constraints on this ratio, a line
discovery in the near future could point to a new scale Lambda ~ O(1 TeV),
where we would expect to discover new charged particles. Moreover, such a line
would also imply that dark matter can be visible in near-future direct
detection experiments.
|
Large Language Models (LLMs) have recently been shown to be effective as
automatic evaluators with simple prompting and in-context learning. In this
work, we assemble 15 LLMs of four different size ranges and evaluate their
output responses by preference ranking from the other LLMs as evaluators, such as ``System Star is better than System Square''. We then evaluate the quality of the ranking outputs by introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark to measure six different cognitive biases in LLM evaluation outputs, such as the egocentric bias, where a model prefers to rank its own outputs highly in evaluation. We find that LLMs are biased text-quality evaluators, exhibiting strong indications of bias on our benchmark (on average, 40% of comparisons across all models) within each of their evaluations, which questions their robustness as evaluators. Furthermore, we examine the
correlation between human and machine preferences and calculate the average
Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine
preferences are misaligned with human preferences. According to our findings, LLMs may not yet be suitable for automatic annotation aligned with human
preferences. Our project page is at: https://minnesotanlp.github.io/cobbler.
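Rank-Biased Overlap, used above to compare human and machine rankings, can be computed with a short sketch. This follows the standard truncated form of the metric; the persistence parameter p=0.9 and the toy lists are illustrative choices:

```python
def rank_biased_overlap(list_a, list_b, p=0.9):
    """Truncated Rank-Biased Overlap between two rankings: agreement at
    depth d (overlap of the top-d items divided by d) is weighted by
    p^(d-1), so the top of each ranking matters most."""
    depth = min(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    rbo = 0.0
    for d in range(1, depth + 1):
        seen_a.add(list_a[d - 1])
        seen_b.add(list_b[d - 1])
        agreement = len(seen_a & seen_b) / d
        rbo += (p ** (d - 1)) * agreement
    return (1 - p) * rbo

print(rank_biased_overlap(["star", "square", "circle"],
                          ["square", "star", "circle"]))  # ~0.171
```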
|
Document-level entity-based extraction (EE), aiming at extracting
entity-centric information such as entity roles and entity relations, is key to
automatic knowledge acquisition from text corpora for various domains. Most
document-level EE systems build extractive models, which struggle to model
long-term dependencies among entities at the document level. To address this
issue, we propose a generative framework for two document-level EE tasks:
role-filler entity extraction (REE) and relation extraction (RE). We first
formulate them as a template generation problem, allowing models to efficiently
capture cross-entity dependencies, exploit label semantics, and avoid the
exponential computation complexity of identifying N-ary relations. A novel
cross-attention guided copy mechanism, TopK Copy, is incorporated into a
pre-trained sequence-to-sequence model to enhance the capabilities of
identifying key information in the input document. Experiments on the MUC-4 and SciREX datasets show new state-of-the-art results on REE (+3.26%),
binary RE (+4.8%), and 4-ary RE (+2.7%) in F1 score.
|
DPMJET samples hadron-hadron, hadron-nucleus, nucleus-nucleus and
neutrino-nucleus interactions at high energies.
The two-component Dual Parton Model is used with multiple soft chains and
multiple minijets at each elementary interaction.
Particle production is realized by the fragmentation of colorless
parton-parton chains constructed from the quark content of the interacting
hadrons. DPMJET-II.5 includes the cascading of secondaries within the target as
well as projectile nuclei which is suppressed by the formation time concept.
The excitation energy of the remaining target and projectile nuclei is calculated and, using this, nuclear evaporation is included in the model. It is possible to use the model up to primary energies of 10${}^{21}$ eV (per nucleon) in the lab frame.
DPMJET can also be applied to neutrino-nucleus collisions. It extends the neutrino-nucleon models qel (quasi-elastic neutrino interactions) and lepto (deep-inelastic neutrino-nucleon collisions) to neutrino collisions on nuclear targets.
|
Deep Learning methods are renowned for their performance, yet their lack of interpretability prevents their use in high-stakes contexts. Recent model-agnostic methods address this problem by providing post-hoc interpretability through reverse-engineering the model's inner workings. However, in many regulated fields, interpretability should be kept in mind from the start, which means that post-hoc methods are valid only as a sanity check after model training. Interpretability from the start, in an abstract setting, means posing a set of soft constraints on the model's behavior by injecting knowledge and eliminating possible biases. We propose a multicriteria technique that allows one to control the feature effects on the model's outcome by injecting knowledge into the objective function. We then extend the technique by including a non-linear knowledge function to account for more complex effects and local lack of knowledge. The result is a Deep Learning model that embodies interpretability from the start and aligns with recent regulations. A practical empirical example based on credit risk suggests that our approach creates performant yet robust models capable of overcoming biases derived from data scarcity.
|
A necessary and sufficient condition for a parameter transformation that leaves invariant the energy of a one-dimensional autonomous system is obtained. Using a parameter transformation, the Hamilton-Jacobi equation is solved by quadrature. An example of this approach is given.
|
The muon anomalous magnetic moment is investigated in the standard model with two Higgs doublets (S2HDM), motivated by spontaneous CP violation, in which all the effective Yukawa couplings become complex. As a consequence of the non-zero phase in the couplings, the one-loop contribution from the neutral scalar bosons can be positive or negative, depending on the CP phases. The interference between one- and two-loop diagrams can be constructive in a large parameter space of CP phases. This results in a significant contribution to the muon anomalous magnetic moment, even in the flavor-conserving process with a heavy neutral scalar boson ($m_h \sim$ 200 GeV), once the effective muon Yukawa coupling is large ($|\xi_\mu|\sim 50$). In general, the one-loop contributions from lepton-flavor-changing scalar interactions become more important. In particular, when all contributions are positive in a reasonable parameter space of CP phases, the recently reported 2.6 sigma deviation between experiment and theory can be easily explained, even for a heavy scalar boson with a relatively small Yukawa coupling in the S2HDM.
|
Landau damping is an essential mechanism for ensuring collective beam
stability in particle accelerators. Precise knowledge of the strength of Landau damping is key to making accurate predictions of beam stability for state-of-the-art high-energy colliders. In this paper we demonstrate an experimental procedure that allows quantifying the strength of Landau damping and the limits of beam stability, using an active transverse feedback as a controllable source of beam coupling impedance. In a proof-of-principle test performed at the Large Hadron Collider, stability diagrams for a range of Landau octupole strengths were measured. In the future, the procedure could
become an accurate way of measuring stability diagrams throughout the machine
cycle.
|
We investigate the interaction between a single mode light field and an
elongated cigar shaped Bose-Einstein condensate (BEC), subject to a temporal
modulation of the trap frequency in the tight confinement direction. Under
appropriate conditions, the longitudinal sound-like waves (Faraday waves) in the direction of weak confinement act as a dynamic diffraction grating for the incident light field, analogous to the acousto-optic effect in classical optics.
The change in the refractive index due to the periodic modulation of the BEC
density is responsible for the acousto-optic effect. The dynamics is
characterised by Bragg scattering of light from the matter-wave Faraday grating
and simultaneous Bragg scattering of the condensate atoms from the optical
grating formed due to the interference between the incident light and the
diffracted light fields. Varying the intensity of the incident laser beam, we
observe the transition from the acousto-optic effect regime to the atomic Bragg
scattering regime, where Rabi oscillations between two momentum levels of the
atoms are observed. We show that the acousto-optic effect is reduced as the
atomic interaction is increased.
|
Recent work has shown that object-centric representations can greatly help
improve the accuracy of learning dynamics while also bringing interpretability.
In this work, we take this idea one step further and ask the following question:
"can learning disentangled representation further improve the accuracy of
visual dynamics prediction in object-centric models?" While there has been some
attempt to learn such disentangled representations for the case of static
images \citep{nsb}, to the best of our knowledge, ours is the first work which
tries to do this in a general setting for video, without making any specific
assumptions about the kind of attributes that an object might have. The key
building block of our architecture is the notion of a {\em block}, where
several blocks together constitute an object. Each block is represented as a
linear combination of a given number of learnable concept vectors, which is
iteratively refined during the learning process. The blocks in our model are
discovered in an unsupervised manner, by attending over object masks, in a
style similar to discovery of slots \citep{slot_attention}, for learning a
dense object-centric representation. We employ self-attention via transformers
over the discovered blocks to predict the next state resulting in discovery of
visual dynamics. We perform a series of experiments on several benchmark 2-D and 3-D datasets, demonstrating that our architecture (1) can discover semantically meaningful blocks, (2) helps improve the accuracy of dynamics prediction compared to SOTA object-centric models, and (3) performs significantly better in the OOD setting where specific attribute combinations were not seen during training. Our experiments highlight the importance of discovering disentangled representations for visual dynamics prediction.
|
We study the causality violation in the non-local quantum field theory (as
formulated by Kleppe and Woodard) containing a finite mass scale $\Lambda $. We
use $\phi ^{4}$ theory as a simple model for study. Starting from the
Bogoliubov-Shirkov criterion for causality, we construct and study combinations
of S-matrix elements that signal violation of causality in the one loop
approximation. We find that the causality violation in the exclusive process
$\phi +\phi \to \phi +\phi $ grows with energy, but the growth with energy (for low to moderate energies) is suppressed to all orders compared to what one
would expect purely from dimensional considerations. We however find that the
causality violation in other processes such as $\phi +\phi \to \phi +\phi +\phi
+\phi $ grows with energy as expected from dimensional considerations at low to
moderate energies. For high enough energies comparable to the mass scale
$\Lambda $, however, we find a rapid (exponential-like) growth in the degree of
causality violation. We generalize some of the 1-loop results to all orders. We
present interpretations of the results based on possible interpretations of the
non-local quantum field theory models.
|
Gas giant planets are expected to accrete most of their mass via a
circumplanetary disk. If the planet is unmagnetized and initially slowly
rotating, it will accrete gas via a radially narrow boundary layer and rapidly
spin up. Radial broadening of the boundary layer as the planet spins up reduces
the specific angular momentum of accreted gas, allowing the planet to find a
terminal rotation rate short of the breakup rate. Here, we use axisymmetric
viscous hydrodynamic simulations to quantify the terminal rotation rate of
planets accreting from their circumplanetary disks. For an isothermal
planet-disk system with a disk scale height $h/r =0.1$ near the planetary
surface, spin up switches to spin down at between 70\% and 80\% of the planet's
breakup angular velocity. In a qualitative difference from vertically-averaged
models -- where spin down can co-exist with mass accretion -- we observe
\emph{decretion} accompanying solutions where angular momentum is being lost.
The critical spin rate depends upon the disk thickness near the planet. For an
isothermal system with a disk scale height of $h/r = 0.15$ near the planet, the
critical spin rate drops to between 60\% and 70\% of the planet's breakup
angular velocity. In the disk outside the boundary layer, we identify
meridional circulation flows, which are unsteady and instantaneously asymmetric
across the mid-plane. The simulated flows are strong enough to vertically
redistribute solid material in early-stage satellite formation. We discuss how
extrasolar planetary rotation measurements, when combined with spectroscopic
and variability studies of protoplanets with circumplanetary disks, could
determine the role of magnetic and non-magnetic processes in setting giant
planet spins.
|
For a certain parametrized family of maps on the circle with critical points
and logarithmic singularities where derivatives blow up to infinity, we
construct a positive measure set of parameters corresponding to maps which
exhibit nonuniformly expanding behavior. This implies the existence of
"chaotic" dynamics in dissipative homoclinic tangles in periodically perturbed
differential equations.
|
Video Motion Magnification (VMM) aims to reveal subtle and imperceptible
motion information of objects in the macroscopic world. Prior methods directly
model the motion field from the Eulerian perspective by Representation Learning
that separates shape and texture or Multi-domain Learning from phase
fluctuations. Inspired by the frequency spectrum, we observe that the
low-frequency components with stable energy always possess spatial structure
and less noise, making them suitable for modeling the subtle motion field. To
this end, we present FD4MM, a new paradigm of Frequency Decoupling for Motion
Magnification with a Multi-level Isomorphic Architecture to capture multi-level
high-frequency details and a stable low-frequency structure (motion field) in
video space. Since high-frequency details and subtle motions are susceptible to
information degradation due to their inherent subtlety and unavoidable external
interference from noise, we carefully design Sparse High/Low-pass Filters to
enhance the integrity of details and motion structures, and a Sparse Frequency
Mixer to promote seamless recoupling. Besides, we innovatively design a
contrastive regularization for this task to strengthen the model's ability to
discriminate irrelevant features, reducing undesired motion magnification.
Extensive experiments on both Real-world and Synthetic Datasets show that our
FD4MM outperforms SOTA methods. Meanwhile, FD4MM reduces FLOPs by 1.63$\times$
and boosts inference speed by 1.68$\times$ compared to the latest method. Our code is
available at https://github.com/Jiafei127/FD4MM.
|
Previous STRIPS domain model acquisition approaches that learn from state
traces start with the names and parameters of the actions to be learned.
Therefore their only task is to deduce the preconditions and effects of the
given actions. In this work, we explore learning in situations when the
parameters of learned actions are not provided. We define two levels of trace
quality based on which information is provided and present an algorithm for
each. In one level (L1), the states in the traces are labeled with action
names, so we can deduce the number and names of the actions, but we still need
to work out the number and types of parameters. In the other level (L2), the
states are additionally labeled with objects that constitute the parameters of
the corresponding grounded actions. Here we still need to deduce the types of
the parameters in the learned actions. We experimentally evaluate the proposed
algorithms and compare them with the state-of-the-art learning tool FAMA on a
large collection of IPC benchmarks. The evaluation shows that our new
algorithms are faster, can handle larger inputs and provide better results in
terms of learning action models more similar to reference models.
|
The cardinality constraint is an intrinsic way to restrict the solution
structure in many domains, for example, sparse learning, feature selection, and
compressed sensing. To solve a cardinality constrained problem, the key
challenge is to solve the projection onto the cardinality constraint set, which
is NP-hard in general when there exist multiple overlapped cardinality
constraints. In this paper, we consider the scenario where the overlapped
cardinality constraints satisfy a Three-view Cardinality Structure (TVCS),
which reflects the natural restriction in many applications, such as
identification of gene regulatory networks and task-worker assignment problem.
We cast the projection as a linear program and show that, for TVCS, the vertex solution of this linear program is the solution of the original projection problem. We further prove that such a solution can be found with complexity proportional to the number of variables and constraints. We finally
use synthetic experiments and two interesting applications in bioinformatics
and crowdsourcing to validate the proposed TVCS model and method.
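For intuition, when there is only a single cardinality constraint (no overlaps), the projection reduces to hard thresholding: keep the k entries of largest magnitude. A minimal sketch of this special case, not of the paper's LP-based TVCS method:

```python
import numpy as np

def project_cardinality(v: np.ndarray, k: int) -> np.ndarray:
    """Euclidean projection of v onto {x : ||x||_0 <= k}: keep the k
    largest-magnitude entries, zero out the rest. This is the easy
    single-constraint case; with multiple overlapped cardinality
    constraints the projection is NP-hard in general."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]   # indices of the k largest |v_i|
    out[keep] = v[keep]
    return out

print(project_cardinality(np.array([0.5, -2.0, 0.1, 1.5]), k=2))
# -> [ 0.  -2.   0.   1.5]
```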
|
Most MRI liver segmentation methods use a structural 3D scan as input, such
as a T1 or T2 weighted scan. Segmentation performance may be improved by
utilizing both structural and functional information, as contained in dynamic
contrast enhanced (DCE) MR series. Dynamic information can be incorporated in a
segmentation method based on convolutional neural networks in a number of ways.
In this study, the optimal input configuration of DCE-MR images for convolutional neural networks (CNNs) is investigated. The performance of three different input configurations is studied for a liver segmentation
task. The three configurations are I) one phase image of the DCE-MR series as
input image; II) the separate phases of the DCE-MR as input images; and III)
the separate phases of the DCE-MR as channels of one input image. The three
input configurations are fed into a dilated fully convolutional network and
into a small U-net. The CNNs were trained using 19 annotated DCE-MR series and
tested on another 19 annotated DCE-MR series. The performance of the three
input configurations for both networks is evaluated against manual annotations.
The results show that both neural networks perform better when the separate
phases of the DCE-MR series are used as channels of an input image in
comparison to one phase as input image or the separate phases as input images.
No significant difference between the performances of the two network
architectures was found for the separate phases as channels of an input image.
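The three input configurations can be made concrete as array shapes; a small NumPy sketch, where the number of phases (6) and the image size are illustrative assumptions:

```python
import numpy as np

# A DCE-MR series: 6 phases (illustrative), each a 256x256 slice.
phases = [np.random.rand(256, 256) for _ in range(6)]

# I)   One phase of the series as the (single-channel) input image.
config1 = phases[0][np.newaxis, ...]            # shape (1, 256, 256)

# II)  Each phase as a separate single-channel input image.
config2 = [p[np.newaxis, ...] for p in phases]  # 6 inputs, each (1, 256, 256)

# III) All phases stacked as channels of one input image
#      (the configuration that performed best in this study).
config3 = np.stack(phases, axis=0)              # shape (6, 256, 256)
```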
|
A link between the spin fluctuation and the "fermiology" is explored for the
single-band Hubbard model within the fluctuation exchange (FLEX) approximation.
We show that the experimentally observed peak position of the spin structure in
the high T_C cuprates can be understood from the model that reproduces the
experimentally observed Fermi surface. In particular, both the variation of the
incommensurability of the peak in the spin structure and the evolution of the
Fermi surface with hole doping in La_{2-x}Sr_xCuO_4 may be understood with a
second nearest neighbor hopping decreasing with hole doping.
|
Drell-Yan dilepton pair production and inclusive direct photon production can
be described within a unified framework in the color dipole approach. The
inclusion of non-perturbative primordial transverse momenta and DGLAP evolution
is studied. We successfully describe data for dilepton spectra from 800-GeV pp
collisions, inclusive direct photon spectra for pp collisions at RHIC energies
$\sqrt{s}=200$ GeV, and for $p\bar{p}$ collisions at Tevatron energies
$\sqrt{s}=1.8$ TeV, in a formalism that is free from any extra parameters.
|
Applications of the phase space approach to the calculation of the
microlensing autocorrelation function are presented. The continuous propagation
equation for a random star field with a Gaussian velocity distribution is
solved in the leading non-trivial approximation using the perturbation
technique. It is shown that microlensing modulations can be important in the
interpretation of optical and shorter-wavelength light curves of pulsars, power
spectra of active galactic nuclei and coherence estimates for quasi-periodic
oscillations of dwarf novae and low-mass X-ray binaries. Extra scatter in the
brightness of type Ia supernovae due to gravitational microlensing is shown to
be of order up to 0.2 stellar magnitudes depending on the extent of the light
curves.
|
Let integer $n \ge 3$ and integer $r = r(n) \ge 3$. Define the binomial
random $r$-uniform hypergraph $H_r(n, p)$ to be the $r$-uniform hypergraph on the vertex set $[n]$ such that each $r$-set is an edge independently with
probability $p$. A hypergraph is linear if every pair of hyperedges intersects
in at most one vertex. We study the probability of linearity of random
hypergraphs $H_r(n, p)$ via cluster expansion and give more precise asymptotics
of the probability in question, improving the asymptotic probability of
linearity obtained by McKay and Tian, in particular, when $r=3$ and $p =
o(n^{-7/5})$.
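The linearity property being counted here is simple to state in code; a small sketch checking whether a hypergraph, given as a collection of edge sets, is linear:

```python
from itertools import combinations

def is_linear(edges) -> bool:
    """A hypergraph is linear if every pair of hyperedges intersects
    in at most one vertex."""
    return all(len(set(e) & set(f)) <= 1 for e, f in combinations(edges, 2))

print(is_linear([{1, 2, 3}, {3, 4, 5}, {1, 4, 6}]))  # True
print(is_linear([{1, 2, 3}, {2, 3, 4}]))             # False: shares {2, 3}
```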
|
A method for calibrating the momentum scale in a particle physics detector is
described. The method relies on the determination of the masses of the final
state particles in two-body decays of neutral particles, which can then be used
to obtain corrections to the momentum scale. A modified version of the
Armenteros-Podolanski plot and the $K_S^0 \to \pi^+ \pi^-$ decay is used as a
proof of principle for this method.
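For context, the Armenteros-Podolanski plot referenced above is built from two kinematic variables of the decay daughters. A sketch of the standard (unmodified) variables; the paper's specific modification is not shown here:

```python
import numpy as np

def armenteros_variables(p_pos, p_neg):
    """Standard Armenteros-Podolanski variables for a two-body decay:
    alpha = longitudinal-momentum asymmetry of the daughters,
    q_T   = daughter momentum transverse to the parent direction.
    p_pos, p_neg: 3-momenta (numpy arrays) of the +/- daughters."""
    p_parent = p_pos + p_neg
    unit = p_parent / np.linalg.norm(p_parent)
    pl_pos = np.dot(p_pos, unit)                 # longitudinal components
    pl_neg = np.dot(p_neg, unit)
    alpha = (pl_pos - pl_neg) / (pl_pos + pl_neg)
    q_t = np.linalg.norm(p_pos - pl_pos * unit)  # transverse component
    return alpha, q_t

# K_S^0 -> pi+ pi- candidates populate an ellipse in the (alpha, q_T) plane.
```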
|
We propose a method named Super Characters for sentiment classification. This
method converts the sentiment classification problem into an image classification problem by projecting texts into images and then applying CNN models for classification. Text features are extracted automatically from the generated Super Characters images; hence, there is no need for any explicit step of
embedding the words or characters into numerical vector representations.
Experimental results on large social media corpus show that the Super
Characters method consistently outperforms other methods for sentiment
classification and topic classification tasks on ten large social media
datasets of millions of contents in four different languages, including
Chinese, Japanese, Korean and English.
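A minimal illustration of the text-to-image projection step, using Pillow; the image size, grid layout, and default font are illustrative assumptions rather than the paper's exact recipe:

```python
from PIL import Image, ImageDraw

def text_to_super_characters(text: str, size: int = 224,
                             grid: int = 8) -> Image.Image:
    """Draw each character of `text` into a cell of a grid x grid layout,
    producing an image that a CNN can classify directly."""
    img = Image.new("L", (size, size), color=255)  # white background
    draw = ImageDraw.Draw(img)
    cell = size // grid
    for i, ch in enumerate(text[: grid * grid]):
        row, col = divmod(i, grid)
        draw.text((col * cell, row * cell), ch, fill=0)  # default font
    return img

text_to_super_characters("The movie was wonderful!").save("super_chars.png")
```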
|
We build rearticulable models for arbitrary everyday man-made objects
containing an arbitrary number of parts that are connected together in
arbitrary ways via 1 degree-of-freedom joints. Given point cloud videos of such
everyday objects, our method identifies the distinct object parts, what parts
are connected to what other parts, and the properties of the joints connecting
each part pair. We do this by jointly optimizing the part segmentation,
transformation, and kinematics using a novel energy minimization framework. Our
inferred animatable models enable retargeting to novel poses with sparse point-correspondence guidance. We test our method on a new articulating robot
dataset, and the Sapiens dataset with common daily objects, as well as
real-world scans. Experiments show that our method outperforms two leading
prior works on various metrics.
|
Optical long baseline interferometry is a technique that has generated almost
850 refereed papers to date. The targets span a large variety of objects from
planetary systems to extragalactic studies and all branches of stellar physics.
We have created a database hosted by the JMMC and connected to the Optical Long
Baseline Interferometry Newsletter (OLBIN) web site using MySQL and a
collection of XML or PHP scripts in order to store and classify these
publications. Each entry is defined by its ADS bibcode and includes basic ADS information and metadata. The metadata are specified by tags sorted into
categories: interferometric facilities, instrumentation, wavelength of
operation, spectral resolution, type of measurement, target type, and paper
category, for example. The whole OLBIN publication list has been processed and
we present how the database is organized and can be accessed. We use this tool
to generate statistical plots of interest for the community in optical long
baseline interferometry.
|
Entanglement entropies have proven, in recent years, to be a powerful tool to extract information about the physics of condensed-matter systems. In the first part of this thesis, we show how to extract essential details about the quasi-long-range order of one-dimensional critical systems by means of entanglement entropies. In the second part, we show how to derive analytically the scaling of such quantities for critical systems whose low-energy physics is described by a conformal field theory, in the presence of general open boundary conditions that preserve the conformal invariance.
|
One of the most fundamental properties of an interacting electron system is
its frequency- and wave-vector-dependent density response function, $\chi({\bf
q},\omega)$. The imaginary part, $\chi''({\bf q},\omega)$, defines the
fundamental bosonic charge excitations of the system, exhibiting peaks wherever
collective modes are present. $\chi$ quantifies the electronic compressibility
of a material, its response to external fields, its ability to screen charge,
and its tendency to form charge density waves. Unfortunately, there has never
been a fully momentum-resolved means to measure $\chi({\bf q},\omega)$ at the
meV energy scale relevant to modern electronic materials. Here, we demonstrate a
way to measure $\chi$ with quantitative momentum resolution by applying
alignment techniques from x-ray and neutron scattering to surface
high-resolution electron energy-loss spectroscopy (HR-EELS). This approach,
which we refer to here as "M-EELS," allows direct measurement of $\chi''({\bf
q},\omega)$ with meV resolution while controlling the momentum with an accuracy
better than a percent of a typical Brillouin zone. We apply this technique to
finite-q excitations in the optimally-doped high temperature superconductor,
Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (Bi2212), which exhibits several phonons
potentially relevant to dispersion anomalies observed in ARPES and STM
experiments. Our study defines a path to studying the long-sought collective
charge modes in quantum materials at the meV scale and with full momentum
control.
|
This paper uses combinatorics and group theory to answer questions about the
assembly of icosahedral viral shells. Although the geometric structure of the
capsid (shell) is fairly well understood in terms of its constituent subunits,
the assembly process is not. For the purpose of this paper, the capsid is
modeled by a polyhedron whose facets represent the monomers. The assembly
process is modeled by a rooted tree, the leaves representing the facets of the
polyhedron, the root representing the assembled polyhedron, and the internal
vertices representing intermediate stages of assembly (subsets of facets).
Besides its virological motivation, the enumeration of orbits of trees under
the action of a finite group is of independent mathematical interest. If $G$ is
a finite group acting on a finite set $X$, then there is a natural induced
action of $G$ on the set $\mathcal{T}_X$ of trees whose leaves are bijectively
labeled by the elements of $X$. If $G$ acts simply on $X$, then $|X| := |X_n| =
n \cdot |G|$, where $n$ is the number of $G$-orbits in $X$. The basic
combinatorial results in this paper are (1) a formula for the number of orbits
of each size in the action of $G$ on $\mathcal{T}_{X_n}$, for every $n$, and
(2) a simple algorithm to find the stabilizer of a tree $\tau \in
\mathcal{T}_X$ in $G$ that runs in linear time and does not need memory in
addition to its input tree.
|
We explore the symmetry group of the pressure isotropy condition in isotropic
coordinates finding a rich structure. We work out some specific examples.
|
We will show that for any $n\ge N$ points on the $N$-dimensional sphere $S^N$
there is a closed hemisphere which contains at least
$\lfloor\frac{n+N+1}{2}\rfloor$ of these points. This bound is sharp, and we will calculate the number of point sets which realize this value.
If we change to open hemispheres, things become easier. For any $n$ points on
the sphere there is an open hemisphere which contains at least
$\lfloor\frac{n+1}{2}\rfloor$ of these points, independent of the dimension.
This bound is sharp.
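A brute-force numeric illustration of the closed-hemisphere bound (a sanity check on random points, not a proof): a closed hemisphere with normal v contains exactly the points x with v·x ≥ 0, and the bound predicts some hemisphere holds at least floor((n+N+1)/2) of any n points on S^N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 2, 7                                  # n points on the sphere S^N
pts = rng.normal(size=(n, N + 1))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

bound = (n + N + 1) // 2                     # floor((n+N+1)/2) = 5 here

# Random search over hemisphere normals is enough for an illustration
# (v need not be normalized: only the sign of v.x matters).
best = 0
for v in rng.normal(size=(20000, N + 1)):
    best = max(best, int(np.sum(pts @ v >= 0)))
print(f"best hemisphere holds {best} of {n} points; bound is {bound}")
assert best >= bound   # met with overwhelming probability for sampled points
```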
|
We study gravitational waves from a hierarchical three-body system up to first-order post-Newtonian approximation. Under certain conditions, the existence of a nearby third body can cause periodic exchange between the eccentricity of an inner binary and the relative inclination, known as Kozai-Lidov oscillations. We analyze features of the waveform from the inner binary system undergoing such oscillations. We find that the variations caused by the tertiary companion can be observed in the gravitational waveforms and energy spectra, which should be compared with those from isolated binaries and coplanar three-body systems. Detections by future space interferometers will make possible the investigation of the gravitational-wave spectrum in the mHz range and may capture signals from the sources addressed here.
|
We propose new mechanisms for small neutrino masses based on the clockwork mechanism. The Standard Model neutrinos and lepton-number-violating operators communicate through the zero mode of the clockwork gears; one of the two couplings of the zero mode is exponentially suppressed by the clockwork mechanism. Including
all known examples for the clockwork realization of the neutrino masses,
different types of models are realized depending on the profile and chirality
of the zero mode fermion. Each type of realization would have
phenomenologically distinctive features with the accompanying heavy neutrinos.
|
SN1988Z is the most luminous X-ray-emitting supernova, initially detected in
1995 using the ROSAT HRI with a luminosity of ~8x10^40 erg s^-1 (Fabian &
Terlevich 1996). Its high luminosity was ascribed to expansion of the blast
wave into an especially dense circumstellar medium. In this paper, we describe
a recent observation of SN1988Z using the ACIS detector on Chandra. We readily
detect SN1988Z, obtaining ~30 net counts which corresponds to a 0.2-2 keV
luminosity of ~3.2x10^39 erg s^-1. The calculated quantiles for the extracted
counts allow a broad range of temperatures, but require a temperature hotter
than 5 keV if there is no intrinsic absorption. The long term light curve
(1995-2005) declines as t^-2.6+/-0.6. This is one of the steepest X-ray light
curves. The X-ray luminosity indicates that the emitting region has a high
density (>10^5 cm^-3) and that the density profile is not consistent with a
constant mass loss stellar wind during the ~5000 years before the explosion. If
the circumstellar medium is due to progenitor mass loss, then the mass loss
rate is extremely high (~10^-3 M_sol yr^-1(v_w / 10 km s^-1)). The X-ray
results are compared with the predictions of models of SN1988Z.
|
This paper makes a simple increment to the state of the art in sarcasm
detection research. Existing approaches are unable to capture subtle forms of
context incongruity, which lie at the heart of sarcasm. We explore whether
prior work can be enhanced using semantic similarity/discordance between
word embeddings. We
augment word embedding-based features to four feature sets reported in the
past. We also experiment with four types of word embeddings. We observe an
improvement in sarcasm detection, irrespective of the word embedding used or
the original feature set to which our features are augmented. For example, this
augmentation results in an improvement in F-score of around 4\% for three out
of these four feature sets, and a minor degradation in case of the fourth, when
Word2Vec embeddings are used. Finally, a comparison of the four embeddings
shows that Word2Vec and dependency weight-based features outperform LSA and
GloVe, in terms of their benefit to sarcasm detection.
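A minimal sketch of the kind of augmented features described above, assuming
precomputed embeddings in a hypothetical dict `emb` (e.g. Word2Vec vectors);
the paper's exact feature definitions may differ:

```python
import numpy as np

def embedding_features(words, emb):
    """Max/min pairwise similarity and discordance scores for a sentence."""
    vecs = [emb[w] for w in words if w in emb]
    sims = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            a, b = vecs[i], vecs[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    if not sims:
        return [0.0, 0.0, 0.0, 0.0]
    sims = np.array(sims)
    return [sims.max(), sims.min(), (1 - sims).max(), (1 - sims).min()]
```

These scores would simply be concatenated to a base feature set before
training the sarcasm classifier.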
|
Recent developments of Baxter algebras have led to applications to
combinatorics, number theory and mathematical physics. We relate Baxter
algebras to Stirling numbers of the first kind and the second kind, partitions
and multinomial coefficients. This allows us to apply congruences from number
theory to obtain congruences in Baxter algebras.
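For reference, the second-kind Stirling numbers mentioned above satisfy
$S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$, counting partitions of an $n$-set into
$k$ nonempty blocks; a small sketch to compute them (standard combinatorics,
nothing specific to Baxter algebras):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print([stirling2(5, k) for k in range(6)])  # [0, 1, 15, 25, 10, 1]
```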
|
The spin-1/2 zig-zag Heisenberg ladder (J_1 - J_2 model) is considered. A new
representation for the model is found and a saddle-point approximation over
the spin-liquid order parameter
$\langle \vec\sigma_{n-1} \cdot (\vec\sigma_{n} \times \vec\sigma_{n+1}) \rangle$
is performed. The corresponding effective action is derived and analyzed
analytically. We observe the presence of phase transitions at the values
J_2/J_1=0.231 and J_2/J_1=1/2.
|
We resume a long-standing, yet not forgotten, debate on whether a
Chern-Simons birefringence can be generated by a local term
$b_\mu\bar\psi\gamma^\mu \gamma_5\psi$ in the Lagrangian (where $b_\mu$ are
constants). In the present paper we implement a new way of managing $\gamma_5$
in dimensional regularization. Gauge invariance in the underlying theory (QED)
is enforced by this choice of defining divergent amplitudes. We investigate the
singular behavior of the vector meson two-point-function around the $m^2=0$ and
$p^2=0$ point. We find that the coefficient of the effective Chern-Simons
term can be finite or zero, depending on how one takes the limits: they
cannot be interchanged, due to the associated change of symmetry. For
$m^2=0$ we also evaluate the self-mass of the photon at second order in
$b_\mu$; we find it to be zero.
|
The main result is an explicit expression for the Pressure Metric on the
Hitchin component of surface group representations into PSL(n,R) along the
Fuchsian locus. The expression is in terms of a parametrization of the tangent
space by holomorphic differentials, and it gives a precise relationship with
the Petersson pairing. Along the way, variational formulas are established that
generalize results from classical Teichmueller theory, such as Gardiner's
formula, the relationship between length functions and Fenchel-Nielsen
deformations, and variations of cross ratios.
|
We extend some of the results of Agler, Knese, and McCarthy [1] to $n$-tuples
of commuting isometries for $n>2$. Let $\mathbb{V}=(V_1,\dots,V_n)$ be an
$n$-tuple of commuting isometries on a Hilbert space and let
Ann$(\mathbb{V})$ denote the set of all $n$-variable polynomials $p$ such that
$p(\mathbb{V})=0$. When Ann$(\mathbb{V})$ defines an affine algebraic variety
of dimension 1 and $\mathbb{V}$ is completely non-unitary, we show that
$\mathbb{V}$ decomposes as a direct sum of $n$-tuples
$\mathbb{W}=(W_1,\dots,W_n)$ with the property that, for each $i=1,\dots,n$,
$W_i$ is either a shift or a scalar multiple of the identity. If $\mathbb{V}$
is a cyclic $n$-tuple of commuting shifts, then we show that $\mathbb{V}$ is
determined by Ann$(\mathbb{V})$ up to near unitary equivalence, as defined in
[1].
|
For the open unit disc $\mathbb{D}$ in the complex plane, it is well known
that if $\phi \in C(\overline{\mathbb{D}})$ then its Berezin transform
$\widetilde{\phi}$ also belongs to $C(\overline{\mathbb{D}})$. We say that
$\mathbb{D}$ is BC-regular. In this paper we study BC-regularity of some
pseudoconvex domains in $\mathbb{C}^n$ and show that the boundary geometry
plays an important role. We also establish a relationship between the essential
norm of an operator in a natural Toeplitz subalgebra and its Berezin transform.
|
The paper presents an implementation and tests of a simple home entertainment
distribution architecture (server + multiple clients) implemented using two
conventional cabling architectures: CATV coaxial cable and conventional
Ethernet. This architecture is created taking into account the "Home gateway"
concept present in most attempts to solve the problem of the "Intelligent
home". A short presentation of the experimental is given with an investigation
of the main performances obtained using this architecture. The experiments
revealed that this simple solution makes possible to have entertainment and
data services with performances close to traditional data services in a
cost-effective architecture
|
Active galactic nuclei (AGN) feedback models are generally calibrated to
reproduce galaxy observables such as the stellar mass function and the
bimodality in galaxy colors. We use variations of the AGN feedback
implementations in the IllustrisTNG (TNG) and Simba cosmological hydrodynamic
simulations to show that the low redshift Lyman-$\alpha$ forest can provide
constraints on the impact of AGN feedback. We show that TNG over-predicts the
number density of absorbers at column densities $N_{\rm HI} < 10^{14}$
cm$^{-2}$ compared to data from the Cosmic Origins Spectrograph (in agreement
with previous work), and we demonstrate explicitly that its kinetic feedback
mode, which is primarily responsible for galaxy quenching, has a negligible
impact on the column density distribution (CDD) of absorbers. In contrast, we
show that the fiducial Simba model, which includes AGN jet feedback, is the
preferred fit to the observed CDD of the $z = 0.1$ Lyman-$\alpha$ forest across
five orders of magnitude in column density. We show that the Simba results with
jets produce a quantitatively better fit to the observational data than the
Simba results without jets, even when the UVB is left as a free parameter. AGN
jets in Simba are high-speed, collimated, weakly interacting with the
interstellar medium (via brief hydrodynamic decoupling), and heated to the
halo virial temperature. Collectively, these properties result in stronger
long-range
impacts on the IGM when compared to TNG's kinetic feedback mode, which drives
isotropic winds with lower velocities at the galactic radius. Our results
suggest that the low redshift Lyman-$\alpha$ forest provides plausible evidence
for long-range AGN jet feedback.
|
Structured growth of high quality graphene is necessary for technological
development of carbon based electronics. Specifically, control of the bunching
and placement of surface steps under epitaxial graphene on SiC is an important
consideration for graphene device production. We demonstrate lithographically
patterned evaporated amorphous carbon corrals as a method to pin SiC surface
steps. Evaporated amorphous carbon is an ideal step-flow barrier on SiC due to
its chemical compatibility with graphene growth and its structural stability at
high temperatures, as well as its patternability. The amorphous carbon is
deposited in vacuum on SiC prior to graphene growth. In the graphene furnace at
temperatures above 1200$^\circ$C, mobile SiC steps accumulate at these
amorphous carbon barriers, forming an aligned step free region for graphene
growth at temperatures above 1330$^\circ$C. AFM imaging and Raman spectroscopy
support the formation of quality step-free graphene sheets grown on SiC with
the step morphology aligned to the carbon grid.
|
We present new Chandra and XMM-Newton observations of a sample of eight
radio-quiet Gamma-ray pulsars detected by the Fermi Large Area Telescope. For
all eight pulsars we identify the X-ray counterpart, based on the X-ray source
localization and the best position obtained from Gamma-ray pulsar timing. For
PSR J2030+4415 we found evidence for a roughly 10 arcsec-long pulsar wind
nebula. Our new results consolidate the work from Marelli et al. 2011 and
confirm that, on average, the Gamma-ray--to--X-ray flux ratios (Fgamma/Fx) of
radio-quiet pulsars are higher than for the radio-loud ones. Furthermore, while
the Fgamma/Fx distribution features a single peak for the radio-quiet pulsars,
the distribution is more dispersed for the radio-loud ones, possibly showing
two peaks. We discuss possible implications of these different distributions
based on current models for pulsar X-ray emission.
|
In visual computing, 3D geometry is represented in many different forms
including meshes, point clouds, voxel grids, level sets, and depth images. Each
representation is suited for different tasks, thus making the transformation of
one representation into another (forward map) an important and common problem.
We propose Omnidirectional Distance Fields (ODFs), a new 3D shape
representation that encodes geometry by storing the depth to the object's
surface from any 3D position in any viewing direction. Since rays are the
fundamental unit of an ODF, it can be used to easily transform to and from
common 3D representations like meshes or point clouds. Different from level set
methods that are limited to representing closed surfaces, ODFs are unsigned and
can thus model open surfaces (e.g., garments). We demonstrate that ODFs can be
effectively learned with a neural network (NeuralODF) despite the inherent
discontinuities at occlusion boundaries. We also introduce efficient forward
mapping algorithms for transforming ODFs to and from common 3D representations.
Specifically, we introduce an efficient Jumping Cubes algorithm for generating
meshes from ODFs. Experiments demonstrate that NeuralODF can learn to capture
high-quality shape by overfitting to a single object, and also learn to
generalize on common shape categories.
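A minimal analytic example of the representation as defined above (the
paper's NeuralODF regresses this map from data; this closed form is only an
illustration): the ODF of a unit sphere, queried at position `p` along
direction `d`, returns the distance to the first surface hit, or infinity if
the ray misses.

```python
import numpy as np

def sphere_odf(p, d, r=1.0):
    p, d = np.asarray(p, float), np.asarray(d, float)
    d = d / np.linalg.norm(d)
    b, c = p @ d, p @ p - r * r
    disc = b * b - c
    if disc < 0:
        return np.inf                    # ray misses the sphere
    hits = [t for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)) if t >= 0]
    return min(hits) if hits else np.inf

print(sphere_odf([2, 0, 0], [-1, 0, 0]))  # 1.0, depth from outside
print(sphere_odf([0, 0, 0], [1, 0, 0]))   # 1.0, unsigned: works from inside
```

Because the field is unsigned, the same query works from inside the surface,
which is what permits open surfaces such as garments.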
|
We show how to sample in parallel from a distribution $\pi$ over $\mathbb
R^d$ that satisfies a log-Sobolev inequality and has a smooth log-density, by
parallelizing the Langevin (resp. underdamped Langevin) algorithms. We show
that our algorithm outputs samples from a distribution $\hat\pi$ that is close
to $\pi$ in Kullback--Leibler (KL) divergence (resp. total variation (TV)
distance), while using only $\log(d)^{O(1)}$ parallel rounds and
$\widetilde{O}(d)$ (resp. $\widetilde O(\sqrt d)$) gradient evaluations in
total. These constitute the first parallel sampling algorithms with TV distance
guarantees.
For our main application, we show how to combine the TV distance guarantees
of our algorithms with prior works and obtain RNC sampling-to-counting
reductions for families of discrete distributions on the hypercube $\{\pm 1\}^n$
that are closed under exponential tilts and have bounded covariance.
Consequently, we obtain an RNC sampler for directed Eulerian tours and
asymmetric determinantal point processes, resolving open questions raised in
prior works.
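For intuition only, a minimal sequential unadjusted Langevin step (the
paper's contribution, running such dynamics in polylogarithmically many
parallel rounds, is not attempted here); `grad_logpi` is the score
$\nabla \log \pi$ of the target density:

```python
import numpy as np

def langevin_sample(grad_logpi, x0, step, n_steps, rng):
    # Unadjusted Langevin: x <- x + h grad log pi(x) + sqrt(2h) N(0, I)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * grad_logpi(x) \
              + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# Example: target pi = N(0, I) in 2D, so grad log pi(x) = -x.
rng = np.random.default_rng(0)
samples = np.array([langevin_sample(lambda x: -x, np.zeros(2), 0.05, 500, rng)
                    for _ in range(200)])
print(samples.mean(axis=0))  # approximately [0, 0]
```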
|
Compound Poisson distributions and signed compound Poisson measures are used
for approximation of the Markov binomial distribution. The upper and lower
bound estimates are obtained for the total variation, local and Wasserstein
norms. In a special case, asymptotically sharp constants are calculated. For
the upper bounds, the smoothing properties of compound Poisson distributions
are applied. For the lower bound estimates, the characteristic function method
is used.
|
As the most essential part of CAD modeling operations, boolean operations on
B-rep CAD models often suffer from errors. Errors caused by limited geometric
precision or numerical uncertainty are hard to eliminate; they reduce the
reliability of boolean operations and damage the integrity of the resulting
models, and models damaged in this way are difficult to repair. In practice,
we find that illegal boolean results stem from incorrect intersection edges
caused by such errors. Therefore, this paper
proposes an automatic method based on set reasoning to repair flawed structures
of the boolean resulting models by correcting their topological intersection
edges. We provide a local adaptive tolerance estimation method for each
intersection edge based on its geometric features as well as its origin. Then,
we propose a set of inference mechanisms based on set operations to infer
whether a repair is needed based on the tolerance value and how to correct the
inaccurate intersection edge. Our inference strategies are strictly proven,
ensuring the reliability and robustness of the repair process. The inference
process will transform the problem into a geometric equivalent form less
susceptible to errors to get a more accurate intersection edge. Since our
inference procedure focuses on topological features, our method can repair the
flawed boolean resulting models, no matter what source of errors causes the
problem.
|
For quantum fluids, the role of quantum fluctuations may be significant in
several regimes such as when the dimensionality is low, the density is high,
the interactions are strong, or for low particle numbers. In this paper we
propose a fundamentally different regime for enhanced quantum fluctuations
without being restricted by any of the above conditions. Instead, our scheme
relies on the engineering of an effective attractive interaction in a dilute,
two-component Bose-Einstein condensate (BEC) consisting of thousands of atoms.
In such a regime, the quantum spin fluctuations are significantly enhanced
(atom bunching with respect to the noninteracting limit) since they act to
reduce the interaction energy - a remarkable property given that spin
fluctuations are normally suppressed (anti-bunching) at zero temperature. In
contrast to the case of true attractive interactions, our approach is not
vulnerable to BEC collapse. We numerically demonstrate that these quantum
fluctuations are experimentally accessible by either spin or single-component
Bragg spectroscopy, offering a useful platform on which to test
beyond-mean-field theories. We also develop a variational model and use it to
analytically predict the shift of the immiscibility critical point, finding
good agreement with our numerics.
|
We have studied the phase volume fraction related magnetoresistance (MR)
across the first order martensite transformation (MT) of Ni44Cu2Mn43In11 alloy.
Within the metastability of MT, an isothermal application of magnetic field
converts the martensite into austenite. The field induced austenite phase
fraction (fIA) at any temperature depends on the availability and instability
of the martensite phase fraction (fM) at that temperature. This fIA is found
to contribute most significantly to the observed giant MR, while the
contribution from the pure martensite and austenite phase fractions is
negligible. It is found that the net MR follows a nonlinear proportional
relation with fIA, and that the ascending and descending branches of fIA
follow different power laws, giving rise to hysteresis in MR. Here we present
a detailed explanation of the observed behaviour of MR based on the existing
phase fractions.
|
We study the carrier transport and magnetic properties of group-IV-based
ferromagnetic semiconductor Ge1-xFex thin films (Fe concentration x = 2.3 - 14
%) with and without boron (B) doping, by measuring their transport
characteristics: the temperature dependence of resistivity, hole concentration,
mobility, and the relation between the anomalous Hall conductivity versus
conductivity. At relatively low x (= 2.3 %), the transport in the undoped
Ge1-xFex film is dominated by hole hopping between Fe-rich hopping sites in the
Fe impurity band, whereas that in the B-doped Ge1-xFex film is dominated by the
holes in the valence band in the degenerate Fe-poor regions. As x increases (x
= 2.3 - 14 %), the transport in both the undoped and B-doped Ge1-xFex films is
dominated by hole hopping between the Fe-rich hopping sites of the impurity
band. The magnetic properties of the Ge1-xFex films are studied by various
methods including magnetic circular dichroism, magnetization and anomalous Hall
resistance, and are not influenced by B-doping. We show band profile models of
both undoped and B-doped Ge1-xFex films, which can explain the transport and
the magnetic properties of the Ge1-xFex films.
|
In this paper, two problems that show great similarities are examined. The
first problem is the reconstruction of the angular-domain periodogram from
spatial-domain signals received at different time indices. The second one is
the reconstruction of the frequency-domain periodogram from time-domain signals
received at different wireless sensors. We split the entire angular or
frequency band into uniform bins. The bin size is set such that the received
spectra at two frequencies or angles, whose distance is equal to or larger than
the size of a bin, are uncorrelated. These problems in the two different
domains lead to a similar circulant structure in the so-called coset
correlation matrix. This circulant structure allows for a strong compression
and a simple least-squares reconstruction method. The latter is possible under
the full column rank condition of the system matrix, which can be achieved by
designing the spatial or temporal sampling patterns based on a circular sparse
ruler. We analyze the statistical performance of the compressively
reconstructed periodogram including bias and variance. We further consider the
case when the bins are so small that the received spectra at two frequencies or
angles, with a spacing between them larger than the size of the bin, can still
be correlated. In this case, the resulting coset correlation matrix is
generally not circulant and thus a special approach is required.
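Schematically, the least-squares step described above stacks the estimated
coset correlations into a vector $r$ and solves $r = A s$ for the per-bin
periodogram $s$, which requires the known system matrix $A$ to have full
column rank (achievable with circular-sparse-ruler sampling). A toy sketch
with a random placeholder $A$, not the paper's circulant construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_corr = 8, 20
A = rng.normal(size=(n_corr, n_bins))        # full column rank w.h.p.
s_true = rng.uniform(0.5, 2.0, size=n_bins)  # per-bin power
r = A @ s_true + 0.01 * rng.normal(size=n_corr)

s_hat, *_ = np.linalg.lstsq(A, r, rcond=None)
print(np.max(np.abs(s_hat - s_true)))        # small reconstruction error
```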
|
In this paper, we study the property of weak approximation with Brauer-Manin
obstruction for surfaces with respect to field extensions of number fields. For
any nontrivial extension of number fields L/K, assuming a conjecture of M.
Stoll, we construct a smooth, projective, and geometrically connected surface
over K such that it satisfies weak approximation with Brauer-Manin obstruction
off all archimedean places, while its base change to L fails. Then we
illustrate this construction with an explicit unconditional example.
|
The interactions between electrons and phonons drive a large array of
technologically relevant material properties including ferroelectricity,
thermoelectricity, and phase-change behaviour. In the case of many group IV-VI,
V, and related materials, these interactions are strong and the materials exist
near electronic and structural phase transitions. Their close proximity to
phase instability produces a fragile balance among the various properties. The
prototypical example is PbTe whose incipient ferroelectric behaviour has been
associated with large phonon anharmonicity and thermoelectricity. Experimental
measurements on PbTe reveal anomalous lattice dynamics, especially in the soft
transverse optical phonon branch. This has been interpreted in terms of both
giant anharmonicity and local symmetry breaking due to off-centering of the Pb
ions. The observed anomalies have prompted renewed theoretical and
computational interest, which has in turn revived focus on the extent that
electron-phonon interactions drive lattice instabilities in PbTe and related
materials. Here, we use Fourier-transform inelastic x-ray scattering (FT-IXS)
to show that photo-injection of free carriers stabilizes the paraelectric
state. With support from constrained density functional theory (CDFT)
calculations, we find that photoexcitation weakens the long-range forces along
the cubic direction tied to resonant bonding and incipient ferroelectricity.
This demonstrates the importance of electronic states near the band edges in
determining the equilibrium structure.
|
A non-standard CP-odd Higgs boson could induce a slight (but observable)
lepton universality breaking in Upsilon leptonic decays. Moreover, mixing
between such a pseudoscalar Higgs boson and $\eta_b$ states might shift their
mass levels, thereby modifying the values of the
$M_{\Upsilon(nS)}-M_{\eta_b(nS)}$ hyperfine splittings predicted in the
standard model. Besides, $\eta_b$ resonances could be broader than expected
with potentially negative consequences for discovery in both $e^+e^-$ and
hadron colliders. A scenario with a CP violating Higgs sector is also
considered. Finally, further strategies to search for a light Higgs particle in
bottomonium decays are outlined.
|
Much of the theoretical work on strategic voting makes strong assumptions
about what voters know about the voting situation. A strategizing voter is
typically assumed to know how other voters will vote and to know the rules of
the voting method. A growing body of literature explores strategic voting when
there is uncertainty about how others will vote. In this paper, we study
strategic voting when there is uncertainty about the voting method. We
introduce three notions of manipulability for a set of voting methods: sure,
safe, and expected manipulability. With the help of a computer program, we
identify voting scenarios in which uncertainty about the voting method may
reduce or even eliminate a voter's incentive to misrepresent her preferences.
Thus, it may be in the interest of an election designer who wishes to reduce
strategic voting to leave voters uncertain about which of several reasonable
voting methods will be used to determine the winners of an election.
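A toy version of this kind of computational check (the authors' program and
precise definitions are not reproduced; "sure" is simplified here to mean a
strictly better winner under every method in the set):

```python
from itertools import permutations

def plurality(profile, cands):
    tally = {c: sum(1 for r in profile if r[0] == c) for c in cands}
    return max(sorted(cands), key=lambda c: tally[c])   # alphabetical ties

def borda(profile, cands):
    m = len(cands)
    tally = {c: sum(m - 1 - r.index(c) for r in profile) for c in cands}
    return max(sorted(cands), key=lambda c: tally[c])

def sure_manipulation(profile, i, methods, cands):
    """A misreport for voter i that strictly improves the winner under
    every method; returns None if no such misreport exists."""
    truth = profile[i]
    rank = {c: truth.index(c) for c in cands}           # lower = preferred
    base = [m(profile, cands) for m in methods]
    for lie in permutations(cands):
        lie = list(lie)
        if lie == truth:
            continue
        trial = profile[:i] + [lie] + profile[i + 1:]
        if all(rank[m(trial, cands)] < rank[w]
               for m, w in zip(methods, base)):
            return lie
    return None

cands = ['a', 'b', 'c']
profile = [['a', 'b', 'c'], ['b', 'c', 'a'], ['c', 'a', 'b'],
           ['b', 'a', 'c'], ['c', 'b', 'a']]
print(sure_manipulation(profile, 0, [plurality, borda], cands))  # None
```

Requiring improvement under every method is more demanding than under any
single method, which is the mechanism by which uncertainty about the rule can
remove incentives to misreport.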
|
The high penetration of renewable energy and power electronic equipment
brings significant challenges to the efficient construction of adaptive
emergency control strategies against various presumed contingencies in
today's power systems. Traditional model-based emergency control methods have
difficulty adapting to the various complicated operating conditions
encountered in practice. Emerging artificial intelligence-based approaches,
i.e., reinforcement learning-enabled solutions, have yet to provide solid
safety assurances under strict constraints in practical power systems. To
address these research gaps, this
paper develops a safe reinforcement learning (SRL)-based pre-decision making
framework against short-term voltage collapse. Our proposed framework employs
neural networks for pre-decision formulation, security margin estimation, and
corrective action implementation, without reliance on precise system
parameters. Leveraging the gradient projection, we propose a security
projecting correction algorithm that offers theoretical security assurances to
amend risky actions. The applicability of the algorithm is further enhanced
through the incorporation of active learning, which expedites the training
process and improves security estimation accuracy. Extensive numerical tests on
the New England 39-bus system and the realistic Guangdong Provincial Power Grid
demonstrate the effectiveness of the proposed framework.
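A stylized sketch of the gradient-projection idea (the paper's
security-margin estimator and correction algorithm are more involved; the
linear constraint below is a hypothetical linearization): if a proposed
action violates the security constraint $g^\top a \le b$, pull it back to the
nearest point on the constraint boundary.

```python
import numpy as np

def project_action(a, g, b):
    a, g = np.asarray(a, float), np.asarray(g, float)
    violation = g @ a - b
    if violation <= 0:
        return a                        # already secure
    return a - violation * g / (g @ g)  # closest point with g @ a' = b

a = np.array([1.0, 2.0])                # risky action from the policy
g = np.array([1.0, 1.0])                # gradient of the security margin
print(project_action(a, g, b=2.0))      # [0.5, 1.5]
```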
|
We consider the use of probabilistic neural networks for fluid flow
surrogate modeling and data recovery. This framework is constructed by
assuming that the target variables are sampled from a Gaussian distribution
conditioned on the inputs. Consequently, the overall formulation sets up a
procedure to predict the hyperparameters of this distribution which are then
used to compute an objective function given training data. We demonstrate that
this framework has the ability to provide for prediction confidence intervals
based on the assumption of a probabilistic posterior, given an appropriate
model architecture and adequate training data. The applicability of the present
framework to cases with noisy measurements and limited observations is also
assessed. To demonstrate the capabilities of this framework, we consider
canonical regression problems of fluid dynamics from the viewpoint of
reduced-order modeling and spatial data recovery for four canonical data sets.
The examples considered in this study arise from (1) the shallow water
equations, (2) a two-dimensional cylinder flow, (3) the wake of NACA0012
airfoil with a Gurney flap, and (4) the NOAA sea surface temperature data set.
The present results indicate that the probabilistic neural network not only
produces a machine-learning-based fluid flow surrogate model but also
systematically quantifies the uncertainty therein to assist with model
interpretability.
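A minimal sketch of the objective implied by the framework above, assuming
the network emits a mean and a log-variance per target: training minimizes
the Gaussian negative log-likelihood, and the learned variance is what yields
the prediction confidence intervals.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Per-sample 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], up to a constant
    return np.mean(0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)))

y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.8, 3.2])          # network-predicted means
log_var = np.array([-2.0, -2.0, -2.0])  # network-predicted log-variances
print(gaussian_nll(y, mu, log_var))
```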
|
We address the problem of recovering a sparse signal observed by a resource
constrained wireless sensor network under channel fading. Sparse random
matrices are exploited to reduce the communication cost in forwarding
information to a fusion center. The presence of channel fading leads to
inhomogeneity and non-Gaussian statistics in the effective measurement matrix
that relates the measurements collected at the fusion center and the sparse
signal being observed. We analyze the impact of channel fading on nonuniform
recovery of a given sparse signal by leveraging the properties of heavy-tailed
random matrices. We quantify the additional number of measurements required to
ensure reliable signal recovery in the presence of nonidentical fading channels
compared to what is required with identical Gaussian channels. Our analysis
provides insights into how to control the probability of sensor transmissions
at each node based on the channel fading statistics in order to minimize the
number of measurements collected at the fusion center for reliable sparse
signal recovery. We further discuss recovery guarantees of a given sparse
signal with any random projection matrix where the elements are sub-exponential
with a given sub-exponential norm. Numerical results are provided to
corroborate the theoretical findings.
|
Transformers have revolutionized deep learning and generative modeling to
enable unprecedented advancements in natural language processing tasks and
beyond. However, designing hardware accelerators for executing transformer
models is challenging due to the wide variety of computing kernels involved in
the transformer architecture. Existing accelerators are either inadequate to
accelerate end-to-end transformer models or suffer notable thermal limitations.
In this paper, we propose the design of a three-dimensional heterogeneous
architecture referred to as HeTraX specifically optimized to accelerate
end-to-end transformer models. HeTraX employs hardware resources aligned with
the computational kernels of transformers and optimizes both performance and
energy. Experimental results show that HeTraX outperforms existing
state-of-the-art by up to 5.6x in speedup and improves EDP by 14.5x while
ensuring thermal feasibility.
|
It has been suggested that the boxy and peanut-shaped bulges found in some
edge-on galaxies are galactic bars viewed from the side. We investigate this
hypothesis by presenting emission-line spectra for a sample of 10 edge-on
galaxies that display a variety of bulge morphologies. To avoid potential
biases in the classification of this morphology, we use an objective measure of
bulge shape. Generally, bulges classified as more boxy show the more
complicated kinematics characteristic of edge-on bars, confirming the intimate
relation between the two phenomena.
|
We calculate Gamma-Ray Burst afterglow light-curves from a relativistic jet
of initial opening angle theta_0, as seen by observers at a wide range of
viewing angles, theta_obs, from the jet axis. We describe three increasingly
more realistic models and compare the resulting light-curves. An observer at
theta_obs < theta_0 should see a light curve very similar to that for an
on-axis observer. An observer at theta_obs > theta_0 should see a rising light
curve at early times, the flux peaking when the jet Lorentz factor is ~
1/theta_obs. After this time the flux is not very different from that seen by
an on-axis observer. A strong linear polarization (<40%) may occur near the
peak in the light curve, and slowly decay with time. We show that if GRB jets
have a universal energy, then orphan afterglows associated with off-axis jets
should be seen up to a constant theta_obs, therefore the detection rate of
orphan afterglows would be proportional to the true GRB rate. We also discuss
the proposed connection between supernova 1998bw and GRB 980425.
|
We are carrying out a program of optical spectroscopy of the complete
subsample of the 3CR catalog of radio sources at redshift z < 0.3. The sample
consists of 113 3CR sources, comprising FR I, FR II radio galaxies and Quasars.
Complete datasets in other bands are already or will be soon available for the
whole sample but the optical spectra are sparse and inhomogeneous in quality.
The observations are carried out at the 3.58m Telescopio Nazionale Galileo
(TNG, La Palma). More than 100 sources have already been observed. We present
here the preliminary results on the analysis of the high and low resolution
spectra. We found that sources can be spectroscopically characterized as: High
Excitation Galaxies (HEG), Low Excitation Galaxies (LEG) and "Relic" AGNs. This
classification is supported by the optical-radio correlations, in which
spectroscopically different objects follow different correlations. We conclude
that AGNs with the same radio power can be fueled with different accretion
properties. "Relic" radio-galaxies are characterized by extreme low excitation
spectra that we interpret as nuclei whose activity has recently turned-off. The
full spectral catalog will be made available to the scientific community.
|
This paper considers a broadly biologically relevant question of a chain
(such as a protein) binding to a sequence of receptors with matching multiple
ligands distributed along the chain. This binding is critical in cell adhesion
events, and in protein self-assembly. Using a mean field approximation of
polymer dynamics, we first calculate the characteristic binding time for a
tethered ligand reaching for a specific binding site on the surface. This time
is determined by two separate entropic effects: an entropic barrier for the
chain to be stretched sufficiently to reach the distant target, and a
restriction on chain conformations near the surface. We then derive the
characteristic time for a sequence of single binding events, and find that it
is determined by the `zipper effect', optimizing the sequence of single and
multiple binding steps.
|
A Brain-Computer Interface (BCI) is a system empowering humans to communicate
with or control the outside world through brain intentions alone.
Electroencephalography (EEG) based BCIs are promising solutions due to their
convenient and portable instruments. Motor imagery EEG (MI-EEG) is among the
most widely studied EEG signals; it reveals a subject's movement intentions
without actual actions. Despite the extensive research on MI-EEG in recent
years, it is still challenging to interpret EEG signals effectively due to
the heavy noise in EEG signals (e.g., low signal-to-noise ratio and
incomplete EEG signals) and the difficulty of capturing the inconspicuous
relationships between EEG signals and certain brain activities. Most existing
works either consider EEG only as chain-like sequences, neglecting complex
dependencies between adjacent signals, or perform simple temporal averaging
over EEG sequences. In this paper, we introduce both cascade and parallel
convolutional recurrent neural network models for precisely identifying human
intended movements by effectively learning compositional spatio-temporal
representations of raw EEG streams. The proposed models grasp the spatial
correlations between physically neighboring EEG signals by converting the
chain-like EEG sequences into a 2D mesh-like hierarchy. An LSTM-based
recurrent network extracts the subtle temporal dependencies of EEG data
streams. Extensive experiments on a
large-scale MI-EEG dataset (108 subjects, 3,145,160 EEG records) have
demonstrated that both models achieve high accuracy near 98.3% and outperform a
set of baseline methods and most recent deep learning based EEG recognition
models, yielding a significant accuracy increase of 18% in the cross-subject
validation scenario.
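A schematic of the cascade variant in PyTorch, with illustrative sizes (the
electrode-mesh dimensions and layer widths are assumptions, not the paper's
exact architecture): per-time-step 2D convolutions capture spatial
correlations over the electrode mesh, and an LSTM extracts the temporal
dependencies.

```python
import torch
import torch.nn as nn

class CascadeCRNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(               # spatial features per step
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, time, H, W)
        b, t, h, w = x.shape
        f = self.conv(x.reshape(b * t, 1, h, w)).reshape(b, t, 32)
        out, _ = self.lstm(f)                    # temporal dependencies
        return self.head(out[:, -1])             # classify from last state

model = CascadeCRNN()
print(model(torch.randn(2, 20, 10, 11)).shape)   # torch.Size([2, 5])
```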
|
We study a mathematical relationship between holographic Wilsonian
renormalization group and stochastic quantization framework. We extend the
original proposal given in arXiv:1209.2242 to interacting theories. The
original proposal suggests that the fictitious-time (or stochastic-time) evolution
of stochastic 2-point correlation function will be identical to the radial
evolution of the double trace operator of certain classes of holographic
models, which are free theories in AdS space. We study holographic gravity
models with interactions in AdS space and establish a map between the
holographic renormalization flow of multi-trace operators and stochastic
$n$-point functions. To give precise examples, we extensively study conformally
coupled scalar theory in AdS$_6$. What we have found is that the stochastic
time $t$ dependent 3-point function obtained from Langevin equation with its
Euclidean action being given by $S_E=2I_{os}$ is identical to holographic
renormalization group evolution of holographic triple trace operator as its
energy scale $r$ changes once an identification of $t=r$ is made. $I_{os}$ is
the on-shell action of holographic model of conformally coupled scalar theory
at the AdS boundary. We argue that this can be fully extended to mathematical
relationship between multi point functions and multi trace operators in each
framework.
|
In this paper, we have studied the holographic subregion complexity for
a boosted black brane for a strip-like subsystem. The holographic subregion
complexity has been computed for a subsystem chosen along and perpendicular to
the boost direction. We have observed that there is an asymmetry in the result
due to the boost parameter which can be attributed to the asymmetry in the
holographic entanglement entropy. The Fisher information metric and the
fidelity susceptibility have also been computed using bulk dual prescriptions.
It is observed that the two metrics computed holographically are not related
for both the pure black brane as well as the boosted black brane. This is one
of the main findings in this paper and the holographic results have been
compared with the results available in the quantum information literature where
it is known that the two distances are related to each other in general.
|
Most existing dehazing algorithms often use hand-crafted features or
Convolutional Neural Networks (CNN)-based methods to generate clear images
using pixel-level Mean Square Error (MSE) loss. The generated images generally
have better visual appeal, but not always have better performance for
high-level vision tasks, e.g. image classification. In this paper, we
investigate a new point of view in addressing this problem. Instead of focusing
only on achieving good quantitative performance on pixel-based metrics such as
Peak Signal to Noise Ratio (PSNR), we also ensure that the dehazed image itself
does not degrade the performance of the high-level vision tasks such as image
classification. To this end, we present an unified CNN architecture that
includes three parts: a dehazing sub-network (DNet), a classification-driven
Conditional Generative Adversarial Networks sub-network (CCGAN) and a
classification sub-network (CNet) related to image classification, which has
better performance both on visual appeal and image classification. We conduct
comprehensive experiments on two challenging benchmark datasets for
fine-grained and object classification: CUB-200-2011 and Caltech-256.
Experimental results demonstrate that the proposed method outperforms many
recent state-of-the-art single image dehazing methods in terms of image
dehazing metrics and classification accuracy.
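Schematically, the three sub-networks suggest a joint objective of the
following form (the weights and exact terms are illustrative assumptions, not
the paper's tuned losses):

```python
import torch
import torch.nn.functional as F

def joint_loss(dehazed, clear, disc_score, logits, labels,
               w_adv=0.01, w_cls=0.5):
    l_pix = F.mse_loss(dehazed, clear)             # DNet: pixel fidelity
    l_adv = F.binary_cross_entropy_with_logits(    # CCGAN: generator term
        disc_score, torch.ones_like(disc_score))
    l_cls = F.cross_entropy(logits, labels)        # CNet: recognizability
    return l_pix + w_adv * l_adv + w_cls * l_cls

dehazed, clear = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
disc_score, logits = torch.randn(4, 1), torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(joint_loss(dehazed, clear, disc_score, logits, labels))
```

The classification term is what pushes the dehazer toward outputs that remain
recognizable to the classifier, rather than outputs that are only
pixel-accurate.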
|
We discuss the final stages of double ionization of atoms in a strong
linearly polarized laser field within a classical model. We propose that all
trajectories leading to non-sequential double ionization pass close to a saddle
in phase space which we identify and characterize. The saddle lies in a two
degree of freedom subspace of symmetrically escaping electrons. The
distribution of longitudinal momenta of ions as calculated within the subspace
shows the double hump structure observed in experiments. Including a symmetric
bending mode of the electrons allows us to reproduce the transverse ion
momenta. We also discuss a path to sequential ionization and show that it does
not lead to the observed momentum distributions.
|
The electron and positron accelerator complex at KEK offers unique
experimental opportunities in the fields of elementary particle physics with
SuperKEKB collider and photon science with two light sources. In order to
maximize the experimental performances at those facilities the injector LINAC
employs pulse-to-pulse modulation at 50 Hz, injecting beams with diverse
properties. The event-based control system effectively manages different beam
configurations. This injection scheme was initially designed 15 years ago and
has been in full operation since 2019. Over the years, quite a few enhancements
have been implemented. As the event-based controls are tightly coupled with
microwave systems, machine protection systems and so on, their modifications
require meticulous planning. However, the diverse requirements from particle
physics and photon science, stemming from the distinct nature of those
experiments, often necessitate patient negotiation to meet the demands of both
fields. This presentation discusses those operational aspects of the
multidisciplinary facility.
|
The Hasegawa-Wakatani models are used in the study of confinement of hot
plasmas with externally imposed magnetic fields. The nonlinear terms in the
Hasegawa-Wakatani models complicate the analysis of the system as they
propagate local changes across the entire system. Centre manifold analysis
allows us to project down onto much smaller systems that are more easily
analysed. Qualitative information about the behaviour of the reduced system,
such as whether it is stable or unstable, can be used to predict the behaviour
of the original full system. We show how the simple structure of the linear
part of the Hasegawa-Wakatani equations can be used to define these projection
operators. The centre manifold analysis will be used on a few examples to
highlight certain properties of the Hasegawa-Wakatani models.
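As a generic illustration of such a projection (the Hasegawa-Wakatani
operators themselves are not reproduced here), one can split the state space
by the spectrum of the linear part $L$ and project onto the centre subspace,
where eigenvalues have vanishing real part:

```python
import numpy as np

L = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, -2.0]])   # centre pair +/- i, one stable mode
vals, vecs = np.linalg.eig(L)
centre = np.abs(vals.real) < 1e-9  # select eigenvalues with Re(lambda) ~ 0

# Projection onto the centre eigenspace, built from the eigenbasis
P = (vecs[:, centre] @ np.linalg.pinv(vecs)[centre, :]).real
print(P @ np.array([1.0, 2.0, 3.0]))  # [1, 2, 0]: the centre component
```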
|