In the present paper we refute the criticism advanced in a recent preprint by
Figueiredo et al. [1] about the possible application of the $q$-generalized
Central Limit Theorem (CLT) to a paradigmatic long-range-interacting many-body
classical Hamiltonian system, the so-called Hamiltonian Mean Field (HMF) model.
We show that, contrary to what is claimed by these authors and in accordance
with our previous results, $q$-Gaussian-like curves are possible and real
attractors for a certain class of initial conditions, namely the one that
produces nontrivial long-standing quasi-stationary states before thermal
equilibrium is reached (which happens only for finite system size).
|
We introduce and study generalized Umemura polynomials
$U_{n,m}^{(k)}(z,w;a,b)$ which are the natural generalization of the Umemura
polynomials $U_n(z,w;a,b)$ related to the Painleve VI equation. We show that if
either a=b, or a=0, or b=0, then polynomials $U_{n,m}^{(0)}(z,w;a,b)$ generate
solutions to the Painleve VI equation. We give a new proof of the
Noumi-Okada-Okamoto-Umemura conjecture, and describe connections between
polynomials $U_{n,m}^{(0)}(z,w;a,0)$ and certain Umemura polynomials
$U_k(z,w;\alpha,\beta)$. Finally we show that after appropriate rescaling,
Umemura's polynomials $U_k(z,w;a,b)$ satisfy the Hirota-Miwa bilinear
equations.
|
We calculate the so-called Fermi motion parameter $p_F$ of the ACCMM model
using the variational method in a potential model approach. We also propose
hadronic invariant mass distribution as an alternative experimental observable
to measure $V_{ub}$ at future asymmetric $B$ factories.
|
We relate the star formation from cold baryons in virialized structures to
the X-ray properties of the associated diffuse, hot baryonic component. Our
computations use the standard ``semi-analytic'' models to describe i) the
evolution of dark matter halos through merging after the hierarchical
clustering, ii) the star formation governed by radiative cooling and by
supernova feedback, iii) the hydro- and thermodynamics of the hot gas, rendered
with our Punctuated Equilibria model. We thus relate the X-ray observables
concerning the intra-cluster medium to the thermal energy of the gas pre-heated
and expelled by supernovae following star formation, and then accreted during
the subsequent merging events. We show that at fluxes fainter than $F_X\approx
10^{-15}$ erg/cm$^2$/s (well within the reach of next-generation X-ray
observatories) the X-ray counts of extended extragalactic sources (as well as
the faint end of the luminosity function, the contribution to the soft X-ray
background, and the $L_X-T$ correlation at the group scales) increase
considerably when the star formation rate is enhanced for z>1 as indicated by
growing optical/infrared evidence. Specifically, the counts in the range 0.5-2
keV are increased by factors $\sim 4$ when the feedback is decreased and
star formation is enhanced so as to yield a flat shape of the star formation rate
for 2<z<4.
|
We present an optimized secure multi-antenna transmission approach based on
artificial-noise-aided beamforming, with limited feedback from a desired
single-antenna receiver. To deal with beamformer quantization errors as well as
unknown eavesdropper channel characteristics, our approach is aimed at
maximizing throughput under dual performance constraints - a connection outage
constraint on the desired communication channel and a secrecy outage constraint
to guard against eavesdropping. We propose an adaptive transmission strategy
that judiciously selects the wiretap coding parameters, as well as the power
allocation between the artificial noise and the information signal. This
optimized solution reveals several important differences with respect to
solutions designed previously under the assumption of perfect feedback. We also
investigate the problem of how to most efficiently utilize the feedback bits.
The simulation results indicate that a good design strategy is to use
approximately 20% of these bits to quantize the channel gain information, with
the remainder to quantize the channel direction, and this allocation is largely
insensitive to the secrecy outage constraint imposed. In addition, we find that
8 feedback bits per transmit antenna is sufficient to achieve approximately 90%
of the throughput attainable with perfect feedback.
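A minimal numerical sketch (Python) of the feedback split described above; the random-codebook direction quantizer, the uniform scalar gain quantizer, and the specific sizes are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_direction(h, n_bits):
    """Random vector quantization of the channel direction:
    return the unit codeword with the largest correlation."""
    codebook = rng.standard_normal((2 ** n_bits, h.size)) \
        + 1j * rng.standard_normal((2 ** n_bits, h.size))
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
    d = h / np.linalg.norm(h)
    return codebook[np.argmax(np.abs(codebook @ d.conj()))]

def quantize_gain(g, n_bits, g_max=4.0):
    """Uniform scalar quantization of the channel gain on [0, g_max]."""
    levels = np.linspace(0.0, g_max, 2 ** n_bits)
    return levels[np.argmin(np.abs(levels - g))]

nt = 4
total_bits = 8 * nt                          # 8 feedback bits per transmit antenna
gain_bits = round(0.2 * total_bits)          # ~20% of the bits for the gain
dir_bits = min(total_bits - gain_bits, 12)   # cap codebook size for this demo

h = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
d_hat = quantize_direction(h, dir_bits)
g_hat = quantize_gain(np.linalg.norm(h), gain_bits)
print(g_hat, np.abs(d_hat @ (h / np.linalg.norm(h)).conj()))
```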
|
Movies provide us with a wealth of visual content as well as attractive
stories. Existing methods have shown that understanding movie stories
through only visual content is still a hard problem. In this paper, for
answering questions about movies, we put forward a Layered Memory Network (LMN)
that represents frame-level and clip-level movie content by the Static Word
Memory module and the Dynamic Subtitle Memory module, respectively.
Particularly, we firstly extract words and sentences from the training movie
subtitles. Then the hierarchically formed movie representations, which are
learned from LMN, not only encode the correspondence between words and visual
content inside frames, but also encode the temporal alignment between sentences
and frames inside movie clips. We also extend our LMN model into three variant
frameworks to illustrate its good extensibility. We conduct extensive
experiments on the MovieQA dataset. With only visual content as inputs, LMN
with frame-level representation obtains a large performance improvement. When
incorporating subtitles into LMN to form the clip-level representation, we
achieve the state-of-the-art performance on the online evaluation task of
'Video+Subtitles'. This strong performance demonstrates that the proposed LMN
framework is effective and that the hierarchically formed movie representations
have good potential for movie question answering applications.
|
The Steiner Multicut problem asks, given an undirected graph G, terminal
sets T1,...,Tt $\subseteq$ V(G) of size at most p, and an integer k, whether
there is a set S of at most k edges or nodes such that, for each set Ti, at
least one pair of terminals lies in different connected components of G \ S.
This problem
generalizes several graph cut problems, in particular the Multicut problem (the
case p = 2), which is fixed-parameter tractable for the parameter k [Marx and
Razgon, Bousquet et al., STOC 2011].
We provide a dichotomy of the parameterized complexity of Steiner Multicut.
That is, for any combination of k, t, p, and the treewidth tw(G) as constant,
parameter, or unbounded, and for all versions of the problem (edge deletion and
node deletion with and without deletable terminals), we prove either that the
problem is fixed-parameter tractable or that the problem is hard (W[1]-hard or
even (para-)NP-complete). We highlight that:
- The edge deletion version of Steiner Multicut is fixed-parameter tractable
for the parameter k+t on general graphs (but has no polynomial kernel, even on
trees). We present two proofs: one using the randomized contractions technique
of Chitnis et al, and one relying on new structural lemmas that decompose the
Steiner cut into important separators and minimal s-t cuts.
- In contrast, both node deletion versions of Steiner Multicut are W[1]-hard
for the parameter k+t on general graphs.
- All versions of Steiner Multicut are W[1]-hard for the parameter k, even
when p=3 and the graph is a tree plus one node. Hence, the results of Marx and
Razgon, and Bousquet et al. do not generalize to Steiner Multicut.
Since we allow k, t, p, and tw(G) to be any constants, our characterization
includes a dichotomy for Steiner Multicut on trees (for tw(G) = 1), and a
polynomial time versus NP-hardness dichotomy (by restricting k,t,p,tw(G) to
constant or unbounded).
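As a concrete reading of the definition, here is a small self-contained sketch (Python, union-find, no external libraries; the toy instance is an assumption) that checks whether a candidate edge set S is a Steiner multicut:

```python
def is_steiner_multicut(n, edges, terminal_sets, S):
    """Check whether removing edge set S separates at least one pair
    of terminals inside every terminal set T_i (union-find on G \\ S)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    removed = set(map(frozenset, S))
    for u, v in edges:
        if frozenset((u, v)) not in removed:
            parent[find(u)] = find(v)

    # every T_i must meet at least two components of G \ S
    return all(len({find(t) for t in Ti}) >= 2 for Ti in terminal_sets)

# toy instance: path 0-1-2-3, T1 = {0, 3}, T2 = {1, 2}
print(is_steiner_multicut(4, [(0, 1), (1, 2), (2, 3)],
                          [{0, 3}, {1, 2}], S=[(1, 2)]))  # True
```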
|
We present an operational component of a real-world patient triage system.
Given a specific patient presentation, the system is able to assess the level
of medical urgency and issue the most appropriate recommendation in terms of
best point of care and time to treat. We use an attention-based convolutional
neural network architecture trained on 600,000 doctor notes in German. We
compare two approaches, one that uses the full text of the medical notes and
one that uses only a selected list of medical entities extracted from the text.
These approaches achieve 79% and 66% precision, respectively; at a
confidence threshold of 0.6, precision increases to 85% and 75%, respectively.
In addition, a method to detect warning symptoms is implemented to render the
classification task transparent from a medical perspective. The method is based
on learned attention scores and an automatic validation procedure using the
same data.
|
Single Event Effects (SEEs) - predominantly bit-flips in electronics caused
by particle interactions - are a major concern for ASICs operated in high
radiation environments such as ABCStar ASICs, which are designed to be used in
the future ATLAS ITk strip tracker. The chip design is therefore optimised to
protect it from SEEs by implementing triplication techniques such as Triple
Modular Redundancy (TMR). In order to verify the radiation protection
mechanisms of the chip design, the cross-section for Single Event Upsets
(SEUs), a particular class of SEEs, is measured by exposing the chip to
high-intensity particle beams while monitoring it for observed SEUs. This study
presents the setup, the performed measurements, and the results of SEU tests
performed on the latest version of the ABCStar ASIC (ABCStar V1) with a 480
MeV proton beam.
|
The beam normal spin asymmetry for the elastic $eN$ scattering is studied in
the leading logarithm approximation. We derive the expression for the
asymmetry, which is valid for any scattering angles. The result is compared
with the results of other authors, obtained for the forward kinematics. We also
calculate the numerical values of the asymmetry at intermediate energy and show
that they are consistent with existing experimental data.
|
The experimentally observed disappearance below T = 0.5K of the second sound
in liquid He II as a separate wave mode and its subsequent propagation at the
speed of the first sound (Peshkov [3]) may be interpreted as a resonant mode
conversion of the second sound to the first sound. Near the resonant mode
coupling point T = T*, where the frequencies of the two waves become equal, the
anomalous effects of entropy changes on the first sound and of density changes
on the second sound, though generally small, become significant. This leads to
resonant mode coupling of the first and second sounds, forces them to lose
their identities, and hence paves the way for the resonant mode
conversion of the second sound to the first sound. We give a theoretical
framework for this proposition and an estimate for the fraction of the second
sound that is mode-converted to the first sound.
|
We study the dynamical evolution of globular clusters using our 2D Monte
Carlo code with the inclusion of primordial binary interactions for equal-mass
stars. We use approximate analytical cross sections for energy generation from
binary-binary and binary-single interactions. After a brief period of slight
contraction or expansion of the core over the first few relaxation times, all
clusters enter a much longer phase of stable "binary burning" lasting many tens
of relaxation times. The structural parameters of our models during this phase
match well those of most observed globular clusters. At the end of this phase,
clusters that have survived tidal disruption undergo deep core collapse,
followed by gravothermal oscillations. Our results clearly show that the
presence of even a small fraction of binaries in a cluster is sufficient to
support the core against collapse significantly beyond the normal core collapse
time predicted without the presence of binaries. For tidally truncated systems,
collapse is easily delayed sufficiently that the cluster will undergo complete
tidal disruption before core collapse. As a first step toward the eventual goal
of computing all interactions exactly using dynamical three- and four-body
integration, we have incorporated an exact treatment of binary-single
interactions in our code. We show that results using analytical cross sections
are in good agreement with those using exact three-body integration, even for
small binary fractions where binary-single interactions are energetically most
important.
|
Image super-resolution is a process to enhance image resolution. It is widely
used in medical imaging, satellite imaging, target recognition, etc. In this
paper, we conduct continuous modeling and assume that the unknown image
intensity function is defined on a continuous domain and belongs to a space
with a redundant basis. We propose a new iterative model for single image
super-resolution based on an observation: an image consists of smooth
components and non-smooth components, and we use two classes of approximated
Heaviside functions (AHFs) to represent them, respectively. Due to the
sparsity of the non-smooth components, an $L_1$ model is employed. In
addition, we apply
the proposed iterative model to image patches to reduce computation and
storage. Comparisons with some existing competitive methods show the
effectiveness of the proposed method.
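For illustration, a sketch of one common arctan-type approximated Heaviside function and the two-scale decomposition idea; the paper's exact AHF classes and basis may differ:

```python
import numpy as np

def ahf(x, delta):
    """Approximated Heaviside function (a common arctan smoothing; an
    assumption, not necessarily the paper's form): -> 0 for x << 0, -> 1 for x >> 0."""
    return 0.5 + np.arctan(x / delta) / np.pi

# A 1D signal as a sparse combination of shifted AHFs:
x = np.linspace(0, 1, 200)
smooth = 0.6 * ahf(x - 0.3, delta=0.2)    # large delta: smooth component
edge = 0.4 * ahf(x - 0.7, delta=0.01)     # small delta: near-discontinuity
signal = smooth + edge
```

In this picture, the $L_1$ penalty would act on the coefficients of the small-delta (non-smooth) class, which is assumed sparse.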
|
Accurate cross section data for electron impact ionization (EII) are needed
in order to interpret the spectra of collisionally ionized plasmas both in
astrophysics and in the laboratory. Models and spectroscopic diagnostics of
such plasmas rely on accurate ionization balance calculations, which depend, in
turn, on the underlying rates for EII and electron-ion recombination. EII
measurements have been carried out using the TSR storage ring located at the
Max-Planck-Institut fuer Kernphysik in Heidelberg, Germany. Storage ring
measurements are largely free of metastable contamination, resulting in
unambiguous EII data, unlike what is encountered with other experimental
geometries. As it is impractical to perform experiments for every ion, theory
must provide the bulk of the necessary EII data. In order to guide theory, TSR
experiments have focused on providing at least one measurement for every
isoelectronic sequence. EII data have been measured for ions from 13
isoelectronic sequences: Li-like silicon and chlorine, Be-like sulfur, B-like
magnesium, and F-like through K-like iron. These experimental results provide
an important benchmark for EII theory.
|
Quark-gluon plasma during its initial phase after its production in heavy-ion
collisions is expected to have substantial pressure anisotropies. In order to
model this situation by a strongly coupled N=4 super-Yang-Mills plasma with
fixed anisotropy by means of AdS/CFT duality, two models have been discussed in
the literature. Janik and Witaszczyk have considered a geometry involving a
comparatively benign naked singularity, while more recently Mateos and
Trancanelli have used a regular geometry involving a nontrivial axion field
dual to a parity-odd deformation of the gauge theory by a spatially varying
theta parameter. We study the (rather different) implications of these two
models for the heavy-quark potential as well as jet quenching, and compare their
respective predictions with those of weakly coupled anisotropic plasmas.
|
In this paper, we show that strong embeddability has the fibering permanence
property and is preserved under direct limits of metric spaces.
Moreover, we show the following result: let $G$ be a finitely generated group
with a coarse quasi-action on a metric space $X$. If $X$ has finite asymptotic
dimension and the quasi-stabilizers are strongly embeddable, then $G$ is also
strongly embeddable.
|
When executing a certain task, human beings can choose or make an appropriate
tool to achieve the task. This research especially addresses the optimization
of tool shape for robotic tool-use. We propose a method in which a robot
obtains an optimized tool shape, tool trajectory, or both, depending on a given
task. The key feature of our method is that the transition of the task state,
as the robot moves a certain tool along a certain trajectory, is represented
by a deep neural network. We applied this method to object manipulation tasks
on a 2D
plane, and verified that appropriate tool shapes are generated by using this
novel method.
|
We point out that violation of Lorentz invariance affects the interaction of
high-energy photons with the Earth's atmosphere and magnetic field. In a
certain parameter region this interaction becomes suppressed and the photons escape
observation passing through the atmosphere without producing air showers. We
argue that a detection of photon-induced air showers with energies above 10^19
eV, implying the absence of suppression as well as the absence of photon decay,
will put tight double-sided limits on Lorentz violation in the sector of
quantum electrodynamics. These constraints will be by several orders of
magnitude stronger than the existing ones and will be robust against any
assumptions about the astrophysical origin of the detected photons.
|
We introduce a systematic mathematical language for describing fixed point
models and apply it to the study of topological phases of matter. The framework
is reminiscent of state-sum models and lattice topological quantum field
theories, but is formalised and unified in terms of tensor networks. In
contrast to existing tensor network ansatzes for the study of ground states of
topologically ordered phases, the tensor networks in our formalism represent
discrete path integrals in Euclidean space-time. This language is more directly
related to the Hamiltonian defining the model than other approaches, via a
Trotterization of the respective imaginary time evolution. We introduce our
formalism by simple examples, and demonstrate its full power by expressing
known families of models in 2+1 dimensions in their most general form, namely
string-net models and Kitaev quantum doubles based on weak Hopf algebras. To
elucidate the versatility of our formalism, we also show how fermionic phases
of matter can be described and provide a framework for topological fixed point
models in 3+1 dimensions.
|
Deep neural networks (DNNs) have gained popularity in various scenarios in
recent years. However, their excellent ability to fit complex functions also
makes them vulnerable to backdoor attacks. Specifically, a backdoor can remain
hidden indefinitely until activated by a sample with a specific trigger,
making it highly concealed. Nevertheless, existing backdoor attacks implant
backdoors in the spatial domain, i.e., the poisoned images are generated by
adding additional
perturbations to the original images, which are easy to detect. To bring the
potential of backdoor attacks into full play, we propose low-pass attack, a
novel attack scheme that utilizes low-pass filter to inject backdoor in
frequency domain. Unlike traditional poisoned image generation methods, our
approach reduces high-frequency components and preserves the original images'
semantic information instead of adding additional perturbations, improving the
capability of evading current defenses. Besides, we introduce "precision mode"
to make our backdoor triggered at a specified level of filtering, which further
improves stealthiness. We evaluate our low-pass attack on four datasets and
demonstrate that even under pollution rate of 0.01, we can perform stealthy
attack without trading off attack performance. Besides, our backdoor attack can
successfully bypass state-of-the-art defending mechanisms. We also compare our
attack with existing backdoor attacks and show that our poisoned images are
nearly invisible and retain higher image quality.
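A minimal sketch of the frequency-domain poisoning step as described: FFT, keep a low-frequency disk, invert. The circular mask and the `radius` parameter are illustrative assumptions standing in for the paper's filtering level:

```python
import numpy as np

def low_pass_poison(img, radius):
    """Generate a poisoned image by removing high-frequency components:
    keep a centered disk of the given radius in the shifted 2D spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    if f.ndim == 3:
        mask = mask[..., None]          # broadcast over color channels
    out = np.fft.ifft2(np.fft.ifftshift(f * mask, axes=(0, 1)), axes=(0, 1)).real
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)  # stand-in image
poisoned = low_pass_poison(img, radius=8)
```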
|
Kalb-Ramond equations for massive and massless particles are considered in
the framework of the Petiau-Duffin-Kemmer formalism. We obtain $10\times10$
matrices of the relativistic wave equation of the first-order and solutions in
the form of density matrix. The canonical and Belinfante energy-momentum
tensors are found. We investigate the scale invariance and obtain the conserved
dilatation current. We demonstrate that the conformal symmetry is broken
even for massless fields.
|
We present a theory for the dynamics of a binary mixture with particle size
swaps. The theory is based on a factorization approximation similar to that
employed in the mode-coupling theory of glassy dynamics. The theory shows that,
in accordance with physical intuition, particle size swaps open up an
additional channel for the relaxation of density fluctuations. Thus, allowing
swaps speeds up the dynamics and moves the dynamic glass transition towards
higher densities and/or lower temperatures. We calculate an approximate dynamic
glass transition phase diagram for an equimolar binary hard sphere mixture. We
find that in the presence of particle size swaps, with increasing ratio of the
hard sphere diameters the dynamic glass transition line moves towards higher
volume fractions, up to the ratio of the diameters approximately equal to 1.2,
and then saturates. We comment on the implications of our findings for the
theoretical description of the glass transition.
|
The cooling rate of young neutron stars gives direct insight into their
internal makeup. Although the temperatures of several young neutron stars have
been measured, until now a young neutron star has never been observed to
decrease in temperature over time. We fit 9 years of archival Chandra ACIS
spectra of the likely neutron star in the ~330-year-old Cassiopeia A supernova
remnant with our non-magnetic carbon atmosphere model. Our fits show a 4%
relative decline in the surface temperature (5.4 sigma, from 2.12+-0.01*10^6 K
in 2000 to 2.04+-0.01*10^6 K in 2009) and a 21% decline in the observed flux.
model for neutron star cooling, we show that this temperature decline could
indicate that the neutron star became isothermal sometime between 1965 and
1980, and constrains some combinations of neutrino emission mechanisms and
envelope compositions. However, the neutron star is likely to have become
isothermal soon after formation, in which case the temperature history suggests
episodes of additional heating or more rapid cooling. Observations over the
next few years will allow us to test possible explanations for the temperature
evolution.
|
Structural imperfections such as grain boundaries (GBs) and dislocations are
ubiquitous in solids and have been of central importance in understanding the
nature of polycrystals. In addition to their classical roles, the advent of
topological insulators (TIs) offers a chance to realize distinct topological
states bound to them. Although dislocations inside three-dimensional TIs are
among the prime candidates to look for, their direct detection and
characterization are challenging. In two-dimensional (2D) TIs, by contrast,
their creation and measurement are easier and, moreover, topological states at
GBs or dislocations are intimately connected to the lattice symmetry. However,
such roles of the crystalline symmetries of GBs in 2D TIs have not been clearly
measured yet.
Here, we present the first direct evidence of a symmetry-enforced Dirac-type
metallic state along a GB in 1T'-MoTe$_2$, a prototypical 2D TI. Using a
scanning tunneling microscope, we show a metallic state along a grain boundary
with non-symmorphic lattice symmetry and its absence along another boundary
with a symmorphic one. Our large-scale atomistic simulations demonstrate
hourglass-like nodal-line semimetallic in-gap states for the former and a gap
opening for the latter, explaining our observations very well. The protected
metallic state demonstrated here, tightly linked to the crystal symmetry, can
be used to create stable metallic nanowires inside an insulator.
|
Link prediction in collaboration networks is often solved by identifying
structural properties of existing nodes that are disconnected at one point in
time, and that share a link later on. The maximum possible recall rate, or
upper bound on this approach's success, is capped by the proportion of links
that are formed among existing nodes embedded in these properties.
Consequently, sustained ties as well as links that involve one or two new
network participants are typically not predicted. The purpose of this study is
to highlight formational constraints that need to be considered to increase the
practical value of link prediction methods for collaboration networks. In this
study, we identify the distribution of basic link formation types based on four
large-scale, over-time collaboration networks, showing that current link
predictors can maximally anticipate around 25% of links that involve at least
one prior network member. This implies that for collaboration networks,
increasing the accuracy of computational link prediction solutions may not be a
reasonable goal when the ratio of collaboration ties that are eligible for the
classic link prediction process is low.
|
We study massive real scalar $\phi^4$ theory in the expanding Poincare patch
of de Sitter space. We calculate the leading two-loop infrared contribution to
the two-point function in this theory. We do that for the massive fields both
from the principal and complementary series. As can be expected at this order,
light fields from the complementary series show stronger infrared effects than
heavy fields from the principal one. For the principal series, unlike the
complementary one, we can derive the kinetic equation from the system of
Dyson-Schwinger equations, which allows us to sum the leading infrared
contributions from all loops. We find two peculiar solutions of the kinetic
equation. One of them describes the stationary Gibbons--Hawking-type
distribution for the density per comoving volume. Another solution shows
explosive growth (a square-root pole at finite proper time) of the
particle number density per comoving volume. That signals the possibility of
the destruction of the expanding Poincare patch even by the very massive
fields. We conclude with the consideration of the infrared divergences in
global de Sitter space and in its contracting Poincare patch.
|
This paper addresses the energy accumulation problem, in terms of the $H_2$
norm, of linearly coupled dynamical networks. An interesting outer-coupling
relationship is constructed, under which the $H_2$ norm of the newly
constructed network with column-input and row-output shaped matrices increases
exponentially fast with the node number $N$: it increases generally much faster
than $2^N$ when $N$ is large while the $H_2$ norm of each node is 1. However,
the $H_2$ norm of the network with a diffusive coupling is equal to $\gamma_2
N$, i.e., increasing linearly, when the network is stable, where $\gamma_2$ is
the $H_2$ norm of a single node. The $H_2$ norm of the network with
antisymmetric coupling also increases with the node number $N$, but rather
slowly. Other networks with block-diagonal-input and block-diagonal-output
matrices behave similarly. This demonstrates that the changes of $H_2$ norms in
different networks are very complicated, despite the fact that the networks are
linear. Finally, the influence of the $H_2$ norm of the locally linearized
network on the output of a network with Lur'e nodes is discussed.
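For readers who want to reproduce such comparisons numerically, a small sketch computing the $H_2$ norm of a stable LTI network from the controllability Gramian; the scalar node dynamics and the path-graph Laplacian coupling are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable LTI system (A, B, C), via the controllability
    Gramian W solving A W + W A^T + B B^T = 0."""
    W = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ W @ C.T))

# gamma_2: H2 norm of one stable scalar node
gamma2 = h2_norm(np.array([[-1.0]]), np.eye(1), np.eye(1))

# Diffusively coupled chain of N such nodes (illustrative coupling)
N = 10
Lap = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # path-graph Laplacian
print(gamma2, h2_norm(-np.eye(N) - Lap, np.eye(N), np.eye(N)))
```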
|
We determine the groups of minimal order in which all groups of order n can be
embedded for 1 < n < 16. We further determine the order of a minimal group in
which all groups of order n or less can be embedded, also for 1 < n < 16.
|
These are notes for the Bootcamp volume for the 2015 AMS Summer Institute in
Algebraic Geometry. They are based on earlier notes for the "Positive
Characteristic Algebraic Geometry Workshop" held at University of Illinois at
Chicago in March 2014.
|
Fuzzing has achieved tremendous success in discovering bugs and
vulnerabilities in various software systems. Systems under test (SUTs) that
take in programming or formal language as inputs, e.g., compilers, runtime
engines, constraint solvers, and software libraries with accessible APIs, are
especially important as they are fundamental building blocks of software
development. However, existing fuzzers for such systems often target a specific
language, and thus cannot be easily applied to other languages or even other
versions of the same language. Moreover, the inputs generated by existing
fuzzers are often limited to specific features of the input language, and thus
can hardly reveal bugs related to other or new features. This paper presents
Fuzz4All, the first fuzzer that is universal in the sense that it can target
many different input languages and many different features of these languages.
The key idea behind Fuzz4All is to leverage large language models (LLMs) as an
input generation and mutation engine, which enables the approach to produce
diverse and realistic inputs for any practically relevant language. To realize
this potential, we present a novel autoprompting technique, which creates LLM
prompts that are well-suited for fuzzing, and a novel LLM-powered fuzzing loop,
which iteratively updates the prompt to create new fuzzing inputs. We evaluate
Fuzz4All on nine systems under test that take in six different languages (C,
C++, Go, SMT2, Java and Python) as inputs. The evaluation shows, across all six
languages, that universal fuzzing achieves higher coverage than existing,
language-specific fuzzers. Furthermore, Fuzz4All has identified 98 bugs in
widely used systems, such as GCC, Clang, Z3, CVC5, OpenJDK, and the Qiskit
quantum computing platform, with 64 bugs already confirmed by developers as
previously unknown.
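A heavily simplified sketch of such an LLM-powered fuzzing loop; `llm_generate`, the `target-compiler` CLI, and the prompt-update strategies are hypothetical placeholders, not Fuzz4All's actual implementation:

```python
import random
import subprocess

def llm_generate(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical placeholder)."""
    return "int main() { return 0; }"  # a real system samples from an LLM

def run_sut(program_text: str) -> bool:
    """Run one input on the system under test; True means a likely bug.
    'target-compiler' is a hypothetical CLI, not a real tool."""
    with open("input.src", "w") as f:
        f.write(program_text)
    result = subprocess.run(["target-compiler", "input.src"], capture_output=True)
    return result.returncode not in (0, 1)   # crashes/signals count as bugs

def fuzzing_loop(initial_prompt: str, iterations: int) -> list[str]:
    """Iteratively query the LLM and fold the last input back into the
    prompt, echoing the paper's generate-then-update loop (simplified)."""
    prompt, bugs = initial_prompt, []
    for _ in range(iterations):
        candidate = llm_generate(prompt)
        if run_sut(candidate):
            bugs.append(candidate)
        strategy = random.choice(["generate-new", "mutate-last", "semantic-variant"])
        prompt = f"{initial_prompt}\n# strategy: {strategy}\n{candidate}"
    return bugs
```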
|
In this paper, we formulate extended stream functions (ESFs) to describe the
dynamics of Bose-Einstein condensates in two-dimensional space. The
ordinary stream function is applicable only for stationary and incompressible
superfluids, whereas the ESFs can describe the dynamics of compressible and
non-stationary superfluids. The ESFs are composed of two stream functions,
i.e., one describes the compressible density modulations and the other the
incompressible rotational superflow. As an application, we study in detail the
snake instability of a dark soliton in a rectangular potential using the ESFs.
|
In 1989, Dicks and Dunwoody proved the Almost Stability Theorem, which has
among its corollaries the Stallings-Swan theorem that groups of cohomological
dimension one are free. In this article, we use a nestedness result of Bergman,
Bowditch, and Dunwoody to simplify somewhat the proof of the finitely generable
case of the Almost Stability Theorem. We also simplify the proof of the non
finitely generable case.
The proof we give here of the Almost Stability Theorem is essentially
self-contained, except that in the non finitely generable case we refer the reader
to the original argument for the proofs of two technical lemmas about groups
acting on trees.
|
The harmonic numbers are the sequence 1, 1+1/2, 1+1/2+1/3, ... Their
asymptotic difference from the sequence of the natural logarithm of the
positive integers is Euler's constant gamma. We define a family of natural
generalizations of the harmonic numbers. The jth iterated harmonic numbers are
a sequence of rational numbers that nests the previous sequences and relates in
a similar way to the sequence of the jth iterate of the natural logarithm of
positive integers. The analogues of several well-known properties of the
harmonic numbers also hold for the iterated harmonic numbers, including a
generalization of Euler's constant. We reproduce the proof that only the first
harmonic number is an integer and, providing some numeric evidence for the
cases j = 2 and j = 3, conjecture that the same result holds for all iterated
harmonic numbers. We also review another proposed generalization of harmonic
numbers.
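A numerical sketch of one natural candidate nesting (an assumption, since the precise definition is given in the paper): level j sums $1/(k\,H^{(1)}_k \cdots H^{(j-1)}_k)$, mirroring $\frac{d}{dx}\ln_j(x) = 1/(x \ln x \cdots \ln_{j-1}x)$, so that $H^{(j)}_n$ tracks the jth iterated logarithm:

```python
import math

def iterated_harmonic(j, n):
    """H^(j)_n under one candidate nesting (see lead-in): level 1 is the
    usual harmonic sum; level j sums 1/(k * H^(1)_k * ... * H^(j-1)_k).
    Floats for speed; the true objects are rationals."""
    denom = [float(k) for k in range(n + 1)]   # running product, starts at k
    H = []
    for _ in range(j):
        s, H = 0.0, [0.0]
        for k in range(1, n + 1):
            s += 1.0 / denom[k]
            H.append(s)
            denom[k] *= s                      # fold this level into the product
    return H[n]

print(iterated_harmonic(1, 1000), math.log(1000))            # ~7.49 vs ~6.91
print(iterated_harmonic(2, 1000), math.log(math.log(1000)))  # ~log log n + const
```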
|
The electron residual energy originating from stochastic heating in
under-dense field-ionized plasma is investigated here. The optical response of
the plasma is initially modeled using the concept of two counter-propagating
electromagnetic waves. The solution of the equation of motion of a single
electron indicates that, when ionization is included, electrons can attain
higher residual energies than in the case without ionization. In agreement
with the chaotic nature of the motion, it is found that the electron residual
energy changes significantly under a minor change to the initial
conditions. Extensive kinetic 1D-3V particle-in-cell (PIC) simulations have
been performed in order to resolve full plasma reactions. In this way, two
different regimes of plasma behavior are observed by varying the pulse length.
The results indicate that, for sufficiently long pulses, the amplitude of the
scattered fields is high enough to act as a second counter-propagating wave
that triggers stochastic electron motion. On the other hand, analysis of the
intensity spectrum reveals that the dominant scattering mechanism shifts from
Raman toward Thomson scattering as the pulse length increases. A covariant
formalism is used to describe the plasma heating, enabling us to measure the
electron temperature inside the pulse region.
|
In this work, a recent theoretically predicted phenomenon of enhanced
permittivity with electromagnetic waves using lossy materials is investigated
for the analogous case of mass density and acoustic waves, which represents
inertial enhancement. Starting from fundamental relationships for the
homogenized quasi-static effective density of a fluid host with fluid
inclusions, theoretical expressions are developed for the conditions on the
real and imaginary parts of the constitutive fluids to have inertial
enhancement, which are verified with numerical simulations. Realizable
structures are designed to demonstrate this phenomenon using multi-scale sonic
crystals, which are fabricated using a 3D printer and tested in an acoustic
impedance tube, yielding good agreement with the theoretical predictions and
demonstrating enhanced inertia.
|
We examine the transport behaviour of non-interacting particles in a simple
channel billiard, at equilibrium and in the presence of an external field. The
channel walls are constructed from straight line segments. We observe a
sensitive dependence of the transport properties on the model parameters, with
regimes ranging from sub-diffusive to super-diffusive. Out of equilibrium, we
find a transition in the transport behaviour between seemingly-chaotic and
(quasi-) periodic behaviour. Our results support the view that normal transport
laws do not need chaos, or quenched disorder, to be realized. Furthermore, they
motivate some new definitions of complexity, which are relevant for transport
phenomena.
|
Many DNN-enabled vision applications, such as those on unmanned aerial
vehicles, Augmented Reality headsets, and smartphones, constantly operate under
severe energy constraints. Designing DNNs that can meet a stringent energy
budget is becoming
increasingly important. This paper proposes ECC, a framework that compresses
DNNs to meet a given energy constraint while minimizing accuracy loss. The key
idea of ECC is to model the DNN energy consumption via a novel bilinear
regression function. The energy estimate model allows us to formulate DNN
compression as a constrained optimization that minimizes the DNN loss function
over the energy constraint. The optimization problem, however, has nontrivial
constraints. Therefore, existing deep learning solvers do not apply directly.
We propose an optimization algorithm that combines the essence of the
Alternating Direction Method of Multipliers (ADMM) framework with
gradient-based learning algorithms. The algorithm decomposes the original
constrained optimization into several subproblems that are solved iteratively
and efficiently. ECC is also portable across different hardware platforms
without requiring hardware knowledge. Experiments show that ECC achieves higher
accuracy under the same or lower energy budget compared to state-of-the-art
resource-constrained DNN compression techniques.
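Since the bilinear energy model is the piece that makes the constrained formulation tractable, here is a small sketch of fitting such a model by least squares on synthetic profiling data; the feature choices (per-layer densities and fixed layer descriptors) are illustrative assumptions, not ECC's exact regressors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical profiling data: for sampled per-layer density settings x and
# fixed layer descriptors y, measure energy on hardware (synthesized here).
L = 6                                   # number of layers
y = rng.uniform(1, 2, L)                # fixed descriptors (e.g., MAC counts)
X = rng.uniform(0.1, 1.0, (200, L))     # sampled density configurations
true_W = np.diag(rng.uniform(0.5, 1.5, L))
energy = np.array([x @ true_W @ y for x in X]) + 0.01 * rng.standard_normal(200)

# The bilinear model E(x) = x^T W y is linear in W: fit vec(W) by least squares.
Phi = np.stack([np.outer(x, y).ravel() for x in X])
w_hat, *_ = np.linalg.lstsq(Phi, energy, rcond=None)
W_hat = w_hat.reshape(L, L)
print(np.abs(X[0] @ W_hat @ y - energy[0]))   # small residual
```

The fitted model then serves as a differentiable energy estimate inside the ADMM-style constrained optimization.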
|
Testing and characterizing the difference between two data samples is of
fundamental interest in statistics. Existing methods such as Kolmogorov-Smirnov
and Cramer-von-Mises tests do not scale well as the dimensionality increases
and provide no easy way to characterize the difference should it exist. In
this work, we propose a theoretical framework for inference that addresses
these challenges in the form of a prior for Bayesian nonparametric analysis.
The new prior is constructed based on a random-partition-and-assignment
procedure similar to the one that defines the standard optional P\'olya tree
distribution, but has the ability to generate multiple random distributions
jointly. These random probability distributions are allowed to "couple", that
is to have the same conditional distribution, on subsets of the sample space.
We show that this "coupling optional P\'olya tree" prior provides a convenient
and effective way for both the testing of two sample difference and the
learning of the underlying structure of the difference. In addition, we discuss
some practical issues in the computational implementation of this prior and
provide several numerical examples to demonstrate how it works.
|
We present a discrete element method (DEM) model to simulate the mechanical
behavior of sea ice in response to ocean waves. The interaction of ocean waves
and sea ice can potentially lead to the fracture and fragmentation of sea ice
depending on the wave amplitude and period. The fracture behavior of sea ice is
explicitly modeled by a DEM method, where sea ice is modeled by densely packed
spherical particles with finite size. These particles are bonded together at
their contact points through mechanical bonds that can sustain both tensile and
compressive forces and moments. Fracturing can be naturally represented by the
sequential breaking of mechanical bonds. For a given amplitude and period of
incident ocean wave, the model provides information for the spatial
distribution and time evolution of stress and micro-fractures and the fragment
size distribution. We demonstrate that the fraction of broken bonds, ,
increases with increasing wave amplitude. In contrast, the ice fragment size l
decreases with increasing amplitude. This information is important for the
understanding of breakup of individual ice floes and floe fragment size.
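A minimal sketch of the bond-breaking ingredient: a tensile-only breakable bond model (the paper's bonds also resist compression and moments, so this is a simplification):

```python
import numpy as np

def update_bonds(pos, bonds, stiffness, r0, tensile_strength):
    """One DEM micro-step for breakable tensile bonds.
    bonds: list of (i, j) particle pairs; a bond breaks when its
    tensile force exceeds the strength (a micro-fracture)."""
    forces = np.zeros_like(pos)
    surviving = []
    for i, j in bonds:
        d = pos[j] - pos[i]
        r = np.linalg.norm(d)
        f = stiffness * (r - r0)        # f > 0 means tension
        if f > tensile_strength:
            continue                    # bond breaks: drop it
        forces[i] += f * d / r          # tension pulls i toward j
        forces[j] -= f * d / r
        surviving.append((i, j))
    broken_fraction = 1 - len(surviving) / max(len(bonds), 1)
    return forces, surviving, broken_fraction
```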
|
The hydrogen-deficiency in extremely hot post-AGB stars of spectral class
PG1159 is probably caused by a (very) late helium-shell flash or an AGB final
thermal pulse that consumes the hydrogen envelope, exposing the usually-hidden
intershell region. Thus, the photospheric element abundances of these stars
allow us to draw conclusions about details of nuclear burning and mixing processes
in the precursor AGB stars. We compare predicted element abundances to those
determined by quantitative spectral analyses performed with advanced non-LTE
model atmospheres. A good qualitative and quantitative agreement is found for
many species (He, C, N, O, Ne, F, Si) but discrepancies for others (P, S, Fe)
point at shortcomings in stellar evolution models for AGB stars.
|
Consumer applications are becoming increasingly smart, and most of them have
to run on device ecosystems. Potential benefits include, for example,
cross-device interaction and seamless user experiences. Machine learning models
are essential for today's high-performance smart solutions. However,
these models are often developed separately by AI engineers for one specific
device and do not consider the challenges and potentials associated with a
device ecosystem in which their models have to run. We believe that there is a
need for tool-support for AI engineers to address the challenges of
implementing, testing, and deploying machine learning models for a next
generation of smart interactive consumer applications. This paper presents
preliminary results of a series of inquiries, including interviews with AI
engineers and experiments for an interactive machine learning use case with a
Smartwatch and Smartphone. We identified the themes through interviews and
hands-on experience working on our use case and proposed features, such as data
collection from sensors and easy testing of the resources consumption of
running pre-processing code on the target device, which will serve as
tool-support for AI engineers.
|
A search for nu_bar_mu to nu_bar_e oscillations has been conducted at the Los
Alamos Meson Physics Facility using nu_bar_mu from mu+ decay at rest. The
nu_bar_e are detected via the reaction (nu_bar_e,p) -> (e+,n), correlated with
the 2.2 MeV gamma from (n,p) -> (d,gamma). The use of tight cuts to identify e+
events with correlated gamma rays yields 22 events with e+ energy between 36
and 60 MeV and only 4.6 (+/- 0.6) background events. The probability that this
excess is due entirely to a statistical fluctuation is 4.1E-08. A chi^2 fit to
the entire e+ sample results in a total excess of 51.8 (+18.7) (-16.9) (+/-
8.0) events with e+ energy between 20 and 60 MeV. If attributed to nu_bar_mu ->
nu_bar_e oscillations, this corresponds to an oscillation probability (averaged
over the experimental energy and spatial acceptance) of 0.0031 (+0.0011)
(-0.0010) (+/- 0.0005).
|
We have measured the heat capacities of $\delta-$Pu$_{0.95}$Al$_{0.05}$ and
$\alpha-$Pu over the temperature range 2-303 K. The availability of data below
10 K plus an estimate of the phonon contribution to the heat capacity based on
recent neutron-scattering experiments on the same sample enable us to make a
reliable deduction of the electronic contribution to the heat capacity of
$\delta-$Pu$_{0.95}$Al$_{0.05}$; we find $\gamma = 64 \pm 3$
mJK$^{-2}$mol$^{-1}$ as $T \to 0$. This is a factor $\sim 4$ larger than that
of any element, and large enough for $\delta-$Pu$_{0.95}$Al$_{0.05}$ to be
classed as a heavy-fermion system. By contrast, $\gamma = 17 \pm 1$
mJK$^{-2}$mol$^{-1}$ in $\alpha-$Pu. Two distinct anomalies are seen in the
electronic contribution to the heat capacity of
$\delta-$Pu$_{0.95}$Al$_{0.05}$, one or both of which may be associated with
the formation of the $\alpha'-$ martensitic phase. We suggest that the large
$\gamma$-value of $\delta-$Pu$_{0.95}$Al$_{0.05}$ may be caused by proximity to
a quantum-critical point.
|
Anderson introduced t-modules as higher dimensional analogs of Drinfeld
modules. Attached to such a t-module, there are its t-motive and its dual
t-motive. The t-module gets the attribute "abelian" when the t-motive is a
finitely generated module, and the attribute "t-finite" when the dual t-motive
is a finitely generated module. The main theorem of this article is the
affirmative answer to the long-standing question of whether these two attributes
are equivalent. The proof relies on an invariant of the t-module and a
condition for that invariant which is necessary and sufficient for both being
abelian and being t-finite. We further show that this invariant also provides
the information whether the t-module is pure or not. Moreover, we conclude that
also over general coefficient rings A, i.e. for Anderson A-modules, the
attributes of being abelian and being A-finite are equivalent.
|
This paper presents a light-weight and accurate deep neural model for
audiovisual emotion recognition. To design this model, the authors followed a
philosophy of simplicity, drastically limiting the number of parameters to
learn from the target datasets and always choosing the simplest learning
methods: i) transfer learning and low-dimensional space embedding to reduce the
dimensionality of the representations; ii) visual temporal information handled
by a simple score-per-frame selection process, averaged across time; iii) a
simple frame selection mechanism to weight the images of a sequence; iv) fusion
of the different modalities at prediction level (late fusion). We also
highlight the inherent challenges of the AFEW dataset and the difficulty of
model selection with as few as 383 validation sequences. The proposed real-time
emotion classifier achieved a state-of-the-art accuracy of 60.64% on the AFEW
test set and ranked 4th at the Emotion in the Wild 2018 challenge.
|
Although statistical inference in stochastic differential equations (SDEs)
driven by the Wiener process has received significant attention in the
literature, inference in those driven by fractional Brownian motion seems to
have seen much less development in comparison, despite their importance in
modeling long-range
dependence. In this article, we consider both classical and Bayesian inference
in such fractional Brownian motion based SDEs. In particular, we consider
asymptotic inference for two parameters in this regard; a multiplicative
parameter associated with the drift function, and the so-called "Hurst
parameter" of the fractional Brownian motion, when the time domain tends to
infinity. For unknown Hurst parameter, the likelihood does not lend itself to
the popular Girsanov form, rendering the usual asymptotic development
difficult. As such, we develop an increasing-domain infill asymptotic theory by
discretizing the SDE. In this setup, we establish consistency and asymptotic
normality of the maximum likelihood estimators, as well as consistency and
asymptotic normality of the Bayesian posterior distributions. However,
classical or Bayesian asymptotic normality with respect to the Hurst parameter
could not be established. We supplement our theoretical investigations with
simulation studies in a non-asymptotic setup, prescribing suitable
methodologies for classical and Bayesian analyses of SDEs driven by fractional
Brownian motion. An application to real closing-price data, along with a
comparison with a standard SDE driven by the Wiener process, is also
considered. As expected, our Bayesian fractional SDE outperformed the
alternative model and methods in both the simulated and real data applications.
|
We have shown previously that the mass of the muon neutrino can be determined
from the energy released in the decay of the pi (+-) mesons, and that the mass
of the electron neutrino can be determined from the energy released in the
decay of the neutron. We will now show how the mass of the tau neutrino can be
determined from the decay of the D(s)(+-) mesons.
|
Static spherically symmetric solutions to the Einstein-Euler equations with
prescribed central densities are known to exist, be unique and smooth for
reasonable equations of state. Some criteria are also available to decide
whether solutions have finite extent (stars with a vacuum exterior) or infinite
extent. In the latter case, the matter extends globally with the density
approaching zero at infinity. The asymptotic behavior largely depends on the
equation of state of the fluid and is still poorly understood. While a few such
unbounded solutions are known to be asymptotically flat with finite ADM mass,
the vast majority are not. We provide a full geometric description of the
asymptotic behavior of static spherically symmetric perfect fluid solutions
with linear and polytropic-type equations of state with index n>5. In order to
capture the asymptotic behavior we introduce a notion of scaled
quasi-asymptotic flatness, which encodes a form of asymptotic conicality. In
particular, these spacetimes are asymptotically simple.
|
We construct evolutionary models of the populations of AGN and supermassive
black holes, in which the black hole mass function grows at the rate implied by
the observed luminosity function, given assumptions about the radiative
efficiency and the Eddington ratio. We draw on a variety of recent X-ray and
optical measurements to estimate the bolometric AGN luminosity function and
compare to X-ray background data and the independent estimate of Hopkins et al.
(2007) to assess remaining systematic uncertainties. The integrated AGN
emissivity closely tracks the cosmic star formation history, suggesting that
star formation and black hole growth are closely linked at all redshifts.
Observational uncertainties in the local black hole mass function remain
substantial, with estimates of the integrated black hole mass density \rho_BH
spanning the range 3-5.5x10^5 Msun/Mpc^3. We find good agreement with estimates
of the local mass function for a reference model where all active black holes
have efficiency \eps=0.065 and L_bol/L_Edd~0.4. In this model, the duty cycle
of 10^9 Msun black holes declines from 0.07 at z=3 to 0.004 at z=1 and 0.0001
at z=0. The decline is shallower for less massive black holes, a signature of
"downsizing" evolution in which more massive black holes build their mass
earlier. The predicted duty cycles and AGN clustering bias in this model are in
reasonable accord with observational estimates. If the typical Eddington ratio
declines at z<2, then the "downsizing" of black hole growth is less pronounced.
Matching the integrated AGN emissivity to the local black hole mass density
implies \eps=0.075 (\rho_BH/4.5x10^5 Msun/Mpc^3)^{-1} for our standard
luminosity function estimate (25% higher for Hopkins et al.'s), lower than the
values \eps=0.16-0.20 predicted by MHD simulations of disk accretion.
|
Binary matrix optimization problems commonly arise in the real world, e.g.,
the multi-microgrid network structure design problem (MGNSDP), which is to minimize
the total length of the power supply line under certain constraints. Finding
the global optimal solution for these problems faces a great challenge since
such problems could be large-scale, sparse and multimodal. Traditional linear
programming is time-consuming and cannot solve nonlinear problems. To address
this issue, a novel improved feasibility rule based differential evolution
algorithm, termed LBMDE, is proposed. To be specific, a general heuristic
solution initialization method is first proposed to generate high-quality
solutions. Then, a binary-matrix-based DE operator is introduced to produce
offspring. To deal with the constraints, we propose an improved feasibility
rule based environmental selection strategy. The performance and searching
behaviors of LBMDE are examined by a set of benchmark problems.
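A sketch of the two generic ingredients named above, binary-matrix DE offspring generation and feasibility-rule selection; the thresholding binarization and the parameter values are illustrative assumptions, not LBMDE's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_binary_offspring(pop, F=0.5, CR=0.3):
    """Binary-matrix DE/rand/1/bin sketch: the classical mutant
    x_r1 + F*(x_r2 - x_r3) is mapped back to {0,1} by thresholding."""
    n = len(pop)
    off = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2].astype(float) - pop[r3])
        trial = (mutant >= 0.5).astype(int)
        mask = rng.random(pop[i].shape) < CR      # binomial crossover
        off[i] = np.where(mask, trial, pop[i])
    return off

def feasibility_select(parent, child, f, violation):
    """Deb's feasibility rule: feasible beats infeasible; among feasible,
    lower objective wins; among infeasible, lower violation wins."""
    pv, cv = violation(parent), violation(child)
    if cv == 0 and pv == 0:
        return child if f(child) < f(parent) else parent
    if (cv == 0) != (pv == 0):
        return child if cv == 0 else parent
    return child if cv < pv else parent
```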
|
Desirable random graph models (RGMs) should (i) be tractable so that we can
compute and control graph statistics, and (ii) generate realistic structures
such as high clustering (i.e., high subgraph densities). A popular category of
RGMs (e.g., Erdos-Renyi and stochastic Kronecker) outputs edge probabilities,
and we need to realize (i.e., sample from) the edge probabilities to generate
graphs. Typically, each edge (in)existence is assumed to be determined
independently. However, with edge independency, RGMs theoretically cannot
produce high subgraph densities unless they "replicate" input graphs. In this
work, we explore realization beyond edge independence that can produce more
realistic structures while ensuring high tractability. Specifically, we propose
edge-dependent realization schemes called binding and derive closed-form
tractability results on subgraph (e.g., triangle) densities in graphs generated
with binding. We propose algorithms for graph generation with binding and
parameter fitting of binding. We empirically validate that binding exhibits
high tractability and generates realistic graphs with high clustering,
significantly improving upon existing RGMs assuming edge independency.
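A toy sketch in the spirit of edge-dependent realization: edges within a group share one uniform variate (a comonotone coupling), which preserves each marginal probability exactly while making grouped edges co-occur and thereby raising subgraph densities; the paper's binding schemes are more general:

```python
import numpy as np

rng = np.random.default_rng(0)

def realize_with_binding(P, groups):
    """Sample a graph from edge-probability matrix P with dependent
    realization: each group of edge slots shares one uniform u, so
    P(edge) = P[i, j] is preserved but grouped edges co-occur."""
    n = P.shape[0]
    A = np.zeros((n, n), dtype=int)
    for group in groups:               # each group: list of (i, j) edge slots
        u = rng.random()               # one shared uniform per group
        for i, j in group:
            if u < P[i, j]:
                A[i, j] = A[j, i] = 1
    return A

# binding all three edges of a triangle makes the triangle itself likely
P = np.full((3, 3), 0.5)
A = realize_with_binding(P, groups=[[(0, 1), (1, 2), (0, 2)]])
```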
|
It has been suggested in the literature that, given a black hole spacetime, a
relativistic membrane can provide an effective description of the horizon
dynamics. In this paper, we explore such a framework in the context of a
2+1-dimensional BTZ black hole. Following this membrane prescription, we are
able to translate the horizon dynamics (now described by a string) into the
convenient form of a 1+1-dimensional Klein-Gordon equation. We proceed to
quantize the solutions and construct a thermodynamic partition function.
Ultimately, we are able to extract the quantum-corrected entropy, which is
shown to comply with the BTZ form of the Bekenstein-Hawking area law. We also
substantiate that the leading-order correction is proportional to the logarithm
of the area.
|
Intrinsically gapless symmetry protected topological phases (igSPT) are
gapless systems with SPT edge states with properties that could not arise in a
gapped system with the same symmetry and dimensionality. igSPT states arise
from gapless systems in which an anomaly in the low-energy (IR) symmetry group
emerges from an extended anomaly-free microscopic (UV) symmetry. We develop a
general framework for constructing lattice models for igSPT phases with
emergent anomalies classified by group cohomology, and establish a direct
connection between the emergent anomaly, group-extension, and topological edge
states by gauging the extending symmetry. In many examples, the edge-state
protection has a physically transparent mechanism: the extending UV symmetry
operations pump lower dimensional SPTs onto the igSPT edge, tuning the edge to
a (multi)critical point between different SPTs protected by the IR symmetry. In
two- and three- dimensional systems, an additional possibility is that the
emergent anomaly can be satisfied by an anomalous symmetry-enriched topological
order, which we call a quotient-symmetry enriched topological order (QSET) that
is sharply distinguished from the non-anomalous UV SETs by an edge phase
transition. We construct exactly solvable lattice models with QSET order.
|
We present measurements of azimuthal correlations of charged hadron pairs in
$\sqrt{s_{_{NN}}}=200$ GeV Au$+$Au collisions for the trigger and associated
particle transverse-momentum ranges of $1<p_T^t<10$~GeV/$c$ and
$0.5<p_T^a<10$~GeV/$c$. After subtraction of an underlying event using a model
that includes higher-order azimuthal anisotropy $v_2$, $v_3$, and $v_4$, the
away-side yield of the highest trigger-$p_T$ ($p_T^t>4$~GeV/$c$) correlations is
suppressed compared to that of correlations measured in $p$$+$$p$ collisions.
At the lowest associated particle $p_T$ ($0.5<p_T^a<1$ GeV/$c$), the away-side
shape and yield are modified relative to those in $p$$+$$p$ collisions. These
observations are consistent with the scenario of radiative-jet energy loss. For
the low-$p_T$ trigger correlations ($2<p_T^t<4$ GeV/$c$), a finite away-side
yield exists and we explore the dependence of the shape of the away-side within
the context of an underlying-event model. Correlations are also studied
differentially versus event-plane angle $\Psi_2$ and $\Psi_3$. The angular
correlations show an asymmetry when selecting the sign of the difference
between the trigger-particle azimuthal angle and the $\Psi_2$ event plane. This
asymmetry and the measured suppression of the pair yield out of plane is
consistent with a path-length-dependent energy loss. No $\Psi_3$ dependence can
be resolved within experimental uncertainties.
|
The increasing availability of distributed energy resources (DERs) and
sensors in smart grid, as well as overlaying communication network, provides
substantial potential benefits for improving the power system's reliability. In
this paper, the problem of sensor selection is studied for the MAC layer design
of wireless sensor networks for regulating the voltages in smart grid. The
framework of a hybrid dynamical system is proposed, using a Kalman filter for
voltage state estimation and LQR feedback control for voltage adjustment. The
approach to obtaining the optimal sensor selection sequence is studied. A
sub-optimal sequence is obtained by applying a sliding-window algorithm.
Simulation results show that the proposed sensor selection strategy achieves a
40% performance gain over the round-robin sensor polling baseline.
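A compact sketch of a sliding-window sensor schedule driven by the Kalman error covariance; the cost (accumulated trace of the posterior covariance) and the shared measurement-noise matrix are simplifying assumptions, and the paper additionally couples this with LQR voltage control:

```python
import itertools
import numpy as np

def sliding_window_selection(A, C_list, Q, R, P0, horizon, window):
    """Sub-optimal sensor schedule: enumerate sensor choices only `window`
    steps ahead, commit the first choice of the best short sequence, slide."""
    n = A.shape[0]

    def riccati_step(P, C):
        Pp = A @ P @ A.T + Q                                # predict
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)      # Kalman gain
        return (np.eye(n) - K @ C) @ Pp                     # update

    def window_cost(P, seq):
        c = 0.0
        for s in seq:
            P = riccati_step(P, C_list[s])
            c += np.trace(P)
        return c

    P, schedule = P0, []
    for _ in range(horizon):
        best = min(itertools.product(range(len(C_list)), repeat=window),
                   key=lambda seq: window_cost(P, seq))
        schedule.append(best[0])
        P = riccati_step(P, C_list[best[0]])
    return schedule
```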
|
Small-cell architecture is widely adopted by cellular network operators to
increase network capacity. By reducing the size of cells, operators can pack
more (low-power) base stations in an area to better serve the growing demands,
without causing extra interference. However, this approach suffers from low
spectrum temporal efficiency. When a cell becomes smaller and covers fewer
users, its total traffic fluctuates significantly due to insufficient traffic
aggregation and exhibiting a large "peak-to-mean" ratio. As operators
customarily provision spectrum for peak traffic, large traffic temporal
fluctuation inevitably leads to low spectrum temporal efficiency. In this
paper, we advocate device-to-device (D2D) load-balancing as a useful mechanism
to address the fundamental drawback of small-cell architecture. The idea is to
shift traffic from a congested cell to its adjacent under-utilized cells by
leveraging inter-cell D2D communication, so that the traffic can be served
without using extra spectrum, effectively improving the spectrum temporal
efficiency. We provide theoretical modeling and analysis to characterize the
benefit of D2D load balancing, in terms of total spectrum requirements of all
individual cells. We also derive the corresponding cost, in terms of incurred
D2D traffic overhead. We carry out empirical evaluations based on real-world 4G
data traces to gauge the benefit and cost of D2D load balancing under practical
settings. The results show that D2D load balancing can reduce the spectrum
requirement by 25% as compared to the standard scenario without D2D load
balancing, at the expense of a negligible 0.7% D2D traffic overhead.
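
To make the provisioning logic concrete, here is a toy computation with synthetic traffic (not the paper's 4G traces): since spectrum is sized for peak load, pooling adjacent cells' traffic via D2D flattens per-cell peaks and lowers the total requirement.

```python
# Toy peak-to-mean illustration with synthetic traffic: provisioning each
# cell for its own peak vs. ideal pooling of neighboring cells via D2D.
import numpy as np

rng = np.random.default_rng(0)
traffic = rng.poisson(lam=20, size=(4, 24)).astype(float)  # 4 cells x 24 hours

per_cell = traffic.max(axis=1).sum()      # spectrum ~ sum of per-cell peaks
pooled = traffic.sum(axis=0).max()        # ideal D2D pooling: one shared peak
print(f"spectrum requirement reduced by {1 - pooled / per_cell:.1%}")
```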
|
We consider magnetic catalysis in a field-theoretic system of
(3+1)-dimensional Dirac fermions with anisotropic kinetic term. By placing the
system in a strong external magnetic field, we examine magnetically-induced
fermion mass generation. When the coupling anisotropy is strong, in which case
the fermions effectively localize on the plane, we find a significant
enhancement of the induced mass gap compared to the isotropic four-dimensional
case of quantum electrodynamics. As expected on purely dimensional grounds, the
mass and critical temperature scale with the square root of the magnetic field.
This phenomenon might be related to recent experimental findings on
magnetically-induced gaps at the nodes of d-wave superconducting gaps in
high-temperature cuprates.
|
The wide-band Suzaku spectra of the black hole binary GX 339-4, acquired in
2007 February during the Very High state, were reanalyzed. Effects of event
pileup (significant within ~ 3' of the image center) and telemetry saturation
of the XIS data were carefully considered. The source was detected up to ~300
keV, with an unabsorbed 0.5--200 keV luminosity of ~3.8 x 10^{38} erg/s at 8 kpc.
The spectrum can be approximated by a power-law of photon index 2.7, with a
mild soft excess and a hard X-ray hump. When using the XIS data outside 2' of
the image center, the Fe-K line appeared extremely broad, suggesting a high
black hole spin as already reported by Miller et al. (2008) based on the Suzaku
data and other CCD data. When the XIS data accumulation is further limited to
>3' to avoid event pileup, the Fe-K profile becomes narrower, and there appears
a marginally better solution that suggests the inner disk radius to be 5-14
times the gravitational radius (1-sigma), though a maximally spinning black
hole is still allowed by the data at the 90% confidence level. Consistently,
the optically-thick accretion disk is inferred to be truncated at a radius 5-32
times the gravitational radius. Thus, the Suzaku data allow an alternative
explanation without invoking a rapidly spinning black hole. This inference is
further supported by the disk radius measured previously in the High/Soft
state.
|
In this paper we give a brief review of the astrophysics of active galactic
nuclei (AGN). After a general introduction motivating the study of AGNs, we
discuss our present understanding of the inner workings of the central engines,
most likely accreting black holes with masses between a million and ten billion
solar masses. We highlight recent results concerning the jets (collimated
outflows) of AGNs derived from X-ray observations (Chandra) of kpc-scale jets
and gamma-ray observations of AGNs (Fermi, Cherenkov telescopes) with jets
closely aligned with the lines of sight (blazars), and discuss the
interpretation of these observations. Subsequently, we summarize our knowledge
about the cosmic history of AGN formation and evolution. We conclude with a
description of upcoming observational opportunities.
|
The perturbative framework is developed for the calculation of the pi(+)pi(-)
atom characteristics (energy level shift and lifetime) on the basis of the
field-theoretical Bethe-Salpeter approach. A closed expression for the
first-order correction to the pi(+)pi(-) atom lifetime has been obtained.
|
We report dense lightcurve photometry, $BVR_{c}$ colors, and the phase-mag
curve of (6478) Gault, an active asteroid with sporadic comet-like ejection of
dust. We collected optical observations during the 2020 Jul-Nov months, in
which the asteroid always appeared star-like, without any form of perceptible
activity. We found complex lightcurves, with low amplitude around opposition
and somewhat higher amplitude far from opposition, with a mean best rotation
period of $2.46 \pm 0.02$ h. Shape changes were observed in the phased
lightcurves after opposition, a probable indication of concavities and surface
irregularities. We suspect the existence of an amplitude-phase relationship in
the $C$ band. The mean colors are $B-V = +0.84 \pm 0.04$, $V-R_{c} = +0.43 \pm
0.03$ and $B-R_{c} = +1.27 \pm 0.02$, compatible with an S-type asteroid but
variable with rotational phase, an index of a non-homogeneous surface
composition. From our phase-mag curve and Shevchenko's empirical photometric
system, we obtain a geometric albedo $p_V=0.13 \pm 0.04$, lower than the
average value for the S class. We estimate an absolute magnitude in the $V$
band of about +14.9; together with the albedo value, this allows us to estimate
a diameter of about 3-4 km, so Gault may be smaller than previously thought.
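
The albedo itself is derived via Shevchenko's photometric system; as a consistency check, the standard absolute-magnitude-diameter relation with the quoted $H_V \approx +14.9$ and $p_V \approx 0.13$ reproduces the stated size:

```latex
D = \frac{1329\,\mathrm{km}}{\sqrt{p_V}}\,10^{-H_V/5}
  = \frac{1329\,\mathrm{km}}{\sqrt{0.13}} \times 10^{-14.9/5}
  \approx 3.9\ \mathrm{km}.
```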
|
The Luttinger liquid (LL) phase is a quantum phase which emerges in the
ground state phase diagram of low-dimensional quantum magnets such as the
spin-1/2 XX, XYY, and frustrated chains. It is believed that quasi-long-range
order exists between the particles forming the system in the LL phase. Here, as
a first step, we concentrate on the study of correlated spin particles in the
one-dimensional (1D) spin-1/2 XX model, which is exactly solvable. We show that
the spin-1/2 particles form string orders with an even number of spins in the
LL phase of the 1D spin-1/2 XX model. As soon as a transverse magnetic field is
applied to the system, string orders with an odd number of spins are induced in
the LL phase. All ordered strings of spin-1/2 particles are destroyed at the
quantum critical transverse field, $h_c$. No strings exist in the saturated
ferromagnetic phase. As a second step, we focus on the LL phase in the ground
state phase diagram of the 1D spin-1/2 XYY and frustrated ferromagnetic models.
We show that even-string orders exist in the LL phase of the 1D spin-1/2 XYY
model, but in the LL phase of the 1D spin-1/2 frustrated ferromagnetic model we
find all kinds of strings. In addition, we demonstrate a clear relation between
long-distance entanglement and string orders in the LL phase. The effect of
thermal fluctuations on the behavior of the string orders is also studied.
|
In the supersymmetric extensions of the standard model, neutrino masses and
leptogenesis require the existence of new particles. We point out that if these
particles with lepton-number-violating interactions have standard model gauge
interactions, then they may not be created after reheating because of the
gravitino problem. This would rule out all existing models of neutrino masses
and leptogenesis, except the one with right-handed singlet neutrinos.
|
The recent results on the main soft observables, including hadron and photon
yields and particle number ratios, $p_T$ spectra, flow harmonics, as well as
the femtoscopy radii, obtained within the integrated hydrokinetic model (iHKM)
for high-energy heavy-ion collisions are reviewed and re-examined. The cases of
different nuclei colliding at different energies are considered: Au+Au
collisions at the top RHIC energy $\sqrt{s_{NN}}=200$ GeV, Pb+Pb collisions at
the LHC energies $\sqrt{s_{NN}}=2.76$ TeV and $\sqrt{s_{NN}}=5.02$ TeV, and the
LHC Xe+Xe collisions at $\sqrt{s_{NN}}=5.44$ TeV. The effects of the initial
conditions and the model parameters, including the utilized equation of state
(EoS) for the quark-gluon phase, on the simulation results, as well as the role
of the final afterburner stage of the matter evolution, are discussed. A
possible solution of the so-called ``photon puzzle'' is considered. Attention
is also paid to the dependence of the interferometry volume and the individual
interferometry radii on the initial transverse geometrical size of the system
formed in the collision.
|
Inspired by the observation of the fully-charm tetraquark $X(6900)$ state at
LHCb, the production of $X(6900)$ in the $\bar{p}p\rightarrow J/\psi J/\psi$
reaction is studied within an effective Lagrangian approach and the
Breit-Wigner formula. The numerical results show that the cross section for
$X(6900)$ production at a c.m. energy of 6.9 GeV is much larger than that from
the background contribution. Moreover, we estimate that dozens of signal events
could be detected by the D0 experiment, which indicates that searching for the
$X(6900)$ via antiproton-proton scattering may be a very important and
promising approach. We therefore suggest that related experiments be carried
out.
|
We focus on the definition of the unitary transformation leading to an
effective second-order Hamiltonian inside degenerate eigensubspaces of the
non-perturbed Hamiltonian. By working out in detail the case of the
Su-Schrieffer-Heeger Hamiltonian, we prove that the presence of degenerate
states, involving both fermions and bosons, which might seemingly pose an
obstacle to the determination of such a "Froehlich-transformed" Hamiltonian, in
fact does not: we explicitly show how degenerate states may be harmlessly
included in the treatment, as they contribute vanishing matrix elements to the
effective Hamiltonian matrix. In this way, one can use without difficulty the
eigenvalues of the effective Hamiltonian to describe the renormalized energies
of the real excitations in the interacting system. Our argument also applies to
few-body systems, where one may not invoke the thermodynamic limit to get rid
of the "dangerous" perturbation terms.
|
Computer-assisted diagnosis in digital pathology is becoming ubiquitous as it
can provide more efficient and objective healthcare diagnostics. Recent
advances have shown that convolutional neural network (CNN) architectures, a
well-established deep learning paradigm, can be used to design a Computer-Aided
Diagnostic (CAD) system for breast cancer detection. However, the challenges
due to stain variability and the effect of stain normalization with such deep
learning frameworks are yet to be well explored. Moreover, performance analysis
with arguably more efficient network models, which may be important for
high-throughput screening, is also not well explored. To address this
challenge, we consider some contemporary CNN models for binary classification
of breast histopathology images. Our pipeline involves (1) preprocessing the
data with stain-normalized images, using an adaptive colour deconvolution (ACD)
based color normalization algorithm to handle the stain variabilities, and (2)
transfer-learning-based training of some arguably more efficient CNN models,
namely the Visual Geometry Group network (VGG16), MobileNet, and EfficientNet.
We validated the trained CNN networks on the publicly available BreaKHis
dataset, for 200x and 400x magnified histopathology images. The experimental
analysis shows that pretrained networks in most cases yield better results on
data-augmented breast histopathology images with stain normalization than
without it. Further, we evaluated the performance and efficiency of popular
lightweight networks using stain-normalized images and found that EfficientNet
outperforms VGG16 and MobileNet in terms of test accuracy and F1 score. We also
observed that test-time efficiency is better in EfficientNet than in the other
networks (VGG16, MobileNet), without much drop in classification accuracy.
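
A minimal Keras sketch of the transfer-learning setup described above; the input size, classification head, and hyperparameters below are assumptions for illustration, not the paper's exact configuration.

```python
# Transfer-learning sketch: frozen ImageNet EfficientNet-B0 backbone with a
# small binary head for benign-vs-malignant classification (illustrative).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features for the first training stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # stain-normalized data
```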
|
We implement a double-pixel, compressive sensing camera to efficiently
characterize, at high resolution, the spatially entangled fields produced by
spontaneous parametric downconversion. This technique leverages sparsity in
spatial correlations between entangled photons to improve acquisition times
over raster-scanning by a scaling factor up to n^2/log(n) for n-dimensional
images. We image at resolutions up to 1024 dimensions per detector and
demonstrate a channel capacity of 8.4 bits per photon. By comparing the
classical mutual information in conjugate bases, we violate an entropic
Einstein-Podolsky-Rosen separability criterion for all measured resolutions.
More broadly, our result indicates compressive sensing can be especially
effective for higher-order measurements on correlated systems.
|
Environmental sustainability is crucial for Integrated Circuits (ICs) across
their lifecycle, particularly in manufacturing and use. Meanwhile, ICs using
3D/2.5D integration technologies have emerged as promising solutions to meet
the growing demands for computational power. However, there is a distinct lack
of carbon modeling tools for 3D/2.5D ICs. Addressing this, we propose
3D-Carbon, an analytical carbon modeling tool designed to quantify the carbon
emissions of 3D/2.5D ICs throughout their life cycle. 3D-Carbon factors in both
potential savings and overheads from advanced integration technologies,
considering practical deployment constraints like bandwidth. We validate
3D-Carbon's accuracy against established baselines and illustrate its utility
through case studies in autonomous vehicles. We believe that 3D-Carbon lays the
initial foundation for future innovations in developing environmentally
sustainable 3D/2.5D ICs. Our open-source code is available at
https://github.com/UMN-ZhaoLab/3D-Carbon.
|
We show that if a (locally compact) group $G$ acts properly on a locally
compact $\sigma$-compact space $X$ then there is a family of $G$-invariant
proper continuous finite-valued pseudometrics which induces the topology of
$X$. If $X$ is furthermore metrizable then $G$ acts properly on $X$ if and only
if there exists a $G$-invariant proper compatible metric on $X$.
|
Spin-polarization response functions are examined for high-energy
$(\vec{e},e'\vec{p})$ reactions by computing the full set of 18 response
functions for proton kinetic energies $T_{p'}=$ 0.515 GeV and 3.179 GeV with a
$^{16}$O target.
The Dirac eikonal formalism is applied to account for the final-state
interactions. The formalism is found to yield the response functions in good
agreement with those calculated by the partial-wave expansion method at 0.515
GeV. We identify the response functions that depend on the spin-orbit
potential in the final-state interactions but not on the central potential.
Dependence on the Dirac- or Pauli-type current of the nucleon is investigated
in the helicity-dependent response functions, and the normal-component
polarization of the knocked-out proton, $P_n$, is computed.
|
We present some new discoveries on the mathematical foundation of linear
hydrodynamic stability theory. The new discoveries are: 1. Linearized Euler
equations fail to provide a linear approximation to inviscid hydrodynamic
stability. 2. The eigenvalue instability predicted by high-Reynolds-number
linearized Navier-Stokes equations cannot capture the dominant instability of
super-fast growth. 3. As equations for directional differentials, the Rayleigh
equation and the Orr-Sommerfeld equation cannot capture the nature of the full
differentials.
|
We report results of a search for the rare radiative decay B0 -> D*0 gamma.
Using 9.7 million BB meson pairs collected with the CLEO detector at the
Cornell Electron Storage Ring, we limit Br(B0->D*0 gamma) < 5.0 * 10^-5 at 90%
CL. This provides evidence that anomalous enhancement is absent in W-exchange
processes and that weak radiative B decays are dominated by the short-distance
b -> s gamma mechanism in the Standard Model.
|
Quantum process tomography might be the most important paradigm shift which
has yet to be translated fully into theoretical chemistry. Its fundamental
strength, long established in quantum information science, offers a wealth of
information about quantum dynamic processes which lie at the heart of many (if
not all) chemical processes. However, due to its complexity its application to
real chemical systems is currently beyond experimental reach. Furthermore, it
is susceptible to errors due to experimental and theoretical inaccuracies and
disorder has long been thought to be an obstacle in its applicability. Here, I
present the first results of a study into the use of quantum light for quantum
process tomography. By using a toy model and comparing numerical simulations to
theoretical predictions the possible enhancement of using non-conventional
light is studied. It is found, however, that disorder is necessary to make the use
of quantum light suitable for process tomography and that, in contrast to
conventional wisdom, disorder can make the results more accurate than in an
ordered system.
|
We report on a study of quasi-ballistic transport in deep submicron,
inhomogeneous semiconductor structures, focusing on the analysis of signatures
found in the full nonequilibrium electron distribution. We perform
self-consistent numerical calculations of the Poisson-Boltzmann equations for a
model n(+)-n(-)-n(+) GaAs structure and realistic, energy-dependent scattering.
We show that, in general, the electron distribution displays significant,
temperature dependent broadening and pronounced structure in the high-velocity
tail of the distribution. The observed characteristics have a strong spatial
dependence, related to the energy-dependence of the scattering, and the large
inhomogeneous electric field variations in these systems. We show that in this
quasi-ballistic regime, the high-velocity tail structure is due to pure
ballistic transport, whereas the strong broadening is due to electron
scattering within the channel, and at the source(drain) interfaces.
|
The string duality revolution calls into question virtually all of the
working assumptions of string model builders. A number of difficult questions
arise. I use fractional charge as an example of a criterion which one would
hope is robust beyond the weak coupling heterotic limit. Talk given at the 5th
International Workshop on Supersymmetry and Unification of Fundamental
Interactions (SUSY-96), University of Maryland, College Park, May 29 - June 1,
1996.
|
Sparsifying the Transformer has garnered considerable interest, as training
the Transformer is very computationally demanding. Prior efforts to sparsify
the Transformer have either used a fixed pattern or data-driven approach to
reduce the number of operations involving the computation of multi-head
attention, which is the main bottleneck of the Transformer. However, existing
methods suffer from inevitable problems, such as the potential loss of
essential sequence features due to the uniform fixed pattern applied across all
layers, and an increase in the model size resulting from the use of additional
parameters to learn sparsity patterns in attention operations. In this paper,
we propose a novel sparsification scheme for the Transformer that integrates
convolution filters and the flood filling method to efficiently capture the
layer-wise sparse pattern in attention operations. Our sparsification approach
reduces the computational complexity and memory footprint of the Transformer
during training. We develop efficient GPU implementations of the layer-wise
sparsified attention algorithm; the resulting system, SPION, achieves up to a
3.08X speedup over existing state-of-the-art sparse Transformer models, with
better evaluation quality.
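
A loose sketch of the idea, not the authors' implementation: a small convolution filter highlights the dominant entries of a calibration attention map, and a flood-fill (connected-component) pass grows them into that layer's sparsity mask. The threshold and filter size are assumed.

```python
# Illustrative layer-wise sparsity mask: convolution filter to smooth the
# attention map, then flood fill (connected components) to form blocks.
import numpy as np
from scipy.ndimage import convolve, label

def layer_sparsity_mask(attn: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """attn: (seq, seq) attention probabilities from a calibration batch."""
    smoothed = convolve(attn, np.ones((3, 3)) / 9.0, mode="nearest")
    seeds = smoothed > thresh * smoothed.max()
    blocks, _ = label(seeds)            # flood fill = connected components
    return blocks > 0                   # boolean mask reused for this layer

attn = np.abs(np.random.randn(64, 64))
attn /= attn.sum(-1, keepdims=True)
mask = layer_sparsity_mask(attn)
print(f"kept {mask.mean():.1%} of attention entries")
```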
|
We consider an important class of signal processing problems where the signal
of interest is known to be sparse, and can be recovered from data given
auxiliary information about how the data was generated. For example, a sparse
Green's function may be recovered from seismic experimental data using sparsity
optimization when the source signature is known. Unfortunately, in practice
this information is often missing, and must be recovered from data along with
the signal using deconvolution techniques.
In this paper, we present a novel methodology to simultaneously solve for the
sparse signal and auxiliary parameters using a recently proposed variable
projection technique. Our main contribution is to combine variable projection
with sparsity promoting optimization, obtaining an efficient algorithm for
large-scale sparse deconvolution problems. We demonstrate the algorithm on a
seismic imaging example.
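
The following minimal sketch, under assumed forms (a Ricker wavelet as the source signature, ISTA as the inner sparse solver), shows the variable projection structure: the sparse signal is eliminated by the inner solve, leaving a one-dimensional outer problem over the wavelet parameter. It is an illustration of the idea, not the paper's algorithm.

```python
# Variable projection for blind sparse deconvolution (illustrative sketch):
# the inner ISTA solve projects out the sparse signal; the outer problem is
# over the source-wavelet frequency only.
import numpy as np
from scipy.optimize import minimize_scalar

def ricker(f, t):                          # assumed parametric source wavelet
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def conv_matrix(f, n, dt=0.005):           # convolution with the wavelet
    idx = np.arange(n)
    return ricker(f, (idx[:, None] - idx[None, :]) * dt)

def ista(A, d, lam=0.1, n_iter=200):       # inner sparsity-promoting solve
    x, L = np.zeros(A.shape[1]), np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - d) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

n = 200
x_true = np.zeros(n)
x_true[[40, 90, 150]] = [1.0, -0.7, 0.5]
d = conv_matrix(30.0, n) @ x_true          # data from a 30 Hz source

def projected_misfit(f):                   # signal already projected out
    A = conv_matrix(f, n)
    return 0.5 * np.linalg.norm(A @ ista(A, d) - d) ** 2

f_hat = minimize_scalar(projected_misfit, bounds=(10.0, 60.0), method="bounded").x
print(f"recovered source frequency ~ {f_hat:.1f} Hz")
```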
|
Over the past decades, deep learning (DL) systems have achieved tremendous
success and gained great popularity in various applications, such as
intelligent machines, image processing, speech processing, and medical
diagnostics. Deep neural networks are the key driving force behind this recent
success, but they still seem to be a magic black box lacking interpretability and
understanding. This brings up many open safety and security issues with
enormous and urgent demands on rigorous methodologies and engineering practice
for quality enhancement. A plethora of studies have shown that the
state-of-the-art DL systems suffer from defects and vulnerabilities that can
lead to severe loss and tragedies, especially when applied to real-world
safety-critical applications. In this paper, we perform a large-scale study and
construct a paper repository of 223 works relevant to the quality assurance,
security, and interpretation of deep learning. From a software quality
assurance perspective, we pinpoint challenges and future opportunities towards
universal secure deep learning engineering. We hope this work and the
accompanied paper repository can pave the path for the software engineering
community towards addressing the pressing industrial demand of secure
intelligent applications.
|
We present a new mouse cursor designed to facilitate the use of the mouse by
people with peripheral vision loss. The pointer consists of a collection of
converging straight lines covering the whole screen and following the position
of the mouse cursor. We measured its positive effects with a group of
participants with peripheral vision loss of different kinds, and we found that
it can reduce the time required to complete a targeting task with the mouse by
a factor of 7. Using eye tracking, we show that this system makes it possible
to initiate the movement towards the target without having to precisely locate
the mouse pointer. Using Fitts' Law, we compare these performances with those
of full visual field users in order to understand the relation between the
accuracy of the estimated mouse cursor position and the index of performance
obtained with our tool.
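
For reference, the Fitts' law formulation underlying the comparison, in its Shannon form; the intercept and slope below are illustrative, since the paper fits them per condition.

```python
# Fitts' law (Shannon formulation): T = a + b * log2(D/W + 1). The slope's
# reciprocal 1/b is the index of performance compared across cursor designs;
# a and b here are illustrative, not the paper's fitted values.
import math

def movement_time(D, W, a=0.2, b=0.15):
    """Predicted targeting time in seconds for distance D and target width W."""
    return a + b * math.log2(D / W + 1)

print(movement_time(D=800, W=40))  # distant, small target -> higher difficulty
```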
|
We show that groups satisfying Kazhdan's property (T) have no unbounded
actions on finite dimensional CAT(0) cube complexes, and deduce that there is a
locally CAT(-1) Riemannian manifold which is not homotopy equivalent to any
finite dimensional, locally CAT(0) cube complex.
|
We present observations (with the NAO Rozhen and AS Vidojevica telescopes) of
the AM Canum Venaticorum (AM CVn) type binary star CR Bootis (CR Boo) in the
UBV bands. The data were obtained on two nights in July 2019, when the V-band
brightness was in the range 16.1-17.0 mag. On both nights, variability with a
period of $25 (\pm 1)$ min and an amplitude of about 0.2 magnitudes was
visible. These brightness variations are most likely indications of "humps".
During our observations they appear with a period similar to the CR Boo orbital
period. A possible origin is the phase rotation of the bright spot located at
the contact point of the infalling matter and the outer disc edge. We estimated
some of the parameters of the binary system on the basis of the observational
data.
|
Data augmentation is a widely used trick when training deep neural networks:
in addition to the original data, properly transformed data are also added to
the training set. However, to the best of our knowledge, a clear mathematical
framework to explain the performance benefits of data augmentation is not
available. In this paper, we develop such a theoretical framework. We show data
augmentation is equivalent to an averaging operation over the orbits of a
certain group that keeps the data distribution approximately invariant. We
prove that it leads to variance reduction. We study empirical risk
minimization, and the examples of exponential families, linear regression, and
certain two-layer neural networks. We also discuss how data augmentation could
be used in problems with symmetry where other approaches are prevalent, such as
in cryo-electron microscopy (cryo-EM).
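
A toy check of the variance-reduction claim, assuming rotational invariance of the data law (a Rao-Blackwell-style averaging over the cyclic group of quarter-turns):

```python
# Toy check: when the data law is invariant under a group, averaging a
# statistic over the group orbit cannot increase its variance.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100_000, 2))      # law invariant under rotations

f = lambda x: x[:, 0] ** 2                 # a non-invariant statistic
rot90 = lambda x: np.stack([-x[:, 1], x[:, 0]], axis=1)

vals, Xk = [], X
for _ in range(4):                         # orbit of the quarter-turn group C4
    vals.append(f(Xk))
    Xk = rot90(Xk)
orbit_avg = np.mean(vals, axis=0)

print(f(X).var(), orbit_avg.var())         # orbit-averaged variance is halved
```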
|
Working in the setting of i.i.d. last-passage percolation on $\mathbb{R}^D$
with no assumptions on the underlying edge-weight distribution, we arrive
at the notion of grid entropy - a Subadditive Ergodic Theorem limit of the
entropies of paths with empirical measures weakly converging to a given target,
or equivalently a deterministic critical exponent of canonical order statistics
associated with the Levy-Prokhorov metric. This provides a fresh approach to an
entropy first developed by Rassoul-Agha and Sepp\"al\"ainen as a large
deviation rate function of empirical measures along paths. In their 2014 paper
arXiv:1202.2584, variational formulas are developed for the
point-to-point/point-to-level Gibbs Free Energies as the convex conjugates of
this entropy. We rework these formulas in our new framework and explicitly link
our descriptions of grid entropy to theirs. We also improve on a known bound
for this entropy by introducing a relative entropy term in the inequality.
Furthermore, we show that the set of measures with finite grid entropy
coincides with the deterministic set of limit points of empirical measures
studied in a recent paper arXiv:2006.12580 by Bates. In addition, we partially
answer a directed polymer version of a question of Hoffman which was previously
tackled in the zero temperature case by Bates. Our results cover both the
point-to-point and point-to-level scenarios.
|
Correlated time series are time series that, by virtue of the underlying
process to which they refer, are expected to influence each other strongly. We
introduce a novel approach to handle such time series, one that models their
interaction as a two-dimensional cellular automaton and therefore allows them
to be treated as a single entity. We apply our approach to the problems of
filling gaps and predicting values in rainfall time series. Computational
results show that the new approach compares favorably to Kalman smoothing and
filtering.
|
The mechanisms causing the reduction in lattice thermal conductivity in
highly P- and B-doped Si are examined in detail. Scattering rates of phonons by
point defects, as well as by electrons, are calculated from first principles.
Lattice thermal conductivities are calculated considering these scattering
mechanisms both individually and together. It is found that at low carrier
concentrations and temperatures phonon scattering by electrons is dominant and
can reproduce the experimental thermal conductivity reduction. However, at
higher doping concentrations the scattering rates of phonons by point defects
dominate those by electrons except at the lowest phonon frequencies.
Consequently, phonon scattering by point defects contributes substantially to
the thermal conductivity reduction in Si at defect concentrations above
$10^{19}$ cm$^{-3}$ even at room temperature. Only when phonon scattering by
both point defects and electrons is taken into account is excellent agreement
obtained with the experimental values at all temperatures.
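
When the mechanisms are treated together, the mode-resolved rates are presumably combined in the standard Matthiessen form (an assumption here, written alongside the intrinsic anharmonic term), where $\lambda$ labels the phonon mode and the terms are the anharmonic, point-defect, and electron-phonon scattering rates:

```latex
\frac{1}{\tau_\lambda} \;=\; \frac{1}{\tau_\lambda^{\mathrm{anh}}}
 \;+\; \frac{1}{\tau_\lambda^{\mathrm{pd}}}
 \;+\; \frac{1}{\tau_\lambda^{\mathrm{ep}}}.
```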
|
To enable robotic weed control, we develop algorithms to detect nutsedge weed
from bermudagrass turf. Due to the similarity between the weed and the
background turf, manual data labeling is expensive and error-prone.
Consequently, directly applying deep learning methods for object detection
cannot generate satisfactory results. Building on an instance detection
approach (i.e. Mask R-CNN), we combine synthetic data with raw data to train
the network. We propose an algorithm to generate high fidelity synthetic data,
adopting different levels of annotations to reduce labeling cost. Moreover, we
construct a nutsedge skeleton-based probabilistic map (NSPM) as the neural
network input to reduce the reliance on pixel-wise precise labeling. We also
modify the loss function from cross entropy to Kullback-Leibler divergence,
which accommodates uncertainty in the labeling process. We implement the
proposed algorithm and compare it with both Faster R-CNN and Mask R-CNN. The
results show that our design can effectively overcome the impact of imprecise
and insufficient training samples and significantly outperform the Faster
R-CNN counterpart, with a false negative rate of only 0.4%. In particular, our
approach also reduces labeling time by 95% while achieving better performance
compared with the original Mask R-CNN approach.
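
A minimal PyTorch sketch of the loss swap described above; the shapes and the soft target standing in for an NSPM-derived map are illustrative.

```python
# KL divergence against soft per-pixel targets in place of hard-label cross
# entropy; PyTorch's kl_div expects log-probabilities as input.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 2, 64, 64)                   # per-pixel class scores
soft_target = torch.softmax(torch.randn(4, 2, 64, 64), dim=1)  # e.g. NSPM-like

log_probs = F.log_softmax(logits, dim=1)
loss = F.kl_div(log_probs, soft_target, reduction="batchmean")
```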
|
In this paper, we propose a feature-free method for detecting phishing
websites using the Normalized Compression Distance (NCD), a parameter-free
similarity measure which computes the similarity of two websites by compressing
them, thus eliminating the need to perform any feature extraction. It also
removes any dependence on a specific set of website features. This method
examines the HTML of webpages and computes their similarity with known phishing
websites, in order to classify them. We use the Furthest Point First algorithm
to perform phishing prototype extractions, in order to select instances that
are representative of a cluster of phishing webpages. We also introduce the use
of an incremental learning algorithm as a framework for continuous and adaptive
detection without extracting new features when concept drift occurs. On a large
dataset, our proposed method significantly outperforms previous methods in
detecting phishing websites, with an AUC score of 98.68%, a high true positive
rate (TPR) of around 90%, while maintaining a low false positive rate (FPR) of
0.58%. Our approach uses prototypes, eliminating the need to retain long term
data in the future, and is feasible to deploy in real systems with a processing
time of roughly 0.3 seconds.
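
For concreteness, the NCD in its standard Cilibrasi-Vitányi form, with zlib standing in for the compressor (the paper's specific compressor choice is not assumed here):

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the
# compressed length; values near 0 mean highly similar inputs.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Classify a page by its distance to known phishing prototypes, e.g.:
# label = "phishing" if min(ncd(page_html, p) for p in prototypes) < threshold
```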
|
The mass assembly of a whole population of sub-Milky Way galaxies is studied
by means of hydrodynamical simulations within the $\Lambda$-CDM cosmology. Our
results show that while dark halos assemble hierarchically, in stellar mass
this trend is inverted in the sense that the smaller the galaxy, the later is
its stellar mass assembly on average. Our star formation and supernovae
feedback implementation in a multi-phase interstellar medium seems to play a
key role on this process. However, the obtained downsizing trend is not yet as
strong as observations show.
|
Effectively aligning Large Language Models (LLMs) with human-centric values
while preventing the degradation of abilities acquired through Pre-training and
Supervised Fine-tuning (SFT) poses a central challenge in Reinforcement
Learning from Human Feedback (RLHF). In this paper, we first discover that
interpolating RLHF and SFT model parameters can adjust the trade-off between
human preference and basic capabilities, thereby reducing the alignment tax at
the cost of alignment reward. Inspired by this, we propose integrating the RL
policy and SFT models at each optimization step in RLHF to continuously
regulate the training direction, introducing the Online Merging Optimizer.
Specifically, we merge gradients with the parameter differences between SFT and
pretrained models, effectively steering the gradient towards maximizing rewards
in the direction of SFT optimization. We demonstrate that our optimizer works
well with different LLM families, such as Qwen and LLaMA, across various model
sizes ranging from 1.8B to 8B, various RLHF algorithms like DPO and KTO, and
existing model merging methods. It significantly enhances alignment reward
while mitigating alignment tax, achieving higher overall performance across 14
benchmarks.
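
A heavily simplified sketch of the abstract's description, not the authors' exact optimizer: each update mixes the RLHF gradient with the SFT-minus-pretrained parameter delta, so the descent step retains a pull along the SFT optimization direction. The mixing weight alpha is an assumed hyperparameter.

```python
# Simplified "online merging" step: blend the RLHF gradient with the SFT
# direction (illustrative; not the published optimizer).
import torch

@torch.no_grad()
def online_merged_step(params, sft_params, pre_params, lr=1e-5, alpha=0.1):
    for p, p_sft, p_pre in zip(params, sft_params, pre_params):
        delta = p_sft - p_pre                    # SFT optimization direction
        merged = (1 - alpha) * p.grad - alpha * delta
        p -= lr * merged                         # net +lr*alpha*delta pull toward SFT
```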
|
Edge computing is the natural progression from Cloud computing, where,
instead of collecting all data and processing it centrally, like in a cloud
computing environment, we distribute the computing power and try to do as much
processing as possible, close to the source of the data. There are various
reasons this model is being adopted quickly, including privacy and reduced
power and bandwidth requirements on the Edge nodes. While it is common to see
inference being done on Edge nodes today, it is much less common to do training
on the Edge. The reasons for this range from computational limitations to it
not being advantageous in reducing communications between the Edge nodes. In
this paper, we explore some scenarios where it is advantageous to do training
on the Edge, as well as the use of checkpointing strategies to save memory.
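
One concrete checkpointing strategy of the kind discussed, sketched with PyTorch's built-in utility (the model below is illustrative): activations of the wrapped block are dropped in the forward pass and recomputed during backprop, trading compute for the memory that constrained Edge nodes lack.

```python
# Activation checkpointing: wrapped block's intermediate activations are
# recomputed on the backward pass instead of being stored.
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
)
x = torch.randn(32, 256, requires_grad=True)
y = checkpoint(block, x, use_reentrant=False)  # recomputed during backward
y.sum().backward()
```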
|
This paper considers simultaneous wireless information and power transfer
(SWIPT) in a multiple-input single-output (MISO) downlink system consisting of
one multi-antenna transmitter, one single-antenna information receiver (IR),
multiple multi-antenna eavesdroppers (Eves) and multiple single-antenna
energy-harvesting receivers (ERs). The main objective is to keep the
probability of the legitimate user's achievable secrecy rate outage as well as
the ERs' harvested energy outage caused by channel state information (CSI)
uncertainties below some prescribed thresholds. As is well known, the secrecy
rate outage constraints present a significant analytical and computational
challenge. Incorporating the energy harvesting (EH) outage constraints only
intensifies that challenge. In this paper, we address this challenging issue
using convex restriction approaches which are then proved to yield rank-one
optimal beamforming solutions. Numerical results reveal the effectiveness of
the proposed schemes.
|
In this work, we propose to explicitly use the landmarks of the prostate to
guide MR-TRUS image registration. We first train a deep neural network to
automatically localize a set of meaningful landmarks, and then directly
generate the affine registration matrix from the locations of these landmarks.
For landmark localization, instead of directly training a network to predict
the landmark coordinates, we propose to regress a full-resolution distance map
of each landmark, which proves effective in avoiding the statistical bias that
leads to unsatisfactory performance. We then use the predicted landmarks to
generate the affine transformation matrix, which outperforms the clinicians'
manual rigid registration by a significant margin in terms of TRE.
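
A minimal sketch of the second stage under an assumed least-squares formulation (names and shapes are illustrative; the paper's exact construction is not specified here):

```python
# Fit a 3D affine transform mapping MR landmarks onto TRUS landmarks by
# linear least squares over homogeneous coordinates.
import numpy as np

def affine_from_landmarks(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (N, 3) corresponding landmarks, N >= 4 non-coplanar points."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3) solution
    A = np.eye(4)
    A[:3, :] = M.T                                     # 3x4 affine block
    return A
```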
|
In this paper, the effects of disorder on dynamical quantum phase
transitions (DQPTs) in the transverse-field anisotropic XY chain are studied by
numerically calculating the Loschmidt echo after a quench. We obtain a formula
for calculating the Loschmidt echo of the inhomogeneous system in real space.
By comparing the results with those of the homogeneous chain, we find that when
the quench crosses the Ising transition, a small disorder causes a new
critical point. As the disorder increases, more critical points of the DQPTs
occur, constituting a critical region. In a quench across the anisotropic
transition, the disorder causes a critical region near the critical point, and
the width of the critical region increases with the disorder strength. In the
case of a quench passing through two critical lines, a small disorder leads the
system to have three additional critical points. When the quench is within the
ferromagnetic phase, a large disorder causes the two critical points of the
homogeneous case to become a critical region, and for a quench within the
paramagnetic phase, the DQPTs disappear for large disorder.
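
For reference, the standard definitions the computation rests on, for a quench from the ground state $|\psi_0\rangle$ of the initial Hamiltonian to evolution under the final Hamiltonian $H$; DQPTs appear as nonanalyticities of the rate function $\lambda(t)$ at critical times:

```latex
\mathcal{L}(t) = \bigl|\langle \psi_0 | e^{-iHt} | \psi_0 \rangle\bigr|^2,
\qquad
\lambda(t) = -\lim_{N \to \infty} \frac{1}{N} \ln \mathcal{L}(t).
```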
|
Despite the established convergence theory of Optimistic Gradient Descent
Ascent (OGDA) and Extragradient (EG) methods for the convex-concave minimax
problems, little is known about the theoretical guarantees of these methods in
nonconvex settings. To bridge this gap, for the first time, this paper
establishes the convergence of OGDA and EG methods under the
nonconvex-strongly-concave (NC-SC) and nonconvex-concave (NC-C) settings by
providing a unified analysis through the lens of single-call extra-gradient
methods. We further establish lower bounds on the convergence of GDA/OGDA/EG,
shedding light on the tightness of our analysis. We also conduct experiments
supporting our theoretical results. We believe our results will advance the
theoretical understanding of OGDA and EG methods for solving complicated
real-world nonconvex minimax problems, e.g., Generative Adversarial Network
(GAN) or robust neural network training.
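
For reference, the single-call optimistic update analyzed above, in its textbook form on a bilinear toy problem where plain GDA diverges (step size illustrative):

```python
# OGDA for min_x max_y f(x, y): each step reuses the previous gradient
# instead of a second "extra" gradient evaluation as in EG.
def ogda(grad, x, y, eta=0.1, n_iter=500):
    gx_prev, gy_prev = grad(x, y)
    for _ in range(n_iter):
        gx, gy = grad(x, y)
        x = x - eta * (2 * gx - gx_prev)   # optimistic descent on x
        y = y + eta * (2 * gy - gy_prev)   # optimistic ascent on y
        gx_prev, gy_prev = gx, gy
    return x, y

# Bilinear toy problem f(x, y) = x * y, with grad = (y, x):
x, y = ogda(lambda x, y: (y, x), 1.0, 1.0)
print(x, y)  # converges toward the saddle point (0, 0)
```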
|
The conditions of local thermodynamic equilibrium of baryons (non-strange,
strange) and mesons (strange) are presented for central Au + Au collisions at
FAIR energies using the microscopic transport model UrQMD. The net particle
density, longitudinal-to-transverse pressure anisotropy, and inverse slope
parameters of the energy spectra of non-strange and strange hadrons are
calculated inside a cell in the central region, within the rapidity window
$|y| < 1.0$, at different time steps after the collision. We observe that the
strangeness content is dominated by baryons at all energies; however, the
contribution from mesons becomes significant at higher energies. The time
scales obtained from local pressure (momentum) isotropization and from
thermalization of the energy spectra are nearly equal and are found to decrease
with increasing laboratory energy. The equilibrium thermodynamic properties of
the system are obtained with a statistical thermal model. The time evolution of
the entropy densities at FAIR energies is found to be very similar to the ideal
hydrodynamic behaviour at the top RHIC energy.
|
We prove a priori estimates for wave systems of the type \[ \partial_{tt} u -
\Delta u = \Omega \cdot du + F(u) \quad \text{in $\mathbb{R}^d \times
\mathbb{R}$} \]
where $d \geq 4$ and $\Omega$ is a suitable antisymmetric potential. We show
that the assumptions on $\Omega$ are applicable to wave- and half-wave maps,
the latter by means of the Krieger-Sire reduction. We thus obtain
well-posedness of those equations for small initial data in
$\dot{H}^{\frac{d}{2}}(\mathbb{R}^d)$.
|
Deep neural network algorithms are difficult to analyze because they lack a
structure that allows one to understand the properties of the underlying
transforms and invariants. Multiscale hierarchical convolutional networks are
structured deep
convolutional networks where layers are indexed by progressively higher
dimensional attributes, which are learned from training data. Each new layer is
computed with multidimensional convolutions along spatial and attribute
variables. We introduce an efficient implementation of such networks where the
dimensionality is progressively reduced by averaging intermediate layers along
attribute indices. Hierarchical networks are tested on CIFAR image data bases
where they obtain precisions comparable to state-of-the-art networks, with far
fewer parameters. We study some properties of the attributes learned from these
databases.
|
We use an atomic force microscope (AFM) to manipulate graphene films on a
nanoscopic length scale. By means of local anodic oxidation with an AFM, we are
able to structure insulating trenches into single-layer and few-layer graphene
flakes, opening the possibility of tabletop graphene-based device fabrication.
Trench sizes of less than 30 nm in width are attainable with this technique.
Besides oxidation, we also show the influence of mechanical peeling and
scratching of few-layer graphene sheets placed on different substrates with an
AFM.
|
The low-mass star GJ 1151 has been reported to display variable low-frequency
radio emission, which has been interpreted as a signpost of coronal star-planet
interactions with an unseen exoplanet. Here we report the first X-ray detection
of GJ 1151's corona based on XMM-Newton data. We find that the star displays a
small flare during the X-ray observation. Averaged over the observation, we
detect the star with a low coronal temperature of 1.6~MK and an X-ray
luminosity of $L_X = 5.5\times 10^{26}$\,erg/s. During the quiescent time
periods excluding the flare, the star remains undetected with an upper limit of
$L_{X,\,qui} \leq 3.7\times 10^{26}$\,erg/s. This is compatible with the
coronal assumptions used in a recently published model for a star-planet
interaction origin of the observed radio signals from this star.
|