In this paper, we count the number of matrices whose rows generate distinct
$\mathbb{Z}_2\mathbb{Z}_8$ additive codes. This is a natural generalization of
the well-known Gaussian numbers, which count the matrices whose rows generate
vector spaces of a particular dimension over finite fields. Owing to this
similarity, we call these numbers Mixed Generalized Gaussian Numbers (MGN). By
specialization, the MGN formula yields the well-known formulas for the number
of binary codes and the number of codes over $\mathbb{Z}_8,$ as well as for
additive $\mathbb{Z}_2\mathbb{Z}_4$ codes. We conclude with some properties
and examples of MGN numbers, which provide a good source of new number
sequences not listed in The On-Line Encyclopedia of Integer Sequences.
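For context, the classical Gaussian (q-binomial) number that the MGN generalize counts the $k$-dimensional subspaces of $\mathbb{F}_q^n$; its standard closed form (a well-known fact, not the MGN formula itself) is:

```latex
\binom{n}{k}_q \;=\; \frac{(q^{n}-1)(q^{n-1}-1)\cdots(q^{n-k+1}-1)}{(q^{k}-1)(q^{k-1}-1)\cdots(q-1)}
```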
|
Light emission in atomically thin heterostructures is known to depend on the
type of materials, number and stacking sequence of the constituent layers. Here
we show that the thickness of a two-dimensional substrate can be crucial in
modulating the light emission. We study the layer-dependent charge transfer in
vertical heterostructures built from monolayer tungsten disulphide (WS2) on
one- and two-layer epitaxial graphene, unravelling the effect that the
interlayer electronic coupling has on the excitonic properties of such
heterostructures. We provide evidence that the excitonic properties of WS2 can be
effectively tuned by the number of supporting graphene layers. Integrating WS2
monolayers with two-layer graphene leads to a significant enhancement of the
photoluminescence response, up to one order of magnitude higher compared to WS2
supported on one-layer graphene. Our findings highlight the importance of
substrate engineering when constructing atomically thin layered
heterostructures.
|
Damped oscillating afterglows in GRB 030329 and in SGR 1900+14 find a natural
explanation in a precessing gamma-ray jet model for both GRBs and SGRs. A very
thin jet cone (solid-angle ratio Delta Omega/Omega less than or near 10^{-8})
reconciles at once the supernova power with the apparent huge GRB output,
{dE/dt}_{GRBs} comparable to {dE/dt}_{SN} * Omega/(Delta Omega), leading to a
better understanding of the remarkable GRB-supernova connection shown in the
early GRB980425/SN1998bw event and in the most recent GRB030329/SN2003dh one.
The same thin beaming offers an understanding of the apparent SGR-X-ray-pulsar
power connection: {dE/dt}_{SGRs} comparable to {dE/dt}_{Xpuls}*Omega/(Delta
Omega). The precessing jet model for both GRBs and SGRs, at their different
luminosities, explains the existence of a few identical energy spectra and
time evolutions of these two kinds of sources, leading to their unified
understanding. The spinning-precessing jet explains the rare, mysterious X-ray
precursors in GRBs and SGRs. The multi-precessing jet at peak activity in all
bands may explain the puzzling X-ray or optical re-brightening bumps found in
the recent GRB030329 and in the earlier SGR 1900+14 events of 27 August 1998
and 18 April 2001, as well as the multi-bump radio light curves observed in
GRB980425 and GRB030329. The rarest microquasars in our galaxy, such as SS433,
and Herbig-Haro objects display the imprint of such thin precessing jets in
their 3D relic nebula shapes.
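To make the beaming arithmetic explicit (an illustration with round numbers chosen by us, not values quoted from the paper), a supernova-scale engine of order 10^{44} erg/s viewed through the quoted solid-angle ratio appears as a GRB-scale luminosity:

```latex
\left(\frac{dE}{dt}\right)_{\rm GRB}^{\rm apparent}
\;\simeq\; \left(\frac{dE}{dt}\right)_{\rm SN}\cdot\frac{\Omega}{\Delta\Omega}
\;\sim\; 10^{44}\,{\rm erg\,s^{-1}}\times 10^{8} \;=\; 10^{52}\,{\rm erg\,s^{-1}}.
```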
|
Computational efficiency and non-adversarial robustness are critical factors
in process modeling and optimization for real-world engineering applications.
Yet, conventional neural networks often fall short in addressing both
simultaneously, or even separately. Drawing insights from natural physical
systems and existing literature, it is known theoretically that an input convex
architecture will enhance computational efficiency, while a
Lipschitz-constrained architecture will bolster non-adversarial robustness.
However, integrating both properties into one model is a nontrivial task, as
enforcing one property may compromise the other. Therefore, in this work, we
develop a novel network architecture, termed Input Convex Lipschitz Recurrent
Neural Networks, that inherits the strengths of both convexity and Lipschitz
continuity. This model is explicitly designed for fast and robust
optimization-based tasks and outperforms existing recurrent units in terms of
computational efficiency and non-adversarial robustness. Additionally, we
have successfully implemented this model in various practical engineering
applications, such as optimization of chemical processes and real-world solar
irradiance prediction for Solar PV system planning at LHT Holdings in
Singapore. Source code is available at
https://github.com/killingbear999/ICLRNN.
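As a minimal sketch of how the two properties can coexist in one recurrent cell (our illustrative construction, not the authors' exact ICLRNN): input convexity can be preserved by keeping hidden-to-hidden weights non-negative and using a convex, non-decreasing activation, while a Lipschitz bound can be enforced by capping the spectral norms of the weight matrices.

```python
import torch
import torch.nn as nn

class ConvexLipschitzCell(nn.Module):
    """Toy recurrent cell that is convex in its input sequence and globally
    Lipschitz. Hidden-to-hidden weights are clamped non-negative (convexity
    survives ReLU, which is convex and non-decreasing), and both weight
    matrices are rescaled so their spectral norms stay below a bound L."""
    def __init__(self, n_in, n_hidden, lip_bound=1.0):
        super().__init__()
        self.W_x = nn.Parameter(torch.randn(n_hidden, n_in) * 0.1)
        self.W_h = nn.Parameter(torch.rand(n_hidden, n_hidden) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_hidden))
        self.L = lip_bound

    def _constrained(self, W, nonneg=False):
        if nonneg:
            W = torch.clamp(W, min=0.0)            # keep convexity in the input
        sigma = torch.linalg.matrix_norm(W, ord=2)  # spectral norm
        return W * (self.L / torch.clamp(sigma, min=self.L))  # cap it at L

    def forward(self, x_seq):                       # x_seq: (time, batch, n_in)
        h = torch.zeros(x_seq.shape[1], self.W_h.shape[0])
        Wx = self._constrained(self.W_x)
        Wh = self._constrained(self.W_h, nonneg=True)
        for x_t in x_seq:
            h = torch.relu(x_t @ Wx.T + h @ Wh.T + self.b)
        return h
```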
|
The evolution of lithium abundance over a star's lifetime is indicative of
transport processes operating in the stellar interior. We revisit the
relationship between lithium content and rotation rate previously reported for
cool dwarfs in the Pleiades cluster. We derive new LiI 670.8 nm equivalent
width measurements from high-resolution spectra obtained for low-mass Pleiades
members. We combine these new measurements with previously published ones, and
use the Kepler/K2 rotational periods recently derived for Pleiades cool dwarfs
to investigate the lithium-rotation connection in this 125 Myr-old cluster. The
new data confirm the correlation between lithium equivalent width and stellar
spin rate for a sample of 51 early K-type members of the cluster, where
fast-rotating stars are systematically lithium-rich compared to slowly
rotating ones. The correlation is valid for all stars over the (J-Ks) color range
0.50-0.70 mag, corresponding to a mass range from about 0.75 to 0.90 solar
mass, and may extend down to lower masses. We argue that the dispersion in
lithium equivalent widths observed for cool dwarfs in the Pleiades cluster
reflects an intrinsic scatter in lithium abundances, and suggest that the
physical origin of the lithium dispersion pattern is to be found in the
pre-main sequence rotational history of solar-type stars.
|
A precise modelling of the dynamics of bubbles nucleated during first-order
phase transitions in the early Universe is pivotal for a quantitative
determination of various cosmic relics, including the stochastic background of
gravitational waves. The equation of motion of the bubble front is affected by
the out-of-equilibrium distributions of particle species in the plasma which,
in turn, are described by the corresponding Boltzmann equations. In this work
we provide a solution to these equations by thoroughly incorporating the
non-linearities arising from the population factors. Moreover, our methodology
relies on a spectral decomposition that leverages the rotational properties of
the collision integral within the Boltzmann equations. This novel approach
allows for an efficient and robust computation of both the bubble speed and
profile. We also refine our analysis by including the contributions from the
electroweak gauge bosons. We find that their impact is dominated by the
infrared modes and proves to be non-negligible, contrary to naive
expectations.
|
We prove dilation-invariant inequalities involving radial functions,
polyharmonic operators, and weights that are powers of the distance from the
origin. We then discuss the existence of extremals, and in some cases we
compute the best constants.
|
The study of core partitions has been very active in recent years, with the
study of $(s,t)$-cores - partitions which are both $s$- and $t$-cores - playing
a prominent role. A conjecture of Armstrong, proved recently by Johnson, says
that the average size of an $(s,t)$-core, when $s$ and $t$ are coprime positive
integers, is $\frac1{24}(s-1)(t-1)(s+t+1)$. Armstrong also conjectured that the
same formula gives the average size of a self-conjugate $(s,t)$-core; this was
proved by Chen, Huang and Wang.
In the present paper, we develop the ideas from the author's paper [J.
Combin. Theory Ser. A 118 (2011) 1525-1539] studying actions of affine
symmetric groups on the set of $s$-cores in order to give variants of
Armstrong's conjectures in which each $(s,t)$-core is weighted by the
reciprocal of the order of its stabiliser under a certain group action.
Informally, this weighted average gives the expected size of the $t$-core of a
random $s$-core.
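The constant can be checked by brute force. The sketch below enumerates all $(s,t)$-cores via hook lengths, using the known fact that a core for coprime $(s,t)$ has size at most $(s^2-1)(t^2-1)/24$; for $(s,t)=(3,4)$ it finds 5 cores of average size 2, matching the formula:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_lengths(p):
    """Hook lengths of all cells of the partition p."""
    conj = [sum(1 for part in p if part > j) for j in range(p[0])] if p else []
    return [p[i] + conj[j] - i - j - 1
            for i in range(len(p)) for j in range(p[i])]

def is_core(p, s):
    """p is an s-core iff no hook length is divisible by s."""
    return all(h % s != 0 for h in hook_lengths(p))

def st_cores(s, t):
    """All (s,t)-cores, enumerated up to the known maximum core size."""
    max_size = (s * s - 1) * (t * t - 1) // 24
    return [p for n in range(max_size + 1) for p in partitions(n)
            if is_core(p, s) and is_core(p, t)]

s, t = 3, 4
cores = st_cores(s, t)
print(len(cores), sum(map(sum, cores)) / len(cores),
      (s - 1) * (t - 1) * (s + t + 1) / 24)   # -> 5 2.0 2.0
```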
|
We consider spin relaxation dynamics in cold Fermi gases with a pure-gauge
spin-orbit coupling corresponding to recent experiments. We show that such
experiments can give direct access to the collisional spin drag rate, and we
establish conditions for the observation of spin drag effects. In the recent
experiments the dynamics is found to be mainly ballistic, leading to new
regimes of reversible spin-relaxation-like processes.
|
The operator-of-time formalism is applied to radioactive decay. The proposed
approach offers better insight into the phenomenon: the exponential decay law
becomes the Boltzmann distribution of the Gibbs treatment of the canonical
ensemble. Radioactive decay is seen as a temporal canonical ensemble in which
the decay constant appears as the analog of the absolute temperature
multiplied by the Boltzmann constant. The stochastic character of the decay
process becomes plausible in the proposed approach, and an explanation is
offered for why decay is characterized by a constant, rather than by some
varying parameter.
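Schematically, the correspondence described above reads (our shorthand; the operator-of-time construction that makes it precise is in the paper):

```latex
e^{-\lambda t} \;\longleftrightarrow\; e^{-E/(k_{B}T)},
\qquad t \;\leftrightarrow\; E,
```

with the decay constant $\lambda$ playing the role of the temperature factor in the exponent.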
|
In this paper, a novel deterioration and damage identification procedure
(DIP) is presented and applied to building models. The challenge associated
with applications on these types of structures is related to the strong
correlation of responses, which gets further complicated when coping with real
ambient vibrations with high levels of noise. Thus, a DIP is designed that
utilizes low-cost ambient-vibration measurements, analyzing the acceleration
responses with the Stockwell transform (ST) to generate spectrograms.
Subsequently, the ST outputs
become the input of two series of Convolutional Neural Networks (CNNs)
established for identifying deterioration and damage to the building models. To
the best of our knowledge, this is the first time that both damage and
deterioration are evaluated on building models through a combination of ST and
CNN with high accuracy.
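As an illustration of the first stage of the pipeline, a minimal discrete Stockwell transform (a textbook implementation; the paper's exact ST settings, windowing, and CNN architecture may differ) is:

```python
import numpy as np

def stockwell(x):
    """Discrete Stockwell transform of a real 1-D signal.
    Returns S[f, t] for positive frequencies 1..N//2: for each voice n,
    the spectrum is circularly shifted, windowed by a frequency-domain
    Gaussian, and inverse-transformed back to time."""
    N = len(x)
    X = np.fft.fft(x)
    XX = np.concatenate([X, X])                  # cheap circular-shift access
    m = np.arange(N)
    m_shift = np.where(m <= N // 2, m, m - N)    # symmetric frequency index
    S = np.zeros((N // 2, N), dtype=complex)
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m_shift**2 / n**2)
        S[n - 1] = np.fft.ifft(XX[n:n + N] * gauss)
    return S

# Spectrogram input for the CNNs: e.g. np.abs(stockwell(acceleration_trace))
```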
|
We have obtained 12CO(1--0) data of the nearby barred spiral galaxy M83 from
Atacama Large Millimeter/submillimeter Array and Nobeyama 45m observations. By
combining these two data sets, the total CO flux has been recovered, and a high
angular resolution (2" corresponding to ~40 pc at the distance of M83) has been
achieved. The field of view is 3' corresponding to ~3.4 kpc and covers the
galactic center, bar, and spiral arm regions. In order to investigate how these
galactic structures affect gas properties, we have created probability
distribution functions (PDFs) of the CO integrated intensity (I_CO), peak
temperature, and velocity dispersion for regions hosting each structure. We find
that the I_CO PDF for the bar shows a bright-end tail while that for the arm
does not. Since the star formation efficiency is lower in the bar, this
difference in PDF shape is contrary to the trend in Milky Way studies where the
bright-end tail is found for star-forming molecular clouds. While the peak
temperature PDFs are similar for the bar and arm regions, the velocity
dispersion in the bar is systematically larger than in the arm. This large
velocity dispersion is likely a major cause of the bright-end tail and of the
suppressed star formation. We also investigate the effect of stellar feedback
on the PDF profiles and find that the different I_CO PDFs of the bar and arm
regions cannot be explained by the feedback effect, at least at the current
spatial scale.
|
Appearance variations result in many difficulties in face image analysis. To
deal with this challenge, we present a Unified Tensor-based Active Appearance
Model (UT-AAM) for jointly modelling the geometry and texture information of 2D
faces. For each type of face information, namely shape and texture, we
construct a unified tensor model capturing all relevant appearance variations.
This contrasts with the variation-specific models of the classical tensor AAM.
To achieve the unification across pose variations, a strategy for dealing with
self-occluded faces is proposed to obtain consistent shape and texture
representations of pose-varied faces. In addition, our UT-AAM is capable of
constructing the model from an incomplete training dataset, using tensor
completion methods. Lastly, we use an effective cascaded-regression-based
method for UT-AAM fitting. With these advancements, the utility of UT-AAM in
practice
is considerably enhanced. As an example, we demonstrate the improvements in
training facial landmark detectors through the use of UT-AAM to synthesise a
large number of virtual samples. Experimental results obtained using the
Multi-PIE and 300-W face datasets demonstrate the merits of the proposed
approach.
|
In the last decade, Human Activity Recognition (HAR) has become a vibrant
research area, especially due to the spread of electronic devices such as
smartphones, smartwatches and video cameras present in our daily lives. In
addition, the advance of deep learning and other machine learning algorithms
has allowed researchers to use HAR in various domains including sports, health
and well-being applications. For example, HAR is considered one of the most
promising assistive technologies for supporting the daily life of the elderly
by monitoring their cognitive and physical function through daily activities.
This survey focuses on the critical role of machine learning in developing HAR
applications based on inertial sensors in conjunction with physiological and
environmental sensors.
|
Superchannels leverage the flexibility of elastic optical networks and pave
the way to higher capacity channels in space division multiplexing (SDM)
networks. A superchannel consists of subchannels to which continuous spectral
grid slots are assigned. To guarantee superchannel operation, we need to
account for soft failures, e.g., laser drifts causing interference between
subchannels, wavelength-dependent performance variations, and filter
misalignments affecting the edge subchannels. This is achieved by reserving a
spectral guardband between subchannels or by employing a lower modulation
format. We propose a process that dynamically retunes the subchannel
transmitter (TX) lasers to compensate for soft failures during operation and
optimizes the total capacity or the minimum subchannel quality of transmission
(QoT) performance. We use an iterative stochastic subgradient method that at
each iteration probes the network and leverages monitoring information,
particularly subchannel signal-to-noise ratio (SNR) values, to optimize the TX
frequencies. Our results indicate that our proposed method always approaches
the optima found with an exhaustive search technique (itself unsuitable for
operating networks), irrespective of the number of subchannels, the modulation
format, the roll-off factor, the filter bandwidths, and the starting
frequencies. Considering a
four-subchannel superchannel, the proposed method achieves 2.47 dB and 3.73 dB
improvements for a typical soft failure of +/- 2 GHz subchannel frequency
drifts around the optimum, for the two examined objectives.
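The exact update rule is given in the paper; the following generic simultaneous-perturbation sketch conveys the probe-and-update loop, with `measure_min_snr` a hypothetical stand-in for the network monitoring interface:

```python
import numpy as np

def measure_min_snr(freqs):
    """Hypothetical monitoring hook: returns the minimum subchannel SNR (dB)
    observed after tuning the TX lasers to the given frequency offsets."""
    raise NotImplementedError

def retune(freqs, iters=50, step=0.05, probe=0.5):
    """Probe-based stochastic ascent of the minimum-subchannel-SNR objective
    (an SPSA-style sketch, not the paper's exact subgradient method).
    `freqs` are TX center-frequency offsets in GHz; `probe` is in GHz."""
    f = np.asarray(freqs, dtype=float)
    for _ in range(iters):
        delta = np.random.choice([-1.0, 1.0], size=f.shape)  # probe direction
        g = (measure_min_snr(f + probe * delta)
             - measure_min_snr(f - probe * delta)) / (2 * probe) * delta
        f += step * g                                        # ascend the SNR
    return f
```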
|
We propose a new form of human-machine interaction. It is a pictorial game
consisting of interactive rounds of creation between artists and a machine.
They paint in turn, one after the other. At its rounds, the computer
partially completes the drawing using machine learning algorithms and projects
its additions directly onto the canvas; the artists are free to incorporate or
modify these additions. Alongside fostering creativity, the process is
designed to question the
growing interaction between humans and machines.
|
The two most important notions of fractal dimension are {\it Hausdorff
dimension}, developed by Hausdorff (1919), and {\it packing dimension},
developed by Tricot (1982).
Lutz (2000) has recently proven a simple characterization of Hausdorff
dimension in terms of {\it gales}, which are betting strategies that generalize
martingales. Imposing various computability and complexity constraints on these
gales produces a spectrum of effective versions of Hausdorff dimension.
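For readers unfamiliar with gales, the standard definitions (following Lutz) are:

```latex
% An s-gale is a function d : \{0,1\}^* \to [0,\infty) satisfying
d(w) \;=\; 2^{-s}\bigl[\,d(w0) + d(w1)\,\bigr] \quad \text{for all } w \in \{0,1\}^*,
% a martingale being the case s = 1, and
\dim_{\mathrm H}(X) \;=\; \inf\{\, s : \text{some } s\text{-gale succeeds on every element of } X \,\}.
```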
In this paper we show that packing dimension can also be characterized in
terms of gales. Moreover, even though the usual definition of packing dimension
is considerably more complex than that of Hausdorff dimension, our gale
characterization of packing dimension is an exact dual of -- and every bit as
simple as -- the gale characterization of Hausdorff dimension.
Effectivizing our gale characterization of packing dimension produces a
variety of {\it effective strong dimensions}, which are exact duals of the
effective dimensions mentioned above.
We develop the basic properties of effective strong dimensions and prove a
number of results relating them to fundamental aspects of randomness,
Kolmogorov complexity, prediction, Boolean circuit-size complexity,
polynomial-time degrees, and data compression.
|
Fermi inhibition is a quantum statistical analogue for the inhibition of
spontaneous emission by an excited atom in a cavity. This is achieved when the
relevant motional states are already occupied by a cloud of cold atoms in the
internal ground state. We exhibit non-trivial effects at finite temperature and
in anisotropic traps, and briefly consider a possible experimental realization.
|
The aim of this paper is to explore the possibilities of Conley index
techniques in the study of heteroclinic connections between finite and infinite
invariant sets. For this, we remind the reader of the Poincar\'e
compactification: this transformation allows one to project an $n$-dimensional
vector space $X$ onto the $n$-dimensional unit hemisphere of $X\times
\mathbb{R}$, sending infinity to its $(n-1)$-dimensional equator, called the
sphere at infinity. Under a normalizability condition, vector fields on $X$
transform into vector fields on the Poincar\'e hemisphere whose associated
flows leave the equator invariant. The dynamics on the equator reflects the
dynamics at infinity, but is now finite and may be studied by Conley index
techniques. Furthermore, we observe that some non-isolated behavior may occur
around the equator, and we introduce the concept of invariant sets at infinity
with an isolated invariant dynamical complement. Through the construction of
an extended phase space together with an extended flow, we are able to adapt
the Conley index techniques and prove the existence of connections to such
non-isolated invariant sets.
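Concretely, the central projection underlying the Poincar\'e compactification can be written as (a standard formula, included for convenience):

```latex
X \ni x \;\longmapsto\; \frac{(x,\,1)}{\sqrt{1+\|x\|^{2}}} \;\in\; S^{n}_{+} \subset X \times \mathbb{R},
```

which sends $\|x\| \to \infty$ to the equator $\{(y,0) : \|y\| = 1\}$.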
|
Chaotic micromixers such as the staggered herringbone mixer developed by
Stroock et al. allow efficient mixing of fluids even at low Reynolds number by
repeated stretching and folding of the fluid interfaces. The ability of the
fluid to mix well depends on the rate at which "chaotic advection" occurs in
the mixer. The optimization of mixer geometries is a nontrivial task that is
often performed by time-consuming and expensive trial-and-error experiments.
In this paper an algorithm is presented that applies the concept of
finite-time Lyapunov exponents to obtain a quantitative measure of the chaotic
advection of the flow and hence of the performance of micromixers. By
performing lattice Boltzmann simulations of the flow inside a mixer geometry,
introducing massless, non-interacting tracer particles, and following their
trajectories, the finite-time Lyapunov exponents can be calculated. The
applicability of the
method is demonstrated by a comparison of the improved geometrical structure of
the staggered herringbone mixer with available literature data.
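A compact version of the FTLE computation from tracer trajectories (a sketch; `flow_map` is a placeholder that would wrap the lattice Boltzmann tracer integration):

```python
import numpy as np

def ftle(flow_map, x0, t_final, eps=1e-4):
    """Finite-time Lyapunov exponents at seed points x0 (shape (n, 3)).
    `flow_map(x, T)` must return the advected position of a tracer started
    at x after time T. The FTLE is obtained from the largest eigenvalue of
    the Cauchy-Green tensor of the finite-difference flow-map gradient."""
    n, d = x0.shape
    out = np.empty(n)
    for i, x in enumerate(x0):
        grad = np.empty((d, d))
        for j in range(d):                     # displace along each axis
            dx = np.zeros(d); dx[j] = eps
            grad[:, j] = (flow_map(x + dx, t_final)
                          - flow_map(x - dx, t_final)) / (2 * eps)
        lam_max = np.linalg.eigvalsh(grad.T @ grad)[-1]
        out[i] = np.log(np.sqrt(lam_max)) / t_final
    return out
```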
|
We consider serious conceptual problems with the application of standard
perturbation theory, in its zero temperature version, to the computation of the
dressed Fermi surface for an interacting electronic system. In order to
overcome these difficulties, we set up a variational approach which is shown to
be equivalent to the renormalized perturbation theory where the dressed Fermi
surface is fixed by recursively computed counterterms. The physical picture
that emerges is that couplings that are irrelevant tend to deform the Fermi
surface in order to become more relevant (irrelevant couplings being those that
do not exist at vanishing excitation energy because of kinematical constraints
attached to the Fermi surface). These insights are incorporated in a
renormalization group approach, which allows for a simple approximate
computation of the Fermi surface deformation in quasi-one-dimensional
electronic conductors. We also analyze flow equations for the effective
couplings and
quasiparticle weights. For systems away from half-filling, the flows show three
regimes corresponding to a Luttinger liquid at high energies, a Fermi liquid,
and a low-energy incommensurate spin-density wave. At half-filling Umklapp
processes allow for a Mott insulator regime where the dressed Fermi surface is
flat, implying a confined phase with vanishing effective transverse
single-particle coherence. The boundary between the confined and Fermi liquid
phases is found to occur for a bare transverse hopping amplitude of the order
of the Mott charge gap of a single chain.
|
We provide predictions on small-scale cosmological density power spectrum
from supernova lensing dispersion. Parameterizing the primordial power spectrum
with the running $\alpha$ and the running of the running $\beta$ of the
spectral index, we exclude large positive $\alpha$ and $\beta$ parameters,
which would induce lensing dispersions exceeding the current observational
upper bound. We ran cosmological
N-body simulations of collisionless dark matter particles to investigate
non-linear evolution of the primordial power spectrum with positive running
parameters. The initial small-scale enhancement of the power spectrum is
largely erased when entering into the non-linear regime. For example, even if
the linear power spectrum at $k>10h {\rm Mpc}^{-1}$ is enhanced by $1-2$ orders
of magnitude, the enhancement is much reduced, to a factor of $2-3$, at late
times ($z \leq 1.5$). Therefore, the lensing dispersion induced by the dark
matter fluctuations only weakly constrains the running parameters. When
including
baryon-cooling effects (which strongly enhance the small-scale clustering),
the constraint is comparable to or tighter than the Planck constraint,
depending on the UV cut-off. Further investigation of the non-linear matter
power spectrum with baryonic processes is needed to reach a firm constraint.
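For concreteness, the parameterization referred to above is the standard one in which the scalar spectral index runs with scale (notation and pivot $k_*$ follow common convention):

```latex
\mathcal{P}_{\mathcal R}(k) \;=\; A_s \left(\frac{k}{k_*}\right)^{\,n_s - 1
  \,+\, \frac{\alpha}{2}\ln\left(\frac{k}{k_*}\right)
  \,+\, \frac{\beta}{6}\ln^{2}\left(\frac{k}{k_*}\right)}.
```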
|
Despite considerable progress in developing artificial intelligence (AI)
algorithms for prostate cancer detection from whole slide images, the clinical
applicability of these models remains limited due to variability in
pathological annotations and existing dataset limitations. This article
proposes a novel approach to overcome these challenges by leveraging a Bayesian
framework to seamlessly integrate new data, and present results as a panel of
annotations. The framework is demonstrated by integrating a Bayesian prior with
one trained AI model to generate a distribution of Gleason patterns for each
pixel of an image. It is shown that using this distribution of Gleason patterns
rather than a ground-truth label can improve model applicability, mitigate
errors, and highlight areas of interest for pathologists. Additionally, we
present a high-quality, hand-curated dataset of prostate histopathological
images annotated at the gland level by trained pre-medical students and
verified by an expert pathologist. We highlight the potential of this adaptive
and uncertainty-aware framework for developing clinically deployable AI tools
that can support pathologists in accurate prostate cancer grading, improve
diagnostic accuracy, and lead to better patient outcomes.
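One simple way to realize the per-pixel distribution described above is a pointwise Bayes update of a prior over Gleason patterns with the model's softmax output (an illustrative sketch, not the paper's exact framework):

```python
import numpy as np

def pixel_posterior(model_probs, prior):
    """Combine per-pixel model probabilities with a prior over classes.
    model_probs: (H, W, C) softmax output of a trained AI model.
    prior:       (C,) prior over Gleason patterns (e.g. from annotator panels).
    Returns the (H, W, C) normalized posterior distribution per pixel."""
    post = model_probs * prior                  # pointwise Bayes numerator
    return post / post.sum(axis=-1, keepdims=True)
```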
|
I propose a model for determining the hearer's attentional state which
depends solely on a list of salient discourse entities (S-list). The ordering
among the elements of the S-list also covers the function of the
backward-looking center in the centering model. The ranking criteria for the
S-list are based on the distinction between hearer-old and hearer-new discourse
entities and incorporate preferences for inter- and intra-sentential anaphora.
The model is the basis for an algorithm which operates incrementally, word by
word.
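An illustrative skeleton of such an incremental algorithm (a drastic simplification of the ranking criteria described above; the field names are ours):

```python
def rank_key(entity):
    """Simplified S-list ranking: hearer-old entities outrank hearer-new
    ones; ties are broken by textual position."""
    return (0 if entity["hearer_old"] else 1, entity["position"])

def process_word(s_list, word_entity=None):
    """Incremental step: when the current word introduces or evokes a
    discourse entity, (re)insert it and keep the S-list sorted. An anaphor
    is then resolved to the first compatible entity in the list."""
    if word_entity is not None:
        s_list = [e for e in s_list if e["id"] != word_entity["id"]]
        s_list.append(word_entity)
        s_list.sort(key=rank_key)
    return s_list
```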
|
We present an approach called Q-probing to adapt a pre-trained language model
to maximize a task-specific reward function. At a high level, Q-probing sits
between heavier approaches such as finetuning and lighter approaches such as
few-shot prompting, but it can also be combined with either. The idea is to
learn
a simple linear function on a model's embedding space that can be used to
reweight candidate completions. We theoretically show that this sampling
procedure is equivalent to a KL-constrained maximization of the Q-probe as the
number of samples increases. To train the Q-probes we consider either reward
modeling or a class of novel direct policy learning objectives based on
importance weighted policy gradients. With this technique, we see gains in
domains with ground-truth rewards (code generation) as well as implicit rewards
defined by preference data, even outperforming finetuning in data-limited
regimes. Moreover, a Q-probe can be trained on top of an API since it only
assumes access to sampling and embeddings. Code:
https://github.com/likenneth/q_probe.
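A minimal sketch of the reweighting step (our reading of the description above; `generate` and `embed` are hypothetical stand-ins for the model's sampling and embedding interfaces):

```python
import numpy as np

def q_probe_sample(prompt, k, w, temp, generate, embed, rng=np.random):
    """Draw k candidate completions, score each with a linear Q-probe w on
    its embedding, and sample one via a softmax over the scores; as k grows
    this approximates a KL-constrained maximization of the Q-probe."""
    candidates = [generate(prompt) for _ in range(k)]
    scores = np.array([w @ embed(prompt, c) for c in candidates])
    p = np.exp((scores - scores.max()) / temp)
    p /= p.sum()
    return candidates[rng.choice(k, p=p)]
```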
|
We prove a unique ergodicity theorem for singular holomorphic foliations of
$\mathbb{P}^3(\mathbb{C})$ with hyperbolic singularities and with an invariant
plane with no foliation cycle, in analogy with a result of Dinh-Sibony
concerning unique ergodicity for foliations of $\mathbb{P}^2(\mathbb{C})$ with
an invariant line. The proof is dynamical in nature and adapts the work of
Deroin-Kleptsyn to a singular context, using the fundamental integrability
estimate of Nguy\^en.
|
We study quantum versions of the Shannon capacity of graphs and
non-commutative graphs. We introduce the asymptotic spectrum of graphs with
respect to quantum and entanglement-assisted homomorphisms, and we introduce
the asymptotic spectrum of non-commutative graphs with respect to
entanglement-assisted homomorphisms. We apply Strassen's spectral theorem (J.
Reine Angew. Math., 1988) in order to obtain dual characterizations of the
corresponding Shannon capacities and asymptotic preorders in terms of their
asymptotic spectra. This work extends the study of the asymptotic spectrum of
graphs initiated by Zuiddam (Combinatorica, 2019) to the quantum domain.
We then exhibit spectral points in the new quantum asymptotic spectra and
discuss their relations with the asymptotic spectrum of graphs. In particular,
we prove that the (fractional) real and complex Haemers bounds upper bound the
quantum Shannon capacity, which is defined as the regularization of the quantum
independence number (Man\v{c}inska and Roberson, J. Combin. Theory Ser. B,
2016), and that the fractional real and complex Haemers bounds are elements in
the quantum asymptotic spectrum of graphs. This is in contrast to the Haemers
bounds defined over certain finite fields, which can be strictly smaller than
the quantum Shannon capacity. Moreover, since the Haemers bound can be strictly
smaller than the Lov\'asz theta function (Haemers, IEEE Trans. Inf. Theory,
1979), we find that the quantum Shannon capacity and the Lov\'asz theta
function do not coincide. As a consequence, two well-known conjectures in
quantum information theory cannot both be true.
|
Given the task of positioning a ball-like object to a goal region beyond
direct reach, humans can often throw, slide, or rebound objects against the
wall to attain the goal. However, enabling robots to reason similarly is
non-trivial. Existing methods for physical reasoning are data-hungry and
struggle with complexity and uncertainty inherent in the real world. This paper
presents PhyPlan, a novel physics-informed planning framework that combines
physics-informed neural networks (PINNs) with modified Monte Carlo Tree Search
(MCTS) to enable embodied agents to perform dynamic physical tasks. PhyPlan
leverages PINNs to simulate and predict outcomes of actions in a fast and
accurate manner and uses MCTS for planning. It dynamically determines whether
to consult a PINN-based simulator (coarse but fast) or engage directly with the
actual environment (fine but slow) to determine optimal policy. Evaluation with
robots in simulated 3D environments demonstrates the ability of our approach to
solve 3D-physical reasoning tasks involving the composition of dynamic skills.
Quantitatively, PhyPlan excels in several aspects: (i) it achieves lower
regret when learning novel tasks compared to the state of the art, (ii) it
expedites skill learning and enhances the speed of physical reasoning, and
(iii) it demonstrates higher data efficiency than a physics-uninformed
approach.
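To illustrate the PINN ingredient only (a toy 1-D projectile example of ours; the paper's networks, tasks, and losses are richer), a network can learn a trajectory from the ODE residual plus initial conditions alone:

```python
import torch
import torch.nn as nn

# Toy PINN for 1-D projectile motion u'' = -g with u(0) = 0, u'(0) = 5.
g = 9.81
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)   # collocation times in [0, 1]
    u = net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), t, create_graph=True)[0]
    t0 = torch.zeros(1, 1, requires_grad=True)
    u0 = net(t0)
    v0 = torch.autograd.grad(u0.sum(), t0, create_graph=True)[0]
    loss = ((d2u + g) ** 2).mean() \
         + (u0 - 0.0).pow(2).mean() + (v0 - 5.0).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# After training, net(t) approximates u(t) = 5 t - 0.5 g t^2 without any
# trajectory data, which is what makes PINN rollouts fast to query.
```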
|
Typical Node.js applications extensively rely on packages hosted in the npm
registry. As such packages may be used by thousands of other packages or
applications, it is important to assess their code coverage. Moreover,
increasing code coverage may help detect previously unknown issues. In this
paper, we introduce TESA, a new tool that automatically assembles a test suite
for any package in the npm registry. The test suite includes 1) tests written
for the target package and usually hosted in its development repository, and 2)
tests selected from dependent packages. The former tests allow assessing the
code coverage of the target package, while the latter ones can increase code
coverage by exploiting third-party tests that also exercise code in the target
package. We use TESA to assess the code coverage of 500 popular npm packages.
Then, we demonstrate that TESA can significantly increase code coverage by
including tests from dependent packages. Finally, we show that the test suites
assembled by TESA increase the effectiveness of existing dynamic program
analyses to identify performance issues that are not detectable when only
executing the developer's tests.
|
A novel stearic acid (SA)/3-aminopropyltriethoxysilane (APS) composite
structure was fabricated using a combination of the Langmuir-Blodgett
technique and the self-assembled monolayer (SAM) technique. Its frictional and
adhesive properties and the interface contact types between the atomic force
microscope tip and the samples were evaluated based on Amontons' laws and the
general Carpick transition equation, respectively. The results showed that the
tip-sample contacts corresponded to the
Johnson-Kendall-Roberts/Derjaguin-Muller-Toporov (DMT) transition model for
SiO2, APS-SAMs, and the unheated SA-APS composite structure, and to the DMT
model for the heated SA-APS bilayer. Frictional forces for the four samples
were
linearly dependent on external loads at higher loads, and at lower loads they
were significantly affected by adhesive forces. Frictional and scratching tests
showed that the heated SA-APS composite structure exhibited the best
lubricating properties and adhesion resistance ability, and its wear resistance
capacity was greatly improved due to the binding-mode conversion from hydrogen
bonds to covalent bonds. Thus, this kind of composite bilayer might be
promising for applications in the lubrication of nano/microelectromechanical
systems.
|
The High-Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory, located on the
side of the Sierra Negra volcano in Mexico, has been fully operational since
2015. The HAWC collaboration has recently significantly improved its
extensive-air-shower reconstruction algorithms, which has notably advanced the
observatory performance. The energy resolution for primary gamma rays with
energies below 1~TeV was improved by including a noise-suppression algorithm.
Corrections have also been made to systematic errors in direction fitting
related to the detector and shower plane inclinations,
$\mathcal{O}(0.1^{\circ})$ biases in highly inclined showers, as well as
enhancements to the core reconstruction. The angular resolution for gamma rays
approaching the HAWC array from large zenith angles ($> 37^{\circ}$) has
improved by a factor of four at the highest energies ($> 70$~TeV) as compared
to previous reconstructions. The inclusion of a lateral distribution function
fit to the extensive air shower footprint on the array to separate gamma-ray
primaries from cosmic-ray ones, based on the resulting $\chi^{2}$ values,
improved the background rejection performance at all inclinations. At large
zenith angles, the improvement in significance is a factor of four compared to
previous HAWC publications. These enhancements have been verified by observing
the Crab Nebula, which is an overhead source for the HAWC Observatory. We show
that the sensitivity to Crab-like point sources ($E^{-2.63}$) at locations from
overhead to 30$^{\circ}$ zenith is comparable to or below 10\% of the Crab
Nebula's flux between 2 and 50~TeV. Thanks to these improvements, HAWC can now
detect more sources, including the Galactic Center.
|
Let $X, Y$ be smooth algebraic varieties of the same dimension. Let $f, g : X
\to Y$ be finite polynomial mappings. We say that $f, g$ are equivalent if
there exists a regular automorphism $\Phi \in Aut(X)$ such that $f = g\circ
\Phi$. Of course if $f, g$ are equivalent, then they have the same discriminant
and the same geometric degree. We show that, conversely, there are only
finitely many non-equivalent proper polynomial mappings $f : X \to Y$ such
that $D(f) = V$ and $\mu(f) = k.$ We prove the same statement in the local
holomorphic situation. In particular we show that if $f : (\Bbb C^n, 0) \to
(\Bbb C^n, 0)$ is a proper and holomorphic mapping of topological degree two,
then there exist biholomorphisms $P,Q : (\Bbb C^n, 0) \to (\Bbb C^n, 0)$ such
that $P\circ f\circ Q (x_1, x_2,..., x_n) = (x_1^2, x_2, ..., x_n)$. Moreover,
for every proper holomorphic mapping $f : (\Bbb C^n, 0) \to (\Bbb C^n, 0)$ with
smooth discriminant there exist biholomorphisms $P,Q : (\Bbb C^n, 0) \to (\Bbb
C^n, 0)$ such that $P\circ f\circ Q (x_1, x_2,..., x_n) = (x_1^k, x_2, ...,
x_n)$, where $k = \mu(f).$
|
The hypernetted-chain (HNC) and Percus-Yevick (PY) approximations are used
to study the phase diagram of a simple hard-core Yukawa model of
charge-stabilized colloidal particles in a two-dimensional system. We
calculate the static structure factor and the pair distribution function over
a wide range of parameters. Using the static correlation functions, we present
an estimate of the liquid-solid phase diagram over this wide parameter range.
|
At the CMS experiment, a growing reliance on the fast Monte Carlo application
(FastSim) will accompany the high luminosity and detector granularity expected
in Phase 2. The FastSim chain is roughly 10 times faster than the application
based on the GEANT4 detector simulation and full reconstruction referred to as
FullSim. However, this advantage comes at the price of decreased accuracy in
some of the final analysis observables. In this contribution, a machine
learning-based technique to refine those observables is presented. We employ a
regression neural network trained with a sophisticated combination of multiple
loss functions to provide post-hoc corrections to samples produced by the
FastSim chain. The results show considerably improved agreement with the
FullSim output and an improvement in correlations among output observables and
external parameters. This technique is a promising replacement for existing
correction factors, providing higher accuracy and thus contributing to the
wider usage of FastSim.
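The exact loss combination is described in the contribution; a minimal sketch of the refinement idea, here a residual regression network trained with MSE plus one plausible distribution-matching term (MMD), assuming event-paired FastSim/FullSim samples, is:

```python
import torch
import torch.nn as nn

def mmd(a, b, sigma=1.0):
    """Gaussian-kernel maximum mean discrepancy between two batches (one
    plausible distribution-matching term; the actual loss mix is richer)."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma**2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)

n_obs = 8                       # number of analysis observables to refine
refiner = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_obs))

def loss_fn(fast, full):
    refined = fast + refiner(fast)          # residual correction to FastSim
    return nn.functional.mse_loss(refined, full) + mmd(refined, full)
```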
|
It is proved that the information divergence statistic is infinitely more
Bahadur efficient than the power divergence statistics of orders $\alpha > 1$
as long as the sequence of alternatives is contiguous with respect to the
sequence of null hypotheses and the number of observations per bin increases
to infinity not too slowly. This improves the former result of Harremo\"es and
Vajda (2008), where the sequence of null hypotheses was assumed to be uniform
and the restrictions on the numbers of observations per bin were sharper.
Moreover, this paper also evaluates the Bahadur efficiency of the power
divergence statistics of the remaining positive orders $0 < \alpha \leq 1.$
The statistics of these orders are mutually Bahadur-comparable, and all of
them are more Bahadur efficient than the statistics of orders $\alpha > 1.$ A
detailed discussion of the technical definitions and conditions is given, some
unclear points are resolved, and the results are illustrated by examples.
|
One of the most popular time-frequency representations is certainly the
Wigner distribution. To reduce the interferences arising from its quadratic
nature, several related distributions have been proposed, among them the
so-called Born-Jordan distribution. It is well known that in the Born-Jordan
distribution the ghost frequencies are in fact damped quite well and the noise
is in general reduced. However, the horizontal and vertical directions escape
this general smoothing effect, so that interferences arranged along these
directions are in general retained. Whereas these features are graphically
evident in examples and heuristically well understood in the engineering
community, there is at present no mathematical explanation of these phenomena
valid for general signals in L^2 and, more generally, in the space S' of
tempered distributions. In the present note we provide such a rigorous study
using the notion of the wave-front set of a distribution. We use techniques
from time-frequency analysis, such as modulation and Wiener amalgam spaces, as
well as results on the microlocal regularity of linear partial differential
operators.
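For reference, the Wigner distribution of a signal f is (standard definition, up to normalization conventions):

```latex
Wf(x,\omega) \;=\; \int_{\mathbb{R}} f\!\left(x+\tfrac{t}{2}\right)
  \overline{f\!\left(x-\tfrac{t}{2}\right)}\, e^{-2\pi i t\omega}\, dt,
```

and the Born-Jordan distribution is the member of the Cohen class obtained from $Wf$ via the sinc kernel $\Theta(\xi,x)=\frac{\sin(\pi\xi x)}{\pi\xi x}$.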
|
Segmenting the retinal vasculature entails a trade-off between how much of
the overall vascular structure we identify vs. how precisely we segment
individual vessels. In particular, state-of-the-art methods tend to
under-segment faint vessels, as well as pixels that lie on the edges of thicker
vessels. Thus, they underestimate the width of individual vessels, as well as
the ratio of large to small vessels. More generally, many crucial
bio-markers---including the artery-vein (AV) ratio, branching angles, number
of bifurcations, fractal dimension, tortuosity, vascular length-to-diameter
ratio and wall-to-lumen length---require precise measurements of individual
vessels.
To address this limitation, we propose a novel, stochastic training scheme for
deep neural networks that better classifies the faint, ambiguous regions of the
image. Our approach relies on two key innovations. First, we train our deep
networks with dynamic weights that fluctuate during each training iteration.
This stochastic approach forces the network to learn a mapping that robustly
balances precision and recall. Second, we decouple the segmentation process
into two steps. In the first half of our pipeline, we estimate the likelihood
of every pixel and then use these likelihoods to segment pixels that are
clearly vessel or background. In the latter part of our pipeline, we use a
second network to classify the ambiguous regions in the image. Our proposed
method obtained state-of-the-art results on five retinal datasets---DRIVE,
STARE, CHASE-DB, AV-WIDE, and VEVIO---by learning a robust balance between
false positive and false negative rates. In addition, we are the first to
report segmentation results on the AV-WIDE dataset, and we have made the
ground-truth annotations for this dataset publicly available.
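A minimal sketch of the first innovation, with the positive-class weight of the binary cross-entropy re-drawn at every training iteration (illustrative; the paper's exact weighting scheme may differ):

```python
import torch
import torch.nn.functional as F

def stochastic_weighted_bce(logits, target):
    """Binary cross-entropy whose positive-class (vessel) weight is re-drawn
    each call, so over training the network cannot overfit one fixed
    precision/recall trade-off. `target` is a float mask of vessel pixels."""
    w = torch.empty(1).uniform_(0.5, 2.0)   # fluctuating vessel weight
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=w)
```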
|
We show that the dynamics of the kinematic space of a 2-dimensional CFT is
gravitational and described by Jackiw-Teitelboim theory. We discuss the first
law of this 2-dimensional dilaton gravity theory to support the relation
between modular Hamiltonian and dilaton that underlies the kinematic space
construction. It is further argued that Jackiw-Teitelboim gravity can be
derived from a 2-dimensional version of Jacobson's maximal vacuum entanglement
hypothesis. Applied to the kinematic space context, this leads us to the
statement that the kinematic space of a 2-dimensional boundary CFT can be
obtained from coupling the boundary CFT to JT gravity through a maximal vacuum
entanglement principle.
|
Using a recent Mergelyan-type theorem, we show the existence of universal
Taylor series on products of planar simply connected domains Oi that extend
continuously to the product of the unions of Oi with Si, where Si are subsets
of the boundary of Oi that are open in the relative topology. The universal
approximation occurs on every product of compact sets Ki such that C - Ki are
connected and, for some i0, Ki0 is contained in the complement of the union of
Oi0 with the closure of Si0. Furthermore, we introduce some topological
properties of universal Taylor series that lead to the emptiness of some
families of such functions.
|
BD+53 2790, an O9.5Vp star, is the optical counterpart of the HMXRB 4U
2206+54. This system was initially classified as a Be/X-ray binary, but
observational evidence soon stressed the need to revise this classification.
The permanent asymmetry of the H-alpha line profiles (in contrast with the
cyclic variations shown by Be stars), the variations in the profile of this
line on time scales of hours (while time scales from weeks to months are
expected in Be stars), and the lack of correlation between IR observables and
H-alpha line parameters strongly suggest that, while BD+53 2790 contains a
circumstellar disc, it is not like the one present in Be stars. Furthermore,
there is evidence of an overabundance of He in BD+53 2790. Together with the
presence of an anomalous wind, found through UV spectroscopy, this opens the
possibility of linking this star to the group of He-rich stars. We discuss the
work done with IUE data of BD+53 2790 and the unexpected finding of a slow and
dense wind, very rare for an O9.5V star.
|
We report resistivity as well as the Hall, Seebeck and Nernst coefficients
data for Fe{1+d}Te{1-x}Se{x} single crystals with x = 0, 0.38, and 0.40. In the
parent compound Fe{1.04}Te we observe at Tn = 61 K a sudden change of all
quantities studied, which can be ascribed to the Fermi surface reconstruction
due to onset of the antiferromagnetic order. Two very closely doped samples:
Fe{1.01}Te{0.62}Se{0.38} (Se38) and Fe{1.01}Te{0.60}Se{0.40} (Se40) are
superconductors with Tc = 13.4 K and 13.9 K, respectively. There are no evident
magnetic transitions in either Se38 or Se40. The properties of these two
single crystals are almost identical at high temperatures, but start to
diverge below T ~ 80 K. We may be seeing the onset of scattering related to
changes in short-range magnetic correlations caused by selenium doping.
|
We calculate the optical (cutoff >> frequency >> temperature) conductivity in
clean graphene in the ultimate low-energy regime, when retardation effects of
the electromagnetic interaction become important and when the full Lorentz
symmetry emerges. In contrast to what happens with short-range or with
long-range instantaneous Coulomb interactions, the optical conductivity is no
longer equal to its non-interacting value, but acquires universal corrections
in powers of the fine-structure constant. The coefficient of the first-order
correction is computed and found to be of order one. We also present the
result for the conductivity in the large-$N$ limit, with $N$ the number of
Dirac fermion species, to order $1/N^2$.
|
In this paper we discuss progress made in the study of the Jones polynomial
from the point of view of quantum mechanics. This study reduces to the
understanding of the quantization of the moduli space of flat SU(2)-connections
on a surface with the Chern-Simons lagrangian. We outline some background
material, then present the particular example of the torus, in which case it is
known that the quantization in question is the Weyl quantization. The paper
concludes with a possible application of this theory to the study of the
fractional quantum Hall effect, an idea originating in the works of Moore and
Read.
|
We have recently introduced a novel statistical measure of dark matter
clustering in phase space, the particle phase space average density ($P^2SAD$).
In a two-paper series, we studied the structure of $P^2SAD$ in the
Milky-Way-size Aquarius haloes, constructed a physically motivated model to
describe it, and illustrated its potential as a powerful tool to predict
signals sensitive to the nanostructure of dark matter haloes. In this letter,
we report a remarkable universality of the clustering of dark matter in phase
space as measured by $P^2SAD$ within the subhaloes of host haloes across
different environments covering a range from dwarf-size to cluster-size haloes
($10^{10}-10^{15}$ M$_\odot$). Simulations show that the universality of
$P^2SAD$ holds for more than 7 orders of magnitude, over a 2D phase space,
covering over 3 orders of magnitude in distance/velocity, with a simple
functional form that can be described by our model. Invoking the universality
of $P^2SAD$, we can accurately predict the non-linear power spectrum of dark
matter at small scales all the way down to the decoupling mass limit of cold
dark matter particles. As an application, we compute the subhalo boost to the
annihilation of dark matter in a wide range of host halo masses.
|
Following the original idea of Debye, we define and extract a gauge-invariant
screening mass from the complex static in-medium heavy-quark potential
$V_{Q\bar{Q}}$, recently obtained from lattice QCD. To this end we derive a
field-theoretically motivated analytic formula that faithfully reproduces both
the screened real part as well as the imaginary part of the lattice potential
with a single temperature-dependent fit parameter $m_D(T)$. Using values of
the real
part of $V_{Q\bar{Q}}$ in a gluonic medium, we obtain Debye masses compatible
with predictions from HTL perturbation theory.
|
Distributed systems in general and cloud systems in particular, are
susceptible to failures that can lead to substantial economic and data losses,
security breaches, and even potential threats to human safety. Software ageing
is an example of one such vulnerability. It emerges from the routine re-use of
computational system units, which induces fatigue within the components,
resulting in an increased failure rate and potential system breakdown. Due to
its stochastic nature, ageing cannot be measured directly; instead, ageing
indicators are used as proxies. While there are dozens of studies on different
ageing indicators, a comprehensive comparison of them in different settings
remains underexplored. In this paper, we compare two ageing indicators in
OpenStack as
a use case. Specifically, our evaluation compares memory usage (including swap
memory) and request response time, as readily available indicators. By
executing multiple OpenStack deployments with varying configurations, we
conduct a series of experiments and analyze the ageing indicators. Comparative
analysis through statistical tests provides valuable insights into the
strengths and weaknesses of the utilised ageing indicators. Finally, through an
in-depth analysis of other OpenStack failures, we identify underlying failure
patterns and their impact on the studied ageing indicators.
|
Human activity recognition aims to recognize the activities of daily living
by utilizing sensors on different body parts. However, when labeled data from
a certain body position (i.e., the target domain) are missing, how can we
leverage the data from other positions (i.e., the source domains) to help
learn the activity labels of this position? When several source domains are
available, it is often difficult to select the source domain most similar to
the target domain. With the selected source domain, we then need to perform
accurate knowledge transfer between domains. Existing methods only learn the
global
distance between domains while ignoring the local property. In this paper, we
propose a \textit{Stratified Transfer Learning} (STL) framework to perform both
source domain selection and knowledge transfer. STL is based on our proposed
\textit{Stratified} distance to capture the local property of domains. STL
consists of two components: Stratified Domain Selection (STL-SDS) can select
the most similar source domain to the target domain; Stratified Activity
Transfer (STL-SAT) is able to perform accurate knowledge transfer. Extensive
experiments on three public activity recognition datasets demonstrate the
superiority of STL. Furthermore, we extensively investigate the performance of
transfer learning across different degrees of similarities and activity levels
between domains. We also discuss the potential applications of STL in other
fields of pervasive computing for future research.
|
Stein discrepancies have emerged as a powerful tool for retrospective
improvement of Markov chain Monte Carlo output. However, the question of how to
design Markov chains that are well-suited to such post-processing has yet to be
addressed. This paper studies Stein importance sampling, in which weights are
assigned to the states visited by a $\Pi$-invariant Markov chain to obtain a
consistent approximation of $P$, the intended target. Surprisingly, the optimal
choice of $\Pi$ is not identical to the target $P$; we therefore propose an
explicit construction for $\Pi$ based on a novel variational argument. Explicit
conditions for convergence of Stein $\Pi$-Importance Sampling are established.
For $\approx 70\%$ of tasks in the PosteriorDB benchmark, a significant
improvement over the analogous post-processing of $P$-invariant Markov chains
is reported.
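To fix ideas, the post-processing step assigns weights to the visited states by minimizing a kernel Stein discrepancy; a minimal 1-D sketch for a standard normal target $P$ with an inverse-multiquadric base kernel (illustrating only the weighting step, not the paper's choice of $\Pi$) is:

```python
import numpy as np
from scipy.optimize import minimize

def stein_kernel(x, y, score):
    """Langevin Stein kernel k_P built from the IMQ base kernel
    k(x,y) = (1 + (x-y)^2)^(-1/2) and the target score function."""
    u = x[:, None] - y[None, :]
    r = 1.0 + u**2
    k = r**-0.5
    dkx, dky = -u * r**-1.5, u * r**-1.5
    dkxy = r**-1.5 - 3.0 * u**2 * r**-2.5
    sx, sy = score(x)[:, None], score(y)[None, :]
    return dkxy + dkx * sy + dky * sx + k * sx * sy

# Samples (here i.i.d. N(0, 1.5^2) draws standing in for a Pi-invariant
# chain) are reweighted toward the target P = N(0, 1), whose score is -x.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.5, size=100)
K = stein_kernel(x, x, score=lambda z: -z)

n = len(x)
res = minimize(lambda w: w @ K @ w, np.full(n, 1.0 / n),
               jac=lambda w: 2.0 * K @ w,
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
weights = res.x   # minimizes the kernel Stein discrepancy of the weighted set
```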
|
This paper proves that robustness implies generalization via data-dependent
generalization bounds. As a result, robustness and generalization are shown to
be connected closely in a data-dependent manner. Our bounds improve previous
bounds in two directions, to solve an open problem that has seen little
development since 2010. The first is to reduce the dependence on the covering
number. The second is to remove the dependence on the hypothesis space. We
present several examples, including ones for lasso and deep learning, in which
our bounds are provably preferable. The experiments on real-world data and
theoretical models demonstrate near-exponential improvements in various
situations. To achieve these improvements, we do not require additional
assumptions on the unknown distribution; instead, we only incorporate an
observable and computable property of the training samples. A key technical
innovation is an improved concentration bound for multinomial random variables
that is of independent interest beyond robustness and generalization.
|
We investigate the hot electrons generated from two-plasmon decay (TPD)
instability driven by laser pulses with intensity modulated by a frequency
$\Delta \omega_m$. Our primary focus lies on scenarios where $\Delta \omega_m$
is of the same order as the TPD growth rate $\gamma_0$ ($\Delta \omega_m \sim
\gamma_0$), corresponding to moderate laser frequency bandwidths for TPD
mitigation. With $\Delta \omega_m$ conveniently modeled by a basic two-color
scheme of the laser wave fields in fully-kinetic particle-in-cell simulations,
we demonstrate that the energies of TPD modes and hot electrons exhibit
intermittent evolution at the frequency $\Delta \omega_m$, particularly when
$\Delta \omega_m \sim \gamma_0$. With the dynamic TPD behavior, the overall
ratio of hot electron energy to the incident laser energy, $f_{hot}$, changes
significantly with $\Delta \omega_m$. While $f_{hot}$ drops notably with
increasing $\Delta \omega_m$ at large $\Delta \omega_m$ limit as expected, it
goes anomalously beyond the hot electron energy ratio for a single-frequency
incident laser pulse with the same average intensity when $\Delta \omega_m$
falls below a specific threshold frequency $\Delta \omega_c$. We find this
threshold frequency primarily depends on $\gamma_0$ and the collisional damping
rate of plasma waves, with relatively lower sensitivity to the density scale
length. We develop a scaling model characterizing the relation between $\Delta
\omega_c$ and the laser-plasma conditions, enabling the potential extension of
our findings to more complex and realistic scenarios.
|
We investigate dissipative anomalies in a turbulent fluid governed by the
compressible Navier-Stokes equation. We follow an exact approach pioneered by
Onsager, which we explain as a non-perturbative application of the principle of
renormalization-group invariance. In the limit of high Reynolds and P\'eclet
numbers, the flow realizations are found to be described as distributional or
"coarse-grained" solutions of the compressible Euler equations, with standard
conservation laws broken by turbulent anomalies. The anomalous dissipation of
kinetic energy is shown to be due not only to local cascade, but also to a
distinct mechanism called pressure-work defect. Irreversible heating in
stationary, planar shocks with an ideal-gas equation of state exemplifies the
second mechanism. Entropy conservation anomalies are also found to occur by two
mechanisms: an anomalous input of negative entropy (negentropy) by
pressure-work and a cascade of negentropy to small scales. We derive
"4/5th-law"-type expressions for the anomalies, which allow us to characterize
the singularities (structure-function scaling exponents) required to sustain
the cascades. We compare our approach with alternative theories and empirical
evidence. It is argued that the "Big Power-Law in the Sky" observed in electron
density scintillations in the interstellar medium is a manifestation of a
forward negentropy cascade, or an inverse cascade of usual thermodynamic
entropy.
|
We suggest a new class of hyperbolic metamaterials for THz frequencies based
on multilayer graphene structures. We calculate the dielectric permittivity
tensor of the effective nonlocal medium with a periodic stack of graphene
layers and demonstrate that tuning from elliptic to hyperbolic dispersion can
be achieved with an external gate voltage. We reveal that such graphene
structures can demonstrate a giant Purcell effect that can be used for boosting
the THz emission in semiconductor devices. Tunability of these structures can
be enhanced further with an external magnetic field which leads to the
unconventional hybridization of the TE and TM polarized waves.
|
Various widely-used mean-field type theories for a dilute Bose gas are
critically examined in the light of the recent discovery of Bose-Einstein
condensation of atomic gases in a confined geometry. By numerically solving
the mean-field equations within the framework of the Bogoliubov approximation,
both the stationary non-uniform case and the case of a vortex under rotation
in a cylindrically symmetric vessel are investigated. We obtain the spatial
structures of the condensate, the non-condensate, and the anomalous
correlation. The low-lying excitation spectra, the local density of states,
and the circulating current density in a vortex corresponding to various
levels of mean-field theory are predicted.
|
We study the rigidity properties of Grassmannian frames: basis-like sets of
unit vectors that correspond to optimal Grassmannian line packings. It is known
that Grassmannian frames characterized by the Welch bound must satisfy the
restrictive geometric and spectral conditions of being both equiangular and
tight; however, less is known about the necessary properties of other types of
Grassmannian frames. We examine explicit low-dimensional examples of
orthoplectic Grassmannian frames and conclude that, in general, the necessary
conditions for the existence of Grassmannian frames can be much less
restrictive. In particular, we exhibit a pair of $5$-element Grassmannian
frames in $\mathbb C^2$ with differently sized angle sets and different
reconstructive properties (i.e., only one of them is a tight frame). This
illustrates the complexity of the line packing problem, as there are cases
where a solution may coexist with another solution of a different geometric and
spectral character. Nevertheless, we find that these "twin" instances still
respect a certain rigidity, as there is a necessary trade-off between their
tightness properties and the cardinalities of their angle sets. The proof of
this depends on the observation that the traceless embedding of Conway, Hardin
and Sloane sends the vectors of a unit-norm, tight frame to a zero-summing set
on a higher dimensional sphere. In addition, we review some of the known bounds
for characterizing optimal line packings in $\mathbb C^2$ and discuss several
examples of Grassmannian frames achieving them.
|
The global structure of galaxy clusters and its evolution are tested within a
large set of TREESPH simulations, so to allow a fair statistical comparison
with available X-ray data. Structure tests are based on the "power ratios",
introduced by Buote & Tsai. Cosmological models considered are CDM, LCDM
(Omega_L=0.7) and CHDM (1 massive neutrino, Omega_h = 0.2). All models are normalized
to provide a fair number density of clusters. For each model we run a P3M
simulation in a large box, where we select the most massive 40 clusters. Going
back to the initial redshift we run a hydro-TREESPH simulation for each of
them. In this way we perform a statistical comparison of the global morphology
of clusters, for each cosmological model, with ROSAT data, using Student
t-test, F-test and K-S test. The last test and its generalization to 2--D
distributions are also used to compare the joint distributions of 2 or 3 power
ratios. We find that, using DM distribution, instead of gas, as done by some
authors, leads to biased results, as baryons are distributed in a less
structured way than DM. We also find that the cosmological models considered
have different behaviours in these tests: LCDM has the worst performance. CDM
and our CHDM have similar scores. The general trend of power ratio
distributions is already fitted by these models, but a further improvement is
expected either from a different DM mix or a non-flat CDM model.
|
The entanglement properties of a multiparty pure state are invariant under
local unitary transformations. The stabilizer dimension of a multiparty pure
state characterizes how many types of such local unitary transformations
exist for the state. We find that the stabilizer dimension of an $n$-qubit
($n\ge 2$) graph state is associated with three specific configurations in its
graph. We further show that the stabilizer dimension of an $n$-qubit ($n\ge 3$)
graph state is equal to the degree of irreducible two-qubit correlations in the
state.
|
We consider the family of non-local and non-convex functionals introduced by
H. Brezis and H.-M. Nguyen in a recent paper. These functionals Gamma-converge
to a multiple of the Sobolev norm or the total variation, depending on a
summability exponent, but the exact values of the constants are unknown in many
cases.
We describe a new approach to the Gamma-convergence result that leads in some
special cases to the exact value of the constants, and to the existence of
smooth recovery families.
|
One of the unique features of non-Hermitian Hamiltonians is the non-Hermitian
skin effect, namely that the eigenstates are exponentially localized at the
boundary of the system. For open quantum systems, a short-time evolution can
often be well described by the effective non-Hermitian Hamiltonians, while
long-time dynamics calls for the Lindblad master equations, in which the
Liouvillian superoperators generate time evolution. In this Letter, we find
that Liouvillian superoperators can exhibit the non-Hermitian skin effect, and
uncover its unexpected physical consequences. It is shown that the
non-Hermitian skin effect dramatically shapes the long-time dynamics, such that
the damping in a class of open quantum systems is algebraic under periodic
boundary condition but exponential under open boundary condition. Moreover, the
non-Hermitian skin effect and non-Bloch bands cause a chiral damping with a
sharp wavefront. These phenomena are beyond the effective non-Hermitian
Hamiltonians; instead, they belong to the non-Hermitian physics of full-fledged
open quantum dynamics.
|
The amount of energy consumed during the execution of software, and the
ability to predict future consumption, is an important factor in the design of
embedded electronic systems. In this technical report I examine factors in the
execution of software that can affect energy consumption. Taking a simple
embedded software benchmark, I measure to what extent input data can affect
energy consumption, and propose a method for reflecting this in software energy
models.
|
In this work, we study two fundamental graph optimization problems, minimum
vertex cover (MVC) and maximum-cardinality matching (MCM), for intersection
graphs of geometric objects, e.g., disks, rectangles, hypercubes, etc., in
$d$-dimensional Euclidean space. We consider the problems in fully dynamic
settings, allowing insertions and deletions of objects.
We develop a general framework for dynamic MVC in intersection graphs,
achieving sublinear amortized update time for most natural families of
geometric objects. In particular, we show the following:
- For a dynamic collection of disks in $\mathbb{R}^2$ or hypercubes in
$\mathbb{R}^d$ (for constant $d$), it is possible to maintain a
$(1+\varepsilon)$-approximate vertex cover in polylog amortized update time.
These results also hold in the bipartite case.
- For a dynamic collection of rectangles in $\mathbb{R}^2$, it is possible to
maintain a $(\frac{3}{2}+\varepsilon)$-approximate vertex cover in polylog
amortized update time.
Along the way, we obtain the first near-linear time static algorithms for MVC
in the above two cases with the same approximation factors.
Next, we turn our attention to the MCM problem. Although our MVC algorithms
automatically allow us to approximate the size of the MCM in bipartite
geometric intersection graphs, they do not produce a matching. We give another
general framework to maintain an approximate maximum matching, and further
extend the approach to handle non-bipartite intersection graphs. In particular,
we show the following:
- For a dynamic collection of (bichromatic or monochromatic) disks in
$\mathbb{R}^2$ or hypercubes in $\mathbb{R}^d$ (for constant $d$), it is
possible to maintain a $(1+\varepsilon)$-approximate matching in polylog
amortized update time.
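As a toy illustration of the problem setting only (a hypothetical from-scratch recomputation, not the sublinear-update framework above, which relies on dynamic geometric data structures), the following sketch maintains a set of disks and extracts a 2-approximate vertex cover of their intersection graph via a greedy maximal matching:

    import math

    class DynamicDiskCover:
        """Toy dynamic vertex cover for a disk intersection graph.

        Recomputes from scratch per query (O(n^2) time), unlike the
        sublinear-update framework described in the abstract.
        """

        def __init__(self):
            self.disks = {}  # id -> (x, y, r)

        def insert(self, key, x, y, r):
            self.disks[key] = (x, y, r)

        def delete(self, key):
            self.disks.pop(key, None)

        def _intersects(self, a, b):
            (x1, y1, r1), (x2, y2, r2) = a, b
            return math.hypot(x1 - x2, y1 - y2) <= r1 + r2

        def vertex_cover(self):
            # Greedy maximal matching: both endpoints of each matched
            # edge enter the cover, giving a 2-approximation.
            keys = list(self.disks)
            matched, cover = set(), set()
            for i, u in enumerate(keys):
                if u in matched:
                    continue
                for v in keys[i + 1:]:
                    if v not in matched and self._intersects(
                            self.disks[u], self.disks[v]):
                        matched.update((u, v))
                        cover.update((u, v))
                        break
            return cover

    c = DynamicDiskCover()
    c.insert("a", 0, 0, 1); c.insert("b", 1, 0, 1); c.insert("c", 5, 5, 1)
    print(c.vertex_cover())  # {"a", "b"}: only the intersecting pair is covered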
|
The temperature of the interstellar medium (ISM) is governed by several
physical processes, among which are radiative cooling, external UV/cosmic-ray
heating, and the mechanical work of compression and expansion. In regimes where
the dynamical effect is important, the temperature deviates from that derived
by simply balancing the heating and cooling functions. This renders the
expression of the gas energy evolution with a simple equation of state (EOS)
less straightforward. Given a cooling function, the behavior of the gas is
subject to the combined effect of dynamical compression and radiative cooling.
The goal of the present work is to derive the effective EOS of a collapsing gas
within a full fluid solution. We solve the Navier-Stokes equations with a
parametric cooling term in spherical coordinates and look for a self-similar
collapse solution. We present a solution which describes a cloud that is
contracting while losing energy through radiation. This yields an effective EOS
that can be applied generally in various ISM contexts, where the cooling
function is available from first principles and expressed as a power-law
product of the density and temperature. Our findings suggest that a radiatively cooling
gas under self-gravitating collapse can easily manifest an effective polytropic
EOS, even isothermal in many scenarios. The present model provides theoretical
justification for the simplifying isothermal assumptions of simulations at
various scales, and can also provide a more realistic thermal recipe without
additional computational cost.
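To fix notation, the ingredients can be sketched as a power-law cooling function and an effective polytrope (the exponents here are placeholders rather than values derived in the text),
$$\Lambda(\rho, T) = \Lambda_0\,\rho^{a}\,T^{b}, \qquad P = K\,\rho^{\gamma_{\rm eff}},$$
with the effective index $\gamma_{\rm eff}$ set jointly by $(a,b)$ and the self-similar collapse dynamics; $\gamma_{\rm eff} \to 1$ recovers the isothermal limit invoked by many simulations.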
|
Information technology (IT) can fulfill one of the main needs of an
organization: enabling the executive to know and manage the performance of the
organization they lead, including its human resources (HR). The Faculty of
Agriculture, University of Muhammadiyah Palembang (UMP), has a personnel
information system that is used to manage HR data for both employees and
lecturers. However, that information system supports operational activities
only. It is therefore necessary to build an executive information system (SIE)
for the faculty's executive, namely the dean. By using the SIE, the dean can
easily access summary data visualizations, i.e., information presented in the
form of graphs, making it easier for the executive to make decisions.
|
In this paper we apply known techniques from semigroup theory to the
Schr\"odinger problem with initial conditions. To this end, we define the
regularized Schr\"odinger semigroup acting on a space-time domain and show that
it is strongly continuous and contractive in $L_p,$ with $3/2 < p < 3.$ These
results can easily be extended to the case of conformal operators acting in the
context of differential forms, but they require positiveness conditions on the
curvature of the considered Minkowski manifold. For that purpose, we will use a
Clifford algebra setting in order to highlight the geometric characteristics of
the manifold. We give an application of such methods to the regularized
Schr\"odinger problem with initial condition and we will extended our
conclusions to the limit case. For the torus case and a class of non-oriented
higher dimensional M\"obius strip like domains we also give some explicit
formulas for the fundamental solution.
|
Constant entropy rate (conditional entropies must remain constant as the
sequence length increases) and uniform information density (conditional
probabilities must remain constant as the sequence length increases) are two
information theoretic principles that are argued to underlie a wide range of
linguistic phenomena. Here we revise the predictions of these principles in the
light of Hilberg's law on the scaling of conditional entropy in language and
related laws. We show that constant entropy rate (CER) and two interpretations
for uniform information density (UID), full UID and strong UID, are
inconsistent with these laws. Strong UID implies CER but the reverse is not
true. Full UID, a particular case of UID, leads to costly uncorrelated
sequences that are totally unrealistic. We conclude that CER and its particular
cases are incomplete hypotheses about the scaling of conditional entropies.
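In symbols: constant entropy rate posits $H(X_{n+1}\mid X_1,\dots,X_n) = h$ for every $n$, whereas Hilberg-type scaling posits a slowly decaying conditional entropy, e.g. $H(X_{n+1}\mid X_1,\dots,X_n) \approx A\,n^{\beta-1} + h_{\infty}$ with $\beta \approx 1/2$ (equivalently, a block entropy growing sublinearly like $n^{\beta}$ plus a linear term), which is incompatible with constancy across $n$ unless $A = 0$.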
|
We report on the first results of a search for H2 emission from
protoplanetary disks using CRIRES, ESO's new VLT high resolution NIR
spectrograph. We observed the CTTS LkHa 264 and the debris disk 49 Cet, and
searched for the 1-0 S(1), 1-0 S(0) and 2-1 S(1) H2 emission lines. The H2 line
at 2.1218 micron is detected in LkHa 264. Our CRIRES spectra reveal the
previously searched for but undetected H2 line at 2.2233 micron in LkHa 264. An
upper limit on the 2-1 S(1) H2 line flux in LkHa 264 is derived. These
observations are the first simultaneous detection of 1-0 S(1) and 1-0 S(0) H2
emission from a protoplanetary disk. 49 Cet does not exhibit H2 emission in any
of the three observed lines. There are a few lunar masses of optically thin hot
H2 in the inner disk (~0.1 AU) of LkHa 264, and less than a tenth of a lunar
mass of hot H2 in the inner disk of 49 Cet. The measured 1-0 S(0)/1-0 S(1) and
2-1 S(1)/1-0 S(1) line ratios in LkHa 264 indicate that the H2 emitting gas is
at T<1500 K and that the H2 is most likely thermally excited by UV photons.
Modeling of the shape of the line suggests that the disk should be seen close
to face-on (i<35). A comparative analysis of the physical properties of CTTS in
which the H2 1-0 S(1) line has been detected and non-detected indicates that
the presence of H2 emission is correlated with the magnitude of the UV excess
and the strength of the Halpha line. The lack of H2 emission in the NIR spectra
of 49 Cet and the absence of Halpha emission suggest that the gas in the inner
disk of 49 Cet has dissipated. The disk surrounding 49 Cet should have an inner
hole.
|
We study the problem of comparing a pair of geometric networks that may not
be similarly defined, i.e., when they do not have one-to-one correspondences
between their nodes and edges. Our motivating application is to compare power
distribution networks of a region. Due to the lack of openly available power
network datasets, researchers synthesize realistic networks resembling their
actual counterparts. But the synthetic digital twins may vary significantly
from one another and from actual networks due to varying underlying assumptions
and approaches. Hence the user wants to evaluate the quality of networks in
terms of their structural similarity to actual power networks. But the lack of
correspondence between the networks renders most standard approaches, e.g.,
subgraph isomorphism and edit distance, unsuitable.
We propose an approach based on the multiscale flat norm, a notion of
distance between objects defined in the field of geometric measure theory, to
compute the distance between a pair of planar geometric networks. Using a
triangulation of the domain containing the input networks, the flat norm
distance between two networks at a given scale can be computed by solving a
linear program. In addition, this computation automatically identifies the 2D
regions (patches) that capture where the two networks are different. We
demonstrate through 2D examples that the flat norm distance can capture the
variations of inputs more accurately than the commonly used Hausdorff distance.
As a notion of stability, we also derive upper bounds on the flat norm distance
between a simple 1D curve and its perturbed version as a function of the radius
of perturbation for a restricted class of perturbations. We demonstrate our
approach on a set of actual power networks from a county in the USA.
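For contrast with the flat norm (whose computation here requires triangulating the domain and solving a linear program), the Hausdorff distance baseline mentioned above is straightforward on point-sampled networks; a minimal sketch:

    import numpy as np
    from scipy.spatial.distance import cdist

    def hausdorff(P, Q):
        """Symmetric Hausdorff distance between two point clouds,
        e.g. densely sampled points along the edges of two networks."""
        D = cdist(P, Q)  # pairwise Euclidean distances
        return max(D.min(axis=1).max(), D.min(axis=0).max())

    # Two networks that mostly agree but differ at one far-away vertex:
    # the Hausdorff distance is dominated by the single outlier, the
    # kind of insensitivity to distributed differences the flat norm avoids.
    P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
    Q = np.array([[0.0, 0.1], [1.0, 0.1], [2.0, 3.0]])
    print(hausdorff(P, Q))  # ~3.0, set by the outlier alone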
|
What will the future of cislunar communications be? The ever-expanding
horizons of the space exploration missions, and the need for establishing
sustainable space communication and navigation infrastructure necessitate to
think this question thoroughly. In this article, we examine how some of the
concepts of 6G technologies developed for terrestrial networks can be relevant
in the context of cislunar networks. We discuss how 6G concepts, such as
reconfigurable intelligent surfaces, quantum-resistant physical layer security,
private information read/write/cache networks, semantic and goal-oriented
communications, information freshness based quality of communication metrics,
multi-relay and cooperative networks, hold the potential to shape the future of
cislunar communications.
|
Using single crystal inelastic neutron scattering with and without
application of an external magnetic field and powder neutron diffraction, we
have characterized magnetic interactions in Ba$_3$Cr$_2$O$_8$. Even without a
field, we find that there exist three singlet-to-triplet excitation modes in
the $(h,h,l)$ scattering plane. Our complete analysis shows that the three modes
are due to spatially anisotropic interdimer interactions that are induced by
local distortions of the tetrahedron of oxygens surrounding the Jahn-Teller
active Cr$^{5+} (3d^1)$. The strong intradimer coupling of $J_0 = 2.38(2)$ meV
and the weak interdimer interactions ($|J_{\rm inter}| \leq 0.52(2)$ meV) make
Ba$_3$Cr$_2$O$_8$ a good model system for weakly-coupled $s = 1/2$ quantum spin
dimers.
|
The vertex H+-W-+h0, involving the gauge bosons W-+, the charged (H+-) and
the lightest neutral (h0) Higgs bosons, arises within the context of many
extensions of the SM, and it can be used to probe the Higgs sector of such
extensions via the decay H+- -> W+- h0. We discuss the strength of this vertex
for an extension of the MSSM with an additional complex Higgs triplet. By using
this model, we find regions of the parameter space where the decay H+- -> W+-
h0 is not only kinematically allowed, but it also becomes an important decay
mode and in some cases the dominant one.
|
X-ray spectral timing analysis is presented of XMM-Newton observations of the
narrow line Seyfert 1 galaxy I Zwicky 1 (I Zw 1) taken in 2015 January. After
exploring the effect of background flaring on timing analyses, X-ray time lags
between the reflection-dominated 0.3-1.0keV energy and continuum-dominated
1.0-4.0keV band are measured, indicative of reverberation off the inner
accretion disc. The reverberation lag time is seen to vary as a step function
in frequency; across the lower-frequency components of the variability, 3e-4 to
1.2e-3 Hz, a lag of 160 s is measured, but the lag shortens to (59 +/- 4) s above
1.2e-3 Hz. The lag-energy spectrum reveals differing profiles between these
ranges with a change in the dip showing the earliest arriving photons. The low
frequency signal indicates reverberation of X-rays emitted from a corona
extended at low height over the disc while at high frequencies, variability is
generated in a collimated core of the corona through which luminosity
fluctuations propagate upwards. Principal component analysis of the variability
supports this interpretation, showing uncorrelated variation in the spectral
slope of two power law continuum components. The distinct evolution of the two
components of the corona is seen as a flare passes inwards from the extended to
the collimated portion. An increase in variability in the extended corona was
found preceding the initial increase in X-ray flux. Variability from the
extended corona was seen to die away as the flare passed into the collimated
core leading to a second sharper increase in the X-ray count rate.
|
We prove that, if $A$ is a positively graded, graded commutative, local,
finite Hopf algebra, its cohomology is finitely generated, thus unifying
classical results of Wilkerson and Hopkins-Smith, and of Friedlander-Suslin. We
do this by showing the existence of conormal elementary quotients.
|
We study the thermodynamics of a Brownian particle under the influence of a
time multiplexed harmonic potential of finite width. The memory storage
mechanism and the erasure protocol realized by time multiplexed potentials are
utilized to experimentally realize erasure with work close to the Landauer's
bound. We quantify the work done on the system with respect to the duty-ratio
of time multiplexing, which also provides a handle to approach reversible
erasures. A Langevin dynamics based simulation model is developed for the
proposed memory bit and the erasure protocol, which guides the experimental
realization. The study also provides insights into transport at the micro
scale.
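A minimal sketch of the kind of overdamped Langevin (Euler-Maruyama) simulation used to guide such experiments; all parameters and the specific multiplexing protocol below are illustrative stand-ins, not the paper's:

    import numpy as np

    def langevin_erasure(duty=0.8, k=1.0, kT=1.0, gamma=1.0,
                         dt=1e-4, n_steps=200_000, x0=-1.0, seed=0):
        """Overdamped Euler-Maruyama in a time-multiplexed harmonic trap.

        The trap centre alternates between -1 and +1 within each
        multiplexing period; `duty` is the fraction of the period spent
        at the target well (+1). Returns the trajectory.
        """
        rng = np.random.default_rng(seed)
        period = 1000  # integration steps per multiplexing period
        x = x0
        traj = np.empty(n_steps)
        for i in range(n_steps):
            centre = 1.0 if (i % period) < duty * period else -1.0
            force = -k * (x - centre)           # harmonic restoring force
            noise = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal()
            x += force * dt / gamma + noise     # Euler-Maruyama step
            traj[i] = x
        return traj

    traj = langevin_erasure()
    print(traj[-10:])  # the particle ends up rattling near the target well

Sweeping `duty` toward 1 mimics the handle the abstract describes for approaching reversible erasure; the work per cycle would be accumulated from the potential shifts, which this sketch omits.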
|
We describe arithmetic computations in terms of operations on some well known
free algebras (S1S, S2S and ordered rooted binary trees) while emphasizing the
common structure present in all of them when seen as isomorphic with the set of
natural numbers.
Constructors and deconstructors seen through an initial algebra semantics are
generalized to recursively defined functions obeying similar laws.
Implementation using Scala's apply and unapply are discussed together with an
application to a realistic arbitrary size arithmetic package written in Scala,
based on the free algebra of rooted ordered binary trees, which also supports
rational number operations through an extension to signed rationals of the
Calkin-Wilf bijection.
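As a flavor of the bijections involved, here is a Python transcription (the paper's implementation is in Scala) of the classical Calkin-Wilf enumeration of the positive rationals via Newman's recurrence; the signed-rational extension mentioned above builds on this:

    from fractions import Fraction
    from math import floor

    def calkin_wilf(n):
        """Return the n-th positive rational (n >= 1) in the Calkin-Wilf
        enumeration, via Newman's recurrence
        q_{k+1} = 1 / (2*floor(q_k) - q_k + 1), with q_1 = 1."""
        q = Fraction(1)
        for _ in range(n - 1):
            q = 1 / (2 * floor(q) - q + 1)
        return q

    print([calkin_wilf(k) for k in range(1, 8)])
    # [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3),
    #  Fraction(3, 2), Fraction(2, 3), Fraction(3, 1)]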
|
The multi-messenger observations of the merger event in GW170817 did not rule
out the possibility that the remnant might be a dynamically stable neutron star
with $\mathcal{M}_{\rm rem}\geq 2.69\,\mathcal{M}_{\odot}$. Based on this and other
recent events, I argue that the universal maximum density hypothesis should be
revived. Accordingly, the central densities in the cores of ultra-compact
objects must be bounded above by the critical number density $n_{cr}$, beyond
which supradense nuclear matter becomes purely incompressible.
Based on the spacetime-matter coupling in GR, it is shown that the topology
of spacetime embedding incompressible quantum fluids with $n =n_{cr}$ must be
Minkowski flat, which implies that spacetime at the background of ultra-compact
objects should be of bimetric type.
|
Unbalanced optimal power flow refers to a class of optimization problems
subject to the steady state physics of three-phase power grids with
nonnegligible phase unbalance. Significant progress on this problem has been
made on the mathematical modelling side of unbalanced OPF, however there is a
lack of information on implementation aspects as well as data sets for
benchmarking. One of the key problems is the lack of definitions of current and
voltage bounds across different classes of representations of the power flow
equations. Therefore, this tutorial-style paper summarizes the structural
features of the unbalanced (optimal) power flow problem for three-phase systems.
The resulting nonlinear complex-valued matrix formulations are presented for both
the bus injection and branch flow formulation frameworks, which typically
cannot be implemented as-is in optimization toolboxes. Therefore, this paper
also derives the equivalent real-valued formulations and discusses challenges
related to the implementation in optimization modeling toolboxes. The derived
formulations can be re-used easily for continuous and discrete optimization
problems in distribution networks for a variety of operational and planning
problems. Finally, bounds are derived for all variables involved, to further
the development of benchmarks for unbalanced optimal power flow, where
consensus on bound semantics is a pressing need. We believe benchmarks remain a
cornerstone for the development and validation of scalable and reproducible
optimization models and tools. The soundness of the derivations is confirmed
through numerical experiments, validated w.r.t. OpenDSS for IEEE test feeders
with 3x3 impedance matrices.
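For concreteness, the bus injection form referred to above can be sketched per three-phase block as
$$S = \operatorname{diag}(V)\,\overline{(Y V)},$$
where $V$ stacks the complex phase voltages and $Y$ is assembled from the $3\times 3$ impedance matrices (notation illustrative). The equivalent real-valued formulation follows by writing $V = V^{\rm re} + jV^{\rm im}$ and $Y = G + jB$ and separating real and imaginary parts, which is precisely the rewriting most optimization modeling toolboxes require.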
|
We study the gravitational lensing effects of spiral galaxies by taking a
model of the Milky Way and computing its lensing properties. The model is
composed of a spherical Hernquist bulge, a Miyamoto-Nagai disc and an
isothermal halo. As a strong lens, a spiral galaxy like the Milky Way can give
rise to four different imaging geometries. They are (i) three images on one
side of the galaxy centre (`disc triplets'), (ii) three images with one close
to the centre (`core triplets'), (iii) five images and (iv) seven images.
Neglecting magnification bias, we show that the core triplets, disc triplets
and fivefold imaging are roughly equally likely. Even though our models contain
edge-on discs, their image multiplicities are not dominated by disc triplets.
The halo has a small effect on the caustic structure, the time delays and
brightnesses of the images. The Milky Way model has a maximum disc (i.e., the
halo is not dynamically important in the inner parts). Strong lensing by nearly
edge-on disc galaxies breaks the degeneracy between the relative contribution
of the disc and halo to the overall rotation curve. If a spiral galaxy has a
sub-maximum disc, then the astroid caustic shrinks dramatically in size, whilst
the radial caustic shrinks more modestly. This causes changes in the relative
likelihood of the image geometries, specifically (i) core triplets are now 9/2
times more likely than disc triplets, (ii) the cross section for threefold
imaging is reduced by a factor of 2/3, whilst (iii) the cross section for
fivefold imaging is reduced by 1/2. Although multiple imaging is less likely
(the cross sections are smaller), the average total magnification is greater.
|
We present a generalization of the dynamical model of information
transmission and herd behavior proposed by Eguiluz and Zimmermann. A
characteristic size of group of agents $s_{0}$ is introduced. The fragmentation
and coagulation rates of groups of agents are assumed to depend on the size of
the group. We present results of numerical simulations and mean field analysis.
It is found that the size distribution of groups of agents $n_{s}$ exhibits two
distinct scaling behaviors depending on whether $s \leq s_{0}$ or $s > s_{0}$. For $s
\leq s_{0}$, $n_{s} \sim s^{-(5/2 + \delta)}$, while for $s > s_{0}$, $n_{s}
\sim s^{-(5/2 -\delta)}$, where $\delta$ is a model parameter representing the
sensitivity of the fragmentation and coagulation rates to the size of the
group. Our model thus gives a tunable exponent for the size distribution
together with two scaling regimes separated by a characteristic size $s_{0}$.
Suitably interpreted, our model can be used to represent the formation of
groups of customers for certain products produced by manufacturers. This, in
turn, leads to a distribution in the size of businesses. The characteristic
size $s_{0}$, in this context, represents the size of a business for which the
customer group becomes too large to be kept happy but too small for the
business to become a brand name.
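A minimal sketch of the baseline Eguiluz-Zimmermann dynamics (without the size-dependent rates that produce the two scaling regimes; making the fragmentation/coagulation probabilities depend on $s/s_0$ is the generalization studied here):

    import random
    from collections import Counter

    def ez_model(n_agents=10_000, a=0.3, n_steps=500_000, seed=0):
        """Baseline Eguiluz-Zimmermann herding dynamics.

        Each step picks a random agent; with probability `a` its whole
        group acts and fragments into singletons, otherwise the group
        coagulates with the group of a second randomly chosen agent.
        """
        rng = random.Random(seed)
        groups = [1] * n_agents  # list of group sizes
        for _ in range(n_steps):
            i = rng.choices(range(len(groups)), weights=groups)[0]
            if rng.random() < a:
                s = groups.pop(i)          # fragmentation into singletons
                groups.extend([1] * s)
            else:
                j = rng.choices(range(len(groups)), weights=groups)[0]
                if i != j:                 # coagulation of the two groups
                    si, sj = groups[i], groups[j]
                    for k in sorted((i, j), reverse=True):
                        groups.pop(k)
                    groups.append(si + sj)
        return Counter(groups)             # histogram of group sizes

    hist = ez_model()  # a log-log plot of hist recovers the power law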
|
We study the relationship between recent conjectures on slopes of
overconvergent p-adic modular forms "near the boundary" of p-adic weight space.
We also prove in tame level 1 that the coefficients of the Fredholm series of
the U_p operator never vanish modulo p, a phenomenon that fails at higher
level. In higher level, we do check that infinitely many coefficients are
non-zero modulo p using a modular interpretation of the mod p reduction of the
Fredholm series recently discovered by Andreatta, Iovita and Pilloni.
|
Non-Euclidean data that are indexed with a scalar predictor such as time are
increasingly encountered in data applications, while statistical methodology
and theory for such random objects are not well developed yet. To address the
need for new methodology in this area, we develop a total variation
regularization technique for nonparametric Fr\'echet regression, which refers
to a regression setting where a response residing in a metric space is paired
with a scalar predictor and the target is a conditional Fr\'echet mean.
Specifically, we seek to approximate an unknown metric-space valued function by
an estimator that minimizes the Fr\'echet version of least squares and at the
same time has small total variation, appropriately defined for metric-space
valued objects. We show that the resulting estimator is representable by a
piece-wise constant function and establish the minimax convergence rate of the
proposed estimator for metric data objects that reside in Hadamard spaces. We
illustrate the numerical performance of the proposed method for both simulated
and real data, including metric spaces of symmetric positive-definite matrices
with the affine-invariant distance, of probability distributions on the real
line with the Wasserstein distance, and of phylogenetic trees with the
Billera--Holmes--Vogtmann metric.
|
We report on light emission from high-Q neodymium-implanted silica
microtoroids. Following the description of the fabrication process of
microtoroids, neodymium light emission is analysed. This emission is coupled to
various cavity modes. Using evanescent wave coupling we achieve selective
detection of Whispering Gallery Modes of a microtoroid.
|
We introduce new natural generalizations of the classical descent and
inversion statistics for permutations, called width-$k$ descents and width-$k$
inversions. These variations induce generalizations of the excedance and major
statistics, providing a framework in which the most well-known
equidistributivity results for classical statistics are paralleled. We explore
additional relationships among the statistics providing specific formulas in
certain special cases. Moreover, we explore the behavior of these width-$k$
statistics in the context of pattern avoidance.
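A small computational sketch under one natural reading of the definitions, which is assumed here since the abstract does not spell them out: a width-$k$ descent is taken to be a position $i$ with $\pi(i) > \pi(i+k)$, and a width-$k$ inversion a pair $i < j$ with $\pi(i) > \pi(j)$ and $j - i \geq k$; both reduce to the classical statistics at $k = 1$.

    def width_k_descents(perm, k):
        """Positions i (1-indexed) with perm(i) > perm(i+k); k = 1
        recovers the classical descent set (definition assumed)."""
        return [i + 1 for i in range(len(perm) - k) if perm[i] > perm[i + k]]

    def width_k_inversions(perm, k):
        """Pairs i < j with perm(i) > perm(j) and j - i >= k; k = 1
        recovers classical inversions (one candidate reading)."""
        n = len(perm)
        return [(i + 1, j + 1) for i in range(n) for j in range(i + k, n)
                if perm[i] > perm[j]]

    print(width_k_descents([3, 1, 4, 2], 1))    # [1, 3] (classical descents)
    print(width_k_inversions([3, 1, 4, 2], 2))  # [(1, 4)]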
|
We investigate the superradiant instability of dyonic Reissner-Nordstr\"{o}m
(dRN) black holes with two charges under a charged massive scalar perturbation.
Two conditions for possessing a trapping well are obtained from analyzing
the asymptotic scalar potential and the far-region wave functions. It is clear that the
superradiant instability is not allowed in the dRN black holes because the
conditions for a trapping well are not compatible with the superradiance
condition.
|
We consider two-alternative elections where voters' preferences depend on a
state variable that is not directly observable. Each voter receives a private
signal that is correlated with the state variable. Voters may be "contingent,"
with different preferences in different states, or "predetermined," with the same
preference in every state. In this setting, even if every voter is a contingent
voter, agents voting according to their private information need not result in
the adoption of the universally preferred alternative, because the signals can
be systematically biased.
We present an easy-to-deploy mechanism that elicits and aggregates the
private signals from the voters, and outputs the alternative that is favored by
the majority. In particular, voters truthfully reporting their signals forms a
strong Bayes Nash equilibrium (where no coalition of voters can deviate and
receive a better outcome).
|
This paper explores the use of optimization to design multifunctional
metamaterials, and proposes a methodology for constructing a design envelope of
potential properties. A thermal-mechanical metamaterial, proposed by Ai and Gao
(2017), is used as the subject of the study. The properties of the metamaterial
are computed using finite element-based periodic homogenization, which is
implemented in Abaqus utilizing an open-source plugin (EasyPBC). Several
optimization problems are solved using a particle swarm-based optimization
method from the pyOpt package. A series of constrained optimization problems
are used to construct a design envelope of potential properties. The design
envelope more fully captures the potential of the metamaterial, compared with
the current practice of using parametric studies. This is because the optimizer
can change all parameters simultaneously to find the optimal design. This
demonstrates the potential of using an optimization-based approach for
designing and exploring multifunctional metamaterial properties. This proposed
approach is general and can be applied to any metamaterial design, assuming an
accurate numerical model exists to evaluate its properties.
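The paper couples pyOpt's particle swarm to the Abaqus homogenization model; as a self-contained stand-in (not the pyOpt API), here is a minimal particle swarm loop on an analytic objective, with a hypothetical surrogate property and a constraint folded in as a penalty:

    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = len(bounds)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)       # keep particles inside bounds
            f = np.apply_along_axis(objective, 1, x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # Hypothetical surrogate: maximize a stiffness proxy subject to a
    # conductivity floor via a quadratic penalty (all values illustrative;
    # in the paper this evaluation is an Abaqus homogenization run).
    def surrogate(p):
        stiffness = -(p[0] ** 2 + 0.5 * p[1])        # negated: we minimize
        penalty = 100.0 * max(0.0, 0.3 - p[1]) ** 2  # require p[1] >= 0.3
        return stiffness + penalty

    best_x, best_f = pso(surrogate, bounds=[(0, 1), (0, 1)])

Re-solving such problems under a sweep of constraint levels is what traces out the design envelope described above.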
|
Rare properties remain a challenge for statistical model checking (SMC) due
to the quadratic scaling of variance with rarity. We address this with a
variance reduction framework based on lightweight importance splitting
observers. These expose the model-property automaton to allow the construction
of score functions for high performance algorithms.
The confidence intervals defined for importance splitting make it appealing
for SMC, but optimising its performance in the standard way makes distribution
inefficient. We show how it is possible to achieve equivalently good results in
less time by distributing simpler algorithms. We first explore the challenges
posed by importance splitting and present an algorithm optimised for
distribution. We then define a specific bounded time logic that is compiled
into memory-efficient observers to monitor executions. Finally, we demonstrate
our framework on a number of challenging case studies.
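A minimal sketch of the fixed-level importance splitting idea on a toy model (a biased random walk, with the level reached serving as the score; the framework above instead derives scores from observers of the model-property automaton):

    import random

    def run_stage(starts, target, n, rng, p_up=0.4):
        """Run n walks restarted from `starts` until they hit `target`
        (success) or are absorbed at 0 (failure); return successes."""
        hits = []
        for _ in range(n):
            x = rng.choice(starts)
            while 0 < x < target:
                x += 1 if rng.random() < p_up else -1
            if x == target:
                hits.append(x)
        return hits

    def splitting_estimate(L=10, n=10_000, seed=0):
        """Multilevel splitting estimate of P(biased walk from 1 reaches
        L before 0): the product of per-level conditional estimates."""
        rng = random.Random(seed)
        starts, p_hat = [1], 1.0
        for level in range(2, L + 1):
            hits = run_stage(starts, level, n, rng)
            if not hits:
                return 0.0
            p_hat *= len(hits) / n   # conditional probability of this level
            starts = hits            # resample survivors for the next level
        return p_hat

    print(splitting_estimate())  # analytic value: 0.5/(1.5**10 - 1) ~ 8.8e-3

Each stage needs only a moderate number of ordinary simulations, which is what makes the per-level structure amenable to the distribution strategy discussed above.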
|
Aero-optical beam control relies on the development of low-latency
forecasting techniques to quickly predict wavefronts aberrated by the Turbulent
Boundary Layer (TBL) around an airborne optical system, and its study applies
to a multi-domain need from astronomy to microscopy for high-fidelity laser
propagation. We leverage the forecasting capabilities of the Dynamic Mode
Decomposition (DMD) -- an equation-free, data-driven method for identifying
coherent flow structures and their associated spatiotemporal dynamics -- in
order to estimate future state wavefront phase aberrations to feed into an
adaptive optic (AO) control loop. We specifically leverage the optimized DMD
(opt-DMD) algorithm on a subset of the Airborne Aero-Optics Laboratory
Transonic (AAOL-T) experimental dataset, characterizing aberrated wavefront
dynamics for 23 beam propagation directions via the spatiotemporal
decomposition underlying DMD. Critically, we show that opt-DMD produces an
optimally de-biased eigenvalue spectrum with imaginary eigenvalues, allowing
for arbitrarily long forecasting to produce a robust future-state prediction,
while exact DMD loses structural information due to modal decay rates.
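A minimal sketch of exact DMD forecasting on snapshot data (the study uses the optimized variant, opt-DMD, which fits de-biased eigenvalues by variable projection; this plain version only illustrates the decomposition and the forecast step):

    import numpy as np

    def dmd(X, r):
        """Exact DMD: fit x_{k+1} ~ A x_k from snapshot matrix X
        (columns are snapshots), with rank-r truncation.
        Returns modes Phi, eigenvalues lam, amplitudes b."""
        X1, X2 = X[:, :-1], X[:, 1:]
        U, S, Vh = np.linalg.svd(X1, full_matrices=False)
        U, S, Vh = U[:, :r], S[:r], Vh[:r]
        Atilde = U.conj().T @ X2 @ Vh.conj().T / S   # r x r reduced operator
        lam, W = np.linalg.eig(Atilde)
        Phi = X2 @ Vh.conj().T / S @ W               # exact DMD modes
        b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
        return Phi, lam, b

    def forecast(Phi, lam, b, k):
        """Predict the snapshot k steps ahead of the first one."""
        return (Phi * (lam ** k)) @ b

    # Synthetic demo: rank-4 data built from two travelling oscillations.
    t = np.arange(200) * 0.05
    x = np.linspace(0, 1, 64)[:, None]
    X = (np.sin(2 * np.pi * x) * np.cos(5 * t)
         + np.cos(2 * np.pi * x) * np.sin(5 * t)
         + np.sin(4 * np.pi * x) * np.cos(13 * t)
         + np.cos(4 * np.pi * x) * np.sin(13 * t))
    Phi, lam, b = dmd(X, r=4)
    print(np.abs(forecast(Phi, lam, b, 199) - X[:, -1]).max())  # ~ 0

On this noiseless data the fitted eigenvalues land on the unit circle, so long-horizon forecasts stay bounded; the point of opt-DMD in the study is to recover that purely imaginary (continuous-time) spectrum from noisy wavefront data, where plain DMD would instead fit decaying modes.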
|
Joint studies of imaging and spectroscopic samples, informed by theory and
simulations, offer the potential for comprehensive tests of the cosmological
model over redshifts z<1.5. Spectroscopic galaxy samples at these redshifts can
be increased beyond the planned Dark Energy Spectroscopic Instrument (DESI)
program by at least an order of magnitude, thus offering significantly more
constraining power for these joint studies. Spectroscopic observations of these
galaxies in the latter half of the 2020s and beyond would leverage the theory
and simulation effort in this regime. In turn, these high density observations
will allow enhanced tests of dark energy, physics beyond the standard model,
and neutrino masses that will greatly exceed what is currently possible. Here,
we present a coordinated program of simulations, theoretical modeling, and
future spectroscopy that would enable precise cosmological studies in the
accelerating epoch where the effects of dark energy are most apparent.
|
In this article we offer the interpretation of the fermionic T-duality of the
type II superstring theory in double space. We generalize the idea of double
space doubling the fermionic sector of the superspace. In such doubled space
fermionic T-duality is represented as permutation of the fermionic coordinates
$\theta^\alpha$ and $\bar\theta^\alpha$ with the corresponding fermionic T-dual
ones, $\vartheta_\alpha$ and $\bar\vartheta_\alpha$, respectively. Demanding
that the T-dual transformation law has the same form as the initial one, we obtain the
known form of the fermionic T-dual NS-R and R-R background fields. Fermionic
T-dual NS-NS background fields are obtained under some assumptions. We conclude
that only symmetric part of R-R field strength and symmetric part of its
fermionic T-dual contribute to the fermionic T-duality transformation of
dilaton field and analyze the dilaton field in fermionic double space. As a
model we use the ghost free action of type II superstring in pure spinor
formulation in approximation of constant background fields up to the quadratic
terms.
|
In this paper we study the monotonicity, in-betweenness and in-sphere
properties of matrix means with respect to Bures-Wasserstein, Hellinger and
Log-Determinant metrics. More precisely, we show that the matrix power means
(Kubo-Ando and non-Kubo-Ando extensions) satisfy the in-betweenness property in
the Hellinger metric. We also show that for two positive definite matrices $A$
and $B$, the curve of weighted Heron means, the geodesic curve of the
arithmetic and the geometric mean lie inside the sphere centered at the
geometric mean with the radius equal to half of the Log-Determinant distance
between $A$ and $B$.
|
In environments with dense neutrino gases, such as in core-collapse
supernovae, neutrinos can experience collective neutrino oscillation due to
their self-interactions. In particular, fast flavor conversion driven by
crossings in the neutrino angular distribution can affect the explosion
mechanism, nucleosynthesis, and neutrino observations. We perform the numerical computation
of nonlinear flavor evolution on the neutrino angular distribution with tiny
crossings expected to be generated in the preshock region. We demonstrate that
the fast instability is triggered and a cascade develops under a realistic
three-flavor model considering muon production and weak magnetism in the SN
dynamics. The tiny crossing excites specific spatial modes, and then the flavor
instability propagates, due to nonlinear effects, into other modes that would
otherwise remain stable. Our results indicate that fast flavor conversion can
arise in the preshock region and have a significant impact on the flavor
contents.
|
Obtaining accurate and reliable images from low-dose computed tomography (CT)
is challenging. Regression convolutional neural network (CNN) models that are
learned from training data are increasingly gaining attention in low-dose CT
reconstruction. This paper modifies the architecture of an iterative regression
CNN, BCD-Net, for fast, stable, and accurate low-dose CT reconstruction, and
presents the convergence property of the modified BCD-Net. Numerical results
with phantom data show that applying faster numerical solvers to model-based
image reconstruction (MBIR) modules of BCD-Net leads to faster and more
accurate BCD-Net; BCD-Net significantly improves the reconstruction accuracy,
compared to the state-of-the-art MBIR method using learned transforms; BCD-Net
achieves better image quality, compared to a state-of-the-art iterative NN
architecture, ADMM-Net. Numerical results with clinical data show that BCD-Net
generalizes significantly better than a state-of-the-art deep (non-iterative)
regression NN, FBPConvNet, that lacks MBIR modules.
|
In this note, we show that there exist cusped hyperbolic $3$-manifolds that
embed geodesically, but cannot bound geometrically. Thus, being a geometric
boundary is a non-trivial property for such manifolds. Our result complements
the work by Long and Reid on geometric boundaries of compact hyperbolic
$4$-manifolds, and by Kolpakov, Reid and Slavich on embedding arithmetic
hyperbolic manifolds.
|
In this paper, optimal power allocation and capacity regions are derived for
GSIC (groupwise successive interference cancellation) systems operating in
multipath fading channels, under imperfect channel estimation conditions. It is
shown that the impact of channel estimation errors on the system capacity is
two-fold: it affects the receivers' performance within a group of users, as
well as the cancellation performance (through cancellation errors). An
iterative power allocation algorithm is derived, based on which it can be shown
that the total required received power is minimized when the groups are ordered
according to their cancellation errors, and the first detected group has the
smallest cancellation error.
Performance/complexity tradeoff issues are also discussed by directly
comparing the system capacity for different implementations: GSIC with linear
minimum-mean-square error (LMMSE) receivers within the detection groups, GSIC
with matched filter receivers, multicode LMMSE systems, and simple all matched
filter receivers systems.
|
Consider a MIMO interference channel whereby each transmitter and receiver
are equipped with multiple antennas. The basic problem is to design optimal
linear transceivers (or beamformers) that can maximize system throughput. The
recent work [1] suggests that optimal beamformers should maximize the total
degrees of freedom and achieve interference alignment in high SNR. In this
paper we first consider the interference alignment problem in spatial domain
and prove that the problem of maximizing the total degrees of freedom for a
given MIMO interference channel is NP-hard. Furthermore, we show that even
checking the achievability of a given tuple of degrees of freedom for all
receivers is NP-hard when each receiver is equipped with at least three
antennas. Interestingly, the same problem becomes polynomial time solvable when
each transmit/receive node is equipped with no more than two antennas. Finally,
we propose a distributed algorithm for transmit covariance matrix design, while
assuming each receiver uses a linear MMSE beamformer. The simulation results
show that the proposed algorithm outperforms the existing interference
alignment algorithms in terms of system throughput.
|
A recent line of work has shown a surprising connection between
multicalibration, a multi-group fairness notion, and omniprediction, a learning
paradigm that provides simultaneous loss minimization guarantees for a large
family of loss functions. Prior work studies omniprediction in the batch
setting. We initiate the study of omniprediction in the online adversarial
setting. Although there exist algorithms for obtaining notions of
multicalibration in the online adversarial setting, unlike batch algorithms,
they work only for small finite classes of benchmark functions $F$, because
they require enumerating every function $f \in F$ at every round. In contrast,
omniprediction is most interesting for learning-theoretic hypothesis classes
$F$, which generally have infinite (even continuum) cardinality.
We develop a new online multicalibration algorithm that is well defined for
infinite benchmark classes $F$, and is oracle efficient (i.e. for any class
$F$, the algorithm has the form of an efficient reduction to a no-regret
learning algorithm for $F$). The result is the first efficient online
omnipredictor -- an oracle efficient prediction algorithm that can be used to
simultaneously obtain no regret guarantees to all Lipschitz convex loss
functions. For the class $F$ of linear functions, we show how to make our
algorithm efficient in the worst case. Also, we show upper and lower bounds on
the extent to which our rates can be improved: our oracle efficient algorithm
actually promises a stronger guarantee called swap-omniprediction, and we prove
a lower bound showing that obtaining $O(\sqrt{T})$ bounds for
swap-omniprediction is impossible in the online setting. On the other hand, we
give a (non-oracle efficient) algorithm which can obtain the optimal
$O(\sqrt{T})$ omniprediction bounds without going through multicalibration,
giving an information theoretic separation between these two solution concepts.
|
Diversity conveys advantages in nature, yet homogeneous neurons typically
comprise the layers of artificial neural networks. Here we construct neural
networks from neurons that learn their own activation functions, quickly
diversify, and subsequently outperform their homogeneous counterparts on image
classification and nonlinear regression tasks. Sub-networks instantiate the
neurons, which meta-learn especially efficient sets of nonlinear responses.
Examples include conventional neural networks classifying digits and
forecasting a van der Pol oscillator and physics-informed Hamiltonian neural
networks learning H\'enon-Heiles stellar orbits and the swing of a video
recorded pendulum clock. Such \textit{learned diversity} provides examples of
dynamical systems selecting diversity over uniformity and elucidates the role
of diversity in natural and artificial systems.
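A minimal PyTorch sketch of the idea, a layer whose per-neuron activations are small trainable sub-networks (the architecture details here are illustrative, not those of the paper):

    import torch
    import torch.nn as nn

    class LearnedActivation(nn.Module):
        """A scalar activation implemented as a tiny sub-network,
        applied elementwise; each neuron gets its own instance."""
        def __init__(self, hidden=8):
            super().__init__()
            self.f = nn.Sequential(
                nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

        def forward(self, x):            # x: (batch,) scalar channel
            return self.f(x.unsqueeze(-1)).squeeze(-1)

    class DiverseLayer(nn.Module):
        """Linear layer followed by a distinct learned activation per
        output neuron, so neurons are free to diversify in training."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(d_in, d_out)
            self.acts = nn.ModuleList(
                LearnedActivation() for _ in range(d_out))

        def forward(self, x):
            z = self.lin(x)              # (batch, d_out)
            return torch.stack(
                [act(z[:, j]) for j, act in enumerate(self.acts)], dim=1)

    net = nn.Sequential(DiverseLayer(2, 16), DiverseLayer(16, 1))
    y = net(torch.randn(32, 2))          # (32, 1); trains end to end

Training such a network with an ordinary optimizer updates the activation sub-networks alongside the weights, which is the mechanism by which initially identical neurons can diversify.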
|
COVID-19 pandemic is an ongoing global pandemic which has caused
unprecedented disruptions in the public health sector and global economy. The
virus, SARS-CoV-2 is responsible for the rapid transmission of coronavirus
disease. Due to its contagious nature, the virus can easily infect an
unprotected and exposed individual, causing mild to severe symptoms. The
effects of the virus on pregnant mothers and neonates are now a global concern
among civilians and public health workers, given the uncertainty over how the
virus will affect the health of the mother and the neonate. This paper aims to develop a
predictive model to estimate the possibility of death for a COVID-diagnosed
mother based on documented symptoms: dyspnea, cough, rhinorrhea, arthralgia,
and the diagnosis of pneumonia. The machine learning models that have been used
in our study are support vector machine, decision tree, random forest, gradient
boosting, and artificial neural network. The models have provided impressive
results and can accurately predict the mortality of pregnant mothers for a
given input. The precision for three models (ANN, gradient boosting, random
forest) is 100%; the highest accuracy (gradient boosting, ANN) is 95%, the
highest recall (support vector machine) is 92.75%, and the highest F1 score
(gradient boosting, ANN) is 94.66%. Given the accuracy of the model, pregnant
mothers can expect immediate medical treatment based on their predicted risk
of death due to the virus. The model can be utilized by health workers
globally to triage emergency patients, which can ultimately reduce the death
rate of COVID-19 diagnosed pregnant mothers.
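A hypothetical sketch of the modeling pipeline on synthetic stand-in data (the study's real features are the documented symptoms listed above; the data-generating rule and all numbers below are illustrative only):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import (RandomForestClassifier,
                                  GradientBoostingClassifier)
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    # Synthetic stand-in: binary symptom indicators
    # [dyspnea, cough, rhinorrhea, arthralgia, pneumonia]
    X = rng.integers(0, 2, size=(1000, 5))
    logit = X @ np.array([2.0, 0.5, 0.1, 0.3, 2.5]) - 3.0
    y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    models = {
        "SVM": SVC(),
        "Decision tree": DecisionTreeClassifier(random_state=0),
        "Random forest": RandomForestClassifier(random_state=0),
        "Gradient boosting": GradientBoostingClassifier(random_state=0),
        "ANN": MLPClassifier(max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name)
        print(classification_report(y_te, model.predict(X_te)))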
|
The relation between the central mass and quasar luminosity (M_BH \propto
L^{\alpha} FWHM^2) links a given Eddington ratio with a value of H_0, within a
cosmology with fixed (\Omega_m,\Omega_{\Lambda}). We point out that because the
relation is calibrated at low z using distance independent reverberation
mapping to get the BLR size, the derived M_BH interestingly does not depend on
H_0, while L/L_Edd is sensitive to H_0, but rather robust to changes of
\Omega_{\Lambda} in the standard flat model. This means, e.g., that a
sufficient number of extragalactic objects radiating at the Eddington limit
could be used to study the global Hubble constant in a new way, bypassing the
local distance ladder. The method could become practical when systematic
errors in the derived M_BH are understood and objects with L \leq L_Edd can be
independently identified. As an
illustration, if we take a sample of tranquil very luminous quasars in the
redshift range 0.5 < z < 1.6, and assume that they are radiating with L_bol
\leq L_Edd, then the usual numeric factors used for calculating M_BH and L_bol
would lead to the result that the Hubble constant must be larger than 45
km/s/Mpc.
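The H_0 dependence can be made explicit with a short chain of proportionalities (schematic, following the text's calibration argument; the usual numerical factors in the M_BH and L_bol recipes are omitted):
$$L \propto F\,d_L^2 \propto H_0^{-2}, \qquad M_{\rm BH} \propto R_{\rm BLR}\,{\rm FWHM}^2 \propto c\tau\,{\rm FWHM}^2 \;\;(\text{no } H_0 \text{ dependence}),$$
$$\Rightarrow\quad \frac{L}{L_{\rm Edd}} \propto \frac{H_0^{-2}}{M_{\rm BH}} \propto H_0^{-2},$$
so imposing $L \leq L_{\rm Edd}$ on a sample of Eddington-limited quasars bounds $H_0$ from below, as in the numerical illustration above.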
|
Nakano's later modality allows types to express that the output of a function
does not immediately depend on its input, and thus that computing its fixpoint
is safe. This idea, guarded recursion, has proved useful in various contexts,
from functional programming with infinite data structures to formulations of
step-indexing internal to type theory. Categorical models have revealed that
the later modality corresponds in essence to a simple reindexing of the
discrete time scale.
Unfortunately, existing guarded type theories suffer from significant
limitations for programming purposes. These limitations stem from the fact that
the later modality is not expressive enough to capture precise input-output
dependencies of functions. As a consequence, guarded type theories reject many
productive definitions.
Combining insights from guarded type theories and synchronous programming
languages, we propose a new modality for guarded recursion. This modality can
apply any well-behaved reindexing of the time scale to a type. We call such
reindexings time warps. Several modalities from the literature, including
later, correspond to fixed time warps, and thus arise as special cases of ours.
|
Recent observations have revealed the eccentricity and inclination
distributions of close-in super-Earths. These distributions have the potential
to constrain their formation processes. In the in-situ formation scenario, the
eccentricities and inclinations of planets are determined by gravitational
scattering and collisions between protoplanets on the giant impact stage. We
investigate the effect of the initial eccentricities and inclinations of
protoplanets on the formation of close-in super-Earths. We perform $N$-body
simulations of protoplanets in gas-free disks, changing the initial
eccentricities and inclinations systematically. We find that while the
eccentricities of protoplanets are well relaxed through their evolution, the
inclinations are not. When the initial inclinations are small, they are not
generally pumped up since scattering is less effective and collisions occur
immediately after orbital crossing. On the other hand, when the initial
inclinations are large, they tend to be kept large since collisional damping is
less effective. Not only the resultant inclinations of planets, but also their
number, eccentricities, angular momentum deficit, and orbital separations are
affected by the initial inclinations of protoplanets.
|