We show that the variance of the number of simultaneous zeros of $m$ i.i.d.
Gaussian random polynomials of degree $N$ in an open set $U \subset \mathbb{C}^m$ with
smooth boundary is asymptotic to $N^{m-1/2} \nu_{mm} \mathrm{Vol}(\partial U)$, where
$\nu_{mm}$ is a universal constant depending only on the dimension $m$. We also
give formulas for the variance of the volume of the set of simultaneous zeros
in $U$ of $k<m$ random degree-$N$ polynomials on $\mathbb{C}^m$. Our results hold more
generally for the simultaneous zeros of random holomorphic sections of the
$N$-th power of any positive line bundle over any $m$-dimensional compact
K\"ahler manifold.
|
In a previous paper we constructed rank and support variety theories for
"quantum elementary abelian groups," that is, tensor products of copies of Taft
algebras. In this paper we use both variety theories to classify the thick
tensor ideals in the stable module category, and to prove a tensor product
property for the support varieties.
|
We present a spectrum of the symbiotic star V1016 Cyg observed with the 3.6 m
Canada-France-Hawaii Telescope, in order to illustrate a method to measure the
covering factor of the neutral scattering region around the giant component
with respect to the hot emission region around the white dwarf component. In
the spectrum, we find broad wings around H$\alpha$ and a broad emission feature
around 6545${\rm \AA}$ that is blended with the [N II]$ \lambda$ 6548 line.
These two features are proposed to be formed by Raman scattering by atomic
hydrogen, where the incident radiation is proposed to be UV continuum radiation
around Ly$\beta$ in the former case and He II $\lambda$ 1025 emission line
arising from $n=6\to n=2$ transitions for the latter feature. We remove the
H$\alpha$ wings by a template Raman scattering wing profile and subtract the [N
II] $\lambda$ 6548 line using the 3 times stronger [N II] $\lambda$ 6583
feature in order to isolate the He II Raman-scattered 6545 \AA\ line. We obtain
the flux ratio $F_{6545}/F_{6560}=0.24$ of the 6545 \AA\ feature to the He II
$\lambda$ 6560 emission line for V1016 Cyg. Under the assumption that the He
II emission from this object is isotropic, this ratio is converted to the ratio
$\Phi_{6545}/\Phi_{1025}=0.17$ of the number of scattered photons to that of
the incident photons. This implies that the scattering region with H I
column density $N_{HI}\ge 10^{20}{\rm cm^{-2}}$ covers 17 per cent of the
emission region. Combining this with the presumed binary period $\sim 100$ yr
of this system, we infer that a significant fraction of the slow stellar wind
from the Mira component is ionized and that the scattering region around the
Mira extends a few tens of AU, which is closely associated with the mass loss
process of the Mira component.
|
Recently, the Navier-Stokes-Voight (NSV) model of viscoelastic incompressible
fluid has been proposed as a regularization of the 3D Navier-Stokes equations
for the purpose of direct numerical simulations. In this work we prove that the
global attractor of the 3D NSV equations, driven by an analytic forcing,
consists of analytic functions. A consequence of this result is that the
spectrum of the solutions of the 3D NSV system, lying on the global attractor,
has an exponentially decaying tail, despite the fact that the equations behave
like a damped hyperbolic system rather than a parabolic one. This result
provides additional evidence that the 3D NSV equations with small regularization
parameter enjoy statistical properties similar to those of the 3D Navier-Stokes
equations. Finally, we calculate a lower bound for the exponential decay
scale -- the scale at which the spectrum of the solution starts to decay
exponentially -- and establish a similar bound for the steady state solutions of
the 3D NSV and 3D Navier-Stokes equations. Our estimate coincides with the
analogous available lower bound for the smallest dissipation length scale of
solutions of the 3D Navier-Stokes equations.
|
Recently the EDGES experiment reported an enhanced 21cm absorption signal in
the radio wave observation, which may be interpreted as either anomalous
cooling of baryons or heating of cosmic microwave background photons. In this
paper, we pursue the latter possibility. We point out that dark radiation
consisting of axion-like particles can resonantly convert into photons under
the intergalactic magnetic field, which can effectively heat up the radiation
in the frequency range relevant for the EDGES experiment. This may explain the
EDGES anomaly.
|
Recently it was shown that quantum corrections to the Newton potential can
explain the rotation curves in spiral galaxies without introducing the Dark
Matter halo. The unique phenomenological parameter $\alpha\nu$ of the theory grows
with the mass of the galaxy. In order to better investigate the mass-dependence
of $\alpha\nu$ one needs to check the upper bound for $\alpha\nu$ at a smaller scale.
Here we perform the corresponding calculation by analyzing the dynamics of the
Laplace-Runge-Lenz vector. The resulting limitation on quantum corrections is
quite severe, suggesting a strong mass-dependence of $\alpha\nu$.
|
We study the functional codes $C_h(X)$ defined by G. Lachaud in [10],
where $X \subset {\mathbb{P}}^N$ is an algebraic projective variety of
degree $d$ and dimension $m$. When $X$ is a hermitian surface in $PG(3,q)$,
S{\o}rensen in [15] has conjectured for $h\le t$ (where $q=t^2$)
the following result: $$\# X_{Z(f)}(\mathbb{F}_{q}) \le h(t^{3}+t^{2}-t)+t+1,$$
which should give the exact value of the minimum distance of the functional
code $C_h(X)$. In this paper we resolve the conjecture of S{\o}rensen in the
case of quadrics (i.e. $h=2$): we show the geometrical structure of the minimum
weight codewords and their number; we also estimate the second weight and the
geometrical structure of the codewords reaching this second weight.
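As a quick arithmetic illustration of the conjectured bound, the sketch below evaluates $h(t^{3}+t^{2}-t)+t+1$ for sample parameters; the values $t=3$ (so $q=9$) and the helper name are illustrative only, not taken from the paper.

```python
# Evaluate Sorensen's conjectured bound h(t^3 + t^2 - t) + t + 1
# for a hermitian surface in PG(3, q) with q = t^2.
def sorensen_bound(h, t):
    return h * (t**3 + t**2 - t) + t + 1

# Illustrative parameters: t = 3 (q = 9), quadrics case h = 2.
assert sorensen_bound(2, 3) == 70
assert sorensen_bound(1, 3) == 37  # h = 1 case for comparison
```

The bound grows linearly in $h$, which is why resolving the $h=2$ case gives the minimum distance of the quadric functional code directly.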
|
This work demonstrates the development of a strong and ductile medium entropy
alloy by employing conventional alloying and thermomechanical processing to
induce partial recrystallization (PR) and precipitation strengthening in the
microstructure. The combined usage of electron microscopy and atom probe
tomography reveals the sequence of microstructural evolution during the
process. First, cold working of the homogenized alloy resulted in a highly
deformed microstructure. On annealing at 700{\deg}C, B2 ordered precipitates
heterogeneously nucleate on the highly misoriented sites. These B2 precipitates
promote particle stimulated nucleation (PSN) of new recrystallized strain-free
grains. The migration of recrystallized grain boundaries leads to discontinuous
precipitation of L12 ordered regions in highly dense lamellae structures.
Atomic-scale compositional analysis reveals a significant amount of Ni confined
to the GB regions between B2 and L12 precipitates, indicating Ni as a
rate-controlling element for the coarsening of the microstructure. After 20 hours
of annealing, the alloy comprises a composite microstructure of soft
recrystallized and hard non-recrystallized zones, B2 particles at the grain
boundaries (GBs), and coherent L12 precipitates inside the grains. The B2
particles pin GB movement during recrystallization, while the L12 precipitates
provide high strength. The microstructure results in a 0.2% yield stress (YS)
value of 1030 MPa with 32% elongation at ambient temperature, retained up to
910 MPa at 670{\deg}C. Also, it shows exceptional microstructural stability at
700{\deg}C and resistance to deformation at high temperatures up to 770{\deg}C.
Examination of the deformed microstructure reveals excessive twinning,
formation of stacking faults, shearing of L12 precipitates, and accumulation of
dislocations around the B2 precipitates and GBs, which account for the high
strain hardening of the alloy.
|
QCD factorization takes different forms in the large-x and small-x regimes.
At large-x, collinear factorization leads to the DGLAP evolution equation,
while at small-x, rapidity factorization results in the BFKL equation. To unify
these different regimes, a new TMD factorization based on the background field
method is proposed. This factorization not only reduces to CSS and DGLAP in the
large-x limit and BFKL in the small-x limit, but also defines a general
evolution away from these regimes.
|
Dielectric response in methanol measured over a wide pressure and temperature
range ($P < 6.0$ GPa; 100 K $<T<$ 360 K) reveals a series of anomalies which
can be interpreted as transformations between several solid phases of methanol,
including a hitherto unknown high-pressure low-temperature phase with stability
range $P > 1.2$ GPa, $T < 270$ K. In the intermediate P-T region ($P \approx
3.4-3.7$ GPa, $T \approx 260-280$ K) a set of complicated structural
transformations occurs involving four methanol crystalline structures. At
higher pressures, within a narrow range $P \approx 4.3-4.5$ GPa, methanol can be
obtained in the form of a fragile glass ($T_g \approx 200$ K, $m_p \approx 80$ at
$P= 4.5$ GPa) by relatively slow cooling.
|
We characterize discrete (anti-)unitary symmetries and their non-invertible
generalizations in $2+1$d topological quantum field theories (TQFTs) through
their actions on line operators and fusion spaces. We explain all possible
sources of non-invertibility that can arise in this context. Our approach gives
a simple $2+1$d proof that non-invertible generalizations of unitary symmetries
exist if and only if a bosonic TQFT contains condensable bosonic line operators
(i.e., these non-invertible symmetries are necessarily "non-intrinsic"). Moving
beyond unitary symmetries and their non-invertible cousins, we define a
non-invertible generalization of time-reversal symmetries and derive various
properties of TQFTs with such symmetries. Finally, using recent results on
2-categories, we extend our results to corresponding statements in $2+1$d
quantum field theories that are not necessarily topological.
|
In this article, we review a series of recent theoretical results regarding a
conventional approach to the dark energy (DE) concept. This approach is
distinguished among others for its simplicity and its physical relevance. By
reconciling General Relativity (GR) and Thermodynamics at the cosmological scale,
we end up with a model without DE. Instead, the Universe we are proposing is
filled with a perfect fluid of self-interacting dark matter (DM), the volume
elements of which perform hydrodynamic flows. To the best of our knowledge, it
is the first time in a cosmological framework that the energy of the cosmic
fluid internal motions is also taken into account as a source of the universal
gravitational field. As we demonstrate, this form of energy may compensate for
the DE needed to account for spatial flatness, while, depending on the
particular type of thermodynamic processes occurring in the interior of the DM
fluid (isothermal or polytropic), the Universe appears as either
decelerating or accelerating (respectively). In both cases, there is no
disagreement between observations and the theoretical prediction of the distant
supernovae (SNe) Type Ia distribution. In fact, the cosmological model with
matter content in the form of a thermodynamically-involved DM fluid not only
interprets the observational data associated with the recent history of
Universe expansion, but also successfully confronts every major
cosmological issue (such as the age and the coincidence problems). In this way,
depending on the type of thermodynamic processes in it, such a model may serve
either for a conventional DE cosmology or for a viable alternative one.
|
Clinical trials (CTs) often fail due to inadequate patient recruitment. This
paper tackles the challenges of CT retrieval by presenting an approach that
addresses the patient-to-trials paradigm. Our approach involves two key
components in a pipeline-based model: (i) a data enrichment technique for
enhancing both queries and documents during the first retrieval stage, and (ii)
a novel re-ranking schema that uses a Transformer network in a setup adapted to
this task by leveraging the structure of the CT documents. We use named entity
recognition and negation detection in both patient description and the
eligibility section of CTs. We further classify patient descriptions and CT
eligibility criteria into current, past, and family medical conditions. This
extracted information is used to boost the importance of disease and drug
mentions in both query and index for lexical retrieval. Furthermore, we propose
a two-step training schema for the Transformer network used to re-rank the
results from the lexical retrieval. The first step focuses on matching patient
information with the descriptive sections of trials, while the second step aims
to determine eligibility by matching patient information with the criteria
section. Our findings indicate that the inclusion criteria section of the CT
has a great influence on the relevance score in lexical models, and that the
enrichment techniques for queries and documents improve the retrieval of
relevant trials. The re-ranking strategy, based on our training schema,
consistently enhances CT retrieval, improving precision at retrieving
eligible trials by 15\%. The results of our
experiments suggest the benefit of making use of extracted entities. Moreover,
our proposed re-ranking schema shows promising effectiveness compared to larger
neural models, even with limited training data.
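The lexical-boosting step described above (raising the weight of NER-extracted disease and drug mentions before first-stage retrieval) can be sketched as follows; the `boost_query` helper, the entity set, and the repetition weight are hypothetical illustrations of the idea, not the paper's actual configuration.

```python
# Sketch: boost disease/drug mentions in a lexical (BM25-style) query by term
# repetition. The entity set would come from an NER + negation-detection step.
def boost_query(tokens, boost_terms, weight=3):
    boosted = []
    for t in tokens:
        # Repeat boosted terms `weight` times; leave other tokens unchanged.
        boosted.extend([t] * (weight if t in boost_terms else 1))
    return boosted

query = boost_query(["female", "patient", "with", "diabetes"], {"diabetes"})
assert query.count("diabetes") == 3
assert query.count("patient") == 1
```

In a real system the same enrichment would be applied symmetrically to the indexed eligibility sections, so that boosted query terms meet boosted document terms.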
|
The density matrix renormalization group is applied to a relativistic complex
scalar field at finite chemical potential. The two-point function and various
bulk quantities are studied. It is seen that bulk quantities do not change with
the chemical potential until it is larger than the minimum excitation energy.
The technical limitations of the density matrix renormalization group for
treating bosons in relativistic field theories are discussed. Applications to
other relativistic models and to nontopological solitons are also suggested.
|
Let $R$ be a commutative Noetherian ring, $\mathfrak a$ and $\mathfrak b$
ideals of $R$. In this paper, we study the finiteness dimension $f_{\mathfrak
a}(M)$ of $M$ relative to $\mathfrak a$ and the $\mathfrak b$-minimum
$\mathfrak a$-adjusted depth $\lambda_{\mathfrak a}^{\mathfrak b}(M)$ of $M$,
where the underlying module $M$ is relative Cohen-Macaulay with respect to $\mathfrak a$.
Some applications of such modules are given.
|
The entanglement entropy of a black hole, and that of its Hawking radiation,
are expected to follow the so-called Page curve: After an increase in line with
Hawking's calculation, it is expected to decrease back to zero once the black
hole has fully evaporated, as demanded by unitarity. Recently, a simple
system-plus-bath model has been proposed which shows a similar behaviour. Here,
we make a general argument as to why such a Page-curve-like entanglement
dynamics should be expected to hold generally for system-plus-bath models at
small coupling and low temperatures, when the system is initialised in a pure
state far from equilibrium. The interaction with the bath will then generate
entanglement entropy, but it eventually has to decrease to the value prescribed
by the corresponding mean-force Gibbs state. Under those conditions, this state
is close to the system ground state. We illustrate this on two paradigmatic
open-quantum-system models, the exactly solvable harmonic quantum Brownian
motion and the spin-boson model, which we study numerically. In the first
example we find that the intermediate entropy of an initially localised
impurity is higher for more localised initial states. In the second example,
for an impurity initialised in the excited state, the Page time--when the
entropy reaches its maximum--occurs when the excitation has half decayed.
|
We study some basic properties and examples of Hermitian metrics on complex
manifolds whose traces of the curvature of the Chern connection are
proportional to the metric itself.
|
We define $\Delta$-Baire spaces. If a paratopological group $G$ is
$\Delta$-Baire space, then $G$ is a topological group. Locally pseudocompact
spaces, Baire $p$-spaces, Baire $\Sigma$-spaces, products of \v{C}ech-complete
spaces are $\Delta$-Baire spaces.
|
In NiTe$_3$O$_6$ with a chiral crystal structure, we report on a giant
natural optical rotation of the lowest-energy magnon. This polarization
rotation, as large as 140 deg/mm, corresponds to a path difference between
right and left circular polarizations that is comparable to the sample
thickness. Natural optical rotation, being a measure of structural chirality,
is highly unusual for long-wavelength magnons. The collinear antiferromagnetic
order of NiTe$_3$O$_6$ makes this giant effect even more peculiar: Chirality of
the crystal structure does not affect the magnetic ground state but is strongly
manifested in the lowest excited state. We show that the dynamic
magnetoelectric effect, turning this magnon to a magnetic- and electric-dipole
active hybrid mode, generates the giant natural optical rotation. In finite
magnetic fields, it also leads to a strong optical magnetochiral effect.
|
We construct a class of chiral fermionic CFTs from classical codes over
finite fields whose order is a prime number. We exploit the relationship
between classical codes and Euclidean lattices to provide the Neveu-Schwarz
sector of fermionic CFTs. On the other hand, we construct the Ramond sector
using the shadow theory of classical codes and Euclidean lattices. We give
various examples of chiral fermionic CFTs through our construction. We also
explore supersymmetric CFTs in terms of classical codes by requiring the
resulting fermionic CFTs to satisfy some necessary conditions for
supersymmetry.
|
Both boron nitride (BN) and carbon (C) have sp, sp2 and sp3 hybridization
modes, thus giving rise to a variety of BN and C polymorphs with similar
structures, such as hexagonal BN (hBN) and graphite, cubic BN (cBN) and
diamond. Here, five types of BN polymorph structures were proposed
theoretically, inspired by the graphite-diamond hybrid structures discovered in
recent experiment. These BN polymorphs with graphite-diamond hybrid structures
possessed excellent mechanical properties with combined high hardness and high
ductility, and also exhibited various electronic properties such as
semi-conductivity, semi-metallicity, and even one- and two-dimensional
conductivity, differing from known insulators hBN and cBN. The simulated
diffraction patterns of these BN hybrid structures could account for the
unsolved diffraction patterns of intermediate products composed of "compressed
hBN" and diamond-like BN, caused by phase transitions in previous experiments.
Thus, this work provides a theoretical basis for the presence of these types of
hybrid materials during phase transitions between graphite-like and
diamond-like BN polymorphs.
|
Intercalation of atomic species through epitaxial graphene (EG) layers began only
a few years following its initial report in 2004. The impact of intercalation
on the electronic properties of the graphene is well known; however, the
intercalant itself can also exhibit intriguing properties not found in nature.
This suggests that a shift in the focus of epitaxial graphene intercalation
studies may lead to fruitful exploration of many new forms of traditionally 3D
materials. In the following forward-looking review, we summarize the primary
techniques used to achieve and characterize EG intercalation, and introduce a
new, facile approach to readily achieve metal intercalation at the
graphene/silicon carbide interface. We show that simple thermal
evaporation-based methods can effectively replace complicated synthesis
techniques to realize large-scale intercalation of non-refractory metals. We
also show that these methods can be extended to the formation of compound
materials based on intercalation. Two-dimensional (2D) silver (2D-Ag) and
large-scale 2D gallium nitride (2D-GaNx) are used to demonstrate these
approaches.
|
We address the convolutive blind source separation problem for the
(over-)determined case where (i) the number of nonstationary target-sources $K$
is less than that of microphones $M$, and (ii) there are up to $M - K$
stationary Gaussian noises that need not be extracted. Independent vector
analysis (IVA) can solve the problem by separating into $M$ sources and
selecting the top $K$ highly nonstationary signals among them, but this
approach suffers from a waste of computation especially when $K \ll M$. Channel
reductions in preprocessing of IVA by, e.g., principal component analysis have
the risk of removing the target signals. We here extend IVA to resolve these
issues. One such extension has been attained by assuming the orthogonality
constraint (OC) that the sample correlation between the target and noise
signals is to be zero. The proposed IVA, on the other hand, does not rely on OC
and exploits only the independence between sources and the stationarity of the
noises. This enables us to develop several efficient algorithms based on block
coordinate descent methods with a problem specific acceleration. We clarify
that one such algorithm exactly coincides with the conventional IVA with OC,
and also explain that the other newly developed algorithms are faster than it.
Experimental results show the improved computational load of the new algorithms
compared to the conventional methods. In particular, a new algorithm
specialized for $K = 1$ outperforms the others.
|
Pairwise ordered tree alignments are combinatorial objects that appear in RNA
secondary structure comparison. However, the usual representation of tree
alignments as supertrees is ambiguous, i.e. two distinct supertrees may induce
identical sets of matches between identical pairs of trees. This ambiguity is
uninformative, and detrimental to any probabilistic analysis. In this work, we
consider tree alignments up to equivalence. Our first result is a precise
asymptotic enumeration of tree alignments, obtained from a context-free grammar
by means of basic analytic combinatorics. Our second result focuses on
alignments between two given ordered trees $S$ and $T$. By refining our grammar
to align specific trees, we obtain a decomposition scheme for the space of
alignments, and use it to design an efficient dynamic programming algorithm for
sampling alignments under the Gibbs-Boltzmann probability distribution. This
generalizes existing tree alignment algorithms, and opens the door for a
probabilistic analysis of the space of suboptimal RNA secondary structures
alignments.
|
Ultracold atom traps on a chip enhance the practical application of atom
traps in quantum information processing, sensing, and metrology. Plasmon
mediated near-field optical potentials are promising for trapping atoms. The
combination of plasmonic nanostructures and ultracold atoms has the potential
to create a two dimensional array of neutral atoms with lattice spacing smaller
than that of lattices created from interfering light fields -- the optical
lattices. We report the design, fabrication and characterization of a
nano-scale array of near-field optical traps for neutral atoms using plasmonic
nanostructures. The building block of the array is a metallic nano-disc
fabricated on the surface of an ITO-coated glass substrate. We numerically
simulate the electromagnetic field distribution around the nanodisc using the
Finite Difference Time Domain method, and calculate the intensity, optical
potential and dipole force for $^{87}$Rb atoms. The optical near-field
generated from the fabricated nanostructures is experimentally characterized by
using Near-field Scanning Optical Microscopy. We find that the optical
potential and dipole force have all the desired characteristics to trap cold
atoms when a blue-detuned light-field is used to excite the nanostructures.
This trap can be used for effective trapping and manipulation of isolated atoms
and also for creating a lattice of neutral atoms having sub-optical wavelength
lattice spacing. Near-field measurements are affected by the influence of the
tip on the sub-wavelength structure. We present a deconvolution method to extract
the actual near-field profile from the measured data.
|
Future gravitational-wave observations will enable unprecedented and unique
science in extreme gravity and fundamental physics answering questions about
the nature of dynamical spacetimes, the nature of dark matter and the nature of
compact objects.
|
We present the coupled oscillator: a new mechanism for signal amplification
with widespread application in metrology. We introduce the mechanical theory of
this framework, and support it by way of simulations. We present a particular
implementation of coupled oscillators: a microelectromechanical system (MEMS)
that uses one large (~100mm) N52 magnet coupled magnetically to a small
(~0.25mm), oscillating N52 magnet, providing a force resolution of 200zN
measured over 1s in a noiseless environment. We show that the same system is
able to resolve magnetic gradients of 130aT/cm at a single point (within
500um). This technology therefore has the potential to revolutionize force and
magnetic gradient sensing, including high-impact areas such as cardiac and brain
imaging.
|
We studied the effective electrical conductivity of dense random resistor
networks (RRNs) produced using a Voronoi tessellation when its seeds are
generated by means of a homogeneous Poisson point process in the
two-dimensional Euclidean space. Such RRNs are isotropic and, on average,
homogeneous; however, local fluctuations of the number of edges per unit area
are inevitable. These RRNs may mimic, e.g., crack-template-based transparent
conductive films. The RRNs were treated within a mean-field approach (MFA). We
found an analytical dependence of the effective electrical conductivity on the
number of conductive edges (resistors) per unit area, $n_\text{E}$. The
effective electrical conductivity is proportional to $\sqrt{n_\text{E}}$ when
$n_\text{E} \gg 1$.
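The reported square-root scaling can be illustrated numerically; the prefactor `g0` below is a hypothetical normalization (a single-edge conductance scale), not a value from this work.

```python
import math

# Mean-field scaling from the abstract: sigma_eff ~ sqrt(n_E) for n_E >> 1.
def sigma_eff(n_e, g0=1.0):
    # g0 is a hypothetical prefactor, not given in the abstract.
    return g0 * math.sqrt(n_e)

# Consequence of the scaling: quadrupling the edge density per unit area
# doubles the effective sheet conductivity.
assert abs(sigma_eff(400.0) / sigma_eff(100.0) - 2.0) < 1e-12
```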
|
The main topic considered is maximizing the number of cycles in a graph with
given number of edges. In 2009, Kir\'aly conjectured that there is a constant $c$
such that any graph with $m$ edges has at most $c\cdot(1.4)^m$ cycles. In this paper,
it is shown that for sufficiently large $m$, a graph with $m$ edges has at most
$(1.443)^m$ cycles. For sufficiently large $m$, examples of a graph with $m$
edges and $(1.37)^m$ cycles are presented. For a graph with a given number of
vertices and edges, an upper bound on the maximum number of cycles is given.
Also, exponentially tight bounds are proved for the maximum number of cycles in
a multigraph with a given number of edges, as well as in a multigraph with a
given number of vertices and edges.
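A brute-force count on a tiny example is consistent with the stated $(1.443)^m$ upper bound (which, strictly speaking, is proved only for sufficiently large $m$); the counting helper below is an illustrative sketch, not the paper's method.

```python
from itertools import permutations

def count_cycles(vertices, edges):
    """Count simple cycles in a small undirected graph by brute force."""
    eset = {frozenset(e) for e in edges}
    cycles = set()
    vs = list(vertices)
    for k in range(3, len(vs) + 1):
        for perm in permutations(vs, k):
            # Canonical orientation so each cycle is generated once.
            if perm[0] != min(perm) or perm[1] > perm[-1]:
                continue
            walk = [frozenset((perm[i], perm[(i + 1) % k])) for i in range(k)]
            if all(e in eset for e in walk):
                cycles.add(frozenset(walk))
    return len(cycles)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
m = len(K4)                      # 6 edges
c = count_cycles(range(4), K4)   # 4 triangles + 3 quadrilaterals = 7 cycles
assert c == 7
assert c <= 1.443 ** m           # 7 <= ~9.03, consistent at this small size
```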
|
We revisit the Rellich inequality from the viewpoint of isolating the
contributions from radial and spherical derivatives. This naturally leads to a
comparison of the norms of the radial Laplacian and Laplace-Beltrami operators
with the standard Laplacian. In the case of the Laplace-Beltrami operator, the
three-dimensional case is the most subtle and here we improve a result of Evans
and Lewis by identifying the best constant. Our arguments build on certain
identities recently established by Wadade and the second and third authors,
along with use of spherical harmonics.
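For orientation, the classical Rellich inequality being revisited here can be stated as follows (a standard form, valid for $d \ge 5$; it is included as background and is not a formula from this abstract):

```latex
% Classical Rellich inequality, u in C_c^infty(R^d \ {0}), d >= 5:
\int_{\mathbb{R}^d} |\Delta u|^2 \, dx
  \;\ge\; \frac{d^2(d-4)^2}{16}
  \int_{\mathbb{R}^d} \frac{|u(x)|^2}{|x|^4} \, dx .
```

The degeneracy of the constant in low dimensions is one reason the three-dimensional Laplace-Beltrami case is the most subtle.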
|
We continue our studies on stellar latitudinal differential rotation. The
presented work is a sequel of the work of Reiners et al. who studied the
spectral line broadening profile of hundreds of stars of spectral types A
through G at high rotational speed (vsini > 12 km/s). While most stars were
found to be rigid rotators, only a few tens show the signatures of differential
rotation. The present work comprises the rotational study of some 180
additional stars. The overall broadening profile is derived according to
Reiners et al. from hundreds of spectral lines by least-squares deconvolution,
reducing spectral noise to a minimum. Projected rotational velocities vsini are
measured for about 120 of the sample stars. Differential rotation produces a
cuspy line shape which is best measured in inverse wavelength space by the
first two zeros of its Fourier transform. Rigid and differential rotation can
be distinguished for more than 50 rapid rotators (vsini > 12 km/s) among the
sample stars from the available spectra. Ten stars with significant
differential rotation rates of 10-54 % are identified, which add to the few
known rapid differential rotators. Differential rotation measurements of 6 %
and less for four of our targets are probably spurious and below the detection
limit. Including these objects, the line shapes of more than 40 stars are
consistent with rigid rotation.
|
Two series of Sm-, Gd-codoped aluminoborosilicate glasses with different
total rare earth content have been studied in order to examine the codoping
effect on the structural modifications of beta-irradiated glasses. The data
obtained by Electron Paramagnetic Resonance spectroscopy indicated that
relative amount of Gd3+ ions located in network former position reveals
non-linear dependence on the Sm/Gd ratio. Besides, codoping leads to an evolution
of the EPR signal attributed to defects created by irradiation: the superhyperfine
structure of the boron oxygen hole centre EPR line becomes less noticeable and
less well resolved with increasing Gd content. This indicates that Gd3+ ions are
mainly diluted in the vicinity of the boron network. By Raman spectroscopy, we
showed that the structural changes induced by the irradiation also reveal
non-linear behaviour with Sm/Gd ratio. In fact, the shift of the Si-O-Si
bending vibration modes has a clear minimum for the samples containing equal
amount of Sm and Gd (50:50) in both series of the investigated glasses. In
contrast, for singly doped glasses the dopant content either has no influence on
the Si-O-Si shift (in the case of Gd) or causes its diminution (in the case of
Sm), which is explained by the influence of the reduction process. At the same
time, no noticeable
effect of codoping on Sm3+ intensity as well as on Sm2+ emission or on Sm
reduction process was observed.
|
We present APO and Gemini time-series photometry of WD
J004917.14$-$252556.81, an ultramassive DA white dwarf with $T_{\rm eff} =
13020$ K and $\log{g} = 9.34$. We detect variability at two significant
frequencies, making J0049$-$2525 the most massive pulsating white dwarf
currently known with $M_\star=1.31~M_{\odot}$ (for a CO core) or
$1.26~M_{\odot}$ (for an ONe core). J0049$-$2525 does not display any of the
signatures of binary mergers: there is no evidence of magnetism, large
tangential velocity, or rapid rotation. Hence, it likely formed through single
star evolution and is likely to have an ONe core. Evolutionary models indicate
that its interior is $\gtrsim99$% crystallized. Asteroseismology offers an
unprecedented opportunity to probe its interior structure. However, the
relatively few pulsation modes detected limit our ability to obtain robust
seismic solutions. Instead, we provide several representative solutions that
could explain the observed properties of this star. Extensive follow-up
time-series photometry of this unique target has the potential to discover a
significant number of additional pulsation modes that would help overcome the
degeneracies in the asteroseismic fits, and enable us to probe the interior of
an $\approx1.3~M_{\odot}$ crystallized white dwarf.
|
We prove mixed inequalities for the Hardy-Littlewood maximal function
$M^{\rho,\sigma}$, where $\rho$ is a critical radius function and $\sigma\geq
0$. We also exhibit and prove an extension of the extrapolation result of
Cruz-Uribe, Martell and P\'erez in \cite{CruzUribe-Martell-Perez} to the setting
of Muckenhoupt weights associated to a critical radius function $\rho$. This
theorem allows us to give mixed inequalities for
Schr\"odinger-Calder\'on-Zygmund operators, extending some previous estimates
that we have already proved in \cite{BPQ}. Since we are dealing with unrelated
weights, the proof involves a rather subtle argument related to the original
ideas of Sawyer in \cite{Sawyer}.
|
Although supersymmetry has not been seen directly by experiment, there are
powerful physics reasons to suspect that it should be an ingredient of nature
and that superpartner masses should be somewhat near the weak scale. I present
an argument that if we dismiss our ordinary intuition of fine-tuning, and focus
entirely on more concrete physics issues, the PeV scale might be the best place
for supersymmetry. PeV-scale supersymmetry admits gauge coupling unification,
predicts a Higgs mass between 125 GeV and 155 GeV, and generally disallows
flavor changing neutral currents and CP violating effects in conflict with
current experiment. The PeV scale is motivated independently by dark matter and
neutrino mass considerations.
|
The Higgs-boson production channel $gg\to h$ mediated by light-quark loops
receives large logarithmic corrections in QCD, which can be resummed using
factorization formulae derived in soft-collinear effective theory. In these
factorization formulae the radiative gluon jet function appears, which is a
central object in the study of factorization beyond the leading order in scale
ratios. We calculate this function at two-loop order for the first time and
discuss the subtleties that arise in the computation.
|
A widely used method for determining the similarity of two labeled trees is
to compute a maximum agreement subtree of the two trees. Previous work on this
similarity measure is only concerned with the comparison of labeled trees of
two special kinds, namely, uniformly labeled trees (i.e., trees with all their
nodes labeled by the same symbol) and evolutionary trees (i.e., leaf-labeled
trees with distinct symbols for distinct leaves). This paper presents an
algorithm for comparing trees that are labeled in an arbitrary manner. In
addition to being more general, our algorithm is faster than the previous
algorithms.
Another contribution of this paper is on maximum weight bipartite matchings.
We show how to speed up the best known matching algorithms when the input
graphs are node-unbalanced or weight-unbalanced. Based on these enhancements,
we obtain an efficient algorithm for a new matching problem called the
hierarchical bipartite matching problem, which is at the core of our maximum
agreement subtree algorithm.
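The maximum weight bipartite matching subproblem can be made concrete with a tiny brute-force baseline (our own illustration, not the paper's sped-up algorithm, which relies on node- and weight-unbalanced enhancements):

```python
from itertools import permutations

def max_weight_bipartite_matching(weights):
    """Brute-force maximum weight bipartite matching.

    weights[i][j] is the weight of edge (left i, right j); None means no edge.
    Returns (best_weight, matching) where matching maps left -> right.
    Exponential time; only for tiny instances.
    """
    n_left = len(weights)
    n_right = len(weights[0])
    best_weight, best_match = 0, {}
    rights = list(range(n_right))
    # Try every assignment of left nodes to distinct right nodes.
    for perm in permutations(rights, n_left):
        w, match = 0, {}
        for i, j in enumerate(perm):
            if weights[i][j] is not None:
                w += weights[i][j]
                match[i] = j
        if w > best_weight:
            best_weight, best_match = w, match
    return best_weight, best_match

# Tiny example: 2 left nodes, 3 right nodes.
w = [[3, 1, None],
     [2, 4, 5]]
print(max_weight_bipartite_matching(w))  # -> (8, {0: 0, 1: 2})
```

Efficient algorithms (Hungarian method and its descendants) replace this enumeration; the paper's contribution is speeding those up when the two node sets or the weights are unbalanced.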
|
The nonliteral interpretation of a text is hard for machine models to capture
due to its high context sensitivity and heavy use of figurative
language. In this study, inspired by human reading comprehension, we propose a
novel, simple, and effective deep neural framework, called Skim and Intensive
Reading Model (SIRM), for figuring out implied textual meaning. The proposed
SIRM consists of two main components, namely the skim reading component and
intensive reading component. N-gram features are quickly extracted by the skim
reading component, which is a combination of several convolutional neural
networks, as skim (entire) information. An intensive reading component enables
a hierarchical investigation for both local (sentence) and global (paragraph)
representation, which encapsulates the current embedding and the contextual
information with a dense connection. More specifically, the contextual
information includes the near-neighbor information and the skim information
mentioned above. Finally, besides the normal training loss function, we employ
an adversarial loss function as a penalty over the skim reading component to
eliminate noisy information arising from special figurative words in the
training data. To verify the effectiveness, robustness, and efficiency of the
proposed architecture, we conduct extensive comparative experiments on several
sarcasm benchmarks and an industrial spam dataset with metaphors. Experimental
results indicate that (1) the proposed model, which benefits from context
modeling and consideration of figurative language, outperforms existing
state-of-the-art solutions, with comparable parameter scale and training speed;
(2) the SIRM yields superior robustness in terms of parameter size sensitivity;
(3) compared with ablation and addition variants of the SIRM, the final
framework is efficient enough.
|
Given a perfect coloring of a graph, we prove that the $L_1$ distance between
two rows of the adjacency matrix of the graph is not less than the $L_1$
distance between the corresponding rows of the parameter matrix of the
coloring. With the help of an algebraic approach, we deduce corollaries of this
result for perfect $2$-colorings, perfect colorings in distance-$l$ graphs and
in distance-regular graphs. We also provide examples in which the obtained
property rejects several putative parameter matrices of perfect colorings in
infinite graphs.
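The stated inequality can be checked on a toy case (our own example, not taken from the paper): the 4-cycle with the alternating perfect 2-coloring, whose parameter matrix is [[0, 2], [2, 0]].

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 with vertices 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# Alternating 2-coloring: vertices 0, 2 get color 0; vertices 1, 3 get color 1.
# Every color-0 vertex has exactly 2 neighbors of color 1 and vice versa,
# so this is a perfect coloring with parameter matrix S:
S = np.array([[0, 2],
              [2, 0]])

# L1 distance between rows 0 and 1 of A (two vertices of different colors)...
d_A = int(np.abs(A[0] - A[1]).sum())
# ...is at least the L1 distance between the corresponding rows of S.
d_S = int(np.abs(S[0] - S[1]).sum())
print(d_A, d_S)  # -> 4 4  (the bound holds, here with equality)
```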
|
Temperature measurements of galaxy clusters are used to determine their
masses, which in turn are used to determine cosmological parameters. However,
systematic differences between the temperatures measured by different
telescopes imply a significant source of systematic uncertainty on such mass
estimates. We perform the first systematic comparison between cluster
temperatures measured with Chandra and NuSTAR. This provides a useful
contribution to the effort of cross-calibrating cluster temperatures due to the
harder response of NuSTAR compared with most other observatories. We measure
average temperatures for 8 clusters observed with NuSTAR and Chandra. We fit
the NuSTAR spectra in a hard (3-10 keV) energy band, and the Chandra spectra in
both the hard and a broad (0.6-9 keV) band. We fit a power-law
cross-calibration model to the resulting temperatures. At a Chandra temperature
of 10 keV, the average NuSTAR temperature was $(10.5 \pm 3.7)\%$ and $(15.7 \pm
4.6)\%$ lower than Chandra for the broad and hard band fits respectively. We
explored the impact of systematics from background modelling and multiphase
temperature structure of the clusters, and found that these did not affect our
results. Our sample consists primarily of merging clusters with complex thermal
structures, so they are not ideal calibration targets. However, given the harder
response of NuSTAR it would be expected to measure a higher average temperature
than Chandra for a non-isothermal cluster, so we interpret our measurement as a
lower limit on the difference in temperatures between NuSTAR and Chandra.
|
We discuss information theory as a tool to investigate the constrained minimal
supersymmetric Standard Model (CMSSM) in the light of the observation of the Higgs
boson at the Large Hadron Collider. The entropy of the Higgs boson using its
various detection modes has been constructed as a measure of the information
and has been utilized to explore a wide range of CMSSM parameter space after
including various experimental constraints from the LEP data, B-physics,
electroweak precision observables and the relic density of dark matter.
According to our study, the lightest neutralino is preferred to have a mass
around 1.92 TeV, while the gluino mass is estimated to be around 7.44 TeV. The
values of the CMSSM parameters $m_0$, $m_{1/2}$, $A_0$ and $\tan\beta$
corresponding to the most preferred scenario are found to be about 6 TeV, 3.6
TeV, $-$6.9 TeV and 36.8, respectively.
|
We calculate the total number of humps in Dyck and in Motzkin paths, and we
give Standard Young Tableaux interpretations of the numbers involved. One then
observes the intriguing phenomenon that the hump calculations change
partitions in a strip to partitions in a hook.
|
Although double-peaked narrow emission-line galaxies have been studied
extensively in past years, only a few have been reported among green pea
galaxies (GPs). Here we present our discovery of five GPs with double-peaked
narrow [OIII] emission lines, referred to as DPGPs, selected from the LAMOST
and SDSS spectroscopic surveys. We find that these five DPGPs have blueshifted
narrow components more prominent than the redshifted components, with velocity
offsets of [OIII]$\lambda$5007 lines ranging from 306 to 518 $\rm km\, s^{-1}$
and full widths at half maximum (FWHMs) of individual components ranging from
263 to 441 $\rm km\, s^{-1}$. By analyzing the spectra and the spectral energy
distributions (SEDs), we find that they have larger metallicities and stellar
masses compared with other GPs. The H$\alpha$ line width, emission-line
diagnostic, mid-infrared color, radio emission, and SED fitting provide
evidence of AGN activity in these DPGPs. They have the same spectral
properties as Type 2 quasars. Furthermore, we discuss the possible nature of
the double-peaked narrow emission-line profiles of these DPGPs and find that
they are more likely to be dual AGN. These DPGP galaxies are ideal laboratories
for exploring the growth mode of AGN in the extremely luminous emission-line
galaxies, the co-evolution between AGN and host galaxies, and the evolution of
high-redshift galaxies in the early Universe.
|
Hybrid superconductor-semiconductor heterostructures are promising platforms
for realizing topological superconductors and exploring the physics of
Majorana bound states. Motivated by recent experimental progress, we theoretically study how
magnetic insulators offer an alternative to the use of external magnetic fields
for reaching the topological regime. We consider different setups, where: (1)
the magnetic insulator induces an exchange field in the superconductor, which
leads to a splitting in the semiconductor by proximity effect, and (2) the
magnetic insulator acts as a spin-filter tunnel barrier between the
superconductor and the semiconductor. We show that the spin splitting in the
superconductor alone cannot induce a topological transition in the
semiconductor. To overcome this limitation, we propose to use a spin-filter
barrier that enhances the magnetic exchange and provides a mechanism for a
topological phase transition. Moreover, the spin-dependent tunneling introduces
a strong dependence on the band alignment, which can be crucial in
quantum-confined systems. This mechanism opens up a route towards networks of
topological wires with fewer constraints on device geometry compared to
previous devices that require external magnetic fields.
|
In this paper, we are concerned with the global existence and stability of a
smooth supersonic flow with vacuum state at infinity in a 3-D infinitely long
divergent nozzle. The flow is described by a 3-D steady potential equation,
which is multi-dimensional quasilinear hyperbolic (but degenerate at infinity)
with respect to the supersonic direction, and whose linearized part admits the
form
$\partial_t^2-\frac{1}{(1+t)^{2(\gamma-1)}}(\partial_1^2+\partial_2^2)+\frac{2(\gamma-1)}{1+t}\partial_t$
for $1<\gamma<2$. From the physical point of view, due to the expansive geometric
property of the divergent nozzle and the mass conservation of gas, the moving
gas in the nozzle will gradually become rarefactive and tends to a vacuum state
at infinity, which implies that such a smooth supersonic flow should be
globally stable for small perturbations since there are no strong resulting
compressions in the motion of the flow. We will confirm such a global stability
phenomenon by rigorous mathematical proofs and further show that there do not
exist vacuum domains in any finite part of the nozzle.
|
We show that the energy levels predicted by a 1/N-expansion method for an
N-dimensional Hydrogen atom in a spherical potential are always lower than the
exact energy levels but monotonically converge towards the exact eigenvalues
as higher-order corrections are included. The technique allows a systematic approach for
quantum many body problems in a confined potential and explains the remarkable
agreement of such approximate theories when compared to the exact numerical
spectrum.
|
Accurate classification of mode choice datasets is crucial for transportation
planning and decision-making processes. However, conventional classification
models often struggle to adequately capture the nuanced patterns of minority
classes within these datasets, leading to sub-optimal accuracy. In response to
this challenge, we present Ensemble Synthesizer (ENSY) which leverages
probability distribution for data augmentation, a novel data model tailored
specifically for enhancing classification accuracy in mode choice datasets. In
our study, ENSY demonstrates remarkable efficacy by nearly quadrupling the F1
score of minority classes and improving overall classification accuracy by
nearly 3%. To assess its performance comprehensively, we compare ENSY against
various augmentation techniques including Random Oversampling, SMOTE-NC, and
CTGAN. Through experimentation, ENSY consistently outperforms these methods
across various scenarios, underscoring its robustness and effectiveness.
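The abstract does not specify ENSY's internals, so the following is only a generic sketch of distribution-based minority-class augmentation; fitting an independent Gaussian per numeric feature is our assumption, not the paper's method.

```python
import numpy as np

def augment_minority(X, y, target_class, n_new, rng=None):
    """Distribution-based augmentation sketch (NOT the paper's ENSY algorithm):
    fit an independent Gaussian to each numeric feature of the minority class
    and sample synthetic rows from it."""
    rng = np.random.default_rng(rng)
    X_min = X[y == target_class]
    mu = X_min.mean(axis=0)
    sigma = X_min.std(axis=0) + 1e-9   # avoid zero variance
    X_new = rng.normal(mu, sigma, size=(n_new, X.shape[1]))
    y_new = np.full(n_new, target_class)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

# Example: class 1 is rare (3 of 103 samples); add 50 synthetic rows.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 0.5, (3, 2))])
y = np.array([0] * 100 + [1] * 3)
X_aug, y_aug = augment_minority(X, y, target_class=1, n_new=50, rng=0)
print(X_aug.shape, (y_aug == 1).sum())  # -> (153, 2) 53
```

A classifier trained on the augmented set sees a far less skewed class balance, which is the effect the paper quantifies via minority-class F1.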
|
In this paper, we present HoloBoard, an interactive large-format
pseudo-holographic display system for lecture-based classes. With its unique
properties of immersive visual display and transparent screen, we designed and
implemented a rich set of novel interaction techniques like immersive
presentation, role-play, and lecturing behind the scene that is potentially
valuable for lecturing in class. We conducted a controlled experimental study
to compare a HoloBoard class with a normal class by measuring students'
learning outcomes and three dimensions of engagement (i.e., behavioral,
emotional, and cognitive engagement). We used pre-/post- knowledge tests and
multimodal learning analytics to measure students' learning outcomes and
learning experiences. Results indicated that the lecture-based class utilizing
HoloBoard led to slightly better learning outcomes and a significantly higher
level of student engagement. Given the results, we discuss the impact of
HoloBoard as an immersive medium in the classroom setting and suggest several
design implications for deploying HoloBoard in immersive teaching practices.
|
In this paper, we establish the sharp boundedness of p-adic multilinear
Hausdorff operators on the product of Lebesgue and central Morrey spaces
associated with both power weights and Muckenhoupt weights. Moreover, the
boundedness of the commutators of p-adic multilinear Hausdorff operators on
such spaces, with symbols in central BMO space, is also obtained.
|
We present a metamaterial-based random polarization control plate to produce
incoherent laser irradiation by exploiting the ability of metamaterials to
locally manipulate the polarization of a transmitted beam via tuning of the
local geometry. As a proof of principle, we exemplify this idea numerically in
a simple optical system using a typical L-shaped plasmonic metamaterial with
locally varying geometry, from which the desired polarization distribution can
be obtained. The calculated results illustrate that this scheme can
effectively suppress the speckle contrast and increase irradiation uniformity,
which has potential to satisfy the increasing requirements for incoherent laser
irradiation.
|
For Open IoT, we previously proposed Tacit Computing, a technology that
discovers, on demand, the devices holding the data users need and uses them
dynamically, together with an automatic GPU offloading technology as an
elementary technology of Tacit Computing. However, the offloading technology
can improve only a limited range of applications because it optimizes only the
extraction of parallelizable loop statements. Thus, in this paper, to improve
the performance of more applications automatically, we propose an improved
method that reduces data transfer between the CPU and GPU. We evaluate the
proposed offloading method by applying it to Darknet and find that it runs 3
times as fast as CPU-only execution.
|
Vanadium (IV) oxide is one of the most promising materials for thermochromic
films due to its unique, reversible crystal phase transition from monoclinic
(M1) to rutile (R) at its critical temperature (T$_c$) which corresponds to a
change in optical properties: above T$_c$, VO$_2$ films exhibit a decreased
transmittance for wavelengths of light in the near-infrared region. However, a
high transmittance modulation often sacrifices luminous transmittance which is
necessary for commercial and residential applications of this technology. In
this study, we explore the potential for synthesis of VO$_2$ films in a matrix
of metal oxide nanocrystals, using In$_2$O$_3$, TiO$_2$, and ZnO as diluents.
We seek to optimize the annealing conditions to yield desirable optical
properties. Although the films diluted with TiO$_2$ and ZnO failed to show
transmittance modulation, those diluted with In$_2$O$_3$ exhibited strong
thermochromism. Our investigation introduces a novel window film consisting of
a 0.93 metal ionic molar ratio VO$_2$-In$_2$O$_3$ nanocrystalline matrix,
demonstrating a significant increase in luminous transmittance without any
measurable impact on thermochromic character. Furthermore, solution-processing
mitigates costs, allowing this film to be synthesized 4x-7x cheaper than
industry standards. This study represents a crucial development in film
chemistry and paves the way for further application of VO$_2$ nanocomposite
films in chromogenic fenestration.
|
Electronic Health Records (EHRs) in hospital information systems contain
patients' diagnosis and treatments, so EHRs are essential to clinical data
mining. Of all the tasks in the mining process, Chinese Word Segmentation (CWS)
is a fundamental and important one, and most state-of-the-art methods greatly
rely on large-scale manually annotated data. Since annotation is
time-consuming and expensive, efforts have been devoted to techniques, such as
active learning, to locate the most informative samples for modeling. In this
paper, we follow the trend and present an active learning method for CWS in
EHRs. Specifically, a new sampling strategy combining Normalized Entropy with
Loss Prediction (NE-LP) is proposed to select the most representative data.
Meanwhile, to minimize the computational cost of learning, we propose a joint
model including a word segmenter and a loss prediction model. Furthermore, to
capture interactions between adjacent characters, bigram features are also
applied in the joint model. To illustrate the effectiveness of NE-LP, we
conducted experiments on EHRs collected from the Shuguang Hospital Affiliated
to Shanghai University of Traditional Chinese Medicine. The results demonstrate
that NE-LP consistently outperforms conventional uncertainty-based sampling
strategies for active learning in CWS.
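A minimal sketch of how an entropy-plus-predicted-loss acquisition score might be combined; the weighting scheme and normalization below are our assumptions for illustration, not the paper's exact NE-LP formula.

```python
import numpy as np

def ne_lp_scores(char_probs, predicted_loss, alpha=0.5):
    """Sketch of an NE-LP-style acquisition score: per-sentence normalized
    entropy of the segmenter's label distributions, combined with the output
    of a loss prediction model.

    char_probs: list of (seq_len, n_labels) arrays of per-character label probs
    predicted_loss: (n_sentences,) array from the loss prediction model
    """
    scores = []
    for p, lp in zip(char_probs, predicted_loss):
        ent = -(p * np.log(p + 1e-12)).sum(axis=1)          # per-character entropy
        norm_ent = ent.mean() / np.log(p.shape[1])          # normalize by max entropy
        scores.append(alpha * norm_ent + (1 - alpha) * lp)  # combine the two signals
    return np.array(scores)

# Rank two candidate sentences; the higher-scoring one is queried first.
p1 = np.array([[0.9, 0.1], [0.8, 0.2]])   # confident predictions
p2 = np.array([[0.5, 0.5], [0.6, 0.4]])   # uncertain predictions
scores = ne_lp_scores([p1, p2], np.array([0.1, 0.3]))
print(scores.argmax())  # -> 1 (the uncertain, high-predicted-loss sentence)
```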
|
We study global dynamics for the focusing nonlinear Klein-Gordon equation
with the energy-critical nonlinearity in two or higher dimensions when the
energy equals the threshold given by the ground state of a mass-shifted
equation, and prove that the solutions are divided into scattering and blowup.
In short, the Kenig-Merle scattering/blowup dichotomy extends to the threshold
energy in the case of mass-shift for the critical nonlinear Klein-Gordon
equation.
|
We study the equidistribution of multiplicatively defined sets, such as the
squarefree integers, quadratic non-residues or primitive roots, in sets which
are described in an additive way, such as sumsets or Hilbert cubes. In
particular, we show that if one fixes any proportion less than $40\%$ of the
digits of all numbers of a given binary bit length, then the remaining set
still has the asymptotically expected number of squarefree integers. Next, we
investigate the distribution of primitive roots modulo a large prime $p$,
establishing a new upper bound on the largest dimension of a Hilbert cube in
the set of primitive roots, improving on a previous result of the authors.
Finally, we study sumsets in finite fields and asymptotically find the expected
number of quadratic residues and non-residues in such sumsets, provided their
cardinalities are large enough. This significantly improves on a recent result by
Dartyge, Mauduit and S\'ark\"ozy. Our approach introduces several new ideas,
combining a variety of methods, such as bounds of exponential and character
sums, geometry of numbers and additive combinatorics.
|
This work investigates the emergence of oscillations in one of the simplest
cellular signaling networks exhibiting oscillations, namely, the dual-site
phosphorylation and dephosphorylation network (futile cycle), in which the
mechanism for phosphorylation is processive while the one for dephosphorylation
is distributive (or vice-versa). The fact that this network yields oscillations
was shown recently by Suwanmajo and Krishnan. Our results, which significantly
extend their analyses, are as follows. First, in the three-dimensional space of
total amounts, the border between systems with a stable versus unstable steady
state is a surface defined by the vanishing of a single Hurwitz determinant.
Second, this surface consists generically of simple Hopf bifurcations. Next,
simulations suggest that when the steady state is unstable, oscillations are
the norm. Finally, the emergence of oscillations via a Hopf bifurcation is
enabled by the catalytic and association constants of the distributive part of
the mechanism: if these rate constants satisfy two inequalities, then the
system generically admits a Hopf bifurcation. Our proofs are enabled by the
Routh-Hurwitz criterion, a Hopf-bifurcation criterion due to Yang, and a
monomial parametrization of steady states.
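The role of a single Hurwitz determinant can be illustrated on a generic cubic characteristic polynomial (a standalone textbook example, not the network's actual characteristic polynomial):

```python
def hurwitz_stable_cubic(a1, a2, a3):
    """Routh-Hurwitz test for s^3 + a1 s^2 + a2 s + a3:
    all roots have negative real part iff a1 > 0, a3 > 0, and the Hurwitz
    determinant Delta_2 = a1*a2 - a3 > 0. A Hopf bifurcation in a 3-D system
    occurs where Delta_2 crosses zero while a1, a3 > 0, which is the kind of
    vanishing-Hurwitz-determinant condition described in the abstract."""
    delta2 = a1 * a2 - a3
    return (a1 > 0 and a3 > 0 and delta2 > 0), delta2

# Stable example: (s+1)^3 = s^3 + 3s^2 + 3s + 1 -> Delta_2 = 8 > 0.
print(hurwitz_stable_cubic(3, 3, 1))   # -> (True, 8)
# Marginal case: s^3 + s^2 + s + 1 has roots -1 and +/- i -> Delta_2 = 0.
print(hurwitz_stable_cubic(1, 1, 1))   # -> (False, 0)
```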
|
We present detailed spectral and timing analysis of the hard x-ray transient
IGR J16358-4726 using multi-satellite archival observations. A study of the
source flux time history over 6 years suggests that lower-luminosity transient
outbursts may be occurring at intervals of at most 1 year. Joint spectral fits
of the higher luminosity outburst using simultaneous Chandra/ACIS and
INTEGRAL/ISGRI data reveal a spectrum well described by an absorbed power law
model with a high energy cut-off plus an Fe line. We detected the 1.6 hour
pulsations initially reported using Chandra/ACIS also in the INTEGRAL/ISGRI
light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data
we identified a spin up of 94 s (dP/dt = 1.6E-4), which strongly points to a
neutron star nature for IGR J16358-4726. Assuming that the spin up is due to
disc accretion, we estimate that the source magnetic field ranges between 10^13
- 10^15 G, depending on its distance, possibly supporting a magnetar nature for
IGR J16358-4726.
|
We describe a strategy for a non-perturbative computation of the b-quark mass
to leading order in 1/m in the Heavy Quark Effective Theory (HQET). The
approach avoids the perturbative subtraction of power-law divergences, and the
continuum limit may be taken. First numerical results in the quenched
approximation demonstrate the potential of the method with a preliminary result
m_b(4GeV)=4.56(2)(7) GeV. In principle, the idea may also be applied to the
matching of composite operators or the computation of 1/m corrections in HQET.
|
In this note we describe our personal encounters with the $p$-adic Stark
conjecture. Gross describes the period between $1977$ and $1986$ when he came
to formulate these conjectures (\S 1-8), and Dasgupta describes the period
between $1998$ and $2021$, when he worked with others to finally prove them (\S
9-12).
|
One of the optimization goals of a particle accelerator is to reach the
highest possible beam peak current. For that to happen the electron bunch
propagating through the accelerator should be kept relatively short along the
direction of its travel. In order to obtain a better understanding of the beam
composition it is crucial to evaluate the electric charge distribution along
the micrometer-scale packets. The task of the Electro-Optic Detector (EOD) is
to imprint the beam charge profile on the spectrum of light of a laser pulse.
The actual measurement of charge distribution is then extracted with a
spectrometer based on a diffraction grating.
The article focuses on the developed data acquisition and processing system
called the High-speed Optical Line Detector (HOLD). It is a 1D image
acquisition system which solves several challenges related to capturing,
buffering, processing, and transmitting large data streams with the use of an
FPGA device. It implements a latency-optimized custom architecture based on the AXI
interfaces. The HOLD device is realized as an FPGA Mezzanine Card (FMC) carrier
with single High Pin-Count connector hosting the KIT KALYPSO detector.
The solution presented in the paper is probably one of the world's fastest
line cameras. Thanks to its custom architecture, it is capable of capturing at
least 10 times more frames per second than the fastest comparable commercially
available devices.
|
Cross-validation (CV) is a technique for evaluating the predictive ability of
statistical models/learning systems on a given data set. Despite its wide
applicability, the rather heavy computational cost can prevent its use as the
system size grows. To resolve this difficulty in the case of Bayesian linear
regression, we develop a formula for evaluating the leave-one-out CV error
approximately without actually performing CV. The usefulness of the developed
formula is tested by a statistical mechanical analysis of a synthetic model and
is further confirmed by application to a real-world supernova data set.
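The flavor of such CV-without-CV formulas can be illustrated with the classic hat-matrix identity for ordinary least squares (a standard result, simpler than the paper's Bayesian formula): the leave-one-out residuals follow from a single fit.

```python
import numpy as np

def loo_errors_linear(X, y, lam=0.0):
    """Leave-one-out residuals for (ridge-regularized) linear regression
    without refitting: e_loo_i = e_i / (1 - H_ii), where
    H = X (X^T X + lam I)^{-1} X^T is the hat matrix and e = y - H y."""
    n, d = X.shape
    G = np.linalg.inv(X.T @ X + lam * np.eye(d))
    H = X @ G @ X.T
    resid = y - H @ y
    return resid / (1 - np.diag(H))

# Verify against brute-force leave-one-out refitting (lam = 0, i.e. OLS).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)
fast = loo_errors_linear(X, y)
slow = []
for i in range(30):
    mask = np.arange(30) != i
    beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    slow.append(y[i] - X[i] @ beta)
print(np.allclose(fast, np.array(slow)))  # -> True
```

The closed form costs one matrix inversion instead of n refits, which is the same kind of computational saving the paper pursues for Bayesian linear regression.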
|
In this paper, we investigate the null geodesics of the static charged black
hole in heterotic string theory. A detailed analysis of the geodesics is
carried out in the Einstein frame as well as in the string frame. In the Einstein frame,
the geodesics are solved exactly in terms of the Jacobi-elliptic integrals for
all possible energy levels and angular momentum of the photons. In the string
frame, the geodesics are presented for the circular orbits. As a physical
application of the null geodesics, we have obtained the angle of deflection for
the photons and the quasinormal modes of a massless scalar field in the eikonal
limit.
|
We investigate a recently proposed Higgs-like model (arXiv:0811.4423
[hep-th]), in the framework of a gauge-invariant but path-dependent variables
formalism. We compute the static potential between test charges in a condensate
of scalars and fermions. In the case of charged massive scalar we recover the
screening potential. On the other hand, in the Higgs case, with a "tachyonic"
mass term and a quartic potential in the Lagrangian, unexpected features are
found. It is observed that the interaction energy is the sum of an
effective-Yukawa and a linear potential, leading to the confinement of static
charges.
|
We study the system of D6+D0 branes at sub-stringy scale. We show that the
proper description of the system, for large background field associated with
the D0-branes, is via spinning chargeless black holes in five dimensions. The
repulsive force between the D6-branes and the D0-branes is understood through
the centrifugal barrier. We discuss the implication on the stability of the
D6+D0 solution.
|
Evolution Strategies (ESs) have recently become popular for training deep
neural networks, in particular on reinforcement learning tasks, a special form
of controller design. Compared to classic problems in continuous direct search,
deep networks pose extremely high-dimensional optimization problems, with many
thousands or even millions of variables. In addition, many control problems
give rise to a stochastic fitness function. Considering the relevance of the
application, we study the suitability of evolution strategies for
high-dimensional, stochastic problems. Our results give insights into which
algorithmic mechanisms of modern ES are of value for the class of problems at
hand, and they reveal principled limitations of the approach. They are in line
with our theoretical understanding of ESs. We show that combining ESs that
offer reduced internal algorithm cost with uncertainty handling techniques
yields promising methods for this class of problems.
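A minimal noisy-sphere experiment illustrates combining a simple ES with a crude uncertainty-handling step (averaging repeated fitness evaluations); this toy is our own sketch, not one of the algorithms benchmarked in the paper.

```python
import numpy as np

def simple_es(f, dim, sigma=0.3, pop=20, iters=200, reeval=5, seed=0):
    """Minimal isotropic ES sketch with a crude uncertainty-handling step:
    each candidate's noisy fitness is averaged over `reeval` evaluations.
    Modern ESs additionally adapt sigma (and possibly a full covariance)."""
    rng = np.random.default_rng(seed)
    m = rng.normal(size=dim)                     # mean of the search distribution
    for _ in range(iters):
        cand = m + sigma * rng.normal(size=(pop, dim))
        # Average repeated evaluations to reduce the effect of fitness noise.
        fit = np.array([np.mean([f(x, rng) for _ in range(reeval)]) for x in cand])
        elite = cand[np.argsort(fit)[: pop // 4]]
        m = elite.mean(axis=0)                   # recombine the best quarter
        sigma *= 0.99                            # fixed decay instead of adaptation
    return m

# Noisy sphere: minimize sum(x^2) observed with additive Gaussian noise.
def noisy_sphere(x, rng):
    return float(np.sum(x**2) + 0.1 * rng.normal())

m = simple_es(noisy_sphere, dim=10)
print(np.sum(m**2))  # starts near 10 in expectation; ends near the optimum
```

Even this crude re-evaluation scheme stabilizes selection under noise; deep-network controllers raise `dim` to millions, which is where the paper's analysis of algorithmic cost becomes decisive.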
|
Composition is a powerful principle for systems biology, focused on the
interfaces, interconnections, and orchestration of distributed processes.
Whereas most systems biology models focus on the structure or dynamics of
specific subsystems in controlled conditions, compositional systems biology
aims to connect such models into integrative multiscale simulations. This
emphasizes the space between models--a compositional perspective asks what
variables should be exposed through a submodel's interface? How do coupled
models connect and translate across scales? How can we connect domain-specific
models across biological and physical research areas to drive the synthesis of
new knowledge? What is required of software that integrates diverse datasets
and submodels into unified multiscale simulations? How can the resulting
integrative models be accessed, flexibly recombined into new forms, and
iteratively refined by a community of researchers? This essay offers a
high-level overview of the key components for compositional systems biology,
including: 1) a conceptual framework and corresponding graphical framework to
represent interfaces, composition patterns, and orchestration patterns; 2)
standardized composition schemas that offer consistent formats for composable
data types and models, fostering robust infrastructure for a registry of
simulation modules that can be flexibly assembled; 3) a foundational set of
biological templates--schemas for cellular and molecular interfaces, which can
be filled with detailed submodels and datasets, and are designed to integrate
knowledge that sheds light on the molecular emergence of cells; and 4)
scientific collaboration facilitated by user-friendly interfaces for connecting
researchers with datasets and models, and which allows a community of
researchers to effectively build integrative multiscale models of cellular
systems.
|
Developing accurate, efficient, and robust closure models is essential in the
construction of reduced order models (ROMs) for realistic nonlinear systems,
which generally require drastic ROM mode truncations. We propose a deep
residual neural network (ResNet) closure learning framework for ROMs of
nonlinear systems. The novel ResNet-ROM framework consists of two steps: (i) In
the first step, we use ROM projection to filter the given nonlinear PDE and
construct a filtered ROM. This filtered ROM is low-dimensional, but is not
closed (because of the PDE nonlinearity). (ii) In the second step, we use
ResNet to close the filtered ROM, i.e., to model the interaction between the
resolved and unresolved ROM modes. We emphasize that in the new ResNet-ROM
framework, data is used only to complement classical physical modeling (i.e.,
only in the closure modeling component), not to completely replace it. We also
note that the new ResNet-ROM is built on general ideas of spatial filtering and
deep learning and is independent of (restrictive) phenomenological arguments,
e.g., of eddy viscosity type. The numerical experiments for the 1D Burgers
equation show that the ResNet-ROM is significantly more accurate than the
standard projection ROM. The new ResNet-ROM is also more accurate and
significantly more efficient than other modern ROM closure models.
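The closure structure can be sketched as follows; the random weights stand in for the trained network, and the linear operator is illustrative, not the paper's Burgers ROM.

```python
import numpy as np

def resnet_closure(a, W1, b1, W2, b2):
    """Residual-network closure sketch: identity skip connection plus a small
    learned nonlinear correction. In the ResNet-ROM framework the weights
    would be trained so the correction models the effect of the unresolved
    modes on the resolved ones."""
    h = np.tanh(W1 @ a + b1)
    return a + W2 @ h + b2

def rom_rhs(a, A, closure=None):
    """Right-hand side of a filtered ROM, da/dt = A a + closure(a), where A
    is an (illustrative, linear) Galerkin operator of the filtered equations
    and the closure supplies the missing resolved-unresolved interaction."""
    rhs = A @ a
    if closure is not None:
        rhs = rhs + closure(a)
    return rhs

# Tiny 3-mode demonstration with placeholder weights.
rng = np.random.default_rng(0)
r, width = 3, 8
A = -np.eye(r)                                   # stable linear part (illustrative)
W1, b1 = 0.1 * rng.normal(size=(width, r)), np.zeros(width)
W2, b2 = 0.1 * rng.normal(size=(r, width)), np.zeros(r)
a = np.ones(r)
rhs = rom_rhs(a, A, closure=lambda v: resnet_closure(v, W1, b1, W2, b2))
print(rhs.shape)  # -> (3,)
```

The point of the residual form is that the data-driven part only complements the physical model: with zero weights the closure reduces to the identity perturbation of the plain Galerkin ROM.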
|
Psycholinguistic properties of words have been used in various approaches to
Natural Language Processing tasks, such as text simplification and readability
assessment. Most of these properties are subjective, involving costly and
time-consuming surveys to be gathered. Recent approaches use the limited
datasets of psycholinguistic properties to extend them automatically to large
lexicons. However, some of the resources used by such approaches are not
available to most languages. This study presents a method to infer
psycholinguistic properties for Brazilian Portuguese (BP) using regressors
built with a light set of features usually available for less resourced
languages: word length, frequency lists, lexical databases composed of school
dictionaries and word embedding models. The correlations between the properties
inferred are close to those obtained by related works. The resulting resource
contains 26,874 words in BP annotated with concreteness, age of acquisition,
imageability and subjective frequency.
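A minimal sketch of the inference setup, with synthetic words and a closed-form ridge regressor standing in for the regressors of the study; the feature values, the target, and the regularization strength are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Light features per word: length, log frequency, an in-school-dictionary
# flag, and a few word-embedding dimensions (all synthetic here).
n_words, n_emb = 500, 8
length = rng.integers(2, 15, n_words)
log_freq = rng.normal(3.0, 1.0, n_words)
in_dict = rng.integers(0, 2, n_words)
emb = rng.standard_normal((n_words, n_emb))
X = np.column_stack([length, log_freq, in_dict, emb])

# Synthetic "concreteness" target loosely tied to the features.
y = (4.0 - 0.05 * length + 0.3 * log_freq + emb[:, 0]
     + 0.1 * rng.standard_normal(n_words))

# Ridge regressor in closed form: beta = (X'X + lam I)^{-1} X'y.
Xc = np.column_stack([np.ones(n_words), X])   # add intercept
lam = 1.0
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(Xc.shape[1]), Xc.T @ y)
pred = Xc @ beta

# Pearson correlation, the comparison metric mentioned in the abstract.
corr = np.corrcoef(pred, y)[0, 1]
```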
|
In this paper, we study a strongly correlated quantum system that has become
amenable to experiment by the advent of ultracold bosonic atoms in optical
lattices, a chain of two different bosonic constituents. Excitations in this
system are first considered within the framework of bosonization and Luttinger
liquid theory which are applicable if the Luttinger liquid parameters are
determined numerically. The occurrence of a bosonic counterpart of fermionic
spin-charge separation is signalled by a characteristic two-peak structure in
the spectral functions found by dynamical DMRG, in good agreement with
analytical predictions. Experimentally, single-particle excitations as probed
by spectral functions are currently not accessible in cold atoms. We therefore
consider the modifications needed for current experiments, namely the
investigation of the real-time evolution of density perturbations instead of
single particle excitations, a slight inequivalence between the two
intraspecies interactions in actual experiments, and the presence of a
confining trap potential. Using time-dependent DMRG we show that only
quantitative modifications occur. With an eye to the simulation of strongly
correlated quantum systems far from equilibrium we detect a strong dependence
of the time-evolution of entanglement entropy on the initial perturbation,
signalling limitations to current reasonings on entanglement growth in
many-body systems.
|
We perform a hierarchical Bayesian inference to investigate the population
properties of the coalescing compact binaries involving at least one neutron
star (NS). With the current observational data, we cannot rule out any of the
Double Gaussian, Single Gaussian, and Uniform NS mass distribution models,
although the mass distribution of the Galactic NSs is slightly preferred by the
gravitational wave (GW) observations. The mass distribution of black holes
(BHs) in the neutron star-black hole (NSBH) population is found to be similar
to that for the Galactic X-ray binaries. Additionally, the ratio of the merger
rate densities between NSBHs and binary neutron stars (BNSs) is estimated to be
about 3 : 7. The spin properties of the binaries, though relatively poorly
constrained, play a nontrivial role in reconstructing the mass distribution of
NSs and BHs. We find that a
perfectly aligned spin distribution can be ruled out, while a purely isotropic
distribution of spin orientation is still allowed.
|
We present a prediction of chiral topological metals with several classes of
unconventional quasiparticle fermions in a family of SrGePt-type materials in
terms of first-principles calculations. In these materials, fourfold spin-3/2
Rarita-Schwinger-Weyl (RSW) fermions, sixfold excitations, and Weyl fermions
coexist around the Fermi level when spin-orbit coupling is considered, and the
Chern number for the first two kinds of fermions takes the maximal value of
four. We find that large Fermi arcs from the spin-3/2 RSW fermions emerge on the
(010)-surface, spanning the whole surface Brillouin zone. Moreover, there exist
Fermi arcs originating from Weyl points, which further overlap with trivial
bulk bands. In addition, we reveal that a large spin Hall conductivity can be
obtained, which is attributed to the remarkable spin Berry curvature around the
degenerate nodes and the band splitting induced by spin-orbit coupling. Our
findings indicate that the SrGePt family of compounds provides an excellent
platform for studying topological electronic states and the intrinsic spin
Hall effect.
|
This paper is concerned with semilinear equations in divergence form \[
\operatorname{div}(A(x)Du) = f(u) \] where $f:\mathbb{R}\to[0,\infty)$ is
nondecreasing. We
prove a sharp Harnack type inequality for nonnegative solutions which is
closely connected to the classical Keller-Osserman condition for the existence
of entire solutions.
|
In this paper, we argue that some fundamental concepts and tools of signal
processing may be effectively applied to represent and interpret social
cognition processes. From this viewpoint, individuals or, more generally,
social stimuli are thought of as a weighted sum of harmonics with different
frequencies: Low frequencies represent general categories such as gender,
ethnic group, nationality, etc., whereas high frequencies account for personal
characteristics. Individuals are then seen by observers as the output of a
filter that emphasizes a certain range of high or low frequencies. The
selection of the filter depends on the social distance between the observing
individual or group and the person being observed as well as on motivation,
cognitive resources and cultural background. Enhancing low- or high-frequency
harmonics is not on equal footing, the latter requiring supplementary energy.
This mirrors a well-known property of signal processing filters. More
generally, in the light of this correspondence, we show that several
established results of social cognition admit a natural interpretation and
integration in the signal processing language. While the potential of this
connection between an area of social psychology and one of information
engineering appears considerable (compression, information retrieval,
filtering, feedback, feedforward, sampling, aliasing, etc.), in this paper we
shall limit ourselves to laying down what we consider the pillars of this
bridge on which future research may be founded.
|
We show that as $T\to \infty$, for all $t\in [T,2T]$ outside of a set of
measure $\mathrm{o}(T)$, $$ \int_{-(\log T)^{\theta}}^{(\log T)^{\theta}}
|\zeta(\tfrac 12 + \mathrm{i} t + \mathrm{i} h)|^{\beta} \mathrm{d} h = (\log
T)^{f_{\theta}(\beta) + \mathrm{o}(1)}, $$ for some explicit exponent
$f_{\theta}(\beta)$, where $\theta > -1$ and $\beta > 0$. This proves an
extended version of a conjecture of Fyodorov and Keating (2014). In particular,
it shows that, for all $\theta > -1$, the moments exhibit a phase transition at
a critical exponent $\beta_c(\theta)$, below which $f_\theta(\beta)$ is
quadratic and above which $f_\theta(\beta)$ is linear. The form of the exponent
$f_\theta$ also differs between mesoscopic intervals ($-1<\theta<0$) and
macroscopic intervals ($\theta>0$), a phenomenon that stems from an approximate
tree structure for the correlations of zeta. We also prove that, for all $t\in
[T,2T]$ outside a set of measure $\mathrm{o}(T)$, $$ \max_{|h| \leq (\log
T)^{\theta}} |\zeta(\tfrac{1}{2} + \mathrm{i} t + \mathrm{i} h)| = (\log
T)^{m(\theta) + \mathrm{o}(1)}, $$ for some explicit $m(\theta)$. This
generalizes earlier results of Najnudel (2018) and Arguin et al. (2019) for
$\theta = 0$. The proofs are unconditional, except for the upper bounds when
$\theta > 3$, where the Riemann hypothesis is assumed.
|
We discuss generalizations of Ozsvath-Szabo's spectral sequence relating
Khovanov homology and Heegaard Floer homology, focusing attention on an
explicit relationship between natural Z (resp., 1/2 Z) gradings appearing in
the two theories. These two gradings have simple representation-theoretic
(resp., geometric) interpretations, which we also review.
|
Motivated by the heat flow and bubble analysis of biharmonic mappings, we study
further regularity issues of the fourth order Lamm-Riviere system
$$\Delta^{2}u=\Delta(V\cdot\nabla u)+{\rm div}(w\nabla
u)+(\nabla\omega+F)\cdot\nabla u+f$$ in dimension four, with an inhomogeneous
term $f$ which belongs to some natural function space. We obtain optimal higher
order regularity and sharp H\"older continuity of weak solutions. Among several
applications, we derive weak compactness for sequences of weak solutions with
uniformly bounded energy, which generalizes the weak convergence theory of
approximate biharmonic mappings.
|
Deep Learning refers to a set of machine learning techniques that utilize
neural networks with many hidden layers for tasks such as image
classification, speech recognition, and language understanding. Deep learning has
been proven to be very effective in these domains and is pervasively used by
many Internet services. In this paper, we describe different automotive use
cases for deep learning, in particular in the domain of computer vision. We
survey the current state of the art in libraries, tools and infrastructures
(e.\,g.\ GPUs and clouds) for implementing, training and deploying deep neural
networks. We particularly focus on convolutional neural networks and computer
vision use cases, such as the visual inspection process in manufacturing plants
and the analysis of social media data. To train neural networks, curated and
labeled datasets are essential. In particular, both the availability and scope
of such datasets is typically very limited. A main contribution of this paper
is the creation of an automotive dataset that allows us to learn and
automatically recognize different vehicle properties. We describe an end-to-end
deep learning application utilizing a mobile app for data collection and
process support, and an Amazon-based cloud backend for storage and training.
For training we evaluate the use of cloud and on-premises infrastructures
(including multiple GPUs) in conjunction with different neural network
architectures and frameworks. We assess both the training times as well as the
accuracy of the classifier. Finally, we demonstrate the effectiveness of the
trained classifier in a real-world setting during the manufacturing process.
|
Characteristic points have been a primary tool in the study of a generating
function defined by a single recursive equation. We investigate the proper way
to adapt this tool when working with multi-equation recursive systems.
|
About half of the world population already lives in urban areas. It is
projected that by 2050, approximately 70% of the world population will live in
cities. In addition to this, most developing countries do not have reliable
population census figures, and periodic population censuses are extremely
resource expensive. In Africa's most populous country, Nigeria, for instance,
the last decennial census was conducted in 2006. The relevance of near-accurate
population figures at the local levels cannot be overemphasized for a broad
range of applications by government agencies and non-governmental
organizations, including the planning and delivery of services, estimating
populations at risk of hazards or infectious diseases, and disaster relief
operations. Using GRID3 (Geo-Referenced Infrastructure and Demographic Data for
Development) high-resolution spatially disaggregated population data estimates,
this study proposes a framework for aggregating population figures at micro
levels within a larger geographic jurisdiction. Python, QGIS, and machine
learning techniques were used for data visualization, spatial analysis, and
zonal statistics. Lagos Island, Nigeria was used as a case study to demonstrate
how to obtain a more precise population estimate at the lowest administrative
jurisdiction and eliminate ambiguity caused by antithetical parameters in the
calculations. We also demonstrated how the framework can be used as a benchmark
for estimating the carrying capacities of urban basic services like healthcare,
housing, sanitary facilities, education, water, etc. The proposed framework
would help urban planners and government agencies to plan and manage cities
better using more accurate data.
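The core zonal-statistics step can be sketched as follows; the population raster, zone assignments, and facility counts are synthetic stand-ins for the GRID3 estimates and the Lagos Island administrative units.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gridded population estimates (GRID3-style raster, synthetic here) and a
# zone raster assigning each cell to an administrative unit.
pop = rng.uniform(0, 50, size=(100, 100))       # people per cell
zones = rng.integers(0, 5, size=(100, 100))     # 5 admin units

# Zonal statistics: total population per zone is the sum of cell values
# falling inside that zone (np.bincount aggregates in a single pass).
zone_totals = np.bincount(zones.ravel(), weights=pop.ravel())

# Carrying-capacity benchmark, e.g. people per (hypothetical) health
# facility in each zone.
facilities = np.array([3, 1, 4, 2, 5])
people_per_facility = zone_totals / facilities
```

By construction the zone totals conserve the raster total, which is what removes the ambiguity of ad hoc per-zone estimates.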
|
The recently discovered three-dimensional hyperhoneycomb iridate,
$\beta$-Li$_2$IrO$_3$, has raised hopes for the realization of dominant Kitaev
interaction between spin-orbit entangled local moments due to its near-ideal
lattice structure. If true, this material may lie close to the sought-after
quantum spin liquid phase in three dimensions. Utilizing ab-initio electronic
structure calculations, we first show that the spin-orbit entangled basis,
$j_{\rm eff}$=1/2, correctly captures the low energy electronic structure. The
effective spin model derived in the strong coupling limit supplemented by the
ab-initio results is shown to be dominated by the Kitaev interaction. We
demonstrated that the possible range of parameters is consistent with a
non-coplanar spiral magnetic order found in a recent experiment. All of these
analyses suggest that $\beta$-Li$_2$IrO$_3$ may be the closest among known
materials to the Kitaev spin liquid regime.
|
A great deal of progress has been made in image captioning, driven by
research into how to encode the image using pre-trained models. This includes
visual encodings (e.g. image grid features or detected objects) and more
recently textual encodings (e.g. image tags or text descriptions of image
regions). As more advanced encodings are available and incorporated, it is
natural to ask: how to efficiently and effectively leverage the heterogeneous
set of encodings? In this paper, we propose to regard the encodings as
augmented views of the input image. The image captioning model encodes each
view independently with a shared encoder efficiently, and a contrastive loss is
incorporated across the encoded views in a novel way to improve their
representation quality and the model's data efficiency. Our proposed
hierarchical decoder then adaptively weighs the encoded views according to
their effectiveness for caption generation by first aggregating within each
view at the token level, and then across views at the view level. We
demonstrate significant performance improvements of +5.6% CIDEr on MS-COCO and
+12.9% CIDEr on Flickr30k compared to the state of the art, and conduct rigorous
analyses to demonstrate the importance of each part of our design.
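One plausible form of the cross-view contrastive term is an InfoNCE-style objective, sketched below in numpy on random "encoded views"; the paper's exact loss, encoder, and temperature are not given here, so treat this purely as an illustration of the idea that views of the same image are pulled together and other images pushed apart.

```python
import numpy as np

def info_nce(views, temperature=0.1):
    """Contrastive loss across encoded views of the same images.

    views: (n_views, batch, dim) -- each view encoded by a shared encoder.
    Positive pairs are the same image under two different views; the other
    images in the batch act as negatives.
    """
    v = views / np.linalg.norm(views, axis=-1, keepdims=True)
    n_views, batch, _ = v.shape
    losses = []
    for i in range(n_views):
        for j in range(n_views):
            if i == j:
                continue
            logits = v[i] @ v[j].T / temperature        # (batch, batch)
            # Cross-entropy with the matching image as the positive class.
            logits -= logits.max(axis=1, keepdims=True)
            log_probs = logits - np.log(np.exp(logits).sum(axis=1,
                                                           keepdims=True))
            losses.append(-log_probs[np.arange(batch),
                                     np.arange(batch)].mean())
    return float(np.mean(losses))

rng = np.random.default_rng(3)
base = rng.standard_normal((16, 32))
views = np.stack([base + 0.05 * rng.standard_normal((16, 32))
                  for _ in range(3)])
loss = info_nce(views)
```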
|
The theory ACFA admits a primitive recursive quantifier elimination
procedure. It is therefore primitive recursively decidable.
|
We discuss the current status of the resonant spin-flavor precession (RSFP)
solution to the solar neutrino problem. We perform a fit to all the latest
solar neutrino data for various assumed magnetic field profiles in the sun. We
show that the RSFP can account for all the solar neutrino experiments, giving
as good a fit as other alternative solutions such as MSW or Just So, and
therefore can be a viable solution to the solar neutrino problem.
|
We present optical and near-infrared (NIR) photometry of a classical nova,
V2362 Cyg (= Nova Cygni 2006). V2362 Cyg experienced a peculiar rebrightening
with a long duration from 100 to 240 d after the maximum of the nova. Our
multicolor observation indicates an emergence of a pseudophotosphere with an
effective temperature of 9000 K at the rebrightening maximum. After the
rebrightening maximum, the object faded slowly and homogeneously in all of the
observed bands for one week. This implies that the fading just after the
rebrightening maximum (less than or equal to 1 week) was caused by a slowly shrinking
pseudophotosphere. Then, the NIR flux drastically increased, while the optical
flux steeply declined. The optical and NIR flux was consistent with blackbody
radiation with a temperature of 1500 K during this NIR rising phase. These
facts are likely to be explained by dust formation in the nova ejecta. Assuming
an optically thin case, we estimate the dust mass of 10^(-8) -- 10^(-10)
M_solar, which is less than those in typical dust-forming novae. These results
support the scenario that a second, long-lasting outflow, which caused the
rebrightening, interacted with a fraction of the initial outflow and formed
dust grains.
|
We introduce an open class of linear operators on Banach and Hilbert spaces
whose non-wandering set is an infinite-dimensional topologically mixing
subspace. In certain cases, the non-wandering set coincides with the whole
space.
|
Accurately detecting active objects undergoing state changes is essential for
comprehending human interactions and facilitating decision-making. The existing
methods for active object detection (AOD) primarily rely on visual appearance
of the objects within the input, such as changes in size, shape and relationship
with hands. However, these visual changes can be subtle, posing challenges,
particularly in scenarios with multiple distracting no-change instances of the
same category. We observe that the state changes are often the result of an
interaction being performed upon the object, and thus propose to use informed
priors about object-related plausible interactions (including semantics and
visual appearance) to provide more reliable cues for AOD. Specifically, we
propose a knowledge aggregation procedure to integrate the aforementioned
informed priors into oracle queries within the teacher decoder, offering more
object affordance commonsense to locate the active object. To streamline the
inference process and reduce extra knowledge inputs, we propose a knowledge
distillation approach that encourages the student decoder to mimic the
detection capabilities of the teacher decoder using the oracle query by
replicating its predictions and attention. Our proposed framework achieves
state-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens,
MECCANO, and 100DOH, which demonstrates the effectiveness of our approach in
improving AOD.
|
The deployment of artificial intelligence (AI) in decision-making
applications requires ensuring an appropriate level of safety and reliability,
particularly in changing environments that contain a large number of unknown
observations. To address this challenge, we propose a novel safe reinforcement
learning (RL) approach that utilizes an anomalous state sequence to enhance RL
safety. Our proposed solution Safe Reinforcement Learning with Anomalous State
Sequences (AnoSeqs) consists of two stages. First, we train an agent in a
non-safety-critical offline 'source' environment to collect safe state
sequences. Next, we use these safe sequences to build an anomaly detection
model that can detect potentially unsafe state sequences in a 'target'
safety-critical environment where failures can have high costs. The estimated
risk from the anomaly detection model is utilized to train a risk-averse RL
policy in the target environment; this involves adjusting the reward function
to penalize the agent for visiting anomalous states deemed unsafe by our
anomaly model. In experiments on multiple safety-critical benchmarking
environments including self-driving cars, our solution approach successfully
learns safer policies and proves that sequential anomaly detection can provide
an effective supervisory signal for training safety-aware RL agents.
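The two-stage idea can be sketched as follows. The nearest-neighbour anomaly score, the threshold, and the penalty weight `lam` are illustrative stand-ins for the paper's sequence-level anomaly model and reward adjustment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stage 1: safe states collected in the non-safety-critical source
# environment (synthetic 2-D states clustered around the origin).
safe_states = rng.normal(0.0, 1.0, size=(1000, 2))

# Simple anomaly detector: mean distance to the k nearest safe states
# (a stand-in for whatever sequence-level model one might train).
def anomaly_score(state, k=10):
    d = np.linalg.norm(safe_states - state, axis=1)
    return np.sort(d)[:k].mean()

# Stage 2: reward shaping in the target environment -- penalize the agent
# for visiting states the anomaly model deems unsafe.
def shaped_reward(env_reward, state, lam=1.0, threshold=2.0):
    score = anomaly_score(state)
    penalty = lam * max(0.0, score - threshold)
    return env_reward - penalty

r_safe = shaped_reward(1.0, np.array([0.1, -0.2]))     # near the safe set
r_unsafe = shaped_reward(1.0, np.array([8.0, 8.0]))    # far from it
```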
|
Two-dimensional (2D) materials are particularly attractive to build the
channel of next-generation field-effect transistors (FETs) with gate lengths
below 10-15 nm. Because the 2D technology has not yet reached the same level of
maturity as its silicon counterpart, device simulation can be of great help to
predict the ultimate performance of 2D FETs and provide experimentalists with
reliable design guidelines. In this paper, an ab initio modelling approach
dedicated to well-known and exotic 2D materials is presented and applied to the
simulation of various components, from thermionic to tunnelling transistors
based on mono- and multi-layer channels. Moreover, the physics of metal - 2D
semiconductor contacts is revealed and the importance of different scattering
sources on the mobility of selected 2D materials is discussed. It is expected
that modeling frameworks similar to the one described here will not only
accompany future developments of 2D devices, but will also enable them.
|
In order to deal with issues caused by the increasing penetration of
renewable resources in power systems, this paper proposes a novel distributed
frequency control algorithm for each generating unit and controllable load in a
transmission network to replace the conventional automatic generation control
(AGC). The targets of the proposed control algorithm are twofold. First, it is
to restore the nominal frequency and scheduled net inter-area power exchanges
after an active power mismatch between generation and demand. Second, it is to
optimally coordinate the active powers of all controllable units in a
distributed manner. The designed controller only relies on local information,
computation, and peer-to-peer communication between cyber-connected buses, and
it is also robust against uncertain system parameters. Asymptotic stability of
the closed-loop system under the designed algorithm is analysed by using a
nonlinear structure-preserving model including the first-order turbine-governor
dynamics. Finally, case studies validate the effectiveness of the proposed
method.
|
The pseudo-likelihood method of Besag (1974) has remained a popular method
for estimating Markov random fields on very large lattices, despite various
documented deficiencies. This is partly because it remains the only
computationally tractable method for large lattices. We introduce a novel
method to estimate Markov random fields defined on a regular lattice. The
method takes advantage of conditional independence structures and recursively
decomposes a large lattice into smaller sublattices. An approximation is made
at each decomposition. Doing so completely avoids the need to compute the
troublesome normalising constant. The computational complexity is $O(N)$, where
$N$ is the number of pixels in the lattice, making it computationally
attractive for very large lattices. We show through simulation, that the
proposed method performs well, even when compared to the methods using exact
likelihoods.
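For concreteness, Besag's pseudo-likelihood for a nearest-neighbour Ising model on a regular lattice (the baseline the proposed method is compared against) looks like this; the spin configuration here is random rather than a true Ising sample, so the fitted coupling should land near zero.

```python
import numpy as np

rng = np.random.default_rng(5)

# A spin configuration on a regular lattice.
x = rng.choice([-1, 1], size=(32, 32))

def neighbour_sum(x):
    # 4-neighbour sum with free boundary conditions.
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]; s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]; s[:, :-1] += x[:, 1:]
    return s

def log_pseudo_likelihood(beta, x):
    """Besag's pseudo-likelihood: the product of the full conditionals
    p(x_i | neighbours), which sidesteps the normalising constant."""
    s = neighbour_sum(x)
    # p(x_i | neighbours) = sigmoid(2 * beta * x_i * s_i)
    return float(np.sum(np.log(1.0 / (1.0 + np.exp(-2.0 * beta * x * s)))))

# Estimate beta by a coarse grid search over the pseudo-likelihood.
betas = np.linspace(-1.0, 1.0, 201)
beta_hat = betas[np.argmax([log_pseudo_likelihood(b, x) for b in betas])]
```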
|
This paper has been withdrawn temporarily.
|
This essay is a nontechnical primer for a broader audience, in which I paint
a broad-brush picture of modern cosmology. I begin by reviewing the evidence
for the big bang, including the expansion of our Universe, the cosmic microwave
background, and the primordial abundances of the light elements. Next, I
discuss how these and other cosmological observations can be well explained by
means of the concordance model of cosmology, putting a particular emphasis on
the composition of the cosmic energy budget in terms of visible matter, dark
matter, and dark energy. This sets the stage for a short overview of the
history of the Universe from the earliest moments of its existence all the way
to the accelerated expansion at late times and beyond. Finally, I summarize the
current status of the field, including the challenges it is currently facing
such as the Hubble tension, and conclude with an outlook onto the bright future
that awaits us in the coming years and decades. The text is complemented by an
extensive bibliography serving as a guide for readers who wish to delve deeper.
|
In this paper, we analyze the impact of compressed sensing with complex
random matrices on Fisher information and the Cram\'{e}r-Rao Bound (CRB) for
estimating unknown parameters in the mean value function of a complex
multivariate normal distribution. We consider the class of random compression
matrices whose distribution is right-orthogonally invariant. The compression
matrix whose elements are i.i.d. standard normal random variables is one such
matrix. We show that for all such compression matrices, the Fisher information
matrix has a complex matrix beta distribution. We also derive the distribution
of CRB. These distributions can be used to quantify the loss in CRB as a
function of the Fisher information of the non-compressed data. In our numerical
examples, we consider a direction of arrival estimation problem and discuss the
use of these distributions as guidelines for choosing compression ratios based
on the resulting loss in CRB.
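A real-valued Monte Carlo sketch of the CRB loss under i.i.d. Gaussian compression (a simplification of the paper's complex-valued setting; the linear model and problem sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear model y = H @ theta + white noise; the Fisher information for
# theta is proportional to H^T H.  Compressing with a random m x n matrix A
# gives Fisher information H^T P H, where P projects onto the rows of A.
n, p, m = 64, 2, 16
H = rng.standard_normal((n, p))
fim_full = H.T @ H
crb_full = np.linalg.inv(fim_full)[0, 0]

ratios = []
for _ in range(200):
    A = rng.standard_normal((m, n))          # i.i.d. Gaussian compression
    P = A.T @ np.linalg.solve(A @ A.T, A)    # projection onto rows of A
    fim_comp = H.T @ P @ H
    crb_comp = np.linalg.inv(fim_comp)[0, 0]
    ratios.append(crb_comp / crb_full)       # CRB inflation, always >= 1

mean_loss = float(np.mean(ratios))
```

The empirical distribution of `ratios` is the kind of object the derived beta distributions characterize analytically, and its mean quantifies the average CRB loss at compression ratio m/n.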
|
We present the results of the X-ray XMM-Newton observations of NGC 507, a
dominant elliptical galaxy in a small group of galaxies, and report
'super-solar' metal abundances of both Fe and alpha-elements in the hot ISM of this
galaxy. We find Z_Fe = 2-3 times solar inside the D25 ellipse of NGC 507. This
is the highest Z_Fe reported so far for the hot halo of an elliptical galaxy;
this high iron abundance is fully consistent with the predictions of stellar
evolution models, which include the yield of both type II and Ia supernovae.
The spatially resolved, high quality XMM spectra provide enough statistics to
formally require at least three emission components: two soft thermal
components indicating a range of temperatures in the hot ISM, plus a harder
component, consistent with the integrated output of low mass X-ray binaries
(LMXBs). The abundance of alpha-elements (most accurately determined by Si) is
also found to be super-solar. The alpha-element to Fe abundance ratio is close
to the solar ratio, suggesting that ~70% of the iron mass in the hot ISM
originated from SNe of Type Ia. The alpha-element to Fe abundance ratio remains
constant out to at least 100 kpc, indicating that SNe Type II and Ia ejecta are
well mixed on a scale much larger than the extent of the stellar body.
|
Using Sen's entropy function formalism, we compute the entropy for the
extremal dyonic black hole solutions of theories in the presence of dilaton
field coupled to the field strength and a dilaton potential. We solve the
attractor equations analytically and determine the near horizon metric, the
value of the scalar fields and the electric field on the horizon, and
consequently the entropy of these black holes. The attractor mechanism plays a
very important role for these systems, and after studying the simplest systems
involving dilaton fields, we propose a general ansatz for the value of the
scalar field on the horizon, which allows us to solve the attractor equations
for gauged supergravity theories in $AdS_4$ spaces. In particular, we derive an
expression for the dyonic black hole entropy for the $\mathcal{N}=8$ gauged
supergravity in 4 dimensions which does not contain explicitly the gauge
parameter of the potential.
|
An encoder wishes to minimize the bit rate necessary to guarantee that a
decoder is able to calculate a symbolwise function of a sequence available only
at the encoder and a sequence that can be measured only at the decoder. This
classical problem, first studied by Yamamoto, is addressed here by including
two new aspects: (i) The decoder obtains noisy measurements of its sequence,
where the quality of such measurements can be controlled via a cost-constrained
"action" sequence; (ii) Measurement at the decoder may fail in a way that is
unpredictable to the encoder, thus requiring robust encoding. The considered
scenario generalizes known settings such as the Heegard-Berger-Kaspi and the
"source coding with a vending machine" problems. The rate-distortion-cost
function is derived and numerical examples are also worked out to obtain
further insight into the optimal system design.
|
We present a theory of the finite temperature thermo-electric response
functions of graphene, in the hydrodynamic regime induced by electron-electron
collisions. In moderate magnetic fields, the Dirac particles undergo a
collective cyclotron motion with a temperature-dependent relativistic cyclotron
frequency proportional to the net charge density of the Dirac plasma. In
contrast to the undamped cyclotron pole in Galilean-invariant systems (Kohn's
theorem), here there is a finite damping induced by collisions between the
counter-propagating particles and holes. This cyclotron motion shows up as a
damped pole in the frequency dependent conductivities, and should be readily
detectable in microwave measurements at room temperature. We also discuss the
large Nernst effect to be expected in graphene.
|
The problem of determining whether a graph $G$ contains another graph $H$ as
a minor, referred to as the minor containment problem, is a fundamental problem
in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are
general graphs, it is sometimes tractable on more restricted graph classes.
This study focuses on the case where both $G$ and $H$ are trees, known as the
tree minor containment problem. Even in this case, the problem is known to be
NP-complete. In contrast, polynomial-time algorithms are known for the case
when both trees are caterpillars or when the maximum degree of $H$ is a
constant. Our research aims to clarify the boundary of tractability and
intractability for the tree minor containment problem. Specifically, we provide
dichotomies for the computational complexities of the problem based on three
structural parameters: the diameter, pathwidth, and path eccentricity.
|
The scattering theory of Lax and Phillips, designed primarily for hyperbolic
systems, such as electromagnetic or acoustic waves, is described. This theory
provides a realization of the theorem of Foias and Nagy; there is a subspace of
the Hilbert space in which the unitary evolution of the system, restricted to
this subspace, is realized as a semigroup. The embedding of the quantum theory
into this structure, carried out by Flesia and Piron, is reviewed. We show how
the density matrix for an effectively pure state can evolve to an effectively
mixed state (decoherence) in this framework. Necessary conditions are given for
the realization of the relation between the spectrum of the generator of the
semigroup and the singularities of the $S$-matrix (in energy representation).
It is shown that these conditions may be met in the Liouville space formulation
of quantum evolution, and in the Hilbert space of relativistic quantum theory.
|
We derive viscous forces for vortices in a thin-film ferromagnet. The viscous
force acting on vortex $i$ is a linear superposition $\mathbf F_i = - \sum_{j}
\hat{D}_{ij} \mathbf V_j$, where $\mathbf V_j$ is the velocity of vortex $j$.
Thanks to the long-range nature of vortices, the mutual drag tensor
$\hat{D}_{ij}$ is comparable in magnitude to the coefficient of self-drag
$D_{ii}$.
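The superposition $\mathbf F_i = -\sum_j \hat D_{ij}\mathbf V_j$ is simply a block matrix-vector product; below is a two-vortex numerical example with illustrative (made-up) drag tensors and velocities.

```python
import numpy as np

# Drag tensors D_1j acting on vortex 1 in 2-D (values illustrative; the
# mutual drag D_12 is taken comparable in size to the self-drag D_11,
# as stated in the abstract).
D = np.array([[[1.0, 0.0], [0.0, 1.0]],      # D_11 (self-drag)
              [[0.5, 0.1], [0.1, 0.5]]])     # D_12 (mutual drag)
V = np.array([[1.0, 0.0], [0.0, 2.0]])       # velocities of vortices 1, 2

# F_1 = -(D_11 V_1 + D_12 V_2)
F1 = -(D[0] @ V[0] + D[1] @ V[1])
```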
|