In this note we give a universal construction of Bruhat-Tits group schemes on wonderful embeddings of adjoint groups, in both the absolute and relative settings, and of adjoint Kac-Moody groups. These group schemes have natural classifying properties reflecting the orbit structure of the wonderful embeddings.
|
In deep learning, load data containing non-temporal factors are difficult for sequence models to process, which limits forecasting precision. Therefore, a short-term load forecasting method based on a convolutional neural network (CNN), a self-attention encoder-decoder network (SAEDN) and residual refinement (Res) is proposed. In this method, the feature extraction module is a two-dimensional convolutional neural network, which is used to mine local correlations between data and obtain high-dimensional data features. The initial load forecasting module consists of a self-attention encoder-decoder network and a feedforward neural network (FFN). The module utilizes self-attention mechanisms to encode the high-dimensional features, capturing global correlations between data. As a result, the model is able to retain important information based on coupling relationships within data mixed with non-time-series factors. Then, self-attention decoding is performed and the feedforward neural network is used to regress the initial load. This paper introduces a residual mechanism to build the load optimization module, which generates residual load values to optimize the initial load. Simulation results show that the proposed load forecasting method has advantages in terms of prediction accuracy and prediction stability.
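A minimal PyTorch sketch of the described pipeline (all layer sizes hypothetical, and the decoder stage folded into the encoder for brevity); it is meant only to make the module structure concrete, not to reproduce the paper's model:

```python
import torch
from torch import nn

class SAEDNForecaster(nn.Module):
    """Sketch: 2-D CNN feature extraction -> self-attention encoding ->
    FFN regression of the initial load -> residual refinement (Res)."""
    def __init__(self, n_feats=8, steps=24, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=3, padding=1), nn.ReLU())
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
        self.ffn = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
        self.res = nn.Sequential(nn.Linear(steps, 64), nn.ReLU(),
                                 nn.Linear(64, steps))  # residual refinement

    def forward(self, x):                  # x: (batch, steps, n_feats)
        f = self.cnn(x.unsqueeze(1))       # local correlations via 2-D conv
        f = f.mean(-1).transpose(1, 2)     # pool -> (batch, steps, d_model)
        initial = self.ffn(self.encoder(f)).squeeze(-1)  # initial load
        return initial + self.res(initial)               # refined load

y = SAEDNForecaster()(torch.randn(16, 24, 8))  # -> (16, 24) forecasts
```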
|
Because of long-wavelength fluctuations, the nature of solids and phase transitions in 2D differs from that in 3D systems and has been heavily debated over the past decades, with the focus on the existence of the hexatic phase. Here, using large-scale computer simulations, we investigate the melting transition in 2D systems of polydisperse hard disks. We find that, with increasing particle size polydispersity, the melting transition can be qualitatively changed from the recently proposed two-stage process to the Kosterlitz-Thouless-Halperin-Nelson-Young scenario, with a significantly enlarged stability range for the hexatic phase. Moreover, re-entrant melting transitions are found in high-density systems of polydisperse hard disks, which were proven impossible in 3D polydisperse hard-sphere systems. These results suggest a new fundamental difference between phase transitions in polydisperse systems in 2D and 3D.
|
A key component of any robot is the interface between ROS2 software and physical motors. New robots often use arbitrary, messy mixtures of closed and open motor drivers, and error-prone physical mountings, wiring, and connectors to interface them. There is a need for a standardizing OSH component to abstract this complexity, as Arduino did for interfacing to smaller components. We present an OSH printed circuit board to solve this problem once and for all. On the high-level side, it interfaces to the Arduino Giga -- acting as an unusually large and robust shield -- and thus to existing open source ROS software stacks. On the low-level side, it interfaces to emerging standard open hardware, including OSH motor drivers and relays, which can already be used to drive fully open hardware wheeled and arm robots. This enables the creation of a family of standardized, fully open hardware, fully reproducible research platforms.
|
We have used Euler's alternating series for the Riemann zeta function to define a regularized ratio appearing in the functional equation, valid even in the critical strip, and have shown some evidence indicating the hypothesis. In this note we briefly review the essential points and also define a finite ratio in the functional equation from divergent quantities.
|
We consider the properties of listwise deletion when both $n$ and the number of variables grow large. We show that when (i) all data have some idiosyncratic missingness and (ii) the number of variables grows superlogarithmically in $n$, then, as $n$ grows large, listwise deletion drops all rows with probability approaching one. Using two canonical datasets from the study of comparative politics and international relations, we provide numerical illustrations that these problems may emerge in real-world settings. These results suggest that, in practice, using listwise deletion may mean using few of the variables available to the researcher.
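A minimal simulation sketch of the mechanism, assuming each cell is independently missing with probability p (values hypothetical): the expected surviving fraction is $(1-p)^k$, which vanishes once the number of variables $k$ outgrows $\log n$.

```python
import numpy as np

rng = np.random.default_rng(0)

def surviving_fraction(n, k, p):
    """Fraction of rows kept by listwise deletion when each of k variables
    is independently missing with probability p."""
    missing = rng.random((n, k)) < p
    return (~missing.any(axis=1)).mean()

for n in (1_000, 10_000, 100_000):
    k = int(np.log(n) ** 1.5)   # superlogarithmic number of variables
    # Expected surviving fraction is (1 - p)^k -> 0 as k outgrows log(n).
    print(n, k, surviving_fraction(n, k, p=0.05))
```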
|
Luminosity is the key quantity characterizing the performance of charged particle colliders, and precise luminosity determination is an important task in collider physics. Part of this task is the proper calibration of detectors dedicated to luminosity measurements. The widely used experimental method of calibration is the van der Meer scan, a beam-separation scan performed under specially optimized beam conditions. This work is devoted to modeling this scan with a q-Gaussian distribution of particles in the colliding beams. Because of its properties, the q-Gaussian distribution is believed to describe the density closer to reality than regular Gaussian-based models. In this work, the q-Gaussian model is applied to van der Meer scan modeling, and the benefits of this model for the luminosity calibration task are demonstrated.
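A minimal numerical sketch of the q-Gaussian profile and a toy one-dimensional scan (all parameter values hypothetical): the q-Gaussian reduces to a Gaussian as $q \to 1$ and develops heavier tails for $q > 1$.

```python
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian profile; reduces to exp(-beta x^2) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * x**2)
    base = 1.0 - (1.0 - q) * beta * x**2
    return np.clip(base, 0.0, None) ** (1.0 / (1.0 - q))

# Toy 1-D van der Meer scan: the rate vs. beam separation is the overlap
# integral of the two (here identical) transverse beam profiles.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
profile = q_gaussian(x, q=1.3, beta=0.5)
for sep in (0.0, 1.0, 2.0, 4.0):
    shifted = q_gaussian(x - sep, q=1.3, beta=0.5)
    print(sep, np.sum(profile * shifted) * dx)   # one scan-curve point
```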
|
We introduce the fundamental ideas and challenges of Predictable AI, a nascent research area that explores the ways in which we can anticipate key indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems, and thus should be prioritised over performance. While distinctive from other areas of technical and non-technical AI research, the questions, hypotheses and challenges relevant to Predictable AI have not yet been clearly described. This paper aims to elucidate them, calls for identifying paths towards AI predictability, and outlines the potential impact of this emergent field.
|
In this paper we derive some identities and inequalities on the M\"obius mu function. Our main tools are phi functions for intervals of positive integers and their unions.
|
Uniform shear flow is a paradigmatic example of a nonequilibrium fluid state
exhibiting non-Newtonian behavior. It is characterized by uniform density and
temperature and a linear velocity profile $U_x(y)=a y$, where $a$ is the
constant shear rate. In the case of a rarefied gas, all the relevant physical
information is represented by the one-particle velocity distribution function
$f({\bf r},{\bf v})=f({\bf V})$, with ${\bf V}\equiv {\bf v}-{\bf U}({\bf r})$,
which satisfies the standard nonlinear integro-differential Boltzmann equation.
We have studied this state for a two-dimensional gas of Maxwell molecules with
grazing collisions in which the nonlinear Boltzmann collision operator reduces
to a Fokker-Planck operator. We have found analytically that for shear rates
larger than a certain threshold value the velocity distribution function
exhibits an algebraic high-velocity tail of the form $f({\bf V};a)\sim |{\bf V}|^{-4-\sigma(a)}\Phi(\phi; a)$, where $\tan\phi\equiv V_y/V_x$ and the
angular distribution function $\Phi(\phi; a)$ is the solution of a modified
Mathieu equation. The enforcement of the periodicity condition $\Phi(\phi;
a)=\Phi(\phi+\pi; a)$ allows one to obtain the exponent $\sigma(a)$ as a
function of the shear rate. As a consequence of this power-law decay, all the
velocity moments of a degree equal to or larger than $2+\sigma(a)$ are
divergent. In the high-velocity domain the velocity distribution is highly
anisotropic, with the angular distribution sharply concentrated around a
preferred orientation angle which rotates counterclockwise as the shear rate
increases.
|
The transverse momentum distributions of various hadrons produced in most central Pb+Pb collisions at the LHC energy $\sqrt{s_{NN}} = 2.76$ TeV have been studied using our earlier proposed unified statistical thermal freeze-out model. The calculated results are found to be in good agreement with the experimental data measured by the ALICE experiment. The model fits provide the thermal freeze-out conditions in terms of the temperature and collective flow parameters for different particle species. Interestingly, the model parameter fits reveal a strong collective flow in the system, which appears to be a consequence of the increasing particle density at the LHC. The model used incorporates a longitudinal as well as a transverse hydrodynamic flow. The chemical potential has been assumed to be nearly equal to zero for the bulk of the matter, owing to a high degree of nuclear transparency at such energies. The contributions from heavier decay resonances are also taken into account in our calculations.
|
A three-scale homogenization procedure is proposed in this paper to provide estimates of the effective thermal conductivities of porous carbon-carbon textile composites. On each scale -- the level of the fiber tow (micro-scale), the level of the yarns (meso-scale) and the level of the laminate (macro-scale) -- a two-step homogenization procedure based on the Mori-Tanaka averaging scheme is adopted. This involves evaluating the effective properties first in the absence of pores. In the next step, an ellipsoidal pore is introduced into a new, generally orthotropic, matrix to make provision for the presence of crimp voids and of transverse and delamination cracks resulting from the thermal transformation of a polymeric precursor into the carbon matrix. Other sources of imperfection attributed to the manufacturing process, including the non-uniform texture of the reinforcements, are taken into consideration through histograms of inclination angles measured along the fiber tow path, together with a particular shape of the equivalent ellipsoidal inclusion. The analysis shows that reasonable agreement of the numerical predictions with experimental measurements can be achieved.
|
The simplicity in the nuclear quadrupole moments reported recently in
$^{107-129}$Cd, i.e., a linear increase of the ${11/2}^-$ quadrupole moments,
is investigated microscopically with the covariant density functional theory.
Using the newly developed functional PC-PK1, the quadrupole moments as well as
their linear increase tendency with the neutron number are excellently
reproduced without any {\it ad hoc} parameters. The core polarization is found
to be very important and contributes almost half of the quadrupole moments. The
simplicity of the linear increase is revealed to be due to the pairing
correlation which smears out the abrupt changes induced by the single-particle
shell structure.
|
We consider a contextual online learning (multi-armed bandit) problem with
high-dimensional covariate $\mathbf{x}$ and decision $\mathbf{y}$. The reward
function to learn, $f(\mathbf{x},\mathbf{y})$, does not have a particular
parametric form. The literature has shown that the optimal regret is
$\tilde{O}(T^{(d_x+d_y+1)/(d_x+d_y+2)})$, where $d_x$ and $d_y$ are the
dimensions of $\mathbf x$ and $\mathbf y$, and thus it suffers from the curse
of dimensionality. In many applications, only a small subset of variables in
the covariate affect the value of $f$, which is referred to as
\textit{sparsity} in statistics. To take advantage of the sparsity structure of
the covariate, we propose a variable selection algorithm called
\textit{BV-LASSO}, which incorporates novel ideas such as binning and voting to
apply LASSO to nonparametric settings. Our algorithm achieves the regret
$\tilde{O}(T^{(d_x^*+d_y+1)/(d_x^*+d_y+2)})$, where $d_x^*$ is the effective
covariate dimension. The regret matches the optimal regret when the covariate
is $d^*_x$-dimensional and thus cannot be improved. Our algorithm may serve as
a general recipe to achieve dimension reduction via variable selection in
nonparametric settings.
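The abstract does not spell out BV-LASSO, so the following is only a sketch of the binning-and-voting idea on a toy regression problem (all names and parameters hypothetical): average the response within random re-binnings of the covariate space, run LASSO of the bin means on the bin centers, and keep variables selected by a majority vote.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy data: only the first 2 of 8 covariates affect the response (sparsity).
n, d = 5000, 8
X = rng.uniform(size=(n, d))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

def lasso_select(X, y, n_bins, shift, alpha=0.01):
    """Bin the covariate space, average y within each occupied bin, and run
    LASSO of the bin means on the bin centers; return selected variables."""
    idx = np.floor(np.clip(X + shift, 0.0, 0.999) * n_bins).astype(int)
    keys, inv = np.unique(idx, axis=0, return_inverse=True)
    inv = inv.ravel()
    centers = (keys + 0.5) / n_bins - shift
    means = np.bincount(inv, weights=y) / np.bincount(inv)
    return np.abs(Lasso(alpha=alpha).fit(centers, means).coef_) > 1e-3

# Voting: keep a variable only if it is selected in a majority of re-binnings.
votes = sum(lasso_select(X, y, n_bins=2, shift=rng.uniform(0.0, 0.5))
            for _ in range(15))
print("selected covariates:", np.where(votes >= 8)[0])   # expect [0 1]
```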
|
Young massive clusters are perfect astrophysical laboratories for the study of massive stars. Clusters with Wolf-Rayet (WR) stars are of special importance, since they enable us to study a coeval WR population at a uniform metallicity and known age. GLIMPSE30 (G30) is one of them. The cluster is situated near the Galactic plane (l=298.756deg, b=-0.408deg), and we aimed to determine its physical parameters and to investigate its high-mass stellar content, especially the WR stars. Our analysis is based on SOFI/NTT JsHKs imaging and low-resolution (R~2000) spectroscopy of the brightest cluster members in the K atmospheric window. For the age determination we applied isochrone fits for MS and pre-MS stars. We derived stellar parameters of the WR star candidates using full non-LTE modeling of the observed spectra. Using a variety of techniques, we find that G30 is a very young cluster, with age t~4Myr. The cluster is located in the Carina spiral arm; it is deeply embedded in dust and suffers reddening of Av~10.5+-1.1mag. The distance to the object is d=7.2+-0.9kpc. The mass of the cluster members down to 2.35Msol is ~1600Msol. The cluster's MF for the mass range of 5.6 to 31.6Msol shows a slope of Gamma=-1.01+-0.03. The total mass of the cluster obtained from this MF down to 1Msol is about 3x10^3Msol. The spectral analysis and the models allow us to conclude that G30 hosts at least one Ofpe/WN and two WR stars. The WR stars are of hydrogen-rich WN6-7 type with progenitor masses above 60Msol. G30 is a new member of the exquisite family of young Galactic clusters hosting WR stars. It is a factor of two to three less massive than some of the youngest super-massive star clusters like the Arches, the Quintuplet and the Central cluster, and is their smaller analog.
|
We propose ZeroSARAH -- a novel variant of the variance-reduced method SARAH
(Nguyen et al., 2017) -- for minimizing the average of a large number of
nonconvex functions $\frac{1}{n}\sum_{i=1}^{n}f_i(x)$. To the best of our
knowledge, in this nonconvex finite-sum regime, all existing variance-reduced
methods, including SARAH, SVRG, SAGA and their variants, need to compute the
full gradient over all $n$ data samples at the initial point $x^0$, and then
periodically compute the full gradient once every few iterations (for SVRG,
SARAH and their variants). Note that SVRG, SAGA and their variants typically
achieve weaker convergence results than variants of SARAH: $n^{2/3}/\epsilon^2$
vs. $n^{1/2}/\epsilon^2$. Thus we focus on variants of SARAH. The proposed
ZeroSARAH and its distributed variant D-ZeroSARAH are the \emph{first}
variance-reduced algorithms which \emph{do not require any full gradient
computations}, not even for the initial point. Moreover, for both standard and
distributed settings, we show that ZeroSARAH and D-ZeroSARAH obtain new
state-of-the-art convergence results, which can improve the previous best-known
result (given by e.g., SPIDER, SARAH, and PAGE) in certain regimes. Avoiding
any full gradient computations (which are time-consuming steps) is important in
many applications as the number of data samples $n$ usually is very large.
Especially in the distributed setting, periodic computation of full gradient
over all data samples needs to periodically synchronize all
clients/devices/machines, which may be impossible or unaffordable. Thus, we
expect that ZeroSARAH/D-ZeroSARAH will have a practical impact in distributed
and federated learning where full device participation is impractical.
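ZeroSARAH's own recursion, which removes the full-gradient step, is not given in the abstract; for orientation, the sketch below shows the classical SARAH baseline it modifies, on a toy least-squares problem (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]     # gradient of f_i
full_grad = lambda x: A.T @ (A @ x - b) / n

eta, x = 0.02, np.zeros(d)
for epoch in range(20):
    v = full_grad(x)                  # classical SARAH: periodic full pass
    x_prev, x = x, x - eta * v        # (this is the step ZeroSARAH avoids)
    for _ in range(n):                # inner loop: recursive estimator
        i = rng.integers(n)
        v = grad_i(x, i) - grad_i(x_prev, i) + v
        x_prev, x = x, x - eta * v
    print(epoch, np.linalg.norm(full_grad(x)))
```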
|
In this work we study the problem of linear stability of gravitational
perturbations in stationary and spherically symmetric wormholes. For this
purpose, we employ the Newman-Penrose formalism which is well-suited for
treating gravitational radiation in General Relativity, as well as the
geometrical aspect of this theory. With this method we obtain a "master
equation" that describes the behavior of gravitational perturbations that are
of odd-parity in the Regge-Wheeler gauge. This equation is later applied to a
specific class of Morris-Thorne wormholes and also to the metric of an
asymptotically flat scalar field wormhole. The analysis of the equations that
these space-times yield reveals that there are no unstable vibrational modes
generated by the type of perturbations studied here.
|
XL-Calibur is a balloon-borne Compton polarimeter for X-rays in the
$\sim$15-80 keV range. Using an X-ray mirror with a 12 m focal length for
collecting photons onto a beryllium scattering rod surrounded by CZT detectors,
a minimum-detectable polarization as low as $\sim$3% is expected during a
24-hour on-target observation of a 1 Crab source at 45$^{\circ}$ elevation.
Systematic effects alter the reconstructed polarization as the mirror focal
spot moves across the beryllium scatterer, due to pointing offsets, mechanical
misalignment or deformation of the carbon-fiber truss supporting the mirror and
the polarimeter. Unaddressed, this can give rise to a spurious polarization
signal for an unpolarized flux, or a change in reconstructed polarization
fraction and angle for a polarized flux. Using benchmarked Monte-Carlo simulations and an accurate mirror point-spread function characterized at synchrotron beamlines, systematic effects are quantified, and mitigation
strategies discussed. By recalculating the scattering site for a shifted beam,
systematic errors can be reduced from several tens of percent to the
few-percent level for any shift within the scattering element. The treatment of
these systematic effects will be important for any polarimetric instrument
where a focused X-ray beam is impinging on a scattering element surrounded by
counting detectors.
|
Pre-trained contrastive vision-language models have demonstrated remarkable
performance across a wide range of tasks. However, they often struggle on
fine-grained datasets with categories not adequately represented during
pre-training, which makes adaptation necessary. Recent works have shown
promising results by utilizing samples from web-scale databases for
retrieval-augmented adaptation, especially in low-data regimes. Despite the
empirical success, understanding how retrieval impacts the adaptation of
vision-language models remains an open research question. In this work, we
adopt a reflective perspective by presenting a systematic study to understand
the roles of key components in retrieval-augmented adaptation. We unveil new
insights on uni-modal and cross-modal retrieval and highlight the critical role
of logit ensemble for effective adaptation. We further present theoretical
underpinnings that directly support our empirical observations.
|
Despite the observable benefit of Natural Language Processing (NLP) in processing large amounts of textual medical data within a limited time for information retrieval, only a handful of research efforts have been devoted to uncovering novel data-cleaning methods. Data cleaning in NLP is central to extracting validated information. Another observed limitation in the NLP domain is the scarcity of medical corpora that provide answers to a given medical question. Recognising the limitations and challenges from these two perspectives, this research aims to clean a medical dataset using ensemble techniques and to develop a corpus. The corpus is expected to answer questions based on the semantic relationships of its sequences. The data-cleaning experiments in this research suggest that the ensemble technique provides the highest accuracy (94%) compared to any single process, which includes vectorisation, exploratory data analysis, and feeding the vectorised data. The second aim, of having an adequate corpus, was realised by extracting answers from the dataset. This research is significant for machine learning, specifically data cleaning, and for the medical sector; it also underscores the importance of NLP in the medical field, where accurate and timely information extraction can be a matter of life and death. It establishes text data processing using NLP as a powerful tool for extracting valuable information, much as is done with image data.
|
Bursts of images exhibit significant self-similarity across both time and space. This motivates representing the denoising kernels as linear combinations of a small set of basis elements. To this end, we introduce a novel basis
prediction network that, given an input burst, predicts a set of global basis
kernels -- shared within the image -- and the corresponding mixing coefficients
-- which are specific to individual pixels. Compared to state-of-the-art
techniques that output a large tensor of per-pixel spatiotemporal kernels, our
formulation substantially reduces the dimensionality of the network output.
This allows us to effectively exploit comparatively larger denoising kernels,
achieving both significant quality improvements (over 1dB PSNR) and faster
run-times over state-of-the-art methods.
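A minimal sketch of the kernel-reconstruction step (shapes and names hypothetical; in the paper a network predicts the basis and coefficients, here they are random placeholders):

```python
import torch

# B bursts, T frames, K basis kernels of spatial size s x s, image H x W.
B, T, K, s, H, W = 1, 8, 16, 5, 64, 64
burst  = torch.randn(B, T, H, W)
basis  = torch.randn(B, K, T * s * s)                    # global basis kernels
coeffs = torch.softmax(torch.randn(B, K, H, W), dim=1)   # per-pixel mixing

# Per-pixel spatiotemporal kernels = coefficient-weighted sum of the basis.
kernels = torch.einsum('bkf,bkhw->bfhw', basis, coeffs)  # (B, T*s*s, H, W)

# Apply: gather s x s neighborhoods from each frame, take per-pixel dot products.
patches = torch.nn.functional.unfold(burst, s, padding=s // 2)  # (B, T*s*s, H*W)
out = (kernels.reshape(B, T * s * s, H * W) * patches).sum(1).reshape(B, H, W)
```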
|
The Folium of Descartes in $\mathbb{K}\times\mathbb{K}$ carries group laws, defined entirely in terms of algebraic operations over the field $\mathbb{K}$. The problems discussed in this paper include: normalization of the Descartes Folium, group laws and morphisms, exotic structures, a second exotic structure, some topologies on the Descartes Folium, the differential structure on the Descartes Folium, a first isomorphism of algebraic Lie groups over $\mathbb{K}$, a second isomorphism of algebraic Lie groups over $\mathbb{K}$, derived structures of algebraic Lie groups, a differential/complex analytic structure on the Descartes Folium, the Descartes Folium as a topological field, etc. In establishing these results, we focus on methods that exploit diagram-manipulation techniques (as alternatives to algebraic methods of proof). All our results confirm that the Descartes Folium carries natural group structures, unsuspected until now.
|
Let function $f$ be analytic in the unit disk ${\mathbb D}$ and be normalized
so that $f(z)=z+a_2z^2+a_3z^3+\cdots$. In this paper we give sharp bounds on the moduli of its second, third and fourth coefficients, if $f$ satisfies \[
\left|\arg \left[\left(\frac{z}{f(z)}\right)^{1+\alpha}f'(z) \right]
\right|<\gamma\frac{\pi}{2} \quad\quad (z\in {\mathbb D}),\] for $0<\alpha<1$
and $0<\gamma\leq1$.
|
Brain networks have received considerable attention given their critical significance for understanding human brain organization, for investigating neurological disorders and for clinical diagnostic applications. Structural
brain network (e.g. DTI) and functional brain network (e.g. fMRI) are the
primary networks of interest. Most existing works in brain network analysis
focus on either structural or functional connectivity, which cannot leverage
the complementary information from each other. Although multi-view learning
methods have been proposed to learn from both networks (or views), these
methods aim to reach a consensus among multiple views, and thus distinct
intrinsic properties of each view may be ignored. How to jointly learn
representations from structural and functional brain networks while preserving
their inherent properties is a critical problem. In this paper, we propose a
framework of Siamese community-preserving graph convolutional network (SCP-GCN)
to learn the structural and functional joint embedding of brain networks.
Specifically, we use graph convolutions to learn the structural and functional
joint embedding, where the graph structure is defined with structural
connectivity and node features are from the functional connectivity. Moreover,
we propose to preserve the community structure of brain networks in the graph
convolutions by considering the intra-community and inter-community properties
in the learning process. Furthermore, we use a Siamese architecture, which models pairwise similarity learning, to guide the learning process. To evaluate
the proposed approach, we conduct extensive experiments on two real brain
network datasets. The experimental results demonstrate the superior performance
of the proposed approach in structural and functional joint embedding for
neurological disorder analysis, indicating its promising value for clinical
applications.
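A minimal sketch of the graph-convolution step described above, with structural connectivity defining the graph and functional connectivity supplying node features; the community-preserving and Siamese losses of SCP-GCN are omitted, and all sizes are hypothetical:

```python
import torch

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetrically normalized adjacency
    (with self-loops) times node features times weights."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(1).pow(-0.5)
    A_norm = d[:, None] * A_hat * d[None, :]
    return torch.relu(A_norm @ X @ W)

# Toy brain graph: structural connectivity gives the edges (A), rows of the
# functional connectivity matrix serve as node features (X).
n_rois = 90                                   # e.g. number of atlas regions
A = (torch.rand(n_rois, n_rois) > 0.9).float()
A = ((A + A.T) > 0).float()                   # symmetrize
X = torch.rand(n_rois, n_rois)                # functional profile per node
W = torch.randn(n_rois, 32)
emb = gcn_layer(A, X, W)                      # (n_rois, 32) joint embedding
```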
|
PIP-II is an 800 MeV superconducting linac that forms the initial acceleration chain of the Fermilab accelerator complex. The RF system consists of a warm front-end with an ion source, an RFQ and buncher cavities, along with 25 superconducting cryomodules comprising five different acceleration $\beta$ families. The LLRF system for the linac has to provide field and resonance control for a total of 125 RF cavities. The LLRF system design is in the final design review phase and will enter the production phase next year. The PIP-II project is an international collaboration with various partner labs contributing subsystems. The LLRF system design for the PIP-II linac is presented, and the specification requirements and system performance at various stages of testing are described in this paper.
|
The first known interstellar object 'Oumuamua exhibited a nongravitational
acceleration that appeared inconsistent with cometary outgassing, leaving
radiation pressure as the most likely force. Bar the alien lightsail
hypothesis, an ultra-low density due to a fractal structure might also explain
the acceleration of 'Oumuamua by radiation pressure (Moro-Martin 2019). In this
paper we report a decrease in 'Oumuamua's rotation period based on ground-based
observations, and show that this spin-down can be explained by the YORP effect
if 'Oumuamua is indeed a fractal body with the ultra-low density of $10^{-2}$
kg m$^{-3}$. We also investigate the mechanical consequences of 'Oumuamua as a
fractal body subjected to rotational and tidal forces, and show that a fractal
structure can survive these mechanical forces.
|
Zurek's and Kibble's causal constraints for defect production at continuous
transitions are encoded in the field equations that condensed matter systems
and quantum fields satisfy. In this article we highlight some of the properties
of the solutions to the equations and show to what extent they support the
original ideas.
|
Recall that two geodesics in a negatively curved surface $S$ are of the same
type if their free homotopy classes differ by a homeomorphism of the surface.
In this note we study the distribution in the unit tangent bundle of the
geodesics of fixed type, proving that they are asymptotically equidistributed
with respect to a certain measure $\mathfrak{m}^S$ on $T^1S$. We study a few
properties of this measure, showing for example that it distinguishes between
hyperbolic surfaces.
|
Common models of synchronizable oscillatory systems consist of a collection
of coupled oscillators governed by a collection of differential equations. The
ubiquitous Kuramoto models rely on an {\em a priori} fixed connectivity pattern that facilitates mutual communication and influence between oscillators. In biological synchronizable systems, like the mammalian suprachiasmatic nucleus,
enabling communication comes at a cost -- the organism expends energy creating
and maintaining the system -- linking their development to evolutionary
selection. Here, we introduce and analyze a new evolutionary game theoretic
framework modeling the behavior and evolution of systems of coupled
oscillators. Each oscillator in our model is characterized by a pair of dynamic
behavioral traits: an oscillatory phase and whether or not they connect and communicate with other oscillators. Evolution of the system occurs along
these dimensions, allowing oscillators to change their phases and/or their
communication strategies. We measure success of mutations by comparing the
benefit of phase synchronization to the organism balanced against the cost of
creating and maintaining connections between the oscillators. Despite such a
simple setup, this system exhibits a wealth of nontrivial behaviors, mimicking
different classical games -- the Prisoner's Dilemma, the snowdrift game, and
coordination games -- as the landscape of the oscillators changes over time.
Despite such complexity, we find a surprisingly simple characterization of
synchronization through connectivity and communication: if the benefit of
synchronization $B(0)$ is greater than twice the cost $c$, $B(0) > 2c$, the
organism will evolve towards complete communication and phase synchronization.
Taken together, our model demonstrates possible evolutionary constraints on
both the existence of a synchronized oscillatory system and its overall
connectivity.
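For concreteness, a minimal sketch of the underlying Kuramoto dynamics with a 0/1 communication mask standing in for the evolving connection strategies (the game-theoretic payoff and mutation dynamics are beyond this sketch; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt = 50, 1.5, 0.01
omega = rng.normal(0.0, 1.0, N)                   # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)            # oscillator phases
comm = (rng.random((N, N)) < 0.8).astype(float)   # 0/1 communication strategy

for _ in range(5000):                             # Euler integration
    coupling = (comm * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

r = np.abs(np.exp(1j * theta).mean())             # order parameter: 1 = full sync
print(r)
```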
|
We present a self-contained proof of a formula for the $L^q$ dimensions of
self-similar measures on the real line under exponential separation (up to the
proof of an inverse theorem for the $L^q$ norm of convolutions). This is a
special case of a more general result of the author from [Shmerkin, Pablo. On
Furstenberg's intersection conjecture, self-similar measures, and the $L^q$
norms of convolutions. Ann. of Math., 2019], and one of the goals of this
survey is to present the ideas in a simpler, but important, setting. We also
review some applications of the main result to the study of Bernoulli
convolutions and intersections of self-similar Cantor sets.
|
Social media websites, electronic newspapers and Internet forums allow
visitors to leave comments for others to read and interact. This exchange is
not free from participants with malicious intentions, who troll others by
posting messages that are intended to be provocative, offensive, or menacing.
With the goal of facilitating the computational modeling of trolling, we
propose a trolling categorization that is novel in the sense that it allows
comment-based analysis from both the trolls' and the responders' perspectives,
characterizing these two perspectives using four aspects, namely, the troll's
intention and his intention disclosure, as well as the responder's
interpretation of the troll's intention and her response strategy. Using this
categorization, we annotate and release a dataset containing excerpts of Reddit
conversations involving suspected trolls and their interactions with other
users. Finally, we identify the difficult-to-classify cases in our corpus and
suggest potential solutions for them.
|
We present a metamaterial that acts as a strongly resonant absorber at
terahertz frequencies. Our design consists of a bilayer unit cell which allows
for maximization of the absorption through independent tuning of the electrical
permittivity and magnetic permeability. An experimental absorptivity of 70% at
1.3 terahertz is demonstrated. We utilize only a single unit cell in the
propagation direction, thus achieving an absorption coefficient $\alpha$ = 2000
cm$^{-1}$. These metamaterials are promising candidates as absorbing elements
for thermally based THz imaging, due to their relatively low volume, low
density, and narrow band response.
|
The congruent number elliptic curves are defined by $E_d: y^2=x^3-d^2x$,
where $d\in \mathbb{N}.$ We give a simple proof of a formula for
$L(\mathrm{Sym}^2(E_d),3)$ in terms of the determinant of the elliptic
trilogarithm evaluated at some degree zero divisors supported on the torsion
points on $E_d(\overline{\mathbb{Q}})$.
|
High spin magnetic molecules are promising candidates for quantum information
processing because they intrinsically have multiple sublevels for information
storage and computational operations. However, due to their susceptibility to
the environment and limitations from the selection rule, the arbitrary control
of the quantum state of a multilevel system on a molecular and electron spin
basis has not been realized. Here we exploit the photoexcited triplet of C70 as
a molecular electron spin qutrit. After the system was initialized by
photoexcitation, we prepared it into representative three-level superposition
states characteristic of the qutrit, measured their density matrices, and
showed the interference of the quantum phases in the superposition. The
interference pattern is further interpreted as a map of evolution through time
under different conditions.
|
Based on the vertex-face correspondence, we give an algebraic analysis
formulation of correlation functions of the $k\times k$ fusion eight-vertex
model in terms of the corresponding fusion SOS model. Here $k\in \mathbb{Z}_{>0}$. A
general formula for correlation functions is derived as a trace over the space
of states of lattice operators such as the corner transfer matrices, the half
transfer matrices (vertex operators) and the tail operator. We give a
realization of these lattice operators as well as the space of states as
objects in the level $k$ representation theory of the elliptic algebra
$U_{q,p}(\hat{sl}_2)$.
|
Signals of bimodality have been investigated in experimental data of
quasi-projectile decay produced in Au+Au collisions at 35 AMeV. This same data
set was already shown to provide several signals characteristic of a first
order, liquid-gas-like phase transition. Different event sortings proposed in
the recent literature are analyzed. A sudden change in the fragmentation
pattern is revealed by the distribution of the charge of the largest fragment,
compatible with a bimodal behavior.
|
A surprising number of new results in "core" SPM in the last quarter of 2007,
and some other beautiful fundamental results are announced.
|
Is more always better? We address this question in the context of
bibliometric indices that aim to assess the scientific impact of individual
researchers by counting their number of highly cited publications. We propose a
simple model in which the number of citations of a publication depends not only
on the scientific impact of the publication but also on other 'random' factors.
Our model indicates that more need not always be better. It turns out that the
most influential researchers may have a systematically lower performance, in
terms of highly cited publications, than some of their less influential
colleagues. The model also suggests an improved way of counting highly cited
publications.
|
A quantum computer -- i.e., a computer capable of manipulating data in
quantum superposition -- would find applications including factoring, quantum
simulation and tests of basic quantum theory. Since quantum superpositions are
fragile, the major hurdle in building such a computer is overcoming noise.
Developed over the last couple of years, new schemes for achieving fault
tolerance based on error detection, rather than error correction, appear to
tolerate as much as 3-6% noise per gate -- an order of magnitude better than
previous procedures. But proof techniques could not show that these promising
fault-tolerance schemes tolerated any noise at all.
With an analysis based on decomposing complicated probability distributions
into mixtures of simpler ones, we rigorously prove the existence of constant
tolerable noise rates ("noise thresholds") for error-detection-based schemes.
Numerical calculations indicate that the actual noise threshold this method
yields is lower-bounded by 0.1% noise per gate.
|
Some necessary and sufficient optimality conditions for inequality
constrained problems with continuously differentiable data were obtained in the
papers [I. Ginchev and V.I. Ivanov, Second-order optimality conditions for
problems with C$\sp{1}$ data, J. Math. Anal. Appl., v. 340, 2008, pp.
646--657], [V.I. Ivanov, Optimality conditions for an isolated minimum of order
two in C$\sp{1}$ constrained optimization, J. Math. Anal. Appl., v. 356, 2009,
pp. 30--41] and [V. I. Ivanov, Second- and first-order optimality conditions in
vector optimization, Internat. J. Inform. Technol. Decis. Making, 2014, DOI:
10.1142/S0219622014500540].
In the present paper, we continue these investigations. We obtain some
necessary optimality conditions of Karush--Kuhn--Tucker type for scalar and
vector problems. A new second-order constraint qualification of Zangwill type
is introduced. It is applied in the optimality conditions.
|
We present 1420 MHz polarization images of a 5x5 degree region around the
planetary nebula (PN) DeHt 5. The images reveal narrow Faraday-rotation
structures on the visible disk of DeHt 5, as well as two wider, tail-like,
structures "behind" DeHt 5. Though DeHt 5 is an old PN known to be interacting
with the interstellar medium (ISM), a tail has not previously been identified
for this object. The innermost tail is approximately 3 pc long and runs away
from the north-east edge of DeHt 5 in a direction roughly opposite that of the
sky-projected space velocity of the white dwarf central star, WD 2218+706. We
believe this tail to be the signature of ionized material ram-pressure stripped
and deposited downstream during a >74,000 yr interaction between DeHt 5 and the
ISM. We estimate the rotation measure (RM) through the inner tail to be -15 +/-
5 rad/m^2, and, using a realistic estimate for the line-of-sight component of
the ISM magnetic field around DeHt 5, derive an electron density in the inner
tail of n_e = 3.6 +/- 1.8 cm^-3. Assuming the material is fully ionized, we
estimate a total mass in the inner tail of 0.68 +/- 0.33 solar masses, and
predict that 0.49 +/- 0.33 solar masses was added during the PN-ISM
interaction. The outermost tail consists of a series of three roughly circular
components, which have a collective length of approximately 11.0 pc. This tail
is less conspicuous than the inner tail, and may be the signature of the
earlier interaction between the WD 2218+706 asymptotic giant branch (AGB)
progenitor and the ISM. The results for the inner and outer tails are
consistent with hydrodynamic simulations, and may have implications for the PN
missing-mass problem as well as for models which describe the impact of the
deaths of intermediate-mass stars on the ISM.
|
The margin-based principle was proposed a long time ago, and it has been proved that this principle can reduce structural risk and improve performance in both theoretical and practical respects. Meanwhile, the feed-forward neural network is a traditional classifier, currently very popular with deeper architectures. However, the training algorithm of feed-forward neural networks is developed and generated from the Widrow-Hoff principle, which means minimizing the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively promote the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.
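A minimal sketch of the core idea, training a small feed-forward network with a multiclass hinge (margin) loss instead of squared error; data and sizes are synthetic placeholders, not the paper's setup:

```python
import torch
from torch import nn

# Synthetic two-class data.
X = torch.randn(512, 20)
y = (X[:, 0] + X[:, 1] > 0).long()

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
margin_loss = nn.MultiMarginLoss(margin=1.0)   # hinge-style margin loss
opt = torch.optim.SGD(net.parameters(), lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = margin_loss(net(X), y)   # replaces the squared-error objective
    loss.backward()
    opt.step()
```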
|
Based on the one-parameter generalization of Shannon-Khinchin (SK) axioms
presented by one of the authors, and utilizing a tree-graphical representation,
we have further developed the SK Axioms in accordance with the two-parameter
entropy introduced by Sharma-Taneja, Mittal, Borges-Roditi, and
Kaniadakis-Lissia-Scarfone. The corresponding uniqueness theorem is proved. It is shown that the obtained two-parameter Shannon additivity is a natural consequence of the Leibniz rule of the two-parameter Chakrabarti-Jagannathan
difference operator.
|
We discuss existence and regularity results for multi-channel images in the
setting of isotropic and anisotropic variants of the TV-model.
|
The advective Cahn-Hilliard equation describes the competing processes of
stirring and separation in a two-phase fluid. Intuition suggests that bubbles
will form on a certain scale, and previous studies of Cahn-Hilliard dynamics
seem to suggest the presence of one dominant length scale. However, the
Cahn-Hilliard phase-separation mechanism contains a hyperdiffusion term and we
show that, by stirring the mixture at a sufficiently large amplitude, we excite
the diffusion and overwhelm the segregation to create a homogeneous liquid. At
intermediate amplitudes we see regions of bubbles coexisting with regions of
hyperdiffusive filaments. Thus, the problem possesses two dominant length
scales, associated with the bubbles and filaments. For simplicity, we use a
chaotic flow that mimics turbulent stirring at large Prandtl number. We compare
our results with the case of variable mobility, in which growth of bubble size
is dominated by interfacial rather than bulk effects, and find qualitatively
similar results.
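For reference, a standard form of the advective Cahn-Hilliard equation (a sketch; the paper's precise nondimensionalization may differ): \[ \partial_t c + \mathbf{u}\cdot\nabla c = D\,\nabla^2\!\left(c^3 - c - \gamma\,\nabla^2 c\right), \] where $c$ is the concentration, $\mathbf{u}$ the stirring velocity field, and $\gamma$ sets the interface width; the resulting $-D\gamma\nabla^4 c$ term is the hyperdiffusion referred to above.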
|
The attenuation of small-amplitude acoustic waves in a suspension containing ultrasound contrast agents (UCAs, coated microbubbles) is determined by the linear oscillation of the UCAs in the medium, which can be estimated via a linear attenuation theory. Recently, several nonlinear phenomena of energy attenuation at very low acoustic pressures have been observed experimentally, raising concerns about the validity of the linear attenuation theory. Explanations of the nonlinear phenomenon are still lacking. In particular, the interpretation of the pressure-dependent attenuation phenomenon is still under debate. In this note, we investigate the energy dissipation of a single UCA via a nonlinear Rayleigh-Plesset equation and use a formula capable of estimating the attenuation coefficient due to the nonlinear oscillation of the UCA. The simulation results show that the linear oscillation of a UCA at low excitation pressures does not always guarantee linearity in the energy attenuation. Although nonlinear oscillation of the UCA contributes to the occurrence of nonlinear attenuation phenomena, it is not the only trigger.
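For reference, a commonly used uncoated-bubble form of the Rayleigh-Plesset equation (a sketch; coated-UCA models add shell elasticity and viscosity terms, and the paper's exact variant is not specified here): \[ \rho\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}\right) = p_{g,0}\left(\frac{R_0}{R}\right)^{3\kappa} - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R} - p_0 - p_{\mathrm{ac}}(t), \] where $R(t)$ is the bubble radius, $p_{g,0}$ the initial gas pressure, $\kappa$ the polytropic exponent, $\sigma$ the surface tension, $\mu$ the liquid viscosity, and $p_{\mathrm{ac}}(t)$ the acoustic driving.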
|
By assuming the existence of extra-dimensional sterile neutrinos in big bang
nucleosynthesis (BBN) epoch, we investigate the sterile neutrino ($\nu_{\rm
s}$) effects on the BBN and constrain some parameters associated with the
$\nu_{\rm s}$ properties. First, for cosmic expansion rate, we take into
account effects of a five-dimensional bulk and intrinsic tension of the brane
embedded in the bulk, and constrain a key parameter of the extra dimension by
using the observational element abundances. Second, effects of the $\nu_{\rm
s}$ traveling on or off the brane are considered. In this model, the effective
mixing angle between a $\nu_{\rm s}$ and an active neutrino depends on energy,
which may give rise to a resonance effect on the mixing angle. Consequently,
reaction rate of the $\nu_{\rm s}$ can be drastically changed during the cosmic
evolution. We estimated abundances and temperature of the $\nu_{\rm s}$ by
solving the rate equation as a function of temperature until the sterile
neutrino decoupling. We then find that the relic abundance of the $\nu_{\rm s}$
is drastically enhanced by the extra-dimension and maximized for a
characteristic resonance energy $E_{\rm res}\gtrsim 0.01$ GeV. Finally, some
constraints related to the $\nu_{\rm s}$, mixing angle and mass difference, are
discussed in detail with the comparison of our BBN calculations corrected by
the extra-dimensional $\nu_{\rm s}$ to observational data on light element
abundances.
|
We propose classical equations of motion for a charged particle with magnetic
moment, taking radiation reaction into account. This generalizes the
Landau-Lifshitz equations for the spinless case. In the special case of
spin-polarized motion in a constant magnetic field (synchrotron motion) we
verify that the particle does lose energy. Previous proposals did not predict
dissipation of energy and also suffered from runaway solutions analogous to
those of the Lorentz-Dirac equations of motion.
|
In this paper, we show that for exact area-preserving twist maps on the annulus,
the invariant circles with a given rotation number can be destroyed by
arbitrarily small Gevrey-$\alpha$ perturbations of the integrable generating
function in the $C^r$ topology with $r<4-\frac{2}{\alpha}$, where $\alpha>1$.
|
Berry-Esseen bounds for non-linear functionals of infinite Rademacher
sequences are derived by means of the Malliavin-Stein method. Moreover,
multivariate extensions for vectors of Rademacher functionals are shown. The
results establish a connection to small-ball probabilities and shed new light
onto the relation between central limit theorems on the Rademacher chaos and
norms of contraction operators. Applications concern infinite weighted 2-runs,
a combinatorial central limit theorem and traces of Bernoulli random matrices.
|
We study SU(2) gluodynamics at finite temperature on both sides of the
deconfining phase transition. We create the lattice ensembles using the
tree-level tadpole-improved Symanzik action. The Neuberger overlap Dirac
operator is used to determine the following three aspects of vacuum structure:
(i) The topological susceptibility is evaluated at various temperatures across
the phase transition, (ii) the overlap fermion spectral density is determined
and found to depend on the Polyakov loop above the phase transition and (iii)
the corresponding localization properties of low-lying eigenmodes are
investigated. Finally, we compare with zero temperature results.
|
Magnetoresistance (MR) has attracted tremendous attention for possible
technological applications. Understanding the role of magnetism in manipulating
MR may in turn steer the searching for new applicable MR materials. Here we
show that antiferromagnetic (AFM) GdSi metal displays an anisotropic positive
MR value (PMRV), up to $\sim$ 415%, accompanied by a large negative thermal
volume expansion (NTVE). Around $T_\text{N}$ the PMRV turns negative,
down to $\sim$ -10.5%. Their theory-breaking magnetic-field dependencies [PMRV:
dominantly linear; negative MR value (NMRV): quadratic] and the unusual NTVE
indicate that PMRV is induced by the formation of magnetic polarons in 5$d$
bands, whereas NMRV is possibly due to abated electron-spin scattering
resulting from magnetic-field-aligned local 4$f$ spins. Our results may open up
a new avenue of searching for giant MR materials by suppressing the AFM
transition temperature, opposite to the case in manganites, and provide a
promising approach to novel magnetic and electric devices.
|
In this paper, we analyze web-downloaded data on people sharing their music libraries. By attributing to each music group its usual genres (Rock, Pop...), and analysing correlations between music groups of different genres with percolation-based methods, we probe the reality of these subdivisions and construct a music genre cartography with a tree representation. We also show the diversity of music genres with Shannon entropy arguments, and discuss an alternative, objective way to classify music that is based on the complex structure of the groups' audience. Finally, a link is drawn with the theory of hidden variables in complex networks.
|
Flame graphs are a popular way of representing profiling data. In this paper we propose a possible mathematical definition of flame graphs. In doing so, we gain some interesting algebraic properties almost for free, which in turn allow us to define operations for performing an in-depth performance regression analysis. The typical documented use of a flame graph is via its graphical representation, whereby one scans the picture for the largest plateaux. Whilst this method is effective at finding the main sources of performance issues, it leaves quite a large amount of data potentially unused. By combining a mathematically precise definition of flame graphs with some statistical methods, we show how to generalise this visual procedure and make the best of the full set of collected profiling data.
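A minimal sketch of the idea (names hypothetical): modelling a flame graph as a map from call stacks to sample counts makes sums and differences well defined, which is exactly what a performance regression analysis needs.

```python
from collections import Counter

# A flame graph as a multiset of call stacks: {stack tuple: sample count}.
FlameGraph = Counter   # e.g. {("main", "parse", "read"): 120, ...}

before = FlameGraph({("main", "parse"): 90, ("main", "render"): 40})
after  = FlameGraph({("main", "parse"): 55, ("main", "render"): 70,
                     ("main", "cache"): 15})

diff = FlameGraph(after)
diff.subtract(before)            # signed per-stack change between two runs
regressions = {s: d for s, d in diff.items() if d > 0}
print(regressions)               # stacks that got more expensive
```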
|
Having accurate tools to describe non-classical, non-Gaussian environmental
fluctuations is crucial for designing effective quantum control protocols and
understanding the physics of underlying quantum dissipative environments. We
show how the Keldysh approach to quantum noise characterization can be usefully
employed to characterize frequency-dependent noise, focusing on the quantum
bispectrum (i.e., frequency-resolved third cumulant). Using the paradigmatic
example of photon shot noise fluctuations in a driven bosonic mode, we show
that the quantum bispectrum can be a powerful tool for revealing distinctive
non-classical noise properties, including an effective breaking of detailed
balance by quantum fluctuations. The Keldysh-ordered quantum bispectrum can be
directly accessed using existing noise spectroscopy protocols.
|
A strong electron current triggered by a femtosecond relativistically intense
laser pulse in a foil coil-like target is shown to be able to generate a
solenoidal-type extremely strong magnetic field. The magnetic field lifetime
substantially exceeds the laser pulse duration and is determined mainly by the
target properties. The process of the magnetic field generation was studied
with 3D PIC simulations. It is demonstrated that the pulse and the target
parameters allow controlling the field strength and duration. The scheme
studied is of great importance for laser-based magnetization technologies.
|
The main idea of "Quantum Chaos" studies is that Quantum Mechanics introduces
two energy scales into the study of chaotic systems: One is obviously the mean
level spacing $\Delta\propto\hbar^d$, where $d$ is the dimensionality; The
other is $\Delta_b\propto\hbar$, which is known as the non-universal energy
scale, or as the bandwidth, or as the Thouless energy. Associated with these
two energy scales are two special quantum-mechanical (QM) regimes in the theory
of driven systems. These are the QM adiabatic regime and the QM non-perturbative regime, respectively. Otherwise the Fermi golden rule applies, and
linear response theory can be trusted. Demonstrations of this general idea,
that had been published in 1999, have appeared in studies of wavepacket
dynamics, survival probability, dissipation, quantum irreversibility, fidelity
and dephasing.
|
We prove the existence of quantum isometry groups for new classes of metric
spaces: (i) geodesic metrics for compact connected Riemannian manifolds
(possibly with boundary) and (ii) metric spaces admitting a uniformly
distributed probability measure. In the former case it also follows from recent
results of the second author that the quantum isometry group is classical, i.e.
the commutative $C^*$-algebra of continuous functions on the Riemannian
isometry group.
|
We consider estimation and inference on average treatment effects under
unconfoundedness conditional on the realizations of the treatment variable and
covariates. Given nonparametric smoothness and/or shape restrictions on the
conditional mean of the outcome variable, we derive estimators and confidence
intervals (CIs) that are optimal in finite samples when the regression errors
are normal with known variance. In contrast to conventional CIs, our CIs use a
larger critical value that explicitly takes into account the potential bias of
the estimator. When the error distribution is unknown, feasible versions of our
CIs are valid asymptotically, even when $\sqrt{n}$-inference is not possible
due to lack of overlap, or low smoothness of the conditional mean. We also
derive the minimum smoothness conditions on the conditional mean that are
necessary for $\sqrt{n}$-inference. When the conditional mean is restricted to
be Lipschitz with a large enough bound on the Lipschitz constant, the optimal
estimator reduces to a matching estimator with the number of matches set to
one. We illustrate our methods in an application to the National Supported Work
Demonstration.
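For intuition, a bias-aware CI of the kind described has the form (a sketch consistent with the fixed-length CI literature; the notation is assumed here, not taken from the paper): \[ \hat{\theta} \pm \mathrm{cv}_{1-\alpha}\!\left(\overline{b}/\widehat{\mathrm{se}}\right)\cdot \widehat{\mathrm{se}}, \] where $\mathrm{cv}_{1-\alpha}(t)$ denotes the $1-\alpha$ quantile of the $|N(t,1)|$ distribution and $\overline{b}$ bounds the worst-case bias of the estimator over the assumed smoothness class; $t = 0$ recovers the conventional critical value, so this critical value is always larger.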
|
This paper provides a comparative analysis of the performance of four
state-of-the-art distributional semantic models (DSMs) over 11 languages,
contrasting the native language-specific models with the use of machine
translation over English-based DSMs. The experimental results show that there
is a significant improvement (average of 16.7% for the Spearman correlation) by
using state-of-the-art machine translation approaches. The results also show
that the benefit of using the most informative corpus outweighs the possible
errors introduced by the machine translation. For all languages, the
combination of machine translation over the Word2Vec English distributional
model provided the best results consistently (average Spearman correlation of
0.68).
|
Deposition/removal of metal atoms on the hex reconstructed (100) surface of
Au, Pt and Ir should present intriguing aspects, since a new island implies hex
-> square deconstruction of the substrate, and a new crater the square -> hex
reconstruction of the uncovered layer. To obtain a microscopic understanding of
how islands/craters form in these conditions, we have conducted simulations of
island and crater growth on Au(100), whose atomistic behavior, including the hex reconstruction on top of the square substrate, is well described by means of classical many-body forces. By increasing/decreasing the Au coverage on Au(100), we find that islands/craters will not grow unless they exceed a critical size of about 8-10 atoms. This value is close to the one which explains the nonlinear coverage dependence observed in molecular adsorption on the closely related surface Pt(100). This threshold size is rationalized in terms of a transverse step correlation length, measuring the spatial extent over which reconstruction of a given plane is disturbed by the nearby step.
|
We study the interaction between graphene and a single-molecule magnet, [Fe4(L)2(dpm)6]. Focusing on the closest iron ion in a hollow position with respect to the graphene sheet, we derive a channel-selective tunneling Hamiltonian that couples different d orbitals of the iron atom to specific independent combinations of sublattice and valley degrees of freedom of the electrons in graphene. When looking at the spin-spin interaction between the
molecule and the graphene electrons, close to the Dirac point the channel
selectivity results in a channel decoupling of the Kondo interaction, with two
almost independent Kondo systems weakly interacting among themselves. The
formation of magnetic moments and the development of a full Kondo effect
depends on the charge state of the graphene layer.
|
Computational prediction of stable crystal structures has a profound impact
on the large-scale discovery of novel functional materials. However, predicting
the crystal structure solely from a material's composition or formula is a
promising yet challenging task, as traditional ab initio crystal structure
prediction (CSP) methods rely on time-consuming global searches and
first-principles free energy calculations. Inspired by the recent success of
deep learning approaches in protein structure prediction, which utilize
pairwise amino acid interactions to describe 3D structures, we present
AlphaCrystal-II, a novel knowledge-based solution that exploits the abundant
inter-atomic interaction patterns found in existing known crystal structures.
AlphaCrystal-II predicts the atomic distance matrix of a target crystal
material and employs this matrix to reconstruct its 3D crystal structure. By
leveraging the wealth of inter-atomic relationships of known crystal
structures, our approach demonstrates remarkable effectiveness and reliability
in structure prediction through comprehensive experiments. This work highlights
the potential of data-driven methods in accelerating the discovery and design
of new materials with tailored properties.
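The reconstruction step is not detailed in the abstract; classical multidimensional scaling (MDS) illustrates the generic way to recover coordinates from a predicted distance matrix (a sketch, not AlphaCrystal-II's actual procedure):

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical MDS: recover coordinates (up to rotation/translation)
    from a pairwise distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top-dim eigenpairs
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Round-trip check on a toy 4-atom "structure".
X = np.array([[0, 0, 0], [1.5, 0, 0], [0, 1.5, 0], [0.7, 0.7, 1.2]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_rec = coords_from_distances(D)                 # matches X up to isometry
```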
|
These notes present an application of the geometric Satake equivalence to the
description of characters of indecomposable tilting modules for reductive
algebraic groups over fields of positive characteristic, obtained in joint work
with G. Williamson.
|
We propose a short proof of the Fundamental Theorem of Algebra based on the
ODE that describes the Newton flow and the fact that the value $|P(z)|$ is a
Lyapunov function. It clarifies an idea that goes back to Cauchy.
|
We investigate the thermal and kinematic configuration of a sunspot penumbra
using very high spectral and spatial resolution intensity profiles of the
non-magnetic Fe I 557.6 nm line. The dataset was acquired with the 2D solar
spectrometer TESOS. The profiles are inverted using a one-component model
atmosphere with gradients of the physical quantities. From this inversion we
obtain the stratification with depth of temperature, line-of-sight velocity,
and microturbulence across the penumbra. Our results suggest that the physical
mechanism(s) responsible for the penumbral filaments operate preferentially in
the lower photosphere. We confirm the existence of a thermal asymmetry between
the center and limb-side penumbra, the former being hotter by 100-150 K on
average. We also investigate the nature of the bright ring that appears in the
inner penumbra when sunspots are observed in the wing of spectral lines. The
line-of-sight velocities retrieved from the inversion are used to determine the
flow speed and flow angle at different heights in the photosphere. Both the
flow speed and flow angle increase with optical depth and radial distance.
Downflows are detected in the mid and outer penumbra, but only in deep layers
(log tau_{500} > -1.4). We demonstrate that the velocity stratifications
retrieved from the inversion are consistent with the idea of penumbral flux
tubes channeling the Evershed flow. Finally, we show that larger Evershed flows
are associated with brighter continuum intensities in the inner center-side
penumbra. Dark structures, however, are also associated with significant
Evershed flows. This leads us to suggest that the bright and dark filaments
seen at 0.5" resolution are not individual flow channels, but a collection of
them.
|
Quantum fluctuations are the key concepts of quantum mechanics. Quantum
fluctuations of quantum fields induce a zero-point energy shift under spatial
boundary conditions. This quantum phenomenon, called the Casimir effect, has
been attracting much attention beyond the hierarchy of energy scales, ranging
from elementary particle physics to condensed matter physics together with
photonics. However, the application of the Casimir effect to spintronics has
not yet been thoroughly investigated, particularly for ferrimagnetic thin films,
although yttrium iron garnet (YIG) is one of the best platforms for
spintronics. Here we fill this gap. Using the lattice field theory, we
investigate the Casimir effect induced by quantum fields for magnons in
insulating magnets and find that the magnonic Casimir effect can arise not only
in antiferromagnets but also in ferrimagnets including YIG thin films. Our
result suggests that YIG, the key ingredient of magnon-based spintronics, can
serve also as a promising platform for manipulating and utilizing Casimir
effects, called Casimir engineering. Microfabrication technology can control
the thickness of thin films and realize the manipulation of the magnonic
Casimir effect. Thus, we pave the way for magnonic Casimir engineering.
|
For a non-compact hyperbolic 3-manifold with cusps we prove an explicit
formula that relates the regularized analytic torsion associated to the even
symmetric powers of the standard representation of SL_2(C) to the corresponding
Reidemeister torsion. Our proof rests on an expression of the analytic torsion
in terms of special values of Ruelle zeta functions as well as on recent work
of Pere Menal-Ferrer and Joan Porti.
|
We consider loop observables in gauged Wess-Zumino-Witten models, and study
the action of renormalization group flows on them. In the WZW model based on a
compact Lie group G, we analyze at the classical level how the space of
renormalizable defects is reduced upon the imposition of global and affine
symmetries. We identify families of loop observables which are invariant with
respect to an affine symmetry corresponding to a subgroup H of G, and show that
they descend to gauge-invariant defects in the gauged model based on G/H. We
study the flows acting on these families perturbatively, and quantize the fixed
points of the flows exactly. From their action on boundary states, we present a
derivation of the "generalized Affleck-Ludwig rule", which describes a large
class of boundary renormalization group flows in rational conformal field
theories.
|
The algebraic entropy h, defined for endomorphisms f of abelian groups G,
measures the growth of the trajectories of non-empty finite subsets F of G with
respect to f. We show that this growth can be either polynomial or exponential.
The greatest f-invariant subgroup of G where this growth is polynomial
coincides with the greatest f-invariant subgroup P(G,f) of G (named Pinsker
subgroup of f) such that h(f|_P(G,f))=0. We obtain also an alternative
characterization of P(G,f) from the point of view of the quasi-periodic points
of f. This gives the following application in ergodic theory: for every
continuous injective endomorphism g of a compact abelian group K there exists a
largest g-invariant closed subgroup N of K such that g|_N is ergodic;
furthermore, the induced endomorphism g' of the quotient K/N has zero
topological entropy.
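
For reference, the growth measured by h is standardly defined via the n-th
trajectories (a standard formulation; the paper's notation may differ
slightly):

    T_n(f,F) = F + f(F) + \cdots + f^{n-1}(F), \qquad
    h(f) = \sup_{F} \lim_{n \to \infty} \frac{\log |T_n(f,F)|}{n},

where F ranges over the non-empty finite subsets of G.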
|
Bottom baryons decaying to a J/\psi\ meson and a hyperon are reconstructed
using 1.0 fb^{-1} of data collected in 2011 with the LHCb detector. Significant
\Lambda_b^0 \rightarrow J/\psi \Lambda, \Xi_b^-\rightarrow J/\psi \Xi^- and
\Omega_b^- \rightarrow J/\psi \Omega^- signals are observed and the
corresponding masses are measured to be M(\Lambda_b^0) = 5619.53 \pm 0.13
(stat) \pm 0.45 (syst) MeV/c^2, M(\Xi_b^-) = 5795.8 \pm 0.9 (stat) \pm 0.4
(syst) MeV/c^2, M(\Omega_b^-) = 6046.0 \pm 2.2 (stat) \pm 0.5 (syst) MeV/c^2,
while the differences with respect to the \Lambda_b^0 mass are
M(\Xi_b^-)-M(\Lambda_b^0) = 176.2 \pm 0.9 (stat) \pm 0.1 (syst) MeV/c^2,
M(\Omega_b^-)-M(\Lambda_b^0) = 426.4 \pm 2.2 (stat) \pm 0.4 (syst) MeV/c^2.
These are the most precise mass measurements of the \Lambda_b^0, \Xi_b^- and
\Omega_b^- baryons to date. Averaging the above \Lambda_b^0 mass measurement
with that published by LHCb using 35 pb^{-1} of data collected in 2010 yields
M(\Lambda_b^0) = 5619.44 \pm 0.13 (stat) \pm 0.38 (syst) MeV/c^2.
|
We study continuous variable coherence of phase-dependent squeezed state
based on an extended Hanbury Brown-Twiss scheme. High-order coherence is
continuously varied by adjusting squeezing parameter $r$, displacement $\alpha
$, and squeezing phase $\theta $. We also analyze effects of background noise
$\gamma $ and detection efficiency $\eta $ on the measurements. As the
squeezing phase shifts from 0 to $\pi $, the photon statistics of the squeezed
state change continuously from anti-bunching ($g^{(n)}<1$) to super-bunching
($g^{(n)}>n!$), which shows a transition from particle nature to wave nature.
The experimental feasibility is also examined. It provides a
practical method to generate phase-dependent squeezed states with high-order
continuous-variable coherence by tuning squeezing phase $\theta $. The
controllable coherence source can be applied to sensitivity improvement in
gravitational wave detection and quantum imaging.
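
For reference, the high-order coherence functions used above are the standard
normally ordered correlations (a textbook definition, not specific to this
scheme):

    g^{(n)}(0) = \frac{\langle \hat{a}^{\dagger n} \hat{a}^{n} \rangle}{\langle \hat{a}^{\dagger} \hat{a} \rangle^{n}},

so that $g^{(2)}(0)<1$ signals anti-bunching, while $g^{(n)}(0)>n!$ exceeds
the thermal-light value and signals super-bunching.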
|
Given a frame in C^n which satisfies a form of the uncertainty principle (as
introduced by Candes and Tao), it is shown how to quickly convert the frame
representation of every vector into a more robust Kashin's representation whose
coefficients all have the smallest possible dynamic range O(1/\sqrt{n}). The
information tends to spread evenly among these coefficients. As a consequence,
Kashin's representations are highly robust to errors in their coefficients,
including coefficient losses and distortions.
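
A minimal numerical sketch of the conversion idea in Python (iterative
truncation of frame coefficients; the truncation level and its decay are our
assumptions, not the constants from the paper):

    import numpy as np

    def kashin_representation(x, U, iters=20):
        # U: (N, n) matrix whose rows form a tight frame (U.T @ U = I)
        # satisfying an uncertainty principle; returns N coefficients b
        # with small dynamic range such that U.T @ b ~ x.
        N, _ = U.shape
        level = np.linalg.norm(x) / np.sqrt(N)   # assumed truncation level
        b = np.zeros(N)
        r = x.astype(float).copy()
        for _ in range(iters):
            c = U @ r                            # frame coefficients of the residual
            t = np.clip(c, -level, level)        # truncate to the target range
            b += t
            r = r - U.T @ t                      # residual shrinks if the UP holds
            level /= 2                           # assumed geometric decay
        return b

With a random U having orthonormal columns (e.g. the Q factor of
np.linalg.qr applied to a Gaussian (N, n) matrix), U.T @ b reconstructs x up
to the residual left after the loop.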
|
The purpose of this article is to give a preliminary clarification of the
relation between crossing number and crossing change. With a main focus on the
span of the X polynomial, we prove that the crossing number of the link
obtained after a crossing change can be estimated when certain conditions are
met. At the end of the article, we give an example demonstrating a special
case of the theorem and a counterexample showing that the theorem cannot be
applied if the obtained link is not alternating.
|
In this paper we introduce non-decreasing jump processes with independent and
time non-homogeneous increments. Although they are not L\'evy processes, they
somehow generalize subordinators in the sense that their Laplace exponents are
possibly different Bern\v{s}tein functions for each time $t$. By means of these
processes, a generalization of subordinate semigroups in the sense of Bochner
is proposed. Because of time-inhomogeneity, two-parameter semigroups
(propagators) arise and we provide a Phillips formula which leads to time
dependent generators. The inverse processes are also investigated and the
corresponding governing equations obtained in the form of generalized variable
order fractional equations. An application to a generalized subordinate
Brownian motion is also examined.
|
We propose a novel scheme for realizing single-photon blockade in a weakly
driven hybrid cavity optomechanical system consisting of a nonlinear photonic
crystal. Sub-Poissonian statistics is realized even when the single-photon
optomechanical coupling strength is smaller than the decay rate of the optical
mode. The scheme relaxes the requirement of strong coupling for photon blockade
in optomechanical systems. It is shown that photon blockade could be generated
at the telecommunication wavelength.
|
The Apollo 12 lunar module (LM) landing near the Surveyor III spacecraft at
the end of 1969 has remained the primary experimental verification of the
predicted physics of plume ejecta effects from a rocket engine interacting with
the surface of the moon. This was made possible by the return of the Surveyor
III camera housing by the Apollo 12 astronauts, allowing detailed analysis of
the composition of dust deposited by the LM plume. It was soon realized after
the initial analysis of the camera housing that the LM plume tended to remove
more dust than it had deposited. In the present study, coupons from the camera
housing have been reexamined. In addition, plume effects recorded in landing
videos from each Apollo mission have been studied for possible clues. Several
likely scenarios are proposed to explain the Surveyor III dust observations.
These include electrostatic levitation of the dust from the surface of the Moon
as a result of periodic passing of the day-night terminator; dust blown by the
Apollo 12 LM flyby while on its descent trajectory; dust ejected from the lunar
surface due to gas forced into the soil by the Surveyor III rocket nozzle,
based on Darcy's law; and mechanical movement of dust during the Surveyor
landing. Even though an absolute answer may not be possible based on available
data and theory, various computational models are employed to estimate the
feasibility of each of these proposed mechanisms. Scenarios are then discussed
which combine multiple mechanisms to produce results consistent with
observations.
|
A central challenge in the computational modeling of neural dynamics is the
trade-off between accuracy and simplicity. At the level of individual neurons,
nonlinear dynamics are both experimentally established and essential for
neuronal functioning. An implicit assumption has thus formed that an accurate
computational model of whole-brain dynamics must also be highly nonlinear,
whereas linear models may provide a first-order approximation. Here, we provide
a rigorous and data-driven investigation of this hypothesis at the level of
whole-brain blood-oxygen-level-dependent (BOLD) and macroscopic field potential
dynamics by leveraging the theory of system identification. Using functional
MRI (fMRI) and intracranial EEG (iEEG), we model the resting state activity of
700 subjects in the Human Connectome Project (HCP) and 122 subjects from the
Restoring Active Memory (RAM) project using state-of-the-art linear and
nonlinear model families. We assess relative model fit using predictive power,
computational complexity, and the extent of residual dynamics unexplained by
the model. Contrary to our expectations, linear auto-regressive models achieve
the best measures across all three metrics, eliminating the trade-off between
accuracy and simplicity. To understand and explain this linearity, we highlight
four properties of macroscopic neurodynamics which can counteract or mask
microscopic nonlinear dynamics: averaging over space, averaging over time,
observation noise, and limited data samples. Whereas the latter two are
technological limitations and can improve in the future, the former two are
inherent to aggregated macroscopic brain activity. Our results, together with
the unparalleled interpretability of linear models, can greatly facilitate our
understanding of macroscopic neural dynamics and the principled design of
model-based interventions for the treatment of neuropsychiatric disorders.
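
As a concrete illustration of the winning model family, here is a minimal
least-squares fit of a first-order linear auto-regressive model in Python
(illustrative only; the study compares many linear and nonlinear families
under several metrics):

    import numpy as np

    def fit_var1(X):
        # X: (T, d) array of d regional time series; fit x_{t+1} ~ A x_t + b
        past, future = X[:-1], X[1:]
        Z = np.hstack([past, np.ones((len(past), 1))])   # append intercept column
        W, *_ = np.linalg.lstsq(Z, future, rcond=None)   # shape (d + 1, d)
        return W[:-1].T, W[-1]                           # A, b

    def one_step_r2(X, A, b):
        # pooled one-step-ahead predictive power
        resid = X[1:] - (X[:-1] @ A.T + b)
        return 1.0 - resid.var() / X[1:].var()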
|
In this paper, we build on the 1971 memo "Twenty Things to Do With a
Computer" by Seymour Papert and Cynthia Solomon and propose twenty
constructionist things to do with artificial intelligence and machine learning.
Several proposals build on ideas developed in the original memo while others
are new and address topics in science, mathematics, and the arts. In reviewing
the big themes, we notice a renewed interest in children's engagement not just
for technical proficiency but also to cultivate a deeper understanding of their
own cognitive processes. Furthermore, the ideas stress the importance of
designing personally relevant AI/ML applications, moving beyond isolated models
and off-the-shelf datasets disconnected from their interests. We also
acknowledge the social aspects of data production involved in making AI/ML
applications. Finally, we highlight the critical dimensions necessary to
address potential harmful algorithmic biases and consequences of AI/ML
applications.
|
Visinelli and Gondolo (2015, hereafter VG15) derived analytic expressions for
the evolution of the dark matter temperature in a generic cosmological model.
They then calculated the dark matter kinetic decoupling temperature
$T_{\mathrm{kd}}$ and compared their results to the Gelmini and Gondolo (2008,
hereafter GG08) calculation of $T_{\mathrm{kd}}$ in an early matter-dominated
era (EMDE), which occurs when the Universe is dominated by either a decaying
oscillating scalar field or a semistable massive particle before Big Bang
nucleosynthesis. VG15 found that dark matter decouples at a lower temperature
in an EMDE than it would in a radiation-dominated era, while GG08 found that
dark matter decouples at a higher temperature in an EMDE than it would in a
radiation-dominated era. VG15 attributed this discrepancy to the presence of a
matching constant that ensures that the dark matter temperature is continuous
during the transition from the EMDE to the subsequent radiation-dominated era
and concluded that the GG08 result is incorrect. We show that the disparity is
due to the fact that VG15 compared $T_\mathrm{kd}$ in an EMDE to the decoupling
temperature in a radiation-dominated universe that would result in the same
dark matter temperature at late times. Since decoupling during an EMDE leaves
the dark matter colder than it would be if it decoupled during radiation
domination, this temperature is much higher than $T_\mathrm{kd}$ in a standard
thermal history, which is indeed lower than $T_{\mathrm{kd}}$ in an EMDE, as
stated by GG08.
|
These notes are intended as an introduction to applications of noncommutative
calculus to quantum statistical physics. Centered on noncommutative calculus,
we describe the physical concepts and mathematical structures appearing in the
analysis of large quantum systems, and their consequences. These include the
emergence of the algebraic approach and the necessity of employing
infinite-dimensional structures. As illustrations, a quantization of
stochastic processes, a new formalism for statistical mechanics, quantum field
theory, and quantum correlations are discussed.
|
Despite showing great promise for optoelectronics, the commercialization of
halide perovskite nanostructure-based devices is hampered by inefficient
electrical excitation and strong exciton binding energies. While transport of
excitons in an energy-tailored system via F\"orster resonance energy transfer
(FRET) could be an efficient alternative, halide ion migration makes the
realization of cascaded structures difficult. Here, we show how these could be
obtained by exploiting the pronounced quantum confinement effect in two
dimensional CsPbBr3 based nanoplatelets (NPls). In thin films of NPls of two
predetermined thicknesses, we observe an enhanced acceptor photoluminescence
(PL) emission and a decreased donor PL lifetime. This indicates a FRET-mediated
process, favored by the structural parameters of the NPls. We determine
corresponding transfer rates up to k_FRET=0.99 ns^-1 and efficiencies of nearly
\eta_FRET=70%. We also show FRET to occur between perovskite NPls of other
thicknesses. Consequently, this strategy could lead to tailored, energy cascade
nanostructures for improved optoelectronic devices.
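
For reference, the quoted rate and efficiency follow from the measured donor
lifetimes in the usual way (standard FRET relations, not specific to this
work):

    k_{FRET} = \frac{1}{\tau_{DA}} - \frac{1}{\tau_{D}}, \qquad
    \eta_{FRET} = 1 - \frac{\tau_{DA}}{\tau_{D}},

where $\tau_D$ and $\tau_{DA}$ are the donor PL lifetimes without and with
nearby acceptors.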
|
We construct the boundary conformal field theory that describes the
low-temperature behavior of the two-channel Anderson impurity model. The
presence of an exactly marginal operator is shown to generate a line of stable
fixed points parameterized by the charge valence of the impurity. We calculate
the exact zero-temperature entropy and impurity thermodynamics along the fixed
line. We also derive the critical exponents of the characteristic Fermi edge
singularities caused by time-dependent hybridization between conduction
electrons and impurity. Our results suggest that in the mixed-valent regime the
electrons participate in two competing processes, leading to frustrated
screening of spin and channel degrees of freedom. By combining the boundary
conformal field theory with the Bethe Ansatz solution we obtain a complete
description of the low-energy dynamics of the model.
|
Pruning well-trained neural networks is effective for achieving a promising
accuracy-efficiency trade-off in computer vision. However, most existing
pruning algorithms focus only on the classification task defined on the source
domain. In contrast to the strong transferability of the original model, a
pruned network is hard to transfer to complicated downstream tasks such as
object detection (arXiv:2012.04643). In this paper, we show that the
image-level pretraining task is not capable of producing pruned models suited
to diverse downstream tasks. To mitigate this problem, we introduce image
reconstruction,
a pixel-level task, into the traditional pruning framework. Concretely, an
autoencoder is trained based on the original model, and then the pruning
process is optimized with both autoencoder and classification losses. The
empirical study on benchmark downstream tasks shows that the proposed method
can outperform state-of-the-art results explicitly.
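
A minimal PyTorch-style sketch of the combined objective (the decoder head,
the feature interface, and the weighting lam are our assumptions; the paper's
exact setup may differ):

    import torch.nn.functional as F

    def pruning_loss(backbone, classifier, decoder, x, y, lam=1.0):
        # backbone: the network being pruned; decoder: autoencoder head
        # trained from the original (unpruned) model's features.
        feats = backbone(x)                       # shared features
        cls_loss = F.cross_entropy(classifier(feats), y)
        rec_loss = F.mse_loss(decoder(feats), x)  # pixel-level reconstruction
        return cls_loss + lam * rec_loss          # optimize pruning with both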
|
We present the key results from a comprehensive study of the refraction and
focusing properties of a two-dimensional dodecagonal photonic ``quasicrystal''
(PQC), carried out via both full-wave numerical simulations and microwave
measurements on a slab made of alumina rods inserted in a parallel-plate
waveguide. We observe anomalous refraction and focusing in several frequency
regions, confirming some recently published results. However, our
interpretation, based on numerical and experimental evidence, differs
substantially from the one in terms of ``effective negative refractive-index''
that was originally proposed. Instead, our study highlights the critical role
played by short-range interactions associated with local order and symmetry.
|
We address the problem of when two finite dimensional central division
algebras over the same field are necessarily isomorphic given that they have
the same maximal subfields.
|
The quasi-exactly solvable Rabi model is investigated within the framework of the
Bargmann Hilbert space of analytic functions ${\cal B}$. On applying the theory
of orthogonal polynomials, the eigenvalue equation and eigenfunctions are shown
to be determined in terms of three systems of monic orthogonal polynomials. The
formal Schweber quantization criterion for an energy variable $x$, originally
expressed in terms of infinite continued fractions, can be recast in terms of a
meromorphic function $F(z) = a_0 + \sum_{k=1}^\infty {\cal M}_k/(z-\xi_k)$ in
the complex plane $\mathbb{C}$ with {\em real simple} poles $\xi_k$ and {\em
positive} residues ${\cal M}_k$. The zeros of $F(x)$ on the real axis determine
the spectrum of the Rabi model. One obtains at once that, on the real axis, (i)
$F(x)$ monotonically decreases from $+\infty$ to $-\infty$ between any two of
its subsequent poles $\xi_k$ and $\xi_{k+1}$, (ii) there is exactly one zero of
$F(x)$ for $x\in (\xi_k,\xi_{k+1})$, and (iii) the spectrum corresponding to
the zeros of $F(x)$ does not have any accumulation point. Additionally, one can
provide much simpler proof of that the spectrum in each parity eigenspace
${\cal B}_\pm$ is necessarily {\em nondegenerate}. Thereby the calculation of
spectra is greatly facilitated. Our results allow us to critically examine
recent claims regarding solvability and integrability of the Rabi model.
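
Properties (i)-(iii) are easy to visualize for any function of the stated
form; a toy Python example with made-up poles and residues (not Rabi-model
values):

    import numpy as np
    from scipy.optimize import brentq

    a0 = 0.1
    xi = np.array([1.0, 2.5, 4.0])   # toy real simple poles
    M = np.array([0.3, 0.8, 0.5])    # toy positive residues

    def Fx(x):
        return a0 + np.sum(M / (x - xi))

    # F decreases from +inf to -inf on each (xi_k, xi_{k+1}), so each gap
    # contains exactly one zero, found here by bracketing:
    zeros = [brentq(Fx, lo + 1e-9, hi - 1e-9) for lo, hi in zip(xi[:-1], xi[1:])]
    print(zeros)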
|
Considering that life on earth evolved about 3.7 billion years ago,
vertebrates are young, appearing in the fossil record during the Cambrian
explosion about 542 to 515 million years ago. Results from sequence analyses of
genomes from bacteria, yeast, plants, invertebrates and vertebrates indicate
that receptors for adrenal steroids (aldosterone, cortisol), and sex steroids
(estrogen, progesterone, testosterone) also are young, with receptors for
estrogens and 3-ketosteroids first appearing in basal chordates
(cephalochordates: amphioxus), which are close ancestors of vertebrates. An
ancestral progesterone receptor and an ancestral corticoid receptor, the common
ancestor of the glucocorticoid and mineralocorticoid receptors, evolved in
jawless vertebrates (cyclostomes: lampreys, hagfish). This was followed by
evolution of an androgen receptor and distinct glucocorticoid and
mineralocorticoid receptors in cartilaginous fishes (gnathostomes: sharks).
Adrenal and sex steroid receptors are not found in echinoderms and
hemichordates, which are ancestors in the lineage of cephalochordates and
vertebrates. The presence of steroid receptors in vertebrates, in which these
steroid receptors act as master switches to regulate differentiation,
development, reproduction, immune responses, electrolyte homeostasis and stress
responses, argues for an important role for steroid receptors in the
evolutionary success of vertebrates, considering that the human genome contains
about 22,000 genes, which is not much larger than genomes of invertebrates,
such as Caenorhabditis elegans (~18,000 genes) and Drosophila (~14,000 genes).
|
Java's type system mostly relies on type checking augmented with local type
inference to improve programmer convenience. We study global type inference for
Featherweight Generic Java (FGJ), a functional Java core language. Given
generic class headers and field specifications, our inference algorithm infers
all method types if classes do not make use of polymorphic recursion. The
algorithm is constraint-based and improves on prior work in several respects.
Despite the restricted setting, global type inference for FGJ is NP-complete.
|
We show that the multiplier algebra of the Fourier algebra on a locally
compact group $G$ can be isometrically represented on a direct sum of
non-commutative $L^p$ spaces associated to the right von Neumann algebra of
$G$. If these spaces are given their canonical operator space structure, then
we get a completely isometric representation of the completely bounded
multiplier algebra. We make a careful study of the non-commutative $L^p$ spaces
we construct, and show that they are completely isometric to those considered
recently by Forrest, Lee and Samei; we improve a result about module
homomorphisms. We suggest a definition of a Figa-Talamanca--Herz algebra built
out of these non-commutative $L^p$ spaces, say $A_p(\hat G)$. It is shown that
$A_2(\hat G)$ is isometric to $L^1(G)$, generalising the abelian situation.
|
Axion stars are hypothetical objects formed of axions, obtained as localized
and coherently oscillating solutions to their classical equation of motion.
Depending on the value of the field amplitude at the core $|\theta_0| \equiv
|\theta(r=0)|$, the equilibrium of the system arises from the balance of the
kinetic pressure and either self-gravity or axion self-interactions. Starting
from a general relativistic framework, we obtain the set of equations
describing the configuration of the axion star, which we solve as a function of
$|\theta_0|$. For small $|\theta_0| \lesssim 1$, we reproduce results
previously obtained in the literature, and we provide arguments for the
stability of such configurations from first principles. We compare
qualitative analytical results with a numerical calculation. For large
amplitudes $|\theta_0| \gtrsim 1$, the axion field probes the full non-harmonic
QCD chiral potential and the axion star enters the {\it dense} branch. Our
numerical solutions show that in this latter regime the axions are
relativistic, and that one should not use a single frequency approximation, as
previously applied in the literature. We employ a multi-harmonic expansion to
solve the relativistic equation for the axion field in the star, and
demonstrate that higher modes cannot be neglected in the dense regime. We
interpret the solutions in the dense regime as pseudo-breathers, and show that
the life-time of such configurations is much smaller than any cosmological time
scale.
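
Schematically, the multi-harmonic expansion referred to above replaces the
single-frequency ansatz $\theta(r,t) \simeq \theta_1(r)\cos(\omega t)$ by
(schematic form; the paper's precise mode content is not reproduced here)

    \theta(r,t) = \sum_{j=1}^{J} \theta_j(r) \cos(j \omega t),

and the statement is that the $j>1$ modes cannot be neglected on the dense
branch.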
|
For any positive integer $k$, there exist neural networks with $\Theta(k^3)$
layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which
cannot be approximated by networks with $\mathcal{O}(k)$ layers unless they
are exponentially large --- they must possess $\Omega(2^k)$ nodes. This result
is proved here for a class of nodes termed "semi-algebraic gates" which
includes the common choices of ReLU, maximum, indicator, and piecewise
polynomial functions, therefore establishing benefits of depth against not just
standard networks with ReLU gates, but also convolutional networks with ReLU
and maximization gates, sum-product networks, and boosted decision trees (in
this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes
are required).
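
The flavor of such separations can be reproduced numerically with the classic
triangle-map construction (an illustration of the phenomenon, not this paper's
proof): each composition is cheap in depth but doubles the number of monotone
pieces, which width-bounded shallow networks must pay for exponentially.

    import numpy as np

    def tri(x):
        # a 2-layer ReLU-expressible triangle map on [0, 1]
        return np.where(x < 0.5, 2 * x, 2 * (1 - x))

    xs = np.linspace(0.0, 1.0, 1_000_001)
    for k in (2, 4, 6):
        ys = xs
        for _ in range(k):
            ys = tri(ys)
        extrema = np.count_nonzero(np.diff(np.sign(np.diff(ys))))
        print(k, extrema)   # number of oscillations grows like 2^k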
|
In this article we investigate how to score a dichotomously scored question
when commingled with a typically scored set of Likert scale questions. The
goal is to find the upper value of the dichotomous response such that no
single question is overly weighted when analyzing the summed values of the
entire set of questions. Results demonstrate that setting the upper value of
the dichotomous response to the maximum value of the Likert scale is
inappropriate. We provide a more appropriate value to use for Likert scales
with maximum values up to 10.
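
One plausible way to formalize "no single question is overly weighted" is to
match the dichotomous item's variance to the average variance of the Likert
items; the sketch below implements that criterion, which is our assumption and
not necessarily the scoring rule derived in the article:

    import numpy as np

    def matched_upper_value(likert, p=0.5):
        # likert: (respondents, items) array of Likert responses;
        # p: assumed endorsement rate of the 0/u dichotomous item.
        # Choose u so that the item variance p(1-p)u^2 matches the
        # average Likert item variance.
        target = likert.var(axis=0).mean()
        return np.sqrt(target / (p * (1 - p)))

For a 1-10 scale this criterion yields values well below 10, in line with the
conclusion that using the scale maximum over-weights the dichotomous item.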
|
In this paper, using Kronecker's theorem, we discuss the set of common fixed
points of an n-parameter continuous semigroup of mappings. We also discuss
convergence theorems to a common fixed point of an n-parameter nonexpansive
semigroup.
|
We prove a version of Quillen's stratification theorem in equivariant
homotopy theory for a finite group $G$, generalizing the classical theorem in
two directions. Firstly, we work with arbitrary commutative equivariant ring
spectra as coefficients, and secondly, we categorify it to a result about
equivariant modules. Our general stratification theorem is formulated in the
language of equivariant tensor-triangular geometry, which we show to be tightly
controlled by the non-equivariant tensor-triangular geometry of the geometric
fixed points.
We then apply our methods to the case of Borel-equivariant Lubin--Tate
$E$-theory $\underline{E_n}$, for any finite height $n$ and any finite group
$G$, where we obtain a sharper theorem in the form of cohomological
stratification. In particular, this provides a computation of the Balmer
spectrum as well as a cohomological parametrization of all localizing
$\otimes$-ideals of the category of equivariant modules over $\underline{E_n}$,
thereby establishing a finite height analogue of the work of Benson, Iyengar,
and Krause in modular representation theory.
|
Life expectancies at birth are routinely computed from period life tables.
Such period life expectancies may be distorted by selection when comparing
countries where the living conditions improved earlier (like Norway and Sweden)
with countries where they improved later (like Italy and Japan). One way to get
a fair comparison between the countries is to use cohort data and consider the
expected number of years lost before a given age a. Contrary to the results
based on period data, one then finds that Italian women may expect to lose more
years than women in Norway and Sweden, while there are no indications that
Japanese women will lose fewer years than Scandinavian women.
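
In standard notation, the quantity compared is the expected number of years
lost before age $a$ (a standard demographic formula),

    L(a) = a - \int_0^a S(t)\, dt,

where $S(t)$ is the cohort survival function and $\int_0^a S(t)\,dt$ is the
partial life expectancy up to age $a$.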
|
The ubiquity of offensive and hateful content on online fora necessitates
automatic solutions that detect such content competently across target groups.
In this paper we show that text classification models trained on large
publicly available datasets, despite having high overall performance, may
significantly under-perform on several protected groups. On the
\citet{vidgen2020learning} dataset, we find the accuracy to be 37\% lower on
the under-annotated Black Women target group and 12\% lower on Immigrants,
where hate speech involves a distinct style. To address this, we propose to
perform
token-level hate sense disambiguation, and utilize tokens' hate sense
representations for detection, modeling more general signals. On two publicly
available datasets, we observe that the variance in model accuracy across
target groups drops by at least 30\%, improving the average target group
performance by 4\% and worst case performance by 13\%.
|
Models trained on synthetic images often face degraded generalization to real
data. By convention, these models are often initialized with an ImageNet
pre-trained representation. Yet the role of ImageNet knowledge is seldom
discussed despite common practices that leverage this knowledge to maintain the
generalization ability. An example is the careful hand-tuning of early stopping
and layer-wise learning rates, which is shown to improve synthetic-to-real
generalization but is also laborious and heuristic. In this work, we explicitly
encourage the synthetically trained model to maintain similar representations
with the ImageNet pre-trained model, and propose a \textit{learning-to-optimize
(L2O)} strategy to automate the selection of layer-wise learning rates. We
demonstrate that the proposed framework can significantly improve the
synthetic-to-real generalization performance without seeing and training on
real data, while also benefiting downstream tasks such as domain adaptation.
Code is available at: https://github.com/NVlabs/ASG.
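
A minimal PyTorch-style sketch of the representation-consistency term (the
distance metric, the weighting, and the two-output model interface are our
assumptions; the learning-to-optimize controller for layer-wise learning rates
is omitted):

    import torch
    import torch.nn.functional as F

    def synthetic_step_loss(model, frozen_ref, x, y, lam=0.1):
        # model returns (logits, features); frozen_ref is the ImageNet
        # pre-trained network kept fixed as a representation anchor.
        logits, feats = model(x)
        with torch.no_grad():
            _, ref_feats = frozen_ref(x)
        task = F.cross_entropy(logits, y)        # loss on synthetic images
        drift = F.mse_loss(feats, ref_feats)     # stay close to ImageNet features
        return task + lam * drift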
|
We present near-infrared spectroscopic observations from VLT ISAAC of
thirteen 250\mu m-luminous galaxies in the CDF-S, seven of which have confirmed
redshifts which average to <z > = 2.0 \pm 0.4. Another two sources of the 13
have tentative z > 1 identifications. Eight of the nine redshifts were
identified with H{\alpha} detection in H- and K-bands, three of which are
confirmed redshifts from previous spectroscopic surveys. We use their near-IR
spectra to measure H{\alpha} line widths and luminosities, which average to 415
\pm 20 km/s and 3 \times 10^{35} W (implying SFR(H{\alpha}) ~ 200 M_\odot/yr),
both similar to the H{\alpha} properties of SMGs. Just like SMGs, 250 \mu
m-luminous galaxies have large H{\alpha} to far-infrared (FIR) extinction
factors such that the H{\alpha} SFRs underestimate the FIR SFRs by ~8-80 times.
Far-infrared photometric points from observed 24\mu m through 870\mu m are used
to constrain the spectral energy distributions (SEDs) even though uncertainty
caused by FIR confusion in the BLAST bands is significant. The population has a
mean dust temperature of Td = 52 \pm 6 K, emissivity {\beta} = 1.73 \pm 0.13,
and FIR luminosity L_FIR = 3 \times 10^{13} L_\odot. Although selection at 250\mu
m allows for the detection of much hotter dust dominated HyLIRGs than SMG
selection (at 850\mu m), we do not find any >60 K 'hot-dust' HyLIRGs. We have
shown that near-infrared spectroscopy combined with good photometric redshifts
is an efficient way to spectroscopically identify and characterise these rare,
extreme systems, hundreds of which are being discovered by the newest
generation of IR observatories including the Herschel Space Observatory.
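
The quoted $T_d$ and $\beta$ are the parameters of the usual single-temperature
modified-blackbody fit to the FIR photometry (the standard form; the paper's
exact fitting choices are not reproduced here),

    S_\nu \propto \nu^{\beta} B_\nu(T_d),

where $B_\nu$ is the Planck function.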
|
The scalar scattering of a plane wave by a smooth obstacle with impedance
boundary conditions is considered. Upper bounds for the Total Cross Section and
for the absorbed power are presented.
|