The Drell-Yan process is a copious source of lepton pairs at high energy and
is measured with great precision at the Large Hadron Collider (LHC). Barring
any new light particles, beyond the Standard Model effects can be studied in
Drell-Yan production using an effective field theory. At tree level, new
4-fermion interactions dominate, while at one loop operators modifying 3-gauge
boson couplings contribute effects that are enhanced at high energy. We study
the sensitivity of the neutral Drell-Yan process to these dimension-6 operators
and compare the sensitivity to that of $W^+W^-$ pair production at the LHC.
|
One of the fundamental limitations in photonics is the lack of a
bidirectional transducer that can convert optical information into electronic
signals or vice versa. In acoustics or at microwave frequencies, wave signals
can be simultaneously measured and modulated by a single transducer. In optics,
however, optical fields are generally measured via reference-based
interferometry or holography using silicon-based image sensors, whereas they
are modulated using spatial light modulators. Here, we propose a scheme for an
optical bidirectional transducer using a spatial light modulator. By exploiting
the principle of time-reversal symmetry of light scattering, two-dimensional
reference-free measurement and modulation of optical fields are realized. We
experimentally demonstrate the optical bidirectional transducer for optical
information consisting of 128 x 128 spatial modes at visible and short-wave
infrared wavelengths.
|
The prime graph $\Gamma(G)$ of a finite group $G$ (also known as the
Gruenberg-Kegel graph) has as its vertices the prime divisors of $|G|$, and
$p$-$q$ is an edge in $\Gamma(G)$ if and only if $G$ has an element of order
$pq$. Since their inception in the 1970s, these graphs have been studied
extensively; however, completely classifying the possible prime graphs for
larger families of groups remains a difficult problem. For solvable groups such
a classification was found in 2015. In this paper we go beyond solvable groups
for the first time and characterize prime graphs of a more general class of
groups we call pseudo-solvable. These are groups whose composition factors are
either cyclic or $A_5$. The classification is based on two conditions: either the
vertices $\{2,3,5\}$ form a triangle in $\overline\Gamma(G)$, or $\{p,3,5\}$
form a triangle for some prime $p\neq 2$.
|
By considering a counting-type argument on Brownian sample paths, we prove a
result similar to that of Orey and Taylor on the exact Hausdorff dimension of
the rapid points of Brownian motion. Because of the nature of the proof we can
then apply the concepts to so-called complex oscillations (or 'algorithmically
random Brownian motion'), showing that their rapid points have the same
dimension.
|
We present a high-fidelity realization of the cosmological $N$-body
simulation from the Schneider et al. (2016) code comparison project. The
simulation was performed with our Abacus $N$-body code, which offers high force
accuracy, high performance, and minimal particle integration errors. The
simulation consists of $2048^3$ particles in a $500\ h^{-1}\mathrm{Mpc}$ box,
for a particle mass of $1.2\times 10^9\ h^{-1}\mathrm{M}_\odot$ with $10\
h^{-1}\mathrm{kpc}$ spline softening. Abacus executed 1052 global time steps to
$z=0$ in 107 hours on one dual-Xeon, dual-GPU node, for a mean rate of 23
million particles per second per step. We find Abacus is in good agreement with
Ramses and Pkdgrav3 and less so with Gadget3. We validate our choice of time
step by halving the step size and find sub-percent differences in the power
spectrum and 2PCF at nearly all measured scales, with $<0.3\%$ errors at
$k < 10\ h\,\mathrm{Mpc}^{-1}$. On large scales, Abacus reproduces linear theory better
than $0.01\%$. Simulation snapshots are available at
http://nbody.rc.fas.harvard.edu/public/S2016 .
|
This paper has been withdrawn by the authors due to a critical error in the
assessment of the entanglement and, consequently, of the protocol fidelity,
when no postselection is performed.
|
BESIII is a currently running tau-charm factory at the BEPCII collider, with
the largest samples of charm meson pairs at threshold, directly produced
charmonia, and other unique datasets. Machine learning techniques have been
employed to improve the performance of the BESIII software. Studies on MC
reweighting, particle identification, and cluster reconstruction for the CGEM
(Cylindrical Gas Electron Multiplier) inner tracker are presented.
|
Camouflaged objects adaptively fit their color and texture with the
environment, which makes them indistinguishable from the surroundings. Current
methods revealed that high-level semantic features can highlight the
differences between camouflaged objects and the backgrounds. Consequently, they
integrate high-level semantic features with low-level detailed features for
accurate camouflaged object detection (COD). Unlike previous designs for
multi-level feature fusion, we argue that enhancing low-level features is more
pressing for COD. In this paper, we propose an overlapped window cross-level
attention (OWinCA) to achieve the low-level feature enhancement guided by the
highest-level features. By sliding an aligned window pair on both the highest-
and low-level feature maps, the high-level semantics are explicitly integrated
into the low-level details via cross-level attention. Additionally, it employs
an overlapped window partition strategy to alleviate the incoherence among
windows, which prevents the loss of global information. These designs enable
the proposed OWinCA to enhance low-level features by promoting the separability
of camouflaged objects. The proposed OWinCANet then fuses these enhanced
multi-level features by simple convolution operation to achieve the final COD.
Experiments conducted on three large-scale COD datasets demonstrate that our
OWinCANet significantly surpasses the current state-of-the-art COD methods.
|
We study the regularity of the free boundary arising in a thermodynamically
consistent two-phase Stefan problem with surface tension by means of a family
of parameter-dependent diffeomorphisms, $L_p$-maximal regularity theory, and
the implicit function theorem.
|
We demonstrate that the most probable state of a conserved system with a
limited number of entities or molecules is the state where non-Gaussian and
non-chi-square distributions govern. We have conducted a thought experiment
using a specific setup and verified that the mathematical derivation of the
most probable state accurately predicts the results obtained by computer
simulations. The derived distributions approach the Gaussian and chi-square
distributions as the number of entities approaches infinity. The derived
distributions of the most probable state should play an important role in
fields of medical research where the number of entities in the system of
interest is limited. In particular, the non-chi-square distribution can be
interpreted as the asset distribution reached after a repetitive game in which
an arbitrary portion of one's assets is transferred to another arbitrary entity
among a number of entities with equal abilities.
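For illustration only, a minimal Python sketch of the repetitive transfer game mentioned above (all parameters are assumptions, not taken from the paper): at each round an arbitrary portion of a randomly chosen entity's assets is transferred to another randomly chosen entity, and with few entities the resulting spread visibly departs from the large-number limit.

```python
# Toy simulation of the repetitive asset-transfer game (parameters assumed).
import numpy as np

rng = np.random.default_rng(0)

def play(n_entities=10, n_rounds=100_000, initial=1.0):
    """Repeatedly transfer a random fraction of one entity's assets to another."""
    assets = np.full(n_entities, initial)
    for _ in range(n_rounds):
        giver, taker = rng.choice(n_entities, size=2, replace=False)
        transfer = rng.random() * assets[giver]   # arbitrary portion of the giver's assets
        assets[giver] -= transfer
        assets[taker] += transfer
    return assets

small = play(n_entities=5)      # few entities: strong deviation from the limiting form
large = play(n_entities=500)    # many entities: closer to the limiting distribution
print(np.sort(small))
print(large.mean(), large.var())
```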
|
In this paper, we study block Jacobi operators on $\mathbb{Z}$ with
quasi-periodic meromorphic potential. We prove the non-perturbative Anderson
localization for such operators in the large coupling regime.
|
These are notes for four lectures given at the 2010 CIMPA Research School
"Automorphic Forms and L-functions" in Weihai, China. The lectures focused on a
Burgess-like subconvex bound for twisted Hilbert modular L-functions published
jointly with Valentin Blomer in the same year. They discussed the proof in some
detail, especially how spectral theory can be used to estimate the relevant
shifted convolution sums efficiently. They also discussed briefly an
application for the number of representations by a totally positive ternary
quadratic form over a totally real number field.
|
Monte Carlo calculations of fission of actinides and pre-actinides induced by
protons and neutrons in the energy range from 100 MeV to 1 GeV are carried out
by means of a recent version of the Li\`ege Intranuclear Cascade Model, INCL++,
coupled with two different evaporation-fission codes, GEMINI++ and ABLA07. In
order to reproduce experimental fission cross sections, model parameters are
adjusted to the available (p,f) cross sections and then used to predict (n,f)
cross sections for the same isotopes.
|
Maslov indices are integers that appear in semiclassical wave functions and
quantization conditions. They are often notoriously difficult to compute. We
present methods of computing the Maslov index that rely only on typically
elementary Poisson brackets and simple linear algebra. We also present a
singular differential form, whose integral along a curve gives the Maslov index
of that curve. The form is closed but not exact, and transforms by an exact
differential under canonical transformations. We illustrate the method with the
$6j$-symbol, which is important in angular momentum theory and in quantum
gravity.
|
The production of jets in low $Q^2$ $ep$ scattering (photoproduction) and in
low $Q^2$ $e^+e^-$ scattering ($\gamma\gamma$ scattering) allows for testing
perturbative QCD and for measuring the proton and photon structure functions.
This requires exact theoretical predictions for one- and two-jet cross
sections. We describe the theoretical formalism, giving sufficient details, for
calculating the direct and resolved processes in $\gamma p$ and $\gamma\gamma$
reactions in next-to-leading order QCD. We present the complete analytical
results for the Born terms, the virtual, and the real corrections. To separate
singular and regular regions of phase space we use the phase space slicing
method with an invariant mass cut-off. In this way, all soft and collinear
singularities are either canceled or absorbed into the structure functions.
Using a flexible Monte Carlo program, we evaluate the cross sections
numerically and perform various tests and comparisons with other calculations.
We consider the scale dependence of our results and compare them to data from
the experiments H1 and ZEUS at HERA and OPAL at LEP.
|
It is believed that in presence of some strong electromagnetic fields, called
overcritical, the (Dirac) vacuum becomes unstable and decays leading to a
spontaneous production of an electron-positron pair. Yet, the arguments are
based mainly on the analysis of static fields and are insufficient to explain
this phenomenon completely. Therefore, we consider time-dependent overcritical
fields and show, within the external field formulation, how spontaneous
particle creation can be defined and measured in physical processes. We prove
that the effect always exists when a strongly overcritical field is switched
on, but that it becomes unstable, and hence generically only approximate and
non-unique, when the field is switched on and off. In the latter case, it
becomes unique and stable only in the adiabatic limit.
|
Gradient descent and its variants are de facto standard algorithms for
training machine learning models. As gradient descent is sensitive to its
hyperparameters, they need to be tuned carefully, for example with a grid
search, which is time-consuming, especially when multiple hyperparameters
exist. Recently, parameter-free methods that adjust the hyperparameters on the
fly have been studied. However, the existing work only studied parameter-free
methods for the stepsize, and parameter-free methods for other hyperparameters
have not been explored. For instance, the gradient clipping threshold is also a
crucial hyperparameter in addition to the stepsize to prevent gradient
explosion issues, but none of the existing studies investigated the
parameter-free methods for clipped gradient descent. In this work, we study the
parameter-free methods for clipped gradient descent. Specifically, we propose
Inexact Polyak Stepsize, which converges to the optimal solution without any
hyperparameter tuning and whose convergence rate is asymptotically independent
of $L$ under the $L$-smooth and $(L_0, L_1)$-smooth assumptions on the loss
function, matching that of clipped gradient descent with well-tuned hyperparameters. We
numerically validated our convergence results using a synthetic function and
demonstrated the effectiveness of our proposed methods using LSTM, Nano-GPT,
and T5.
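For context, a minimal sketch of ordinary clipped gradient descent driven by a classical Polyak stepsize on a synthetic quadratic; it is not the paper's Inexact Polyak Stepsize (which removes the need to know the optimal value $f^*$), and the assumption $f^* = 0$ below is made only to keep the example self-contained.

```python
# Illustrative sketch: clipped gradient descent with the classical Polyak
# stepsize (f(x) - f*) / ||grad||^2, assuming f* = 0 for the synthetic test.
import numpy as np

def clipped_polyak_gd(grad_f, f, x0, f_star=0.0, clip=1.0, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        g_norm = np.linalg.norm(g)
        if g_norm == 0:
            break
        step = (f(x) - f_star) / g_norm**2        # Polyak stepsize
        update = step * g
        if np.linalg.norm(update) > clip:         # gradient clipping on the update
            update *= clip / np.linalg.norm(update)
        x -= update
    return x

# Synthetic loss f(x) = 0.5 * ||x||^2, so the optimum is x* = 0 and f* = 0.
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
print(clipped_polyak_gd(grad_f, f, x0=np.full(10, 5.0)))
```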
|
A common theme among the proposed models for network epidemics is the
assumption that the propagating object, i.e., a virus or a piece of
information, is transferred across the nodes without going through any
modification or evolution. However, in real-life spreading processes, pathogens
often evolve in response to changing environments and medical interventions and
information is often modified by individuals before being forwarded. In this
paper, we investigate the evolution of spreading processes on complex networks
with the aim of i) revealing the role of evolution on the threshold,
probability, and final size of epidemics; and ii) exploring the interplay
between the structural properties of the network and the dynamics of evolution.
In particular, we develop a mathematical theory that accurately predicts the
epidemic threshold and the expected epidemic size as functions of the
characteristics of the spreading process, the evolutionary dynamics of the
pathogen, and the structure of the underlying contact network. In addition to
the mathematical theory, we perform extensive simulations on random and
real-world contact networks to verify our theory and reveal the significant
shortcomings of the classical mathematical models that do not capture
evolution. Our results reveal that the classical, single-type bond-percolation
models may accurately predict the threshold and final size of epidemics, but
their predictions on the probability of emergence are inaccurate on both random
and real-world networks. This inaccuracy sheds the light on a fundamental
disconnect between the classical bond-percolation models and real-life
spreading processes that entail evolution. Finally, we consider the case when
co-infection is possible and show that co-infection could lead the order of
phase transition to change from second-order to first-order.
|
Motivated by the recent studies of intrinsic local moments and Kondo-driven
phases in magic-angle twisted bilayer graphene, we investigate the
renormalization of Kondo coupling ($J_K$) and the competing Hund's rule
interaction ($J$) in the low-energy limit. Specifically, we consider a
surrogate single-impurity generalized Kondo model and employ the poor man's
scaling approach. The scale-dependent $J_K$ and $J$ are derived analytically
within the one-loop poor man's scaling approach, and the Kondo temperature
($T_K$) and the characteristic Hund's rule coupling ($J^*$, defined by the
renormalized value of $J$ at some small finite energy scale) are estimated over
a wide range of filling factors. We find that $T_K$ depends strongly on the
filling factors as well as the value of $J_K$. Slightly doping away from
integer fillings and/or increasing $J_K$ may substantially enhance $T_K$ in the
parameter regime relevant to experiments. $J^*$ is always reduced from the bare
value of $J$, but the filling factor dependence is not as significant as it is
for $T_K$. Our results suggest that it is essential to incorporate the
renormalization of $J_K$ and $J$ in the many-body calculations, and Kondo
screening should occur for a wide range of fractional fillings in magic-angle
twisted bilayer graphene, implying the existence of Kondo-driven correlated
metallic phases. We also point out that the observation of distinct phases at
integer fillings in different samples may be due to the variation of $J_K$ in
addition to disorder and strain in the experiments.
|
We present the results of radio continuum and molecular line observations
conducted using the Mopra millimetre-wave telescope and Australia Telescope
Compact Array. These observations reveal the presence of a dense core embedded
within each cloud, and a layer of hot ionised gas coincident with their bright
rims. The ionised gas has electron densities significantly
higher than the critical density above which an ionised boundary layer can form
and be maintained, strongly supporting the hypothesis that these clouds are
being photoionised by the nearby OB star(s). From an evaluation of the pressure
balance between the ionised and molecular gas, SFO 58 and SFO 68 are identified
as being in a post-pressure balance state, while SFO 75 and SFO 76 are more
likely to be in a pre-pressure balance state. We find secondary evidence for
the presence of ongoing star formation within SFO 58 and SFO 68, such as
molecular outflows, OH, H$_2$O and methanol masers, and identify a potential
embedded UC HII region, but find no evidence for any ongoing star formation
within SFO 75 and SFO 76. Our results are consistent with the star formation
within SFO 58 and SFO 68 having been triggered by the radiatively driven
implosion of these clouds.
|
This is a brief overview of the role of mathematicians in the so-called
"Luzin Case" as well as some analysis of the mathematical and humanitarian
roots of the affair.
|
We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite
planar graphs is at most 3. In particular, every finite planar graph is
definable in first-order logic with counting using at most 4 variables. The
previously best known upper bounds for the dimension and number of variables
were 14 and 15, respectively.
First we show that, for dimension 3 and higher, the WL-algorithm correctly
tests isomorphism of graphs in a minor-closed class whenever it determines the
orbits of the automorphism group of any arc-colored 3-connected graph belonging
to this class.
Then we prove that, apart from several exceptional graphs (which have
WL-dimension at most 2), the individualization of two correctly chosen vertices
of a colored 3-connected planar graph followed by the 1-dimensional
WL-algorithm produces the discrete vertex partition. This implies that the
3-dimensional WL-algorithm determines the orbits of a colored 3-connected
planar graph.
As a byproduct of the proof, we get a classification of the 3-connected
planar graphs with fixing number 3.
|
In this work, we study Lie groupoids equipped with multiplicative foliations
and the corresponding infinitesimal data. We determine the infinitesimal
counterpart of a multiplicative foliation in terms of its core and sides
together with a partial connection satisfying special properties, giving rise
to the concept of IM-foliation on a Lie algebroid. The main result of this
paper shows that if $G$ is a source simply connected Lie groupoid with Lie
algebroid $A$, then there exists a one-to-one correspondence between
multiplicative foliations on $G$ and IM-foliations on the Lie algebroid $A$.
|
We undertake a rigorous study of the stability of the Cauchy horizon in the
naked self-similar Lemaitre-Tolman-Bondi spacetimes under even parity linear
perturbations. We use a combination of energy methods and results about
L^p-spaces to determine the behaviour of the perturbations as they evolve
through the spacetime. We first establish that an average of the perturbation
generically diverges on the Cauchy horizon. We next introduce a rescaled
version of the perturbation, and show that it is bounded and non-zero on the
Cauchy horizon. This in turn shows that the perturbation itself diverges in a
pointwise fashion on the Cauchy horizon. We give a physical interpretation of
this result using the perturbed Weyl scalars. This result supports the
hypothesis of cosmic censorship.
|
We study distributed optimization problems over a network when the
communication between the nodes is constrained, and so information that is
exchanged between the nodes must be quantized. Recent advances using the
distributed gradient algorithm with a quantization scheme at a fixed resolution
have established convergence, but at rates significantly slower than when the
communications are unquantized.
In this paper, we introduce a novel quantization method, which we refer to as
adaptive quantization, that allows us to match the convergence rates under
perfect communications. Our approach adjusts the quantization scheme used by
each node as the algorithm progresses: as we approach the solution, we become
more certain about where the state variables are localized, and adapt the
quantizer codebook accordingly.
We bound the convergence rates of the proposed method as a function of the
communication bandwidth, the underlying network topology, and structural
properties of the constituent objective functions. In particular, we show that
if the objective functions are convex or strongly convex, then using adaptive
quantization does not affect the rate of convergence of the distributed
subgradient methods when the communications are quantized, except for a
constant that depends on the resolution of the quantizer. To the best of our
knowledge, the rates achieved in this paper are better than those of any existing
work in the literature on distributed gradient methods under finite communication
bandwidths. We also provide numerical simulations that compare convergence
properties of the distributed gradient methods with and without quantization
for solving distributed regression problems for both quadratic and absolute
loss functions.
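To make the idea concrete, a toy sketch follows (all design choices here are assumptions, not the paper's algorithm): each node quantizes its state with a fixed number of bits but an adaptive, shrinking range centered on its previously transmitted value, so the effective resolution improves as the iterates localize near the optimum.

```python
# Toy two-node distributed subgradient method with an adaptive uniform quantizer.
import numpy as np

def quantize(x, center, radius, bits=4):
    """Uniform quantizer on [center - radius, center + radius] with 2**bits levels."""
    levels = 2 ** bits
    x = np.clip(x, center - radius, center + radius)
    step = 2 * radius / (levels - 1)
    return center - radius + np.round((x - (center - radius)) / step) * step

a = np.array([1.0, 3.0])        # local objectives f_i(x) = 0.5*(x - a_i)^2, so x* = 2
x = np.zeros(2)                 # local states
q_prev = np.zeros(2)            # last transmitted (quantized) values = codebook centers
radius = 4.0                    # current quantization range
for t in range(1, 200):
    q = np.array([quantize(x[i], q_prev[i], radius) for i in range(2)])
    neighbor = q[::-1]                               # each node hears the other's quantized state
    grad = x - a                                     # local gradients
    x = 0.5 * (x + neighbor) - (1.0 / t) * grad      # consensus + diminishing subgradient step
    q_prev, radius = q, max(radius * 0.98, 1e-6)     # shrink the range as iterates localize
print(x)   # both states approach x* = 2 up to quantization error
```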
|
The Mock LISA Data Challenge is a worldwide effort to solve the LISA data
analysis problem. We present here our results for the Massive Black Hole Binary
(BBH) section of Round 1. Our results cover Challenge 1.2.1, where the
coalescence of the binary is seen, and Challenge 1.2.2, where the coalescence
occurs after the simulated observational period. The data stream is composed of
Gaussian instrumental noise plus an unknown BBH waveform. Our search algorithm
is based on a variant of the Markov Chain Monte Carlo method that uses
Metropolis-Hastings sampling and thermostated frequency annealing. We present
results from the training data sets and the blind data sets. We demonstrate
that our algorithm is able to rapidly locate the sources, accurately recover
the source parameters, and provide error estimates for the recovered
parameters.
|
Fast magnetic reconnection events can be a very powerful mechanism operating
in the core region of microquasars and AGNs. In earlier work, it has been
suggested that the power released by fast reconnection events between the
magnetic field lines lifting from the inner accretion disk region and the lines
anchored into the central black hole could accelerate relativistic particles
and produce the observed radio emission from microquasars and low luminosity
AGNs (LLAGNs). Moreover, it has been proposed that the observed correlation
between the radio emission and the mass of these sources, which span about ten
orders of magnitude in mass, might be related to this process. In the present
work, we revisit this model comparing two different fast magnetic reconnection
mechanisms, namely, fast reconnection driven by anomalous resistivity (AR) and
by turbulence (as described in Lazarian and Vishniac 1999). We apply the
scenario above to a much larger sample of sources (including also blazars, and
gamma-ray bursts - GRBs), and find that LLAGNs and microquasars do confirm the
trend above. Furthermore, when driven by turbulence, not only their radio but
also their gamma-ray emission can be due to magnetic power released by fast
reconnection, which may accelerate particles to relativistic velocities in the
core region of these sources. Thus the turbulent-driven fast reconnection model
reproduces the observed emission better than the AR model. On the
other hand, the emission from blazars and GRBs does not follow the same trend
as that of the LLAGNs and microquasars, suggesting that the radio and gamma-ray
emission in these cases is produced further out along the jet, by another
population of relativistic particles, as expected.
|
In this work we show that given a connectivity graph $G$ of a $[[n,k,d]]$
quantum code, there exists $\{K_i\}_i, K_i \subset G$, such that $\sum_i
|K_i|\in \Omega(k), \ |K_i| \in \Omega(d)$, and the $K_i$'s are
$\tilde{\Omega}(\sqrt{k/n})$-expanders. If the codes are classical, we show
instead that the $K_i$'s are $\tilde{\Omega}(k/n)$-expanders.
We also show converses to these bounds. In particular, we show that the BPT
bound for classical codes is tight in all Euclidean dimensions. Finally, we
prove structural theorems for graphs with no "dense" subgraphs which might be
of independent interest.
|
The "VOiCES from a Distance Challenge 2019" is designed to foster research in
the area of speaker recognition and automatic speech recognition (ASR) with the
special focus on single channel distant/far-field audio, under noisy
conditions. The main objectives of this challenge are to: (i) benchmark
state-of-the-art technology in the area of speaker recognition and automatic
speech recognition (ASR), (ii) support the development of new ideas and
technologies in speaker recognition and ASR, (iii) support new research groups
entering the field of distant/far-field speech processing, and (iv) provide a
new, publicly available dataset to the community that exhibits realistic
distance characteristics.
|
We present the result of our calculations of ultrafast electron diffraction
(UED) for cyclobutanone excited into the $S_2$ electronic state, based on
non-adiabatic dynamics simulations with the \textit{Ab Initio} Multiple Cloning
(AIMC) method, with the electronic structure calculated at the
SA(3)-CASSCF(12,12)/aug-cc-pVDZ level of theory. We identify key features in
the UED pattern that can be used to distinguish between the reaction
pathways observed in the AIMC dynamics, although there is significant overlap
between representative signals due to structural similarity of the products.
The calculated UED pattern can be compared with experiment.
|
We examine the changes in kinetic energy dissipation of a turbulent channel
flow caused by a spanwise magnetic field. The numerical study is based on our
simulation data from [Krasnov, Zikanov, Schumacher, Boeck, Phys. Fluids 20,
095105 (2008)] obtained by direct and large eddy simulations. We find that the
Joule dissipation can exceed the viscous dissipation in the weakly dissipative
bulk region, but remains comparatively small in the turbulence-generating
near-wall region.
|
A new presentation of the Borchers-Buchholz result of the Lorentz-invariance
of the energy-momentum spectrum in theories with broken Lorentz symmetry is
given in terms of properties of the Green's functions of microcausal Bose and
Fermi-fields. Strong constraints based on complex geometry phenomena are
shown to result from the interplay of the basic principles of causality and
stability in Quantum Field Theory: if microcausality and energy-positivity in
all Lorentz frames are satisfied, then it is unavoidable that all stable
particles of the theory be governed by Lorentz-invariant dispersion laws; in
all the field sectors, discrete parts outside the continuum as well as the
thresholds of the continuous parts of the energy-momentum spectrum, with
possible holes inside it, are necessarily represented by mass-shell
hyperboloids (or the light-cone). No violation of this geometrical fact can be
produced by spontaneous breaking of the Lorentz symmetry, even if the
field-theoretical framework is enlarged so as to include short-distance
singularities wilder than distributions.
|
There are now a half dozen young pulsars detected in high energy photons by
the Compton GRO, showing a variety of emission efficiencies and pulse profiles.
We present here a calculation of the pattern of high energy emission on the sky
in a model which posits $\gamma$-ray production by charge depleted gaps in the
outer magnetosphere. This model accounts for the radio to $\gamma$-ray pulse
offsets of the known pulsars, as well as the shape of the high energy pulse
profiles. We also show that $\sim 1/3$ of emitting young radio pulsars will not
be detected due to beaming effects, while $\sim 2.5 \times$ the number of
radio-selected $\gamma$-ray pulsars will be viewed only at high energies. Finally,
we compute the polarization angle variation and find that the previously
misunderstood optical polarization sweep of the Crab pulsar arises naturally in
this picture. These results strongly support an outer-magnetosphere location
for the $\gamma$-ray emission.
|
We use the structure theory of minimal dynamical systems to show that, for a
general group $\Gamma$, a tame, metric, minimal dynamical system $(X, \Gamma)$
has the following structure: \begin{equation*} \xymatrix {& \tilde{X}
\ar[dd]_\pi \ar[dl]_\eta & X^* \ar[l]_-{\theta^*} \ar[d]^{\iota}
\ar@/^2pc/@{>}^{\pi^*}[dd]\\ X & & Z \ar[d]^\sigma\\ & Y & Y^* \ar[l]^\theta }
\end{equation*} Here (i) $\tilde{X}$ is a metric minimal and tame system (ii)
$\eta$ is a strongly proximal extension, (iii) $Y$ is a strongly proximal
system, (iv) $\pi$ is a point distal and RIM extension with unique section, (v)
$\theta$, $\theta^*$ and $\iota$ are almost one-to-one extensions, and (vi)
$\sigma$ is an isometric extension.
When the map $\pi$ is also open this diagram reduces to \begin{equation*}
\xymatrix {& \tilde{X} \ar[dl]_\eta \ar[d]^{\iota} \ar@/^2pc/@{>}^\pi[dd]\\ X &
Z \ar[d]^\sigma\\ & Y } \end{equation*}
In general the presence of the strongly proximal extension $\eta$ is
unavoidable. If the system $(X, \Gamma)$ admits an invariant measure $\mu$ then
$Y$ is trivial and $X = \tilde{X}$ is an almost automorphic system; i.e. $X
\overset{\iota}{\to} Z$, where $\iota$ is an almost one-to-one extension and
$Z$ is equicontinuous. Moreover, $\mu$ is unique and $\iota$ is a measure
theoretical isomorphism $\iota : (X,\mu, \Gamma) \to (Z, \lambda, \Gamma)$,
with $\lambda$ the Haar measure on $Z$. Thus, this is always the case when
$\Gamma$ is amenable.
|
Constant-potential molecular dynamics (MD) simulations are indispensable for
understanding the capacitance, structure, and dynamics of electrical double
layers (EDLs) at the atomistic level. However, the classical constant-potential
method, relying on the so-called 'floating charges' to keep electrode
equipotential, overlooks quantum effects on the electrode and always
underestimates EDL capacitance for typical electrochemical systems featuring
metal electrodes in aqueous electrolytes. Here, we propose a universal
theoretical framework, the moment-tensor-based constant potential method (mCPM),
to capture electronic structure variations with electric moments. For EDLs at
Au(111) electrodes, mCPM-based MD reveals bell-shaped capacitance curves whose
magnitude and shape are both quantitatively consistent with experiments. It further
unveils the potential-dependent local electric fields, agreeing with
experimental observations of the vibrational redshift of interfacial water under
negative polarization and predicting a blueshift under positive polarization,
and identifies geometry dependence of two time scales during EDL formation.
|
We outline two concepts to explain Ultra High Energy Cosmic Rays (UHECRs),
one based on radio galaxies and their relativistic jets and terminal hot spots,
and one based on relativistic Super-Novae (SNe) or Gamma Ray Bursts (GRBs) in
starburst galaxies, one matching the arrival direction data in the South (the
radio galaxy Cen A) and one in the North (the starburst galaxy M82). Ubiquitous
neutrino emission follows, accompanied by compact TeV photon emission that is
more easily detectable if the direction is towards Earth. The ejection of
UHECRs comes last. We have observed particles up to ZeV, neutrinos up to PeV,
photons up to TeV, and 30 - 300 Hz GW events, and hope soon to detect GW events
of order Hz to mHz. The energy turnover in single low-frequency GW events may be of
order 10^63 erg. How can we further test these concepts? First of all by
associating individual UHECR events, or directional groups of events, with
chemical composition in both the Telescope Array (TA) Coll. and the Auger Coll.
data. Second by identifying more TeV to PeV neutrinos with recent SMBH mergers.
Third, by detecting the GW events of order < mHz from SMBH binaries, and identifying
the galaxies hosting the stellar BH mergers and their GW events in the range up
to 300 Hz. Fourth by finally detecting the formation of the first generation of
SMBHs and their mergers, surely a spectacular discovery.
|
Polar ring galaxies are ideal objects with which to study the
three-dimensional shapes of galactic gravitational potentials since two
rotation curves can be measured in two perpendicular planes. Observational
studies have uncovered systematically larger rotation velocities in the
extended polar rings than in the associated host galaxies. In the dark matter
context, this can only be explained through dark halos that are systematically
flattened along the polar rings. Here, we point out that these objects can also
be used as very effective tests of gravity theories, such as those based on
Milgromian dynamics (MOND). We run a set of polar ring models using both
Milgromian and Newtonian dynamics to predict the expected shapes of the
rotation curves in both planes, varying the total mass of the system, the mass
of the ring with respect to the host, as well as the size of the hole at the
center of the ring. We find that Milgromian dynamics not only naturally leads
to rotation velocities being typically higher in the extended polar rings than
in the hosts, as would be the case in Newtonian dynamics without dark matter,
but that it also gets the shape and amplitude of velocities correct. Milgromian
dynamics thus adequately explains this particular property of polar ring
galaxies.
|
A possible resolution of the early thermalisation puzzle is provided by the
notion of far-from-equilibrium attractors which arise due to the specific
kinematics of heavy-ion collisions. Attractors appear in a wide variety of
dynamical models, and it is plausible that they also occur in QCD. The physical
implications of these observations depend on how robust this effect is when
the symmetry restrictions that are typically imposed are relaxed. I briefly review this line of
research and its perspectives.
|
In very dense environments, neutrinos can undergo fast flavor conversions on
scales as short as a few centimeters provided that the angular distribution of
the neutrino lepton number crosses zero. This work presents the first attempt
to establish whether the non-negligible abundance of muons and their
interactions with neutrinos in the core of supernovae can affect the occurrence
of such crossings. For this purpose we employ state-of-the-art one-dimensional
core-collapse supernova simulations, considering models that include
muon-neutrino interactions as well as models without these reactions. Although
a consistent treatment of muons in the equation of state and neutrino transport
does not seem to modify significantly the conditions for the occurrence of fast
modes, it allows for the existence of an interesting phenomenon, namely fast
instabilities in the $\mu-\tau$ sector. We also show that crossings below the
supernova shock are a relatively generic feature of the one-dimensional
simulations under investigation, which contrasts with the previous reports in
the literature. Our results highlight the importance of multi-dimensional
simulations with muon creation, where our results must be tested in the future.
|
We discuss the generation of small neutrino masses from effective operators
higher than dimension five, which open new possibilities for low scale see-saw
mechanisms. In order to forbid the radiative generation of neutrino mass by
lower dimensional operators, extra fields are required, which are charged under
a new symmetry. We discuss this mechanism in the framework of a two Higgs
doublet model. We demonstrate that the tree level generation of neutrino mass
from higher dimensional operators often leads to inverse see-saw scenarios in
which small lepton number violating terms are naturally suppressed by the new
physics scale. Furthermore, we systematically discuss tree level
generalizations of the standard see-saw scenarios from higher dimensional
operators. Finally, we point out that higher dimensional operators can also be
generated at the loop level. In this case, we obtain the TeV scale as the new
physics scale even with order-one couplings.
|
We investigate coarsening and persistence in the voter model by introducing
the quantity $P_n(t)$, defined as the fraction of voters who changed their
opinion $n$ times up to time $t$. We show that $P_n(t)$ exhibits scaling behavior
that strongly depends on the dimension as well as on the initial opinion
concentrations. Exact results are obtained for the average number of opinion
changes, <n>, and the autocorrelation function, $A(t)\equiv \sum (-1)^n P_n\sim
t^{-d/2}$ in arbitrary dimension d. These exact results are complemented by a
mean-field theory, heuristic arguments and numerical simulations. For
dimensions d>2, the system does not coarsen, and the opinion changes follow a
nearly Poissonian distribution, in agreement with mean-field theory. For
dimensions d<=2, the distribution is given by a different scaling form, which
is characterized by nontrivial scaling exponents. For unequal opinion
concentrations, an unusual situation occurs where different scaling functions
correspond to the majority and the minority, as well as to even and odd $n$.
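A toy sketch of the setup, under assumed parameters, which simulates the voter model on a ring and records the empirical fraction of voters with $n$ opinion changes:

```python
# Voter model on a 1D ring: each update, a random site copies a random nearest
# neighbour; we track how many times each voter has changed opinion (P_n(t)).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
L = 1000                                   # number of voters on the ring
spins = rng.choice([0, 1], size=L)         # equal initial opinion concentrations
changes = np.zeros(L, dtype=int)

for _ in range(50 * L):                    # roughly 50 Monte Carlo sweeps
    i = rng.integers(L)
    j = (i + rng.choice([-1, 1])) % L      # random nearest neighbour
    if spins[i] != spins[j]:
        spins[i] = spins[j]
        changes[i] += 1

P_n = Counter(changes)                     # empirical distribution of opinion changes
for n in sorted(P_n)[:10]:
    print(n, P_n[n] / L)
```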
|
The certification of entanglement dimensionality is of great importance in
characterizing quantum systems. Recently, it has been pointed out that the quantum
correlations of high-dimensional states can be simulated with a sequence of
lower-dimensional states. This problem may render existing characterization
protocols unreliable---the observed entanglement may not be truly
high-dimensional. Here, we introduce the notion of irreducible entanglement
to capture its dimensionality that is indecomposable in terms of a sequence of
lower-dimensional entangled systems. We prove this new feature can be detected
in a measurement-device-independent manner with an entanglement witness
protocol. To demonstrate the practicability of this technique, we
experimentally apply it to a 3-dimensional bipartite state, and the result
certifies the existence of irreducible (at least) 3-dimensional entanglement.
|
Many applications in science and engineering require the solution of large
linear discrete ill-posed problems that are obtained by the discretization of a
Fredholm integral equation of the first kind in several space-dimensions. The
matrix that defines these problems is very ill-conditioned and generally
numerically singular, and the right-hand side, which represents measured data,
typically is contaminated by measurement error. Straightforward solution of
these problems generally is not meaningful due to severe error propagation.
Tikhonov regularization seeks to alleviate this difficulty by replacing the
given linear discrete ill-posed problem by a penalized least-squares problem,
whose solution is less sensitive to the error in the right-hand side and to
round-off errors introduced during the computations. This paper discusses the
construction of penalty terms that are determined by solving a matrix-nearness
problem. These penalty terms allow partial transformation to standard form of
Tikhonov regularization problems that stem from the discretization of integral
equations on a cube in several space-dimensions.
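As background (standard Tikhonov notation, not the paper's specific matrix-nearness construction): with penalty operator $L$ and regularization parameter $\mu>0$, the general-form problem and the standard form obtained by substituting $y = Lx$ (via a suitable generalized inverse of $L$ when $L$ is not invertible) read

```latex
\begin{equation*}
  \min_{x} \,\|Ax - b\|_2^2 + \mu \,\|Lx\|_2^2
  \qquad\longrightarrow\qquad
  \min_{y} \,\|\bar{A}y - \bar{b}\|_2^2 + \mu \,\|y\|_2^2 .
\end{equation*}
```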
|
We calculate the structure and neutron content of neutrino-heated MHD winds
driven from the surface of newly-formed magnetars (``proto-magnetars'') and
from the midplane of hyper-accreting disks, two of the possible central engines
for gamma-ray bursts (GRBs) and hyper-energetic supernovae (SNe). Both the
surface of proto-magnetars and the midplane of neutrino-cooled accretion flows
(NDAFs) are electron degenerate and neutron-rich (neutron-to-proton ratio n/p
>> 1). If this substantial free neutron excess is preserved to large radii in
ultra-relativistic outflows, several important observational consequences may
result. Weak interaction processes, however, can drive n/p to ~1 in the
nondegenerate regions that obtain just above the surfaces of NDAFs and
proto-magnetars. Our calculations show that mildly relativistic neutron-rich
outflows from NDAFs are possible in the presence of a strong poloidal magnetic
field. However, we find that neutron-rich winds possess a minimum mass-loss
rate that likely precludes simultaneously neutron-rich and ultra-relativistic
(Lorentz factor > 100) NDAF winds accompanying a substantial accretion power.
In contrast, proto-magnetars are capable of producing neutron-rich
long-duration GRB outflows ~10-30 seconds following core bounce for
sub-millisecond rotation periods; such outflows would, however, accompany only
extremely energetic events, in which the GRB + SN energy budget exceeds ~ 4e52
ergs. Neutron-rich highly relativistic outflows may also be produced during
some short-duration GRBs by geometrically thick accretion disks formed from
compact object mergers. The implications for r-process nucleosynthesis, optical
transients due to non-relativistic neutron-rich winds, and Nickel production in
proto-magnetar and NDAF winds are also briefly discussed.
|
We continue the study of twisted automorphisms of Hopf algebras started in
"Twisted automorphisms of Hopf algebras". In this paper we concentrate on the
group algebra case. We describe the group of twisted automorphisms of the group
algebra of a group of order coprime to 6. The description turns out to be very
similar to the one for the universal enveloping algebra given in "Twisted
automorphisms of Hopf algebras".
|
We design a Convolutional Neural Network (CNN) which studies the correlation
between the discretized inverse temperature and spin configurations of the 2D
Ising model, and show that it can find a signature of the phase transition
without being taught any a priori information about it. We also define a new
order parameter via the CNN and show that it provides a well-approximated
critical inverse temperature. In addition, we compare activation functions for
the convolution layer and find that the Rectified Linear Unit (ReLU) is
important for detecting the phase transition of the 2D Ising model.
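A minimal sketch of such a network (the architecture details below are assumptions for illustration, not the paper's exact model): a small CNN with ReLU activations in the convolution layers, mapping $\pm 1$ spin configurations to discretized inverse-temperature classes.

```python
# Small CNN classifier for Ising spin configurations (illustrative architecture).
import torch
import torch.nn as nn

class IsingCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, spins):              # spins: (batch, 1, L, L) with values +/-1
        h = self.features(spins.float())
        return self.classifier(h.flatten(1))

model = IsingCNN()
dummy = torch.randint(0, 2, (4, 1, 32, 32)) * 2 - 1   # random +/-1 configurations
print(model(dummy).shape)                 # torch.Size([4, 10]) class scores per sample
```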
|
For a long time, computer architecture and systems have been optimized for the
efficient execution of machine learning (ML) models. Now, it is time to
reconsider the relationship between ML and systems, and let ML transform the
way that computer architecture and systems are designed. This embraces a
twofold meaning: improvement of designers' productivity, and completion of the
virtuous cycle. In this paper, we present a comprehensive review of the work
that applies ML for computer architecture and system design. First, we perform
a high-level taxonomy by considering the typical role that ML techniques take
in architecture/system design, i.e., either for fast predictive modeling or as
the design methodology. Then, we summarize the common problems in computer
architecture/system design that can be solved by ML techniques, and the typical
ML techniques employed to resolve each of them. In addition to emphasis on
computer architecture in a narrow sense, we adopt the concept that data centers
can be recognized as warehouse-scale computers; brief discussions are provided
on adjacent computer systems, such as code generation and compilers; we
also give attention to how ML techniques can aid and transform design
automation. We further provide a future vision of opportunities and potential
directions, and envision that applying ML for computer architecture and systems
would thrive in the community.
|
Almost fifty years ago, D.E. Bell and L. LaPadula published the first formal
model of a secure system, known today as the Bell-LaPadula (BLP) model. BLP is
described as a state machine by means of first-order logic and set theory. The
authors also formalize two state invariants known as security condition and
*-property. Bell and LaPadula prove that all the state transitions preserve
these invariants.
In this paper we present a fully automated proof of the security condition
and the *-property for all the model operations. The model and the proofs are
coded in the {log} tool. As far as we know, this is the first time such proofs
have been automated. In addition, we show that the {log} model is also an executable
prototype. Therefore we are providing an automatically verified executable
prototype of BLP.
|
We propose an extension of tri-bimaximal mixing to include a non-zero reactor
angle $\theta_{13}$ while maintaining the tri-bimaximal predictions for the
atmospheric angle $\theta_{23}=45^\circ$ and solar angle $\theta_{12}=35^\circ$. We
show how such tri-bimaximal-reactor mixing can arise at leading order from
the(type I) see-saw mechanism with partially constrained sequential dominance.
Partially constrained sequential dominance can be realized in GUT models with a
non-Abelian discrete family symmetry, such as $A_4$, spontaneously broken by
flavons with a particular vacuum alignment.
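For reference (standard background, not specific to this paper), the tri-bimaximal mixing matrix that yields $\theta_{13}=0$, $\theta_{23}=45^\circ$ and $\tan\theta_{12}=1/\sqrt{2}$, i.e. $\theta_{12}\simeq 35.3^\circ$, is

```latex
\begin{equation*}
  U_{\mathrm{TB}} =
  \begin{pmatrix}
    \sqrt{2/3}  & 1/\sqrt{3} & 0 \\
    -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
    -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
  \end{pmatrix}.
\end{equation*}
```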
|
An attempt to shed light on the various belief/idea systems in high Tc
superconductivity that are at present popular. This text is in first instance
intended to serve both string theorists and junior condensed matter physicists
who want to enter this field. It departs from the premise that the often
confusing, mutually contradicting portfolio of theories can be best appreciated
by viewing it from a historical perspective. The histories of the following
subjects are chronicled: the spin fluctuation superglue, Mottness, Resonating
Valence Bonds and the gauge theories, pseudo-gap and competing orders, quantum
critical metals. The author is well aware that any attempt to write such a
history is subjective and comments are welcomed.
|
We consider the impact of ambiguity on the optimal timing of a class of
two-dimensional integral option contracts when the exercise payoff is a
positively homogeneous measurable function. Hence, the considered class of
exercise payoffs includes discontinuous functions as well. We identify a
parameterized family of excessive functions generating an appropriate class of
supermartingales for the considered problems and then express the value of the
optimal policy as well as the worst case measure in terms of these processes.
The advantage of our approach is that it reduces the analysis of the
multidimensional problem to the analysis of an ordinary one-dimensional static
optimization problem. In this way, it considerably simplifies earlier treatments
of the problem without ambiguity. We also illustrate our findings in
explicitly parameterized examples.
|
With the rapid development of GPS-enabled devices (smartphones) and
location-based applications, location privacy has become an increasing concern.
Intuitively, it is widely believed that location privacy can be preserved by
publishing aggregated mobility data, such as the number of users in an area at
some time. However, a recent attack shows that these aggregated mobility data
can be exploited to recover individual trajectories. In this paper, we first
propose two differentially private basic schemes for aggregated mobility data
publication, namely direct perturbation and threshold perturbation, which
preserve location privacy of users and especially resist the trajectory
recovery attack. Then, we explore the moving characteristics of mobile users,
and design an improved scheme named static hybrid perturbation by combining the
two basic schemes according to the moving characteristics. Since static hybrid
perturbation works only for static data, which are entirely available before
publishing, we further adapt the static hybrid perturbation by combining it
with linear regression, and yield another improved scheme named dynamic hybrid
perturbation. The dynamic hybrid perturbation works also for dynamic data,
which are generated on the fly during publication. Privacy analysis shows that
the proposed schemes achieve differential privacy. Extensive experiments on
both simulated and real datasets demonstrate that all proposed schemes resist
the trajectory recovery attack well, and the improved schemes significantly
outperform the basic schemes.
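As a concrete illustration of the direct-perturbation idea only (the sensitivity and parameter choices below are assumptions, not the paper's calibration), Laplace noise scaled by the count sensitivity over $\epsilon$ can be added to each aggregated mobility count before publication:

```python
# Laplace-mechanism perturbation of aggregated mobility counts (illustrative).
import numpy as np

rng = np.random.default_rng(42)

def perturb_counts(counts, epsilon, sensitivity=1.0):
    """Release differentially private aggregate counts via the Laplace mechanism."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(counts))
    noisy = np.asarray(counts, dtype=float) + noise
    return np.clip(np.round(noisy), 0, None)   # published counts stay non-negative integers

# counts[t, a] = number of users in area a at time slot t (made-up example data)
counts = np.array([[120, 35, 0], [98, 40, 3]])
print(perturb_counts(counts, epsilon=1.0))
```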
|
The recent observation of Sigma_b^{+-} (uub and ddb) and Xi_b^- (dsb) baryons
at the Tevatron within 2 MeV of our theoretical predictions provides a strong
motivation for applying the same theoretical approach, based on modeling the
color hyperfine interaction, to predict the masses of other bottom baryons
which might be observed in the foreseeable future. For S-wave qqb states we
predict M(Omega_b) = 6052.1+-5.6 MeV, M(Omega^*_b) = 6082.8+-5.6 MeV, and
M(Xi_b^0) = 5786.7 +- 3.0 MeV. For states with one unit of orbital angular
momentum between the b quark and the two light quarks we predict
M(Lambda_{b[1/2]}) = 5929+-2 MeV, M(Lambda_{b[3/2]}) = 5940+-2 MeV,
M(Xi_{b[1/2]}) = 6106+-4 MeV, and M(Xi_{b[3/2]}) = 6115+-4 MeV.
|
We show that deciding whether a given graph $G$ of size $m$ has a unique
perfect matching as well as finding that matching, if it exists, can be done in
time $O(m)$ if $G$ is either a cograph, or a split graph, or an interval graph,
or claw-free. Furthermore, we provide a constructive characterization of the
claw-free graphs with a unique perfect matching.
|
In this paper we investigate the local risk-minimization approach for a
semimartingale financial market where there are restrictions on the available
information to agents who can observe at least the asset prices. We
characterize the optimal strategy in terms of suitable decompositions of a
given contingent claim, with respect to a filtration representing the
information level, even in the presence of jumps. Finally, we discuss some
practical examples in a Markovian framework and show that the computation of
the optimal strategy leads to filtering problems under the real-world
probability measure and under the minimal martingale measure.
|
In the present article, we investigate the possibility of a real-valued map on
the space of tuples of commuting trace-class self-adjoint operators that
behaves like the usual trace map on the space of trace-class linear operators.
It turns out that such maps are related to continuous group homomorphisms
from Milnor's $K$-group of the real numbers into the additive group of real
numbers. Using this connection, it is shown that any such trace map must be
trivial.
|
In this paper we propose a new, more appropriate definition of regular and
indeterminate strings. A regular string is one that is "isomorphic" to a string
whose entries all consist of a single letter, but which nevertheless may itself
include entries containing multiple letters. A string that is not regular is
said to be indeterminate. We begin by proposing a new model for the
representation of strings, regular or indeterminate, then go on to describe a
linear time algorithm to determine whether or not a string $x = x[1..n]$ is
regular and, if so, to replace it by a lexicographically least (lex-least)
string $y$ whose entries are all single letters. Furthermore, we connect the
regularity of a string to the transitive closure problem on a graph, which in
our special case can be efficiently solved. We then introduce the idea of a
feasible palindrome array MP of a string, and prove that every feasible MP
corresponds to some (regular or indeterminate) string. We describe an algorithm
that constructs a string $x$ corresponding to a given feasible MP, while ensuring
that whenever possible $x$ is regular and if so, then lex-least. A final
section outlines new research directions suggested by this changed perspective
on regular and indeterminate strings.
|
The choice of architecture of an artificial neural network (ANN) is still a
challenging task that users face every time. It greatly affects the accuracy of
the built network. In fact, there is no optimal method that is applicable to
various implementations at the same time. In this paper we propose a method to
construct an ANN based on clustering, which resolves the problems of random and
ad hoc approaches for multilayer ANN architecture. Our method can be applied to
regression problems. Experimental results obtained with different datasets
reveal the efficiency of our method.
|
Understanding the operation of biological and artificial networks remains a
difficult and important challenge. To identify general principles, researchers
are increasingly interested in surveying large collections of networks that are
trained on, or biologically adapted to, similar tasks. A standardized set of
analysis tools is now needed to identify how network-level covariates -- such
as architecture, anatomical brain region, and model organism -- impact neural
representations (hidden layer activations). Here, we provide a rigorous
foundation for these analyses by defining a broad family of metric spaces that
quantify representational dissimilarity. Using this framework we modify
existing representational similarity measures based on canonical correlation
analysis to satisfy the triangle inequality, formulate a novel metric that
respects the inductive biases in convolutional layers, and identify approximate
Euclidean embeddings that enable network representations to be incorporated
into essentially any off-the-shelf machine learning method. We demonstrate
these methods on large-scale datasets from biology (Allen Institute Brain
Observatory) and deep learning (NAS-Bench-101). In doing so, we identify
relationships between neural representations that are interpretable in terms of
anatomical features and model performance.
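One simple member of such a family, sketched under assumptions (a generic Procrustes-style shape distance, not the paper's full framework): after centering and normalizing two activation matrices, minimizing over orthogonal alignments yields a dissimilarity that satisfies the triangle inequality.

```python
# Procrustes-style distance between representation matrices (stimuli x units).
import numpy as np

def procrustes_distance(X, Y):
    """Both inputs are (n_stimuli, n_units) activation matrices, same n_stimuli and n_units."""
    X = X - X.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y - Y.mean(axis=0)
    Y = Y / np.linalg.norm(Y)
    # min over orthogonal Q of ||X - Y Q||_F^2 = 2 - 2 * sum of singular values of Y^T X
    s = np.linalg.svd(Y.T @ X, compute_uv=False)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((100, 20)) for _ in range(3))
d_ab, d_bc, d_ac = (procrustes_distance(*p) for p in [(A, B), (B, C), (A, C)])
print(d_ac <= d_ab + d_bc)   # triangle inequality holds for this metric
```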
|
We describe a simple method that utilises the standard idea of bias-variance
trade-off to improve the expected accuracy of numerical model forecasts of
future climate. The method can be thought of as an optimal multi-model
combination between the forecast from a numerical model multi-model ensemble,
on one hand, and a simple statistical forecast, on the other. We apply the
method to predictions for UK temperature and precipitation for the period 2010
to 2100. The temperature predictions hardly change, while the precipitation
predictions show large changes.
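A toy numerical sketch of the underlying bias-variance idea (all numbers below are invented for illustration and are not the paper's data or method): combine a biased, low-variance model forecast with an unbiased, noisy statistical forecast using the weight that minimizes squared error.

```python
# Optimal linear combination of a model-ensemble forecast and a statistical forecast.
import numpy as np

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 2.0, 50)                       # hypothetical signal
model = truth + 0.8 + 0.3 * rng.standard_normal(50)     # biased, low-variance model forecast
stat = truth + 1.0 * rng.standard_normal(50)            # unbiased, noisy statistical forecast

ks = np.linspace(0, 1, 101)
errors = [np.mean((k * model + (1 - k) * stat - truth) ** 2) for k in ks]
k_opt = ks[int(np.argmin(errors))]
print(f"optimal weight on the model ensemble: {k_opt:.2f}")
```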
|
In this paper we consider the families of morphisms of vector fibre bundles
(\cite{Mill1}) defined by linear systems of differential equations. We prove
that the specified families of morphisms are not saturated (\cite{Mill2}).
|
The tactile sensation of stroking soft fur, known for its comfort and
emotional benefits, has numerous applications in virtual reality,
animal-assisted therapy, and household products. Previous studies have
primarily utilized actual fur to present a voluminous fur experience that poses
challenges concerning versatility and flexibility. In this study, we develop a
system that integrates a head-mounted display with an ultrasound haptic display
to provide visual and haptic feedback. Measurements taken using an artificial
skin sheet reveal directional differences in tactile and visual responses to
voluminous fur. Based on observations and measurements, we propose interactive
models that dynamically adjust to hand movements, simulating fur-stroking
sensations. Our experiments demonstrate that the proposed model using visual
and haptic modalities significantly enhances the realism of a fur-stroking
experience. Our findings suggest that the interactive visuo-haptic model offers
a promising fur-stroking experience in virtual reality, potentially enhancing
the user experience in therapeutic, entertainment, and retail applications.
|
In the application of machine learning to remote sensing, labeled data is
often scarce or expensive, which impedes the training of powerful models like
deep convolutional neural networks. Although unlabeled data is abundant, recent
self-supervised learning approaches are ill-suited to the remote sensing
domain. In addition, most remote sensing applications currently use only a
small subset of the multi-sensor, multi-channel information available,
motivating the need for fused multi-sensor representations. We propose a new
self-supervised training objective, Contrastive Sensor Fusion, which exploits
coterminous data from multiple sources to learn useful representations of every
possible combination of those sources. This method uses information common
across multiple sensors and bands by training a single model to produce a
representation that remains similar when any subset of its input channels is
used. Using a dataset of 47 million unlabeled coterminous image triplets, we
train an encoder to produce semantically meaningful representations from any
possible combination of channels from the input sensors. These representations
outperform fully supervised ImageNet weights on a remote sensing classification
task and improve as more sensors are fused. Our code is available at
https://storage.cloud.google.com/public-published-datasets/csf_code.zip.
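For the flavour of the training objective, the following is a minimal sketch (not the released code) of a subset-invariance contrastive loss: two random channel subsets of the same coterminous scene are pulled together, while different scenes are pushed apart. The TinyEncoder, channel count, and temperature are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in encoder; the real architecture is not specified here."""
    def __init__(self, in_ch=6, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def random_channel_subset(x):
    """Zero out a random non-empty subset of channels -- one plausible way to
    present 'any subset of input channels'; the paper's scheme may differ."""
    b, c = x.shape[:2]
    keep = torch.rand(b, c) < 0.5
    keep[keep.sum(dim=1) == 0] = True
    return x * keep[:, :, None, None].float()

def subset_contrastive_loss(encoder, batch, temperature=0.1):
    """InfoNCE-style loss between two channel-subset views of the same scenes."""
    z1 = encoder(random_channel_subset(batch))
    z2 = encoder(random_channel_subset(batch))
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(batch.size(0)))

# Toy usage with random tensors standing in for stacked multi-sensor imagery.
encoder = TinyEncoder(in_ch=6)
loss = subset_contrastive_loss(encoder, torch.randn(8, 6, 64, 64))
loss.backward()
```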
|
The importance of magnetic fields in astrophysics is well known. Many domains
of astrophysical black hole physics, such as polarized shadow images, high-energy
emission processes, and jet formation, depend on the behavior of magnetic fields
in the vicinity of compact objects. In light of this, we determine the master
equation and master differential equation that govern the spatial behavior of the
magnetic field inside a matter distribution or vacuum region of a general
spherically symmetric metric immersed in a test magnetic field. We also
investigate the case of the JMN-1 singularity immersed in a uniform weak magnetic
field and determine the behavior of the magnetic field by defining the
electromagnetic four-potential. We find that the tangential component of the magnetic field
is discontinuous at the matching surface of the JMN-1 singularity with the
external Schwarzschild metric, resulting in surface currents. We define the
covariant expression of surface current density in this scenario. We also
analyze the behavior of center-of-mass energy of two oppositely charged
particles in the geometry of the magnetized JMN-1 singularity. We briefly
discuss the possible scenarios that would possess a discontinuous magnetic
field, the implications of such discontinuities, and future possibilities in
the realm of astrophysics.
|
The reason for the observed thinness of the solar tachocline is still not
well understood. One of the explanations that have been proposed is that a
primordial magnetic field renders the rotation uniform in the radiation zone.
We test here the validity of this magnetic scenario through 3D numerical MHD
simulations that encompass both the radiation zone and the convection zone. The
numerical simulations are performed with the anelastic spherical harmonics
(ASH) code. The computational domain extends from $0.07\;R_\odot$ to
$0.97\;R_\odot$. In the parameter regime we explored, a dipolar fossil field
aligned with the rotation axis cannot remain confined in the radiation zone.
When the field lines are allowed to interact with turbulent, non-stationary
convective motions at the base of the convection zone, 3D effects prevent the
field confinement. In agreement with previous work, we find that a dipolar
fossil field, even when it is initially buried deep inside the radiation zone,
will spread into the convective zone. According to Ferraro's law of
iso-rotation, it then imprints on the radiation zone the latitudinal
differential rotation of the convection zone, which is not observed.
|
Group meetings are frequent business events aimed at developing and conducting
project work, such as Big Room design and construction project meetings. To be
effective in these meetings, participants need to have an engaged mental state.
The mental state of participants, however, is hidden from other participants
and is therefore difficult to evaluate. Mental state is understood as an inner
process of thinking and feeling, that is formed of a conglomerate of mental
representations and propositional attitudes. There is a need to create
transparency of these hidden states to understand, evaluate and influence them.
Facilitators need to evaluate the meeting situation and adjust for higher
engagement and productivity. This paper presents a framework that defines a
spectrum of engagement states and an array of classifiers aimed to detect the
engagement state of participants in real time. The Engagement Framework
integrates multi-modal information from 2D and 3D imaging and sound. Engagement
is detected and evaluated for individual participants and aggregated at the
group level. We use empirical data collected at the lab of Konica Minolta, Inc.
to test initial applications of this framework. The paper presents examples of
the tested engagement classifiers, which are based on research in psychology,
communication, and human-computer interaction. Their accuracy in engagement
detection is illustrated for dyadic interactions. In closing, we discuss the
potential extension to complex group collaboration settings and future feedback
implementations.
|
IceTop, the surface array of the IceCube Neutrino Observatory, consists of
162 ice-Cherenkov tanks distributed over an area of 1km$^2$. Besides being used
as a veto for the in-ice neutrino detector, IceTop is a powerful cosmic-ray
detector. In the upcoming years, the capabilities of the IceTop array will be
enhanced by augmenting the existing ice-Cherenkov tanks with an array of
elevated scintillator panels and radio antennas. Combining the data obtained
from the different detectors will improve the reconstruction of cosmic-ray
energy and primary mass while reducing the energy threshold and increasing the
aperture of the array. In January 2020, a prototype station consisting of 8
scintillation detectors and 3 antennas was deployed at the IceTop site. The
prototype detectors are connected to one data-acquisition system and the
readout of the radio antennas is triggered using the signals from the
scintillators. This allows us to regularly observe secondary air shower
particles hitting the scintillators, as well as the radio emission of
high-energy air showers. In this contribution, we will discuss the results
obtained from the prototype station in the past year, present the first
cosmic-ray air showers measured with this prototype station, and show how the
observations with the different detector types complement each other.
|
Although large language models (LLMs) are impressive in solving various
tasks, they can quickly become outdated after deployment. Maintaining their
up-to-date status is a pressing concern in the current era. This paper provides
a comprehensive review of recent advances in aligning LLMs with the
ever-changing world knowledge without re-training from scratch. We categorize
research works systematically and provide in-depth comparisons and discussion. We
also discuss existing challenges and highlight future directions to facilitate
research in this field. We release the paper list at
https://github.com/hyintell/awesome-refreshing-llms
|
We consider a variant of Gamow's liquid drop model with an anisotropic
surface energy. Under suitable regularity and ellipticity assumptions on the
surface tension, Wulff shapes are minimizers in this problem if and only if the
surface energy is isotropic. We show that for smooth anisotropies, in the small
nonlocality regime, minimizers converge to the Wulff shape in $C^1$-norm and
quantify the rate of convergence. We also obtain a quantitative expansion of
the energy of any minimizer around the energy of a Wulff shape, yielding
geometric stability result. For certain crystalline surface tensions we can
determine the global minimizer and obtain its exact energy expansion in terms
of the nonlocality parameter.
|
Fe3Si is a ferromagnetic material with possible applications in magnetic
tunnel junctions. When doped with Mn, the material shows a complex magnetic
behavior, as suggested by older experiments. We employed the
Korringa-Kohn-Rostoker (KKR) Green function method within density-functional
theory (DFT) in order to study the alloy Fe(3-x)Mn(x)Si, with 0 < x < 1.
Chemical disorder is described within the coherent potential approximation
(CPA). In agreement with experiment, we find that the Mn atoms align
ferromagnetically to the Fe atoms, and that the magnetization and Curie
temperature drop with increasing Mn-concentration $x$. The calculated spin
polarization P at the Fermi level varies strongly with x, from P=-0.3 at x=0
(ordered Fe3Si) through P=0 at x=0.28, to P=+1 for x>0.75; i.e., at high Mn
concentrations the system is half-metallic. We discuss the origin of the trends
of magnetic moments, exchange interactions, Curie temperature and the spin
polarization.
|
Soft materials have many important roles in animal locomotion and object
manipulation. In robotic applications soft materials can store and release
energy, absorb impacts, increase compliance and increase the range of possible
shape profiles using minimal actuators. The shape changing ability is also a
potential tool to manipulate friction forces caused by contact with the
environment. These advantages are accompanied by challenges of soft material
actuation and the need to exploit frictional interactions to generate
locomotion. Accordingly, the design of soft robots involves exploitation of
continuum properties of soft materials for manipulating frictional interactions
that result in robot locomotion. The research presents design and control of a
soft body robot that uses its shape change capability for locomotion. The
bioinspired (caterpillar) modular robot design is a soft monolithic body which
interacts with the environment at discrete contact points (caterpillar
prolegs). The deformable body is actuated by muscle-like shape memory alloy
coils and the discrete contact points manipulate friction in a binary manner.
This novel virtual grip mechanism combines two materials with different
coefficients of frictions (sticky-slippery) to control the robot-environment
friction interactions. The research also introduces a novel control concept
that discretizes the robot-environment-friction interaction into binary states.
This facilitates formulation of a control framework that is independent of the
specific actuator or soft material properties and can be applied to
multi-limbed soft robots. The transitions between individual robot states are
assigned a reward that allows optimized state-transition control sequences to be
calculated. This conceptual framework is extremely versatile and we show how it
can be applied to situations in which the robot loses limb function.
|
The interaction between the Earth's magnetic field and the solar wind plasma
results in a natural plasma confinement system which stores energy. Dissipation
of this energy through Joule heating in the ionosphere can be studied via the
Auroral Electrojet (AE) index. The apparent broken power law form of the
frequency spectrum of this index has motivated investigation of whether it can
be described as fractal coloured noise. One frequently-applied test for
self-affinity is to demonstrate linear scaling of the logarithm of the
structure function of a time series with the logarithm of the dilation factor
$\lambda$. We point out that, while this is conclusive when applied to signals
that are self-affine over many decades in $\lambda$, such as Brownian motion,
the slope deviates from exact linearity and the conclusions become ambiguous
when the test is used over shorter ranges of $\lambda$. We demonstrate that non
self-affine time series made up of random pulses can show near-linear scaling
over a finite dynamic range such that they could be misinterpreted as being
self-affine. In particular we show that pulses with functional forms such as
those identified by Weimer within the $AL$ index, from which $AE$ is partly
derived, will exhibit nearly linear scaling over ranges similar to those
previously shown for $AE$ and $AL$. The value of the slope, related to the
Hurst exponent for a self-affine fractal, seems to be a more robust
discriminator for fractality, if other information is available.
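The scaling test itself is easy to reproduce; a minimal sketch (illustrative, not the AE/AL analysis) is given below: compute the second-order structure function over a range of lags and fit the slope of log S_2 against log lambda.

```python
import numpy as np

def structure_function(x, lags, q=2):
    """q-th order structure function S_q(lag) = <|x(t+lag) - x(t)|^q>."""
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=100_000))           # Brownian-motion-like test signal
lags = np.unique(np.logspace(0, 3, 30).astype(int))
S = structure_function(x, lags, q=2)

# Slope of log S_2 vs log lag is ~2H (~1 for Brownian motion).  Over a short
# range of lags a non-self-affine pulse series can also look near-linear,
# which is exactly the ambiguity discussed above.
slope = np.polyfit(np.log(lags), np.log(S), 1)[0]
print(round(slope, 2))
```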
|
The space density of white dwarfs is highly uncertain even nearby. This
results from the fact that the known sample of white dwarfs is largely
incomplete in part because most white dwarfs have been discovered as
by-products in non-dedicated surveys. In order to obtain more accurate white
dwarf space densities and scale heights we must build up a complete sample of
white dwarfs. The European Galactic Plane Surveys (EGAPS) are the best database
to search for white dwarfs as they will provide broad band (U, g', r', i') and
narrow band (Halpha and HeI) measurements for one per cent of all the stars in
the Galaxy. By looking at the Galactic Plane, where most stars are, we ensure
that we are obtaining a complete sample. The space densities obtained from
EGAPS can then be compared with those found in high latitude surveys such as
the Sloan Digital Sky Survey (SDSS). The methods used to identify white dwarfs
using the colours available in EGAPS are described and some preliminary results
presented.
|
We consider a neutral bosonic molecule in the Born-Oppenheimer approximation
without spin and assume the physically obvious assertion that a neutral
molecule prefers to break into smaller neutral clusters. We prove the existence
of a global in space/uniform spectral gap between the ground state and first
excited state energies. To do so, we improve upon previous results using a
different tool, the time-independent Feshbach-Schur map.
|
This paper proposes an original exchange property of valuations. This property
is shown to be equivalent to a property described by Dress and Terhalle in the
context of discrete optimization and matroids and shown there to characterize
the valuations for which the demand oracle can be implemented by a greedy
algorithm. The same exchange property is also equivalent to a property
described independently by Reijnierse, van Gellekom and Potters and by Lehmann,
Lehmann and Nisan and shown there to be satisfied by substitutes valuations.
This paper studies the family of valuations that satisfy this exchange property, the ultra
valuations. Any substitutes valuation is an ultra valuation, but ultra
valuations may exhibit complementarities. Any symmetric valuation is an ultra
valuation. Substitutes valuations are exactly the submodular ultra valuations.
Ultra valuations define ultrametrics on the set of items. The maximum of an
ultra valuation on $n$ items can be found in $O(n^2)$ steps. Finding an
efficient allocation among ultra valuations is NP-hard.
|
Despite the proven utility of large language models (LLMs) in real-world
applications, there remains a lack of understanding regarding how they leverage
their large-scale pretraining text corpora to achieve such capabilities. In
this work, we investigate the interplay between generalization and memorization
in pretrained LLMs at scale, through a comprehensive $n$-gram analysis of their
training data. Our experiments focus on three general task types: translation,
question-answering, and multiple-choice reasoning. With various sizes of
open-source LLMs and their pretraining corpora, we observe that as the model
size increases, the task-relevant $n$-gram pair data becomes increasingly
important, leading to improved task performance, decreased memorization,
stronger generalization, and emergent abilities. Our results support the
hypothesis that LLMs' capabilities emerge from a delicate balance of
memorization and generalization with sufficient task-related pretraining data,
and point the way to larger-scale analyses that could further improve our
understanding of these models.
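As a rough illustration of what an n-gram pair analysis involves (a simplified stand-in, not the paper's exact statistic or corpora), the sketch below counts pretraining documents that contain at least one n-gram from a task input together with one from its output.

```python
from itertools import product

def ngrams(text, n=3):
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def task_ngram_pair_count(task_input, task_output, corpus_docs, n=3):
    """Documents containing at least one (input, output) n-gram pair."""
    src, tgt = ngrams(task_input, n), ngrams(task_output, n)
    count = 0
    for doc in corpus_docs:
        doc_ngrams = ngrams(doc, n)
        if any(a in doc_ngrams and b in doc_ngrams for a, b in product(src, tgt)):
            count += 1
    return count

# Toy usage with placeholder strings; a real analysis would stream the corpus.
docs = ["the cat sat on the mat le chat est assis sur le tapis",
        "an unrelated document about optical lattices"]
print(task_ngram_pair_count("the cat sat on the mat",
                            "le chat est assis sur le tapis", docs))  # 1
```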
|
Early studies of nearby Seyfert galaxies have led to the picture that the
Narrow Line Region (NLR) is a cone-shaped region of gas ionized by radiation
from a nuclear source collimated by a dusty torus, where the gas is in outflow.
In this contribution, I discuss a 3D view of the NLR obtained via Integral
Field Spectroscopy, showing that: (1) although the region of highest emission
is elongated (and in some cases cone-shaped), there is also lower level
emission beyond the "ionization cone", indicating that the AGN radiation leaks
through the torus; (2) besides outflows, the gas kinematics include also
rotation in the galaxy plane and inflows; (3) in many cases the outflows are
compact and restricted to the inner few 100pc; we argue that these may be early
stages of an outflow that will evolve to an open-ended, cone-like one. Inflows
are observed in ionized gas in LINERs, and in warm molecular gas in more
luminous AGN, usually on scales of hundreds of parsecs. Mass outflow rates
in ionized gas are of the order of a few solar masses per year, while the mass
inflow rates are of the order of tenths of solar masses per year. Mass inflow
rates in warm molecular gas are ~4-5 orders of magnitude lower, but these
inflows seem to be only tracers of more massive inflows in cold molecular gas
that should be observable at mm wavelengths.
|
Accretion onto the supermassive black hole in some active galactic nuclei
(AGN) drives relativistic jets of plasma, which dissipate a significant
fraction of their kinetic energy into gamma-ray radiation. The location of
energy dissipation in powerful extragalactic jets is currently unknown, with
implications for particle acceleration, jet formation, jet collimation, and
energy dissipation. Previous studies have been unable to constrain the location
between possibilities ranging from the sub-parsec-scale broad-line region to
the parsec-scale molecular torus, and beyond. Here we show using a simple
diagnostic that the more distant molecular torus is the dominant location for
powerful jets. This diagnostic, called the seed factor, is dependent only on
observable quantities, and is unique to the seed photon population at the
location of gamma-ray emission. Using $62$ multiwavelength, quasi-simultaneous
spectral energy distributions of gamma-ray quasars, we find a seed factor
distribution which peaks at a value corresponding to the molecular torus,
demonstrating that energy dissipation occurs $\sim 1$ parsec from the black
hole (or $\sim 10^{4}$ Schwarzschild radii for a $10^{9}M_{\odot}$ black hole).
|
This is a set of 288 questions written for a Moore-style course in
Mathematical Logic. I have used these (or some variation) four times in a
beginning graduate course. Topics covered are:
propositional logic
axioms of ZFC
wellorderings and equivalents of AC
ordinal and cardinal arithmetic
first order logic, and the compactness theorem
Lowenheim-Skolem theorems
Turing machines, Church's Thesis
completeness theorem and first incompleteness theorem
undecidable theories
second incompleteness theorem
|
For a graph $F$, we say a hypergraph is a Berge-$F$ if it can be obtained
from $F$ by replacing each edge of $F$ with a hyperedge containing it. A
hypergraph is Berge-$F$-free if it does not contain a subhypergraph that is a
Berge-$F$. The weight of a non-uniform hypergraph $\mathcal{H}$ is the quantity
$\sum_{h \in E(\mathcal{H})} |h|$.
Suppose $\mathcal{H}$ is a Berge-$F$-free hypergraph on $n$ vertices. In this
short note, we prove that as long as every edge of $\mathcal{H}$ has size at
least the Ramsey number of $F$ and at most $o(n)$, the weight of $\mathcal{H}$
is $o(n^2)$. This result is best possible in some sense. Along the way, we
study other weight functions, and strengthen results of Gerbner and Palmer; and
Gr\'osz, Methuku and Tompkins.
|
We prove that if the bicanonical map of a minimal surface of general type S
with p_{g}=q=1 and K^2=8 is non birational, then it is a double cover onto a
rational surface. An application of this theorem is the complete classification
of minimal surfaces of general type with p_{g}=q=1, K^2=8 and nonbirational
bicanonical map.
|
We first focus on the finite-gap formalism for type IIB strings in AdS_3 x
S^1, which allows one to encode the semiclassical spectrum of a very large family
of string solutions in a Riemann surface, the spectral curve. Then, we show
that, in the large angular momentum limit, it separates into two distinct
surfaces, allowing the derivation of an explicit expression for the spectrum,
which is correspondingly characterised by two separate branches. The latter may
be interpreted in terms of two kinds of spikes appearing on the strings:
"large" spikes, yielding an infinite contribution to the energy and angular
momentum of the string, and "small" spikes, representing finite excitations
over the background of the "large" spikes.
On the other side of the AdS/CFT correspondence, we consider the sl(2) sector
of N=4 super Yang-Mills theory. The corresponding 1-loop spectrum, in the large
conformal spin limit, is also encoded in a spectral curve and characterised in
terms of the so-called holes. We show that, with the appropriate
identifications and with the usual extrapolation from weak to strong 't Hooft
coupling described by the cusp anomalous dimension, the large-S spectra of
gauge theory and of string theory coincide. Furthermore, we explain how "small"
and "large" holes may be identified with "small" and "large" spikes.
Finally, we discuss several explicit spiky string solutions in AdS_3 which
display the finite-gap spectrum. We compute their spectral curves in the large
S limit, finding that they correspond to specific regions of the moduli space
of the finite-gap curves. We also explain how "large" spikes may be used in
order to extract a discrete system of degrees of freedom from string theory,
which can then be matched with the degrees of freedom of the dual gauge theory
operators, and how "small" spikes are in fact very similar to the Giant Magnons
living in R x S^2.
|
We present a Hamiltonian model describing two pairs of mechanical and optical
modes under standard optomechanical interaction. The vibrational modes are
mechanically isolated from each other and the optical modes couple
evanescently. We recover the ranges for variables of interest, such as
mechanical and optical resonant frequencies and naked coupling strengths, using
a finite element model for a standard experimental realization. We show that
the quantum model, under this parameter range and external optical driving, may
be approximated into parametric interaction models for all involved modes. As
an example, we study the effect of detuning the optical resonant frequencies
and of optical driving resolved to the mechanical sidebands, and show an optical
beam splitter with interaction strength dressed by the mechanical excitation
number, a mechanical bidirectional coupler, and a two-mode mechanical squeezer
where the optical state mediates the interaction strength between the
mechanical modes.
|
The entropy-temperature curves are calculated for non-interacting fermions in
a 3D optical lattice. These curves facilitate understanding of how adiabatic
changes in the lattice depth affect the temperature, and we demonstrate regimes
where the atomic sample can be significantly heated or cooled. When the Fermi
energy of the system is near the location of a band gap the cooling regimes
disappear for all temperatures and the system can only be heated by lattice
loading. For samples with greater than one fermion per site we find that
lattice loading can lead to a large increase in the degeneracy of the system.
We study the scaling properties of the system in the degenerate regimes and
assess the effects of non-adiabatic loading.
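For intuition, entropy-temperature curves of this kind can be sketched for a single tight-binding band at fixed filling (a deliberately simplified stand-in: it ignores the harmonic trap and higher bands, and models lattice depth only through the hopping t).

```python
import numpy as np
from scipy.optimize import brentq

L = 20                                            # k-points per direction
k = 2 * np.pi * np.arange(L) / L
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")

def band(t=1.0):
    """Single-band 3D tight-binding dispersion (illustrative)."""
    return -2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))

def entropy_per_site(T, filling=0.5, t=1.0):
    """Entropy per site (k_B = 1) of non-interacting fermions at temperature T."""
    eps = band(t).ravel()
    def excess_density(mu):
        return np.mean(1.0 / (np.exp((eps - mu) / T) + 1.0)) - filling
    mu = brentq(excess_density, eps.min() - 20 * T, eps.max() + 20 * T)
    f = np.clip(1.0 / (np.exp((eps - mu) / T) + 1.0), 1e-12, 1 - 1e-12)
    return -np.mean(f * np.log(f) + (1 - f) * np.log(1 - f))

# Two "lattice depths" modelled as two hoppings; matching entropies between the
# columns indicates the temperature reached by an adiabatic ramp.
for T in (0.1, 0.5, 1.0):
    print(T, round(entropy_per_site(T, t=1.0), 3), round(entropy_per_site(T, t=0.3), 3))
```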
|
Deep learning-based lane detection (LD) plays a critical role in autonomous
driving systems, such as adaptive cruise control. However, it is vulnerable to
backdoor attacks. Existing backdoor attack methods on LD exhibit limited
effectiveness in dynamic real-world scenarios, primarily because they fail to
consider dynamic scene factors, including changes in driving perspectives
(e.g., viewpoint transformations) and environmental conditions (e.g., weather
or lighting changes). To tackle this issue, this paper introduces BadLANE, a
dynamic scene adaptation backdoor attack for LD designed to withstand changes
in real-world dynamic scene factors. To address the challenges posed by
changing driving perspectives, we propose an amorphous trigger pattern composed
of shapeless pixels. This trigger design allows the backdoor to be activated by
various forms or shapes of mud spots or pollution on the road or lens, enabling
adaptation to changes in vehicle observation viewpoints during driving. To
mitigate the effects of environmental changes, we design a meta-learning
framework to train meta-generators tailored to different environmental
conditions. These generators produce meta-triggers that incorporate diverse
environmental information, such as weather or lighting conditions, as the
initialization of the trigger patterns for backdoor implantation, thus enabling
adaptation to dynamic environments. Extensive experiments on various commonly
used LD models in both digital and physical domains validate the effectiveness
of our attacks, outperforming other baselines significantly (+25.15% on average
in Attack Success Rate). Our code will be made available upon publication.
|
It is well known that the number of tilting modules over a path algebra of
type A_n coincides with the Catalan number C(n). Moreover, the number of
support tilting modules of type A_n is the Catalan number C(n+1). We show that
the convex hull of all roots of a root system of type A_n is a polytope with
integral volume (n + 1)C(n+1). Moreover, we associate to the set of tilting
modules and to the set of support tilting modules certain polytopes and show
that their volumes coincide with the number of those modules, respectively.
Finally, we show that these polytopes can be defined just using the root system
and relate their volumes, so that we can derive the above results in a new way.
|
We give a review of the quantum singularity theory of Fan-Jarvis-Ruan and the
r-spin theory of Jarvis-Kimura-Vaintrob and describe the work of
Abramovich-Jarvis showing that for the singularity A_{r-1} = x^r the stack of
A_{r-1}-curves is canonically isomorphic to the stack of r-spin curves. We
prove that the A_{r-1}-theory satisfies all the axioms of
Jarvis-Kimura-Vaintrob for an r-spin virtual class. Therefore, the results of
Lee, Faber-Shadrin-Zvonkine, and Givental all apply to the A_{r-1}-theory. In
particular, this shows that the Witten Integrable Hierarchies Conjecture is
true for the A_{r-1}-theory; that is, the total descendant potential function
of the A_{r-1}-theory satisfies the r-th Gelfand-Dikii hierarchy.
|
The phrase "(co)simplicial (pre)sheaf" can be reasonably interpreted in
multiple ways. In this survey we study how the various notions familiar to the
author relate to one another. We end by giving some example applications of the
most general of these notions.
|
This chapter covers the development of feedback control of superconducting
qubits using projective measurement and a discrete set of conditional actions,
here referred to as digital feedback. We begin with an overview of the
applications of digital feedback in quantum computing. We then introduce an
implementation of high-fidelity projective measurement of superconducting
qubits. This development lays the ground for closed-loop control based on the
binary measurement result. A first application of digital feedback control is
fast and deterministic qubit reset, allowing the repeated initialization of a
qubit more than an order of magnitude faster than its relaxation rate. A second
application employs feedback in a multi-qubit setting to convert the generation
of entanglement by parity measurement from probabilistic to deterministic,
targeting an entangled state with the desired parity every time.
|
The first three observing runs with Advanced LIGO and Virgo have resulted in
the detection of binary black hole mergers (BBH) with highly unequal mass
components, which are difficult to reconcile with standard formation paradigms.
The most representative of these is GW190814, a highly asymmetric merger
between a 23 M$_{\odot}$ black hole and a 2.6 M$_{\odot}$ compact object. Here,
we explore recent results suggesting that a sizeable fraction of stars with
pre-collapse carbon-oxygen core masses above 10 M$_{\odot}$, and extending up
to at least 30 M$_{\odot}$, may produce objects inside the so-called lower mass
gap that bridges the division between massive pulsars and BHs in Galactic X-ray
binaries. We demonstrate that such an explosion landscape would naturally cause
a fraction of massive binaries to produce GW190814-like systems instead of
symmetric-mass BBHs. We present examples of specific evolutionary channels
leading to the formation of GW190814 and GW200210, a 24+2.8 M$_{\odot}$ merger
discovered during the O3b observing run. We estimate the merger-rate density of
these events in our scenario to be $\mathcal{O}$(5%) of the total BBH merger
rate. Finally, we discuss the broader implications of this formation channel
for compact object populations, and its possible relevance to less asymmetric
merger events such as GW200105 and GW200115.
|
In this paper, we present the Gaussian process regression as the predictive
model for Quality-of-Service (QoS) attributes in Web service systems. The goal
is to predict performance of the execution system expressed as QoS attributes
given existing execution system, service repository, and inputs, e.g., streams
of requests. In order to evaluate the performance of Gaussian process
regression, a simulation environment was developed. Two quality indexes were
used, namely Mean Absolute Error and Mean Squared Error. The results obtained
within the experiment show that the Gaussian process performed best with a
linear kernel and performed statistically significantly better than the
Classification and Regression Trees (CART) method.
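A minimal sketch of this kind of predictor (synthetic data standing in for the simulated Web service system; scikit-learn's DotProduct kernel plays the role of the linear kernel) is shown below.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic features describing the request stream and system state;
# the target stands in for a QoS attribute such as response time.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 300)
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(), random_state=0)
gpr.fit(X_train, y_train)
pred = gpr.predict(X_test)

print("MAE:", round(mean_absolute_error(y_test, pred), 4))
print("MSE:", round(mean_squared_error(y_test, pred), 4))
```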
|
We study the appearance of the giant component in random subgraphs of a given
large finite graph G=(V,E) in which each edge is present independently with
probability p. We show that if G is an expander with vertices of bounded
degree, then for any c in ]0,1[, the property that the random subgraph contains
a giant component of size c|V| has a sharp threshold.
|
We report measurements of Shubnikov-de Haas oscillations in the giant Rashba
semiconductor BiTeI under applied pressures up to $\sim 2\,\mathrm{GPa}$. We
observe one high frequency oscillation at all pressures and one low frequency
oscillation that emerges between $\sim 0.3-0.7\,\mathrm{GPa}$ indicating the
appearance of a second small Fermi surface. BiTeI has a conduction band bottom
that is split into two sub-bands due to the strong Rashba coupling, resulting
in a `Dirac point'. Our results suggest that the chemical potential starts
below the Dirac point in the conduction band at ambient pressure and moves
upward, crossing it as pressure is increased. The presence of the chemical
potential above this Dirac point results in two Fermi surfaces. We present a
simple model that captures this effect and can be used to understand the
pressure dependence of our sample parameters. These extracted parameters are in
quantitative agreement with first-principles calculations and other
experiments. The parameters extracted via our model support the notion that
pressure brings the system closer to the predicted topological quantum phase
transition.
|
Exploring scientific datasets with billions of samples in real-time
visualization presents a challenge: balancing high-fidelity rendering with
speed. This work introduces a novel renderer, the Neural Accelerated Renderer
(NAR), which uses the neural deferred rendering framework to visualize
large-scale scientific point cloud data. NAR augments a real-time point cloud
rendering pipeline with high-quality neural post-processing, making the
approach ideal for interactive visualization at scale. Specifically, we train a
neural network to learn the point cloud geometry from a high-performance
multi-stream rasterizer and capture the desired postprocessing effects from a
conventional high-quality renderer. We demonstrate the effectiveness of NAR by
visualizing complex multidimensional Lagrangian flow fields and photometric
scans of a large terrain and compare the renderings against the
state-of-the-art high-quality renderers. Through extensive evaluation, we
demonstrate that NAR prioritizes speed and scalability while retaining high
visual fidelity. We achieve competitive frame rates of $>$ 126 fps for
interactive rendering of $>$ 350M points (i.e., an effective throughput of $>$
44 billion points per second) using $\sim$12 GB of memory on RTX 2080 Ti GPU.
Furthermore, we show that NAR is generalizable across different point clouds
with similar visualization needs and the desired post-processing effects could
be obtained with substantial high quality even at lower resolutions of the
original point cloud, further reducing the memory requirements.
|
The book elucidates the current state of the dark energy problem and presents
the results of the authors, who work in this area. It describes the
observational evidence for the existence of dark energy, the methods and
results of constraining its parameters, the modeling of dark energy by scalar
fields, space-times with extra spatial dimensions, especially
Kaluza-Klein models, the braneworld models with a single extra dimension, as
well as the problems of positive definition of gravitational energy in General
Relativity, energy conditions and consequences of their violation in the
presence of dark energy.
This monograph is intended for science professionals, educators and graduate
students, specializing in general relativity, cosmology, field theory and
particle physics.
|
We study the energy dynamics of a particle in a billiard subject to a rapid
periodic drive. In the regime of large driving frequencies $\omega$, we find
that the particle's energy evolves diffusively, which suggests that the
particle's energy distribution $\eta (E,t)$ satisfies a Fokker-Planck equation.
We calculate the rates of energy absorption and diffusion associated with this
equation, finding that these rates are proportional to $\omega^{-2}$ for large
$\omega$. Our analysis suggests three phases of energy evolution:
Prethermalization on short timescales, then slow energy absorption in
accordance with the Fokker-Planck equation, and finally a breakdown of the
rapid driving assumption for large energies and high particle speeds. We also
present numerical simulations of the evolution of a rapidly driven billiard
particle, which corroborate our theoretical results.
|
Superconductivity emerges from the cuprate antiferromagnetic Mott state with
hole doping. The resulting electronic structure is not understood, although
changes in the state of oxygen atoms appear paramount. Hole doping first
destroys the Mott state yielding a weak insulator where electrons localize only
at low temperatures without a full energy gap. At higher doping, the
'pseudogap', a weakly conducting state with an anisotropic energy gap and
intra-unit-cell breaking of 90\degree-rotational (C4v) symmetry appears.
However, a direct visualization of the emergence of these phenomena with
increasing hole density has never been achieved. Here we report atomic-scale
imaging of electronic structure evolution from the weak-insulator through the
emergence of the pseudogap to the superconducting state in Ca2-xNaxCuO2Cl2. The
spectral signature of the pseudogap emerges at lowest doping from a weakly
insulating but C4v-symmetric matrix exhibiting a distinct spectral shape. At
slightly higher hole-density, nanoscale regions exhibiting pseudogap spectra
and 180\degree-rotational (C2v) symmetry form unidirectional clusters within
the C4v-symmetric matrix. Thus, hole-doping proceeds by the appearance of
nanoscale clusters of localized holes within which the broken-symmetry
pseudogap state is stabilized. A fundamentally two-component electronic
structure then exists in Ca2-xNaxCuO2Cl2 until the C2v-symmetric clusters
touch at higher doping, and the long-range superconductivity appears.
|
A key barrier to using reinforcement learning (RL) in many real-world
applications is the requirement of a large number of system interactions to
learn a good control policy. Off-policy and Offline RL methods have been
proposed to reduce the number of interactions with the physical environment by
learning control policies from historical data. However, their performance
suffers from the lack of exploration and from distributional shifts in
trajectories once controllers are updated. Moreover, most RL methods require
that all states be directly observed, which is difficult to attain in
many settings.
To overcome these challenges, we propose a trajectory generation algorithm,
which adaptively generates new trajectories as if the system is being operated
and explored under the updated control policies. Motivated by the fundamental
lemma for linear systems, assuming sufficient excitation, we generate
trajectories from linear combinations of historical trajectories. For linear
feedback control, we prove that the algorithm generates trajectories with the
exact distribution as if they are sampled from the real system using the
updated control policy. In particular, the algorithm extends to systems where
the states are not directly observed. Experiments show that the proposed method
significantly reduces the number of sampled data needed for RL algorithms.
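A noise-free sketch of the core idea (illustrative only: the system matrices are placeholders, and the paper's treatment of exploration, stochasticity, and partial observation is not reproduced) is given below; a new trajectory is obtained as a linear combination of windows of the historical data, as suggested by the fundamental lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, L, T = 3, 1, 10, 200                 # state dim, input dim, horizon, data length

# Historical data from an (unknown-but-linear) system driven by exciting inputs.
A = np.array([[0.9, 0.2, 0.0], [0.0, 0.8, 0.1], [0.1, 0.0, 0.7]])
B = np.array([[1.0], [0.5], [0.2]])
u_hist = rng.normal(size=(T, m))
x_hist = np.zeros((T + 1, n))
for k in range(T):
    x_hist[k + 1] = A @ x_hist[k] + B @ u_hist[k]

def hankel(data, depth):
    """Stack length-`depth` windows of the data as columns."""
    cols = data.shape[0] - depth + 1
    return np.column_stack([data[j:j + depth].ravel() for j in range(cols)])

H_u = hankel(u_hist, L)                    # inputs over each window
H_x = hankel(x_hist, L + 1)                # states over each window

# Generate a trajectory "as if" the system were run from x0 under fresh inputs.
x0 = np.array([1.0, -0.5, 0.2])
u_new = rng.normal(size=(L, m))
lhs = np.vstack([H_u, H_x[:n]])            # match the inputs and the initial state
rhs = np.concatenate([u_new.ravel(), x0])
g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
x_gen = (H_x @ g).reshape(L + 1, n)

# Check against rolling the true system forward (unknown to the generator).
x_true = np.zeros((L + 1, n))
x_true[0] = x0
for k in range(L):
    x_true[k + 1] = A @ x_true[k] + B @ u_new[k]
print(np.max(np.abs(x_gen - x_true)))      # ~1e-12 for noise-free linear data
```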
|
Automatic organization of personal photos is a problem with many real-world
applications, and can be divided into two main tasks: recognizing the event
type of the photo collection, and selecting interesting images from the
collection. In this paper, we attempt to simultaneously solve both tasks:
album-wise event recognition and image-wise importance prediction. We
collected an album dataset with both event type labels and image importance
labels, refined from the existing CUFED dataset. We propose a hybrid system
consisting of three parts: a siamese network-based event-specific image
importance prediction, a Convolutional Neural Network (CNN) that recognizes the
event type, and a Long Short-Term Memory (LSTM)-based sequence level event
recognizer. We propose an iterative updating procedure for event type and image
importance score prediction. We experimentally verified that image importance
score prediction and event type recognition can each help the performance of
the other.
|
A family of subsets of $[n]$ is intersecting if every pair of its sets
intersects. Determining the structure of large intersecting families is a
central problem in extremal combinatorics. Frankl-Kupavskii and
Balogh-Das-Liu-Sharifzadeh-Tran independently showed that for $n\geq 2k +
c\sqrt{k\ln k}$, almost all $k$-uniform intersecting families are stars.
Improving their result, we show that the same conclusion holds for $n\geq 2k+
100\ln k$. Our proof uses, among others, Sapozhenko's graph container lemma and
the Das-Tran removal lemma.
|