A minimal permutation representation of a finite group G is a faithful G-set
with the smallest possible size. We study the structure of such representations
and show that for certain groups they may be obtained by a greedy construction.
In these situations (except when central involutions intervene) all minimal
permutation representations have the same set of orbit sizes. Using the same
ideas we also show that if the size d(G) of a minimal faithful G-set is at
least c|G| for some c>0 then d(G) = |G|/m + O(1) for an integer m, with the
implied constant depending on c.
|
QCD in the $\epsilon$-regime at nonzero baryon chemical potential $\mu$ is
reviewed. The focus is on aspects of the sign problem which are relevant for
lattice QCD. It is discussed how spontaneous chiral symmetry breaking and the
sign problem are related through the spectrum of the Dirac operator. The
strength of the sign problem is linked to the quark mass and the chemical
potential. Specific implications for lattice QCD are discussed.
|
Given a sequence of random variables ${\bf X}=X_1,X_2,\ldots$ suppose the aim
is to maximize one's return by picking a `favorable' $X_i$. Obviously, the
expected payoff crucially depends on the information at hand. An optimally
informed person knows all the values $X_i=x_i$ and thus receives $E (\sup
X_i)$. We will compare this return to the expected payoffs of a number of
observers having less information, in particular $\sup_i (EX_i)$, the value of
the sequence to a person who only knows the first moments of the random
variables.
In general, there is a stochastic environment (i.e. a class of random
variables $\cal C$), and several levels of information. Given some ${\bf X} \in
{\cal C}$, an observer possessing information $j$ obtains $r_j({\bf X})$. We
are going to study `information sets' of the form $$ R_{\cal C}^{j,k} = \{
(x,y) | x = r_j({\bf X}), y=r_k({\bf X}), {\bf X} \in {\cal C} \}, $$
characterizing the advantage of $k$ relative to $j$. Since such a set measures
the additional payoff by virtue of increased information, its analysis yields a
number of interesting results, in particular `prophet-type' inequalities.
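As a minimal numerical illustration of this comparison (our own sketch, not taken from the paper), the following Monte Carlo estimate contrasts the fully informed value $E(\sup X_i)$ with the first-moment value $\sup_i (EX_i)$ for a sequence of independent uniform random variables:

```python
# Monte Carlo sketch: E(sup X_i) vs sup_i E(X_i) for X_i ~ Uniform(0, 1).
import random

def informed_value(n_vars=5, trials=100_000):
    """Estimate E(max_i X_i), the payoff of a fully informed observer."""
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(n_vars))
    return total / trials

# Each X_i has E(X_i) = 1/2, so sup_i E(X_i) = 0.5, while the exact value of
# E(max of 5 independent uniforms) is 5/6 ~ 0.833; the gap is the premium of
# full information over knowledge of the first moments alone.
if __name__ == "__main__":
    print("E(sup X_i)   ~", round(informed_value(), 3))
    print("sup_i E(X_i) =", 0.5)
```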
|
Strong charge-spin coupling is found in a layered transition-metal
trichalcogenide NiPS3, a van der Waals antiferromagnet, from our study of the
electronic structure using several experimental and theoretical tools:
spectroscopic ellipsometry, x-ray absorption and photoemission spectroscopy,
and density-functional calculations. NiPS3 displays an anomalous shift in the
optical spectral weight at the magnetic ordering temperature, reflecting a
strong coupling between the electronic and magnetic structures. X-ray
absorption, photoemission and optical spectra support a self-doped ground state
in NiPS3. Our work demonstrates that layered transition-metal trichalcogenide
magnets are useful candidates for the study of correlated-electron physics in
two-dimensional magnetic materials.
|
In the standard model (SM), lepton flavor violating (LFV) Higgs decay is
absent at the renormalizable level and thus it is a good probe of new physics.
In this article we study a type of new physics that could lead to large LFV
Higgs decay, i.e., a lepton-flavored dark matter (DM) model which is specified
by a Majorana DM and scalar lepton mediators. Unlike other models with a
similar setup, we introduce both left-handed and right-handed scalar
leptons. They allow large LFV Higgs decay and thus may explain the tentative
Br$(h\to\tau\mu)\sim1\%$ experimental results from the LHC. In particular, we
find that the stringent bound from $\tau\to\mu\gamma$ can be naturally evaded.
One reason, among others, is a large chirality violation in the mediator
sector. Aspects of relic density and especially radiative direct detection of
the leptonic DM are also investigated, stressing the difference from previous
lepton-flavored DM models.
|
We introduce a family of paired-composite-fermion trial wave functions for
any odd Cooper-pair angular momentum. These wave functions are parameter-free
and can be efficiently projected into the lowest Landau level. We use
large-scale Monte Carlo simulations to study three cases: Firstly, the
Moore-Read phase, which serves as a benchmark. Secondly, we explore the
pairing associated with the anti-Pfaffian and the particle-hole-symmetric
Pfaffian. Specifically, we assess whether their trial states feature
exponentially decaying correlations and thus represent gapped phases of matter.
For Moore-Read and anti-Pfaffian we find decay lengths of
$\xi_\text{Moore-Read}=1.30(5)$ and $\xi_\text{anti-Pfaffian}=1.38(14)$, in
units of the magnetic length. By contrast, for the case of PH-Pfaffian, we find
no evidence of a finite length scale for up to $56$ particles.
|
The aim of this paper is to derive a new uncertainty principle for the
generalized $q$-Bessel wavelet transform studied earlier in \cite{Rezguietal}.
To this end, a two-parameter extension of the classical Bessel operator is
applied to generate a wavelet function, which is then exploited to establish a
wavelet uncertainty principle within the $q$-calculus framework.
|
The classification of one-parameter small quantum groups is an interesting
open question. The current paper reveals a new phenomenon: there exist
abundant exotic small quantum groups (about 5 times as many as the standard
ones) beyond the Lusztig small quantum groups (with double grouplikes), which
arise from the two-parameter setting. In particular, when the order $\ell$ of
the one-parameter $q$ satisfies $(\ell, 210)\not=1$, the isoclasses of explicit
representatives (most of them having Drinfeld doubles) are new
finite-dimensional pointed Hopf algebras, which complement the work done under
the assumption $(\ell, 210)=1$.
|
Massive stellar clumps in high redshift galaxies interact and migrate to the
center to form a bulge and exponential disk in <1 Gyr. Here we consider the
fate of intermediate mass black holes (BHs) that might form by massive-star
coalescence in the dense young clusters of these disk clumps. We find that the
BHs move inward with the clumps and reach the inner few hundred parsecs in only
a few orbit times. There they could merge into a supermassive BH by dynamical
friction. The ratio of BH mass to stellar mass in the disk clumps is
approximately preserved in the final ratio of BH to bulge mass. Because this
ratio for individual clusters has been estimated to be ~10^{-3}, the observed
BH-to-bulge mass ratio results. We also obtain a relation between BH mass and
bulge velocity dispersion that is compatible with observations of present-day
galaxies.
|
We consider the problem of estimating the phase of squeezed vacuum states
within a Bayesian framework. We derive bounds on the average Holevo variance
for an arbitrary number $N$ of uncorrelated copies. We find that it scales with
the mean photon number, $n$, as dictated by the Heisenberg limit, i.e., as
$n^{-2}$, only for $N>4$. For $N\leq 4$ this fundamental scaling breaks down
and it becomes $n^{-N/2}$. Thus, a single squeezed vacuum state performs worse
than a single coherent state with the same energy. We find the optimal
splitting of a given total energy among the various copies. We also compute the
variance for repeated individual measurements (without classical communication
or adaptivity) and find that the standard Heisenberg-limited scaling $n^{-2}$
is recovered for large samples.
|
In this paper we study the solitonic string solutions of magnon and single
spike type in the beta-deformed AdS_4 x CP3 background. We find the dispersion
relations which are supposed to give the anomalous dimension of the gauge
theory operators.
|
This thesis work is focused on the study of millisecond pulsars in globular
clusters by using multi-wavelength observations obtained with radio and optical
telescopes. Radio observations have been used to search for and time the
pulsars. While classical search routines are based on the analysis of a single,
long time sequence of data, we present here an alternative method. This
method exploits the large amount of available archival radio data to search for
very faint pulsars by stacking all the daily power spectra. This method led to
the discovery of three new pulsars in the stellar system Terzan 5. Optical
observations have been exploited to search for millisecond pulsar optical
counterparts, whose emission is totally dominated by the companion stars. Six
new companion stars have been discovered, thus increasing by ~40% the total
number of companions identified in globular clusters. In particular, four
companions turned out to be He white dwarfs, as expected from the canonical
formation scenario. One companion turned out to be a very faint and
non-degenerate object, showing a strong variability which is likely the result
of strong heating of the stellar side exposed to the flux injected by the pulsar.
Finally, one companion is a main-sequence star which shows a strong H{\alpha}
emission likely due to low-level accretion, possibly from a residual disk.
This system is likely in an early evolutionary stage, immediately preceding
the reactivation of an already re-accelerated pulsar. Furthermore, we
identified the companion star to a transient low-mass X-ray binary. Conclusions
are drawn in the final chapter, where the evolution of millisecond pulsars is
discussed and possible future developments are suggested.
|
In the context of decision making under explorable uncertainty, scheduling
with testing is a powerful technique used in the management of computer systems
to improve performance via better job-dispatching decisions. Upon job arrival,
a scheduler may run some \emph{testing algorithm} against the job to extract
some information about its structure, e.g., its size, and properly classify it.
The acquisition of such knowledge comes with a cost because the testing
algorithm delays the dispatching decisions, though this delay is under the scheduler's control. In
this paper, we analyze the impact of such extra cost in a load balancing
setting by investigating the following questions: does it really pay off to
test jobs? If so, under which conditions? Under mild assumptions connecting
the information extracted by the testing algorithm with its running time, we
show that whether scheduling with testing brings a performance degradation or
improvement strongly depends on the traffic conditions, system size and the
coefficient of variation of job sizes. Thus, the general answer to the above
questions is non-trivial and some care should be taken when deploying a
testing policy. Our results are achieved by proposing a load
balancing model for scheduling with testing that we analyze in two limiting
regimes. When the number of servers grows to infinity in proportion to the
network demand, we show that job-size testing actually degrades performance
unless short jobs can be predicted reliably almost instantaneously and the
network load is sufficiently high. When the coefficient of variation of job
sizes grows to infinity, we construct testing policies inducing an arbitrarily
large performance gain with respect to running jobs untested.
|
We report on the detection of the [CII] 157.7 $\mu$m emission from the Lyman
break galaxy (LBG) MACS0416_Y1 at z = 8.3113, by using the Atacama Large
Millimeter/submillimeter Array (ALMA). The luminosity ratio of [OIII] 88 $\mu$m
(from previous campaigns) to [CII] is 9.31 $\pm$ 2.6, indicative of hard
interstellar radiation fields and/or a low covering fraction of
photo-dissociation regions. The emission of [CII] is cospatial with the 850
$\mu$m dust emission (90 $\mu$m rest-frame, from previous campaigns); however,
the peak [CII] emission does not agree with the peak [OIII] emission,
suggesting that the lines originate from different conditions in the
interstellar medium. We fail to detect continuum emission at 1.5 mm (160 $\mu$m
rest-frame) down to 18 $\mu$Jy (3$\sigma$). This nondetection places a strong
limit on the dust spectrum, considering the 137 $\pm$ 26 $\mu$Jy continuum
emission at 850 $\mu$m. This suggests an unusually warm dust component (T $>$
80 K, 90% confidence limit), and/or a steep dust-emissivity index ($\beta_{\rm
dust}$ $>$ 2), compared to galaxy-wide dust emission found at lower redshifts
(typically T $\sim$ 30 - 50 K, $\beta_{\rm dust}$ $\sim$ 1 - 2). If such
temperatures are common, this would reduce the required dust mass and relax the
dust production problem at the highest redshifts. We therefore warn against the
use of only single-wavelength information to derive physical properties,
recommend a more thorough examination of dust temperatures in the early
Universe, and stress the need for instrumentation that probes the peak of warm
dust in the Epoch of Reionization.
|
The ability of double-stranded DNA or RNA to locally melt and form kinks
leads to strong non-linear elasticity effects that qualitatively affect their
packing in confined spaces. Using analytical theory and numerical simulation we
show that kink formation entails a mixed spool-nematic ordering of
double-stranded DNA or RNA in spherical capsids, consisting of an outer spool
domain and an inner, twisted nematic domain. These findings explain the
experimentally observed nematic domains in viral capsids and imply that
non-linear elasticity must be considered to predict the configurations and
dynamics of double-stranded genomes in viruses, bacterial nucleoids or
gene-delivery vehicles.
|
Relational data are usually highly incomplete in practice, which inspires us
to leverage side information to improve the performance of community detection
and link prediction. This paper presents a Bayesian probabilistic approach that
incorporates various kinds of node attributes encoded in binary form in
relational models with Poisson likelihood. Our method works flexibly with both
directed and undirected relational networks. The inference can be done by
efficient Gibbs sampling which leverages sparsity of both networks and node
attributes. Extensive experiments show that our models achieve
state-of-the-art link prediction results, especially with highly incomplete
relational data.
|
We present neutron diffraction measurements on single crystal samples of
non-superconducting Ba(Fe1-xCrx)2As2 as a function of Cr-doping for 0<=x<=0.47.
The average spin-density-wave (SDW) moment is independent of concentration for x<=0.2 and decreases
rapidly for x>=0.3. For concentrations in excess of 30% chromium, we find a new
G-type antiferromagnetic phase which rapidly becomes the dominant magnetic
ground state. Strong magnetism is observed for all concentrations measured and
competition between these ordered states and superconductivity naturally
explains the absence of superconductivity in the Cr-doped materials.
|
Diffusion magnetic resonance imaging (dMRI) is pivotal for probing the
microstructure of the rapidly-developing fetal brain. However, fetal motion
during scans and its interaction with magnetic field inhomogeneities result in
artifacts and data scattering across spatial and angular domains. The effects
of those artifacts are more pronounced in high-angular resolution fetal dMRI,
where signal-to-noise ratio is very low. Those effects lead to biased estimates
and compromise the consistency and reliability of dMRI analysis. This work
presents HAITCH, the first and only publicly available tool to correct and
reconstruct multi-shell high-angular resolution fetal dMRI data. HAITCH offers
several technical advances that include a blip-reversed dual-echo acquisition
for dynamic distortion correction, advanced motion correction for model-free
and robust reconstruction, optimized multi-shell design for enhanced
information capture and increased tolerance to motion, and outlier detection
for improved reconstruction fidelity. The framework is open-source, flexible,
and can be used to process any type of fetal dMRI data including single-echo or
single-shell acquisitions, but is most effective when used with multi-shell
multi-echo fetal dMRI data that cannot be processed with any of the existing
tools. Validation experiments on real fetal dMRI scans demonstrate significant
improvements and accurate correction across diverse fetal ages and motion
levels. HAITCH successfully removes artifacts and reconstructs high-fidelity
fetal dMRI data suitable for advanced diffusion modeling, including fiber
orientation distribution function estimation. These advancements pave the way
for more reliable analysis of the fetal brain microstructure and tractography
under challenging imaging conditions.
|
We consider the extraction of shared secret key from correlations that are
generated by either a classical or quantum source. In the classical setting,
two honest parties (Alice and Bob) use public discussion and local randomness
to distill secret key from some distribution $p_{XYZ}$ that is shared with an
unwanted eavesdropper (Eve). In the quantum setting, the correlations
$p_{XYZ}$ are delivered to the parties as either an \textit{incoherent} mixture
of orthogonal quantum states or as a \textit{coherent} superposition of such
states; in both cases, Alice and Bob use public discussion and local quantum
operations to distill secret key. While the power of quantum mechanics
increases Alice and Bob's ability to generate shared randomness, it also equips
Eve with a greater arsenal of eavesdropping attacks. Therefore, it is not
obvious who gains the greatest advantage for distilling secret key when
replacing a classical source with a quantum one.
In this paper we first demonstrate that the classical key rate is equivalent
to the quantum key rate when the correlations are generated incoherently in the
quantum setting. For coherent sources, we next show that the rates are
incomparable, and in fact, their difference can be arbitrarily large in either
direction. However, we identify a large class of non-trivial distributions
$p_{XYZ}$ that possess the following properties: (i) Eve's advantage is always
greater in the quantum source than in its classical counterpart, and (ii) for
the quantum entanglement shared between Alice and Bob in the coherent source,
the so-called entanglement cost/squashed entanglement/relative entropy of
entanglement can all be computed. With property (ii), we thus present a rare
instance in which the various entropic entanglement measures of a quantum state
can be explicitly calculated.
|
Exactly which positive integers cannot be expressed as the sum of $j$
positive $k$-th powers? This paper utilizes theoretical and computational
techniques to answer this question for $k\leq9$. Results from Waring's problem
are used throughout to catalogue the sets of such integers. These sets are then
considered in a general setting, and several curious properties are
established.
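As a brute-force illustration of the question (our own sketch, not the paper's computational technique), the following enumerates the integers up to a bound that cannot be written as a sum of exactly $j$ positive $k$-th powers:

```python
# Brute force: which n <= limit are NOT sums of exactly j positive k-th powers?

def representable(limit, j, k):
    """Return the set of n <= limit expressible as a sum of j positive k-th powers."""
    sums = {0}
    for _ in range(j):                      # add one k-th power at a time
        new = set()
        for s in sums:
            b = 1
            while s + b**k <= limit:
                new.add(s + b**k)
                b += 1
        sums = new
    return sums

# Example: integers up to 50 that are not sums of two positive squares.
if __name__ == "__main__":
    reachable = representable(50, j=2, k=2)
    print(sorted(n for n in range(1, 51) if n not in reachable))
```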
|
We define the Eulerian ideal of a $k$-uniform hypergraph and study its degree
and Castelnuovo--Mumford regularity. The main tool is a Gr\"obner basis of the
ideal obtained combinatorially from the hypergraph. We define the notion of
parity join in a hypergraph and show that the regularity of the Eulerian ideal
is equal to the maximum cardinality of such a set of edges. The formula for the
degree involves the cardinality of the set of sets of vertices, $T$, that admit
a $T$-join. We compute the degree and regularity explicitly in the cases of a
complete $k$-partite hypergraph and a complete hypergraph of rank $3$.
|
The process of turbo-code decoding starts with the formation of a posteriori
probabilities (APPs) for each data bit, which is followed by choosing the
data-bit value that corresponds to the maximum a posteriori (MAP) probability
for that data bit. Upon reception of a corrupted code-bit sequence, the process
of decision making with APPs allows the MAP algorithm to determine the most
likely information bit to have been transmitted at each bit time.
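A toy sketch of that final decision step (our illustration only, not a full turbo decoder): given the APP that each data bit equals 1, the MAP rule picks the value with the larger posterior, equivalently the sign of the log-likelihood ratio:

```python
import math

def map_decisions(app_bit_is_one):
    """app_bit_is_one[i] = P(d_i = 1 | received sequence); return hard decisions."""
    return [1 if p > 0.5 else 0 for p in app_bit_is_one]

def llr(p_one):
    """Log-likelihood ratio L(d_i) = log[P(d_i=1|r) / P(d_i=0|r)]; sign gives the bit."""
    return math.log(p_one / (1.0 - p_one))

if __name__ == "__main__":
    apps = [0.93, 0.12, 0.55, 0.48]
    print(map_decisions(apps))               # -> [1, 0, 1, 0]
    print([round(llr(p), 2) for p in apps])  # positive LLR <=> decide 1
```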
|
Thanks to widely available, cheap Internet access and the ubiquity of
smartphones, millions of people around the world now use online location-based
social networking services. Understanding the structural properties of these
systems and their dependence upon users' habits and mobility has many potential
applications, including resource recommendation and link prediction. Here, we
construct and characterise social and place-focused graphs by using
longitudinal information about declared social relationships and about users'
visits to physical places collected from a popular online location-based social
service. We show that although the social and place-focused graphs are
constructed from the same data set, they have quite different structural
properties. We find that the social and place-focused graphs have different
global and meso-scale structure, and in particular that social and
place-focused communities have negligible overlap. Consequently, group
inference based on community detection performed on the social graph alone
fails to isolate place-focused groups, even though these do exist in the
network. By studying the evolution of tie structure within communities, we show
that the time period over which location data are aggregated has a substantial
impact on the stability of place-focused communities, and that information
about place-based groups may be more useful for user-centric applications than
that obtained from the analysis of social communities alone.
|
We generalize tensor-scalar theories of gravitation by the introduction of an
abnormally weighting type of energy. This theory of tensor-scalar anomalous
gravity is based on a relaxation of the weak equivalence principle that is now
restricted to ordinary visible matter only. As a consequence, the convergence
mechanism toward general relativity is modified and naturally produces cosmic
acceleration as an inescapable gravitational feedback induced by the
mass-variation of some invisible sector. The cosmological implications of this
new theoretical framework are studied. This hints at an enticing new
symmetry between the visible and invisible sectors, namely that the scalar
charges of visible and invisible matter are exactly opposite.
|
We investigate mean-field games from the point of view of a large number of
indistinguishable players whose number eventually tends to infinity. The players
are weakly coupled via their empirical measure. The dynamics of the states of
the individual players is governed by a non-autonomous pure-jump-type semigroup
in a Euclidean space, which is not necessarily smoothing. Investigations
are conducted in the framework of non-linear Markov processes. We show that the
individual optimal strategy results from a consistent coupling of an optimal
control problem with a forward non-autonomous dynamics. In the limit as the
number $N$ of players goes to infinity this leads to a jump-type analog of the
well-known non-linear McKean-Vlasov dynamics. The case where one player has an
individual preference different from the ones of the remaining players is also
covered. The two results combined reveal an epsilon-Nash equilibrium for the
$N$-player games.
|
In this study, we propose a quantum-classical hybrid scheme for performing
orbital-free density functional theory (OFDFT) using probabilistic
imaginary-time evolution (PITE), designed for the era of fault-tolerant quantum
computers (FTQC), as a material calculation method for large-scale systems.
PITE is applied to the part of OFDFT that searches the ground state of the
Hamiltonian in each self-consistent field (SCF) iteration, while the other
parts such as electron density and Hamiltonian updates are performed by
existing algorithms on classical computers. When the simulation cell is
discretized into $N_\mathrm{g}$ grid points, combined with quantum phase
estimation (QPE), it is shown that obtaining the ground state energy of the
Hamiltonian requires a circuit depth of $O(\log N_\mathrm{g})$. The ground
state calculation part in OFDFT is expected to be accelerated, for example, by
creating an appropriate preconditioner from the estimated ground state energy
for the locally optimal block preconditioned conjugate gradient (LOBPCG)
method.
|
We present a local combinatorial formula for the Euler class of an
$n$-dimensional PL spherical fiber bundle as a rational number $e_{\it CH}$
associated to a chain of $n+1$ abstract subdivisions of abstract $n$-spherical
PL cell complexes. The number $e_{\it CH}$ is a combinatorial (or matrix) Hodge
theory twisting cochain in Guy Hirsch's homology model of the bundle associated
with PL combinatorics of the bundle.
|
We study the spin correlations of a few fermions in a quasi one-dimensional
trap. Exact diagonalization calculations demonstrate that repulsive
interactions between the two species drive ferromagnetic correlations. The
ejection probability of an atom provides an experimental probe of the spin
correlations. With more than five atoms trapped, the system approaches the
itinerant Stoner limit. Losses to Feshbach molecules are suppressed by the
discretization of energy levels when fewer than seven atoms are trapped.
|
The asymptotic study of class numbers of binary quadratic forms is a
foundational problem in arithmetic statistics. Here, we investigate finer
statistics of class numbers by studying their self-correlations under additive
shifts. Specifically, we produce uniform asymptotics for the shifted
convolution sum $\sum_{n < X} H(n) H(n+\ell)$ for fixed $\ell \in \mathbb{Z}$,
in which $H(n)$ denotes the Hurwitz class number.
|
We revisit the idea that density-wave wakes of planets drive accretion in
protostellar disks. The effects of many small planets can be represented as a
viscosity if the wakes damp locally, but the viscosity is proportional to the
damping length. Damping occurs mainly by shocks even for earth-mass planets.
The excitation of the wake follows from standard linear theory including the
torque cutoff. We use this as input to an approximate but quantitative
nonlinear theory based on Burgers' equation for the subsequent propagation and
shock. Shock damping is indeed local but weakly so. If all metals in a
minimum-mass solar nebula are invested in planets of a few earth masses each,
dimensionless viscosities $\alpha$ of order dex(-4) to dex(-3) result. We
compare this with observational constraints. Such small planets would have
escaped detection in radial-velocity surveys and could be ubiquitous. If so,
then the similarity of the observed lifetime of T Tauri disks to the
theoretical timescale for assembling a rocky planet may be fate rather than
coincidence.
|
We propose an explicit way to generate a large class of operator scaling
Gaussian random fields (OSGRF). Such fields are anisotropic generalizations of
self-similar fields. More specifically, we are able to construct any Gaussian
field belonging to this class with given Hurst index and exponent. Our
construction provides - for simulations of texture as well as for detection of
anisotropies in an image - a large class of models with controlled anisotropic
geometries and structures.
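For reference, the operator scaling property that defines this class (the standard definition, stated here for orientation) requires, for a $d \times d$ matrix $E$ and some $H > 0$, $$ \{X(c^{E}x)\}_{x \in \mathbb{R}^{d}} \overset{\mathrm{fdd}}{=} \{c^{H}X(x)\}_{x \in \mathbb{R}^{d}} \quad \text{for all } c > 0, $$ where $c^{E} = \exp(E \log c)$; taking $E$ to be the identity matrix recovers ordinary self-similar fields.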
|
Active inference is a probabilistic framework for modelling the behaviour of
biological and artificial agents, which derives from the principle of
minimising free energy. In recent years, this framework has successfully been
applied to a variety of situations where the goal was to maximise reward,
offering comparable and sometimes superior performance to alternative
approaches. In this paper, we clarify the connection between reward
maximisation and active inference by demonstrating how and when active
inference agents perform actions that are optimal for maximising reward.
Precisely, we show the conditions under which active inference produces the
optimal solution to the Bellman equation--a formulation that underlies several
approaches to model-based reinforcement learning and control. On partially
observed Markov decision processes, the standard active inference scheme can
produce Bellman optimal actions for planning horizons of 1, but not beyond. In
contrast, a recently developed recursive active inference scheme (sophisticated
inference) can produce Bellman optimal actions on any finite temporal horizon.
We append the analysis with a discussion of the broader relationship between
active inference and reinforcement learning.
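For reference, the (fully observed, discounted) Bellman optimality equation mentioned above reads $$ V^{*}(s) = \max_{a} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big], $$ with the state $s$ replaced by a belief state in the partially observed setting considered in the paper.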
|
We design a reduced attitude controller for reorienting the spin axis of a
gyroscope in a geometric control framework. The proposed controller preserves
the inherent gyroscopic stability associated with a spinning axis-symmetric
rigid body. The equations of motion are derived in two frames: a non-spinning
frame to show the gyroscopic stability, and a body-fixed spinning frame for
deriving the controller. The proposed controller is designed such that it
retains the gyroscopic stability structure in the closed loop and renders the
desired equilibrium almost-globally asymptotically stable. Due to the
time-critical nature of the control input, in particular its sensitivity with
respect to delays/neglected dynamics, the controller is extended to incorporate
the effect of actuator dynamics for practical implementation. Thereafter, the
performance of the proposed controller is compared with that of a conventional
reduced attitude geometric controller through numerical simulation.
The controller is validated experimentally on a spinning tricopter.
|
We establish a new upper bound for the number of rationals up to a given
height in a missing-digit set, making progress towards a conjecture of
Broderick, Fishman, and Reich. This enables us to make novel progress towards
another conjecture of those authors about the corresponding intrinsic
Diophantine approximation problem. Moreover, we make further progress towards
conjectures of Bugeaud--Durand and Levesley--Salp--Velani on the distribution
of Diophantine exponents in missing-digit sets.
A key tool in our study is Fourier $\ell^1$ dimension introduced by the last
named author in [H. Yu, Rational points near self-similar sets,
arXiv:2101.05910]. An important technical contribution of the paper is a method
to compute this quantity.
|
The decays $J/\psi\to p\bar{p}$ and $J/\psi\to n\bar{n}$ have been
investigated with a sample of 225.2 million $J/\psi$ events collected with the
BESIII detector at the BEPCII $e^+e^-$ collider. The branching fractions are
determined to be $\mathcal{B}(J/\psi\to
p\bar{p})=(2.112\pm0.004\pm0.031)\times10^{-3}$ and $\mathcal{B}(J/\psi\to
n\bar{n})=(2.07\pm0.01\pm0.17)\times10^{-3}$. Distributions of the angle
$\theta$ between the proton or anti-neutron and the beam direction are well
described by the form $1+\alpha\cos^2\theta$, and we find
$\alpha=0.595\pm0.012\pm0.015$ for $J/\psi\to p\bar{p}$ and
$\alpha=0.50\pm0.04\pm0.21$ for $J/\psi\to n\bar{n}$. Our branching-fraction
results suggest a large phase angle between the strong and electromagnetic
amplitudes describing the $J/\psi\to N\bar{N}$ decay.
|
The size distribution and total mass of objects in the Oort Cloud have
important implications for the theory of planet formation, including the
properties of, and the processes taking place in, the early solar system. We
discuss the potential of space missions like Kepler and CoRoT, designed to
discover transiting exo-planets, to detect Oort Cloud, Kuiper Belt and main
belt objects by occultations of background stars. Relying on published
dynamical estimates of the content of the Oort Cloud, we find that Kepler's
main program is expected to detect between 0 and ~100 occultation events by
deca-kilometer-sized Oort Cloud objects. The occultation rate depends on the
mass of the Oort cloud, the distance to its "inner edge", and the size
distribution of its objects. In contrast, Kepler is unlikely to find
occultations by Kuiper Belt or main belt asteroids, mainly due to the fact that
it is observing a high ecliptic latitude field. Occultations by Solar System
objects will appear as a photometric deviation in a single measurement,
implying that the information regarding the time scale and light-curve shape of
each event is lost. We present statistical methods that have the potential to
verify the authenticity of occultation events by Solar System objects, to
estimate the distance to the occulting population, and to constrain their size
distribution. Our results are useful for the planning of future space-based
exo-planet searches in a way that will maximize the probability of detecting
solar system objects, without hampering the main science goals.
|
Systems with space-periodic Hamiltonians have unique scattering properties.
The discrete translational symmetry associated with periodicity of the
Hamiltonian creates scattering channels that govern the scattering process. We
consider a two-dimensional scattering system in which one dimension is a
periodic lattice and the other is localized in space. The scattering and decay
processes can then be described in terms of channels indexed by the Bloch
momentum. We find that the 1D periodic lattice can sustain two types of bound states
in the positive energy continuum (BICs): one protected by reflection symmetry,
the other protected by discrete translational symmetry. The lattice also
sustains long-lived quasibound states. We expect that our results can be
generalized to the behavior of states in the continuum of 2D periodic lattices.
|
Using the far-infrared data obtained by the Herschel Space Observatory, we
study the relation between the infrared luminosity (L_IR) and the dust
temperature (T) of dusty starbursting galaxies at high redshifts (high-z). We
focus on the total infrared luminosity from the cold-dust component
(L_IR^(cd)), whose emission can be described by a modified black body (MBB) of
a single temperature (T_mbb). An object on the (L_IR^(cd), T_mbb) plane can be
explained by the equivalent of the Stefan-Boltzmann law for a MBB with an
effective radius of R_eff. We show that R_eff is a good measure of the combined
size of the dusty starbursting regions (DSBRs) of the host galaxy. In at least
one case where the individual DSBRs are well resolved through strong
gravitational lensing, R_eff is consistent with the direct size measurement. We
show that the observed L_IR-T relation is simply due to the limited R_eff (<~ 2
kpc). The small R_eff values also agree with the compact sizes of the DSBRs
seen in the local universe. However, previous interferometric observations to
resolve high-z dusty starbursting galaxies often quote much larger sizes. This
inconsistency can be reconciled by the blending effect when considering that
the current interferometry might still not be of sufficient resolution. From
R_eff we infer the lower limits to the volume densities of the star formation
rate ("minSFR3D") in the DSBRs, and find that the $L_{IR}$-$T$ relation
outlines a boundary on the (L_IR^(cd), T) plane, below which is the "zone of
avoidance" in terms of minSFR3D.
|
A ``hyperideal circle pattern'' in $S^2$ is a finite family of oriented
circles, similar to the ``usual'' circle patterns but such that the closed
disks bounded by the circles do not cover the whole sphere. Hyperideal circle
patterns are directly related to hyperideal hyperbolic polyhedra, and also to
circle packings.
To each hyperideal circle pattern, one can associate an incidence graph and a
set of intersection angles. We characterize the possible incidence graphs and
intersection angles of hyperideal circle patterns in the sphere, the torus, and
in higher genus surfaces. This is a consequence of a more general result,
describing the hyperideal circle patterns in the boundaries of geometrically
finite hyperbolic 3-manifolds (for the corresponding $\mathbb{C}P^1$-structures). This
more general statement is obtained as a consequence of a theorem of Otal
\cite{otal,bonahon-otal} on the pleating laminations of the convex cores of
geometrically finite hyperbolic manifolds.
|
Euler's equations govern the behavior of gravity waves on the surface of an
incompressible, inviscid, and irrotational fluid of arbitrary depth. We
investigate the spectral stability of sufficiently small-amplitude,
one-dimensional Stokes waves, i.e., periodic gravity waves of permanent form
and constant velocity, in both finite and infinite depth. Using a nonlocal
formulation of Euler's equations developed by Ablowitz et al. (2006), we
develop a perturbation method to describe the first few high-frequency
instabilities away from the origin, present in the spectrum of the
linearization about the small-amplitude Stokes waves. Asymptotic and numerical
computations of these instabilities are compared for the first time, with
excellent agreement.
|
The goal of this paper is to describe the structure of finite-dimensional
semi-simple Leibniz algebras in characteristic zero. Our main tool in this
endeavor is the hemi-semidirect product. One of the major results of this paper
is a simplicity criterion for hemi-semidirect products. In addition, we
characterize when a hemi-semidirect product is semi-simple or Lie-simple. Using
these results we reduce the classification of finite-dimensional semi-simple
Leibniz algebras over fields of characteristic zero to the well-known
classification of finite-dimensional semi-simple Lie algebras and their
finite-dimensional irreducible modules. As one consequence of our structure
theorem, we determine the derivation algebra of a finite-dimensional
semi-simple Leibniz algebra in characteristic zero as a vector space. This
generalizes a recent result of Ayupov et al. from the complex numbers to
arbitrary fields of characteristic zero.
|
Robust Model Predictive Control (MPC) for nonlinear systems is a problem that
poses significant challenges as highlighted by the diversity of approaches
proposed in the last decades. Often compromises with respect to computational
load, conservatism, generality, or implementation complexity have to be made,
and finding an approach that provides the right balance is still a challenge to
the research community. This work provides a contribution by proposing a novel
shrinking-horizon robust MPC formulation for nonlinear discrete-time systems.
By explicitly accounting for how disturbances and linearization errors are
propagated through the nonlinear dynamics, a constraint tightening-based
formulation is obtained, with guarantees of robust constraint satisfaction. The
proposed controller relies on iteratively solving a Nonlinear Program (NLP) to
simultaneously optimize system operation and the required constraint
tightening. Numerical experiments show the effectiveness of the proposed
controller with three different choices of NLP solvers as well as significantly
improved computational speed, better scalability, and generally reduced
conservatism when compared to an existing technique from the literature.
|
An efficient neutron detection system with good energy resolution is required
to correctly characterize decays of neutron-rich nuclei where $\beta-$delayed
neutron emission is a dominant decay mode. The Neutron dEtector with Xn
Tracking (NEXT) has been designed to measure $\beta$-delayed neutron emitters.
By segmenting the detector along the neutron flight path, NEXT reduces the
associated uncertainties in neutron time-of-flight measurements, improving
energy resolution while maintaining detection efficiency. Detector prototypes
are composed of optically separated segments of a neutron-gamma discriminating
plastic scintillator coupled to position-sensitive photomultiplier tubes. The
first performance studies of this detector showed that high intrinsic neutron
detection efficiency could be achieved while retaining good energy resolution.
The results from the efficiency measurements using neutrons from direct
reactions are presented.
|
Many link formation mechanisms for the evolution of social networks have been
successful in reproducing various empirical findings in social networks. However,
they have largely ignored the fact that individuals make decisions on whether
to create links to other individuals based on cost and benefit of linking, and
the fact that individuals may use perception of the network in their decision
making. In this paper, we study the evolution of social networks in terms of
perception-based strategic link formation. Here each individual has her own
perception of the actual network, and uses it to decide whether to create a
link to another individual. An individual with the least perception accuracy
can benefit from updating her perception using that of the most accurate
individual via a new link. This benefit is compared to the cost of linking in
decision making. Once a new link is created, it affects the accuracies of other
individuals' perceptions, leading to a further evolution of the actual network.
As for initial actual networks, we consider homogeneous and heterogeneous
cases. The homogeneous initial actual network is modeled by Erd\H{o}s-R\'enyi
(ER) random networks, while we take a star network for the heterogeneous case.
In both cases, individual perceptions of the actual network are modeled by ER
random networks with controllable linking probability. Then the stable link
density of the actual network is found to show discontinuous transitions or
jumps according to the cost of linking. As the number of jumps is the
consequence of the dynamical complexity, we discuss the effect of initial
conditions on the number of jumps to find that the dynamical complexity
strongly depends on how much individuals initially overestimate or
underestimate the link density of the actual network. For the heterogeneous
case, the role of the highly connected individual as an information spreader is
discussed.
|
Sequence optimization, where the items in a list are ordered to maximize some
reward, has many applications such as web advertisement placement, search, and
control libraries in robotics. Previous work in sequence optimization produces
a static ordering that does not take any features of the item or context of the
problem into account. In this work, we propose a general approach to order the
items within the sequence based on the context (e.g., perceptual information,
environment description, and goals). We take a simple, efficient,
reduction-based approach where the choice and order of the items is established
by repeatedly learning simple classifiers or regressors for each "slot" in the
sequence. Our approach leverages recent work on submodular function
maximization to provide a formal regret reduction from submodular sequence
optimization to simple cost-sensitive prediction. We apply our contextual
sequence prediction algorithm to optimize control libraries and demonstrate
results on two robotics problems: manipulator trajectory prediction and mobile
robot path planning.
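A schematic sketch of the slot-by-slot reduction (our own illustration; the model, feature, and item abstractions are hypothetical placeholders, not the paper's code):

```python
from typing import Callable, List, Sequence

def predict_sequence(context,
                     items: Sequence,
                     slot_models: List[Callable],
                     features: Callable):
    """Fill each slot greedily with the item its learned scorer ranks highest.

    slot_models[t] is the classifier/regressor trained for slot t; features()
    encodes the context, the items already chosen, and a candidate item.
    """
    chosen = []
    for model in slot_models:
        remaining = [it for it in items if it not in chosen]
        best = max(remaining,
                   key=lambda it: model(features(context, chosen, it)))
        chosen.append(best)
    return chosen
```

In training, the scorer for each slot is fit on cost-sensitive examples reflecting the marginal benefit of placing an item in that slot, which is where the regret reduction from submodular sequence optimization to simple prediction enters.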
|
We report on a recent chiral extrapolation, based on an SU(3) framework, of
octet baryon masses calculated in 2+1-flavour lattice QCD. Here we further
clarify the form of the extrapolation, the estimation of the infinite-volume
limit, the extracted low-energy constants and the corrections in the
strange-quark mass.
|
The underlying physics that generates the excitations in the global
low-frequency, < 5.3 mHz, solar acoustic power spectrum is a well-known process
that is attributed to solar convection; however, a definitive explanation as to
what causes excitations in the high-frequency regime, > 5.3 mHz, has yet to be
found. Karoff and Kjeldsen (Astrophys. J. 678, 73-76, 2008) concluded that
there is a correlation between solar flares and the global high-frequency solar
acoustic waves. We have used the Global Oscillations Network Group (GONG)
helioseismic data in an attempt to verify the results of Karoff and Kjeldsen (2008) as
well as compare the post-flare acoustic power spectrum to the pre-flare
acoustic power spectrum for 31 solar flares. Among the 31 flares analyzed, we
observe that a decrease in acoustic power after the solar flare is just as
likely as an increase. Furthermore, while we do observe variations in acoustic
power that are most likely associated with the usual p-modes associated with
solar convection, these variations do not show any significant temporal
association with flares. We find no evidence that consistently supports flare
driven high-frequency waves.
|
Although no individual piece of experimental evidence for supersymmetry is
compelling so far, several are about as good as they can be with present
errors. Most important, all pieces of evidence imply the same values for common
parameters --- a necessary condition, and one unlikely to hold if the hints
from data are misleading. The parameters are sparticle or soft-breaking masses
and $\tan\beta$. For the parameter ranges reported here, there are so far no
signals that should have occurred but did not. Given those parameters a number
of predictions can test whether the evidence is real. It turns out that the
predictions are mostly different from the conventional supersymmetry ones, and
might have been difficult to recognize as signals of superpartners. They are
testable at LEP2, where neutralinos and charginos will appear mainly as
$\gamma\gamma +$ large $\slashchar{E}$ events, $\gamma +$ very large
$\slashchar{E}$ events, and very soft lepton pairs of same or mixed flavor. The
results demonstrate that we understand a lot about how to extract an effective
SUSY Lagrangian from limited data, and that we can reasonably hope to learn
about the theory near the Planck scale from the data at the electroweak scale.
|
This paper is an attempt to mitigate the beam squint that arises from
frequency-dependent phase shifts in the wideband beamforming scenario,
specifically in radar applications. The estimation of the direction of arrival
is significant for precise target detection in radars. The undesirable beam
squint effect due to the phase shift-only mechanism in conventional phased
array systems, which becomes exacerbated when dealing with wide bandwidth
signals, is analyzed for a large set of steering angles in this paper. An
optimum baseband delay combined with the phase shift technique is proposed for
wideband radar beamforming to mitigate beam squint effectively. This technique
has been demonstrated to function properly with a 1-GHz carrier frequency for
signals with wide bandwidths of up to +/-250 MHz and for steering angles ranging
from 0 to 90 degrees.
|
A few comments regarding the difference between the velocity-Verlet and
position-Verlet integrators.
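For concreteness, here is a minimal side-by-side sketch of the two schemes (standard textbook forms; the harmonic-oscillator test is our own):

```python
# Both are second-order symplectic integrators for dx/dt = v, dv/dt = a(x);
# they differ in whether the position or the velocity update is split in half.

def velocity_verlet(x, v, a, dt):
    """Full position update, then velocity update with the averaged force."""
    a0 = a(x)
    x_new = x + v * dt + 0.5 * a0 * dt * dt
    v_new = v + 0.5 * (a0 + a(x_new)) * dt
    return x_new, v_new

def position_verlet(x, v, a, dt):
    """Half position step, full velocity step at the midpoint, half position step."""
    x_half = x + 0.5 * v * dt
    v_new = v + a(x_half) * dt
    x_new = x_half + 0.5 * v_new * dt
    return x_new, v_new

# Harmonic oscillator a(x) = -x, starting from (x, v) = (1, 0).
if __name__ == "__main__":
    for step in (velocity_verlet, position_verlet):
        x, v = 1.0, 0.0
        for _ in range(1000):
            x, v = step(x, v, lambda q: -q, 0.01)
        print(step.__name__, round(x, 6), round(v, 6))
```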
|
The existence and stability of bright solitons in one-dimensional (1D)
media with a spatially periodically modulated Kerr nonlinearity are demonstrated
by means of the linear-stability analysis and in direct numerical simulations.
The nonlinear potential landscape can balance the fractional-order diffraction
and thus stabilizes the solitons, making the model unique and governed by the
recently introduced fractional Schr\"{o}dinger equation with a self-focusing
cubic nonlinear lattice. Both 1D fundamental and multihump solitons (in the
form of dipoles and tripoles) are found, which occupy one or three cells of the
nonlinear lattice respectively, depending on the soliton's power (intensity).
We find that the profiles of the predicted soliton families are strongly
affected by the L\'{e}vy index $\alpha$, which denotes the order of the fractional
Laplacian, and so is their stability. The stabilization of soliton families is
possible if $\alpha$ exceeds a threshold value, below which the balance between
fractional-order diffraction and the spatially modulated focusing nonlinearity
will be broken.
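A representative form of the model described above (our sketch, assuming a generic periodic modulation; the paper's exact lattice profile may differ) is $$ i\,\frac{\partial \psi}{\partial z} = \frac{1}{2}\left(-\frac{\partial^{2}}{\partial x^{2}}\right)^{\alpha/2}\psi - g(x)\,|\psi|^{2}\psi, \qquad 1 < \alpha \leq 2, $$ where $\alpha$ is the L\'{e}vy index and $g(x) > 0$ is the spatially periodic self-focusing Kerr coefficient; $\alpha = 2$ recovers standard Schr\"{o}dinger diffraction.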
|
HE 0107-5240 is a star in more than one sense of the word. Chemically, it is
the most primitive object yet discovered, and it is at the centre of debate
about the origins of the first elements in the Universe.
|
Recent experimental findings have reported the presence of unconventional
charge orders in the enlarged ($2 \times 2$) unit cell of kagome metals
AV$_3$Sb$_5$ (A=K,Rb,Cs) and hinted towards specific topological signatures.
Motivated by these discoveries, we investigate the types of topological phases
that can be realized in such kagome superlattices. In this context, we employ a
recently introduced statistical method capable of constructing topological
models for any generic lattice. By analyzing large data sets generated from
symmetry-guided distributions of randomized tight-binding parameters, and
labeled with the corresponding topological index, we extract physically
meaningful information. We illustrate the possible real-space manifestations of
charge and bond modulations and associated flux patterns for different
topological classes, and discuss their relation to present theoretical
predictions and experimental signatures for the AV$_3$Sb$_5$ family.
Simultaneously, we predict new higher-order topological phases that may be
realized by appropriately manipulating the currently known systems.
|
We propose LLM-Eval, a unified multi-dimensional automatic evaluation method
for open-domain conversations with large language models (LLMs). Existing
evaluation methods often rely on human annotations, ground-truth responses, or
multiple LLM prompts, which can be expensive and time-consuming. To address
these issues, we design a single prompt-based evaluation method that leverages
a unified evaluation schema to cover multiple dimensions of conversation
quality in a single model call. We extensively evaluate the performance of
LLM-Eval on various benchmark datasets, demonstrating its effectiveness,
efficiency, and adaptability compared to state-of-the-art evaluation methods.
Our analysis also highlights the importance of choosing suitable LLMs and
decoding strategies for accurate evaluation results. LLM-Eval offers a
versatile and robust solution for evaluating open-domain conversation systems,
streamlining the evaluation process and providing consistent performance across
diverse scenarios.
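A conceptual sketch of such a single-prompt, multi-dimension call (our illustration; the schema wording, the dimensions listed, and the `call_llm` backend are hypothetical placeholders, not the authors' released prompt):

```python
import json

EVAL_SCHEMA = (
    "Score the response to the dialogue context on each dimension from 0 to 5. "
    'Reply with JSON only: {"appropriateness": x, "content": x, '
    '"grammar": x, "relevance": x}'
)

def llm_eval(context: str, response: str, call_llm) -> dict:
    """One model call returns scores for all quality dimensions at once."""
    prompt = f"{EVAL_SCHEMA}\n\nContext:\n{context}\n\nResponse:\n{response}"
    return json.loads(call_llm(prompt))  # call_llm: prompt -> raw model text
```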
|
The use of the AdS/CFT correspondence to arrive at quiver gauge field
theories is discussed, focussing on the orbifolded case without supersymmetry.
An abelian orbifold with the finite group Z(12) can accommodate unification at
about 4 TeV.
|
The spatial fluctuations of the extragalactic background light trace the
total emission from all stars and galaxies in the Universe. A multi-wavelength
study can be used to measure the integrated emission from first galaxies during
reionization when the Universe was about 500 million years old. Here we report
arcminute-scale spatial fluctuations in one of the deepest sky surveys with the
Hubble Space Telescope in five wavebands between 0.6 and 1.6 $\mu$m. We
model-fit the angular power spectra of intensity fluctuation measurements to
find the ultraviolet luminosity density of galaxies at $z$ > 8 to be $\log
\rho_{\rm UV} = 27.4^{+0.2}_{-1.2}$ erg s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$
$(1\sigma)$. This level of integrated light emission allows for a significant
surface density of fainter primeval galaxies that are below the point source
detection level in current surveys.
|
Stories generated with neural language models have shown promise in
grammatical and stylistic consistency. However, the generated stories are still
lacking in common sense reasoning, e.g., they often contain sentences devoid
of world knowledge. We propose a simple multi-task learning scheme to achieve
quantitatively better common sense reasoning in language models by leveraging
auxiliary training signals from datasets designed to provide common sense
grounding. When combined with our two-stage fine-tuning pipeline, our method
achieves improved common sense reasoning and state-of-the-art perplexity on the
Writing Prompts (Fan et al., 2018) story generation dataset.
|
We investigate the construction of tree-level MHV gluon amplitudes in
multiplet bases using BCFW recursion. The multiplet basis decomposition can
either be obtained by decomposing results derived in (for example) the DDM
basis or by formulating the recursion directly in the multiplet basis. We focus
on the latter approach and show how to efficiently deal with the color
structure appearing in the recursion. For illustration, we also explicitly
calculate the four-, five- and six-gluon amplitudes.
|
A two-dimensional cellular automaton (CA) associated with a two-dimensional
Burgers equation is presented. The 2D Burgers equation is an integrable
generalization of the well-known Burgers equation, and is transformed into a 2D
diffusion equation by the Cole-Hopf transformation. The CA is derived from the
2D Burgers equation by using the ultradiscrete method, which can transform
dependent variables into discrete ones. Some exact solutions of the CA, such as
shock wave solutions, are studied in detail.
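The key identity behind the ultradiscrete method mentioned above (a standard fact) is $$ \lim_{\varepsilon \to +0} \varepsilon \log\left( e^{A/\varepsilon} + e^{B/\varepsilon} \right) = \max(A, B), $$ which turns addition into $\max$ and multiplication into addition, so that a discretized equation becomes an evolution rule on discrete dependent variables, i.e., a CA.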
|
In this paper we investigate the applicability of standard model checking
approaches to verifying properties in probabilistic programming. As the
operational model for a standard probabilistic program is a potentially
infinite parametric Markov decision process, no direct adaptation of existing
techniques is possible. Therefore, we propose an on-the-fly approach where the
operational model is successively created and verified via a step-wise
execution of the program. This approach makes it possible to take key features
of many probabilistic programs into account: nondeterminism and conditioning. We
discuss the restrictions and demonstrate the scalability on several benchmarks.
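A toy example of the program class in question (our illustration; conditioning is given its usual rejection-sampling semantics):

```python
import random

def program():
    c1 = random.random() < 0.5      # probabilistic choice
    c2 = random.random() < 0.5
    if not (c1 or c2):              # observe(c1 or c2): reject this run
        return None
    return c1

# P(c1 | c1 or c2) = 2/3; the empirical estimate should be close to 0.667.
if __name__ == "__main__":
    runs = [r for r in (program() for _ in range(100_000)) if r is not None]
    print(sum(runs) / len(runs))
```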
|
We present a theoretical profile of the Lyman Beta line of atomic hydrogen
perturbed by collisions with neutral hydrogen atoms and protons. We use a
general unified theory in which the electric dipole moment varies during a
collision. A collision-induced satellite appears on Lyman Beta, correlated to
the $B''\bar{B}\,^1\Sigma^+_u - X\,^1\Sigma^+_g$ asymptotically forbidden transition of H$_2$.
As a consequence, the appearance of the line wing between Lyman Alpha and Lyman
Beta is shown to be sensitive to the relative abundance of hydrogen ions and
neutral atoms, and thereby to provide a temperature diagnostic for stellar
atmospheres and laboratory plasmas.
|
Recent research has seen many behavioral comparisons between humans and deep
neural networks (DNNs) in the domain of image classification. Often, comparison
studies focus on the end-result of the learning process by measuring and
comparing the similarities in the representations of object categories once
they have been formed. However, the process of how these representations emerge
-- that is, the behavioral changes and intermediate stages observed during the
acquisition -- is less often directly and empirically compared. Here we report
a detailed investigation of the learning dynamics in human observers and
various classic and state-of-the-art DNNs. We develop a constrained supervised
learning environment to align learning-relevant conditions such as starting
point, input modality, available input data and the feedback provided. Across
the whole learning process we evaluate and compare how well learned
representations can be generalized to previously unseen test data. Comparisons
across the entire learning process indicate that DNNs demonstrate a level of
data efficiency comparable to human learners, challenging some prevailing
assumptions in the field. However, our results also reveal representational
differences: while DNNs' learning is characterized by a pronounced
generalisation lag, humans appear to immediately acquire generalizable
representations without a preliminary phase of learning training set-specific
information that is only later transferred to novel data.
|
As the largest mass concentrations in the local Universe, nearby clusters of
galaxies and their central galaxies are prime targets in searching for indirect
signatures of dark matter annihilation (DMA). We seek to constrain the dark
matter annihilation emission component from multi-frequency observations of the
central galaxy of the Virgo cluster. The annihilation emission component is
modeled by the prompt and inverse-Compton gamma rays from the hadronization of
annihilation products from generic weakly interacting dark matter particles.
This component is fitted to the excess of the observed data above the spectral
energy distribution (SED) of the jet in M87, described with a best-fit
synchrotron-self-Compton (SSC) spectrum. While this result is not sufficiently
significant to claim a detection, we emphasize that a dark matter "double hump
signature" can be used to unambiguously discriminate the dark matter emission
component from the variable jet-related emission of M87 in future, more
extended observation campaigns.
|
The recent announcement of a Neptune-sized exomoon candidate orbiting the
Jupiter-sized object Kepler-1625b has forced us to rethink our assumptions
regarding both exomoons and their host exoplanets. In this paper I describe
calculations of the habitable zone for Earthlike exomoons in orbit around
Kepler-1625b under a variety of assumptions. I find that the candidate exomoon,
Kepler-1625b-i, does not currently reside within the exomoon habitable zone,
but may have done so when Kepler-1625 occupied the main sequence. If it were to
possess its own moon (a "moon-moon") that was Earthlike, this could potentially
have been a habitable world. If other exomoons orbit Kepler-1625b, then there
are a range of possible semimajor axes/eccentricities that would permit a
habitable surface during the main sequence phase, while remaining dynamically
stable under the perturbations of Kepler-1625b-i. This is however contingent on
effective atmospheric CO$_2$ regulation.
|
Personalized PageRank (PPR) is a graph algorithm that evaluates the
importance of surrounding nodes relative to a source node. Widely used in
social-network applications such as recommender systems, PPR requires
real-time responses (low latency) for a good user experience. Existing works
either focus on algorithmic optimization for improving precision while
neglecting hardware implementations or focus on distributed global graph
processing on large-scale systems for improving throughput rather than response
time. Optimizing a low-latency local PPR algorithm with a tight memory budget on
edge devices remains unexplored. In this work, we propose a memory-efficient,
low-latency PPR solution, namely MeLoPPR, with a largely reduced memory
requirement and a flexible trade-off between latency and precision. MeLoPPR is
composed of stage decomposition and linear decomposition and exploits node
score sparsity: through stage and linear decomposition, MeLoPPR breaks the
computation on a large graph into a set of smaller sub-graphs, which
significantly reduces the computation memory; through sparsity exploitation,
MeLoPPR selectively chooses the sub-graphs that contribute the most to the
precision to reduce the required computation. In addition, through
software/hardware co-design, we propose a hardware implementation on a hybrid
CPU and FPGA accelerating platform that further speeds up the sub-graph
computation. We evaluate the proposed MeLoPPR on memory-constrained devices
including a personal laptop and Xilinx Kintex-7 KC705 FPGA using six real-world
graphs. First, MeLoPPR demonstrates significant memory saving by 1.5x to 13.4x
on CPU and 73x to 8699x on FPGA. Second, MeLoPPR allows flexible trade-offs
between precision and execution time: when the precision is 80%, the speedup on
CPU is up to 15x and up to 707x on FPGA; when the precision is around 90%, the
speedup is up to 70x on FPGA.
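For orientation, the quantity that MeLoPPR approximates locally is standard personalized PageRank. The sketch below is a minimal baseline power iteration, not the authors' stage/linear decomposition; the dense adjacency input, damping factor and tolerance are illustrative assumptions.

```python
import numpy as np

def personalized_pagerank(adj, source, alpha=0.85, tol=1e-8, max_iter=200):
    """Baseline PPR by power iteration from a single source node.

    adj: (n x n) 0/1 adjacency matrix, adj[i, j] = 1 for an edge i -> j.
    alpha: damping factor; (1 - alpha) is the restart probability.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1                  # guard against sink nodes
    P = adj / out_deg                          # row-stochastic transition matrix
    e = np.zeros(n)
    e[source] = 1.0                            # restart distribution
    r = e.copy()
    for _ in range(max_iter):
        r_new = alpha * (P.T @ r) + (1 - alpha) * e
        if np.abs(r_new - r).sum() < tol:      # L1 convergence check
            break
        r = r_new
    return r                                   # PPR scores w.r.t. the source
```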
|
The availability of powerful computing hardware in IaaS clouds makes cloud
computing attractive also for computational workloads that have so far been
run almost exclusively on HPC clusters.
In this paper we present the VM-MAD Orchestrator software: an open source
framework for cloudbursting Linux-based HPC clusters into IaaS clouds and
computational grids. The Orchestrator is completely modular, allowing flexible
configurations of cloudbursting policies. It can be used with any batch system
or cloud infrastructure, dynamically extending the cluster when needed. A
distinctive feature of our framework is that the policies can be tested and
tuned in a simulation mode based on historical or synthetic cluster accounting
data.
In the paper we also describe how the VM-MAD Orchestrator was used in a
production environment at the FGCZ to speed up the analysis of mass
spectrometry-based protein data by cloudbursting to Amazon EC2. The
advantages of this hybrid system are shown with a large evaluation run using
about one hundred large EC2 nodes.
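As a sketch of what a cloudbursting policy amounts to, the toy threshold rule below sizes a VM pool to the batch-queue backlog. It is purely illustrative: the function name, thresholds and action encoding are assumptions and do not reflect VM-MAD's actual API.

```python
def cloudburst_decision(queued_jobs, running_vms, max_vms=100, jobs_per_vm=4):
    """Toy threshold policy: grow the VM pool with the backlog, drain it
    when the queue empties. Returns an (action, count) pair."""
    wanted = min(max_vms, -(-queued_jobs // jobs_per_vm))  # ceil division
    if wanted > running_vms:
        return ("start", wanted - running_vms)
    if queued_jobs == 0 and running_vms > 0:
        return ("stop", running_vms)
    return ("hold", 0)
```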
|
We propose a method to produce entangled states of several particles starting
from a Bose-Einstein condensate. In the proposal, a single fast $\pi/2$ pulse
is applied to the atoms and due to the collisional interaction, the subsequent
free time evolution creates an entangled state involving all atoms in the
condensate. The created entangled state is a spin-squeezed state which could be
used to improve the sensitivity of atomic clocks.
|
The James Webb Space Telescope will make it possible to spectroscopically study
an unprecedented number of galaxies deep into the reionization era, notably by
detecting [OIII] and H$\beta$ nebular emission lines. To efficiently prepare
such observations, we photometrically select a large sample of galaxies at
$z\sim8$ and study their rest-frame optical emission lines. Combining data from
the GOODS Re-ionization Era wide-Area Treasury from Spitzer (GREATS) survey and
from HST, we perform spectral energy distribution (SED) fitting, using
synthetic SEDs from a large grid of photoionization models. The deep
Spitzer/IRAC data combined with our models exploring a large parameter space
enable us to constrain the [OIII]+H$\beta$ fluxes and equivalent widths for our
sample, as well as the average physical properties of $z\sim8$ galaxies, such
as the ionizing photon production efficiency with
$\log(\xi_\mathrm{ion}/\mathrm{erg}^{-1}\hspace{1mm}\mathrm{Hz})\geq25.77$. We
find a relatively tight correlation between the [OIII]+H$\beta$ and UV
luminosity, which we use to derive for the first time the [OIII]+H$\beta$
luminosity function (LF) at $z\sim8$. The $z\sim8$ [OIII]+H$\beta$ LF is higher
at all luminosities compared to lower redshift, as opposed to the UV LF, due to
an increase of the [OIII]+H$\beta$ luminosity at a given UV luminosity from
$z\sim3$ to $z\sim8$. Finally, using the [OIII]+H$\beta$ LF, we make
predictions for JWST/NIRSpec number counts of $z\sim8$ galaxies. We find that
the current wide-area extragalactic legacy fields are too shallow to use JWST
at maximal efficiency for $z\sim8$ spectroscopy even at 1 hr depth, and JWST
pre-imaging to $\gtrsim30$ mag will be required.
|
Bottom-up prepared carbon nanostructures appear as promising platforms for
future carbon-based nanoelectronics, due to their atomically precise and
versatile structure. An important breakthrough is the recent preparation of
nanoporous graphene (NPG) as an ordered covalent array of graphene nanoribbons
(GNRs). Within NPG, the GNRs may be thought of as 1D electronic nanochannels
through which electrons preferentially move, highlighting NPG's potential for
carbon nanocircuitry. However, the {\pi}-conjugated bonds bridging the GNRs
give rise to electronic cross-talk between the individual 1D channels, leading
to spatially dispersing electronic currents. Here, we propose a chemical design
of the bridges resulting in destructive quantum interference, which blocks the
cross-talk between GNRs in NPG, electronically isolating them. Our multiscale
calculations reveal that injected currents can remain confined within a single,
0.7 nm wide, GNR channel for distances as long as 100 nm. The concepts
developed in this work thus provide an important ingredient for the quantum
design of future carbon nanocircuitry.
|
A multilayer edge molecular electronics device (MEMED), which utilizes the two
metal electrodes of a metal-insulator-metal tunnel junction as the two
electrical leads to molecular channels, can overcome the long-standing
fabrication challenges for developing futuristic molecular devices. However,
producing an ultrathin insulator is the most challenging step in MEMED
fabrication. A simplified molecular device approach was developed by avoiding
the need to deposit a new material on the bottom electrode to grow the
ultrathin insulator. This paper discusses the approach for MEMED's insulator
growth by one-step oxidation of a tantalum (Ta) bottom electrode in the
photolithographically defined region; i.e., the ultrathin tantalum oxide (TaOx)
insulator was grown by oxidizing the bottom metal electrode itself.
Organometallic molecular clusters (OMCs) were bridged across 1-3 nm of TaOx
along the perimeter of a tunnel junction to establish highly efficient
molecular conduction channels. The OMCs transformed the asymmetric transport
profile of the TaOx-based tunnel junction into a symmetric one. A TaOx-based
tunnel junction with a top ferromagnetic (NiFe) electrode exhibited transient
current suppression by several orders of magnitude. Further studies will be
needed to strengthen the current suppression phenomenon and to realize the
full potential of TaOx-based multilayer edge molecular spintronics devices.
|
Our main aim in this paper is to promote the coframe variational method as a
unified approach to derive field equations for any given gravitational action
containing the algebraic functions of the scalars constructed from the Riemann
curvature tensor and its contractions. We are able to derive a master equation
which expresses the variational derivatives of the generalized gravitational
actions in terms of the variational derivatives of its constituent curvature
scalars. Using the Lagrange multiplier method relative to an orthonormal
coframe, we investigate the variational procedures for modified gravitational
Lagrangian densities in spacetime dimensions $n\geqslant 3$. We study
well-known gravitational actions such as those involving the Gauss-Bonnet,
Ricci-squared, Kretschmann scalar, and Weyl-squared terms and their algebraic
generalizations similar to generic $f(R)$ theories, as well as the algebraic
generalization of sixth-order gravitational Lagrangians. We put forth a new
model involving the gravitational Chern-Simons term and also cast the
three-dimensional new massive gravity equations in a new form in terms of the
Cotton 2-form.
|
The existence of spurious correlations such as image backgrounds in the
training environment can make empirical risk minimization (ERM) perform badly
in the test environment. To address this problem, Kirichenko et al. (2022)
empirically found that the core features that are related to the outcome can
still be learned well even with the presence of spurious correlations. This
opens a promising strategy to first train a feature learner rather than a
classifier, and then perform linear probing (last layer retraining) in the test
environment. However, a theoretical understanding of when and why this approach
works is lacking. In this paper, we find that core features are only learned
well when their associated non-realizable noise is smaller than that of
spurious features, which is not necessarily true in practice. We provide both
theory and experiments to support this finding and to illustrate the
importance of non-realizable noise. Moreover, we propose an algorithm called
Freeze then Train (FTT), which first freezes certain salient features and then
trains the rest of the features using ERM.
preserves features that are more beneficial to test time probing. Across two
commonly used spurious correlation datasets, FTT outperforms ERM, IRM, JTT and
CVaR-DRO, with substantial improvement in accuracy (by 4.5%) when the feature
noise is large. FTT also performs better on general distribution shift
benchmarks.
|
We propose a rate-distortion optimization method for 3D videos based on
visual discomfort estimation. We calculate visual discomfort in the encoded
depth maps using two indexes: temporal outliers (TO) and spatial outliers (SO).
These two indexes are used to measure the difference between the processed
depth map and the ground truth depth map. These indexes implicitly depend on
the amount of edge information within a frame and on the amount of motion
between frames. Moreover, we fuse these indexes considering the temporal and
spatial complexities of the content. We test the proposed method on a number of
videos and compare the results with the default rate-distortion algorithms in
the H.264/AVC codec. We evaluate rate-distortion algorithms by comparing
achieved bit-rates, visual degradations in the depth sequences and the fidelity
of the depth videos measured by SSIM and PSNR.
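For completeness, PSNR, one of the two fidelity measures used above, reduces to a few lines; this is a generic numpy sketch in which the 8-bit peak value is an assumption.

```python
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64)
                   - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                    # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```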
|
We characterize the quantum states dual to entanglement wedges in arbitrary
spacetimes, in settings where the matter entropy can be neglected compared to
the geometric entropy. In AdS/CFT, such states obey special entropy
inequalities known as the holographic entropy cone. In particular, the mutual
information of CFT subregions is monogamous (MMI). We extend this result to
arbitrary spacetimes, using a recent proposal for the generalized entanglement
wedge e(a) of a gravitating region a. Given independent input regions a, b, and
c, we prove MMI:
$\mathrm{Area}[e(a)]+\mathrm{Area}[e(b)]+\mathrm{Area}[e(c)]-\mathrm{Area}[e(ab)]-\mathrm{Area}[e(bc)]-\mathrm{Area}[e(ca)]+\mathrm{Area}[e(abc)]\leq 0$.
We expect that the full holographic entropy cone can be extended to
arbitrary spacetimes using similar methods.
|
There are several attitude estimation algorithms in existence, all of which
use local coordinate representations for the group of rigid body orientations.
All local coordinate representations of the group of orientations have
associated problems. While minimal coordinate representations exhibit kinematic
singularities for large rotations, the quaternion representation requires
satisfaction of an extra constraint. This paper treats the attitude estimation
and filtering problem as an optimization problem, without using any local
coordinates for the group of rotations. An attitude determination algorithm and
attitude estimation filters are developed that minimize the attitude and
angular velocity estimation errors. For filter propagation, the attitude
kinematics and deterministic dynamics equations (Euler's equations) for a rigid
body in an attitude-dependent potential are used. Vector attitude measurements
are used for attitude and angular velocity estimation, with or without angular
velocity measurements.
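For comparison, the classical coordinate-free way to determine attitude from vector measurements is the SVD solution of Wahba's problem, which returns a rotation matrix directly, with no local coordinates and no quaternion constraint. The sketch below shows that standard construction, not the filters developed in the paper.

```python
import numpy as np

def attitude_from_vectors(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem: find the rotation R minimizing
    sum_i w_i * ||b_i - R @ r_i||^2 via the SVD of the attitude profile."""
    b = np.asarray(body_vecs, dtype=float)     # (k, 3) body-frame unit vectors
    r = np.asarray(ref_vecs, dtype=float)      # (k, 3) reference-frame unit vectors
    w = np.ones(len(b)) if weights is None else np.asarray(weights, dtype=float)
    B = (w[:, None, None] * b[:, :, None] * r[:, None, :]).sum(axis=0)
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # enforce det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```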
|
This paper focuses on weakly-supervised action alignment, where only the
ordered sequence of video-level actions is available for training. We propose a
novel Duration Network, which captures a short temporal window of the video and
learns to predict the remaining duration of a given action at any point in time
with a level of granularity based on the type of that action. Further, we
introduce a Segment-Level Beam Search to obtain the best alignment, namely the
one that maximizes the posterior probability. Segment-Level Beam Search efficiently
aligns actions by considering only a selected set of frames that have more
confident predictions. The experimental results show that our alignments for
long videos are more robust than those of existing models. Moreover, the proposed
method achieves state-of-the-art results in certain cases on the popular
Breakfast and Hollywood Extended datasets.
|
Rate splitting multiple access (RSMA) relies on beamforming design for
attaining spectral efficiency and energy efficiency gains over traditional
multiple access schemes. While conventional optimization approaches such as
weighted minimum mean square error (WMMSE) achieve suboptimal solutions for
RSMA beamforming optimization, they are computationally demanding. A novel
approach based on fractional programming (FP) has unveiled the optimal
beamforming structure (OBS) for RSMA. This method, combined with a hyperplane
fixed point iteration (HFPI) approach, named FP-HFPI, provides suboptimal
beamforming solutions with identical sum rate performance but much lower
computational complexity compared to WMMSE. Inspired by such an approach, in
this work, a novel deep unfolding framework based on FP-HFPI, named
rate-splitting-beamforming neural network (RS-BNN), is proposed to unfold the
FP-HFPI algorithm. Numerical results indicate that the proposed RS-BNN attains
a level of performance closely matching that of WMMSE and FP-HFPI, while
dramatically reducing the computational complexity.
|
We study the spin magnetic moment of a single impurity embedded in a
finite-size non-magnetic host exhibiting a band gap. The calculations were
performed using a tight-binding model Hamiltonian. The simple criterion for the
magnetic to non-magnetic transition as given in the Anderson impurity model
breaks down in these cases. We show how the spin magnetic moment of the
impurity that normally would be quenched can be restored upon introducing a gap
at the Fermi level in the host density of states. The magnitude of the impurity
spin magnetic moment scales monotonically with the size of the band gap. This
observation even holds for a host material featuring a strongly discretized
density of states. Thus, it should be possible to tune the magnetic moment of
doped nano-particles by varying their size and thereby their band gap.
|
Phase transition dynamics may play important roles in the evolution history
of the early universe, for example in electroweak baryogenesis
and dark matter. We systematically discuss and clarify the important details of
the phase transition dynamics during a strong first-order phase transition
(SFOPT). We classify SFOPTs into four types: slight supercooling, mild
supercooling, strong supercooling, and ultra supercooling. Using different
characteristic temperatures, length scales and bubble wall velocities, the
corresponding gravitational wave (GW) spectra are investigated in detail. We
emphasize the essential importance of using the correct characteristic
temperature and length scale when the phase transition dynamics and GW spectra
are calculated. In particular, for the strong and ultra supercooling
cases, there are clear differences in the phase transition strength and GW
spectra between results calculated at the nucleation temperature and those
derived at the percolation temperature. For the ultra supercooling case, we propose
a criterion to quantify whether the phase transition can terminate. Besides the
model-independent discussions, we also study three representative models as
concrete examples to clearly show the subtle points therein.
|
Convolutional neural networks were applied to classify speckle images generated
from nano-particle suspensions and thus to recognise the suspensions. The speckle
images, in the form of movies, were obtained from suspensions placed in a thin
cuvette. The classifier was trained, validated and tested on both single-component
monodispersive suspensions and two-component suspensions. It correctly
recognised all 73 classes (different suspensions) in the training set, which is
far beyond the capabilities of a human experimenter, and shows the capacity to
learn many more. The classes comprised different nanoparticle materials and
sizes, as well as different concentrations of the suspended phase. We also
examined the capability of the system to generalise, by testing a system
trained on single-component suspensions with two-component suspensions. The
capability to generalise was found promising but significantly limited. The
neural-network classifier was also compared with one based on a support vector
machine (SVM). The SVM was found to be much more resource-consuming and thus
could not be tested on full-size speckle images; using image fragments
significantly degrades the results for both the SVM and the neural networks. We
showed that nanoparticle (colloidal) suspensions comprising even a large
multi-parameter set of classes can be quickly identified using speckle images
classified with a convolutional neural network.
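A minimal convolutional classifier of the general kind described above could look as follows in PyTorch; the layer sizes and single-channel input are illustrative assumptions, not the authors' architecture (only the 73-class output is taken from the text).

```python
import torch.nn as nn

class SpeckleClassifier(nn.Module):
    """Minimal CNN for grayscale speckle-image classification."""
    def __init__(self, n_classes=73):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # pool to a 64-dim descriptor
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, H, W)
        return self.head(self.features(x).flatten(1))
```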
|
We elaborate on the fact that quarkonium in hot QCD should not be thought of
as a stationary bound state in a temperature-dependent real potential, but as a
short-lived transient, with an exponentially decaying wave function. The reason
is the existence of an imaginary part in the pertinent static potential,
signalling the ``disappearance'', due to inelastic scatterings with hard
particles in the plasma, of the off-shell gluons that bind the quarks together.
By solving the corresponding Schr\"odinger equation, we estimate numerically
the near-threshold spectral functions in scalar, pseudoscalar, vector and axial
vector channels, as a function of the temperature and of the heavy quark mass.
In particular, we point out a subtlety in the determination of the scalar
channel spectral function and, resolving it to the best of our understanding,
suggest that at least in the bottomonium case, a resonance peak can be observed
also in the scalar channel, even though it is strongly suppressed with respect
to the peak in the vector channel. Finally, we plot the physical dilepton
production rate, stressing that despite the eventual disappearance of the
resonance peak from the corresponding spectral function, the quarkonium
contribution to the dilepton rate becomes more pronounced with increasing
temperature, because of the yield from free heavy quarks.
|
We show that weak solutions to parabolic equations in divergence form with
zero Dirichlet boundary conditions are continuously differentiable up to the
boundary when the leading coefficients have Dini mean oscillation and the lower
order coefficients verify certain conditions. Similar results are obtained for
non-divergence form parabolic operators and their adjoint operators. Under
similar conditions, we also establish a Harnack inequality for nonnegative
adjoint solutions, together with upper and lower Gaussian bounds for the global
fundamental solution.
|
For type IIB supergravity with a running axio-dilaton, we construct bulk
solutions which admit a cosmological background metric of
Friedmann-Robertson-Walker type. These solutions include both a dark radiation
term in the bulk as well as a four-dimensional (boundary) cosmological
constant, while gravity at the boundary remains non-dynamical. We
holographically calculate the stress-energy tensor, showing that it consists of
two contributions: The first one, generated by the dark radiation term, leads
to the thermal fluid of N = 4 SYM theory, while the second, the conformal
anomaly, originates from the boundary cosmological constant. Conservation of
the boundary stress tensor implies that the boundary cosmological constant is
time-independent, such that there is no exchange between the two stress-tensor
contributions. We then study (de)confinement by evaluating the Wilson loop in
these backgrounds. While the dark radiation term favours deconfinement, a
negative cosmological constant drives the system into a confined phase. When
both contributions are present, we find an oscillating universe with negative
cosmological constant which undergoes periodic (de)confinement transitions as
the scale of three-space expands and re-contracts.
|
We consider fractal percolation (or Mandelbrot percolation), which is one of
the most well-studied examples of random Cantor sets. Rams and the first author
studied the projections (orthogonal, radial and co-radial) of fractal
percolation sets on the plane. We extend their results to higher dimensions.
|
Given a positive function $f$ on $(0,\infty)$ and a non-zero real parameter
$\theta$, we consider a function
$I_f^\theta(A,B,X)=\mathrm{Tr}\, X^*\bigl(f(L_AR_B^{-1})R_B\bigr)^\theta(X)$
in three matrices $A,B>0$ and $X$. In the
literature $\theta=\pm1$ has been typical. The concept unifies various quantum
information quantities such as quasi-entropy, monotone metrics, etc. We
characterize joint convexity/concavity and monotonicity properties of the
function $I_f^\theta$, thus unifying some known results for various quantum
quantities.
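Here $L_A$ and $R_B$ are the usual left and right multiplication operators on matrices; they commute, and for $A,B>0$ the operator $L_AR_B^{-1}$ is positive on the Hilbert-Schmidt space, so $f(L_AR_B^{-1})$ is defined by the functional calculus:

```latex
L_A(X) = AX, \qquad R_B(X) = XB, \qquad L_A R_B = R_B L_A .
```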
|
Reduced exciton mass, polarizability, and dielectric constant of the
surrounding medium are essential properties for semiconducting materials, and
they have recently been extracted from magnetoexciton energies. However,
acceptable accuracy of the suggested method requires very high magnetic
field intensities. Therefore, in the present paper, we propose an alternative method of
extracting these material properties from recently available experimental
magnetoexciton s-state energies in monolayer transition-metal dichalcogenides
(TMDCs). The method is based on the high sensitivity of exciton energies to the
material parameters in the Rytova-Keldysh model. It allows us to vary the
considered material parameters to get the best fit of the theoretical
calculation to the experimental exciton energies for the $1s$, $2s$, and $3s$
states. This procedure gives values of the exciton reduced mass and $2D$
polarizability. Then, the experimental magnetoexciton spectra compared to the
theoretical calculation also determine the average dielectric constant.
Concrete applications are presented only for monolayers WSe$_2$ and WS$_2$ from
the recently available experimental data; however, the presented approach is
universal and can be applied to other monolayer TMDCs. The mentioned fitting
procedure requires a fast and effective method of solving the Schr\"{o}dinger
equation of an exciton in monolayer TMDCs with a magnetic field. Therefore, we
also develop such a method in this paper for highly accurate magnetoexciton
energies.
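For reference, the Rytova-Keldysh interaction entering the model can be evaluated with standard special functions. The sketch below uses Hartree atomic units and one common placement of the dielectric constant $\kappa$; both choices are assumptions, since conventions differ between papers.

```python
import numpy as np
from scipy.special import struve, y0

def rytova_keldysh(r, r0, kappa=1.0):
    """Screened 2D Coulomb (Rytova-Keldysh) potential, Hartree atomic units.

    r: electron-hole separation; r0: screening length set by the 2D
    polarizability; kappa: average dielectric constant of the environment.
    """
    x = kappa * r / r0
    return -(np.pi / (2.0 * r0)) * (struve(0, x) - y0(x))    # H0 - Y0
```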
|
Network information theory is the study of communication problems involving
multiple senders, multiple receivers and intermediate relay stations. The
purpose of this thesis is to extend the main ideas of classical network
information theory to the study of classical-quantum channels. We prove coding
theorems for quantum multiple access channels, quantum interference channels,
quantum broadcast channels and quantum relay channels.
A quantum model for a communication channel describes more accurately the
channel's ability to transmit information. By using physically faithful models
for the channel outputs and the detection procedure, we obtain better
communication rates than would be possible using a classical strategy. In this
thesis, we are interested in the transmission of classical information, so we
restrict our attention to the study of classical-quantum channels. These are
channels with classical inputs and quantum outputs, and so the coding theorems
we present will use classical encoding and quantum decoding. We study the
asymptotic regime where many copies of the channel are used in parallel, and
the uses are assumed to be independent. In this context, we can exploit
information-theoretic techniques to calculate the maximum rates for error-free
communication for any channel, given the statistics of the noise on that
channel. These theoretical bounds can be used as a benchmark to evaluate the
rates achieved by practical communication protocols.
Most of the results in this thesis consider classical-quantum channels with
finite dimensional output systems, which are analogous to classical discrete
memoryless channels. In the last chapter, we will show some applications of our
results to a practical optical communication scenario, in which the information
is encoded in continuous quantum degrees of freedom, which are analogous to
classical channels with Gaussian noise.
|
The 3D quasi-static particle-in-cell (PIC) algorithm is a very efficient
method for modeling short-pulse laser or relativistic charged particle
beam-plasma interactions. In this algorithm, the plasma response to a
non-evolving laser or particle beam is calculated using Maxwell's equations
based on the quasi-static approximate equations that exclude radiation. The
plasma fields are then used to advance the laser or beam forward using a large
time step. The algorithm is many orders of magnitude faster than a 3D fully
explicit relativistic electromagnetic PIC algorithm. It has been shown to be
capable of accurately modeling the evolution of lasers and particle beams in a
variety of scenarios. At the same time, an algorithm in which the fields,
currents and Maxwell equations are decomposed into azimuthal harmonics has been
shown to reduce the complexity of a 3D explicit PIC algorithm to that of a 2D
algorithm when the expansion is truncated while maintaining accuracy for
problems with near azimuthal symmetry. This hybrid algorithm uses a PIC
description in r-z and a gridless description in $\phi$. We describe a novel
method that combines the quasi-static and hybrid PIC methods. This algorithm
expands the fields, charge and current density into azimuthal harmonics. A set
of quasi-static field equations is derived for each harmonic. The complex
amplitudes of the fields are then solved for using the finite difference method.
The beam and plasma particles are advanced in Cartesian coordinates using the
total fields. Details on how this algorithm was implemented using a similar
workflow to an existing quasi-static code, QuickPIC, are presented. The new
code is called QPAD for QuickPIC with Azimuthal Decomposition. Benchmarks and
comparisons between a fully 3D explicit PIC code, a full 3D quasi-static code,
and the new quasi-static PIC code with azimuthal decomposition are also
presented.
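The azimuthal decomposition underlying QPAD has the standard form in which every field and source is expanded in exponential harmonics and truncated at a small $m_{\max}$ for nearly axisymmetric problems (the notation below is generic, not necessarily the paper's):

```latex
F(r,\phi,z) = \sum_{m=-m_{\max}}^{m_{\max}} F^{m}(r,z)\, e^{im\phi},
\qquad F^{-m} = \bigl(F^{m}\bigr)^{*} \ \ \text{for real fields}.
```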
|
Growing networks decorated with antiferromagnetically coupled spins are
archetypal examples of complex systems due to the frustration and the
multivalley character of their energy landscapes. Here we use the damage
spreading method (DS) to investigate the cohesion of spin avalanches in the
exponential networks and the scale-free networks. On the contrary to the
conventional methods, the results obtained from DS suggest that the avalanche
spectra are characterized by the same statistics as the degree distribution in
their home networks. Further, the obtained mean range $Z$ of an avalanche, i.e.
the maximal distance reached by an avalanche from the damaged site, scales with
the avalanche size $s$ as $Z/N^\beta =f(s/N^{\alpha})$, where $\alpha=0.5$ and
$\beta=0.33$. These values hold for both kinds of networks when the number
$M$ of nodes to which new nodes attach lies between 4 and 10; a check for $M=25$
confirms these values as well.
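The quoted scaling form can be checked by a data collapse: rescaling each measured pair $(s, Z)$ by the network size should bring all runs onto one master curve. A minimal numpy sketch, with the data layout as an assumption:

```python
import numpy as np

def rescale_for_collapse(samples, alpha=0.5, beta=0.33):
    """Rescale avalanche records to test Z / N**beta = f(s / N**alpha).

    samples: iterable of (N, s, Z) triples from damage-spreading runs.
    Returns (x, y) arrays that should collapse onto one curve if the
    scaling form holds.
    """
    N, s, Z = np.asarray(list(samples), dtype=float).T
    return s / N**alpha, Z / N**beta
```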
|
Combining dependent p-values to evaluate the global null hypothesis presents
a longstanding challenge in statistical inference, particularly when
aggregating results from diverse methods to boost signal detection. P-value
combination tests using heavy-tailed distribution based transformations, such
as the Cauchy combination test and the harmonic mean p-value, have recently
garnered significant interest for their potential to efficiently handle
arbitrary p-value dependencies. Despite their growing popularity in practical
applications, there is a gap in comprehensive theoretical and empirical
evaluations of these methods. This paper conducts an extensive investigation,
revealing that, theoretically, while these combination tests are asymptotically
valid for pairwise quasi-asymptotically independent test statistics, such as
bivariate normal variables, they are also asymptotically equivalent to the
Bonferroni test under the same conditions. However, extensive simulations
unveil their practical utility, especially in scenarios where stringent type-I
error control is not necessary and signals are dense. Both the heaviness of the
distribution and its support substantially impact the tests' non-asymptotic
validity and power, and we recommend using a truncated Cauchy distribution in
practice. Moreover, we show that under the violation of quasi-asymptotic
independence among test statistics, these tests remain valid and, in fact, can
be considerably less conservative than the Bonferroni test. We also present two
case studies in genetics and genomics, showcasing the potential of the
combination tests to significantly enhance statistical power while effectively
controlling type-I errors.
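For concreteness, the two transformations studied above are short to state: the Cauchy combination test pushes p-values through the standard Cauchy quantile, and the harmonic mean p-value takes a weighted harmonic mean. A minimal sketch; the uniform weights are an assumption, and the raw HMP shown here is anti-conservative without the Landau-distribution correction.

```python
import numpy as np

def cauchy_combination(pvalues, weights=None):
    """Cauchy combination test: returns the combined p-value."""
    p = np.asarray(pvalues, dtype=float)
    w = np.full_like(p, 1.0 / p.size) if weights is None else np.asarray(weights)
    t = np.sum(w * np.tan((0.5 - p) * np.pi))      # heavy-tailed transform
    return 0.5 - np.arctan(t) / np.pi

def harmonic_mean_pvalue(pvalues, weights=None):
    """Raw (uncorrected) harmonic mean p-value."""
    p = np.asarray(pvalues, dtype=float)
    w = np.full_like(p, 1.0 / p.size) if weights is None else np.asarray(weights)
    return np.sum(w) / np.sum(w / p)
```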
|
The indirect effect of an exposure on an outcome through an intermediate
variable can be identified by a product of regression coefficients under
certain causal and regression modeling assumptions. Thus, the null hypothesis
of no indirect effect is a composite null hypothesis, as the null holds if
either regression coefficient is zero. A consequence is that existing
hypothesis tests are either severely underpowered near the origin (i.e., when
both coefficients are small with respect to standard errors) or do not preserve
type 1 error uniformly over the null hypothesis space. We propose hypothesis
tests that (i) preserve level-$\alpha$ type 1 error, (ii) meaningfully improve
power when both true underlying effects are small relative to sample size, and
(iii) preserve power when at least one is not. One approach gives a closed-form
test that is minimax optimal with respect to local power over the alternative
parameter space. Another uses sparse linear programming to produce an
approximately optimal test for a Bayes risk criterion. We provide an R package
that implements the minimax optimal test.
|
In the classical non-adaptive group testing setup, pools of items are tested
together, and the main goal of a recovery algorithm is to identify the
"complete defective set" given the outcomes of different group tests. In
contrast, the main goal of a "non-defective subset recovery" algorithm is to
identify a "subset" of non-defective items given the test outcomes. In this
paper, we present a suite of computationally efficient and analytically
tractable non-defective subset recovery algorithms. By analyzing the
probability of error of the algorithms, we obtain bounds on the number of tests
required for non-defective subset recovery with arbitrarily small probability
of error. Our analysis accounts for the impact of both the additive noise
(false positives) and dilution noise (false negatives). By comparing with the
information theoretic lower bounds, we show that the upper bounds on the number
of tests are order-wise tight up to a $\log^2K$ factor, where $K$ is the number
of defective items. We also provide simulation results that compare the
relative performance of the different algorithms and provide further insights
into their practical utility. The proposed algorithms significantly outperform
the straightforward approaches of testing items one-by-one, and of first
identifying the defective set and then choosing the non-defective items from
the complement set, in terms of the number of measurements required to ensure a
given success rate.
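The simplest special case conveys the idea: in the noiseless setting, every item appearing in at least one negative pool is certainly non-defective, so a non-defective subset can be read off directly from the outcomes. The sketch below shows only this noiseless rule, not the paper's noise-robust algorithms.

```python
import numpy as np

def nondefective_subset(test_matrix, outcomes, subset_size):
    """Noiseless non-defective subset recovery.

    test_matrix: (T x N) 0/1 pooling matrix (row t = items in pool t).
    outcomes: length-T 0/1 results (1 = positive test).
    Returns up to subset_size items guaranteed non-defective.
    """
    A = np.asarray(test_matrix)
    negative = np.where(np.asarray(outcomes) == 0)[0]
    covered = np.where(A[negative].sum(axis=0) > 0)[0]   # items in a negative pool
    return covered[:subset_size]
```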
|
Is undecidability a requirement for open-ended evolution (OEE)? Using methods
derived from algorithmic complexity theory, we propose robust computational
definitions of open-ended evolution and the adaptability of computable
dynamical systems. Within this framework, we show that decidability imposes
absolute limits to the stable growth of complexity in computable dynamical
systems. Conversely, systems that exhibit (strong) open-ended evolution must be
undecidable, establishing undecidability as a requirement for such systems.
Complexity is assessed in terms of three measures: sophistication, coarse
sophistication and busy beaver logical depth. These three complexity measures
assign low complexity values to random (incompressible) objects. As time grows,
the stated complexity measures allow for the existence of complex states during
the evolution of a computable dynamical system. We show, however, that finding
these states involves undecidable computations. We conjecture that for similar
complexity measures that assign low complexity values, decidability imposes
comparable limits to the stable growth of complexity, and that such behaviour
is necessary for non-trivial evolutionary systems. We show that the
undecidability of adapted states imposes novel and unpredictable behaviour on
the individuals or populations being modelled. Such behaviour is irreducible.
Finally, we offer an example of a system, first proposed by Chaitin, that
exhibits strong OEE.
|
A proof is concurrent zero-knowledge if it remains zero-knowledge when many
copies of the proof are run in an asynchronous environment, such as the
Internet. It is known that zero-knowledge is not necessarily preserved in such
an environment. Designing concurrent zero-knowledge proofs is a fundamental
issue in the study of zero-knowledge since known zero-knowledge protocols
cannot be run in a realistic modern computing environment. In this paper we
present a concurrent zero-knowledge proof system for all languages in NP.
Currently, the proof system we present is the only known proof system that
retains the zero-knowledge property when copies of the proof are allowed to run
in an asynchronous environment. Our proof system has $\tilde{O}(\log^2 k)$
rounds (for a security parameter $k$), which is almost optimal, as it was shown
by Canetti, Kilian, Petrank, and Rosen that black-box concurrent zero-knowledge
requires $\tilde{\Omega}(\log k)$ rounds.
Canetti, Goldreich, Goldwasser and Micali introduced the notion of {\em
resettable} zero-knowledge, and modified an earlier version of our proof system
to obtain the first resettable zero-knowledge proof system. This protocol
requires $k^{\Theta(1)}$ rounds. We note that their technique also applies to
our current proof system, yielding a resettable zero-knowledge proof for NP
with $\tilde{O}(\log^2 k)$ rounds.
|
Accurate prediction of students' knowledge is a fundamental building block of
personalized learning systems. Here, we propose a novel ensemble model to
predict student knowledge gaps. Applying our approach to student trace data
from the online educational platform Duolingo, we achieved the highest score on
both evaluation metrics for all three datasets in the 2018 Shared Task on
Second Language Acquisition Modeling. We describe our model and discuss the
relevance of the task compared to how it would be set up in a production
environment for personalized education.
|
The string theory swampland proposes that there is no UV-completion for an
effective field theory with an exact (metastable) de Sitter vacuum, thereby
ruling out standard $\Lambda$CDM cosmology if the conjecture is taken
seriously. The swampland criteria have also been shown to be in sharp tension
with quintessence models under current and forthcoming observational bounds. As
a logical next step, we introduce higher derivative self-interactions in the
low-energy effective Lagrangian and show that one can satisfy observational
constraints as well as the swampland criteria for some specific models. In
particular, the cubic Galileon term, in the presence of an exponential
potential, is examined to demonstrate that parts of the Horndeski parameter
space survives the swampland and leads to viable cosmological histories.
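Schematically, the Lagrangian considered is of the cubic Horndeski form with an exponential potential; the signs and normalizations below are illustrative conventions, not the paper's exact choice:

```latex
\mathcal{L} = X - V_0\, e^{-\lambda \phi/M_{\mathrm{Pl}}}
  + \frac{c}{\Lambda^{3}}\, X\, \Box\phi ,
\qquad X \equiv -\tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi .
```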
|
Polymer compounds from titania-doped polyethylene are fabricated and their
linear optical properties characterized by THz-TDS. We show that a high
concentration of dopants not only enhances the refractive index of the
composite material, but can also dramatically raise its absorption coefficient.
We demonstrate that the design of Bragg reflectors based on lossy composite
polymers depends on finding a compromise between index contrast and
corresponding losses. A small absorption value is also shown to be favorable,
compared to an ideal lossless reflector, as it smooths the
transmission passbands. Transmission measurements of a fabricated hollow-core
Bragg fiber confirm the simulation results.
|
We have investigated the ground state properties of the orthorhombic
structure compound PrRu$_2$Ga$_8$ through electronic and magnetic properties
studies. The compound crystallizes in the CaCo$_2$Al$_8$-type structure,
belonging to space group $Pbam$ (No. 55). The temperature dependence of the
specific heat shows a $\lambda$-type anomaly at $T_N$ = 3.3 K, indicating a bulk phase
transition probably of antiferromagnetic origin. At the N\'{e}el temperature
$T_N$, the entropy approaches the value of 4.66~J/mol.K, which is about
0.8$R\ln(2)$, where $R$ is the universal gas constant. The analysis of the low
temperature specific heat gives $\gamma$ = 46 mJ/mol.K$^2$. The temperature
dependence of the DC magnetic susceptibility $\chi(T)$ confirms the anomaly at 3.3 K
and follows the Curie-Weiss law for temperatures above 50~K, with the
calculated effective magnetic moment, $\mu_\mathrm{{eff}}$ = 3.47(2)~$\mu_B$/Pr
and Weiss temperature $\theta_p$ = --7.80(1)~K. This effective magnetic moment
value is in good agreement with the Hund$^\prime$s rule theoretical free-ion
value of 3.58~$\mu_B$ for Pr$^{3+}$. The electrical resistivity data also shows
an anomaly at $T_N$ and a broad curvature at intermediate temperatures probably
due to crystalline electric field (CEF) effects. The Pr$^{3+}$ ion in this
structure type has a site symmetry of $C_s$, which predicts a CEF splitting of
the $J$ = 4 multiplet into 9 singlets and thus in principle rules out the
occurrence of spontaneous magnetic order. In this article we discuss the
magnetic order in PrRu$_2$Ga$_8$ in line with an induced type of magnetism
resulting from the admixture of the lowest CEF level with the first excited
state.
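For reference, the Curie-Weiss analysis quoted above fits the high-temperature susceptibility to the standard form, from which the effective moment follows (CGS molar units assumed):

```latex
\chi(T) = \frac{C}{T - \theta_p}, \qquad
\mu_{\mathrm{eff}} = \sqrt{\frac{3 k_B C}{N_A}} \approx 2.828\,\sqrt{C}\ \mu_B .
```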
|
A complex quantum system can be constructed by coupling simple quantum
elements to one another. For example, trapped-ion or superconducting quantum
bits may be coupled by Coulomb interactions, mediated by the exchange of
virtual photons. Alternatively, quantum objects can be coupled by the exchange
of real photons, particularly when driven within resonators that amplify
interactions with a single electro-magnetic mode. However, in such an open
system, the capacity of a coupling channel to convey quantum information or
generate entanglement may be compromised. Here, we realize phase-coherent
interactions between two spatially separated, near-ground-state mechanical
oscillators within a driven optical cavity. We also observe the noise imparted
by the optical coupling, which results in correlated mechanical fluctuations of
the two oscillators. Achieving the quantum backaction dominated regime opens
the door to numerous applications of cavity optomechanics with a complex
mechanical system. Our results thereby illustrate the potential, and also the
challenge, of coupling quantum objects with light.
|
Materials with non-Kramers doublet ground states naturally manifest the
two-channel Kondo effect, as the valence fluctuations are from a non-Kramers
doublet ground state to an excited Kramers doublet. Here, the development of a
heavy Fermi liquid requires a channel symmetry breaking spinorial hybridization
that breaks both single and double time-reversal symmetry, and is known as
hastatic order. Motivated by cubic Pr-based materials with $\Gamma_3$
non-Kramers ground state doublets, this paper provides a survey of cubic
hastatic order using the simple two-channel Kondo-Heisenberg model. Hastatic
order necessarily breaks time-reversal symmetry, but the spatial arrangement of
the hybridization spinor can be either uniform (ferrohastatic) or break
additional lattice symmetries (antiferrohastatic). The experimental signatures
of both orders are presented in detail, and include tiny conduction electron
magnetic moments. Interestingly, there can be several distinct
antiferrohastatic orders with the same moment pattern that break different
lattice symmetries, revealing a potential experimental route to detect the
spinorial nature of the hybridization. We employ an SU(N) fermionic mean-field
treatment on square and simple cubic lattices, and examine how the nature and
stability of hastatic order varies as we vary the Heisenberg coupling,
conduction electron density, band degeneracies, and apply both channel and spin
symmetry breaking fields. We find that both ferrohastatic and several types of
antiferrohastatic orders are stabilized in different regions of the mean-field
phase diagram, and evolve differently in strain and magnetic fields.
|