We study continual learning for natural language instruction generation, by
observing human users' instruction execution. We focus on a collaborative
scenario, where the system both acts and delegates tasks to human users using
natural language. We compare user execution of generated instructions to the
original system intent as an indication of the system's success in communicating
its intent. We show how to use this signal to improve the system's ability to
generate instructions via contextual bandit learning. In interaction with real
users, our system demonstrates dramatic improvements in its ability to generate
language over time.
|
We study the decay of the false vacuum in the scaling Ising and tricritical
Ising field theories using the Truncated Conformal Space Approach and compare
the numerical results to theoretical predictions in the thin wall limit. In the
Ising case, the results are consistent with previous studies on the quantum
spin chain and the $\varphi^4$ quantum field theory; in particular we confirm
that while the theoretical predictions get the dependence of the bubble
nucleation rate on the latent heat right, they are off by a model-dependent
overall coefficient. The tricritical Ising model, on the other hand, allows us to
examine more exotic vacuum degeneracy structures, such as three vacua or two
asymmetric vacua, which lead us to study several novel scenarios of false
vacuum decay by lifting the vacuum degeneracy using different perturbations.
|
We have used the Spitzer 22-um peakup array to observe thermal emission from
the nucleus and trail of comet 103P/Hartley 2, the target of NASA's Deep Impact
Extended mission. The comet was observed on UT 2008 August 12 and 13, while the
comet was 5.5 AU from the Sun. We obtained two 200-frame sets of photometric
imaging over a 2.7-hour period. To within the errors of the measurement, we
detect no temporal variation between the two images. The comet
showed extended emission beyond a point source in the form of a faint trail
directed along the comet's anti-velocity vector. After modeling and removing
the trail emission, a NEATM model for the nuclear emission with beaming
parameter of 0.95 +/- 0.20 indicates a small effective radius for the nucleus
of 0.57 +/- 0.08 km and low geometric albedo 0.028 +/- 0.009 (1 sigma). With
this nucleus size and a water production rate of 3 x 10^28 molecules s^-1 at
perihelion (A'Hearn et al. 1995), we estimate that ~100% of the surface area is
actively emitting volatile material at perihelion. Reports of emission activity
out to ~5 AU (Lowry et al. 2001, Snodgrass et al. 2008) support our finding of
a highly active nuclear surface. Compared to Deep Impact's first target, comet
9P/Tempel 1, Hartley 2's nucleus is one-fifth as wide (and about one-hundredth
the mass) while producing a similar amount of outgassing at perihelion with
about 13 times the active surface fraction. Unlike Tempel 1, it should be
highly susceptible to jet-driven spin-up torques, and so could be rotating at a
much higher frequency. Barring a catastrophic breakup or major fragmentation
event, the comet should be able to survive up to another 100 apparitions (~700
yrs) at its current rate of mass loss.
|
Developing and integrating advanced image sensors with novel algorithms in
camera systems is increasingly prevalent with the growing demand for computational
photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for in-depth exchange of views between
industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). Building on the success of the 1st MIPI Workshop@ECCV
2022, we introduce the second MIPI challenge, which includes four tracks focusing on
novel image sensors and imaging algorithms. In this paper, we summarize and
review the Nighttime Flare Removal track of MIPI 2023. In total, 120
participants were successfully registered, and 11 teams submitted results in
the final testing phase. The developed solutions in this challenge achieved
state-of-the-art performance on Nighttime Flare Removal. A detailed description
of all models developed in this challenge is provided in this paper. More
details of this challenge and the link to the dataset can be found at
https://mipi-challenge.org/MIPI2023/ .
|
We theoretically study the dynamic time evolution following laser pulse
pumping in an antiferromagnetic insulator Cr$_{2}$O$_{3}$. From the
photoexcited high-spin quartet states to the long-lived low-spin doublet
states, the ultrafast demagnetization processes are investigated by solving the
dissipative Schr\"odinger equation. We find that the demagnetization times are
of the order of hundreds of femtoseconds, in good agreement with recent
experiments. The switching times could be strongly reduced by properly tuning
the energy gaps between the multiplet energy levels of Cr$^{3+}$. Furthermore,
the relaxation times also depend on the hybridization of atomic orbitals in the
first photoexcited state. Our results suggest that the selective manipulation
of electronic structure by engineering stress-strain or chemical substitution
allows effective control of the magnetic state switching in photoexcited
insulating transition-metal oxides.
|
A model describing the influence of torsion stress on the giant
magnetoimpedance in amorphous wires with negative magnetostriction is proposed.
The wire impedance is found by means of the solution of Maxwell equations
together with the Landau-Lifshitz equation, assuming a simplified spatial
distribution of the magnetoelastic anisotropy induced by the torsion stress.
The impedance is analyzed as a function of the external magnetic field, torsion
stress and frequency. It is shown that the torsion dependence of the
magnetoimpedance ratio has an asymmetric shape, with a sharp peak at some value of the
torsion stress. The calculated field and stress dependences of the impedance
are in qualitative agreement with results of the experimental study of the
torsion stress giant magnetoimpedance in Co-based amorphous wires.
|
The concept of reflection positivity has its origins in the work of
Osterwalder--Schrader on constructive quantum field theory. It is a fundamental
tool to construct a relativistic quantum field theory as a unitary
representation of the Poincar\'e group from a non-relativistic field theory as a
representation of the euclidean motion group. This is the second article in a
series on the mathematical foundations of reflection positivity. We develop the
theory of reflection positive one-parameter groups and the dual theory of
dilations of contractive hermitian semigroups. In particular, we connect
reflection positivity with the outgoing realization of unitary one-parameter
groups by Lax and Phillips. We further show that our results provide effective
tools to construct reflection positive representations of general symmetric Lie
groups, including the ax+b-group, the Heisenberg group, the euclidean motion
group and the euclidean conformal group.
|
Recent observations of high energy (> 20 keV) X-ray emission in a few
clusters extend and broaden our knowledge of physical phenomena in the
intracluster space. This emission is likely to be nonthermal, probably
resulting from Compton scattering of relativistic electrons by the cosmic
microwave background radiation. Direct evidence for the presence of
relativistic electrons in some 30 clusters comes from measurements of extended
radio emission in their central regions. I first review the results from RXTE
and BeppoSAX measurements of a small sample of clusters, and then discuss their
implications for the mean values of intracluster magnetic fields and
relativistic electron energy densities. Implications for the origin of the
fields and electrons are briefly considered.
|
Unsteady laminar vortex shedding over a circular cylinder is predicted using
a deep learning technique, a generative adversarial network (GAN), with a
particular emphasis on elucidating the potential of learning the solution of
the Navier-Stokes equations. Numerical simulations at two different Reynolds
numbers with different time-step sizes are conducted to produce training
datasets of flow field variables. Future unsteady flow fields at a
Reynolds number that is not in the training datasets are predicted using the
GAN. Predicted flow fields are found to qualitatively and quantitatively agree
well with flow fields calculated by numerical simulations. The present study
suggests that a deep learning technique can be utilized for prediction of
laminar wake flow in lieu of solving the Navier-Stokes equations.
|
This paper describes a general-purpose programming technique, called the
Simulation of Simplicity, which can be used to cope with degenerate input data
for geometric algorithms. It relieves the programmer of the task of providing a
consistent treatment for every single special case that can occur. The programs
that use the technique tend to be considerably smaller and more robust than
those that do not use it. We believe that this technique will become a standard
tool in writing geometric software.
|
In the Network Inference problem, one seeks to recover the edges of an
unknown graph from the observations of cascades propagating over this graph. In
this paper, we approach this problem from the sparse recovery perspective. We
introduce a general model of cascades, including the voter model and the
independent cascade model, for which we provide the first algorithm which
recovers the graph's edges with high probability and $O(s\log m)$ measurements
where $s$ is the maximum degree of the graph and $m$ is the number of nodes.
Furthermore, we show that our algorithm also recovers the edge weights (the
parameters of the diffusion process) and is robust in the context of
approximate sparsity. Finally we prove an almost matching lower bound of
$\Omega(s\log\frac{m}{s})$ and validate our approach empirically on synthetic
graphs.
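
To illustrate the sparse-recovery viewpoint described above (a minimal sketch only, not the paper's estimator; the graph size, cascade parameters, and regularization constant are arbitrary illustrative choices), each node's incoming edges can be recovered with an L1-penalized regression on cascade snapshots:

```python
# Hedged sketch: recover a sparse graph from independent-cascade observations
# via L1-regularized logistic regression (one sparse-recovery problem per node).
# Illustrates the general idea only; not the algorithm from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
m, s, w = 30, 3, 0.4                      # nodes, max in-degree, infection prob.
W = np.zeros((m, m))                      # W[j, i] = weight of edge j -> i
for i in range(m):
    parents = rng.choice([j for j in range(m) if j != i], size=s, replace=False)
    W[parents, i] = w

def one_cascade_step(active):
    """One round of the independent cascade model: active nodes try to infect."""
    prob_not_infected = np.prod(1.0 - W * active[:, None], axis=0)
    return (rng.random(m) < 1.0 - prob_not_infected) & (~active.astype(bool))

X, Y = [], []                             # (who was active) -> (who got infected next)
for _ in range(4000):                     # plenty of snapshots for this illustration
    active = (rng.random(m) < 0.2).astype(float)
    X.append(active)
    Y.append(one_cascade_step(active))
X, Y = np.array(X), np.array(Y)

W_hat = np.zeros_like(W)
for i in range(m):
    rows = X[:, i] == 0                   # only steps where node i was not already active
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X[rows], Y[rows, i])
    W_hat[:, i] = clf.coef_.ravel()

recovered, true_edges = np.abs(W_hat) > 1e-3, W > 0
tp = (recovered & true_edges).sum()
print("recall: %.2f  precision: %.2f"
      % (tp / true_edges.sum(), tp / max(recovered.sum(), 1)))
```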
|
We consider global fluctuations of the spectrum of the GUE. Using results on
the linear statistics of such matrices as well as variance bounds on the
eigenvalues, we show that under a suitable scaling, global fluctuations of the
spectrum can be asymptotically described in terms of a logarithmically
correlated Gaussian field. We also discuss briefly connections between
different objects in RMT giving rise to log-correlated fields: the logarithm of
the absolute value of the characteristic polynomial, the eigenvalue counting
function, and the field of fluctuations of the eigenvalues around their
expected locations.
|
The algebra of generalized linear quantum canonical transformations is
examined from the perspective of Schwinger's unitary-canonical basis. Formulation
of the quantum phase problem within the theory of quantum canonical
transformations and in particular with the generalized quantum action-angle
phase space formalism is established and it is shown that the conceptual
foundation of the quantum phase problem lies within the algebraic properties of
the quantum canonical transformations in the quantum phase space. The
representations of the Wigner function in the generalized action-angle unitary
operator pair for certain Hamiltonian systems with dynamical symmetry are
examined. This generalized canonical formalism is applied to the quantum
harmonic oscillator to examine the properties of the unitary quantum phase
operator as well as the action-angle Wigner function.
|
Asymmetric Dark Stars, i.e., compact objects formed from the collapse of
asymmetric dark matter, could potentially produce a detectable photon flux if
dark matter particles self-interact via dark photons that kinetically mix with
ordinary photons. The morphology of the emitted spectrum is significantly
different and therefore distinguishable from a typical black-body one. Given
the above and the fact that asymmetric dark stars can have masses outside the
range of neutron stars, the detection of such a spectrum can be considered as a
smoking gun signature for the existence of these exotic stars.
|
A recent proposal for a superdeterministic account of quantum mechanics,
named Invariant-set theory, appears to bring ideas from several diverse fields
like chaos theory, number theory and dynamical systems to quantum foundations.
However, a clear-cut hidden-variable model has not been developed, which makes
it difficult to assess the proposal from a quantum foundational perspective. In
this article, we first build a hidden-variable model based on the proposal, and
then critically analyse several aspects of the proposal using the model. We
show that several arguments related to counter-factual measurements,
nonlocality, non-commutativity of quantum observables, measurement independence
etcetera that appear to work in the proposal fail when considered in our model.
We further show that our model is not only superdeterministic but also
nonlocal, with an ontic quantum state. We argue that the bit string defined in
the model is a hidden variable and that it contains redundant information.
Lastly, we apply the analysis developed in a previous work (Proc. R. Soc. A,
476(2243):20200214, 2020) to illustrate the issue of superdeterministic
conspiracy in the model. Our results lend further support to the view that
superdeterminism is unlikely to solve the puzzle posed by the Bell
correlations.
|
Essential features of the Multigrid Ensemble Kalman Filter (G. Moldovan, G.
Lehnasch, L. Cordier, M. Meldi, A multigrid/ensemble Kalman filter strategy for
assimilation of unsteady flows, Journal of Computational Physics 443, 110481)
recently proposed for Data Assimilation of fluid flows are investigated and
assessed in this article. The analysis is focused on the improvement in
performance due to the inner loop. In this step, data from solutions calculated
on the higher resolution levels of the multigrid approach are used as surrogate
observations to improve the model prediction on the coarsest levels of the
grid. The latter represents the level of resolution used to run the ensemble
members for global Data Assimilation. The method is tested over two classical
one-dimensional problems, namely the linear advection problem and the Burgers'
equation. The analyses encompass a number of different aspects, such as
different grid resolutions. The results indicate that the contribution of the
inner loop is essential in obtaining accurate flow reconstruction and global
parametric optimization. These findings open exciting perspectives of
application to grid-dependent reduced-order models extensively used in fluid
mechanics applications for complex flows, such as Large Eddy Simulation (LES).
|
Can the spatial distance between two identical particles be explained in
terms of the extent that one can be distinguished from the other? Is the
geometry of space a macroscopic manifestation of an underlying microscopic
statistical structure? Is geometrodynamics derivable from general principles of
inductive inference? Tentative answers are suggested by a model of
geometrodynamics based on the statistical concepts of entropy, information
geometry, and entropic dynamics.
|
Graphene has an extremely high carrier mobility partly due to its planar
mirror symmetry inhibiting scattering by the highly occupied acoustic flexural
phonons. Electrostatic gating of a graphene device can break the planar mirror
symmetry yielding a coupling mechanism to the flexural phonons. We examine the
effect of the gate-induced one-phonon scattering on the mobility for several
gate geometries and dielectric environments using first-principles calculations
based on density functional theory (DFT) and the Boltzmann equation. We
demonstrate that this scattering mechanism can be a mobility-limiting factor,
and show how the carrier density and temperature scaling of the mobility
depends on the electrostatic environment. Our findings may explain the high
deformation potential for in-plane acoustic phonons extracted from experiments
and furthermore suggest a direct relation between device symmetry and resulting
mobility.
|
We study the electric permittivity of the QED vacuum in the presence of a
strong constant electric field, motivated by the analogy between the
dynamically-assisted Schwinger effect in strong-field QED and the Franz-Keldysh
effect in semiconductor physics. We develop a linear-response theory based on
the non-equilibrium in-in formalism and the Furry-picture perturbation theory,
with which, also utilizing the Kramers-Kronig relation, we calculate the
electric permittivity without assuming weak fields and low-frequency probes. We
discover that the electric permittivity exhibits characteristic oscillating
dependence on the probe frequency, which directly reflects the change of the
QED-vacuum structure by the strong field. We also establish a quantitative
correspondence between the electric permittivity and the number of
electron-positron pairs produced by the dynamically-assisted Schwinger effect.
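
For reference, the Kramers-Kronig relation invoked above is used in its standard form for a causal response function, written (up to unit conventions) for the susceptibility $\chi(\omega) = \epsilon(\omega) - 1$:
\[
  \mathrm{Re}\,\chi(\omega) \;=\; \frac{2}{\pi}\,\mathcal{P}\!\int_0^{\infty}
  \frac{\omega'\,\mathrm{Im}\,\chi(\omega')}{\omega'^{2}-\omega^{2}}\,\mathrm{d}\omega' ,
\]
where $\mathcal{P}$ denotes the Cauchy principal value; the absorptive (imaginary) part is what the pair-production probability feeds into, consistent with the quantitative correspondence between the permittivity and the produced pair number mentioned above.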
|
The generation of laser-driven dense relativistic electron layers from
ultra-thin foils and their use for coherent Thomson backscattering is
discussed, applying analytic theory and one-dimensional particle-in-cell
simulation. The blow-out regime is explored in which all foil electrons are
separated from ions by direct laser action. The electrons follow the light wave
close to its leading front. Single electron solutions are applied to initial
acceleration, phase switching, and second-stage boosting. Coherently reflected
light shows Doppler-shifted spectra, chirped over several octaves. The Doppler
shift is found to be proportional to \gamma_x^2=1/(1-\beta_x^2), where \beta_x
is the electron velocity component in the normal direction of the electron layer,
which is also the direction of the driving laser pulse. Due to the transverse
electron momentum p_y, the Doppler shift of
4*\gamma_x^2=4*\gamma^2/(1+(p_y/mc)^2) ~= 2*\gamma is significantly smaller
than the full shift of 4*\gamma^2. Methods to turn p_y -> 0 and to recover the full
Doppler shift are proposed and verified by 1D-PIC simulation. These methods
open new ways to design intense single attosecond pulses.
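
For completeness, the reduced Doppler factor quoted above follows directly from the relativistic energy-momentum relation; in the abstract's notation, with $\beta_x = p_x/(\gamma mc)$ and $\gamma^2 = 1 + (p_x/mc)^2 + (p_y/mc)^2$,
\[
  \gamma_x^{2} \;=\; \frac{1}{1-\beta_x^{2}}
  \;=\; \frac{\gamma^{2}}{\gamma^{2}-(p_x/mc)^{2}}
  \;=\; \frac{\gamma^{2}}{1+(p_y/mc)^{2}} ,
\]
so the backscattered upshift $4\gamma_x^{2} = 4\gamma^{2}/(1+(p_y/mc)^{2})$ reaches the full $4\gamma^{2}$ only when $p_y \to 0$, which is exactly what the proposed phase-switching methods aim at.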
|
Federated Learning (FL) aims to train machine learning models for multiple
clients without sharing their own private data. Due to the heterogeneity of
clients' local data distribution, recent studies explore the personalized FL
that learns and deploys distinct local models with the help of auxiliary global
models. However, the clients can be heterogeneous in terms of not only local
data distribution, but also their computation and communication resources. The
capacity and efficiency of personalized models are restricted by the
lowest-resource clients, leading to sub-optimal performance and limited
practicality of personalized FL. To overcome these challenges, we propose a
novel approach named pFedGate for efficient personalized FL by adaptively and
efficiently learning sparse local models. With a lightweight trainable gating
layer, pFedGate enables clients to reach their full potential in model capacity
by generating different sparse models accounting for both the heterogeneous
data distributions and resource constraints. Meanwhile, the computation and
communication efficiency are both improved thanks to the adaptability between
the model sparsity and clients' resources. Further, we theoretically show that
the proposed pFedGate has superior complexity with guaranteed convergence and
generalization error. Extensive experiments show that pFedGate achieves
superior global accuracy, individual accuracy and efficiency simultaneously
over state-of-the-art methods. We also demonstrate that pFedGate performs
better than competitors in the novel-client participation and partial-client
participation scenarios, and can learn meaningful sparse local models adapted
to different data distributions.
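
As a rough illustration of what a lightweight trainable gating layer producing resource-aware sparse sub-models can look like (a sketch with invented module and parameter names, using a generic straight-through estimator; it is not the actual pFedGate implementation):

```python
# Hedged sketch of a trainable gating layer that turns a client descriptor and
# a resource-dependent sparsity budget into a block-wise mask (illustrative only).
import torch
import torch.nn as nn

class BlockGate(nn.Module):
    def __init__(self, n_blocks: int, client_dim: int = 8):
        super().__init__()
        self.score = nn.Linear(client_dim, n_blocks)  # lightweight: a single linear map

    def forward(self, client_vec: torch.Tensor, keep_ratio: float) -> torch.Tensor:
        scores = torch.sigmoid(self.score(client_vec))      # soft per-block scores
        k = max(1, int(keep_ratio * scores.numel()))        # budget from client resources
        hard = torch.zeros_like(scores)
        hard[torch.topk(scores, k).indices] = 1.0           # keep only the top-k blocks
        # straight-through estimator: hard mask in the forward pass, soft gradient backward
        return hard + scores - scores.detach()

# Usage: mask the parameter blocks of the local model before its forward pass.
blocks = [torch.randn(16, 16, requires_grad=True) for _ in range(10)]
gate = BlockGate(n_blocks=len(blocks))
mask = gate(torch.randn(8), keep_ratio=0.3)                 # e.g. a low-resource client
sparse_blocks = [m * b for m, b in zip(mask, blocks)]       # client-specific sparse model
```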
|
We carried out a comprehensive study of the electronic, magnetic, and
thermodynamic properties of Ni-doped ZrTe$_2$. High quality
Ni$_{0.04}$ZrTe$_{1.89}$ single crystals show a possible coexistence of charge
density waves (CDW, T$_{CDW}\approx287$\,K) with superconductivity (T$_c\approx
4.1$\,K), which we report here for the first time. The temperature dependence
of the lower (H$_{c_1}$) and upper (H$_{c_2}$) critical magnetic fields both
deviate significantly from the behaviors expected in conventional single-gap
s-wave superconductors. However, the behaviors of the normalized superfluid
density $\rho_s(T)$ and H$_{c_2}(T)$ can be described well using a two-gap
model for the Fermi surface, in a manner consistent with conventional multiband
superconductivity. Electrical resistivity and specific heat measurements show
clear anomalies centered near 287\,K consistent with a CDW phase transition.
Additionally, electronic-structure calculations support the coexistence of
electron-phonon multiband superconductivity and CDW order due to the
compensated disconnected nature of the electron- and hole-pockets at the Fermi
surface. Our electronic structure calculations also suggest that ZrTe$_2$ could
reach a non-trivial topological type-II Dirac semimetallic state. These
findings highlight that Ni-doped ZrTe$_2$ can be uniquely important for probing
the coexistence of superconducting and CDW ground states in an electronic
system with non-trivial topology.
|
Solving the 't Hooft large-$N$ expansion of QCD by a canonical string theory,
with closed strings for the glueballs and open strings for the mesons, is a
long-standing problem that has resisted all attempts despite the advent of the
celebrated gauge/gravity duality in the framework of string theory. We
demonstrate that in the canonical string framework such a solution does not
actually exist because an inconsistency arises between the renormalization
properties of the QCD S matrix at large $N$ -- a consequence of the asymptotic
freedom (AF) -- and the open/closed duality of the would-be string solution.
Specifically, the would-be open-string one-loop corrections to the tree
glueball amplitudes must be ultraviolet (UV) divergent. Hence, naively, the
inconsistency arises because these amplitudes are dual to tree closed-string
diagrams, which are universally believed to be both UV finite -- since they are
closed-string tree diagrams -- and infrared finite because of the glueball mass
gap. In fact, the inconsistency follows from a low-energy theorem of the NSVZ
type that controls the renormalization in QCD-like theories. The inconsistency
extends to the would-be canonical string for a vast class of 't Hooft large-$N$
QCD-like theories including $\mathcal{N}=1$ SUSY QCD. We also demonstrate that
the presently existing SUSY string models with a mass gap -- such as
Klebanov-Strassler, Polchinski-Strassler (PS) and certain PS variants -- cannot
contradict the above-mentioned results since they are not asymptotically free.
Moreover, we shed light on the way the open/closed string duality may be
perturbatively realized in these string models compatibly with a mass gap in
the 't Hooft-planar closed-string sector and the low-energy theorem because of
the lack of AF. Finally, we suggest a noncanonical way-out for QCD-like
theories based on topological strings on noncommutative twistor space.
|
Neutron stars in low-mass X-ray binaries are thought to be heated up by
accretion-induced exothermic nuclear reactions in the crust. The energy release
and the location of the heating sources are important ingredients of the
thermal evolution models. Here we present thermodynamically consistent
calculations of the energy release in three zones of the stellar crust: at the
outer-inner crust interface, in the upper layers of the inner crust (up to the
density $\rho \leq 2\times 10^{12}$ g cm$^{-3}$), and in the underlying crustal
layers. We consider three representative models of thermonuclear ashes
(Superburst, Extreme rp, and Kepler ashes). The energy release in each zone is
parametrized by the pressure at the outer-inner crust interface, which encodes
all uncertainties related to the physics of the deepest inner-crust layers. Our
calculations allow us, in particular, to set new lower limits on the net energy
release (per accreted baryon): $Q\gtrsim0.28$ MeV for Extreme rp ashes and
$Q\sim 0.43$-$0.51$ MeV for Superburst and Kepler ashes.
|
Owing to isolated data islands and privacy concerns, federated learning has
attracted extensive interest, since it allows clients to collaborate on
training a global model using their local data without sharing any of it with a
third party. However, existing federated learning frameworks typically require
sophisticated environment configurations (e.g., driver setup for a standalone
graphics card such as an NVIDIA GPU, or a dedicated compilation environment),
which brings much inconvenience to large-scale development and deployment. To
facilitate the deployment of federated learning and the implementation of
related applications, we propose WebFed, a novel browser-based
federated learning framework that takes advantage of the browser's features
(e.g., cross-platform support and JavaScript programming) and enhances
privacy protection via a local differential privacy mechanism. Finally, we
conduct experiments on heterogeneous devices to evaluate the performance of the
proposed WebFed framework.
|
In [Combinatorics, Probability and Computing 16 (2007), 557 - 593, Theorem 1]
we proved a polynomial-time bound on the mixing rate of the switch chain for
sampling d-regular graphs. This corrigendum corrects a technical error in the
proof. In order to fix the error, we must multiply the bound on the mixing time
by a factor of d^8 .
|
In this paper we study interpretations and equivalences of propositional
deductive systems by using a quantale-theoretic approach introduced by Galatos
and Tsinakis. Our aim is to provide a general order-theoretic framework which
is able to describe and characterize both strong and weak forms of
interpretations among propositional deductive systems also in the cases where
the systems have different underlying languages.
|
Video-based facial affect analysis has recently attracted increasing
attention owing to its critical role in human-computer interaction. Previous
studies mainly focus on developing various deep learning architectures and
training them in a fully supervised manner. Although significant progress has
been achieved by these supervised methods, the longstanding lack of large-scale
high-quality labeled data severely hinders their further improvements.
Motivated by the recent success of self-supervised learning in computer vision,
this paper introduces a self-supervised approach, termed Self-supervised Video
Facial Affect Perceiver (SVFAP), to address the dilemma faced by supervised
methods. Specifically, SVFAP leverages masked facial video autoencoding to
perform self-supervised pre-training on massive unlabeled facial videos.
Considering that large spatiotemporal redundancy exists in facial videos, we
propose a novel temporal pyramid and spatial bottleneck Transformer as the
encoder of SVFAP, which not only enjoys low computational cost but also
achieves excellent performance. To verify the effectiveness of our method, we
conduct experiments on nine datasets spanning three downstream tasks, including
dynamic facial expression recognition, dimensional emotion recognition, and
personality recognition. Comprehensive results demonstrate that SVFAP can learn
powerful affect-related representations via large-scale self-supervised
pre-training and it significantly outperforms previous state-of-the-art methods
on all datasets. Codes will be available at https://github.com/sunlicai/SVFAP.
|
Local density fluctuations near the QCD critical point can be probed by
intermittency analysis of scaled factorial moments in relativistic heavy-ion
collisions. We report the first measurement of intermittency for charged
particles in Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 7.7-200 GeV from
the STAR experiment at RHIC. We observe scaling behaviors in central Au + Au
collisions, with the extracted scaling exponent decreasing from mid-central to
the most central Au + Au collisions. Furthermore, the scaling exponent exhibits
a non-monotonic energy dependence with a minimum around $\sqrt{s_\mathrm{NN}}$
= 20-30 GeV in central Au + Au collisions.
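
For context, the scaled factorial moments used in such intermittency analyses are commonly defined by partitioning the transverse-momentum space into $M^2$ equal cells,
\[
  F_q(M) \;=\;
  \frac{\left\langle \frac{1}{M^{2}}\sum_{i=1}^{M^{2}} n_i(n_i-1)\cdots(n_i-q+1)\right\rangle}
       {\left\langle \frac{1}{M^{2}}\sum_{i=1}^{M^{2}} n_i\right\rangle^{q}} ,
\]
where $n_i$ is the multiplicity in cell $i$ and the average runs over events. A power-law growth $F_q(M) \propto (M^{2})^{\phi_q}$ signals intermittency, and the scaling exponent $\nu$ quoted above is conventionally extracted from $F_q \propto F_2^{\beta_q}$ with $\beta_q \propto (q-1)^{\nu}$.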
|
We construct a parametrization of the deep-inelastic structure function of
the proton F_2 based on all available experimental information from charged
lepton deep-inelastic scattering experiments. The parametrization effectively
provides a bias-free determination of the probability measure in the space of
structure functions, which retains information on experimental errors and
correlations. The result is obtained in the form of a Monte Carlo sample of
neural networks trained on an ensemble of replicas of the experimental data. We
discuss in detail the techniques required for the construction of bias-free
parameterizations of large amounts of structure function data, in view of
future applications to the determination of parton distributions based on the
same method.
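
A minimal sketch of the replica-ensemble idea described above, on toy pseudodata and with an off-the-shelf regressor rather than the actual fitting framework (all numbers and the functional form are placeholders):

```python
# Hedged sketch: propagate experimental uncertainties into a neural-network fit
# by training one network per Monte Carlo replica of the data (toy example).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = np.linspace(0.01, 0.9, 60)                 # stand-in kinematic variable
f_true = x**-0.2 * (1 - x)**3                  # toy "structure function"
sigma = 0.03 * f_true + 0.005                  # toy experimental errors
data = f_true + rng.normal(0.0, sigma)         # one "measured" data set

n_rep, preds = 50, []
for _ in range(n_rep):
    replica = data + rng.normal(0.0, sigma)    # fluctuate the data within its errors
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, tol=1e-6)
    net.fit(x[:, None], replica)
    preds.append(net.predict(x[:, None]))
preds = np.array(preds)

central, band = preds.mean(axis=0), preds.std(axis=0)   # ensemble mean and 1-sigma spread
print("mean relative uncertainty of the fit: %.3f" % np.mean(band / np.abs(central)))
```

The ensemble spread plays the role of the propagated experimental uncertainty, which is the sense in which a Monte Carlo sample of networks defines a probability measure in the space of structure functions.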
|
Abductive reasoning is inference to the most plausible explanation. For
example, if Jenny finds her house in a mess when she returns from work, and
remembers that she left a window open, she can hypothesize that a thief broke
into her house and caused the mess, as the most plausible explanation. While
abduction has long been considered to be at the core of how people interpret
and read between the lines in natural language (Hobbs et al., 1988), there has
been relatively little research in support of abductive natural language
inference and generation. We present the first study that investigates the
viability of language-based abductive reasoning. We introduce a challenge
dataset, ART, that consists of over 20k commonsense narrative contexts and 200k
explanations. Based on this dataset, we conceptualize two new tasks -- (i)
Abductive NLI: a multiple-choice question answering task for choosing the more
likely explanation, and (ii) Abductive NLG: a conditional generation task for
explaining given observations in natural language. On Abductive NLI, the best
model achieves 68.9% accuracy, well below human performance of 91.4%. On
Abductive NLG, the current best language generators struggle even more, as they
lack reasoning capabilities that are trivial for humans. Our analysis leads to
new insights into the types of reasoning that deep pre-trained language models
fail to perform--despite their strong performance on the related but more
narrowly defined task of entailment NLI--pointing to interesting avenues for
future research.
|
Tool-augmented large language models (LLMs) are rapidly being integrated into
real-world applications. Due to the lack of benchmarks, the community has yet
to fully understand the hallucination issues within these models. To
address this challenge, we introduce a comprehensive diagnostic benchmark,
ToolBH. Specifically, we assess the LLM's hallucinations through two
perspectives: depth and breadth. In terms of depth, we propose a multi-level
diagnostic process, including (1) solvability detection, (2) solution planning,
and (3) missing-tool analysis. For breadth, we consider three scenarios based
on the characteristics of the toolset: missing necessary tools, potential
tools, and limited functionality tools. Furthermore, we developed seven tasks
and collected 700 evaluation samples through multiple rounds of manual
annotation. The results show the significant challenges presented by the ToolBH
benchmark. The current advanced models Gemini-1.5-Pro and GPT-4o achieve total
scores of only 45.3 and 37.0, respectively, on a scale of 100. In this
benchmark, larger model parameters do not guarantee better performance; the
training data and response strategies also play a crucial role in tool-enhanced
LLM scenarios. Our diagnostic analysis indicates that the primary reason for
model errors lies in assessing task solvability. Additionally, open-weight
models suffer from performance drops with verbose replies, whereas proprietary
models excel with longer reasoning.
|
A zero-one matrix $A$ contains another zero-one matrix $P$ if some submatrix
of $A$ can be transformed to $P$ by changing some ones to zeros. $A$ avoids $P$
if $A$ does not contain $P$. The Pattern Avoidance Game is played by two
players. Starting with an all-zero matrix, two players take turns changing
zeros to ones while keeping $A$ avoiding $P$. We study the strategies of this
game for some patterns $P$. We also study some generalizations of this game.
|
We give a case-free proof that the lattice of noncrossing partitions
associated to any finite real reflection group is EL-shellable. Shellability of
these lattices was open for the groups of type $D_n$ and those of exceptional
type and rank at least three.
|
The OTELO project is the extragalactic survey currently under way using the
tunable filters of the OSIRIS instrument at the GTC. OTELO is already providing
the deepest emission-line object survey of the universe up to a redshift of 7. In
this contribution, the status of the survey and the first results obtained are
presented.
|
Using state-of-the-art first-principles calculations we have elucidated the
complex magnetic and structural dependence of LaOFeAs upon doping. Our key
findings are that (i) doping results in an orthorhombic ground state and (ii)
there is a commensurate to incommensurate transition in the magnetic structure
between $x=0.025$ and $x=0.04$. Our calculations further imply that in this
system magnetic order persists up to the onset of superconductivity at the
critical doping of $x=0.05$. Finally, our investigations of the undoped parent
compound reveal an unusually pronounced dependence of the magnetic moment on
details of the exchange-correlation (xc) functional used in the calculation.
However, for all choices of xc functional an orthorhombic structure is found.
|
Cuprate high-temperature superconductors exhibit a pseudogap in the normal
state that decreases monotonically with increasing hole doping and closes at x
\approx 0.19 holes per planar CuO2 while the superconducting doping range is
0.05 < x < 0.27 with optimal Tc at x \approx 0.16. Using ab initio quantum
calculations at the level that leads to accurate band gaps, we found that
four-Cu-site plaquettes are created in the vicinity of dopants. At x \approx
the plaquettes percolate, so that the Cu d_{x^2-y^2}/O p_{\sigma} orbitals inside
the plaquettes now form a band of states along the percolating swath. This
leads to metallic conductivity and below Tc to superconductivity. Plaquettes
disconnected from the percolating swath are found to have degenerate states at
the Fermi level that split and lead to the pseudogap. The pseudogap can be
calculated by simply counting the spatial distribution of isolated plaquettes,
leading to an excellent fit to experiment. This provides strong evidence in
favor of inhomogeneous plaquettes in cuprates.
|
In the present manuscript, we present a new sequence of operators, i.e.,
$\alpha$-Bernstein-Schurer-Kantorovich operators depending on two parameters
$\alpha\in[0,1]$ and $\rho>0$, in one and two variables, to approximate
measurable functions on $[0, 1+q]$, $q>0$. Next, we give basic results and
discuss the rapidity of convergence and the order of approximation for the univariate
and bivariate versions of these sequences in their respective sections. Further,
graphical and numerical analyses are presented. Moreover, local and global
approximation properties are discussed in terms of first and second order
modulus of smoothness, Peetre's K-functional and weight functions for these
sequences in different spaces of functions.
|
We prove that a finite-dimensional Hopf algebra with the dual Chevalley
Property over a field of characteristic zero is quasi-isomorphic to a
Radford-Majid bosonization whenever the third Hochschild cohomology group in
the category of Yetter-Drinfeld modules of its diagram with coefficients in the
base field vanishes. Moreover we show that this vanishing occurs in meaningful
examples where the diagram is a Nichols algebra.
|
We consider the truncated equation for the centrally symmetric $V$-states in
the 2d Euler dynamics of patches and prove the existence of solutions
$y(x,\lambda),\, x\in [-1,1],\,\lambda\in (0,\lambda_0)$. These functions
$y(x,\lambda)\in C^\infty(-1,1)$ and $\|y(x,\lambda)-|x|\|_{C[-1,1]}\to 0,
\lambda\to 0$.
|
The halfspace depth is a prominent tool of nonparametric multivariate
analysis. The upper level sets of the depth, termed the trimmed regions of a
measure, serve as a natural generalization of the quantiles and inter-quantile
regions to higher-dimensional spaces. The smallest non-empty trimmed region,
coined the halfspace median of a measure, generalizes the median. We focus on
the (inverse) ray basis theorem for the halfspace depth, a crucial theoretical
result that characterizes the halfspace median by a covering property. First, a
novel elementary proof of that statement is provided, under minimal assumptions
on the underlying measure. The proof applies not only to the median, but also
to other trimmed regions. Motivated by the technical development of the amended
ray basis theorem, we specify connections between the trimmed regions, floating
bodies, and additional equi-affine convex sets related to the depth. As a
consequence, minimal conditions for the strict monotonicity of the depth are
obtained. Applications to the computation of the depth and robust estimation
are outlined.
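
For readers less familiar with the halfspace depth, a small numerical sketch of its sample version may help (an approximation over randomly drawn directions, not an exact algorithm):

```python
# Hedged sketch: approximate halfspace (Tukey) depth of a point x with respect
# to a sample, minimizing over random directions the fraction of points in the
# closed halfspace {y : <y - x, u> >= 0}.
import numpy as np

def approx_halfspace_depth(x, sample, n_dirs=2000, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, sample.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # unit directions
    proj = (sample - x) @ u.T                         # shape (n_points, n_dirs)
    return (proj >= 0).mean(axis=0).min()             # infimum over sampled halfspaces

rng = np.random.default_rng(0)
sample = rng.normal(size=(1000, 2))
print(approx_halfspace_depth(np.zeros(2), sample, seed=1))           # close to 1/2 at the center
print(approx_halfspace_depth(np.array([3.0, 0.0]), sample, seed=1))  # small in the tail
```

Upper level sets of this function are exactly the trimmed regions discussed above, and the points maximizing it form the halfspace median.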
|
We present an analysis of the 3-D structure of the Magellanic Clouds, using
period-luminosity (P-L) relations of pulsating red giants in the OGLE-II
sample. By interpreting deviations from the mean P-L relations as distance
modulus variations, we examine the three-dimensional distributions of the
sample. The results for the Large Magellanic Cloud, based solely on stars below
the tip of the Red Giant Branch, confirm previous results on the inclined and
possibly warped bar of the LMC. The depth variation across the OGLE-II field is
about 2.4 kpc, interpreted as the distance range of a thin but inclined
structure. The inclination angle is about 29 deg. A comparison with OGLE-II red
clump distances revealed intriguing differences that seem to be connected to
the red clump reddening correction. A spatially variable red clump population
in the LMC can explain the deviations, which may have a broader impact on our
understanding of the LMC formation history. For the Small Magellanic Cloud, we
find a complex structure showing patchy distribution scattered within 3.2 kpc
of the mean. However, the larger range of the overall depth on every
line-of-sight is likely to smooth out significantly the real variations.
|
We experimentally study the heating of trapped atomic ions during measurement
of their internal qubit states. During measurement, ions are projected into one
of two basis states and discriminated by their state-dependent fluorescence. We
observe that ions in the fluorescing state rapidly scatter photons and heat at
a rate of $\dot{\bar{n}}\sim 2\times 10^4$ quanta/s, which is $\sim 30$ times
faster than the anomalous ion heating rate. We introduce a quantum
trajectory-based framework that accurately reproduces the experimental results
and provides a unified description of ion heating for both continuous and
discrete sources.
|
Dephasing is a main noise mechanism that afflicts quantum information: it
reduces visibility and destroys coherence and entanglement. Therefore, it must
be reduced, mitigated, and if possible corrected, to allow for the
demonstration of quantum advantage in any application of quantum technology,
from computing to sensing and communications. Here we discuss a hardware scheme
of error filtration to mitigate the effects of dephasing in optical quantum
metrology. The scheme uses only passive linear optics and ancillary vacuum
modes, and we do not need single-photon sources or entanglement. It exploits
constructive and destructive interference to partially cancel the detrimental
effects of statistically independent sources of dephasing. We apply this scheme
to preserve coherent states and to phase stabilize stellar interferometry, and
show that a significant improvement can be obtained by using only a few
ancillary modes.
|
This paper focuses on the simultaneous homogenization and dimension reduction
of periodic composite plates within the framework of non-linear elasticity. The
composite plate in its reference (undeformed) configuration consists of a
periodic perforated plate made of stiff material with holes filled by soft
matrix material. The structure is clamped on a cylindrical part. Two cases of
asymptotic analysis are considered: one without pre-strain and the other with
matrix pre-strain. In both cases, the total elastic energy is in the
von-K\'arm\'an (vK) regime ($\varepsilon^5$).
A new splitting of the displacements is introduced to analyze the asymptotic
behavior. The displacements are decomposed using the Kirchhoff-Love (KL) plate
displacement decomposition. The use of a re-scaling unfolding operator allows
for deriving the asymptotic behavior of the Green St. Venant's strain tensor in
terms of displacements. The limit homogenized energy is shown to be of vK type
with linear elastic cell problems, established using $\Gamma$-convergence.
Additionally, it is shown that for isotropic homogenized material, our limit
vK plate is orthotropic. The derived results have practical applications in the
design and analysis of composite structures.
|
We suggest an attack on a symmetric non-ideal quantum coin-tossing protocol
proposed by Mayers, Salvail, and Chiba-Kohno. The analysis of the attack shows
that the protocol is insecure.
|
I give an introductory review of recent, fascinating developments in
supersymmetric gauge theories. I explain pedagogically the miraculous
properties of supersymmetric gauge dynamics allowing one to obtain exact
solutions in many instances. Various dynamical regimes emerging in
supersymmetric Quantum Chromodynamics and its generalizations are discussed. I
emphasize those features that have a chance of survival in QCD and those which
are drastically different in supersymmetric and non-supersymmetric gauge
theories.
Unlike most of the recent reviews focusing almost entirely on the progress in
extended supersymmetries (the Seiberg-Witten solution of N=2 models), these
lectures are mainly devoted to N=1 theories. The primary task is extracting
lessons for non-supersymmetric theories.
|
Let $K$ denote the contact Lie superalgebra $K(m,n;\underline{t})$ over a
field of characteristic $p>3$, which has a finite $\mathbb{Z}$-graded
structure. Let $T_K$ be the canonical torus of $K$, which is an abelian
subalgebra of $K_{0}$ and operates on $K_{-1}$ by semisimple endomorphisms.
Utilizing the weight space decomposition of $K$ with respect to $T_K$, we prove
that each skew-symmetric super-biderivation of $K$ is inner.
|
We construct colloidal ``sticky'' rods from the semi-flexible filamentous fd
virus and temperature-sensitive polymers poly(N-isopropylacrylamide) (PNIPAM).
The phase diagram of fd-PNIPAM system becomes independent of ionic strength at
high salt concentration and low temperature, i.e. the rods are sterically
stabilized by the polymer. However, the network of sticky rods undergoes a
sol-gel transition as the temperature is raised. The viscoelastic moduli of fd
and fd-PNIPAM suspensions are compared as a function of temperature, and the
effect of ionic strength on the gelling behavior of fd-PNIPAM solution is
measured. For all fluidlike and solidlike samples, the frequency-dependent
linear viscoelastic moduli can be scaled onto universal master curves.
|
We construct the exponential map associated to a nonholonomic system that
allows us to define an exact discrete nonholonomic constraint submanifold. We
reproduce the continuous nonholonomic flow as a discrete flow on this discrete
constraint submanifold deriving an exact discrete version of the nonholonomic
equations. Finally, we derive a general family of nonholonomic integrators.
|
The distances to which the optical flash destroys dust via sublimation, and
the burst and afterglow change the size distribution of the dust via
fragmentation, are functions of grain size. Furthermore, the sublimation
distance is a decreasing function of grain size, while the fragmentation
distance is a decreasing function of grain size for large grains and an
increasing function of grain size for small grains. We investigate how these
very different, but somewhat complementary, processes change the optical depth
of the circumburst medium. To this end, we adopt a canonical distribution of
graphite and silicate grain sizes, and a simple fragmentation model, and we
compute the post-burst/optical flash/afterglow optical depth of a circumburst
cloud of constant density n and size R as a function of burst and afterglow
isotropic-equivalent X-ray energy E and spectral index alpha, and optical flash
isotropic-equivalent peak luminosity L. This improves upon previous analyses
that consider circumburst dust of a uniform grain size. We find that
circumburst clouds do not significantly extinguish (tau < 0.3) the optical
afterglow if R < 10L_{49}^{1/2} pc, fairly independent of n, E, and alpha, or
if N_H < 5x10^{20} cm^{-2}. On the other hand, we find that circumburst clouds
do significantly extinguish (tau > 3) the optical afterglow if R >
10L_{49}^{1/2} pc and N_H > 5x10^{21} cm^{-2}, creating a so-called `dark
burst'. The majority of bursts are dark, and as circumburst extinction is
probably the primary cause of this, this implies that most dark bursts occur in
clouds of this size and mass M > 3x10^5L_{49} M_{sun}. Clouds of this size and
mass are typical of giant molecular clouds, and are active regions of star
formation.
|
Matrix-variate time series data are widely available in applications.
However, no attempt has been made to study their conditional heteroskedasticity
that is often observed in economic and financial data. To address this gap, we
propose a novel matrix generalized autoregressive conditional
heteroskedasticity (GARCH) model to capture the dynamics of conditional row and
column covariance matrices of matrix time series. The key innovation of the
matrix GARCH model is the use of a univariate GARCH specification for the trace
of conditional row or column covariance matrix, which allows for the
identification of conditional row and column covariance matrices. Moreover, we
introduce a quasi maximum likelihood estimator (QMLE) for model estimation and
develop a portmanteau test for model diagnostic checking. Simulation studies
are conducted to assess the finite-sample performance of the QMLE and
portmanteau test. To handle large dimensional matrix time series, we also
propose a matrix factor GARCH model. Finally, we demonstrate the superiority of
the matrix GARCH and matrix factor GARCH models over existing multivariate
GARCH-type models in volatility forecasting and portfolio allocations using
three applications on credit default swap prices, global stock sector indices,
and futures prices.
|
We prove the local well-posedness for a two phase problem of
magnetohydrodynamics with a sharp interface. The solution is obtained in the
maximal regularity space: $H^1_p((0, T), L_q) \cap L_p((0, T), H^2_q)$ with $1
< p, q < \infty$ and $2/p + N/q < 1$, where $N$ is the space dimension.
|
A warped extra-dimensional model, where the Standard Model Yukawa hierarchy
is set by UV physics, is shown to have a sweet spot of parameters with improved
experimental visibility and possibly naturalness. Upon marginalizing over all
the model parameters, a Kaluza-Klein scale of 2.1 TeV can be obtained at 2
sigma (95.4% CL) without conflicting with electroweak precision measurements.
Fitting all relevant parameters simultaneously can relax this bound to 1.7 TeV.
In this bulk version of the Rattazzi-Zaffaroni shining model, flavor violation
is also highly suppressed, yielding a bound of 2.4 TeV. Non-trivial flavor
physics at the LHC in the form of flavor gauge bosons is predicted. The model
is also characterized by a depletion of the third generation couplings -- as
predicted by the general minimal flavor violation framework -- which can be
tested via flavor precision measurements. In particular, sizable CP violation
in Delta B=2 transitions can be obtained, and there is a natural region where
Bs mixing is predicted to be larger than Bd mixing, as favored by recent
Tevatron data. Unlike other proposals, the new contributions are not linked to
Higgs or any scalar exchange processes.
|
In this paper I report the highlights of the talk: "Universal properties in
galaxies and cored Dark Matter profiles", given at: Colloquium Lectures, Ecole
Internationale d'Astrophysique Daniel Chalonge. The 14th Paris Cosmology
Colloquium 2010 "The Standard Model of the Universe: Theory and Observations".
|
Crowdsourcing has become a common approach for annotating large amounts of
data. It has the advantage of harnessing a large workforce to produce large
amounts of data in a short time, but comes with the disadvantage of employing
non-expert annotators with different backgrounds. This raises the problem of
data reliability, in addition to the general question of how to combine the
opinions of multiple annotators in order to estimate the ground truth. This
paper presents a study of the annotations and annotators' reliability for audio
tagging. We adapt the use of Krippendorff's alpha and multi-annotator competence
estimation (MACE) for a multi-labeled data scenario, and present how MACE can
be used to estimate a candidate ground truth based on annotations from
non-expert users with different levels of expertise and competence.
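
As a rough illustration of the adaptation to multi-labeled data (a sketch under the assumption that each tag is scored as a separate binary nominal item, which is one natural reading; it is not the paper's exact procedure), nominal Krippendorff's alpha can be computed per tag from an annotator-by-item matrix with missing entries:

```python
# Hedged sketch: nominal Krippendorff's alpha for one binary tag, computed from
# an (annotators x items) matrix where np.nan marks a missing annotation.
import numpy as np

def nominal_alpha(values):
    values = np.asarray(values, dtype=float)
    o = np.zeros((2, 2))                           # coincidence matrix
    for u in range(values.shape[1]):
        col = values[:, u]
        col = col[~np.isnan(col)]
        m_u = len(col)
        if m_u < 2:
            continue                               # unpairable item, skipped
        counts = np.array([(col == 0).sum(), (col == 1).sum()], dtype=float)
        for c in range(2):
            for k in range(2):
                o[c, k] += counts[c] * (counts[k] - (c == k)) / (m_u - 1)
    n_c = o.sum(axis=1)                            # category marginals
    n = n_c.sum()                                  # number of pairable values
    D_o = o[0, 1] + o[1, 0]                        # observed disagreement
    D_e = 2.0 * n_c[0] * n_c[1] / (n - 1.0)        # expected disagreement
    return 1.0 - D_o / D_e

# Three annotators, five clips, one hypothetical tag; nan = not annotated.
tag = np.array([[1, 1, 0, 0, np.nan],
                [1, 1, 0, 1, 0],
                [1, 0, 0, 0, 0]], dtype=float)
print(round(nominal_alpha(tag), 3))
```

Repeating this per tag gives one simple picture of inter-annotator reliability for multi-labeled audio tagging; MACE additionally models per-annotator competence and is what the study above uses to derive a candidate ground truth.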
|
In this paper we define the signless Laplacian matrix of a hypergraph and obtain
structural properties from its eigenvalues. We generalize several known results
for graphs, relating the spectrum of this matrix with structural parameters of
the hypergraph such as the maximum degree, diameter and the chromatic number.
In addition, we characterize the complete signless Laplacian spectrum for the
class of the power hypergraphs from the spectrum of its base hypergraph.
|
The existence of a "knee" at energy ~1 PeV in the cosmic-ray spectrum
suggests the presence of Galactic PeV proton accelerators called "PeVatrons".
Supernova Remnant (SNR) G106.3+2.7 is a prime candidate for one of these. The
recent detection of very high energy (0.1-100 TeV) gamma rays from G106.3+2.7
may be explained either by the decay of neutral pions or inverse Compton
scattering by relativistic electrons. We report an analysis of 12 years of
Fermi-LAT gamma-ray data which shows that the GeV-TeV gamma-ray spectrum is
much harder and requires a different total electron energy than the radio and
X-ray spectra, suggesting it has a distinct, hadronic origin. The non-detection
of gamma rays below 10 GeV implies additional constraints on the relativistic
electron spectrum. A hadronic interpretation of the observed gamma rays is
strongly supported. This observation confirms the long-sought connection
between Galactic PeVatrons and SNRs. Moreover, it suggests that G106.3+2.7
could be the brightest member of a new population of SNRs whose gamma-ray
energy flux peaks at TeV energies. Such a population may contribute to the
cosmic-ray knee and be revealed by future very high energy gamma-ray detectors.
|
Diffusion models generating images conditionally on text, such as Dall-E 2
and Stable Diffusion, have recently made a splash far beyond the computer
vision community. Here, we tackle the related problem of generating point
clouds, both unconditionally, and conditionally with images. For the latter, we
introduce a novel geometrically-motivated conditioning scheme based on
projecting sparse image features into the point cloud and attaching them to
each individual point, at every step in the denoising process. This approach
improves geometric consistency and yields greater fidelity than current methods
relying on unstructured, global latent codes. Additionally, we show how to
apply recent continuous-time diffusion schemes. Our method performs on par with or
above the state of the art on conditional and unconditional experiments on
synthetic data, while being faster, lighter, and delivering tractable
likelihoods. We show it can also scale to diverse indoor scenes.
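
To make the conditioning scheme concrete, the sketch below (with an invented pinhole-camera setup and random stand-in features; not the paper's code) shows the basic operation of projecting points into the image and attaching the sampled features to each point:

```python
# Hedged sketch: attach image features to 3-D points by projecting each point
# into the image with a pinhole camera and sampling the feature map there.
import numpy as np

def attach_image_features(points, feat_map, K):
    """points: (N, 3) in camera coordinates; feat_map: (H, W, C); K: 3x3 intrinsics."""
    uvw = points @ K.T                              # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                   # perspective division
    H, W, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    sampled = feat_map[v, u]                        # nearest-neighbour sampling, (N, C)
    return np.concatenate([points, sampled], axis=1)  # per-point conditioning signal

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3)) * 0.2 + np.array([0.0, 0.0, 2.0])  # in front of the camera
feat_map = rng.normal(size=(32, 32, 16))            # stand-in for sparse image features
K = np.array([[32.0, 0.0, 16.0],
              [0.0, 32.0, 16.0],
              [0.0, 0.0, 1.0]])
print(attach_image_features(points, feat_map, K).shape)   # (1024, 3 + 16)
```

In the method summarized above this attachment is repeated at every denoising step, so the per-point features stay geometrically aligned with the evolving point cloud; bilinear rather than nearest-neighbour sampling would be a natural refinement of this sketch.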
|
Muon spin rotation/relaxation ($\mu$SR) and polar Kerr effect measurements
provide evidence for a time-reversal symmetry breaking (TRSB) superconducting
state in Sr$_2$RuO$_4$. However, the absence of a cusp in the superconducting
transition temperature ($T_{\rm c}$) vs. stress and the absence of a resolvable
specific heat anomaly at TRSB transition temperature ($T_{\rm TRSB}$) under
uniaxial stress challenge the hypothesis of TRSB superconductivity. Recent
$\mu$SR studies under pressure and with disorder indicate that the splitting
between $T_{\rm c}$ and $T_{\rm TRSB}$ occurs only when the structural
tetragonal symmetry is broken. To further test such behavior, we measured
$T_\text{c}$ through susceptibility measurements, and $T_\text{TRSB}$ through
$\mu$SR, under uniaxial stress applied along a $\langle 110 \rangle$ lattice
direction. We have obtained preliminary evidence for suppression of
$T_\text{TRSB}$ below $T_\text{c}$, at a rate much higher than the suppression
rate of $T_\text{c}$.
|
Is AI set to redefine the legal profession? We argue that this claim is not
supported by the current evidence. We dive into AI's increasingly prevalent
roles in three types of legal tasks: information processing; tasks involving
creativity, reasoning, or judgment; and predictions about the future. We find
that the ease of evaluating legal applications varies greatly across legal
tasks, based on the ease of identifying correct answers and the observability
of information relevant to the task at hand. Tasks that would lead to the most
significant changes to the legal profession are also the ones most prone to
overoptimism about AI capabilities, as they are harder to evaluate. We make
recommendations for better evaluation and deployment of AI in legal contexts.
|
Learning from a few examples is an important practical aspect of training
classifiers. Various works have examined this aspect quite well. However, all
existing approaches assume that the few examples provided are always correctly
labeled. This is a strong assumption, especially if one considers the current
techniques for labeling using crowd-based labeling services. We address this
issue by proposing a novel robust few-shot learning approach. Our method relies
on generating robust prototypes from a set of few examples. Specifically, our
method refines the class prototypes by producing hybrid features from the
support examples of each class. The refined prototypes help to classify the
query images better. Our method can replace the evaluation phase of any
few-shot learning method that uses a nearest neighbor prototype-based
evaluation procedure to make them robust. We evaluate our method on standard
mini-ImageNet and tiered-ImageNet datasets. We perform experiments with various
label corruption rates in the support examples of the few-shot classes. We
obtain significant improvement over widely used few-shot learning methods that
suffer significant performance degradation in the presence of label noise. We
finally provide extensive ablation experiments to validate our method.
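
For context, the nearest-neighbor prototype-based evaluation that the method plugs into works as sketched below (random embeddings stand in for a trained backbone; the paper's hybrid-feature prototype refinement itself is not reproduced here):

```python
# Hedged sketch of nearest-prototype few-shot evaluation: class prototypes are
# mean support embeddings and each query takes the label of the closest prototype.
import numpy as np

def prototype_classify(support_emb, support_lbl, query_emb):
    classes = np.unique(support_lbl)
    protos = np.stack([support_emb[support_lbl == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 5-way 5-shot episode.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 64)) * 3.0
support_lbl = np.repeat(np.arange(5), 5)
support_emb = centers[support_lbl] + rng.normal(size=(25, 64))
query_lbl = np.repeat(np.arange(5), 15)
query_emb = centers[query_lbl] + rng.normal(size=(75, 64))
pred = prototype_classify(support_emb, support_lbl, query_emb)
print("episode accuracy:", (pred == query_lbl).mean())
```

When some support labels are corrupted, the plain mean prototypes drift toward the wrong classes; that drift is the failure mode the refined, more robust prototypes described above are designed to counteract.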
|
This work is about the structure of the symbolic Rees algebra of the base
ideal of a Cremona map. We give sufficient conditions under which this algebra
has the "expected form" in some sense. The main theorem in this regard
seemingly covers all previous results on the subject so far. The proof relies
heavily on a criterion of birationality and the use of the so-called inversion
factor of a Cremona map. One adds a pretty long selection of examples of plane
and space Cremona maps tested against the conditions of the theorem, with
special emphasis on Cohen--Macaulay base ideals.
|
Proficiency with calculating, reporting, and understanding measurement
uncertainty is a nationally recognized learning outcome for undergraduate
physics lab courses. The Physics Measurement Questionnaire (PMQ) is a
research-based assessment tool that measures such understanding. The PMQ was
designed to characterize student reasoning into point or set paradigms, where
the set paradigm is more aligned with expert reasoning. We analyzed over 500
student open-ended responses collected at the beginning and the end of a
traditional introductory lab course at the University of Colorado Boulder. We
discuss changes in students' understanding over a semester by analyzing
pre-post shifts in student responses regarding data collection, data analysis,
and data comparison.
|
We investigate the rates of drug resistance acquisition in a natural
population using molecular epidemiological data from Bolivia. First, we study
the rate of direct acquisition of double resistance from the double sensitive
state within patients and compare it to the rates of evolution to single
resistance. In particular, we address whether or not double resistance can
evolve directly from a double sensitive state within a given host. Second, we
aim to understand whether the differences in mutation rates to rifampicin and
isoniazid resistance translate to the epidemiological scale. Third, we estimate
the proportion of MDR TB cases that are due to the transmission of MDR strains
compared to acquisition of resistance through evolution. To address these
problems we develop a model of TB transmission in which we track the evolution
of resistance to two drugs and the evolution of VNTR loci. However, the
available data is incomplete, in that it is recorded only for a fraction of the population and at a single point in time. The likelihood function induced
by the proposed model is computationally prohibitive to evaluate and
accordingly impractical to work with directly. We therefore approach
statistical inference using approximate Bayesian computation techniques.
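To make the last step concrete, here is a minimal rejection-ABC sketch in Python. The toy simulator, the single summary statistic, the uniform prior, and the tolerance are placeholders standing in for the TB transmission model and the molecular epidemiological summaries; they are not the inference pipeline used in the study.

import numpy as np

def abc_rejection(simulate, summarize, observed_summary, prior_sample,
                  n_draws=10000, tolerance=0.1):
    """Rejection ABC: keep parameter draws whose simulated summaries
    fall within `tolerance` of the observed summaries."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()                 # draw a parameter from the prior
        s = summarize(simulate(theta))         # simulate data, reduce to summaries
        if np.linalg.norm(s - observed_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)                  # approximate posterior sample

# Toy stand-in: estimate an acquisition rate from the fraction of
# "resistant" cases seen in a single cross-sectional sample.
rng = np.random.default_rng(1)
simulate = lambda rate: rng.binomial(1, rate, size=200)
summarize = lambda data: np.array([data.mean()])
posterior = abc_rejection(simulate, summarize, np.array([0.12]),
                          prior_sample=lambda: rng.uniform(0.0, 0.5))
print(posterior.mean(), posterior.size)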
|
We study the long-term evolution of relativistic jets in collapsars and
examine the effects of viewing angle on the subsequent gamma ray bursts. We
carry out a series of high-resolution simulations of a jet propagating through
a stellar envelope in 2D cylindrical coordinates using the FLASH relativistic
hydrodynamics module. For the first time, simulations are carried out using an
adaptive mesh that allows for a large dynamic range inside the star while still
being efficient enough to follow the evolution of the jet long after it breaks
out from the star. Our simulations allow us to single out three phases in the
jet evolution: a precursor phase in which relativistic material turbulently
shed from the head of the jet first emerges from the star, a shocked jet phase
where a fully shocked jet of material is emerging, and an unshocked jet phase
where the jet consists of a free-streaming, unshocked core surrounded by a thin
boundary layer of shocked jet material. The appearance of these phases will be
different to observers at different angles. The precursor has a wide opening
angle and would be visible far off axis. The shocked phase has a relatively
narrow opening angle that is constant in time. During the unshocked jet phase
the opening angle increases logarithmically with time. As a consequence, some
observers see prolonged dead times of emission even for constant properties of
the jet injected in the stellar core. We also present an analytic model that is
able to reproduce the overall properties of the jet and its evolution. We
finally discuss the observational implications of our results, emphasizing the
possible ways to test progenitor models through the effects of jet propagation
in the star. In an appendix, we present 1D and 2D tests of the FLASH
relativistic hydrodynamics module.
|
Reinforcement learning (RL) can enable task-oriented dialogue systems to
steer the conversation towards successful task completion. In an end-to-end
setting, a response can be constructed in a word-level sequential decision
making process with the entire system vocabulary as action space. Policies
trained in such a fashion do not require expert-defined action spaces, but they
have to deal with large action spaces and long trajectories, making RL
impractical. Using the latent space of a variational model as action space
alleviates this problem. However, current approaches use an uninformed prior
for training and optimize the latent distribution solely on the context. It is
therefore unclear whether the latent representation truly encodes the
characteristics of different actions. In this paper, we explore three ways of
leveraging an auxiliary task to shape the latent variable distribution: via
pre-training, to obtain an informed prior, and via multitask learning. We
choose response auto-encoding as the auxiliary task, as this captures the
generative factors of dialogue responses while requiring low computational cost
and neither additional data nor labels. Our approach yields a more action-characterized latent representation, which supports end-to-end dialogue policy optimization and achieves state-of-the-art success rates. These results warrant more widespread use of RL in end-to-end dialogue models.
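The following is a schematic PyTorch sketch of the multitask variant: an auxiliary response auto-encoding loss shares the latent space and decoder with the context-conditioned policy path. The linear encoders and decoder, the feature dimensions, the mean-squared reconstruction losses, and the weight beta are simplifying assumptions and do not reproduce the architecture or training objective described above.

import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    """Context encoder -> latent action z -> response decoder, with an
    auxiliary response auto-encoder sharing the same latent space."""
    def __init__(self, ctx_dim=256, resp_dim=256, z_dim=32):
        super().__init__()
        self.ctx_enc = nn.Linear(ctx_dim, 2 * z_dim)    # posterior from context
        self.resp_enc = nn.Linear(resp_dim, 2 * z_dim)  # auxiliary encoder
        self.decoder = nn.Linear(z_dim, resp_dim)       # shared response decoder

    def reparameterize(self, stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = 0.5 * (logvar.exp() + mu ** 2 - 1 - logvar).sum(-1).mean()
        return z, kl

    def forward(self, ctx, resp, beta=0.1):
        # Main path: predict response features from the dialogue context.
        z_ctx, kl_ctx = self.reparameterize(self.ctx_enc(ctx))
        main = ((self.decoder(z_ctx) - resp) ** 2).mean() + beta * kl_ctx
        # Auxiliary path: auto-encode the response through the same latent space.
        z_resp, kl_resp = self.reparameterize(self.resp_enc(resp))
        aux = ((self.decoder(z_resp) - resp) ** 2).mean() + beta * kl_resp
        return main + aux   # multitask combination (illustrative weighting)

model = LatentActionModel()
loss = model(torch.randn(8, 256), torch.randn(8, 256))
loss.backward()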
|
This book chapter presents an overview of the historical experimental and
theoretical developments in neutrino physics and astrophysics and also the
physical properties of neutrinos, as well as the physical processes involving
neutrinos. It also discusses the role of neutrinos in astrophysics and
cosmology.
|
This paper is concerned with index pairs in the sense of Conley index theory
for flows relative to pseudo-gradient vector fields for $C^1$-functions
satisfying the Palais-Smale condition. We prove a deformation theorem for such
index pairs to obtain a Lusternik-Schnirelmann type result in Conley index
theory.
|
In this work we propose an alternate scaling for the head loss in the steady
flow of Newtonian fluids through tubes. The proposed scaling makes the role of inertia in this flow clearer and ensures that the trends of the relationship between dimensionless quantities are the same as those observed in the dimensional problem.
|
The theory of string-like continuous curves and discrete chains has numerous important physical applications. Here we develop a general geometrical approach to systematically derive Hamiltonian energy functions for these objects. In the case of continuous curves, we demand that the energy function be invariant under local frame rotations, and that it transform
covariantly under reparametrizations of the curve. This leads us to consider
energy functions that are constructed from the conserved quantities in the
hierarchy of the integrable nonlinear Schr\"odinger equation (NLSE). We point
out the existence of a Weyl transformation that we utilize to introduce a dual
hierarchy to the standard NLSE hierarchy. We propose that the dual hierarchy is
also integrable, and we confirm this to the first non-trivial order. In the
discrete case the requirement of reparametrization invariance is void. But the
demand of invariance under local frame rotations prevails, and we utilize it to
introduce a discrete variant of the Zakharov-Shabat recursion relation. We use
this relation to derive frame-independent quantities that we propose are essentially unique and are therefore natural candidates for constructing energy functions for piecewise linear polygonal chains. We also investigate the
discrete version of the Weyl duality transformation. We confirm that in the
continuum limit the discrete energy functions go over to their continuum
counterparts, including the perfect derivative contributions.
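For orientation, the lowest conserved charges of the NLSE hierarchy from which such energy functions are assembled are, in one common normalization (signs and prefactors depend on conventions), the particle number, momentum, and energy:
\[
N=\int |\psi|^{2}\,dx,\qquad
P=\frac{i}{2}\int\left(\psi\,\partial_x\psi^{*}-\psi^{*}\,\partial_x\psi\right)dx,\qquad
E=\int\left(|\partial_x\psi|^{2}-\kappa\,|\psi|^{4}\right)dx .
\]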
|
We study a discrete-time duplication-deletion random graph model and analyse
its asymptotic degree distribution. The random graph consists of disjoint
cliques. In each time step either a new vertex is brought in with probability
$0<p<1$ and attached to an existing clique, chosen with probability
proportional to the clique size, or all the edges of a random vertex are
deleted with probability $1-p$. We prove almost sure convergence of the
asymptotic degree distribution and find its exact values in terms of a hypergeometric integral depending on the parameter $p$. In the regime
$0<p<\frac{1}{2}$ we show that the degree sequence decays exponentially at rate
$\frac{p}{1-p}$, whereas it satisfies a power-law with exponent
$\frac{p}{2p-1}$ if $\frac{1}{2}<p<1$. At the threshold $p=\frac{1}{2}$ the
degree sequence lies between a power-law and exponential decay.
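A minimal Python sketch of this dynamics is given below; it tracks only the clique sizes, since a vertex in a clique of size $k$ has degree $k-1$. Starting from a single vertex and keeping the edge-stripped vertex as a new isolated clique are modeling assumptions filling in details the description above leaves open.

import random
from collections import Counter

def simulate(p, steps, seed=0):
    """Duplication-deletion dynamics on disjoint cliques.
    cliques[i] is the size of clique i; a vertex in a clique of
    size k has degree k - 1."""
    rng = random.Random(seed)
    cliques = [1]              # assumption: start from a single isolated vertex
    for _ in range(steps):
        # Choosing a clique with probability proportional to its size is the
        # same as choosing a uniformly random vertex and taking its clique.
        i = rng.choices(range(len(cliques)), weights=cliques)[0]
        if rng.random() < p:
            cliques[i] += 1    # new vertex joins the size-biased clique
        else:
            # Delete all edges of a uniformly chosen vertex: it leaves its
            # clique and becomes isolated (assumption: the vertex is kept).
            cliques[i] -= 1
            if cliques[i] == 0:
                cliques.pop(i)
            cliques.append(1)
    return cliques

cliques = simulate(p=0.7, steps=200_000)
degrees = Counter(k - 1 for k in cliques for _ in range(k))
total = sum(degrees.values())
print([round(degrees.get(d, 0) / total, 4) for d in range(8)])  # empirical degree law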
|
We consider the equation $u_t = \mbox{Div}(a[u]\nabla u - u\nabla a[u])$,
$-\Delta a = u$. This model has attracted some attention in recent years
and several results are available in the literature. We review recent results
on existence and smoothness of solutions and explain the open problems.
|
Spin-polarized two-dimensional materials with large and tunable
spin-splitting energy hold great promise for the field of 2D spintronics. While graphene has
been a canonical 2D material, its spin properties and tunability are limited.
Here, we demonstrate the emergence of robust spin-polarization in graphene with
large and tunable spin-splitting energy of up to 132 meV at zero applied magnetic field. The spin polarization is induced through a magnetic exchange
interaction between graphene and the underlying ferrimagnetic oxide insulating
layer, Tm3Fe5O12, as confirmed by its X-ray magnetic circular dichroism. The
spin-splitting energies are directly measured and visualized through the shift in the Landau fan diagram, mapped by analyzing the measured Shubnikov-de Haas oscillations as a function of applied electric field, and are consistent with our first-principles and machine-learning calculations. Further, the
observed spin-splitting energies can be tuned over a broad range between 98 and
166 meV by cooling fields. Our methods and results are applicable to other
two-dimensional (magnetic) materials and heterostructures, and offer great
potential for developing next-generation spin logic and memory devices.
|
For $\Omega$ a $C^{2}$-smooth domain, and a positive bounded continuous map $a \in C(\Omega)$, we prove existence of a minimizer of the functional $u \mapsto \int_{\Omega} a|Du|$ over the space $BV(\Omega)$ of functions of bounded variation with fixed trace $f \in L^{1}(\partial\Omega)$. The method of proof is constructive.
|
Vugs are small to medium-sized cavities inside rock that have significant effects on fluid flow. Moreover, the presence of vugs may have
non-trivial impacts on the geomechanical behavior of rock. How to quantify and
analyze such effects is still an open problem. To this end, we derive a
macroscopic poroelastic model for a single-phase viscous fluid flow through a
deformable vuggy porous medium. At first, a vuggy porous medium is divided into
two parts: the porous matrix and vugs. Then, we model the hydro-mechanical
coupling process on the fine scale using Biot's equations within porous matrix,
Stokes equations within the vugs, and an extended Beavers-Joseph-Saffman
boundary condition on the porous-fluid interface. Next, based on homogenization theory, we obtain macroscopic Biot equations governing the hydro-mechanical coupling behavior of vuggy porous media on the larger scale.
Subsequently, the macroscopic poroelastic coefficients, such as the effective
Darcy permeability, effective Young's modulus and effective Biot coefficient,
can be computed from three cell problems. Finally, several numerical examples
are designed to demonstrate the computational procedure of evaluating the
geomechanical behavior of vuggy porous media.
|
The Lieb-Oxford bound, a nontrivial inequality for the indirect part of the
many-body Coulomb repulsion in an electronic system, plays an important role in
the construction of approximations in density functional theory. Using the
wavefunction for strictly-correlated electrons of a given density, we turn the
search over wavefunctions appearing in the original bound into a more
manageable search over electron densities. This allows us to challenge the
bound in a systematic way. We find that a maximizing density for the bound, if
it exists, must have compact support. We also find that, at least for particle
numbers $N\le 60$, a uniform density profile is not the most challenging for
the bound. With our construction we improve the bound for $N=2$ electrons that
was originally found by Lieb and Oxford, we give a new lower bound to the
constant appearing in the Lieb-Oxford inequality valid for any $N$, and we
provide an improved upper bound for the low-density uniform electron gas
indirect energy.
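For reference, the inequality being challenged bounds the indirect part of the Coulomb repulsion from below by an explicit density functional. In a commonly used form (the precise definition and the best value of the constant vary across the literature),
\[
W[\Psi]\;\equiv\;\langle\Psi|\hat V_{ee}|\Psi\rangle-U[\rho_\Psi]
\;\ge\;-\,C\int \rho_\Psi(\mathbf r)^{4/3}\,d\mathbf r,
\]
where $U[\rho_\Psi]$ is the Hartree (classical electrostatic) energy of the density $\rho_\Psi$ and $C$ is a universal constant.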
|
Random spanning trees of a graph $G$ are governed by a corresponding
probability mass distribution (or "law"), $\mu$, defined on the set of all
spanning trees of $G$. This paper addresses the problem of choosing $\mu$ in
order to utilize the edges as "fairly" as possible. This turns out to be
equivalent to minimizing, with respect to $\mu$, the expected overlap of two
independent random spanning trees sampled with law $\mu$. In the process, we
introduce the notion of homogeneous graphs. These are graphs for which it is
possible to choose a random spanning tree so that all edges have equal usage
probability. The main result is a deflation process that identifies a
hierarchical structure of arbitrary graphs in terms of homogeneous subgraphs,
which we call homogeneous cores. A key tool in the analysis is the spanning
tree modulus, for which there exists an algorithm based on minimum spanning
tree algorithms, such as Kruskal's or Prim's.
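As a small illustration of the objects involved (not of the deflation process itself), the NumPy sketch below computes edge usage probabilities under the uniform spanning-tree law via effective resistances obtained from the pseudoinverse of the graph Laplacian, and the resulting expected overlap $\sum_e \mathbb{P}(e\in T)^2$ of two independent trees, which is the quantity one minimizes over laws $\mu$. On the 4-cycle the uniform law is already fair: every edge is used with probability 3/4.

import numpy as np

def ust_edge_probabilities(n, edges):
    """For the *uniform* spanning-tree law on an unweighted connected graph,
    P(e in T) equals the effective resistance between e's endpoints."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    Lp = np.linalg.pinv(L)                       # Laplacian pseudoinverse
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def expected_overlap(edge_probs):
    """E|T1 ∩ T2| for two independent trees sampled from the same law."""
    return float(np.sum(edge_probs ** 2))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]         # the 4-cycle
probs = ust_edge_probabilities(4, edges)
print(probs, expected_overlap(probs))            # -> [0.75 0.75 0.75 0.75] 2.25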
|
Charge density wave (CDW) order is an emergent quantum phase that is
characterized by a periodic lattice distortion and charge density modulation,
often present near superconducting transitions. Here we uncover a novel
inverted CDW state by using a femtosecond laser to coherently over-drive the
unique star-of-David lattice distortion in 1T-TaSe$_2$. We track the signature
of this novel CDW state using time- and angle-resolved photoemission
spectroscopy and time-dependent density functional theory, and validate that it
is associated with a unique lattice and charge arrangement never before
realized. The dynamic electronic structure further reveals its novel
properties, which are characterized by an increased density of states near the
Fermi level, high metallicity, and altered electron-phonon couplings. Our
results demonstrate how ultrafast lasers can be used to create unique states in
materials, by manipulating charge-lattice orders and couplings.
|
We obtain an exact formula for the equilibrium free energy of a charged
quantum particle moving in a harmonic potential in the presence of a uniform
external magnetic field and linearly coupled to a heat bath of independent
quantum harmonic oscillators through the momentum variables. We show that the
free energy has a different expression than that for the coordinate-coordinate
coupling between the particle and the heat-bath oscillators. For an
illustrative heat-bath spectrum, we evaluate the free energy in the
low-temperature limit, thereby showing that the entropy of the charged particle
vanishes at zero temperature, in agreement with the third law of
thermodynamics.
|
In this article we study properties of Ramanujan's mock theta functions that can be expressed in terms of Lerch sums. We mainly show that each Lerch sum is actually the integral of a Jacobi theta function (here we show this for $\vartheta_3(t,q)$ and $\vartheta_4(t,q)$) and the $\sec$ function. We also
prove some modular relations and evaluate the Fourier coefficients of a class
of Lerch sums.
|
A new data analysis of the experiment in which photon splitting in atomic fields was observed for the first time is presented. This experiment was performed at the tagged photon beam of the ROKK-1M facility at the VEPP-4M collider. In the energy region of 120-450 MeV, a total statistics of $1.6\cdot 10^9$ photons incident on the BGO target was collected. About 400 candidate photon-splitting events were reconstructed. Within the
attained experimental accuracy, the experimental results are consistent with
the cross section calculated exactly in an atomic field. The predictions
obtained in the Born approximation significantly differ from the experimental
results.
|
Fluids confined in nanopores exhibit properties different from those of the same fluids in bulk; among these properties is the isothermal compressibility, or elastic modulus. The modulus of a fluid in nanopores can be
extracted from ultrasonic experiments or calculated from molecular simulations.
Using Monte Carlo simulations in the grand canonical ensemble, we calculated
the modulus for liquid argon at its normal boiling point (87.3~K) adsorbed in
model silica pores of two different morphologies and various sizes. For
spherical pores, for all the pore sizes (diameters) exceeding 2~nm, we obtained
a logarithmic dependence of fluid modulus on the vapor pressure. Calculation of
the modulus at saturation showed that the modulus of the fluid in spherical
pores is a linear function of the reciprocal pore size. The calculated moduli of the fluid in cylindrical pores appeared too scattered to draw quantitative conclusions. We performed additional simulations at higher
temperature (119.6~K), at which Monte Carlo insertions and removals become more
efficient. The results of the simulations at higher temperature confirmed both
regularities for cylindrical pores and showed a quantitative difference between
the fluid moduli in pores of different geometries. Both of the observed
regularities for the modulus stem from the Tait-Murnaghan equation applied to
the confined fluid. Our results, along with the development of the effective
medium theories for nanoporous media, set the groundwork for analysis of the
experimentally-measured elastic properties of fluid-saturated nanoporous
materials.
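The chain of reasoning behind the two regularities can be sketched with standard relations (the notation and reference states here are generic, not necessarily those used in the study): the Tait-Murnaghan equation of state makes the fluid modulus linear in pressure, while the Kelvin-Laplace relation makes the pressure of the confined liquid linear in the logarithm of the vapor pressure,
\[
K(P)=K_0+K_0'\,(P-P_0),\qquad
P_\ell-P_0=\frac{RT}{V_m}\ln\frac{p}{p_0},
\]
so $K$ varies linearly with $\ln(p/p_0)$; and since the pressure shift experienced by the fluid in a pore scales roughly with the pore's surface-to-volume ratio, the modulus at saturation becomes a linear function of the reciprocal pore size.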
|
We describe a simple geometric transformation of triangles which leads to an
efficient and effective algorithm to smooth triangle and tetrahedral meshes.
Our focus lies on the convergence properties of this algorithm: we prove its effectiveness for some planar triangle meshes and further introduce methods from dynamical systems to study the dynamics of the algorithm, which may be applied to any kind of algorithm based on a geometric transformation.
|
This article reviews the research program and efforts of the TEXONO
Collaboration on neutrino and astro-particle physics. The ``flagship'' program
is on reactor-based neutrino physics at the Kuo-Sheng (KS) Power Plant in
Taiwan. A limit on the neutrino magnetic moment of $\mu_{\bar{\nu}_e} < 1.3 \times 10^{-10}\,\mu_{\mathrm{B}}$ at 90% confidence level was derived from measurements with a high-purity germanium detector. Other physics topics at KS, as well as the various R&D programs, are discussed.
|
We describe our experience using Codio at Coventry University in our
undergraduate programming curriculum. Codio provides students with online
virtual Linux boxes, and allows staff to equip these with guides written in
markdown and supplemental tasks that provide automated feedback. The use of
Codio has coincided with a steady increase in student performance and
satisfaction as well as far greater data on student engagement and performance.
|
The design and performance of a newly developed cluster jet target installation for hadron physics experiments are presented; for the first time, such an installation is able to generate a hydrogen cluster jet beam with a target thickness
of above $10^{15}\,\mathrm{atoms/cm}^2$ at a distance of two metres behind the
cluster jet nozzle. The properties of the cluster beam and of individual
clusters themselves are studied at this installation. Special emphasis is
placed on measurements of the target beam density as a function of the relevant
parameters as well as on the cluster beam profiles. By means of a
time-of-flight setup, measurements of the velocity of single clusters and
velocity distributions were possible. The complete installation, which meets
the requirements of future internal fixed target experiments at storage rings,
and the results of the systematic studies on hydrogen cluster jets are
presented and discussed.
|
We perform a replacement procedure in order to produce a free boundary minimal surface whose area achieves the min-max value over all sweepouts of a manifold by disks whose boundaries lie in a submanifold. Our result is based on a proof of the convexity of the energy for free boundary harmonic maps and a generalization of the Colding-Minicozzi replacement procedure.
|
Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold with boundary.
We consider the Yamabe type problem \begin{equation} \left\{ \begin{array}{ll}
-\Delta_{g}u+au=0 & \text{ on }M \\ \partial_\nu u+\frac{n-2}{2}bu= u^{{n\over
n-2}\pm\varepsilon} & \text{ on }\partial M \end{array}\right. \end{equation}
where $a\in C^1(M),$ $b\in C^1(\partial M)$, $\nu$ is the outward pointing unit
normal to $\partial M $ and $\varepsilon$ is a small positive parameter. We
build solutions which blow up at a point of the boundary as $\varepsilon$ goes to zero. The blow-up behavior is governed by the function $b-H_g$, where $H_g$
is the boundary mean curvature.
|
Suppose that $B$ is a $G$-Banach algebra over $\mathbb{F} = \mathbb{R}$ or
$\mathbb{C}$, $X$ is a finite dimensional compact metric space, $\zeta : P \to
X$ is a standard principal $G$-bundle, and $A_\zeta = \Gamma (X, P \times_G B)$
is the associated algebra of sections.
We produce a spectral sequence which converges to $\pi_*(GL_o A_\zeta)$ with $E^2_{-p,q} \cong \check{H}^p(X; \pi_q(GL_o B))$. A related spectral sequence converging to $K_{*+1}(A_\zeta)$ (the real or complex topological $K$-theory) allows us to conclude that if $B$ is Bott-stable (i.e., if $\pi_*(GL_o B) \to K_{*+1}(B)$ is an isomorphism for all $*>0$), then so is $A_\zeta$.
|
We report a molecular dynamics study of liquid cesium at ambient pressure
intended to check the possibility of liquid-liquid phase transformation at
$T=590$ K. We find the presence of small kinks on thermodynamic characteristics
of the system, but no phase transition.
|
Recently, the Government of India has taken several initiatives to make India digitally strong, such as providing each resident with a unique digital identity, referred to as Aadhaar, and offering several online e-Governance services
based on Aadhaar such as DigiLocker. DigiLocker is an online service which
provides a shareable private storage space on public cloud to its subscribers.
Although DigiLocker ensures traditional security such as data integrity and
secure data access, the privacy of e-documents is yet to be addressed.
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) can improve data privacy
but implementing it correctly has always been a challenge. This paper presents a scheme to implement a privacy-enhanced DigiLocker using CP-ABE.
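To illustrate only the access-control idea behind CP-ABE (each ciphertext carries a policy over attributes, and only keys whose attribute set satisfies that policy can decrypt), here is a toy Python sketch. It contains no real pairing-based cryptography, the attribute names and policy format are invented, and it should not be read as a secure or faithful implementation of the proposed scheme.

from dataclasses import dataclass

@dataclass
class Ciphertext:
    policy: list    # OR of ANDs: a list of attribute sets, any one of which grants access
    payload: str    # stands in for the actual encrypted e-document

def encrypt(document, policy):
    """'Encrypt' under a ciphertext policy (toy: payload stored as-is)."""
    return Ciphertext(policy=policy, payload=document)

def decrypt(ciphertext, user_attributes):
    """Decryption succeeds only if the user's attributes satisfy the policy."""
    for clause in ciphertext.policy:
        if set(clause) <= set(user_attributes):
            return ciphertext.payload
    raise PermissionError("attributes do not satisfy the ciphertext policy")

# An e-document readable either by the issuing department or by its owner.
ct = encrypt("driving-licence.pdf",
             policy=[["ISSUER", "TRANSPORT_DEPT"], ["OWNER", "SUBSCRIBER_42"]])
print(decrypt(ct, ["OWNER", "SUBSCRIBER_42"]))   # access granted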
|
We argue in a quantitative way that the unitarity principle of quantum field
theory together with the quantum information bound on correlation functions are
in tension with a space which is made out of disconnected patches at
microscopic scales.
|
The total momentum of $N$ interacting bosons or fermions in a cube equipped
with periodic boundary conditions is a conserved quantity. Its eigenvalues
follow a probability distribution, determined by the thermal equilibrium state.
While in non-interacting systems the distribution is normal with variance $\sim
N$, interaction couples the single-particle momenta, so that the distribution
of their sum is unpredictable, except for some implications of Galilean
invariance. First, we present these implications which are strong in 1D,
moderately strong in 2D, and weak in 3D. Then, we speculate about the possible
form of the distribution in fluids, crystals, and superfluids. The existence of
phonons suggests that the total momentum can remain finite when $N\to\infty$.
We argue that in fluids the finite momenta are distributed continuously, but their
integrated probability is smaller than 1, because the momentum can also tend to
infinity with $N$. In the fluid-crystal transition we expect that the total
momentum becomes finite with full probability and distributed over a lattice,
and that in the fluid-superfluid transition a delta peak appears only at zero
total momentum. Based on this picture, we discuss the superfluid flow in both
the frictionless and the dissipative cases, and derive a temperature-dependent
critical velocity. Finally, we show that Landau's criterion for excitations in moving superfluids is a result that is correct in some cases but is obtained from an erroneous derivation.
|
We investigate numerically the Princeton magneto-rotational instability (MRI)
experiment and the effect of conducting axial boundaries or endcaps. MRI is
identified and found to reach a much higher saturation than for insulating
endcaps. This is probably due to stronger driving of the base flow by the
magnetically rather than viscously coupled boundaries. Although the
computations are necessarily limited to lower Reynolds numbers ($\mathrm{Re}$) than their experimental counterparts, it appears that the saturation level becomes independent of $\mathrm{Re}$ when $\mathrm{Re}$ is sufficiently large, whereas it has been found previously to decrease roughly as $\mathrm{Re}^{-1/4}$ with insulating endcaps.
The much higher saturation levels will allow for the first positive detection
of MRI beyond its theoretical and numerical predictions.
|
We consider the forced surface quasi-geostrophic equation with supercritical
dissipation. We show that linear instability for steady state solutions leads
to their nonlinear instability. When the dissipation is given by a fractional
Laplacian, the nonlinear instability is expressed in terms of the scaling
invariant norm, while we establish stronger instability claims in the setting
of logarithmically supercritical dissipation. A key tool in treating the
logarithmically supercritical setting is a global well-posedness result for the
forced equation, which we prove by adapting and extending recent work related
to nonlinear maximum principles. We believe that our proof of global
well-posedness is of independent interest, to our knowledge giving the first
large-data supercritical result with sharp regularity assumptions on the
forcing term.
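For concreteness, in a standard formulation (up to sign conventions, and with the logarithmically supercritical case replacing the pure fractional power by a weaker Fourier multiplier) the equation under study reads
\[
\partial_t\theta+u\cdot\nabla\theta+\nu\,\Lambda^{\alpha}\theta=f,\qquad
u=\nabla^{\perp}\Lambda^{-1}\theta,\qquad \Lambda=(-\Delta)^{1/2},
\]
with $0<\alpha<1$ corresponding to supercritical dissipation.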
|
Accidents are a leading cause of death in the armed forces. The aim of this paper is to minimize accidents caused by weapons in the armed forces. Developing artificial intelligence technologies aim to increase efficiency wherever people are involved. Giving guns to inexperienced, untrained, or mentally unstable people in shooting ranges used for firearms training can be risky and fatal. By using image processing technologies in these shooting ranges, we aim to minimize the risk of life-threatening accidents that such people may cause. An artificial intelligence model is trained on the targets used in shooting ranges. When the weapon's camera sees these targets, the weapon switches from safe mode to firing mode. When a risky situation occurs in the shooting range, the gun returns itself to safe mode, supported by various additional security measures.
|
We study constraints from perturbativity and vacuum stability as well as the
EWPD in the type II seesaw model. As a result, we can put stringent limits on
the Higgs triplet couplings depending on the cut-off scale. The EWPD tightly
constrain the Higgs triplet mass splitting to be smaller than about 40 GeV.
Analyzing the Higgs-to-diphoton rate in the allowed parameter region, we show
how much it can deviate from the Standard Model prediction for specific
parameter points.
|
We demonstrate an Al superconducting quantum interference device in which the
Josephson junctions are implemented through gate-controlled proximitized Cu
mesoscopic weak-links. The latter behave analogously to genuine superconducting
metals in terms of the response to electrostatic gating, and provide a good
performance in terms of current-modulation visibility. We show that, through
the application of a static gate voltage, we are able to modify the
interferometer current-flux relation in a fashion which seems compatible with
the introduction of $\pi$-channels within the gated weak-link. Our results
suggest that the microscopic mechanism at the origin of the suppression of the
switching current in the interferometer is apparently phase coherent, resulting
in an overall damping of the superconducting phase rigidity. We finally tackle
the performance of the interferometer in terms of responsivity to magnetic flux
variations in the dissipative regime, and discuss the practical relevance of
gated proximity-based all-metallic SQUIDs for magnetometry at the nanoscale.
|
Considering as the distance between two two-party correlations the minimum number of half local results one party must toggle in order to turn one correlation into the other, we show that the volume of the set of physically obtainable correlations in the Einstein-Podolsky-Rosen-Bell scenario is $(3\pi/8)^2 \approx 1.388$ times as large as the volume of the set of correlations obtainable in local deterministic or probabilistic hidden-variable theories, but is only $3\pi^2/32 \approx 0.925$ of the volume allowed by arbitrary causal (i.e., no-signaling) theories.
|