Powered by deep representation learning, reinforcement learning (RL) provides
an end-to-end learning framework capable of solving self-driving (SD) tasks
without manual designs. However, time-varying nonstationary environments cause
proficient but specialized RL policies to fail at execution time. For example,
an RL-based SD policy trained in sunny weather does not generalize well to
rainy conditions. Even though meta learning enables the RL agent to adapt to new
tasks/environments, its offline operation fails to equip the agent with the online
adaptation ability needed when facing nonstationary environments. This work proposes
an online meta reinforcement learning algorithm based on \emph{conjectural
online lookahead adaptation} (COLA). COLA determines the online adaptation at
every step by maximizing the agent's conjectured future performance over a
lookahead horizon. Experimental results demonstrate that under dynamically
changing weather and lighting conditions, the COLA-based self-adaptive driving
outperforms the baseline policies in terms of online adaptability. A demo
video, source code, and appendixes are available at {\tt
https://github.com/Panshark/COLA}
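As an illustration of the lookahead idea, the following is a minimal Python sketch, not the authors' released implementation: at every step the policy parameters are nudged toward maximizing a conjectured return obtained by rolling out the agent's own model over a short horizon. The policy form, the model interface, and the finite-difference gradient are assumptions made purely for illustration.

```python
import numpy as np

def conjectured_return(theta, model, state, horizon=5):
    """Roll out a conjectured transition/reward model for `horizon` steps and sum rewards."""
    total, s = 0.0, state
    for _ in range(horizon):
        a = np.tanh(theta @ s)      # toy linear policy (illustrative only)
        s, r = model(s, a)          # user-supplied conjectured model: (state, action) -> (state, reward)
        total += r
    return total

def online_adapt(theta, model, state, lr=0.05, eps=1e-2):
    """One lookahead-adaptation step via finite-difference gradient ascent on the conjectured return."""
    base = conjectured_return(theta, model, state)
    grad = np.zeros_like(theta)
    for i in np.ndindex(theta.shape):
        pert = theta.copy()
        pert[i] += eps
        grad[i] = (conjectured_return(pert, model, state) - base) / eps
    return theta + lr * grad
```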
|
We present a basis of dimension-eight Green's functions involving Standard
Model (SM) bosonic fields, consisting of 86 new operators. Rather than using
algebraic identities and integration by parts, we prove the independence of
these interactions in momentum space, including a discussion on evanescent
bosonic operators. Our results pave the way for renormalising the SM effective
field theory (SMEFT), as well as for performing matching of ultraviolet models
onto the SMEFT, to higher order. To demonstrate the potential of our
construction, we have implemented our basis in matchmakereft and used it to
integrate out a heavy singlet scalar and a heavy quadruplet scalar up to one
loop. We provide the corresponding dimension-eight Wilson coefficients.
Likewise, we show how our results can be easily used to simplify cumbersome
redundant Lagrangians arising, for example, from integrating out heavy fields
using the path-integral approach to matching.
|
Modeling the multiwavelength emission of successive regions in the jet of the
quasar PKS 1136-135, we find indications that the jet decelerates near
its end. Adopting a continuous flow approximation, we discuss the possibility
that the inferred deceleration is induced by entrainment of external gas.
|
A stochastic nonlinear electrical characteristic of graphene is reported.
Abrupt current changes are observed from voltage sweeps between the source and
drain with an on/off ratio up to 10^(3). It is found that the graphene channel
experiences a topological change. Active radicals in an uneven graphene
channel cause local changes of the electrostatic potential. Simulation results
based on the self-trapped electron and hole mechanism account well for the
experimental data. Our findings illustrate an important issue for reliable
electron transport and aid the understanding of transport properties in
graphene devices.
|
This paper extends the concept of scalar cepstrum coefficients from
single-input single-output linear time-invariant dynamical systems to
multiple-input multiple-output models, making use of the Smith-McMillan form of
the transfer function. These coefficients are interpreted in terms of poles and
transmission zeros of the underlying dynamical system. We present a method to
compute the MIMO cepstrum based on input/output signal data for systems with
square transfer function matrices (i.e. systems with as many inputs as
outputs). This allows us to do a model-free analysis. Two examples to
illustrate these results are included: a simple MIMO system with 3 inputs and 3
outputs, of which the poles and zeros are known exactly, that allows us to
directly verify the equivalences derived in the paper, and a case study on
realistic data. This case study analyses data coming from a model of a
non-isothermal continuous stirred tank reactor, which experiences linear
fouling. We analyse normal and faulty operating behaviour, both with and
without a controller present. We show that the cepstrum detects faulty
behaviour, even when hidden by controller compensation. The code for the
numerical analysis is available online.
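For readers unfamiliar with cepstral analysis, the scalar power cepstrum is the SISO building block that the paper generalizes to MIMO via the Smith-McMillan form. A hedged sketch of the classical scalar version (standard signal processing, not the paper's MIMO construction) is:

```python
import numpy as np

def power_cepstrum(x):
    """Classical scalar power cepstrum: inverse FFT of the log power spectrum."""
    spectrum = np.abs(np.fft.fft(x)) ** 2
    return np.real(np.fft.ifft(np.log(spectrum + 1e-12)))  # small offset avoids log(0)
```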
|
Although the notion of a concept as a collection of objects sharing certain
properties, and the notion of a conceptual hierarchy are fundamental to both
Formal Concept Analysis and Description Logics, the ways concepts are described
and obtained differ significantly between these two research areas. Despite
these differences, there have been several attempts to bridge the gap between
these two formalisms, and attempts to apply methods from one field in the
other. The present work aims to give an overview of the research done on
combining Description Logics and Formal Concept Analysis.
|
In this paper we present ChirpCast, a system for broadcasting network access
keys to laptops ultrasonically. This work explores several modulation
techniques for sending and receiving data using sound waves through commodity
speakers and built-in laptop microphones. Requiring only that laptop users run
a small application, the system successfully provides robust room-specific
broadcasting at data rates of 200 bits/second.
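As an illustration of sending data through commodity speakers, here is a hedged sketch of one simple modulation scheme (binary FSK on near-ultrasonic carriers); the carrier frequencies and the scheme itself are assumptions for illustration, not necessarily what ChirpCast uses.

```python
import numpy as np

FS = 44100              # sample rate (Hz) of commodity audio hardware
F0, F1 = 18000, 19000   # assumed carrier frequencies for bits 0 and 1
BIT_RATE = 200          # bits per second, matching the rate quoted in the abstract

def modulate(bits):
    """Map a bit sequence to a near-ultrasonic FSK waveform."""
    n = int(FS / BIT_RATE)          # samples per bit
    t = np.arange(n) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])
```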
|
We prove that the solution of the Kac analogue of Boltzmann's equation can be
viewed as a probability distribution of a sum of a random number of random
variables. This fact allows us to study convergence to equilibrium by means of
a few classical statements pertaining to the central limit theorem. In
particular, a new proof of the convergence to the Maxwellian distribution is
provided, with rate information both under the sole hypothesis that the
initial energy is finite and under the additional condition that the initial
distribution has finite moment of order $2+\delta$ for some $\delta$ in
$(0,1]$. Moreover, it is proved that finiteness of the initial energy is necessary
for the solution of Kac's equation to converge weakly. While this
statement may seem to be intuitively clear, to our knowledge there is no proof
of it as yet.
|
In this paper, we investigate the best pixel expansion of the various models
of visual cryptography schemes. In this regard, we consider visual cryptography
schemes introduced by Tzeng and Hu [13]. In such a model, only minimal
qualified sets can recover the secret image, and the recovered secret image
can be darker or lighter than the background. Blundo et al. [4] introduced a
lower bound for the best pixel expansion of this scheme in terms of minimal
qualified sets. We present another lower bound for the best pixel expansion of
the scheme. As a corollary, we introduce a lower bound, based on an induced
matching of the hypergraph of qualified sets, for the best pixel expansion of the
aforementioned model and the traditional model of visual cryptography realized
by basis matrices. Finally, we study access structures based on graphs and we
present an upper bound for the smallest pixel expansion in terms of strong
chromatic index.
|
Multi-Object Tracking (MOT) is a challenging task in complex scenes such
as surveillance and autonomous driving. In this paper, we propose a novel
tracklet processing method to cleave and re-connect tracklets under crowding or
long-term occlusion using a Siamese bidirectional Gated Recurrent Unit (GRU). The
tracklet generation step utilizes object features extracted by a CNN and an RNN to
create high-confidence tracklet candidates in sparse scenarios. To handle
mis-tracking in the generation process, tracklets containing different objects are
split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU
based tracklet re-connection method is applied to link the sub-tracklets that belong
to the same object into a whole trajectory. In addition, we extract
tracklet images from existing MOT datasets and propose a novel dataset to train
our networks. The proposed dataset contains more than 95,160 pedestrian images of
793 different persons, with an average of 120 images per person annotated with
positions and sizes. Experimental results demonstrate the
advantages of our model over the state-of-the-art methods on MOT16.
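To make the re-connection step concrete, here is a hedged PyTorch sketch of a Siamese GRU that scores whether two sub-tracklets belong to the same object; layer sizes, the similarity head, and all names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SiameseGRU(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def encode(self, tracklet):
        """tracklet: (batch, time, feat_dim) sequence of per-frame appearance features."""
        _, h = self.gru(tracklet)
        return h[-1]                     # final hidden state as the tracklet embedding

    def forward(self, tracklet_a, tracklet_b):
        za, zb = self.encode(tracklet_a), self.encode(tracklet_b)
        return torch.sigmoid(self.head(torch.cat([za, zb], dim=-1)))  # same-object score
```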
|
Let I be a sigma-ideal sigma-generated by a projective collection of closed
sets. The forcing with I-positive Borel sets is proper and adds a single real r
of an almost minimal degree: if s is a real in V[r] then s is Cohen generic
over V or V[s]=V[r].
|
We numerically analyze spectral properties of the Fibonacci model which is a
one-dimensional quasiperiodic system. We find that the distribution of the band
widths $w$ of the energy levels obeys $P_B(w)\sim w^{\alpha}$ $(w\to 0)$ and
$P_B(w) \sim e^{-\beta w}$ $(w\to\infty)$, while the gap distribution obeys
$P_G(s)\sim s^{-\delta}$ $(s\to 0)$, with $\alpha,\beta,\delta >0$.
We also compare the results with those of multi-scale Cantor sets. We find
qualitative differences between the spectra of the Fibonacci model and the
multi-scale Cantor sets.
|
We study the electronic band structure of monolayer graphene when Rashba
spin-orbit coupling is present. We show that if the Rashba spin-orbit coupling
is stronger than the intrinsic spin-orbit coupling, the low energy bands
undergo trigonal-warping deformation and that for energies smaller than the
Lifshitz energy, the Fermi circle breaks up into separate parts. The effect is
very similar to what happens in bilayer graphene at low energies. We discuss
the possible experimental implications, such as a threefold increase of the
minimal conductivity for low electron densities, the wavenumber dependence of
the band splitting and the spin polarization structure. Our theoretical
predictions are in agreement with recent experimental results.
|
We show that simple thermodynamic conditions determine, to a great extent,
the equation of state and dynamics of cosmic defects of arbitrary
dimensionality. We use these conditions to provide a more direct derivation of
the Velocity-dependent One-Scale (VOS) model for the macroscopic dynamics of
topological defects of arbitrary dimensionality in an $(N+1)$-dimensional
homogeneous and isotropic universe. We parameterize the modifications to the
VOS model associated with the interaction of the topological defects with other
fields, including, in particular, a new dynamical degree of freedom associated
with the variation of the mass per unit $p$-area of the defects, and compute the
corresponding scaling solutions. The observational impact of this new dynamical
degree of freedom is also briefly discussed.
|
We present two exoplanets detected at Keck Observatory. HD 179079 is a G5
subgiant that hosts a hot Neptune planet with Msini = 27.5 M_earth in a 14.48
d, low-eccentricity orbit. The stellar reflex velocity induced by this planet
has a semiamplitude of K = 6.6 m/s. HD 73534 is a G5 subgiant with a
Jupiter-like planet of Msini = 1.1 M_jup and K = 16 m/s in a nearly circular
4.85 yr orbit. Both stars are chromospherically inactive and metal-rich. We
discuss a known, classical bias in measuring eccentricities for orbits with
velocity semiamplitudes, K, comparable to the radial velocity uncertainties.
For exoplanets with periods longer than 10 days, the observed exoplanet
eccentricity distribution is nearly flat for large amplitude systems (K > 80
m/s), but rises linearly toward low eccentricity for lower amplitude systems (K
> 20 m/s).
|
In this paper we propose an extension of Answer Set Programming (ASP), and in
particular, of its most general logical counterpart, Quantified Equilibrium
Logic (QEL), to deal with partial functions. Although the treatment of equality
in QEL can be established in different ways, we first analyse the choice of
decidable equality with complete functions and Herbrand models, recently
proposed in the literature. We argue that this choice yields some
counterintuitive effects from a logic programming and knowledge representation
point of view. We then propose a variant called QELF in which the set of functions
is partitioned into partial functions and Herbrand functions (which we also call
constructors).
In the rest of the paper, we show a direct connection to Scott's Logic of
Existence and present a practical application, proposing an extension of normal
logic programs to deal with partial functions and equality, so that they can be
translated into function-free normal programs, making it possible in this way to
compute their answer sets with any standard ASP solver.
|
We present the construction of an original stochastic model for the
instantaneous turbulent kinetic energy at a given point of a flow, and we
validate estimator methods on this model with observational data examples.
Motivated by the wind energy industry's need to acquire relevant
statistical information about air motion at a local site, we adopt the Lagrangian
description of fluid flows to derive, from the $3$D+time equations of the
physics, a $0$D+time stochastic model for the time series of the instantaneous
turbulent kinetic energy at a given position. Specifically, based on the
Lagrangian stochastic description of generic fluid particles, we derive a
family of mean-field dynamics featuring the square norm of the turbulent
velocity. By approximating the characteristic nonlinear terms of the dynamics at
equilibrium, we recover the so-called Cox-Ingersoll-Ross process, which was
previously suggested in the literature for modelling wind speed. We then
propose a calibration procedure for the parameters employing both direct
methods and Bayesian inference. In particular, we show the consistency of the
estimators and validate the model through the quantification of uncertainty,
with respect to the range of values given in the literature for some physical
constants of turbulence modelling.
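For concreteness, a hedged Euler-Maruyama sketch of the Cox-Ingersoll-Ross process $dX_t = a(b - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t$ mentioned above is given below; the parameter values are placeholders, not the calibrated values from the paper.

```python
import numpy as np

def simulate_cir(a=1.0, b=1.0, sigma=0.5, x0=1.0, dt=1e-3, n_steps=10_000, seed=0):
    """Euler-Maruyama path of a CIR process, clipped at zero to preserve positivity."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        drift = a * (b - x[k - 1]) * dt
        diffusion = sigma * np.sqrt(x[k - 1]) * dw
        x[k] = max(x[k - 1] + drift + diffusion, 0.0)
    return x
```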
|
Deep learning and knowledge transfer techniques have permeated the field of
medical imaging and are considered key approaches for revolutionizing
diagnostic imaging practices. However, there are still challenges for the
successful integration of deep learning into medical imaging tasks due to a
lack of large annotated imaging data. To address this issue, we propose a
teacher-student learning framework to transfer knowledge from a carefully
pre-trained convolutional neural network (CNN) teacher to a student CNN. In
this study, we explore the performance of knowledge transfer in the medical
imaging setting. We investigate the proposed network's performance when the
student network is trained on a small dataset (target dataset) as well as when
teacher's and student's domains are distinct. The performances of the CNN
models are evaluated on three medical imaging datasets including Diabetic
Retinopathy, CheXpert, and ChestX-ray8. Our results indicate that the
teacher-student learning framework outperforms transfer learning for small
imaging datasets. Particularly, the teacher-student learning framework improves
the area under the ROC Curve (AUC) of the CNN model on a small sample of
CheXpert (n=5k) by 4% and on ChestX-ray8 (n=5.6k) by 9%. Beyond small
training data sizes, we also demonstrate a clear advantage of the
teacher-student learning framework over transfer learning in the medical
imaging setting. We observe that the teacher-student network holds great
promise not only for improving diagnostic performance but also for reducing
overfitting when the dataset is small.
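A generic teacher-student (knowledge-distillation) objective gives a sense of how such a framework can be trained; this is a hedged, textbook sketch, and the paper's exact loss and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a softened KL term against the teacher with a hard cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard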
|
Deep neural networks trained for classification have been found to learn
powerful image representations, which are also often used for other tasks such
as comparing images w.r.t. their visual similarity. However, visual similarity
does not imply semantic similarity. In order to learn semantically
discriminative features, we propose to map images onto class embeddings whose
pair-wise dot products correspond to a measure of semantic similarity between
classes. Such an embedding not only improves image retrieval results, but
could also facilitate integrating semantics into other tasks, e.g., novelty
detection or few-shot learning. We introduce a deterministic algorithm for
computing the class centroids directly based on prior world-knowledge encoded
in a hierarchy of classes such as WordNet. Experiments on CIFAR-100, NABirds,
and ImageNet show that our learned semantic image embeddings improve the
semantic consistency of image retrieval results by a large margin.
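One way to obtain class embeddings whose pairwise dot products reproduce a given semantic similarity matrix is an eigendecomposition; this hedged sketch illustrates the goal but is not necessarily the deterministic hierarchy-based algorithm used in the paper.

```python
import numpy as np

def class_embeddings(S):
    """Rows phi_i of the returned matrix satisfy phi_i . phi_j ~= S_ij for symmetric PSD S."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, 0.0, None)        # guard against tiny negative eigenvalues
    return vecs @ np.diag(np.sqrt(vals))
```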
|
In this Essay we address several fundamental issues in cosmology: What is the
nature of dark energy and dark matter? Why is the dark sector so different from
ordinary matter? Why is the effective cosmological constant non-zero but so
incredibly small? What is the reason behind the emergence of a critical
acceleration parameter of magnitude $10^{-8}\,\mathrm{cm/s^2}$ in galactic dynamics? We
suggest that the holographic principle is the linchpin in a unified scheme to
understand these various issues.
|
We analyze the nonlinear dynamics of a high-finesse optical cavity in which
one mirror is mounted on a flexible mechanical element. We find that this
system is governed by an array of dynamical attractors, which arise from
phase-locking between the mechanical oscillations of the mirror and the ringing
of the light intensity in the cavity. We describe an analytical approximation
to map out the diagram of attractors in parameter space, derive the slow
amplitude dynamics of the system, including thermally activated hopping between
different attractors, and suggest a scheme for exploiting the dynamical
multistability in the measurement of small displacements.
|
Deaf or hard-of-hearing (DHH) speakers typically have atypical speech caused
by deafness. With the growing support of speech-based devices and software
applications, more work needs to be done to make these devices inclusive to
everyone. To do so, we analyze the use of openly-available automatic speech
recognition (ASR) tools with a DHH Japanese speaker dataset. As these
out-of-the-box ASR models typically do not perform well on DHH speech, we
provide a thorough analysis of creating personalized ASR systems. We collected
a large DHH speaker dataset of four speakers totaling around 28.05 hours and
thoroughly analyzed the performance of different training frameworks by varying
the training data sizes. Our findings show that 1000 utterances (or 1-2 hours)
from a target speaker can already significantly improve model performance
with a minimal amount of work; we therefore recommend that researchers collect at
least 1000 utterances to build an efficient personalized ASR system. In cases
where 1000 utterances are difficult to collect, we also observe significant
improvements from previously proposed data augmentation and intermediate
fine-tuning techniques when only 200 utterances are available.
|
We present redshift space two-point ($\xi$), three-point ($\zeta$) and
reduced three-point (Q) correlations of Ly$\alpha$ absorbers (Voigt profile
components having HI column density, $N_{\rm HI}>10^{13.5}$cm$^{-2}$) over
three redshift bins spanning $1.7< z<3.5$ using high-resolution spectra of 292
quasars. We detect positive $\xi$ up to 8 $h^{-1}$ cMpc in all three redshift
bins. The strongest detection of $\zeta =1.81\pm 0.59$ (with Q$=0.68\pm 0.23$),
is in $z=1.7-2.3$ bin at $1-2h^{-1}$ cMpc. The measured $\xi$ and $\zeta$
values show an increasing trend with $N_{\rm HI}$, while Q remains relatively
independent of $N_{\rm HI}$. We find $\xi$ and $\zeta$ to evolve strongly with
redshift. Using simulations, we find that $\xi$ and $\zeta$ seen in real space
may be strongly amplified by peculiar velocities in redshift space. Simulations
suggest that while feedback, thermal and pressure smoothing effects influence
the clustering of Ly$\alpha$ absorbers at small scales, i.e., $<0.5h^{-1}$ cMpc,
the HI photo-ionization rate ($\Gamma_{\rm HI}$) has a strong influence at all
scales. The strong redshift evolution of $\xi$ and $\zeta$ (for a fixed $N_{\rm
HI}$-cutoff) is driven by the redshift evolution of the relationship between
$N_{\rm HI}$ and baryon overdensity. Our simulation using best-fitted
$\Gamma_{\rm HI}(z)$ measurements produces consistent clustering signals with
observations at $z\sim 2$ but under-predicts the clustering at higher
redshifts. One possible remedy is to have higher values of $\Gamma_{\rm HI}$ at
higher redshifts. Alternatively, the discrepancy could be related to
non-equilibrium and inhomogeneous conditions prevailing during HeII
reionization not captured by our simulations.
|
Single-shot measurements of the charge arrangement and spin state of a double
quantum dot are reported, with measurement times down to ~ 100 ns. Sensing uses
radio-frequency reflectometry of a proximal quantum dot in the Coulomb blockade
regime. The sensor quantum dot is up to 30 times more sensitive than a
comparable quantum point contact sensor, and yields three times greater signal
to noise in rf single-shot measurements. Numerical modeling is qualitatively
consistent with experiment and shows that the improved sensitivity of the
sensor quantum dot results from reduced screening and lifetime broadening.
|
We propose a lean and functional transaction scheme to establish a secure
delivery-versus-payment across two blockchains, where a) no intermediary is
required and b) the operator of the payment chain/payment system has a small
overhead and does not need to store state. The main idea comes with two
requirements: First, the payment chain operator hosts a stateless decryption
service that allows decrypting messages with its secret key. Second, a "Payment
Contract" is deployed on the payment chain that implements a function
transferAndDecrypt(uint id, address from, address to, string
keyEncryptedSuccess, string keyEncryptedFail) that processes the
(trigger-based) payment and emits the decrypted key depending on the success or
failure of the transaction. The respective key can then trigger an associated
transaction, e.g. claiming delivery by the buyer or re-claiming the locked
asset by the seller.
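The flow of the payment contract can be mocked up as follows; this is a hedged Python model of the described logic (the real contract would live on the payment chain, e.g. as a smart contract), and the ledger and decryption-oracle interfaces are assumptions for illustration.

```python
def transfer_and_decrypt(ledger, decrypt_oracle, tx_id, sender, receiver, amount,
                         key_encrypted_success, key_encrypted_fail):
    """Process the trigger-based payment, then emit the appropriate decrypted key."""
    if ledger.get(sender, 0) >= amount:
        ledger[sender] -= amount
        ledger[receiver] = ledger.get(receiver, 0) + amount
        return decrypt_oracle(key_encrypted_success)   # buyer can claim delivery
    return decrypt_oracle(key_encrypted_fail)           # seller can re-claim the locked asset
```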
|
In his famous monograph on permutation groups, H.~Wielandt gives an example
of a Schur ring over an elementary abelian group of order $p^2$ ($p>3$ is a
prime), which is non-schurian, that is, it is the transitivity module of no
permutation group. Generalizing this example, we construct a huge family of
non-schurian Schur rings over elementary abelian groups of even rank.
|
We study Andreev bound states (ABS) and the resulting charge transport of a Rashba
superconductor (RSC), in which a two-dimensional semiconductor (2DSM)
heterostructure is sandwiched between a spin-singlet s-wave superconductor and a
ferromagnetic insulator. The ABS become a chiral Majorana edge mode, similar to that
of spinless chiral p-wave pairing, in the topological phase (TP). We clarify two
types of quantum criticality concerning the topological change of the ABS near a
quantum critical point (QCP), depending on whether the ABS exist at the QCP or not.
In the former type, the ABS have an energy gap and do not cross zero energy in the
non-topological phase (NTP). These properties can be detected by the tunneling
conductance of normal metal / RSC junctions.
|
Throughout the processing and analysis of survey data, a ubiquitous issue
nowadays is that we are spoilt for choice when we need to select a methodology
for some of its steps. The alternative methods usually fail and excel in
different data regions, and have various advantages and drawbacks, so a
combination that unites the strengths of all while suppressing the weaknesses
is desirable. We propose to use a two-level hierarchy of learners. Its first
level consists of training and applying the possible base methods on the first
part of a known set. At the second level, we feed the output probability
distributions from all base methods to a second learner trained on the
remaining known objects. Using classification of variable stars and photometric
redshift estimation as examples, we show that the hierarchical combination is
capable of achieving a general improvement over averaging-type combination
methods and of correcting systematics present in all base methods, while being easy
to train and apply; it is thus a promising tool in the astronomical "Big Data" era.
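The two-level hierarchy described here is essentially stacking; a hedged scikit-learn sketch is given below, where the base estimators are arbitrary stand-ins for the survey-specific methods and hyperparameters are illustrative.

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# First level: base methods; second level: a learner fed their output probabilities.
base = [("rf", RandomForestClassifier()), ("knn", KNeighborsClassifier())]
model = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(),
                           stack_method="predict_proba",
                           cv=5)
# Usage: model.fit(X_train, y_train); model.predict(X_new)
```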
|
A state-dependent relay channel is studied in which strictly causal channel
state information is available at the relay and no state information is
available at the source and destination. Source and relay are connected via two
unidirectional out-of-band orthogonal links of finite capacity, and a
state-dependent memoryless channel connects source and relay, on one side, and
the destination, on the other. Via the orthogonal links, the source can convey to
the relay information about the message to be delivered to the destination,
while the relay can forward state information to the source. This exchange
enables cooperation between source and relay on the transmission of both message
and state information to the destination. First, an achievable scheme, inspired
by noisy network coding, is proposed that exploits both message and state
cooperation. Next, based on the given achievable rate and appropriate upper
bounds, capacity results are identified for some special cases. Finally, a
Gaussian model is studied, along with corresponding numerical results that
illuminate the relative merits of state and message cooperation.
|
We present first results from RoboPol, a novel-design optical polarimeter
operating at the Skinakas Observatory in Crete. The data, taken during the May
- June 2013 commissioning of the instrument, constitute a single-epoch linear
polarization survey of a sample of gamma-ray-loud blazars, defined according
to unbiased and objective selection criteria, easily reproducible in
simulations, as well as a comparison sample of otherwise similar gamma-ray-quiet
blazars. As such, the results of this survey are appropriate for both
phenomenological population studies and for tests of theoretical population
models. We have measured polarization fractions as low as $0.015$ down to an $R$
magnitude of 17 and as low as $0.035$ down to magnitude 18. The hypothesis that
the polarization fractions of gamma-ray-loud and gamma-ray-quiet blazars
are drawn from the same distribution is rejected at the $10^{-3}$ level. We
therefore conclude that gamma-ray-loud and gamma-ray-quiet sources have
different optical polarization properties. This is the first time this
statistical difference is demonstrated in optical wavelengths. The polarization
fraction distributions of both samples are well-described by exponential
distributions with averages of $\langle p \rangle =6.4 ^{+0.9}_{-0.8}\times
10^{-2}$ for gamma-ray-loud blazars, and $\langle p \rangle =3.2
^{+2.0}_{-1.1}\times 10^{-2}$ for gamma-ray-quiet blazars. The most probable
value for the difference of the means is $3.4^{+1.5}_{-2.0}\times 10^{-2}$. The
distribution of polarization angles is statistically consistent with being
uniform.
|
We present the results of precision mass measurements of neutron-rich cadmium
isotopes. These nuclei approach the $N=82$ closed neutron shell and are
important to nuclear structure as they lie near doubly-magic $^{132}$Sn on the
chart of nuclides. Of particular note is the clear identification of the ground
state mass in $^{127}$Cd along with the isomeric state. We show that the ground
state identified in a previous mass measurement which dominates the mass value
in the Atomic Mass Evaluation is an isomeric state. In addition to
$^{127/m}$Cd, we present other cadmium masses measured ($^{125/m}$Cd and
$^{126}$Cd) in a recent TITAN experiment at TRIUMF. Finally, we compare our
measurements to new \emph{ab initio} shell-model calculations and comment on
the state of the field in the $N=82$ region.
|
The CMS experiment will collect data from the proton-proton collisions
delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to
14 TeV. The CMS trigger system is designed to cope with unprecedented
luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger
architecture only employs two trigger levels. The Level-1 trigger is
implemented using custom electronics. The High Level Trigger is implemented on
a large cluster of commercial processors, the Filter Farm. Trigger menus have
been developed for detector calibration and for fulfilment of the CMS physics
program, at start-up of LHC operations, as well as for operations with higher
luminosities. A complete multipurpose trigger menu developed for an early
instantaneous luminosity of 10^{32} cm^{-2} s^{-1} has been tested in the HLT
system under realistic online running conditions. The required computing power
needed to process with no dead time a maximum HLT input rate of 50 kHz, as
expected at startup, has been measured, using the most recent commercially
available processors. The Filter Farm has been equipped with 720 such
processors, providing a computing power at least a factor two larger than
expected to be needed at startup. Results for the commissioning of the
full-scale trigger and data acquisition system with cosmic muon runs are
reported. The trigger performance during operations with LHC circulating proton
beams, delivered in September 2008, is outlined and first results are shown.
|
Super-resolution imaging with advanced optical systems has been
revolutionizing technical analysis in various fields from biological to
physical sciences. However, many objects are hidden by strongly scattering
media such as rough wall corners or biological tissues that scramble light
paths, create speckle patterns and hinder visualization of the object, let alone
super-resolution imaging. Here, we realize a method for non-invasive
super-resolution imaging through scattering media based on a stochastic optical
scattering localization imaging (SOSLI) technique. Simply by capturing multiple
speckle patterns of photo-switchable emitters in our demonstration, the
stochastic approach utilizes the speckle correlation properties of scattering
media to retrieve an image with more than five-fold resolution enhancement
compared to the diffraction limit, while posing no fundamental limit in
achieving higher spatial resolution. More importantly, we demonstrate SOSLI
performing non-invasive super-resolution imaging through not only optical diffusers,
i.e., static scattering media, but also biological tissues, i.e., dynamic
scattering media with decorrelation of up to 80%. Our approach paves the way to
non-invasively visualize various samples behind scattering media at
unprecedented levels of detail.
|
Artifact-centric process models aim to describe complex processes as a
collection of interacting artifacts. Recent developments in process mining allow
for the discovery of such models. However, the focus is often on the
representation of the individual artifacts rather than their interactions.
Based on event data, we can automatically discover composite state machines
representing artifact-centric processes. Moreover, we provide ways of
visualizing and quantifying interactions among different artifacts. For
example, we are able to highlight strongly correlated behaviours in different
artifacts. The approach has been fully implemented as a ProM plug-in; the CSM
Miner provides an interactive artifact-centric process discovery tool focussing
on interactions. The approach has been evaluated using real life data sets,
including the personal loan and overdraft process of a Dutch financial
institution.
|
In this paper, deep-learning-based approaches, namely fine-tuning of
pretrained convolutional neural networks (VGG16 and VGG19) and end-to-end
training of a custom CNN model, have been used to classify X-ray
images into four classes: COVID-19, normal, opacity and
pneumonia. A dataset containing more than 20,000 X-ray scans was
retrieved from Kaggle and used in this experiment. A two-stage classification
approach was implemented and compared to the one-shot classification
approach. Our hypothesis was that a two-stage model would be able to achieve
better performance than a one-shot model. Our results show otherwise, as VGG16
achieved 95% accuracy using the one-shot approach over 5-fold training. Future
work will focus on a more robust implementation of the two-stage classification
model Covid-TSC. The main improvement will be allowing data to flow from the
output of stage-1 to the input of stage-2, where stage-1 and stage-2 models are
VGG16 models fine-tuned on the Covid-19 dataset.
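As a hedged illustration of fine-tuning a pretrained VGG16 for this four-class problem (the head layout and hyperparameters below are illustrative assumptions, not the paper's exact setup):

```python
import tensorflow as tf

# Pretrained VGG16 backbone with the convolutional features frozen initially.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # COVID-19, normal, opacity, pneumonia
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```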
|
Measurements are generally collected as unilateral or bilateral data in
clinical trials or observational studies. For example, in ophthalmology
studies, the primary outcome is often obtained from one eye or both eyes of an
individual. In medical studies, the relative risk is usually the parameter of
interest and is commonly used. In this article, we develop three confidence
intervals for the relative risk for combined unilateral and bilateral
correlated data under the equal dependence assumption. The proposed confidence
intervals are based on maximum likelihood estimates of parameters derived using
the Fisher scoring method. Simulation studies are conducted to evaluate the
performance of proposed confidence intervals with respect to the empirical
coverage probability, the mean interval width, and the ratio of mesial
non-coverage probability to the distal non-coverage probability. We also
compare the proposed methods with the confidence interval based on the method
of variance estimates recovery and the confidence interval obtained from the
modified Poisson regression model with correlated binary data. We recommend the
score confidence interval for general applications because it best controls
coverage probabilities at the 95% level with reasonable mean interval width. We
illustrate the methods with a real-world example.
|
Tight-binding calculations predict that the AA-stacked graphene bilayer has
one electron and one hole conducting band, and that the Fermi surfaces of
these bands coincide. We demonstrate that as a result of this degeneracy, the
bilayer becomes unstable with respect to a set of spontaneous symmetry
violations. Which of the symmetries is broken depends on the microscopic
details of the system. We find that antiferromagnetism is the more stable order
parameter. This order is stabilized by the strong on-site Coulomb repulsion.
For an on-site repulsion energy typical for graphene systems, the
antiferromagnetic gap can exist up to room temperatures.
|
In this note, we study the fluctuations in the number of points of smooth
projective plane curves over finite fields $\mathbb{F}_q$ as $q$ is fixed and
the genus varies. More precisely, we show that these fluctuations are predicted
by a natural probabilistic model, in which the points of the projective plane
impose independent conditions on the curve. The main tool we use is a geometric
sieving process introduced by Poonen.
|
This paper considers the stabilization of nonlinear continuous-time dynamical
systems employing periodic event-triggered control (PETC). Assuming knowledge
of a stabilizing feedback law for the continuous-time system with a certain
convergence rate, a dynamic, state-dependent PETC mechanism is designed. The
proposed mechanism guarantees on average the same worst-case convergence
behavior except for tunable deviations. Furthermore, a new approach to
determine the sampling period for the proposed PETC mechanism is presented.
This approach, as well as the actual trigger rule, exploits the theory of
non-monotonic Lyapunov functions. An additional feature of the proposed PETC
mechanism is the possibility to integrate knowledge about packet losses in the
PETC design. The proposed PETC mechanism is illustrated with a nonlinear
numerical example from literature. This paper is the accepted version of [1],
containing also the proofs of the main results.
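To fix ideas, a hedged sketch of a generic periodic event-triggered control loop follows: the trigger condition is checked only at the sampling instants, and the control input is updated only when it fires. The plant, feedback law, and trigger rule are placeholder callables, not the paper's construction.

```python
import numpy as np

def petc_loop(x0, f, kappa, trigger, h=0.01, n_steps=1000):
    """Simulate dx/dt = f(x, u) with control kappa updated only at triggered sampling instants."""
    x = np.asarray(x0, dtype=float)
    u = kappa(x)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + h * f(x, u)        # forward-Euler step between sampling instants
        if trigger(x, u):          # periodic check of the trigger rule
            u = kappa(x)           # update the control only when triggered
        traj.append(x.copy())
    return np.array(traj)
```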
|
The highly efficient exciton-exciton annihilation process unique to
one-dimensional systems is utilized for super-resolution imaging of
air-suspended carbon nanotubes. Through the comparison of fluorescence signals
in linear and sublinear regimes at different excitation powers, we extract the
efficiency of the annihilation processes using conventional confocal
microscopy. Spatial images of the annihilation rate of the excitons have
resolution beyond the diffraction limit. We investigate the excitation power
dependence of the annihilation processes by experiment and Monte Carlo
simulation, and the resolution improvement of the annihilation images can be
quantitatively explained by the superlinearity of the annihilation process. We
have also developed another method in which the cubic dependence of the
annihilation rate on exciton density is utilized to achieve further sharpening
of single nanotube images.
|
Aval et al. proved that starting from a critical configuration of a
chip-firing game on an undirected graph, one can never achieve a stable
configuration by reverse firing any non-empty subsets of its vertices. In this
paper, we generalize the result to digraphs with a global sink where reverse
firing subsets of vertices is replaced with reverse firing multi-subsets of
vertices. Consequently, a combinatorial proof for the duality between critical
configurations and superstable configurations on digraphs is given. Finally, by
introducing the concept of an energy vector assigned to each configuration, we
show that the critical and superstable configurations are the unique ones with,
respectively, the greatest and smallest energy vectors (w.r.t. the containment
order) in their equivalence classes.
|
The stability of an abelian (Nielsen-Olesen) vortex embedded in the
electroweak theory against W production is investigated in a gauge defined by
the condition of a single-component Higgs field. The model is characterized by
the parameters $\beta=({M_H\over M_Z})^2$ and $\gamma=\cos^2\theta_{\rm w}$
where $\theta_{\rm w}$ is the weak mixing angle. It is shown that the equations
for W's in the background of the Nielsen-Olesen vortex have no solutions in the
linear approximation. A necessary condition for the nonlinear equations to have
a solution in the region of parameter space where the abelian vortex is
classically unstable is that the W's be produced in a state of angular momentum
$m$ such that $0>m>-2n$. The integer $n$ is defined by the phase of the Higgs
field, $\exp(in\varphi)$. Solutions for a set of values of the parameters
$\beta$ and $\gamma$ in this region were obtained numerically for the case
$-m=n=1$. The possibility of existence of a stationary state for $n=1$ with W's
in the state $m=-1$ was investigated. The boundary conditions for the
Euler-Lagrange equations required to make the energy finite cannot be satisfied
at $r=0$. For these values of $n$ and $m$ the possibility of a finite-energy
stationary state defined in terms of distributions is discussed.
|
We study the effect of primordial black holes on the classical rate of
nucleation of AdS regions within the standard electroweak vacuum. We find that
the energy barrier for transitions to the new vacuum, which characterizes the
exponential suppression of the nucleation rate, can be reduced significantly in
the black-hole background. A precise analysis is required in order to determine
whether the existence of primordial black holes is compatible with the form
of the Higgs potential at high temperature or density in the Standard Model or
its extensions.
|
A new family of spark-protected micropattern gaseous detectors is introduced:
a 2-D sensitive resistive microstrip counter and hybrid detectors, which combine
in one design a resistive GEM with a microstrip detector. These novel detectors
have several important advantages over other conventional micropattern
detectors and are unique for applications like the readout detectors for dual
phase noble liquid TPCs and RICHs.
|
In a class of three-dimensional Abelian gauge theories with both light and
heavy fermions, heavy chiral fermions can trigger dynamical generation of a
magnetic field, leading to the spontaneous breaking of the Lorentz invariance.
Finite masses of light fermions tend to restore the Lorentz invariance.
|
Feature selection aims to identify the optimal feature subset for enhancing
downstream models. Effective feature selection can remove redundant features,
save computational resources, accelerate the model learning process, and
improve overall model performance. However, existing methods are often
time-intensive when identifying effective feature subsets within high-dimensional
feature spaces. Meanwhile, these methods mainly use the performance on a single
downstream task as the selection criterion, leading to selected subsets
that are not only redundant but also lack generalizability. To bridge these
gaps, we reformulate feature selection through a neuro-symbolic lens and
introduce a novel generative framework aimed at identifying short and effective
feature subsets. More specifically, we find that the feature ID tokens of the
selected subset can be formulated as symbols that reflect the intricate
correlations among features. Thus, in this framework, we first create a data
collector to automatically collect numerous feature selection samples
consisting of feature ID tokens, model performance, and the measurement of
feature subset redundancy. Building on the collected data, an
encoder-decoder-evaluator learning paradigm is developed to preserve the
intelligence of feature selection into a continuous embedding space for
efficient search. Within the learned embedding space, we leverage a
multi-gradient search algorithm to find more robust and generalized embeddings
with the objective of improving model performance and reducing feature subset
redundancy. These embeddings are then utilized to reconstruct the feature ID
tokens for executing the final feature selection. Ultimately, comprehensive
experiments and case studies are conducted to validate the effectiveness of the
proposed framework.
|
Fermi balls produced in a cosmological first-order phase transition may
collapse to primordial black holes (PBHs) if the fermion dark matter particles
that comprise them interact via a sufficiently strong Yukawa force. We show
that phase transitions described by a quartic thermal effective potential with
vacuum energy, $0.1\lesssim B^{1/4}/{\rm MeV} \lesssim 10^3$, generate PBHs of
mass, $10^{-20}\lesssim M_{\rm PBH}/M_\odot \lesssim 10^{-16}$, and
gravitational waves from the phase transition (at THEIA/$\mu$Ares) can be
correlated with an isotropic extragalactic X-ray/$\gamma$-ray background from
PBH evaporation (at AMEGO-X/e-ASTROGAM).
|
We experimentally demonstrate optical control of negative-feedback avalanche
diode (NFAD) detectors using bright light. We deterministically generate fake
single-photon detections with a better timing precision than normal operation.
This could potentially open a security loophole in quantum cryptography
systems. We then show how monitoring the photocurrent through the avalanche
photodiode can be used to reveal that the detector is being blinded.
|
We show how to prepare four-photon polarization entangled states based on
some Einstein-Podolsky-Rosen (EPR) entanglers. An EPR entangler consists of two
single photons, linear optics elements, quantum non-demolition measurement
using a weak cross-Kerr nonlinearity, and classical feed-forward. This
entangler, which acts as the primary building block of our scheme,
allows us to entangle two separable polarization qubits near
deterministically. Therefore, the efficiency of the present device completely
depends on that of EPR entanglers, and it has a high success probability.
|
Dynamic topic models (DTMs) are very effective in discovering topics and
capturing their evolution trends in time series data. To do posterior inference
of DTMs, existing methods are all batch algorithms that scan the full dataset
before each update of the model and make inexact variational approximations
with mean-field assumptions. Due to the lack of a more scalable inference
algorithm, despite their usefulness, DTMs have not been applied to capture topic
dynamics at large scale.
This paper fills this research void, and presents a fast and parallelizable
inference algorithm using Gibbs Sampling with Stochastic Gradient Langevin
Dynamics that does not make any unwarranted assumptions. We also present a
Metropolis-Hastings based $O(1)$ sampler for topic assignments for each word
token. In a distributed environment, our algorithm requires very little
communication between workers during sampling (almost embarrassingly parallel)
and scales up to large-scale applications. We are able to learn the largest
Dynamic Topic Model to our knowledge, and learned the dynamics of 1,000 topics
from 2.6 million documents in less than half an hour, and our empirical results
show that our algorithm is not only orders of magnitude faster than the
baselines but also achieves lower perplexity.
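For readers unfamiliar with the sampler family used here, the textbook Stochastic Gradient Langevin Dynamics update is sketched below in hedged form; this is the generic update, not the paper's DTM-specific sampler.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng=None):
    """One SGLD update: half-step gradient ascent on the (stochastic) log posterior plus Gaussian noise."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise
```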
|
Despite the widespread use and economic importance of configurable software
systems, there is unsatisfactory support in utilizing the full potential of
these systems with respect to finding performance-optimal configurations. Prior
work on predicting the performance of software configurations suffered from
either (a) requiring far too many sample configurations or (b) large variances
in their predictions. Both these problems can be avoided using the WHAT
spectral learner. WHAT's innovation is the use of the spectrum (eigenvalues) of
the distance matrix between the configurations of a configurable software
system, to perform dimensionality reduction. Within that reduced configuration
space, many closely associated configurations can be studied by executing only
a few sample configurations. For the subject systems studied here, a few dozen
samples yield accurate and stable predictors - less than 10% prediction error,
with a standard deviation of less than 2%. When compared to the state of the
art, WHAT (a) requires 2 to 10 times fewer samples to achieve similar
prediction accuracies, and (b) its predictions are more stable (i.e., have
lower standard deviation). Furthermore, we demonstrate that predictive models
generated by WHAT can be used by optimizers to discover system configurations
that closely approach the optimal performance.
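A hedged sketch of the spectral idea, projecting configurations from the spectrum of their pairwise distance matrix in the style of classical multidimensional scaling, is shown below; this illustrates the flavor of the technique rather than WHAT's exact learner.

```python
import numpy as np

def spectral_embed(D, dims=2):
    """Embed points from a pairwise distance matrix D via double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                     # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]             # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```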
|
Identifying important nodes in complex networks is essential in both theoretical
and applied fields. A small number of such nodes have a decisive influence on
information spreading, so it is important to find a set of nodes
that maximizes propagation in networks. Based on baseline ranking methods,
various improved methods have been proposed, but no single enhanced
method covers all the base methods. In this paper, we propose a penalized
method called RCD-Map, short for resampling community detection to
maximize propagation, built on five baseline ranking methods (degree centrality,
closeness centrality, betweenness centrality, K-shell and PageRank) together with
nodes' local community information. We perturb the original graph by resampling to
decrease the biases and randomness introduced by community detection methods, both
overlapping and non-overlapping. To assess the performance of our
identification method, the SIR (susceptible-infected-recovered) model is applied to
simulate the information propagation process. The results show that the penalized
methods generally perform better, achieving a wider propagation range.
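A hedged sketch of the discrete-time SIR simulation used to score a candidate seed set follows; the infection and recovery rates are placeholders rather than the study's settings.

```python
import random
import networkx as nx

def sir_spread(G, seeds, beta=0.1, gamma=1.0, steps=50):
    """Return the total number of nodes ever reached when seeding the given nodes."""
    infected, recovered = set(seeds), set()
    for _ in range(steps):
        new_inf = {v for u in infected for v in G.neighbors(u)
                   if v not in infected and v not in recovered and random.random() < beta}
        new_rec = {u for u in infected if random.random() < gamma}
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
        if not infected:
            break
    return len(recovered | infected)
```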
|
The Nonrelativistic Effective Theory (NRET) is widely used in dark matter
direct detection and charged-lepton flavor violation studies through $\mu \to
e$ conversion. However, existing literature has not fully considered tensor
couplings. This study bridges this gap by utilizing an innovative tensor
decomposition method, extending NRET to incorporate previously overlooked
tensor interactions. We find additional operators in the $\mu \to e$ conversion
that are not present in the scalar and vector couplings. This development is
expected to have a significant impact on ongoing experiments seeking physics
beyond the Standard Model and on our understanding of the new-physics
interactions. To support further research and experimental analyses,
comprehensive tables featuring tensor matrix elements and their corresponding
operators are provided.
|
We investigate the supersymmetric extension of k-field models, in which the
scalar field is described by generalized dynamics. We illustrate some results
with models that support static solutions with the standard kink or the compact
profile.
|
In this paper we study almost complex and almost para-complex Cayley
structures on six-dimensional pseudo-Riemannian spheres in the space of purely
imaginary octaves of the split Cayley algebra $\mathbf{Ca}'$. It is shown that
the Cayley structures are non-integrable, their basic geometric characteristics
are calculated. In contrast to the usual Riemann sphere $\mathbb{S}^6$, there
exist (integrable) complex structures and para-complex structures on the
pseudospheres under consideration.
|
Intersection type systems have been independently applied to different
evaluation strategies, such as call-by-name (CBN) and call-by-value (CBV).
These type systems have then been generalized to different subsuming paradigms
being able, in particular, to encode CBN and CBV in a unique unifying
framework. However, there are no intersection type systems that explicitly
enable CBN and CBV to cohabit together without making use of an encoding into a
common target framework. This work proposes an intersection type system for PCF
with a specific notion of evaluation, called PCFH. Evaluation in PCFH actually
has a hybrid nature, in the sense that CBN and CBV operational behaviors
cohabit together. Indeed, PCFH combines a CBV-like operational behavior for
function application with a CBN-like behavior for recursion. This hybrid nature
is reflected in the type system, which turns out to be sound and complete with
respect to PCFH: not only typability implies normalization, but also the
converse holds. Moreover, the type system is quantitative, in the sense that
the size of typing derivations provides upper bounds for the length of the
reduction sequences to normal form. This type system is then refined to a tight
one, offering exact information regarding the length of normalization
sequences. This is the first time that a sound and complete quantitative type
system has been designed for a hybrid computational model.
|
Recently, denoising diffusion models have led to significant breakthroughs in
the generation of images, audio and text. However, it is still an open question
how to adapt their strong modeling ability to time series. In this
paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves
high-quality time series prediction with the introduction of two novel
conditioning mechanisms: future mixup and autoregressive initialization.
Similar to teacher forcing, future mixup uses parts of the ground-truth
future for conditioning, while autoregressive initialization helps
better initialize the model with basic time series patterns such as short-term
trends. Extensive experiments are performed on nine real-world datasets.
Results show that TimeDiff consistently outperforms existing time series
diffusion models, and also achieves the best overall performance across a
variety of the existing strong baselines (including transformers and FiLM).
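A loose, hedged reading of the future-mixup conditioning is sketched below: during training, the conditioning features are randomly blended element-wise with the ground-truth future. The mask distribution and tensor shapes are assumptions for illustration, not the paper's exact definition.

```python
import torch

def future_mixup(cond_features, future_truth):
    """Blend conditioning features with the ground-truth future using a random element-wise mask."""
    # cond_features and future_truth are assumed to have the same shape.
    m = torch.rand_like(future_truth)          # mixing weights in [0, 1)
    return m * cond_features + (1.0 - m) * future_truth
```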
|
New time-resolved optical spectroscopic echelle observations of the nova-like
cataclysmic variable RW Sextantis were obtained, with the aim of studying the
properties of emission features in the system. The profile of the H_alpha
emission line can be clearly divided into two (`narrow' and `wide') components.
Similar emission profiles are observed in another nova-like system,
1RXS~J064434.5+33445, for which we also reanalysed the spectral data and
redetermined the system parameters. The source of the `narrow', low-velocity
component is the irradiated face of the secondary star. We disentangled and
removed the `narrow' component from the H_alpha profile to study the origin and
structure of the region emitting the wide component. We found that the `wide'
component is not related to the white dwarf or the wind from the central part
of the accretion disc, but emanates from the outer side of the disc.
Inspection of literature on similar systems indicates that this feature is
common for some other long-period nova-like variables. We propose that the
source of the `wide' component is an extended, low-velocity region in the
outskirts of the opposite side of the accretion disc, with respect to the
collision point of the accretion stream and the disc.
|
In this paper, we study the combinatorial multi-armed bandit problem (CMAB)
with probabilistically triggered arms (PTAs). Under the assumption that the arm
triggering probabilities (ATPs) are positive for all arms, we prove that a
class of upper confidence bound (UCB) policies, named Combinatorial UCB with
exploration rate $\kappa$ (CUCB-$\kappa$), and Combinatorial Thompson Sampling
(CTS), which estimates the expected states of the arms via Thompson sampling,
achieve bounded regret. In addition, we prove that CUCB-$0$ and CTS incur
$O(\sqrt{T})$ gap-independent regret. These results improve on the results of
previous works, which show $O(\log T)$ gap-dependent and $O(\sqrt{T\log T})$
gap-independent regrets, respectively, under no assumptions on the ATPs. Then,
we numerically evaluate the performance of CUCB-$\kappa$ and CTS in a
real-world movie recommendation problem, where the actions correspond to
recommending a set of movies, the arms correspond to the edges between the
movies and the users, and the goal is to maximize the total number of users
that are attracted by at least one movie. Our numerical results complement our
theoretical findings on bounded regret. Apart from this problem, our results
also directly apply to the online influence maximization (OIM) problem studied
in numerous prior works.
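For intuition, a hedged sketch of a CUCB-style index is given below: the empirical mean of each arm plus an exploration bonus scaled by kappa, after which a problem-specific oracle selects the action (set of arms) with the highest estimated reward. The constant inside the bonus is generic, not the paper's exact rule.

```python
import numpy as np

def ucb_indices(counts, mean_rewards, t, kappa=1.0):
    """Per-arm optimistic indices at round t; an oracle then picks the best super-arm."""
    counts = np.maximum(counts, 1)                      # avoid division by zero for unplayed arms
    return mean_rewards + kappa * np.sqrt(1.5 * np.log(t) / counts)
```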
|
During an epidemic, infectious individuals might not be detectable until some
time after becoming infected. Studies show that carriers with mild or no
symptoms are the main contributors to the transmission of a virus within the
population. The average time it takes to develop the symptoms causes a delay in
the spread dynamics of the disease. When considering the influence of delay on
the disease propagation in epidemic networks, depending on the value of the
time-delay and the network topology, the peak of the epidemic can differ considerably
in time, duration, and intensity. Motivated by the recent worldwide
outbreak of the COVID-19 virus and the topological extent to which this virus
has spread over the course of a few months, this study aims to highlight the
effect of time-delay on the progress of such infectious diseases in
meta-population networks rather than in individuals or a single population. In
this regard, the notions of epidemic network centrality in terms of the
underlying interaction graph of the network, structure of the uncertainties,
and symptom development duration are investigated to establish a
centrality-based analysis of the disease evolution. A convex traffic volume
optimization method is then developed to control the outbreak. The control
process is done by identifying the sub-populations with the highest centrality
and then isolating them while maintaining the same overall traffic volume
(motivated by economic considerations) in the meta-population level. The
numerical results, along with the theoretical expectations, highlight the
impact of time-delay as well as the importance of considering the worst-case
scenarios in investigating the most effective methods of epidemic containment.
|
I discuss the recent claims made by Mario Bunge on the philosophical
implications of the discovery of gravitational waves. I think that Bunge is
right when he points out that the detection implies the materiality of
spacetime, but I reject his identification of spacetime with the gravitational
field. I show that Bunge's analysis of the spacetime inside a hollow sphere is
defective, but this in no way affects his main claim.
|
We review a recent proposal for the construction of a quantum theory of the
gravitational field. The proposal is based on approximating the continuum
theory by a discrete theory that has several attractive properties, among them,
the fact that in its canonical formulation it is free of constraints. This
allows to bypass many of the hard conceptual problems of traditional canonical
quantum gravity. In particular the resulting theory implies a fundamental
mechanism for decoherence and bypasses the black hole information paradox.
|
In this article we demonstrate that a grating fabricated through nanoscale
volumetric crosslinking of a liquid crystalline polymer enables remote
polarization control over the diffracted channels. This functionality is a
consequence of the responsivity of liquid crystal networks to light stimuli.
Tuning of the photonic response is achieved through both a refractive-index
change and a shape change of the grating elements, induced by molecular
rearrangement under irradiation. In particular, the material anisotropy allows
for nontrivial polarization-state management over multiple beams. The absence
of any liquid component and a time response down to 0.2
milliseconds make our device appealing in the fields of polarimetry and optical
communications.
|
We construct Skyrme fields from holonomy of the spin connection of
multi-Taub-NUT instantons with the centres positioned along a line in
$\mathbb{R}^3.$ Our family of Skyrme fields includes the Taub-NUT Skyrme field
previously constructed by Dunajski. However, we demonstrate that different
gauges of the spin connection can result in Skyrme fields with different
topological degrees. As a by-product, we present a method to compute the
degrees of the Taub-NUT and Atiyah-Hitchin Skyrme fields analytically; these
degrees are well defined as a preferred gauge is fixed by the $SU(2)$ symmetry
of the two metrics.
Regardless of the gauge, the domain of our Skyrme fields is the space of
orbits of the axial symmetry of the multi-Taub-NUT instantons. We obtain an
expression for the induced Einstein-Weyl metric on the space and its associated
solution to the $SU(\infty)$-Toda equation.
|
Polchinski has argued that the prediction of Hawking radiation must be
independent of the details of unknown high-energy physics because the
calculation may be performed using `nice slices', for which the adiabatic
theorem may be used. If this is so, then any calculation using a manifestly
covariant --- and so slice-independent --- ultraviolet regularization must
reproduce the standard Hawking result. We investigate the dependence of the
Hawking radiation on such a short-distance regulator by calculating it using a
Pauli--Villars regularization scheme. We find that the regulator scale,
$\Lambda$, only contributes to the Hawking flux by an amount that is
exponentially small in the large variable $\Lambda/T_H \gg 1$, where
$T_H$ is the Hawking temperature, in agreement with Polchinski's arguments.
We also solve a technical puzzle concerning the relation between the
short-distance singularities of the propagator and the Hawking effect.
|
The effect of the clusterization on the effective properties of a composite
material reinforced by MXene or graphene platelets is studied using the finite
element method with periodic representative volume element (RVE). A hybrid
2D/3D finite element mesh is used to reduce the computational complexity of the
numerical model. Several realizations of an RVE were generated with increasing
volume fractions of inclusions, resulting in progressive clusterization of the
platelets. Numerically obtained effective properties of the composite are
compared with analytical predictions by the Mori-Tanaka method and Halpin-Tsai
equations, and the limits of the applicability of the analytical models are
established. A two-step homogenization scheme is proposed to increase the
accuracy and stability of the effective properties of an RVE with a relatively
small number of inclusions. Simple scaling relations are proposed to generalize
numerical results to platelets with other aspect ratios.
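For reference, the Halpin-Tsai estimate used in the comparison above is commonly written (standard form; the shape factor actually used in the study may differ) as
$$ \frac{E_c}{E_m}=\frac{1+\zeta\eta V_f}{1-\eta V_f},\qquad \eta=\frac{E_f/E_m-1}{E_f/E_m+\zeta}, $$
where $E_c$, $E_m$ and $E_f$ denote the composite, matrix and platelet moduli, $V_f$ is the inclusion volume fraction, and $\zeta$ is a geometric shape parameter that increases with the platelet aspect ratio.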
|
Forecasting pedestrians' future motions is essential for autonomous driving
systems to safely navigate in urban areas. However, existing prediction
algorithms often overly rely on past observed trajectories and tend to fail
around abrupt dynamic changes, such as when pedestrians suddenly start or stop
walking. We suggest that predicting these highly non-linear transitions should
be a core component of robust motion prediction algorithms. In this paper, we
introduce the new task of pedestrian stop and go forecasting. Because no
suitable dataset exists for this task, we release TRANS, a benchmark for
explicitly studying the stop and go behaviors of pedestrians in urban traffic.
We build it from several existing datasets annotated with pedestrians' walking
motions, so as to cover a variety of scenarios
and behaviors. We also propose a novel hybrid model that leverages
pedestrian-specific and scene features from several modalities, both video
sequences and high-level attributes, and gradually fuses them to integrate
multiple levels of context. We evaluate our model and several baselines on
TRANS, and set a new benchmark for the community to work on pedestrian stop and
go forecasting.
|
We present a generative framework for zero-shot action recognition where some
of the possible action classes do not occur in the training data. Our approach
is based on modeling each action class using a probability distribution whose
parameters are functions of the attribute vector representing that action
class. In particular, we assume that the distribution parameters for any action
class in the visual space can be expressed as a linear combination of a set of
basis vectors where the combination weights are given by the attributes of the
action class. These basis vectors can be learned solely using labeled data from
the known (i.e., previously seen) action classes, and can then be used to
predict the parameters of the probability distributions of unseen action
classes. We consider two settings: (1) Inductive setting, where we use only the
labeled examples of the seen action classes to predict the unseen action class
parameters; and (2) Transductive setting which further leverages unlabeled data
from the unseen action classes. Our framework also naturally extends to
few-shot action recognition where a few labeled examples from unseen classes
are available. Our experiments on benchmark datasets (UCF101, HMDB51 and
Olympic) show significant performance improvements as compared to various
baselines, in both standard zero-shot (disjoint seen and unseen classes) and
generalized zero-shot learning settings.
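As an illustration of the attribute-weighted basis construction described above, the following sketch (Python, synthetic data and hypothetical dimensions; not the authors' code) fits the basis by ridge-regularised least squares on the seen classes and predicts the distribution means of unseen classes from their attributes alone:

# Illustrative sketch: class-distribution means as attribute-weighted
# combinations of basis vectors, fit on seen classes only (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_attr, n_seen, n_unseen = 128, 16, 40, 10

A_seen = rng.random((n_seen, d_attr))            # attribute vectors of seen classes
Mu_seen = rng.standard_normal((n_seen, d_feat))  # per-class feature means from labeled data

# Learn a basis B (d_attr x d_feat) such that Mu_seen ~= A_seen @ B.
lam = 1e-2
B = np.linalg.solve(A_seen.T @ A_seen + lam * np.eye(d_attr), A_seen.T @ Mu_seen)

# Predict distribution means of unseen classes directly from their attributes.
A_unseen = rng.random((n_unseen, d_attr))
Mu_unseen = A_unseen @ B

# Inductive zero-shot prediction: assign a test feature to the nearest predicted mean.
x = rng.standard_normal(d_feat)
pred_class = int(np.argmin(np.linalg.norm(Mu_unseen - x, axis=1)))

In the transductive and few-shot settings, Mu_unseen would additionally be refined using unlabeled or few labeled examples from the unseen classes.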
|
Recently, there have been claims in the literature that the cosmological
constant problem can be dynamically solved by specific compactifications of
gravity from higher-dimensional toy models. These models have the novel feature
that in the four-dimensional theory, the cosmological constant $\Lambda$ is
much smaller than the Planck density and in fact accumulates at $\Lambda=0$.
Here we show that while these are very interesting models, they do not properly
address the real cosmological constant problem. As we explain, the real problem
is not simply to obtain $\Lambda$ that is small in Planck units in a toy model,
but to explain why $\Lambda$ is much smaller than other mass scales (and
combinations of scales) in the theory. Instead, in these toy models, all other
particle mass scales have been either removed or sent to zero, thus ignoring
the real problem. To this end, we provide a general argument that the included
moduli masses are generically of order Hubble, so sending them to zero
trivially sends the cosmological constant to zero. We also show that the
fundamental Planck mass is being sent to zero, and so the central problem is
trivially avoided by removing high energy physics altogether. On the other
hand, by including various large mass scales from particle physics with a high
fundamental Planck mass, one is faced with a real problem, whose only known
solution involves accidental cancellations in a landscape.
|
We propose two new approaches to the Tannakian Galois groups of holonomic
D-modules on abelian varieties. The first is an interpretation in terms of
principal bundles given by the Fourier-Mukai transform, which shows that they
are almost connected. The second constructs a microlocalization functor
relating characteristic cycles to Weyl group orbits of weights. This explains
the ubiquity of minuscule representations, and we illustrate it with a Torelli
theorem and with a bound for decompositions of a given subvariety as a sum of
subvarieties. The appendix sketches a twistor variant that may be useful for
D-modules not coming from Hodge theory.
|
Millions of battery-powered sensors deployed for monitoring purposes in a
multitude of scenarios, e.g., agriculture, smart cities, industry, etc.,
require energy-efficient solutions to prolong their lifetime. When these
sensors observe a phenomenon distributed in space and evolving in time, it is
expected that collected observations will be correlated in time and space. In
this paper, we propose a Deep Reinforcement Learning (DRL) based scheduling
mechanism capable of taking advantage of correlated information. We design our
solution using the Deep Deterministic Policy Gradient (DDPG) algorithm. The
proposed mechanism is capable of determining the frequency with which sensors
should transmit their updates, to ensure accurate collection of observations,
while simultaneously considering the energy available. To evaluate our
scheduling mechanism, we use multiple datasets containing environmental
observations obtained in multiple real deployments. The real observations
enable us to model the environment with which the mechanism interacts as
realistically as possible. We show that our solution can significantly extend
the sensors' lifetime. We compare our mechanism to an idealized, all-knowing
scheduler to demonstrate that its performance is near-optimal. Additionally, we
highlight the unique feature of our design, energy-awareness, by displaying the
impact of sensors' energy levels on the frequency of updates.
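A minimal, schematic DDPG-style update for such a scheduler is sketched below (PyTorch; hypothetical two-dimensional state of battery level and recent observation error, with the action interpreted as an update frequency in (0, 1]; replay buffer, target networks and exploration noise are omitted, and this is not the paper's implementation):

# Schematic DDPG-style update for a transmission-scheduling agent (illustrative only).
import torch
import torch.nn as nn

state_dim, action_dim = 2, 1  # assumed state: [battery_level, recent_obs_error]

actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                      nn.Linear(32, action_dim), nn.Sigmoid())    # update frequency in (0, 1)
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                       nn.Linear(32, 1))                           # Q(s, a)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next, gamma=0.99):
    """One DDPG update on a batch of transitions (no target networks in this sketch)."""
    # Critic: regress Q(s, a) towards r + gamma * Q(s', actor(s')).
    with torch.no_grad():
        q_target = r + gamma * critic(torch.cat([s_next, actor(s_next)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Example batch; the reward would trade observation accuracy against energy spent.
s, a = torch.rand(64, state_dim), torch.rand(64, action_dim)
r, s_next = torch.rand(64, 1), torch.rand(64, state_dim)
ddpg_step(s, a, r, s_next)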
|
Participants of this workshop pursue the old Neutrino Theory of Light
vigorously. Other physicists have long ago abandoned it, because it lacks gauge
invariance. In the recent Quantum Induction (QI), all basic Bose fields
${\mathcal B}^{P}$ are local limits of quantum fields composed of Dirac's
$\Psi$ (for leptons and quarks). The induced field equations of QI even
determine all the interactions of those ${\mathcal B}^{P}$. Thus a precise
gauge invariance and other physical consequences are unavoidable. They include
the absence of divergencies, the exclusion of Pauli terms, a prediction of the
Higgs mass and a `minimal' Quantum Gravity.
As we find in this paper, however, photons can't be bound states while
Maxwell's potential $A_{\mu}$ contains all basic Dirac fields except those of
neutrinos.
|
We provide some considerations on the excitation of black hole quasinormal
modes (QNMs) in different physical scenarios. Considering a simple model in
which a stream of particles accretes onto a black hole, we show that resonant
QNM excitation by hyperaccretion requires a significant amount of fine-tuning,
and is quite unlikely to occur in nature. Then we summarize and discuss present
estimates of black hole QNM excitation from gravitational collapse, distorted
black holes and head-on black hole collisions. We emphasize the areas that, in
our opinion, are in urgent need of further investigation from the point of view
of gravitational wave source modeling.
|
Modern neural trajectory predictors in autonomous driving are developed using
imitation learning (IL) from driving logs. Although IL benefits from its
ability to glean nuanced and multi-modal human driving behaviors from large
datasets, the resulting predictors often struggle with out-of-distribution
(OOD) scenarios and with traffic rule compliance. On the other hand, classical
rule-based predictors, by design, can predict traffic rule satisfying behaviors
while being robust to OOD scenarios, but these predictors fail to capture
nuances in agent-to-agent interactions and human driver's intent. In this
paper, we present RuleFuser, a posterior-net inspired evidential framework that
combines neural predictors with classical rule-based predictors to draw on the
complementary benefits of both, thereby striking a balance between performance
and traffic rule compliance. The efficacy of our approach is demonstrated on
the real-world nuPlan dataset where RuleFuser leverages the higher performance
of the neural predictor in in-distribution (ID) scenarios and the higher safety
offered by the rule-based predictor in OOD scenarios.
|
This study introduces a data-driven approach using machine learning (ML)
techniques to explore and predict albedo anomalies on the Moon's surface. The
research leverages diverse planetary datasets, including
high-spatial-resolution albedo maps and element maps (LPFe, LPK, LPTh, LPTi)
derived from laser and gamma-ray measurements. The primary objective is to
identify relationships between chemical elements and albedo, thereby expanding
our understanding of planetary surfaces and offering predictive capabilities
for areas with incomplete datasets. To bridge the gap in resolution between the
albedo and element maps, we employ Gaussian blurring techniques, including an
innovative adaptive Gaussian blur. Our methodology culminates in the deployment
of an Extreme Gradient Boosting Regression Model, optimized to predict full
albedo based on elemental composition. Furthermore, we present an interactive
analytical tool to visualize prediction errors, delineating their spatial and
chemical characteristics. The findings not only pave the way for a more
comprehensive understanding of the Moon's surface but also provide a framework
for similar studies on other celestial bodies.
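The pipeline can be sketched as follows (Python with assumed array shapes and untuned hyperparameters; synthetic stand-ins for the actual albedo and element maps): blur the high-resolution albedo map towards the element maps' resolution, fit a gradient-boosted regressor from elemental abundances to albedo, and inspect the residuals:

# Schematic albedo-prediction pipeline (synthetic data, assumed shapes; requires xgboost).
import numpy as np
from scipy.ndimage import gaussian_filter
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
H, W = 180, 360                                    # hypothetical map grid
albedo = rng.random((H, W))                        # high-resolution albedo map
elements = {name: rng.random((H, W)) for name in ["Fe", "K", "Th", "Ti"]}

albedo_blurred = gaussian_filter(albedo, sigma=3)  # bridge the resolution gap

X = np.stack([m.ravel() for m in elements.values()], axis=1)   # (pixels, n_elements)
y = albedo_blurred.ravel()

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X, y)
pred_albedo = model.predict(X).reshape(H, W)       # predicted albedo, usable for gap filling
residual = albedo_blurred - pred_albedo            # prediction-error map for visual analysis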
|
In this paper, bicomplex Pell and bicomplex Pell-Lucas numbers are defined.
Also, negabicomplex Pell and negabicomplex Pell-Lucas numbers are given. Some
algebraic properties of bicomplex Pell and bicomplex Pell-Lucas numbers, which
are connected with bicomplex numbers and with the Pell and Pell-Lucas numbers, are
investigated. Furthermore, d'Ocagne's identity, Binet's formula, Cassini's
identity and Catalan's identity for these numbers are given.
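For reference, the classical sequences underlying these constructions and their Binet forms are
$$ P_0=0,\quad P_1=1,\quad P_n=2P_{n-1}+P_{n-2};\qquad Q_0=Q_1=2,\quad Q_n=2Q_{n-1}+Q_{n-2}, $$
$$ P_n=\frac{\alpha^{n}-\beta^{n}}{\alpha-\beta},\qquad Q_n=\alpha^{n}+\beta^{n},\qquad \alpha=1+\sqrt{2},\quad \beta=1-\sqrt{2}; $$
the bicomplex identities quoted above follow the same pattern, with the bicomplex units attached to consecutive terms in the form specified in the paper.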
|
In this study, the capillary-driven flow of different pure liquids and diluted
bitumen samples was studied using a microfluidic channel (width of 30 um and
depth of 9 um). The capillary filling kinetics of the liquids as a function of time
were evaluated and compared with theoretical predictions. For pure liquids
including water, toluene, hexane, and methanol, the experimental results agreed
well with the theoretical predictions. However, for the bitumen samples, the
deviation between theory and experiment grew as the bitumen concentration
increased. The larger deviation at high concentrations (i.e., above 30%)
can be attributed to the difference between the dynamic contact angle and the
bulk contact angle. Microchannels are suitable experimental devices for studying
the flow of heavy oil and bitumen in porous structures such as those of reservoirs.
|
We consider the minimizers for the biharmonic nonlinear Schr\"odinger
functional $$ \mathcal{E}_a(u)=\int_{\mathbb{R}^d} |\Delta u(x)|^2 d x +
\int_{\mathbb{R}^d} V(x) |u(x)|^2 d x - a \int_{\mathbb{R}^d} |u(x)|^{q} d x $$
with the mass constraint $\int |u|^2=1$. We focus on the special power
$q=2(1+4/d)$, which makes the nonlinear term $\int |u|^q$ scale similarly to
the biharmonic term $\int |\Delta u|^2$. Our main results are the existence and
blow-up behavior of the minimizers when $a$ tends to a critical value $a^*$,
which is the optimal constant in a Gagliardo--Nirenberg interpolation
inequality.
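The criticality of this exponent follows from the mass-preserving rescaling
$$ u_\lambda(x)=\lambda^{d/2}u(\lambda x),\qquad \int|u_\lambda|^2=\int|u|^2,\qquad \int|\Delta u_\lambda|^2=\lambda^{4}\int|\Delta u|^2,\qquad \int|u_\lambda|^{q}=\lambda^{qd/2-d}\int|u|^{q}, $$
so the biharmonic and nonlinear terms scale identically precisely when $qd/2-d=4$, i.e. $q=2(1+4/d)$.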
|
Watching movies and TV shows with subtitles enabled is not simply down to
audibility or speech intelligibility. A variety of evolving factors related to
technological advances, cinema production and social behaviour challenge our
perception and understanding. This study seeks to formalise and give context to
these influential factors under a wider and novel term referred to as Dialogue
Understandability. We propose a working definition for Dialogue
Understandability being a listener's capacity to follow the story without undue
cognitive effort or concentration being required that impacts their Quality of
Experience (QoE). The paper identifies, describes and categorises the factors
that influence Dialogue Understandability mapping them over the QoE framework,
a media streaming lifecycle, and the stakeholders involved. We then explore
available measurement tools in the literature and link them to the factors they
could potentially be used for. The maturity and suitability of these tools are
evaluated over a set of pilot experiments. Finally, we reflect on the gaps that
still need to be filled, what we can and cannot measure, future subjective
experiments, and new research trends that could help us to fully characterise
Dialogue Understandability.
|
We present a new high-mass membership of the nearby Sco OB2 association based
on HIPPARCOS positions, proper motions and parallaxes and radial velocities
taken from the Kharchenko et al. (2007) catalogue. The Bayesian membership
selection method developed makes no distinction between subgroups of Sco OB2
and utilises linear models in the calculation of membership probabilities. We
select 436 members, 88 of which are new members not included in previous
membership selections. We include the classical non-members Alpha-Cru and
Beta-Cru as new members, as well as the pre-main-sequence stars HIP 79080 and
79081. We also show that the association is well mixed over distances of 8
degrees on the sky, and hence no determination can be made as to the formation
process of the entire association.
|
The Constrained Application Protocol (CoAP) is an HTTP-like protocol for
RESTful applications intended to run on constrained devices, typically part of
the Internet of Things. CoAP observe is an extension to the CoAP specification
that allows CoAP clients to observe a resource through a simple
publish/subscribe mechanism. In this paper we leverage Information-Centric
Networking (ICN), transparently deployed within the domain of a network
provider, to provide enhanced CoAP services. We present the design and the
implementation of CoAP observe over ICN and we discuss how ICN can provide
benefits to both network providers and CoAP applications, even though the
latter are not aware of the existence of ICN. In particular, the use of ICN
results in smaller state management and simpler implementation at CoAP
endpoints, and less communication overhead in the network.
|
We have tested a relative spectral lag (RSL) method suggested earlier as a
luminosity/redshift (or distance) estimator, using the generalized method by
Schaefer & Collazzi. We find that the values derived from the
luminosity/redshift-RSL (L/R-RSL) relation are comparable with the
corresponding observations. Applying the luminosity-RSL relation to two
different GRB samples, we find no violators of the generalized test, namely the
Nakar & Piran test and the Li test. We also find that about 36 per cent of
Schaefer's sample are outliers for the L/R-RSL relation at the 1$\sigma$
confidence level, but there are no violators at the 3$\sigma$ level within the
current precision of the L/R-RSL relation. An analysis of several potential
outliers for other luminosity relations shows that they match the L/R-RSL
relation well within an acceptable uncertainty. Taken together, these results
suggest that the relation could be a useful tool for cosmological studies.
|
The super generalized Broer-Kaup (gBK) hierarchy and its super Hamiltonian
structure are established based on a loop super Lie algebra and the super-trace
identity. The self-consistent sources, conservation laws, a novel symmetry
constraint and the binary nonlinearization of the super gBK hierarchy are then
derived. In addition, the integrals of motion required for
Liouville integrability are explicitly given.
|
We report electronic transport measurements on two-dimensional electron gases
in a Ga[Al]As heterostructure with an embedded layer of InAs self-assembled
quantum dots. At high InAs dot densities, pronounced Altshuler-Aronov-Spivak
magnetoresistance oscillations are observed, which indicate short-range
ordering of the potential landscape formed by the charged dots and the strain
fields. The presence of these oscillations coincides with the observation of a
metal-insulator transition, and a maximum in the electron mobility as a
function of the electron density. Within a model based on correlated disorder,
we establish a relation between these effects.
|
Endowed with higher levels of autonomy, robots are required to perform
increasingly complex manipulation tasks. Learning from demonstration is
emerging as a promising paradigm for transferring skills to robots. It allows
task constraints to be learned implicitly from observing the motion executed by
a human teacher, which can enable adaptive behavior. We present a novel
Gaussian-Process-based learning from demonstration approach. This probabilistic
representation allows generalization over multiple demonstrations, and encodes
variability along the different phases of the task. In this paper, we address
how Gaussian Processes can be used to effectively learn a policy from
trajectories in task space. We also present a method to efficiently adapt the
policy to fulfill new requirements, and to modulate the robot behavior as a
function of task variability. This approach is illustrated through a real-world
application using the TIAGo robot.
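A minimal Gaussian-process regression sketch in the spirit of this approach (Python/scikit-learn, one-dimensional output and synthetic time-indexed demonstrations; not the paper's implementation):

# Fit a GP over several demonstrations: the predictive mean acts as the policy,
# the predictive standard deviation encodes phase-dependent task variability.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size) for _ in range(5)]

X = np.tile(t, len(demos)).reshape(-1, 1)   # stacked time inputs
y = np.concatenate(demos)                   # stacked demonstrated positions

gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-2), normalize_y=True)
gp.fit(X, y)

t_query = np.linspace(0, 1, 200).reshape(-1, 1)
mean, std = gp.predict(t_query, return_std=True)  # policy and its variability over the task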
|
Mobile robots are increasingly populating homes, hospitals, shopping malls,
factory floors, and other human environments. Human society has social norms
that people mutually accept; obeying these norms is an essential signal that
someone is participating socially with respect to the rest of the population.
For robots to be socially compatible with humans, it is crucial for robots to
obey these social norms. In prior work, we demonstrated a Socially-Aware
Navigation (SAN) planner, based on Pareto Concavity Elimination Transformation
(PaCcET), in a hallway scenario, optimizing two objectives so that the robot
does not invade the personal space of people. This paper extends our PaCcET
based SAN planner to multiple scenarios with more than two objectives. We
modified the Robot Operating System's (ROS) navigation stack to include PaCcET
in the local planning task. We show that our approach can accommodate multiple
Human-Robot Interaction (HRI) scenarios. Using the proposed approach, we
achieved successful HRI in multiple scenarios like hallway interactions, an art
gallery, waiting in a queue, and interacting with a group. We implemented our
method on a simulated PR2 robot in a 2D simulator (Stage) and a pioneer-3DX
mobile robot in the real world to validate all the scenarios. A comprehensive
set of experiments shows that our approach can handle multiple interaction
scenarios on both holonomic and non-holonomic robots; hence, it can be a viable
option for a Unified Socially-Aware Navigation (USAN).
|
Forward single $\pi^0$ production by coherent neutral-current interactions,
$\nu \mathcal{A} \to \nu \mathcal{A} \pi^0$, is investigated using a 2.8$\times
10^{20}$ protons-on-target exposure of the MINOS Near Detector. For
single-shower topologies, the event distribution in production angle exhibits a
clear excess above the estimated background at very forward angles for visible
energy in the range~1-8 GeV. Cross sections are obtained for the detector
medium comprised of 80% iron and 20% carbon nuclei with $\langle \mathcal{A}
\rangle = 48$, the highest-$\langle \mathcal{A} \rangle$ target used to date in
the study of this coherent reaction. The total cross section for coherent
neutral-current single-$\pi^0$ production initiated by the $\nu_\mu$ flux of
the NuMI low-energy beam with mean (mode) $E_{\nu}$ of 4.9 GeV (3.0 GeV), is
$77.6\pm5.0\,(\text{stat})
^{+15.0}_{-16.8}\,(\text{syst})\times10^{-40}\,\text{cm}^2~\text{per nucleus}$.
The results are in good agreement with predictions of the Berger-Sehgal model.
|
In the current deep learning paradigm, the amount and quality of training
data are as critical as the network architecture and its training details.
However, collecting, processing, and annotating real data at scale is
difficult, expensive, and time-consuming, particularly for tasks such as 3D
object registration. While synthetic datasets can be created, they require
expertise to design and include a limited number of categories. In this paper,
we introduce a new approach called AutoSynth, which automatically generates 3D
training data for point cloud registration. Specifically, AutoSynth
automatically curates an optimal dataset by exploring a search space
encompassing millions of potential datasets with diverse 3D shapes at a low
cost. To achieve this, we generate synthetic 3D datasets by assembling shape
primitives, and develop a meta-learning strategy to search for the best
training data for 3D registration on real point clouds. For this search to
remain tractable, we replace the point cloud registration network with a much
smaller surrogate network, leading to a $4056.43$ times speedup. We demonstrate
the generality of our approach by implementing it with two different point
cloud registration networks, BPNet and IDAM. Our results on TUD-L, LINEMOD and
Occluded-LINEMOD show that a neural network trained on our searched dataset
yields consistently better performance than the same one trained on the widely
used ModelNet40 dataset.
|
We study the prospects for charged Higgs boson searches in the $W \gamma$
decay channel. This loop-induced decay channel can be important if the charged
Higgs is fermiophobic, particularly when its mass is below the $WZ$ threshold.
We identify useful kinematic observables and evaluate the future Large Hadron
Collider sensitivity to this channel using the custodial-fiveplet charged Higgs
in the Georgi-Machacek model as a fermiophobic benchmark. We show that the LHC
with 300~fb$^{-1}$ of data at 14~TeV will be able to exclude charged Higgs
masses below about 130~GeV for almost any value of the SU(2)$_L$-triplet vacuum
expectation value in the model, and masses up to 200~GeV and beyond when the
triplet vacuum expectation value is very small. We describe the signal
simulation tools created for this analysis, which have been made publicly
available.
|
"Co-Frobenius" coalgebras were introduced as dualizations of Frobenius
algebras. Recently, it was shown in \cite{I} that they admit left-right
symmetric characterizations analogous to those of Frobenius algebras: a
coalgebra $C$ is co-Frobenius if and only if it is isomorphic to its rational
dual. We consider the more general quasi-co-Frobenius (QcF) coalgebras; in the
first main result we show that these also admit symmetric characterizations: a
coalgebra is QcF if it is weakly isomorphic to its (left, or equivalently
right) rational dual $Rat(C^*)$, in the sense that certain coproduct or product
powers of these objects are isomorphic. These show that QcF coalgebras can be
viewed as generalizations of both co-Frobenius coalgebras and Frobenius
algebras. Surprisingly, these turn out to have many applications to fundamental
results of Hopf algebras. The equivalent characterizations of Hopf algebras
with left (or right) nonzero integrals as left (or right) co-Frobenius, or QcF,
or semiperfect or with nonzero rational dual all follow immediately from these
results. Also, the celebrated uniqueness of integrals follows at the same time
as just another equivalent statement. Moreover, as a by-product of our methods,
we observe a short proof for the bijectivity of the antipode of a Hopf algebra
with nonzero integral. This gives a purely representation theoretic approach to
many of the basic fundamental results in the theory of Hopf algebras.
|
Examples of knots and links distinguished by the total rank of their Khovanov
homology but sharing the same two-fold branched cover are given. As a result,
Khovanov homology does not yield an invariant of two-fold branched covers.
|
Reaching for a better understanding of turbulence, a line of investigation
was followed, its main presupposition being that each scale dependent state, in
a general renormalization flow, is a state that can be modeled using a class of
ninth degree polynomials. These polynomials are deduced from the Weierstrass
models of a certain kind of elliptic curves. As the consequences of this
presupposition unfolded, leading to the numerical study of a few samples of
elliptic curves, the L functions associated with the latter were considered.
Their bifurcation diagrams were observed and their escape rates were
determined. The consistency of such an approach was put to a statistical test,
measuring the rank correlation between escape rates and values taken by these L
functions at the point z=1+0i. In the most significant case, the rank
correlation coefficient found, r_s, was about r_s=-0.78, with an associated
p-value of an order of magnitude close to the (-69) power of 10.
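The reported statistical test amounts to a Spearman rank correlation; a sketch with synthetic placeholder numbers (not the study's data) is:

# Spearman rank correlation between escape rates and L-function values at z = 1 + 0i.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
escape_rates = rng.random(20)                                         # placeholder values
L_values = 1.0 - 0.8 * escape_rates + 0.1 * rng.standard_normal(20)   # toy anticorrelation

r_s, p_value = spearmanr(escape_rates, L_values)
print(f"r_s = {r_s:.2f}, p = {p_value:.1e}")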
|
Spintronics devices rely on spin-dependent transport behavior evoked by the
presence of spin-polarized electrons. Transport through nanostructures, on the
other hand, is dominated by strong Coulomb interaction. We study a model system
in the intersection of both fields, a quantum dot attached to ferromagnetic
leads. The combination of spin-polarization in the leads and strong Coulomb
interaction in the quantum dot gives rise to an exchange field acting on
electron spins in the dot. Depending on the parameter regime, this exchange
field is visible in the transport either via a precession of an accumulated dot
spin or via an induced level splitting. We review the situation for various
transport regimes, and discuss two of them in more detail.
|
We describe a stochastic, dynamical system capable of inference and learning
in a probabilistic latent variable model. The most challenging problem in such
models - sampling the posterior distribution over latent variables - is
proposed to be solved by harnessing natural sources of stochasticity inherent
in electronic and neural systems. We demonstrate this idea for a sparse coding
model by deriving a continuous-time equation for inferring its latent variables
via Langevin dynamics. The model parameters are learned by simultaneously
evolving according to another continuous-time equation, thus bypassing the need
for digital accumulators or a global clock. Moreover, we show that Langevin
dynamics lead to an efficient procedure for sampling from the posterior
distribution in the 'L0 sparse' regime, where latent variables are encouraged
to be set to zero as opposed to having a small L1 norm. This allows the model
to properly incorporate the notion of sparsity rather than having to resort to
a relaxed version of sparsity to make optimization tractable. Simulations of
the proposed dynamical system on both synthetic and natural image datasets
demonstrate that the model is capable of probabilistically correct inference,
enabling learning of the dictionary as well as parameters of the prior.
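A toy, discretized version of the inference step is sketched below (Python; unadjusted Langevin dynamics on the coefficients of a sparse coding model, with a smooth sparsity-inducing prior rather than the paper's L0 scheme, and synthetic data):

# Langevin sampling of sparse-coding coefficients s for one data vector x ~ Phi @ s.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_dict = 64, 128
Phi = rng.standard_normal((n_pix, n_dict)) / np.sqrt(n_pix)           # dictionary
s_true = rng.standard_normal(n_dict) * (rng.random(n_dict) < 0.1)     # sparse ground truth
x = Phi @ s_true

sigma2, eps = 0.1, 1e-3
s = np.zeros(n_dict)
for _ in range(2000):
    grad_loglik = Phi.T @ (x - Phi @ s) / sigma2   # gradient of log p(x | s)
    grad_logprior = -2.0 * s / (1.0 + s ** 2)      # smooth, sparsity-inducing log-prior gradient
    s += eps * (grad_loglik + grad_logprior) + np.sqrt(2 * eps) * rng.standard_normal(n_dict)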
|
We construct the family of bilinear forms gG on R3+1 for which Galilean
boosts and spatial rotations are isometries. The key feature of these bilinear
forms is that they are parametrized by a Galilean invariant vector whose
physical interpretation is rather unclear. At the end of the paper, we
construct the Poisson bracket associated with the (nondegenerate) antisymmetric
part of gG.
|
In recent years, multiple noninvasive imaging modalities have been used to
develop a better understanding of the human brain functionality, including
positron emission tomography, single-photon emission computed tomography, and
functional magnetic resonance imaging, all of which provide brain images with
millimeter spatial resolutions. Despite their good spatial resolution, the
temporal resolution of these methods is poor, on the order of seconds.
Electroencephalography (EEG) is a popular non-invasive electrophysiological
technique with relatively high temporal resolution, used to measure the
electric potential of brain neural activity. Scalp EEG recordings can be used
to solve the inverse problem of localizing the dominant sources of brain
activity. In this paper, EEG source localization research
is clustered as follows: solving the inverse problem by statistical method
(37.5%), diagnosis of brain abnormalities using common EEG source localization
methods (18.33%), improving EEG source localization methods by non-statistical
strategies (3.33%), investigating the effect of the head model on EEG source
imaging results (12.5%), detection of epileptic seizures by brain activity
localization based on EEG signals (20%), diagnosis and treatment of ADHD
abnormalities (8.33%). Among the available methods, the minimum norm solution
has been shown to be very promising for sources at different depths. This
review investigates diseases that are diagnosed using EEG source localization
techniques. We provide evidence that the effects of psychiatric drugs on the
activity of brain sources have not been sufficiently investigated, which
motivates future research using EEG source localization methods.
|
Recently, the experimental measurements of the branching ratios and different
polarization asymmetries for the processes occurring through
flavor-changing-charged current $b\rightarrow c\tau\overline{\nu}_{\tau}$
transitions by BABAR, Belle, and LHCb show some sparkling differences with the
corresponding SM predictions. Assuming the left handed neutrinos, we add the
dimension-six vector, (pseudo-)scalar, and tensor operators with complex WCs to
the SM WEH. Together with 60%, 30% and 10% constraints coming from the
branching ratio of $B_{c}\to\tau\bar{\nu}_{\tau}$, we analyze the parametric
space of these new physics WCs accommodating the current anomalies in the
purview of the most recent HFLAV data of $R_{\tau/{\mu,e}}\left(D\right)$,
$R_{\tau/{\mu,e}}\left(D^*\right)$ and Belle data of $F_{L}\left(D^*\right)$
and $P_{\tau}\left(D^*\right)$. Furthermore, we derive the sum rules which
correlate these observables with $R_{\tau/{\mu,e}}\left(D\right)$ and
$R_{\tau/{\mu,e}}\left(D^*\right)$. Using the best-fit points of the new
complex WCs along with the latest measurements of
$R_{\tau/{\mu,e}}\left(D^{(*)}\right)$, we predict the numerical values of the
observable $R_{\tau/\ell}\left(\Lambda_c\right)$,
$R_{\tau/\mu}\left(J/\psi\right)$ and $R_{\tau/\ell}\left(X_c\right)$ from the
sum rules. Apart from finding the correlation matrix among the observables
under consideration, we plot them graphically, which is useful for
discriminating between different NP scenarios. Finally, we study the impact of
these NP couplings on various angular and CP triple-product asymmetries that
could be measured in ongoing and future experiments. Precise measurements of
these observables are important for testing the SM and extracting possible NP.
|
How ground states of quantum matter transform between one another reveals
deep insights into the mechanisms stabilizing them. Correspondingly, quantum
phase transitions are explored in numerous materials classes, with heavy
fermion compounds being among the most prominent ones. Recent studies in an
anisotropic heavy fermion compound have shown that different types of
transitions are induced by variations of chemical or external pressure [1-3],
raising the question of the extent to which heavy fermion quantum criticality
is universal. To make progress, it is essential to broaden both the materials
basis and the microscopic parameter variety. Here, we identify a cubic heavy
fermion material as exhibiting a field-induced quantum phase transition, and
show how the material can be used to explore one extreme of the dimensionality
axis. The transition between two different ordered phases is accompanied by an
abrupt change of Fermi surface, reminiscent of what happens across the
field-induced antiferromagnetic to paramagnetic transition in the anisotropic
YbRh2Si2. This finding leads to a materials-based global phase diagram -- a
precondition for a unified theoretical description.
|
The demand for e-hailing services is growing rapidly, especially in large
cities. Uber is the first and most popular e-hailing company in the United
States and in New York City. A comparison of the demand for yellow cabs and Uber in NYC
in 2014 and 2015 shows that the demand for Uber has increased. However, this
demand may not be distributed uniformly either spatially or temporally. Using
spatio-temporal time series models can help us to better understand the demand
for e-hailing services and to predict it more accurately. This paper analyzes
the prediction performance of one temporal model (vector autoregressive (VAR))
and two spatio-temporal models (Spatial-temporal autoregressive (STAR); least
absolute shrinkage and selection operator applied to STAR (LASSO-STAR)), for
different scenarios (based on the number of time and space lags), applied to
both rush-hour and non-rush-hour periods. The results show the need to consider
spatial models for taxi demand.
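As a sketch of the temporal baseline, a VAR can be fit with statsmodels on synthetic hourly demand for a few hypothetical zones (the STAR and LASSO-STAR variants additionally include spatially lagged terms):

# Fit a VAR on multi-zone demand counts and forecast the next few hours (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
hours = pd.date_range("2015-01-01", periods=500, freq="H")
demand = pd.DataFrame(rng.poisson(20, size=(500, 3)).astype(float),
                      index=hours, columns=["zone_1", "zone_2", "zone_3"])

model = VAR(demand)
results = model.fit(maxlags=24)                                       # up to a day of time lags
forecast = results.forecast(demand.values[-results.k_ar:], steps=6)   # next 6 hours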
|
With the growth of location-based services, indoor localization is attracting
great interest as an enabler of ubiquitous environments.
Specifically, device free localization using wireless signals is receiving
increased attention, since human location is estimated from its impact on the
surrounding wireless signals without any active device being carried by the
subject. In this paper, we propose MuDLoc, the first multi-view discriminant learning
approach for device free indoor localization using both amplitude and phase
features of Channel State Information (CSI) from multiple APs. Multi-view
learning is an emerging machine learning technique that improves performance
by exploiting the diversity of data from different views. In MuDLoc, the localization is
modeled as a pattern matching problem, where the target location is predicted
based on similarity measure of CSI features of an unknown location with those
of the training locations. MuDLoc implements Generalized Inter-view and
Intra-view Discriminant Correlation Analysis (GI$^{2}$DCA), a discriminative
feature extraction approach using multi-view CSIs. It incorporates inter-view
and intra-view class associations while maximizing pairwise correlations across
multi-view data sets. A similarity measure is then used to find the best match
and localize the subject. Experimental results from two cluttered environments
show that MuDLoc estimates location with high accuracy, outperforming other
benchmark approaches.
|