title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa | This paper presents a generic Bayesian framework that enables any deep
learning model to actively learn from targeted crowds. Our framework inherits
from recent advances in Bayesian deep learning, and extends existing work by
considering the targeted crowdsourcing approach, where multiple annotators with
unknown expertise contribute an uncontrolled amount (often limited) of
annotations. Our framework leverages the low-rank structure in annotations to
learn individual annotator expertise, which then helps to infer the true labels
from noisy and sparse annotations. It provides a unified Bayesian model to
simultaneously infer the true labels and train the deep learning model in order
to reach an optimal learning efficacy. Finally, our framework exploits the
uncertainty of the deep learning model during prediction as well as the
annotators' estimated expertise to minimize the number of required annotations
and annotators for optimally training the deep learning model.
We evaluate the effectiveness of our framework for intent classification in
Alexa (Amazon's personal assistant), using both synthetic and real-world
datasets. Experiments show that our framework can accurately learn annotator
expertise, infer true labels, and effectively reduce the amount of annotations
in model training as compared to state-of-the-art approaches. We further
discuss the potential of our proposed framework in bridging machine learning
and crowdsourcing towards improved human-in-the-loop systems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Using Convolutional Neural Networks to Count Palm Trees in Satellite Images | In this paper we propose a supervised learning system for counting and
localizing palm trees in high-resolution, panchromatic satellite imagery
(40cm/pixel to 1.5m/pixel). A convolutional neural network classifier trained
on a set of palm and no-palm images is applied across a satellite image scene
in a sliding window fashion. The resultant confidence map is smoothed with a
uniform filter. Non-maximal suppression is then applied to the smoothed
confidence map to obtain peaks. Trained with a small dataset of 500 images of
size 40x40 cropped from satellite images, the system manages to achieve a tree
count accuracy of over 99%.
| 1 | 0 | 0 | 0 | 0 | 0 |
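As a concrete illustration of the pipeline described in the abstract above (sliding-window classification, uniform-filter smoothing, non-maximal suppression), here is a minimal sketch; the `classifier` callable, the stride, and the smoothing/threshold parameters are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of the detection pipeline: slide a binary palm/no-palm
# classifier over the scene, smooth the confidence map, take local maxima.
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def count_palms(scene, classifier, win=40, stride=8, smooth=5, thresh=0.5):
    h, w = scene.shape[:2]
    conf = np.zeros(((h - win) // stride + 1, (w - win) // stride + 1))
    for i in range(conf.shape[0]):
        for j in range(conf.shape[1]):
            patch = scene[i*stride:i*stride+win, j*stride:j*stride+win]
            conf[i, j] = classifier(patch)      # P(palm) for this window
    conf = uniform_filter(conf, size=smooth)    # smooth the confidence map
    # non-maximal suppression: keep only local maxima above a threshold
    peaks = (conf == maximum_filter(conf, size=3)) & (conf > thresh)
    return int(peaks.sum())                     # one peak per detected tree
```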
The sharp for the Chang model is small | Woodin has shown that if there is a measurable Woodin cardinal then there is,
in an appropriate sense, a sharp for the Chang model. We produce, in a weaker
sense, a sharp for the Chang model using only the existence of a cardinal
$\kappa$ having an extender of length $\kappa^{+\omega_1}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Weak quadrupole moments | Collective effects in deformed atomic nuclei present possible avenues of
study on the non-spherical distribution of neutrons and the violation of the
local Lorentz invariance. We introduce the weak quadrupole moment of nuclei,
related to the quadrupole distribution of the weak charge in the nucleus. The
weak quadrupole moment produces tensor weak interaction between the nucleus and
electrons and can be observed in atomic and molecular experiments measuring
parity nonconservation. The dominant contribution to the weak quadrupole moment
is given by the quadrupole moment of the neutron distribution; therefore,
corresponding experiments should allow one to measure the neutron quadrupoles.
Using the deformed oscillator model and the Schmidt model, we calculate the
quadrupole distributions of neutrons, $Q_{n}$, the weak quadrupole moments,
$Q_{W}^{(2)}$, and the Lorentz-invariance-violating energy shifts in
$^{9}$Be, $^{21}$Ne, $^{27}$Al, $^{131}$Xe, $^{133}$Cs, $^{151}$Eu,
$^{153}$Eu, $^{163}$Dy, $^{167}$Er, $^{173}$Yb, $^{177}$Hf, $^{179}$Hf,
$^{181}$Ta, $^{201}$Hg and $^{229}$Th.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast and Accurate Semantic Mapping through Geometric-based Incremental Segmentation | We propose an efficient and scalable method for incrementally building a
dense, semantically annotated 3D map in real-time. The proposed method assigns
class probabilities to each region, not each element (e.g., surfel and voxel),
of the 3D map which is built up through a robust SLAM framework and
incrementally segmented with a geometric-based segmentation method. Unlike
other approaches, our method is capable of running at over 30 Hz
while performing all processing components, including SLAM, segmentation, 2D
recognition, and updating class probabilities of each segmentation label at
every incoming frame, thanks to the high efficiency that characterizes the
computationally intensive stages of our framework. By utilizing a specifically
designed CNN to improve the frame-wise segmentation result, we can also achieve
high accuracy. We validate our method on the NYUv2 dataset by comparing with
the state of the art in terms of accuracy and computational efficiency, and by
means of an analysis in terms of time and space complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Pumping Lemma for Higher-order Languages | We study a pumping lemma for the word/tree languages generated by
higher-order grammars. Pumping lemmas are known up to order-2 word languages
(i.e., for regular/context-free/indexed languages), and have been used to show
that a given language does not belong to the classes of
regular/context-free/indexed languages. We prove a pumping lemma for word/tree
languages of arbitrary orders, modulo a conjecture that a higher-order version
of Kruskal's tree theorem holds. We also show that the conjecture indeed holds
for the order-2 case, which yields a pumping lemma for order-2 tree languages
and order-3 word languages.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generative Bridging Network in Neural Sequence Prediction | In order to alleviate data sparsity and overfitting problems in maximum
likelihood estimation (MLE) for sequence prediction tasks, we propose the
Generative Bridging Network (GBN), in which a novel bridge module is introduced
to assist the training of the sequence prediction model (the generator
network). Unlike MLE directly maximizing the conditional likelihood, the bridge
extends the point-wise ground truth to a bridge distribution conditioned on it,
and the generator is optimized to minimize their KL-divergence. Three different
GBNs, namely uniform GBN, language-model GBN and coaching GBN, are proposed to
penalize confidence, enhance language smoothness, and relieve the learning burden, respectively.
Experiments conducted on two recognized sequence prediction tasks (machine
translation and abstractive text summarization) show that our proposed GBNs can
yield significant improvements over strong baselines. Furthermore, by analyzing
samples drawn from different bridges, expected influences on the generator are
verified.
| 1 | 0 | 0 | 1 | 0 | 0 |
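The KL-minimization idea in the abstract above can be made concrete with a short sketch: sampling targets from the bridge distribution and minimizing the generator's negative log-likelihood on them minimizes KL(bridge || generator) up to a constant (the bridge's entropy). `bridge_sample` and `generator_nll` are hypothetical interfaces assumed here, not the paper's API.

```python
# Hedged sketch of one GBN training step: instead of maximizing likelihood
# of the single ground truth y, train the generator on targets sampled from
# a bridge distribution conditioned on y.
def gbn_step(x, y, bridge_sample, generator_nll, n_samples=4):
    loss = 0.0
    for _ in range(n_samples):
        y_prime = bridge_sample(y)          # e.g. uniform/LM/coaching bridge
        loss += generator_nll(x, y_prime)   # -log G(y'|x)
    return loss / n_samples                 # Monte Carlo estimate of KL term
```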
A Rule-Based Computational Model of Cognitive Arithmetic | Cognitive arithmetic studies the mental processes used in solving math
problems. This area of research explores the retrieval mechanisms and
strategies used by people during a common cognitive task. Past research has
shown that human performance in arithmetic operations is correlated to the
numerical size of the problem. Past research on cognitive arithmetic has
pinpointed this trend to either retrieval strength, error checking, or
strategy-based approaches when solving equations. This paper describes a
rule-based computational model that performs the four major arithmetic
operations (addition, subtraction, multiplication and division) on two
operands. We then evaluated our model to probe its validity in representing the
prevailing concepts observed in psychology experiments from the related works.
The experiments specifically explore the problem size effect, an
activation-based model for fact retrieval, backup strategies when retrieval
fails, and finally optimization strategies when faced with large operands. From
our experimental results, we concluded that our model's response times were
comparable to results observed when people performed similar tasks during
psychology experiments. We discuss how well our model reproduces these results
and how accuracy could be incorporated into it.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modular categories are not determined by their modular data | Arbitrarily many pairwise inequivalent modular categories can share the same
modular data. We exhibit a family of examples that are module categories over
twisted Drinfeld doubles of finite groups, and thus in particular integral
modular categories.
| 0 | 0 | 1 | 0 | 0 | 0 |
Change Detection in a Dynamic Stream of Attributed Networks | While anomaly detection in static networks has been extensively studied, only
recently have researchers focused on dynamic networks. This trend is mainly
due to the capacity of dynamic networks in representing complex physical,
biological, cyber, and social systems. This paper proposes a new methodology
for modeling and monitoring of dynamic attributed networks for quick detection
of temporal changes in network structures. In this methodology, the generalized
linear model (GLM) is used to model static attributed networks. This model is
then combined with a state transition equation to capture the dynamic behavior
of the system. An extended Kalman filter (EKF) is used as an online, recursive
inference procedure to predict and update network parameters over time. In
order to detect changes in the underlying mechanism of edge formation,
prediction residuals are monitored through an Exponentially Weighted Moving
Average (EWMA) control chart. The proposed modeling and monitoring procedure is
examined through simulations for attributed binary and weighted networks. The
email communication data from the Enron corporation is used as a case study to
show how the method can be applied in real-world problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
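A minimal sketch of the final monitoring stage described above, an EWMA control chart over prediction residuals; the smoothing weight `lam` and limit width `L` are conventional defaults assumed here, not values from the paper.

```python
# EWMA control chart over residuals: flag time steps where the exponentially
# weighted moving average leaves its (time-varying) control limits.
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0):
    residuals = np.asarray(residuals, dtype=float)
    mu, sigma = residuals.mean(), residuals.std()
    z, alarms = mu, []
    for t, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z
        # standard deviation of the EWMA statistic at time t+1
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z - mu) > width:
            alarms.append(t)                # potential change point
    return alarms
```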
Local Differential Privacy for Physical Sensor Data and Sparse Recovery | In this work we explore the utility of locally differentially private thermal
sensor data. We design a locally differentially private recovery algorithm for
the 1-dimensional, discrete heat source location problem and analyse its
performance in terms of the Earth Mover Distance error. Our work indicates that
it is possible to produce locally private sensor measurements that both keep
the exact locations of the heat sources private and permit recovery of the
"general geographic vicinity" of the sources. We also discuss the relationship
between the property of an inverse problem being ill-conditioned and the amount
of noise needed to maintain privacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Kernel Feature Selection via Conditional Covariance Minimization | We propose a method for feature selection that employs kernel-based measures
of independence to find a subset of covariates that is maximally predictive of
the response. Building on past work in kernel dimension reduction, we show how
to perform feature selection via a constrained optimization problem involving
the trace of the conditional covariance operator. We prove various consistency
results for this procedure, and also demonstrate that our method compares
favorably with other state-of-the-art algorithms on a variety of synthetic and
real data sets.
| 1 | 0 | 0 | 1 | 0 | 0 |
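In standard kernel-dimension-reduction notation (assumed here, since the abstract does not spell it out), the objective described above can be written as minimizing the trace of a regularized conditional covariance operator over feature subsets $T$ of size at most $m$:

```latex
\min_{T \subseteq \{1,\dots,d\},\ |T| \le m}
  \operatorname{Tr}\bigl(\Sigma_{YY \mid X_T}\bigr),
\qquad
\Sigma_{YY \mid X_T}
  = \Sigma_{YY} - \Sigma_{Y X_T}\bigl(\Sigma_{X_T X_T} + \varepsilon I\bigr)^{-1}\Sigma_{X_T Y},
```

where $\varepsilon > 0$ is a regularizer; in practice the discrete subset choice is typically relaxed to continuous feature weights, which yields the constrained optimization problem mentioned in the abstract.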
2s exciton-polariton revealed in an external magnetic field | We demonstrate the existence of the excited state of an exciton-polariton in
a semiconductor microcavity. The strong coupling of the quantum well heavy-hole
exciton in an excited 2s state to the cavity photon is observed in a non-zero
magnetic field, due to a surprisingly fast increase of the Rabi energy of the
2s exciton-polariton with magnetic field. This effect is explained by a strong
modification of the wave-function of the relative electron-hole motion for the
2s exciton state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Weight hierarchy of a class of linear codes relating to non-degenerate quadratic forms | In this paper, we discuss the generalized Hamming weights of a class of
linear codes associated with non-degenerate quadratic forms. In order to do so,
we study quadratic forms over subspaces of a finite field and obtain some
interesting results about subspaces and their dual spaces. On this basis, we
determine all the generalized Hamming weights of these linear codes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cosmological discordances II: Hubble constant, Planck and large-scale-structure data sets | We examine systematically the (in)consistency between cosmological
constraints as obtained from various current data sets of the expansion
history, Large Scale Structure (LSS), and Cosmic Microwave Background (CMB)
from Planck. We run (dis)concordance tests within each set and across the sets
using a recently introduced index of inconsistency (IOI) capable of dissecting
inconsistencies between two or more data sets. First, we compare the
constraints on $H_0$ from five different methods and find that the IOI drops
from 2.85 to 0.88 (on Jeffreys' scale) when the local $H_0$ measurement is
removed. This seems to indicate that the local measurement is an outlier, thus
favoring a systematics-based explanation. We find a moderate inconsistency
(IOI=2.61) between Planck temperature and polarization. We find that current
LSS data sets including WiggleZ, SDSS RSD, CFHTLenS, CMB lensing and SZ cluster
count, are consistent with one another and when all combined. However, we find
a persistent moderate inconsistency between Planck and individual or combined
LSS probes. For Planck TT+lowTEB versus individual LSS probes, the IOI spans
the range 2.92--3.72 and increases to 3.44--4.20 when the polarization data is
added. The joint LSS versus the combined Planck temperature and polarization
has an IOI of 2.83 in the most conservative case. But if Planck lowTEB is added
to the joint LSS to constrain $\tau$ and break degeneracies, the inconsistency
between Planck and joint LSS data increases to the high-end of the moderate
range with IOI=4.81. Whether due to systematic effects in the data or to the
underlying model, these inconsistencies need to be resolved. Finally, we
perform forecast calculations using LSST and find that the discordance between
Planck and future LSS data, if it persists at its present level, can rise to a high
IOI of 17, thus falling in the strong range of inconsistency. (Abridged).
| 0 | 1 | 0 | 0 | 0 | 0 |
The perfect spin injection in silicene FS/NS junction | We theoretically investigate spin injection from a ferromagnetic silicene
into a normal silicene (FS/NS) junction, where the magnetization in the FS is
assumed to arise from the magnetic proximity effect. Based on a silicene
lattice model, we demonstrate that pure spin injection can be obtained by
tuning the Fermi energy of the two spin species, such that one lies in the
spin-orbit coupling gap and the other outside the gap. Moreover, the valley polarity of the spin species
can be controlled by a perpendicular electric field in the FS region. Our
findings may shed light on making silicene-based spin and valley devices in the
spintronics and valleytronics fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
Distance-based Protein Folding Powered by Deep Learning | Contact-assisted protein folding has made very good progress, but two
challenges remain. One is accurate contact prediction for proteins lacking
many sequence homologs; the other is that time-consuming folding simulation is
often needed to predict good 3D models from predicted contacts. We show that
the protein distance matrix can be predicted well by deep learning and then
directly used to construct 3D models without folding simulation at all. Using
distance geometry to construct 3D models from our predicted distance matrices,
we successfully folded 21 of the 37 CASP12 hard targets with a median family
size of 58 effective sequence homologs within 4 hours on a Linux computer of 20
CPUs. In contrast, contacts predicted by direct coupling analysis (DCA) cannot
fold any of them in the absence of folding simulation and the best CASP12 group
folded 11 of them by integrating predicted contacts into complex,
fragment-based folding simulation. The rigorous experimental validation on 15
CASP13 targets shows that among the 3 hardest targets of new fold our
distance-based folding servers successfully folded 2 large ones with <150
sequence homologs while the other servers failed on all three, and that our ab
initio folding server also predicted the best, high-quality 3D model for a
large homology modeling target. Further experimental validation in CAMEO shows
that our ab initio folding server predicted correct fold for a membrane protein
of new fold with 200 residues and 229 sequence homologs while all the other
servers failed. These results imply that deep learning offers an efficient and
accurate solution for ab initio folding on a personal computer.
| 0 | 0 | 0 | 0 | 1 | 0 |
Double Threshold Digraphs | A semiorder is a model of preference relations where each element $x$ is
associated with a utility value $\alpha(x)$, and there is a threshold $t$ such
that $y$ is preferred to $x$ iff $\alpha(y) > \alpha(x)+t$. These are motivated
by the notion that there is some uncertainty in the utility values we assign to an
object or that a subject may be unable to distinguish a preference between
objects whose values are close. However, they fail to model the well-known
phenomenon that preferences are not always transitive. Also, if we are
uncertain of the utility values, it is not logical that preference is
determined absolutely by a comparison of them with an exact threshold. We
propose a new model in which there are two thresholds, $t_1$ and $t_2$; if the
difference $\alpha(y) - \alpha(x)$ is less than $t_1$, then $y$ is not preferred
to $x$; if the difference is greater than $t_2$ then $y$ is preferred to $x$;
if it is between $t_1$ and $t_2$, then $y$ may or may not be preferred to
$x$. We call such a relation a double-threshold semiorder, and the
corresponding directed graph $G = (V,E)$ a double threshold digraph. Every
directed acyclic graph is a double threshold digraph; bounds on $t_2/t_1$ give a
nested hierarchy of subclasses of the directed acyclic graphs. In this paper we
characterize the subclasses in terms of forbidden subgraphs, and give
algorithms for finding an assignment of utility values that explains the
relation in terms of a given $(t_1,t_2)$ or else produces a forbidden subgraph,
and finding the minimum value $\lambda$ of $t_2/t_1$ that is satisfiable for a
given directed acyclic graph. We show that $\lambda$ gives a measure of the
complexity of a directed acyclic graph with respect to several optimization
problems that are NP-hard on arbitrary directed acyclic graphs.
| 1 | 0 | 0 | 0 | 0 | 0 |
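The two-threshold relation defined in the abstract above is simple enough to state directly in code; this tiny sketch just encodes the three cases (names are illustrative only).

```python
# Double-threshold preference relation: given utilities and t1 <= t2,
# y is preferred to x when the utility gap exceeds t2, not preferred when
# it is below t1, and undetermined in between.
def preference(alpha_x, alpha_y, t1, t2):
    d = alpha_y - alpha_x
    if d > t2:
        return "preferred"
    if d < t1:
        return "not preferred"
    return "undetermined"   # the model allows either outcome here
```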
Directed unions of local quadratic transforms of regular local rings and pullbacks | Let $\{ R_n, {\mathfrak m}_n \}_{n \ge 0}$ be an infinite sequence of regular
local rings with $R_{n+1}$ birationally dominating $R_n$ and ${\mathfrak
m}_nR_{n+1}$ a principal ideal of $R_{n+1}$ for each $n$. We examine properties
of the integrally closed local domain $S = \bigcup_{n \ge 0}R_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lipschitz regularity of deep neural networks: analysis and efficient estimation | Deep neural networks are notorious for being sensitive to small well-chosen
perturbations, and estimating the regularity of such architectures is of utmost
importance for safe and robust practical applications. In this paper, we
investigate one of the key characteristics to assess the regularity of such
methods: the Lipschitz constant of deep learning architectures. First, we show
that, even for two layer neural networks, the exact computation of this
quantity is NP-hard and state-of-the-art methods may significantly overestimate it.
Then, we both extend and improve previous estimation methods by providing
AutoLip, the first generic algorithm for upper bounding the Lipschitz constant
of any automatically differentiable function. We provide a power method
algorithm working with automatic differentiation, allowing efficient
computations even on large convolutions. Second, for sequential neural
networks, we propose an improved algorithm named SeqLip that takes advantage of
the linear computation graph to split the computation per pair of consecutive
layers. Third, we propose heuristics on SeqLip in order to tackle very large
networks. Our experiments show that SeqLip can significantly improve on the
existing upper bounds.
| 0 | 0 | 0 | 1 | 0 | 0 |
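As background for the abstract above, the simplest AutoLip-style bound for a sequential network with 1-Lipschitz activations is the product of per-layer spectral norms, each estimated with a power method; the sketch below is a generic illustration under those assumptions, not the paper's implementation (which handles arbitrary autodiff graphs).

```python
# Product of layer spectral norms upper-bounds the Lipschitz constant of an
# MLP with 1-Lipschitz activations; this bound can be loose, which is what
# SeqLip improves on by splitting the computation across layer pairs.
import numpy as np

def spectral_norm(W, iters=50):
    v = np.random.randn(W.shape[1])
    for _ in range(iters):           # power method on W^T W
        u = W @ v; u /= np.linalg.norm(u)
        v = W.T @ u; v /= np.linalg.norm(v)
    return float(u @ W @ v)          # estimate of the largest singular value

def lipschitz_upper_bound(weights):
    """weights: list of layer matrices of a sequential network."""
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound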
Preference-based Teaching | We introduce a new model of teaching named "preference-based teaching" and a
corresponding complexity parameter---the preference-based teaching dimension
(PBTD)---representing the worst-case number of examples needed to teach any
concept in a given concept class. Although the PBTD coincides with the
well-known recursive teaching dimension (RTD) on finite classes, it is
radically different on infinite ones: the RTD becomes infinite already for
trivial infinite classes (such as half-intervals) whereas the PBTD evaluates to
reasonably small values for a wide collection of infinite classes, including
classes of so-called closed sets w.r.t. a given closure operator, various
classes related to linear sets over $\mathbb{N}_0$ (whose RTD had been studied
quite recently), and the class of Euclidean half-spaces. On top of presenting these concrete results, we provide the reader
with a theoretical framework (of a combinatorial flavor) which helps to derive
bounds on the PBTD.
| 1 | 0 | 0 | 0 | 0 | 0 |
Unified description of dynamics of a repulsive two-component Fermi gas | We study a binary spin mixture of zero-temperature, repulsively interacting
$^6$Li atoms using both the atomic-orbital and the density functional
approaches. The gas is initially prepared in a configuration of two magnetic
domains and we determine the frequency of the spin-dipole oscillations which
emerge after the repulsive barrier, initially separating the domains, is
removed. We find, in agreement with recent experiment (G. Valtolina et al.,
arXiv:1605.07850 (2016)), the occurrence of a ferromagnetic instability in an
atomic gas as the interaction strength between different spin states is
increased, after which the system becomes ferromagnetic. The ferromagnetic
instability is preceded by the softening of the spin-dipole mode.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effective inertial frame in an atom interferometric test of the equivalence principle | In an ideal test of the equivalence principle, the test masses fall in a
common inertial frame. A real experiment is affected by gravity gradients,
which introduce systematic errors by coupling to initial kinematic differences
between the test masses. We demonstrate a method that reduces the sensitivity
of a dual-species atom interferometer to initial kinematics by using a
frequency shift of the mirror pulse to create an effective inertial frame for
both atomic species. This suppresses the gravity-gradient-induced dependence of
the differential phase on initial kinematic differences by a factor of 100 and
enables a precise measurement of these differences. We realize a relative
precision of $\Delta g / g \approx 6 \times 10^{-11}$ per shot, which improves
on the best previous result for a dual-species atom interferometer by more than
three orders of magnitude. By suppressing gravity gradient systematic errors to
below one part in $10^{13}$, these results pave the way for an atomic test of
the equivalence principle at an accuracy comparable with state-of-the-art
classical tests.
| 0 | 1 | 0 | 0 | 0 | 0 |
Phonon-mediated spin-flipping mechanism in the spin ices Dy$_2$Ti$_2$O$_7$ and Ho$_2$Ti$_2$O$_7$ | To understand emergent magnetic monopole dynamics in the spin ices
Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$, it is necessary to investigate the
mechanisms by which spins flip in these materials. Presently there are thought
to be two processes: quantum tunneling at low and intermediate temperatures and
thermal activation at high temperatures. We identify possible couplings
between crystal field and optical phonon excitations and construct a strictly
constrained model of phonon-mediated spin flipping that quantitatively
describes the high-temperature processes in both compounds, as measured by
quasielastic neutron scattering. We support the model with direct experimental
evidence of the coupling between crystal field states and optical phonons in
Ho$_2$Ti$_2$O$_7$.
| 0 | 1 | 0 | 0 | 0 | 0 |
A hexatic smectic phase with algebraically decaying bond-orientational order | The hexatic phase predicted by the theories of two-dimensional melting is
characterised by the power law decay of the orientational correlations whereas
the in-layer bond orientational order in all the hexatic smectic phases
observed so far was found to be long-range. We report a hexatic smectic phase
where the in-layer bond orientational correlations decay as $\propto r^{-1/4}$,
in quantitative agreement with the hexatic ordering predicted by the theory for
two dimensions. The phase was formed in a molecular dynamics simulation of a
one-component system of particles interacting via a spherically symmetric
potential. This is the first observation of the theoretically predicted
two-dimensional hexatic order in a three-dimensional system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pebble accretion at the origin of water in Europa | Despite the fact that the observed gradient in water content among the
Galilean satellites is globally consistent with a formation in a circum-Jovian
disk on both sides of the snowline, the mechanisms that led to a low water mass
fraction in Europa ($\sim$$8\%$) are not yet understood. Here, we present new
modeling results of solids transport in the circum-Jovian disk accounting for
aerodynamic drag, turbulent diffusion, surface temperature evolution and
sublimation of water ice. We find that the water mass fraction of pebbles
(i.e., solids with sizes of 10$^{-2}$--1 m) as they drift inward is globally
consistent with the current water content of the Galilean system. This opens
the possibility that each satellite could have formed through pebble accretion
within a delimited region whose boundaries were defined by the position of the
snowline. This further implies that the migration of the forming satellites was
tied to the evolution of the snowline so that Europa fully accreted from
partially dehydrated material in the region just inside of the snowline.
| 0 | 1 | 0 | 0 | 0 | 0 |
Traffic Graph Convolutional Recurrent Neural Network: A Deep Learning Framework for Network-Scale Traffic Learning and Forecasting | Traffic forecasting is a particularly challenging application of
spatiotemporal forecasting, due to the complicated spatial dependencies on
roadway networks and the time-varying traffic patterns. To address this
challenge, we learn the traffic network as a graph and propose a novel deep
learning framework, Traffic Graph Convolutional Long Short-Term Memory Neural
Network (TGC-LSTM), to learn the interactions between roadways in the traffic
network and forecast the network-wide traffic state. We define the traffic
graph convolution based on the physical network topology. The relationship
between traffic graph convolution and the spectral graph convolution is also
discussed. The proposed model employs L1-norms on the graph convolution weights
and L2-norms on the extracted features to identify the most influential
roadways in the traffic network. Experiments show that our TGC-LSTM network is
able to efficiently capture the complex spatial-temporal dependencies present
in a vehicle traffic network and consistently outperforms state-of-the-art
baseline methods on two heterogeneous real-world traffic datasets. The
visualization of graph convolution weights shows that the proposed framework
can accurately recognize the most influential roadway segments in real-world
traffic networks.
| 0 | 0 | 0 | 1 | 0 | 0 |
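A hedged sketch of two ingredients named above, a graph convolution defined on the physical road-network adjacency and an L1 penalty on its weights; the shapes, row normalization, and `tanh` activation are assumptions for illustration, not the paper's exact formulation.

```python
# Propagate node features over the road-network adjacency, with an L1
# penalty on the convolution weights to encourage a sparse, interpretable
# set of influential roadways.
import numpy as np

def traffic_graph_conv(X, A, W):
    """X: (nodes, features), A: (nodes, nodes) adjacency with self-loops,
    W: (features, out_features)."""
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.maximum(deg, 1.0)       # row-normalized propagation
    return np.tanh(A_hat @ X @ W)

def l1_penalty(W, lam=1e-3):
    return lam * np.abs(W).sum()           # sparsity on conv weights
```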
Intrinsic Analysis of the Sample Fréchet Mean and Sample Mean of Complex Wishart Matrices | We consider two types of averaging of complex covariance matrices, a sample
mean (average) and the sample Fréchet mean. We analyse the performance of
these quantities as estimators for the true covariance matrix via `intrinsic'
versions of bias and mean square error, a methodology which takes account of
geometric structure. We derive simple expressions for the intrinsic bias in
both cases, and the simple average is seen to be preferable. The same is true
for the asymptotic Riemannian risk, and for the Riemannian risk itself in the
scalar case. Combined with a similar preference for the simple average using
non-intrinsic analysis, we conclude that the simple average is preferred
overall to the sample Fréchet mean in this context.
| 0 | 0 | 1 | 1 | 0 | 0 |
Alternating Optimization for Capacity Region of Gaussian MIMO Broadcast Channels with Per-antenna Power Constraint | This paper characterizes the capacity region of Gaussian MIMO broadcast
channels (BCs) with per-antenna power constraint (PAPC). While the capacity
region of MIMO BCs with a sum power constraint (SPC) was extensively studied,
that under PAPC has received less attention. A reason is that efficient
solutions for this problem are hard to find. The goal of this paper is to
devise an efficient algorithm for determining the capacity region of Gaussian
MIMO BCs subject to PAPC, which is scalable to the problem size. To this end,
we first transform the weighted sum capacity maximization problem, which is
inherently nonconvex in the input covariance matrices, into a convex
formulation in the dual multiple access channel by minimax duality. Then we
derive a computationally efficient algorithm combining the concept of
alternating optimization and successive convex approximation. The proposed
algorithm achieves much lower complexity compared to an existing interior-point
method. Moreover, numerical results demonstrate that the proposed algorithm
converges very fast under various scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tales of Two Cities: Using Social Media to Understand Idiosyncratic Lifestyles in Distinctive Metropolitan Areas | Lifestyles are a valuable model for understanding individuals' physical and
mental lives, comparing social groups, and making recommendations for improving
people's lives. In this paper, we examine and compare lifestyle behaviors of
people living in cities of different sizes, utilizing freely available social
media data as a large-scale, low-cost alternative to traditional survey
methods. We use the Greater New York City area as a representative for large
cities, and the Greater Rochester area as a representative for smaller cities
in the United States. We employed matrix factor analysis as an unsupervised
method to extract salient mobility and work-rest patterns for a large
population of users within each metropolitan area. We discovered interesting
human behavior patterns at both a larger scale and a finer granularity than is
present in previous literature, some of which allow us to quantitatively
compare the behaviors of individuals living in big cities to those living in
small cities. We believe that our social media-based approach to lifestyle
analysis represents a powerful tool for social computing in the big data age.
| 1 | 0 | 0 | 0 | 0 | 0 |
Randomized Iterative Reconstruction for Sparse View X-ray Computed Tomography | With the availability of more powerful computers, iterative reconstruction
algorithms are the subject of ongoing work on the design of more efficient
reconstruction algorithms for X-ray computed tomography. In this work, we show
how two analytical reconstruction algorithms can be improved by correcting the
corresponding reconstructions using a randomized iterative reconstruction
algorithm. The combined analytical reconstruction followed by randomized
iterative reconstruction can also be viewed as a reconstruction algorithm
which, in the experiments we have conducted, uses up to $35\%$ fewer projection
angles as compared to the analytical reconstruction algorithms and produces the
same results in terms of quality of reconstruction, without increasing the
execution time significantly.
| 1 | 0 | 0 | 0 | 0 | 0 |
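The abstract above does not name a specific randomized iterative scheme, so the sketch below uses randomized Kaczmarz on the discretized projection system $Ax = b$ purely as one plausible instance of correcting an analytical reconstruction `x0` with a randomized iteration.

```python
# Randomized Kaczmarz: repeatedly project the current estimate onto the
# hyperplane of one randomly chosen projection equation, starting from an
# analytical reconstruction x0.
import numpy as np

def randomized_kaczmarz(A, b, x0, sweeps=10, rng=None):
    rng = np.random.default_rng(rng)
    x = x0.copy()
    row_norms = (A ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()        # sample rows prop. to norm^2
    for _ in range(sweeps * A.shape[0]):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```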
Finding Local Minima via Stochastic Nested Variance Reduction | We propose two algorithms that can find local minima faster than the
state-of-the-art algorithms in both finite-sum and general stochastic nonconvex
optimization. At the core of the proposed algorithms is
$\text{One-epoch-SNVRG}^+$ using stochastic nested variance reduction (Zhou et
al., 2018a), which outperforms the state-of-the-art variance reduction
algorithms such as SCSG (Lei et al., 2017). In particular, for finite-sum
optimization problems, the proposed
$\text{SNVRG}^{+}+\text{Neon2}^{\text{finite}}$ algorithm achieves
$\tilde{O}(n^{1/2}\epsilon^{-2}+n\epsilon_H^{-3}+n^{3/4}\epsilon_H^{-7/2})$
gradient complexity to converge to an $(\epsilon, \epsilon_H)$-second-order
stationary point, which outperforms $\text{SVRG}+\text{Neon2}^{\text{finite}}$
(Allen-Zhu and Li, 2017), the best existing algorithm, in a wide regime. For
general stochastic optimization problems, the proposed
$\text{SNVRG}^{+}+\text{Neon2}^{\text{online}}$ achieves
$\tilde{O}(\epsilon^{-3}+\epsilon_H^{-5}+\epsilon^{-2}\epsilon_H^{-3})$
gradient complexity, which is better than both
$\text{SVRG}+\text{Neon2}^{\text{online}}$ (Allen-Zhu and Li, 2017) and
Natasha2 (Allen-Zhu, 2017) in certain regimes. Furthermore, we explore the
acceleration brought by third-order smoothness of the objective function.
| 0 | 0 | 0 | 1 | 0 | 0 |
Growth rate of the state vector in a generalized linear stochastic system with symmetric matrix | The mean growth rate of the state vector is evaluated for a generalized
linear stochastic second-order system with a symmetric matrix. Diagonal entries
of the matrix are assumed to be independent and exponentially distributed with
different means, while the off-diagonal entries are equal to zero.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Patchworks: An Approach to Case-Based Reasoning | Doctors often rely on their past experience in order to diagnose patients.
For a doctor with enough experience, almost every patient would have
similarities to key cases seen in the past, and each new patient could be
viewed as a mixture of these key past cases. Because doctors often tend to
reason this way, an efficient computationally aided diagnostic tool that thinks
in the same way might be helpful in locating key past cases of interest that
could assist with diagnosis. This article develops a novel mathematical model
to mimic the type of logical thinking that physicians use when considering past
cases. The proposed model can also provide physicians with explanations that
would be similar to the way they would naturally reason about cases. The
proposed method is designed to yield predictive accuracy, computational
efficiency, and insight into medical data; the key element is the insight into
medical data; in some sense we are automating a complicated process that
physicians might perform manually. Finally, we applied the results of this
work to two publicly available healthcare datasets, for heart disease
prediction and breast cancer prediction.
| 0 | 0 | 0 | 1 | 0 | 0 |
Strong Black-box Adversarial Attacks on Unsupervised Machine Learning Models | Machine Learning (ML) and Deep Learning (DL) models have achieved
state-of-the-art performance on multiple learning tasks, from vision to natural
language modelling. With the growing adoption of ML and DL to many areas of
computer science, recent research has also started focusing on the security
properties of these models. Much work has been undertaken to
understand if (deep) neural network architectures are resilient to black-box
adversarial attacks which craft perturbed input samples that fool the
classifier without knowing the architecture used. Recent work has also focused
on the transferability of adversarial attacks and found that adversarial
attacks are generally easily transferable between models, datasets, and
techniques. However, such attacks and their analysis have not been covered from
the perspective of unsupervised machine learning algorithms. In this paper, we
seek to bridge this gap through multiple contributions. We first provide a
strong (iterative) black-box adversarial attack that can craft adversarial
samples which will be incorrectly clustered irrespective of the choice of
clustering algorithm. We choose 4 prominent clustering algorithms, and a
real-world dataset to show the working of the proposed adversarial algorithm.
Using these clustering algorithms we also carry out a simple study of
cross-technique adversarial attack transferability.
| 1 | 0 | 0 | 1 | 0 | 0 |
Formal affine Demazure and Hecke algebras of Kac-Moody root systems | We define the formal affine Demazure algebra and formal affine Hecke algebra
associated to a Kac-Moody root system. We prove the structure theorems of these
algebras, hence extending several results and constructions (presentation in
terms of generators and relations, coproduct and product structures, filtration
by codimension of Bott-Samelson classes, root polynomials and multiplication
formulas) that were previously known for finite root systems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Handling Homographs in Neural Machine Translation | Homographs, words with different meanings but the same surface form, have
long caused difficulty for machine translation systems, as it is difficult to
select the correct translation based on the context. However, with the advent
of neural machine translation (NMT) systems, which can theoretically take into
account global sentential context, one may hypothesize that this problem has
been alleviated. In this paper, we first provide empirical evidence that
existing NMT systems in fact still have significant problems in properly
translating ambiguous words. We then proceed to describe methods, inspired by
the word sense disambiguation literature, that model the context of the input
word with context-aware word embeddings that help to differentiate the word
sense before feeding it into the encoder. Experiments on three language pairs
demonstrate that such models improve the performance of NMT systems both in
terms of BLEU score and in the accuracy of translating homographs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simple Length Rigidity for Hitchin Representations | We show that a Hitchin representation is determined by the spectral radii of
the images of simple, non-separating closed curves. As a consequence, we
classify isometries of the intersection function on Hitchin components of
dimension 3 and on the self-dual Hitchin components in all dimensions. As an
important tool in the proof, we establish a transversality result for positive
quadruples of flags.
| 0 | 0 | 1 | 0 | 0 | 0 |
Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology | Digital pathology is not only one of the most promising fields of diagnostic
medicine, but at the same time a hot topic for fundamental research. Digital
pathology is not just the transfer of histopathological slides into digital
representations. The combination of different data sources (images, patient
records, and *omics data) together with current advances in artificial
intelligence/machine learning make it possible to render novel information
accessible and quantifiable to a human expert -- information that is not yet
available or exploited in current medical settings. The grand goal is to reach a level of usable
intelligence to understand the data in the context of an application task,
thereby making machine decisions transparent, interpretable and explainable.
The foundation of such an "augmented pathologist" needs an integrated approach:
While machine learning algorithms require many thousands of training examples,
a human expert is often confronted with only a few data points. Interestingly,
humans can learn from such few examples and are able to instantly interpret
complex patterns. Consequently, the grand goal is to combine the possibilities
of artificial intelligence with human intelligence and to find a well-suited
balance between them to enable what neither of them could do on their own. This
can raise the quality of education, diagnosis, prognosis and prediction of
cancer and other diseases. In this paper we describe some (incomplete) research
issues which we believe should be addressed in an integrated and concerted
effort for paving the way towards the augmented pathologist.
| 1 | 0 | 0 | 1 | 0 | 0 |
Morse Code Datasets for Machine Learning | We present an algorithm to generate synthetic datasets of tunable difficulty
on classification of Morse code symbols for supervised machine learning
problems, in particular, neural networks. The datasets are spatially
one-dimensional and have a small number of input features, leading to high
density of input information content. This makes them particularly challenging
when implementing network complexity reduction methods. We explore how network
performance is affected by deliberately adding various forms of noise and
expanding the feature set and dataset size. Finally, we establish several
metrics to indicate the difficulty of a dataset, and evaluate their merits. The
algorithm and datasets are open-source.
| 0 | 0 | 0 | 1 | 0 | 0 |
Guarantees for Spectral Clustering with Fairness Constraints | Given the widespread popularity of spectral clustering (SC) for partitioning
graph data, we study a version of constrained SC in which we try to incorporate
the fairness notion proposed by Chierichetti et al. (2017). According to this
notion, a clustering is fair if every demographic group is approximately
proportionally represented in each cluster. To this end, we develop variants of
both normalized and unnormalized constrained SC and show that they help find
fairer clusterings on both synthetic and real data. We also provide a rigorous
theoretical analysis of our algorithms. While there have been efforts to
incorporate various constraints into the SC framework, theoretically analyzing
them is a challenging problem. We overcome this by proposing a natural variant
of the stochastic block model where $h$ groups have strong inter-group
connectivity, but also exhibit a "natural" clustering structure which is fair.
We prove that our algorithms can recover this fair clustering with high
probability.
| 1 | 0 | 0 | 1 | 0 | 0 |
Using Maximum Entry-Wise Deviation to Test the Goodness-of-Fit for Stochastic Block Models | The stochastic block model is widely used for detecting community structures
in network data. How to test the goodness-of-fit of the model is one of the
fundamental problems and has gained growing interest in recent years. In this
paper, we propose a novel goodness-of-fit test based on the maximum entry of
the centered and re-scaled adjacency matrix for the stochastic block model. One
noticeable advantage of the proposed test is that the number of communities can
be allowed to grow linearly with the number of nodes, up to a logarithmic
factor. We prove that the null distribution of the test statistic converges in
distribution to a Gumbel distribution, and we show that both the number of
communities and the membership vector can be tested via the proposed method.
Further, we show that the proposed test has asymptotic power guarantee against
a class of alternatives. We also demonstrate that the proposed method can be
extended to the degree-corrected stochastic block model. Both simulation
studies and real-world data examples indicate that the proposed method works
well.
| 0 | 0 | 0 | 1 | 0 | 0 |
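A minimal sketch of the proposed statistic as described above: the maximum entry of the centered and re-scaled adjacency matrix under fitted block-model probabilities `P_hat`; the exact re-scaling convention by $\sqrt{p(1-p)}$ is an assumption here.

```python
# Maximum entry of the centered, re-scaled adjacency matrix: large values
# suggest the fitted stochastic block model does not explain the network.
import numpy as np

def max_entry_statistic(A, P_hat):
    off = ~np.eye(A.shape[0], dtype=bool)          # ignore the diagonal
    Z = (A - P_hat) / np.sqrt(P_hat * (1.0 - P_hat))
    return np.abs(Z[off]).max()
```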
Twitter and the Press: an Ego-Centred Analysis | Ego networks have proved to be a valuable tool for understanding the
relationships that individuals establish with their peers, both in offline and
online social networks. Particularly interesting are the cognitive constraints
associated with the interactions between the ego and the members of their ego
network, whereby individuals cannot maintain meaningful interactions with more
than 150 people, on average. In this work, we focus on the ego networks of
journalists on Twitter, and we investigate whether they feature the same
characteristics observed for other relevant classes of Twitter users, like
politicians and generic users. Our findings are that journalists are generally
more active and interact with more people than generic users. Their ego network
structure is very aligned with reference models derived from the social brain
hypothesis and observed in general human ego networks. Remarkably, the
similarity is even higher than that of politicians' and generic users' ego
networks. This may imply a greater cognitive involvement with Twitter than with
other social interaction means. Moreover, the ego networks of journalists are
much more stable than those of politicians and generic users, and the ego-alter
ties are often information-driven.
| 1 | 0 | 0 | 0 | 0 | 0 |
Majorana quasiparticles in condensed matter | In the space of less than one decade, the search for Majorana quasiparticles
in condensed matter has become one of the hottest topics in physics. The aim of
this review is to provide a brief perspective of where we are, with a strong focus
on artificial implementations of one-dimensional topological superconductivity.
After a self-contained introduction and some technical parts, an overview of
the current experimental status is given and some of the most successful
experiments of the last few years are discussed in detail. These include the
novel generation of ballistic InSb nanowire devices, epitaxial Al-InAs
nanowires and Majorana boxes, high frequency experiments with proximitized
quantum spin Hall insulators realised in HgTe quantum wells and recent
experiments on ferromagnetic atomic chains on top of superconducting surfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Proper orthogonal decomposition vs. Fourier analysis for extraction of large-scale structures of thermal convection | We performed a comparative study of extraction of large-scale flow structures
in Rayleigh-Bénard convection using proper orthogonal decomposition (POD) and
{\em Fourier analysis}. We show that the free-slip basis functions capture the
flow profiles successfully for the no-slip boundary conditions. We observe that
the large-scale POD modes capture a larger fraction of total energy than the
Fourier modes. However, the Fourier modes capture the rarer flow structures
like flow reversals better. The flow profiles of the dominant POD and Fourier
modes are quite similar. Our results show that the Fourier analysis provides an
attractive alternative to POD analysis for capturing large-scale flow
structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Gromov--Witten invariants of $\mathbb{P}^1$ | We propose a conjectural explicit formula of generating series of a new type
for Gromov--Witten invariants of $\mathbb{P}^1$ of all degrees in full genera.
| 0 | 1 | 1 | 0 | 0 | 0 |
Downwash-Aware Trajectory Planning for Large Quadrotor Teams | We describe a method for formation-change trajectory planning for large
quadrotor teams in obstacle-rich environments. Our method decomposes the
planning problem into two stages: a discrete planner operating on a graph
representation of the workspace, and a continuous refinement that converts the
non-smooth graph plan into a set of C^k-continuous trajectories, locally
optimizing an integral-squared-derivative cost. We account for the downwash
effect, allowing safe flight in dense formations. We demonstrate the
computational efficiency in simulation with up to 200 robots and the physical
plausibility with an experiment with 32 nano-quadrotors. Our approach can
compute safe and smooth trajectories for hundreds of quadrotors in dense
environments with obstacles in a few minutes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Flow simulation in a 2D bubble column with the Euler-Lagrange and Euler-Euler method | Bubbly flows, as present in bubble column reactors, can be simulated using a
variety of simulation techniques. In order to gain high resolution, CFD methods
are used to simulate a pseudo-2D bubble column using Euler-Lagrange (EL) and
Euler-Euler (EE) techniques.
forces on bubble dynamics are solved within open access software OpenFOAM with
bubble interactions computed via Monte Carlo methods. The estimated bubble size
distribution and the predicted hold-up are compared to experimental data and
other simulation work using the EE approach, and show reasonable agreement for both.
Benchmarks against state-of-the-art EE simulations show that the EL approach is
advantageous if the bubble number stays at a certain level, as the EL approach
scales linearly with the number of bubbles simulated. Therefore, different
computational meshes have been used to also account for influence of the
resolution quality. The EL approach yielded faster solutions for all realistic
cases; only a deliberate decrease of coalescence rates could push CPU time to
the limits. The critical bubble number, at which EE becomes advantageous over
the EL approach, was estimated to be 40,000 in this particular case.
| 0 | 1 | 0 | 0 | 0 | 0 |
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient | In this paper, we study the problem of sampling from a given probability
density function that is known to be smooth and strongly log-concave. We
analyze several methods of approximate sampling based on discretizations of the
(highly overdamped) Langevin diffusion and establish guarantees on their error
measured in the Wasserstein-2 distance. Our guarantees improve or extend the
state-of-the-art results in three directions. First, we provide an upper bound
on the error of the first-order Langevin Monte Carlo (LMC) algorithm with
optimized varying step-size. This result has the advantage of being horizon
free (we do not need to know in advance the target precision) and to improve by
a logarithmic factor the corresponding result for the constant step-size.
Second, we study the case where accurate evaluations of the gradient of the
log-density are unavailable, but one can have access to approximations of the
aforementioned gradient. In such a situation, we consider both deterministic
and stochastic approximations of the gradient and provide an upper bound on the
sampling error of the first-order LMC that quantifies the impact of the
gradient evaluation inaccuracies. Third, we establish upper bounds for two
versions of the second-order LMC, which leverage the Hessian of the
log-density. We provide nonasymptotic guarantees on the sampling error of these
second-order LMCs. These guarantees reveal that the second-order LMC algorithms
improve on the first-order LMC in ill-conditioned settings.
| 1 | 0 | 1 | 1 | 0 | 0 |
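For reference, the first-order LMC iteration analyzed above is short enough to sketch; a constant step size is used for simplicity (the paper's horizon-free result concerns an optimized varying step size), and `grad` may be an exact or approximate gradient of the negative log-density.

```python
# First-order Langevin Monte Carlo: x <- x - h * grad(x) + sqrt(2h) * N(0, I),
# which approximately samples from the density proportional to exp(-f).
import numpy as np

def lmc(grad, x0, h=1e-2, n_iter=1000, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x - h * grad(x) + np.sqrt(2.0 * h) * noise
    return x   # approximate sample after n_iter discretized diffusion steps
```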
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples | The Madry Lab recently hosted a competition designed to test the robustness
of their adversarially trained MNIST model. Attacks were constrained to perturb
each pixel of the input image by a scaled maximal $L_\infty$ distortion
$\epsilon$ = 0.3. This discourages the use of attacks which are not optimized
on the $L_\infty$ distortion metric. Our experimental results demonstrate that
by relaxing the $L_\infty$ constraint of the competition, the elastic-net
attack to deep neural networks (EAD) can generate transferable adversarial
examples which, despite their high average $L_\infty$ distortion, have minimal
visual distortion. These results call into question the use of $L_\infty$ as a
sole measure for visual distortion, and further demonstrate the power of EAD at
generating robust adversarial examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quantum sensors for the generating functional of interacting quantum field theories | Difficult problems described in terms of interacting quantum fields evolving
in real time or out of equilibrium abound in condensed-matter and
high-energy physics. Addressing such problems via controlled experiments in
atomic, molecular, and optical physics would be a breakthrough in the field of
quantum simulations. In this work, we present a quantum-sensing protocol to
measure the generating functional of an interacting quantum field theory and,
with it, all the relevant information about its in or out of equilibrium
phenomena. Our protocol can be understood as a collective interferometric
scheme based on a generalization of the notion of Schwinger sources in quantum
field theories, which make it possible to probe the generating functional. We
show that our scheme can be realized in crystals of trapped ions acting as
analog quantum simulators of self-interacting scalar quantum field theories.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sockeye: A Toolkit for Neural Machine Translation | We describe Sockeye (version 1.12), an open-source sequence-to-sequence
toolkit for Neural Machine Translation (NMT). Sockeye is a production-ready
framework for training and applying models as well as an experimental platform
for researchers. Written in Python and built on MXNet, the toolkit offers
scalable training and inference for the three most prominent encoder-decoder
architectures: attentional recurrent neural networks, self-attentional
transformers, and fully convolutional networks. Sockeye also supports a wide
range of optimizers, normalization and regularization techniques, and inference
improvements from current NMT literature. Users can easily run standard
training recipes, explore different model settings, and incorporate new ideas.
In this paper, we highlight Sockeye's features and benchmark it against other
NMT toolkits on two language arcs from the 2017 Conference on Machine
Translation (WMT): English-German and Latvian-English. We report competitive
BLEU scores across all three architectures, including an overall best score for
Sockeye's transformer implementation. To facilitate further comparison, we
release all system outputs and training scripts used in our experiments. The
Sockeye toolkit is free software released under the Apache 2.0 license.
| 1 | 0 | 0 | 1 | 0 | 0 |
Bayesian shape modelling of cross-sectional geological data | Shape information is of great importance in many applications. For example,
the oil-bearing capacity of sand bodies, the subterranean remnants of ancient
rivers, is related to their cross-sectional shapes. The analysis of these
shapes is therefore of some interest, but current classifications are
simplistic and ad hoc. In this paper, we describe the first steps towards a
coherent statistical analysis of these shapes by deriving the integrated
likelihood for data shapes given class parameters. The result is of interest
beyond this particular application.
| 0 | 0 | 0 | 1 | 0 | 0 |
Analysing Relations involving small number of Monomials in AES S-Box | At present, AES is one of the most widely used and most secure
encryption systems. Naturally, a great deal of research aims to mount a
significant attack on AES, and many different forms of linear and
differential cryptanalysis have been applied to it. Of late, an active area
of research has been the algebraic cryptanalysis of AES where, although fast
progress is being made, there is still considerable scope for research and
improvement. One of the major reasons is that algebraic
cryptanalysis mainly depends on the I/O relations of the AES S-Box (a major
component of AES). As is already known, key recovery for
AES can be reduced to an MQ problem, which is itself considered hard.
Solving these equations depends on our ability to reduce them to linear forms
that are easily solvable with our current computational power. The lower
the degree of these equations, the easier they are to linearize, and hence the
lower the attack complexity. The aim of this paper is to analyze the various
relations involving a small number of monomials of the AES S-Box and to answer
the question of whether such monomial equations are actually possible
for the S-Box if we restrict the degree of the monomials. In other words, this
paper aims to study such equations and see whether they are applicable to AES.
| 1 | 0 | 0 | 0 | 0 | 0 |
q-Virasoro algebra and affine Kac-Moody Lie algebras | We establish a natural connection of the $q$-Virasoro algebra $D_{q}$
introduced by Belov and Chaltikian with affine Kac-Moody Lie algebras. More
specifically, for each abelian group $S$ together with a one-to-one linear
character $\chi$, we define an infinite-dimensional Lie algebra $D_{S}$ which
reduces to $D_{q}$ when $S=\mathbb{Z}$. Guided by the theory of equivariant
quasi modules for vertex algebras, we introduce another Lie algebra
${\mathfrak{g}}_{S}$ with $S$ as an automorphism group and we prove that
$D_{S}$ is isomorphic to the $S$-covariant algebra of the affine Lie algebra
$\widehat{\mathfrak{g}_{S}}$. We then relate restricted $D_{S}$-modules of
level $\ell\in \mathbb{C}$ to equivariant quasi modules for the vertex algebra
$V_{\widehat{\mathfrak{g}_{S}}}(\ell,0)$ associated to
$\widehat{\mathfrak{g}_{S}}$ with level $\ell$. Furthermore, we show that if
$S$ is a finite abelian group of order $2l+1$, $D_{S}$ is isomorphic to the
affine Kac-Moody algebra of type $B^{(1)}_{l}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Noise Handling Properties of the Talbot Algorithm for Numerically Inverting the Laplace Transform | This paper examines the noise handling properties of three of the most widely
used algorithms for numerically inverting the Laplace Transform. After
examining the genesis of the algorithms, the regularization properties are
evaluated through a series of standard test functions in which noise is added
to the inverse transform. Comparisons are then made with the exact data. Our
main finding is that the Talbot inversion algorithm is very good at handling
noisy data and performs much better than the Fourier Series and Stehfest
numerical inversion schemes as outlined in this paper. This offers a
considerable advantage for its use in inverting the Laplace Transform when
seeking numerical solutions to time dependent differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
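The comparison described in this abstract can be reproduced in miniature with
mpmath, which ships both a Talbot and a Stehfest inverter. The sketch below is
illustrative only (it is not the paper's code, and the noise injection studied
there is omitted); the test transform F(s) = 1/(s+1), with exact inverse
exp(-t), is an assumed example.

```python
# Illustrative sketch (not the paper's code): invert F(s) = 1/(s + 1),
# whose exact inverse is f(t) = exp(-t), comparing the two methods.
from mpmath import mp, exp, invertlaplace

mp.dps = 30                      # high working precision

F = lambda s: 1 / (s + 1)        # Laplace-domain test function (assumed example)
f_exact = lambda t: exp(-t)      # known time-domain inverse

for t in [0.5, 1.0, 2.0]:
    err_talbot = invertlaplace(F, t, method='talbot') - f_exact(t)
    err_stehfest = invertlaplace(F, t, method='stehfest') - f_exact(t)
    print(t, float(err_talbot), float(err_stehfest))
```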
Short-term Motion Prediction of Traffic Actors for Autonomous Driving using Deep Convolutional Networks | Despite its ubiquity in our daily lives, AI is only just starting to make
advances in what may arguably have the largest societal impact thus far, the
nascent field of autonomous driving. In this work we discuss this important
topic and address one of the crucial aspects of the emerging area, the problem
of predicting the future state of an autonomous vehicle's surroundings, which
is necessary for safe and efficient operation. We introduce a deep
learning-based approach that takes into account the current world state and
produces rasterized representations
of each actor's vicinity. The raster images are then used by deep convolutional
models to infer future movement of actors while accounting for inherent
uncertainty of the prediction task. Extensive experiments on real-world data
strongly suggest benefits of the proposed approach. Moreover, following
successful tests the system was deployed to a fleet of autonomous vehicles.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tropicalization, symmetric polynomials, and complexity | D. Grigoriev-G. Koshevoy recently proved that tropical Schur polynomials have
(at worst) polynomial tropical semiring complexity. They also conjectured
tropical skew Schur polynomials have at least exponential complexity; we
establish a polynomial complexity upper bound. Our proof uses results about
(stable) Schubert polynomials, due to R. P. Stanley and S. Billey-W.
Jockusch-R. P. Stanley, together with a sufficient condition for polynomial
complexity that is connected to the saturated Newton polytope property.
| 1 | 0 | 0 | 0 | 0 | 0 |
The normal closure of big Dehn twists, and plate spinning with rotating families | We study the normal closure of a big power of one or several Dehn twists in a
Mapping Class Group. We prove that it has a presentation whose relators consist
only of commutators between twists of disjoint support, thus answering a
question of Ivanov. Our method is to use the theory of projection complexes of
Bestvina, Bromberg, and Fujiwara, together with the theory of rotating
families, simultaneously on several spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Secure Minimum Time Planning Under Environmental Uncertainty: an Extended Treatment | Cyber Physical Systems (CPS) are becoming ubiquitous and affect the physical
world, yet security is seldom at the forefront of their design. This is
especially true of robotic control algorithms, which rarely consider the effect
of a cyber attack on mission objectives and success. This work presents a
secure optimal control algorithm that operates in the face of a cyber attack on
a robot's knowledge of the environment. The focus here is on cyber attacks, but
the results generalize to incomplete or outdated information about an
environment.
This work fuses ideas from robust control, optimal control, and sensor based
planning to provide a generalization of stopping distance in 3D. The planner is
implemented in simulation and its properties are analyzed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Treatment-Response Models for Counterfactual Reasoning with Continuous-time, Continuous-valued Interventions | Treatment effects can be estimated from observational data as the difference
in potential outcomes. In this paper, we address the challenge of estimating
the potential outcome when treatment-dose levels can vary continuously over
time. Further, the outcome variable may not be measured at a regular frequency.
Our proposed solution represents the treatment response curves using linear
time-invariant dynamical systems---this provides a flexible means for modeling
response over time to highly variable dose curves. Moreover, for multivariate
data, the proposed method: uncovers shared structure in treatment response and
the baseline across multiple markers; and, flexibly models challenging
correlation structure both across and within signals over time. For this, we
build upon the framework of multiple-output Gaussian Processes. On simulated
and a challenging clinical dataset, we show significant gains in accuracy over
state-of-the-art models.
| 1 | 0 | 0 | 1 | 0 | 0 |
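A minimal sketch of the modeling idea in the abstract above: the outcome is a
baseline plus the dose curve convolved with a linear time-invariant impulse
response, observed at irregular times. The exponential kernel and every
parameter value below are illustrative assumptions, not the paper's fitted
model (which places multiple-output Gaussian Processes on top of the LTI
representation).

```python
# Illustrative sketch: outcome = baseline + (dose * h)(t) + noise, with
# the outcome observed only at irregular times. Kernel and parameters
# are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
T = 200
dose = rng.uniform(0.0, 1.0, size=T)        # continuously varying dose levels
h = 0.5 * np.exp(-0.1 * np.arange(50))      # assumed LTI impulse response
baseline = 2.0

outcome = baseline + np.convolve(dose, h)[:T] + 0.05 * rng.standard_normal(T)

obs_idx = np.sort(rng.choice(T, size=30, replace=False))  # irregular sampling
observations = outcome[obs_idx]
print(obs_idx[:5], observations[:5])
```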
Reduced Electron Exposure for Energy-Dispersive Spectroscopy using Dynamic Sampling | Analytical electron microscopy and spectroscopy of biological specimens,
polymers, and other beam sensitive materials has been a challenging area due to
irradiation damage. There is a pressing need to develop novel imaging and
spectroscopic imaging methods that will minimize such sample damage as well as
reduce the data acquisition time. The latter is useful for high-throughput
analysis of materials structure and chemistry. In this work, we present a novel
machine learning based method for dynamic sparse sampling of EDS data using a
scanning electron microscope. Our method, based on a supervised learning
approach to dynamic sampling and neural-network-based classification of EDS
data, allows a dramatic reduction in the total sampling
of up to 90%, while maintaining the fidelity of the reconstructed elemental
maps and spectroscopic data. We believe this approach will enable imaging and
elemental mapping of materials that would otherwise be inaccessible to these
analysis techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimization of Smooth Functions with Noisy Observations: Local Minimax Rates | We consider the problem of global optimization of an unknown non-convex
smooth function with zeroth-order feedback. In this setup, an algorithm is
allowed to adaptively query the underlying function at different locations and
receives noisy evaluations of function values at the queried points (i.e. the
algorithm has access to zeroth-order information). Optimization performance is
evaluated by the expected difference of function values at the estimated
optimum and the true optimum. In contrast to the classical optimization setup,
first-order information such as gradients is not directly accessible to the
optimization algorithm. We show that the classical minimax framework of
analysis, which roughly characterizes the worst-case query complexity of an
optimization algorithm in this setting, leads to excessively pessimistic
results. We propose a local minimax framework to study the fundamental
difficulty of optimizing smooth functions with adaptive function evaluations,
which provides a refined picture of the intrinsic difficulty of zeroth-order
optimization. We show that for functions with fast level set growth around the
global minimum, carefully designed optimization algorithms can identify a near
global minimizer with many fewer queries. For the special case of strongly
convex and smooth functions, our implied convergence rates match the ones
developed for zeroth-order convex optimization problems. At the other end of
the spectrum, for worst-case smooth functions no algorithm can converge faster
than the minimax rate of estimating the entire unknown function in the
$\ell_\infty$-norm. We provide an intuitive and efficient algorithm that
attains the derived upper error bounds.
| 0 | 0 | 0 | 1 | 0 | 0 |
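As background to the abstract above, a generic two-point zeroth-order method on
a toy smooth function: the algorithm queries only noisy function values and
forms a gradient surrogate from a random direction. This is a standard textbook
estimator sketched under assumed parameters, not the algorithm the paper
proposes.

```python
# Generic two-point zeroth-order sketch on f(x) = ||x||^2 with noisy
# evaluations; smoothing radius mu and step size lr are assumed values.
import numpy as np

rng = np.random.default_rng(1)

def f_noisy(x, noise=0.01):
    return np.sum(x ** 2) + noise * rng.standard_normal()

x = rng.standard_normal(5)
mu, lr = 0.1, 0.05
for _ in range(2000):
    u = rng.standard_normal(x.size)                       # random direction
    g = (f_noisy(x + mu * u) - f_noisy(x - mu * u)) / (2 * mu) * u
    x -= lr * g                                           # surrogate-gradient step
print(f_noisy(x))  # should be close to the optimal value 0
```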
Raw Waveform-based Speech Enhancement by Fully Convolutional Networks | This study proposes a fully convolutional network (FCN) model for raw
waveform-based speech enhancement. The proposed system performs speech
enhancement in an end-to-end (i.e., waveform-in and waveform-out) manner, which
differs from most existing denoising methods that process the magnitude
spectrum (e.g., log power spectrum (LPS)) only. Because the fully connected
layers, which are involved in deep neural networks (DNN) and convolutional
neural networks (CNN), may not accurately characterize the local information of
speech signals, particularly with high frequency components, we employed fully
convolutional layers to model the waveform. More specifically, FCN consists of
only convolutional layers and thus the local temporal structures of speech
signals can be efficiently and effectively preserved with relatively few
weights. Experimental results show that DNN- and CNN-based models have limited
capability to restore high frequency components of waveforms, thus leading to
decreased intelligibility of enhanced speech. By contrast, the proposed FCN
model can not only effectively recover the waveforms but also outperform the
LPS-based DNN baseline in terms of short-time objective intelligibility (STOI)
and perceptual evaluation of speech quality (PESQ). In addition, the number of
model parameters in the FCN is only approximately 0.2% of that in the DNN and
CNN models.
| 1 | 0 | 0 | 1 | 0 | 0 |
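A minimal sketch of the waveform-in/waveform-out idea in the abstract above,
written in PyTorch: a stack of Conv1d layers with no fully connected layers, so
the output has the same length as the input waveform. The WaveFCN class name,
layer count, channel width, and kernel size are illustrative assumptions rather
than the paper's configuration.

```python
# Minimal fully convolutional waveform denoiser sketch (illustrative).
import torch
import torch.nn as nn

class WaveFCN(nn.Module):
    def __init__(self, channels=32, kernel=31, layers=4):
        super().__init__()
        pad = kernel // 2  # 'same' padding keeps the waveform length
        blocks, in_ch = [], 1
        for _ in range(layers):
            blocks += [nn.Conv1d(in_ch, channels, kernel, padding=pad), nn.ReLU()]
            in_ch = channels
        blocks += [nn.Conv1d(in_ch, 1, kernel, padding=pad)]  # back to one channel
        self.net = nn.Sequential(*blocks)

    def forward(self, x):          # x: (batch, 1, samples)
        return self.net(x)

noisy = torch.randn(8, 1, 16000)   # one second of 16 kHz audio, batch of 8
enhanced = WaveFCN()(noisy)
print(enhanced.shape)              # torch.Size([8, 1, 16000])
```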
Kinetic Theory for Finance Brownian Motion from Microscopic Dynamics | Recent technological development has enabled researchers to study social
phenomena scientifically in detail, and financial markets have particularly
attracted physicists, since the Brownian motion has played a key role there, as
physics. In our previous report (arXiv:1703.06739; to appear in Phys. Rev.
Lett.), we have presented a microscopic model of trend-following high-frequency
traders (HFTs) and its theoretical relation to the dynamics of financial
Brownian motion, directly supported by a data analysis of tracking trajectories
of individual HFTs in a financial market. Here we show the mathematical
foundation for the HFT model, paralleling the traditional kinetic theory in
statistical physics. We first derive the time-evolution equation for the
phase-space distribution for the HFT model exactly, which corresponds to the
Liouville equation in conventional analytical mechanics. By a systematic
reduction of the Liouville equation for the HFT model, the
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchical equations are derived for
financial Brownian motion. We then derive the Boltzmann-like and Langevin-like
equations for the order-book and the price dynamics by making the assumption of
molecular chaos. The qualitative behavior of the model is asymptotically
studied by solving the Boltzmann-like and Langevin-like equations in the limit
of a large number of HFTs, which is numerically validated through Monte Carlo
simulation. Our kinetic description highlights the parallel mathematical
structure between the financial Brownian motion and the physical Brownian
motion.
| 0 | 0 | 0 | 0 | 0 | 1 |
Assessing Uncertainties in X-ray Single-particle Three-dimensional reconstructions | Modern technology for producing extremely bright and coherent X-ray laser
pulses provides the possibility to acquire a large number of diffraction
patterns from individual biological nanoparticles, including proteins, viruses,
and DNA. These two-dimensional diffraction patterns can be practically
reconstructed and retrieved down to a resolution of a few ångström. In
principle, a sufficiently large collection of diffraction patterns will contain
the required information for a full three-dimensional reconstruction of the
biomolecule. The computational methodology for this reconstruction task is
still under development and highly resolved reconstructions have not yet been
produced.
We analyze the Expansion-Maximization-Compression scheme, the current
state-of-the-art approach for this very challenging application, by isolating
different sources of uncertainty. Through numerical experiments on synthetic
data we evaluate their respective impact. We reach conclusions of relevance for
handling actual experimental data, as well as pointing out certain improvements
to the underlying estimation algorithm.
We also introduce a practically applicable computational methodology in the
form of bootstrap procedures for assessing reconstruction uncertainty in the
real data case. We evaluate the sharpness of this approach and argue that this
type of procedure will be critical in the near future when handling the
increasing amount of data.
| 1 | 1 | 0 | 1 | 0 | 0 |
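The bootstrap idea in the abstract above can be sketched generically: resample
the set of diffraction patterns with replacement, recompute a
reconstruction-quality statistic each time, and read off percentile intervals.
Here `reconstruct` is a hypothetical stand-in for a full EMC reconstruction,
and the Poisson "patterns" are synthetic.

```python
# Generic bootstrap sketch for reconstruction uncertainty (illustrative).
import numpy as np

rng = np.random.default_rng(42)

def reconstruct(patterns):
    return patterns.mean()  # placeholder for a full 3D reconstruction metric

patterns = rng.poisson(3.0, size=(500, 16)).astype(float)  # toy "patterns"

B = 200
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(patterns), size=len(patterns))  # with replacement
    stats[b] = reconstruct(patterns[idx])

lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"95% bootstrap interval: [{lo:.3f}, {hi:.3f}]")
```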
Learning Hawkes Processes from Short Doubly-Censored Event Sequences | Many real-world applications require robust algorithms to learn point
processes based on a type of incomplete data --- the so-called short
doubly-censored (SDC) event sequences. We study this critical problem of
quantitative asynchronous event sequence analysis under the framework of Hawkes
processes by leveraging the idea of data synthesis. Given SDC event sequences
observed in a variety of time intervals, we propose a sampling-stitching data
synthesis method --- sampling predecessors and successors for each SDC event
sequence from potential candidates and stitching them together to synthesize
long training sequences. The rationale for and the feasibility of our method
are discussed in terms of likelihood-based arguments. Experiments on both
synthetic and real-world data demonstrate that the proposed data synthesis
method indeed improves learning results for both time-invariant and
time-varying Hawkes processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
Unveiling the Role of Dopant Polarity on the Recombination, and Performance of Organic Light-Emitting Diodes | The recombination of charges is an important process in organic photonic
devices because the process influences the device characteristics such as the
driving voltage, efficiency and lifetime. By combining the dipole trap theory
with the drift-diffusion model, we report that the stationary dipole moment
($\mu_0$) of the dopant is a major factor determining the recombination
mechanism in dye-doped organic light-emitting diodes when the trap depth
($\Delta E_t$) is larger than 0.3 eV, where any de-trapping effect becomes
negligible. Dopants with large $\mu_0$ (e.g., homoleptic Ir(III) dyes) induce
large charge trapping on them, resulting in high driving voltage and
trap-assisted-recombination dominated emission. On the other hand, dyes with
small $\mu_0$ (e.g., heteroleptic Ir(III) dyes) show much less trapping on them
no matter what $\Delta E_t$ is, leading to lower driving voltage, higher
efficiencies and Langevin recombination dominated emission characteristics.
This finding will be useful in any organic photonic devices where trapping and
recombination sites play key roles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sliced Wasserstein Distance for Learning Gaussian Mixture Models | Gaussian mixture models (GMM) are powerful parametric tools with many
applications in machine learning and computer vision. Expectation maximization
(EM) is the most popular algorithm for estimating the GMM parameters. However,
EM guarantees only convergence to a stationary point of the log-likelihood
function, which could be arbitrarily worse than the optimal solution. Inspired
by the relationship between the negative log-likelihood function and the
Kullback-Leibler (KL) divergence, we propose an alternative formulation for
estimating the GMM parameters using the sliced Wasserstein distance, which
gives rise to a new algorithm. Specifically, we propose minimizing the
sliced-Wasserstein distance between the mixture model and the data distribution
with respect to the GMM parameters. In contrast to the KL-divergence, the
energy landscape for the sliced-Wasserstein distance is more well-behaved and
therefore more suitable for a stochastic gradient descent scheme to obtain the
optimal GMM parameters. We show that our formulation results in parameter
estimates that are more robust to random initializations and demonstrate that
it can estimate high-dimensional data distributions more faithfully than the EM
algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
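A minimal sketch of the sliced Wasserstein distance used above: project both
point clouds onto random unit directions and average the closed-form
one-dimensional W2^2 obtained by sorting (the trick assumes equal sample
sizes). The full method additionally differentiates this quantity with respect
to the GMM parameters, which is omitted here.

```python
# Monte Carlo sliced Wasserstein distance between two point clouds
# (illustrative sketch, not the paper's full GMM-fitting procedure).
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, rng=np.random.default_rng(0)):
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)              # random unit direction
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)            # 1D W2^2 via sorting
    return total / n_proj

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(1000, 2))
Y = rng.normal(0.5, 1.0, size=(1000, 2))
print(sliced_wasserstein(X, Y))
```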
How constant shifts affect the zeros of certain rational harmonic functions | We study the effect of constant shifts on the zeros of rational harmonic
functions $f(z) = r(z) - \overline{z}$. In particular, we characterize how shifting
through the caustics of $f$ changes the number of zeros and their respective
orientations. This also yields insight into the nature of the singular zeros of
$f$. Our results have applications in gravitational lensing theory, where
certain such functions $f$ represent gravitational point-mass lenses, and a
constant shift can be interpreted as the position of the light source of the
lens.
| 0 | 1 | 1 | 0 | 0 | 0 |
Discovery and usage of joint attention in images | Joint visual attention is characterized by two or more individuals looking at
a common target at the same time. The ability to identify joint attention in
scenes, the people involved, and their common target, is fundamental to the
understanding of social interactions, including others' intentions and goals.
In this work we deal with the extraction of joint attention events, and the use
of such events for image descriptions. The work makes two novel contributions.
First, our extraction algorithm is the first which identifies joint visual
attention in single static images. It computes 3D gaze direction, identifies
the gaze target by combining gaze direction with a 3D depth map computed for
the image, and identifies the common gaze target. Second, we use a human study
to demonstrate the sensitivity of humans to joint attention, suggesting that
the detection of such a configuration in an image can be useful for
understanding the image, including the goals of the agents and their joint
activity, and therefore can contribute to image captioning and related tasks.
| 1 | 0 | 0 | 0 | 1 | 0 |
Singular p-Laplacian parabolic system in exterior domains: higher regularity of solutions and related properties of extinction and asymptotic behavior in time | We consider the IBVP in exterior domains for the p-Laplacian parabolic
system. We prove regularity up to the boundary, extinction properties for
$p \in (2n/(n+2), 2n/(n+1))$, and exponential decay for $p = 2n/(n+1)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Size distribution of galaxies in SDSS DR7: weak dependence on halo environment | Using a sample of galaxies selected from the Sloan Digital Sky Survey Data
Release 7 (SDSS DR7) and a catalog of bulge-disk decompositions, we study how
the size distribution of galaxies depends on the intrinsic properties of
galaxies, such as concentration, morphology, specific star formation rate
(sSFR), and bulge fraction, and on the large-scale environments in the context
of central/satellite decomposition, halo environment, the cosmic web (cluster,
filament, sheet, and void), as well as galaxy number density. We find that
there is a strong dependence of the luminosity- or mass-size relation on the
galaxy concentration, morphology, sSFR, and bulge fraction. Compared with
late-type (spiral) galaxies, there is a clear trend of smaller sizes and
steeper slope for early-type (elliptical) galaxies. Similarly, galaxies with
high bulge fraction have smaller sizes and steeper slope than those with low
bulge fraction. Fitting formulae for the average luminosity- and mass-size
relations are provided for galaxies with these different intrinsic properties.
Examining galaxies in terms of their large scale environments, we find that the
mass-size relation has some weak dependence on the halo mass and
central/satellite segregation for galaxies within mass range $9.0\le \log
M_{\ast} \le 10.5$, where satellites or galaxies in more massive halos have
slightly smaller sizes than their counterparts. In contrast, the dependence of
the mass-size relation on the cosmic web and on the local number density is
almost negligible.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards thinner convolutional neural networks through Gradually Global Pruning | Deep network pruning is an effective method to reduce the storage and
computation cost of deep neural networks when applying them to resource-limited
devices. Among many pruning granularities, neuron level pruning will remove
redundant neurons and filters in the model and result in thinner networks. In
this paper, we propose a gradually global pruning scheme for neuron level
pruning. In each pruning step, a small percentage of neurons is selected and
dropped across all layers in the model. We also propose a simple method to
eliminate the biases in evaluating the importance of neurons, to make the
scheme feasible. Compared with layer-wise pruning schemes, our scheme avoids
the difficulty of determining the redundancy in each layer and is more
effective for deep networks. Our scheme automatically finds a thinner
sub-network within the original network under a given performance requirement.
| 1 | 0 | 0 | 0 | 0 | 0 |
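A minimal sketch of one global pruning step in the spirit of the abstract
above: score neurons across all layers with a simple importance proxy and drop
the globally weakest few percent. Both the L1-norm scoring rule and the 2% rate
are illustrative assumptions; the paper's bias-elimination step is not shown.

```python
# One gradually-global pruning step (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 128)), rng.standard_normal((128, 256))]

# Importance of each neuron = L1 norm of its weight row, pooled globally.
scores = np.concatenate([np.abs(W).sum(axis=1) for W in layers])
threshold = np.percentile(scores, 2.0)   # drop the weakest 2% in this step

for W in layers:
    pruned = np.abs(W).sum(axis=1) <= threshold
    W[pruned, :] = 0.0                   # zero out pruned neurons' weights

print("pruned neurons:", int((scores <= threshold).sum()))
```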
Configurable 3D Scene Synthesis and 2D Image Rendering with Per-Pixel Ground Truth using Stochastic Grammars | We propose a systematic learning-based approach to the generation of massive
quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D
images thereof, with associated ground truth information, for the purposes of
training, benchmarking, and diagnosing learning-based computer vision and
robotics algorithms. In particular, we devise a learning-based pipeline of
algorithms capable of automatically generating and rendering a potentially
infinite variety of indoor scenes by using a stochastic grammar, represented as
an attributed Spatial And-Or Graph, in conjunction with state-of-the-art
physics-based rendering. Our pipeline is capable of synthesizing scene layouts
with high diversity, and it is configurable inasmuch as it enables the precise
customization and control of important attributes of the generated scenes. It
renders photorealistic RGB images of the generated scenes while automatically
synthesizing detailed, per-pixel ground truth data, including visible surface
depth and normal, object identity, and material information (detailed to object
parts), as well as environments (e.g., illuminations and camera viewpoints). We
demonstrate the value of our synthesized dataset, by improving performance in
certain machine-learning-based scene understanding tasks--depth and surface
normal prediction, semantic segmentation, reconstruction, etc.--and by
providing benchmarks for and diagnostics of trained models by modifying object
attributes and scene properties in a controllable manner.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network | Vision sensors lie at the heart of computer vision. In many computer vision
applications, such as AR/VR, non-contacting near-field communication (NFC) with
high throughput is required to transfer information to algorithms. In this
work, we propose a novel NFC system which utilizes multiple frequency bands to
achieve high throughput.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Rare Word Representations using Semantic Bridging | We propose a methodology that adapts graph embedding techniques (DeepWalk
(Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016)) as well as
cross-lingual vector space mapping approaches (Least Squares and Canonical
Correlation Analysis) in order to merge the corpus and ontological sources of
lexical knowledge. We also perform comparative analysis of the used algorithms
in order to identify the best combination for the proposed system. We then
apply this to the task of enhancing the coverage of an existing word
embedding's vocabulary with rare and unseen words. We show that our technique
can provide considerable extra coverage (over 99%), leading to consistent
performance gain (around 10% absolute gain is achieved with w2v-gn-500K; cf.
§3.3) on the Rare Word Similarity dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
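A minimal sketch of the cross-space mapping component (the Least Squares
variant) described above: learn a linear map from graph-derived embeddings to
corpus embeddings on a shared vocabulary, then project a rare word through it.
All data and dimensions are synthetic placeholders.

```python
# Least-squares bridging between two embedding spaces (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_shared, d_graph, d_corpus = 500, 64, 100
G_shared = rng.standard_normal((n_shared, d_graph))    # graph embeddings
C_shared = rng.standard_normal((n_shared, d_corpus))   # corpus embeddings

# Solve min_W ||G_shared @ W - C_shared||_F^2
W, *_ = np.linalg.lstsq(G_shared, C_shared, rcond=None)

g_rare = rng.standard_normal(d_graph)   # a rare word seen only in the graph
c_rare = g_rare @ W                     # its induced corpus-space vector
print(c_rare.shape)                     # (100,)
```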
Effect of annealing temperatures on the electrical conductivity and dielectric properties of Ni1.5Fe1.5O4 spinel ferrite prepared by chemical reaction at different pH values | The electrical conductivity and dielectric properties of Ni1.5Fe1.5O4 ferrite
has been controlled by varying the annealing temperature of the chemical routed
samples. The frequency-activated conductivity obeyed Jonscher's power law, and
universal scaling suggested a semiconducting nature. An unusual metal-like
state has been revealed on the measurement temperature scale, in between two
semiconducting states with different activation energies. The metal-like state
is affected by thermal annealing of the material. The analysis of electrical
impedance and modulus spectra has confirmed non-Debye dielectric relaxation
with contributions from grains and grain boundaries. The dielectric relaxation
process is thermally activated in terms of measurement temperature and
annealing temperature of the samples. The hole-hopping process, due to the
presence of Ni3+ ions in the present Ni-rich ferrite, played a significant role
in determining the thermally activated conduction mechanism. This work has
successfully applied a combined variation of annealing temperature and pH value
during the chemical reaction to tune electrical parameters over a wide range;
for example, the dc limit of the conductivity spans $10^{-4}$-$10^{-12}$ S/cm,
with unusually high activation energies of 0.17-1.36 eV.
| 0 | 1 | 0 | 0 | 0 | 0 |
Molecular dynamic simulation of water vapor interaction with blind pore of dead-end and saccate type | One of the varieties of pores often found in natural or artificial building
materials is the so-called blind pore of dead-end or saccate type. A
three-dimensional model of this kind of pore has been developed in this work.
This model has been used to simulate the interaction of water vapor with an
individual pore by molecular dynamics in combination with the diffusion
equation method. Special investigations have been carried out to find
dependencies between the thermostat implementation and the conservation of
thermodynamic and statistical properties of the water vapor-pore system. Two
types of evolution of the water-pore system have been investigated: drying and
wetting of the pore. The diffusion coefficient, diffusion velocity, and other
diffusion parameters have been studied in detail.
| 1 | 1 | 0 | 0 | 0 | 0 |
Learning Program Component Order | Successful programs are written to be maintained. One aspect to this is that
programmers order the components in the code files in a particular way. This is
part of programming style. While the conventions for ordering are sometimes
given as part of a style guideline, such guidelines are often incomplete and
programmers tend to have their own more comprehensive orderings in mind. This
paper defines a model for ordering program components and shows how this model
can be learned from sample code. Such a model is a useful tool for a
programming environment in that it can be used to find the proper location for
inserting new components or for reordering files to better meet the needs of
the programmer. The model is designed so that it can be fine- tuned by the
programmer. The learning framework is evaluated both by looking at code with
known style guidelines and by testing whether it inserts existing components
into a file correctly.
| 1 | 0 | 0 | 0 | 0 | 0 |
Random Euler Complex-Valued Nonlinear Filters | Over the last decade, both the neural network and kernel adaptive filter have
successfully been used for nonlinear signal processing. However, they suffer
from high computational cost caused by their complex/growing network
structures. In this paper, we propose two random Euler filters for the
complex-valued nonlinear filtering problem, i.e., the linear random Euler
complex-valued filter (LRECF) and its widely-linear version (WLRECF), which
possess a simple and fixed network structure. The transient and steady-state
performances are studied in a non-stationary environment. The analytical
minimum mean square error (MSE) and optimum step-size are derived. Finally,
numerical simulations on complex-valued nonlinear system identification and
nonlinear channel equalization are presented to show the effectiveness of the
proposed methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Memory effects on epidemic evolution: The susceptible-infected-recovered epidemic model | Memory has a great impact on the evolution of every process related to human
societies. Among them, the evolution of an epidemic is directly related to the
individuals' experiences. Indeed, any real epidemic process is clearly
sustained by a non-Markovian dynamics: memory effects play an essential role in
the spreading of diseases. Including memory effects in the
susceptible-infected-recovered (SIR) epidemic model seems very appropriate for
such an investigation. Thus, the memory prone SIR model dynamics is
investigated using fractional derivatives. The decay of long-range memory,
taken as a power-law function, is directly controlled by the order of the
fractional derivatives in the corresponding nonlinear fractional differential
evolution equations. Here we assume "fully mixed" approximation and show that
the epidemic threshold is shifted to higher values than those for the
memoryless system, depending on this memory "length" decay exponent. We also
consider the SIR model on structured networks and study the effect of topology
on threshold points in a non-Markovian dynamics. Furthermore, the lack of
access to the precise information about the initial conditions or the past
events plays a very relevant role in the correct estimation or prediction of
the epidemic evolution. Such a "constraint" is analyzed and discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
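For concreteness, a standard explicit Grünwald-Letnikov discretization of a
fractional-order SIR system is sketched below. This is offered as an assumed
illustration of SIR dynamics with power-law memory via fractional derivatives;
the paper's precise operator, scheme, and parameter values may differ.

```python
# Explicit Grünwald-Letnikov sketch of a fractional SIR model:
# D^alpha S = -beta*S*I, D^alpha I = beta*S*I - gamma*I, D^alpha R = gamma*I.
# Parameters are arbitrary; alpha < 1 introduces power-law memory.
import numpy as np

alpha, beta, gamma = 0.9, 0.4, 0.1
h, n_steps = 0.1, 2000

# Grünwald-Letnikov binomial weights c_j = (-1)^j * binom(alpha, j)
c = np.empty(n_steps + 1)
c[0] = 1.0
for j in range(1, n_steps + 1):
    c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)

def f(y):
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

y = np.empty((n_steps + 1, 3))
y[0] = [0.99, 0.01, 0.0]
for n in range(1, n_steps + 1):
    memory = (c[1:n + 1, None] * y[n - 1::-1]).sum(axis=0)  # history term
    y[n] = h ** alpha * f(y[n - 1]) - memory

print(y[-1])  # final (S, I, R); alpha = 1 recovers the classical Euler step
```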
On the equivalence of Eulerian and Lagrangian variables for the two-component Camassa-Holm system | The Camassa-Holm equation and its two-component Camassa-Holm system
generalization both experience wave breaking in finite time. To analyze this,
and to obtain solutions past wave breaking, it is common to reformulate the
original equation given in Eulerian coordinates, into a system of ordinary
differential equations in Lagrangian coordinates. It is of considerable
interest to study the stability of solutions and how this is manifested in
Eulerian and Lagrangian variables. We identify criteria of convergence, such
that convergence in Eulerian coordinates is equivalent to convergence in
Lagrangian coordinates. In addition, we show how one can approximate global
conservative solutions of the scalar Camassa-Holm equation by smooth solutions
of the two-component Camassa-Holm system that do not experience wave breaking.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bulk diffusion in a kinetically constrained lattice gas | In the hydrodynamic regime, the evolution of a stochastic lattice gas with
symmetric hopping rules is described by a diffusion equation with
density-dependent diffusion coefficient encapsulating all microscopic details
of the dynamics. This diffusion coefficient is, in principle, determined by a
Green-Kubo formula. In practice, even when the equilibrium properties of a
lattice gas are analytically known, the diffusion coefficient cannot be
computed except when a lattice gas additionally satisfies the gradient
condition. We develop a procedure to systematically obtain analytical
approximations for the diffusion coefficient for non-gradient lattice gases
with known equilibrium. The method relies on a variational formula found by
Varadhan and Spohn which is a version of the Green-Kubo formula particularly
suitable for diffusive lattice gases. Restricting the variational formula to
finite-dimensional sub-spaces allows one to perform the minimization and gives
upper bounds for the diffusion coefficient. We apply this approach to a
kinetically constrained non-gradient lattice gas, viz. to the Kob-Andersen
model on the square lattice.
| 0 | 1 | 0 | 0 | 0 | 0 |
Censored pairwise likelihood-based tests for mixing coefficient of spatial max-mixture models | Max-mixture processes are defined as $Z = \max(aX, (1-a)Y)$ with $X$ an
asymptotically dependent (AD) process, $Y$ an asymptotically independent (AI)
process, and $a \in [0, 1]$. Thus the mixing coefficient $a$ may reveal the
strength of the AD part present in the max-mixture process. In this paper we
focus on two tests based on censored pairwise likelihood estimates. We compare
their performance through an extensive simulation study. Monte Carlo simulation
serves as a fundamental tool for the asymptotic variance calculations. We apply
our tests to
daily precipitations from the East of Australia. Drawbacks and possible
developments are discussed.
| 0 | 0 | 1 | 1 | 0 | 0 |
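A minimal sketch of simulating the max-mixture construction
$Z = \max(aX, (1-a)Y)$ at a single site, with unit-Fréchet margins assumed for
both components. The spatial dependence structures that make $X$ asymptotically
dependent and $Y$ asymptotically independent are omitted for brevity.

```python
# Single-site max-mixture draws with unit-Fréchet margins (illustrative).
import numpy as np

rng = np.random.default_rng(0)
a, n = 0.3, 10000

U, V = rng.uniform(size=n), rng.uniform(size=n)
X = -1.0 / np.log(U)          # unit-Fréchet draws (AD component, marginally)
Y = -1.0 / np.log(V)          # unit-Fréchet draws (AI component, marginally)

Z = np.maximum(a * X, (1.0 - a) * Y)
print(np.quantile(Z, [0.5, 0.99]))    # median and upper tail of the mixture
```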
From rate distortion theory to metric mean dimension: variational principle | The purpose of this paper is to point out a new connection between
information theory and dynamical systems. In the information theory side, we
consider rate distortion theory, which studies lossy data compression of
stochastic processes under distortion constraints. In the dynamical systems
side, we consider mean dimension theory, which studies how many parameters per
second we need to describe a dynamical system. The main results are new
variational principles connecting rate distortion function to metric mean
dimension.
| 1 | 0 | 1 | 0 | 0 | 0 |
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks | Quantized Neural Networks (QNNs), which use low bitwidth numbers for
representing parameters and performing computations, have been proposed to
reduce the computation complexity, storage size and memory usage. In QNNs,
parameters and activations are uniformly quantized, such that the
multiplications and additions can be accelerated by bitwise operations.
However, distributions of parameters in Neural Networks are often imbalanced,
such that the uniform quantization determined from extremal values may
underutilize the available bitwidth. In this paper, we propose a novel
quantization
method that can ensure the balance of distributions of quantized values. Our
method first recursively partitions the parameters by percentiles into balanced
bins, and then applies uniform quantization. We also introduce computationally
cheaper approximations of percentiles to reduce the computation overhead
introduced. Overall, our method improves the prediction accuracies of QNNs
without introducing extra computation during inference, has negligible impact
on training speed, and is applicable to both Convolutional Neural Networks and
Recurrent Neural Networks. Experiments on standard datasets including ImageNet
and Penn Treebank confirm the effectiveness of our method. On ImageNet, the
top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is
superior to the state of the art for QNNs.
| 1 | 0 | 0 | 0 | 0 | 0 |
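A minimal sketch of percentile-balanced quantization as described above: place
bin edges at equal-probability quantiles so each quantized value is used about
equally often, then map bins to evenly spaced levels. The recursive median
partitioning in the paper yields the same balanced bins when the number of bins
is a power of two; the synthetic weights below are an assumption.

```python
# Percentile-balanced k-bit quantization (illustrative sketch).
import numpy as np

def balanced_quantize(w, bits=2):
    k = 2 ** bits
    edges = np.percentile(w, np.linspace(0, 100, k + 1)[1:-1])  # k-1 inner edges
    idx = np.digitize(w, edges)                 # balanced bin index in [0, k-1]
    levels = np.linspace(w.min(), w.max(), k)   # evenly spaced codebook
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.standard_normal(100000) ** 3            # deliberately imbalanced weights
q = balanced_quantize(w, bits=2)
print(np.unique(q, return_counts=True)[1])      # roughly equal counts per level
```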
The Causal Frame Problem: An Algorithmic Perspective | The Frame Problem (FP) is a puzzle in philosophy of mind and epistemology,
articulated by the Stanford Encyclopedia of Philosophy as follows: "How do we
account for our apparent ability to make decisions on the basis only of what is
relevant to an ongoing situation without having explicitly to consider all that
is not relevant?" In this work, we focus on the causal variant of the FP, the
Causal Frame Problem (CFP). Assuming that a reasoner's mental causal model can
be (implicitly) represented by a causal Bayes net, we first introduce a notion
called Potential Level (PL). PL, in essence, encodes the relative position of a
node with respect to its neighbors in a causal Bayes net. Drawing on the
psychological literature on causal judgment, we substantiate the claim that PL
may bear on how time is encoded in the mind. Using PL, we propose an inference
framework, called the PL-based Inference Framework (PLIF), which permits a
boundedly-rational approach to the CFP to be formally articulated at Marr's
algorithmic level of analysis. We show that our proposed framework, PLIF, is
consistent with a wide range of findings in causal judgment literature, and
that PL and PLIF make a number of predictions, some of which are already
supported by existing findings.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Visual Representation of Wittgenstein's Tractatus Logico-Philosophicus | In this paper we present a data visualization method together with its
potential usefulness in digital humanities and philosophy of language. We
compile a multilingual parallel corpus from different versions of
Wittgenstein's Tractatus Logico-Philosophicus, including the original in German
and translations into English, Spanish, French, and Russian. Using this corpus,
we compute a similarity measure between propositions and render a visual
network of relations for different languages.
| 1 | 0 | 0 | 0 | 0 | 0 |
Primordial perturbations generated by Higgs field and $R^2$ operator | If the very early Universe is dominated by the non-minimally coupled Higgs
field and Starobinsky's curvature-squared term together, the potential diagram
would mimic the landscape of a valley, serving as a cosmological attractor. The
inflationary dynamics along this valley is studied, model parameters are
constrained against observational data, and the isocurvature perturbation is
evaluated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Schubert polynomials, theta and eta polynomials, and Weyl group invariants | We examine the relationship between the (double) Schubert polynomials of
Billey-Haiman and Ikeda-Mihalcea-Naruse and the (double) theta and eta
polynomials of Buch-Kresch-Tamvakis and Wilson from the perspective of Weyl
group invariants. We obtain generators for the kernel of the natural map from
the corresponding ring of Schubert polynomials to the (equivariant) cohomology
ring of symplectic and orthogonal flag manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Massive Fields as Systematics for Single Field Inflation | During inflation, massive fields can contribute to the power spectrum of
curvature perturbation via a dimension-5 operator. This contribution can be
considered as a bias for the program of using $n_s$ and $r$ to select inflation
models. Even if the dimension-5 operator is suppressed by $\Lambda = M_p$,
there is still a significant shift on the $n_s$-$r$ diagram if the massive
fields have $m\sim H$. On the other hand, if the heavy degree of freedom
appears only at the same energy scale as the suppression scale of the
dimension-5 operator, then a significant shift on the $n_s$-$r$ diagram takes
place at $m=\Lambda \sim 70H$, which is around the inflationary
time-translation symmetry breaking
scale. Hence, the systematics from massive fields pose a greater challenge for
future high precision experiments for inflationary model selection. This result
can be thought of as the impact of UV sensitivity to inflationary observables.
| 0 | 1 | 0 | 0 | 0 | 0 |
The second boundary value problem of the prescribed affine mean curvature equation and related linearized Monge-Ampère equation | These lecture notes are concerned with the solvability of the second boundary
value problem of the prescribed affine mean curvature equation and related
regularity theory of the Monge-Ampère and linearized Monge-Ampère
equations. The prescribed affine mean curvature equation is a fully nonlinear,
fourth order, geometric partial differential equation of the following form
$$\sum_{i, j=1}^n U^{ij}\frac{\partial^2}{\partial
{x_i}\partial{x_j}}\left[(\det D^2 u)^{-\frac{n+1}{n+2}}\right]=f$$ where
$(U^{ij})$ is the cofactor matrix of the Hessian matrix $D^2 u$ of a locally
uniformly convex function $u$. Its variant is related to the problem of finding
Kähler metrics of constant scalar curvature in complex geometry. We first
introduce the background of the prescribed affine mean curvature equation which
can be viewed as a coupled system of Monge-Ampère and linearized
Monge-Ampère equations. Then we state key open problems and present the
solution of the second boundary value problem that prescribes the boundary
values of the solution $u$ and its Hessian determinant $\det D^2 u$. Its proof
uses important tools from the boundary regularity theory of the Monge-Ampère
and linearized Monge-Ampère equations that we will present in the lecture
notes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Additive Combinatorics: A Menu of Research Problems | This text contains over three hundred specific open questions on various
topics in additive combinatorics, each placed in context by reviewing all
relevant results. While the primary purpose is to provide an ample supply of
problems for student research, it is hopefully also useful for a wider
audience. It is the author's intention to keep the material current, thus all
feedback and updates are greatly appreciated.
| 0 | 0 | 1 | 0 | 0 | 0 |
NMR evidence for static local nematicity and its cooperative interplay with low-energy magnetic fluctuations in FeSe under pressure | We present $^{77}$Se-NMR measurements on single-crystalline FeSe under
pressures up to 2 GPa. Based on the observation of the splitting and broadening
of the NMR spectrum due to structural twin domains, we discovered that static,
local nematic ordering exists well above the bulk nematic ordering temperature,
$T_{\rm s}$. The static, local nematic order and the low-energy stripe-type
antiferromagnetic spin fluctuations, as revealed by NMR spin-lattice relaxation
rate measurements, are both insensitive to pressure application. These NMR
results provide clear evidence for the microscopic cooperation between
magnetism and local nematicity in FeSe.
| 0 | 1 | 0 | 0 | 0 | 0 |
LitStoryTeller: An Interactive System for Visual Exploration of Scientific Papers Leveraging Named entities and Comparative Sentences | The present study proposes LitStoryTeller, an interactive system for visually
exploring the semantic structure of a scientific article. We demonstrate how
LitStoryTeller could be used to answer some of the most fundamental research
questions, such as how a new method was built on top of existing methods, and
on what theoretical proofs and experimental evidence it rests. More
importantly, LitStoryTeller can assist users in understanding the full and
interesting story of a scientific paper, with a concise outline and important
details. The proposed system borrows a metaphor from screenplays, and
visualizes the storyline of a
scientific paper by arranging its characters (scientific concepts or
terminologies) and scenes (paragraphs/sentences) into a progressive and
interactive storyline. Such storylines help to preserve the semantic structure
and logical thinking process of a scientific paper. Semantic structures, such
as scientific concepts and comparative sentences, are extracted automatically
from a scientific paper using existing named entity recognition APIs and
supervised classifiers. Two supplementary views, a ranked entity frequency view
and an entity co-occurrence network view, are provided to help users identify
the "main plot" of such scientific storylines. When a collection of documents
is ready,
LitStoryTeller also provides a temporal entity evolution view and entity
community view for collection digestion.
| 1 | 0 | 0 | 0 | 0 | 0 |
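A minimal sketch of the entity-extraction step, using spaCy as one concrete
example of the "existing named entity recognition APIs" mentioned above (the
abstract does not name a specific library, so this choice is an assumption).
The sample sentence is invented.

```python
# Candidate "characters" via off-the-shelf NER (illustrative sketch).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
text = ("Our method builds on word2vec and GloVe, and it outperforms "
        "LSTM baselines on the Penn Treebank benchmark.")

doc = nlp(text)
characters = Counter(ent.text for ent in doc.ents)  # entity mention counts
print(characters.most_common())
```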
Acoustic Metacages for Omnidirectional Sound Shielding | Conventional sound shielding structures typically prevent fluid transport
between the exterior and interior. A design of a two-dimensional acoustic
metacage with subwavelength thickness which can shield acoustic waves from all
directions while allowing steady fluid flow is presented in this paper. The
structure is designed based on acoustic gradient-index metasurfaces composed of
open channels and shunted Helmholtz resonators. The strong parallel momentum on
the metacage surface rejects in-plane sound at an arbitrary angle of incidence
which leads to low sound transmission through the metacage. The performance of
the proposed metacage is verified by numerical simulations and measurements on
a three-dimensional printed prototype. The acoustic metacage has potential
applications in sound insulation where steady fluid flow is necessary or
advantageous.
| 0 | 1 | 0 | 0 | 0 | 0 |
Concave losses for robust dictionary learning | Traditional dictionary learning methods are based on a quadratic convex loss
function and thus are sensitive to outliers. In this paper, we propose a
generic framework for robust dictionary learning based on concave losses. We
provide results on composition of concave functions, notably regarding
super-gradient computations, that are key for developing generic dictionary
learning algorithms applicable to smooth and non-smooth losses. In order to
improve identification of outliers, we introduce an initialization heuristic
based on undercomplete dictionary learning. Experimental results using
synthetic and real data demonstrate that our method is able to better detect
outliers, is capable of generating better dictionaries, outperforming
state-of-the-art methods such as K-SVD and LC-KSVD.
| 1 | 0 | 0 | 1 | 0 | 0 |
Target-Quality Image Compression with Recurrent, Convolutional Neural Networks | We introduce a stop-code tolerant (SCT) approach to training recurrent
convolutional neural networks for lossy image compression. Our methods
introduce a multi-pass training method to combine the training goals of
high-quality reconstructions in areas around stop-code masking as well as in
highly-detailed areas. These methods lead to lower true bitrates for a given
recursion count, both pre- and post-entropy coding, even using unstructured
LZ77 code compression. The pre-LZ77 gains are achieved by trimming stop codes.
The post-LZ77 gains are due to the highly unequal distributions of 0/1 codes
from the SCT architectures. With these code compressions, the SCT architecture
maintains or exceeds the image quality at all compression rates compared to
JPEG and to RNN auto-encoders across the Kodak dataset. In addition, the SCT
coding results in lower variance in image quality across the extent of the
image, a characteristic that has been shown to be important in human ratings of
image quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
Embedding simply connected 2-complexes in 3-space -- IV. Dual matroids | We introduce dual matroids of 2-dimensional simplicial complexes. Under
certain necessary conditions, dual matroids are used to characterise
embeddability in 3-space in a way analogous to Whitney's planarity criterion.
We further use dual matroids to extend a 3-dimensional analogue of
Kuratowski's theorem to the class of 2-dimensional simplicial complexes
obtained from simply connected ones by identifying vertices or edges.
| 0 | 0 | 1 | 0 | 0 | 0 |