title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Maximal fluctuations of confined actomyosin gels: dynamics of the cell nucleus | We investigate the effect of stress fluctuations on the stochastic dynamics
of an inclusion embedded in a viscous gel. We show that, in non-equilibrium
systems, stress fluctuations give rise to an effective attraction towards the
boundaries of the confining domain, which is reminiscent of an active Casimir
effect. We apply this generic result to the dynamics of deformations of the
cell nucleus and we demonstrate the appearance of a fluctuation maximum at a
critical level of activity, in agreement with recent experiments [E. Makhija,
D. S. Jokhun, and G. V. Shivashankar, Proc. Natl. Acad. Sci. U.S.A. 113, E32
(2016)].
| 0 | 1 | 0 | 0 | 0 | 0 |
Age-at-harvest models as monitoring and harvest management tools for Wisconsin carnivores | Quantifying and estimating wildlife population sizes is a foundation of
wildlife management. However, many carnivore species are cryptic, leading to
innate difficulties in estimating their populations. We evaluated the potential
for using more rigorous statistical models to estimate the populations of black
bears (Ursus americanus) in Wisconsin, and their applicability to other
furbearers such as bobcats (Lynx rufus). To estimate black bear populations, we
developed an age-at-harvest (AAH) state-space model in a Bayesian framework based on Norton
(2015) that can account for variation in harvest and population demographics
over time. Our state-space model created an accurate estimate of the black bear
population in Wisconsin based on age-at-harvest data and improves on previous
models by requiring little demographic data and no auxiliary data, and by not
being sensitive to initial population size. The increased accuracy of the AAH
state-space models should allow for better management by being able to set
accurate quotas to ensure a sustainable harvest for the black bear population
in each zone. We also evaluated the demography and annual harvest data of
bobcats in Wisconsin to determine trends in harvest, method, and hunter
participation in Wisconsin. Each trait of harvested bobcats that we tested
(mass, male:female sex ratio, and age) changed over time; because these traits
were interrelated, we inferred that harvest selection for larger size
biased harvests in favor of a) larger, b) older, and c) male bobcats by hound
hunters. We also found an increase in the proportion of bobcats that were
harvested by hound hunting compared to trapping, and that hound hunters were
more likely to create taxidermy mounts than trappers. Finally, we found that
the number of available bobcat tags has decreased and hunter success has
increased over time, trends that correlate with substantial increases in both
the hunter population and hunter interest.
| 0 | 0 | 0 | 0 | 1 | 0 |
Information Planning for Text Data | Information planning enables faster learning with fewer training examples. It
is particularly applicable when training examples are costly to obtain. This
work examines the advantages of information planning for text data by focusing
on three supervised models: Naive Bayes, supervised LDA and deep neural
networks. We show that planning based on entropy and mutual information
outperforms a random selection baseline and therefore accelerates learning.
| 0 | 0 | 0 | 1 | 0 | 0 |
Development of Si-CMOS hybrid detectors towards electron tracking based Compton imaging in semiconductor detectors | Electron tracking based Compton imaging is a key technique to improve the
sensitivity of Compton cameras by measuring the initial direction of recoiled
electrons. To realize this technique in semiconductor Compton cameras, we
propose a new detector concept, Si-CMOS hybrid detector. It is a Si detector
bump-bonded to a CMOS readout integrated circuit to obtain electron trajectory
images. To acquire the energy and the event timing, signals from the N-side are
also read out in this concept. By using an ASIC for the N-side readout, a
timing resolution of a few μs is achieved. In this paper, we present the results
of two prototypes with 20 μm pitch pixels. The images of the recoiled electron
trajectories are obtained with them successfully. The energy resolutions (FWHM)
are 4.1 keV (CMOS) and 1.4 keV (N-side) at 59.5 keV. In addition, we confirmed
that the initial direction of the electron is determined using the
reconstruction algorithm based on the graph theory approach. These results show
that Si-CMOS hybrid detectors can be used for electron tracking based Compton
imaging.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sharp constant of an anisotropic Gagliardo-Nirenberg-type inequality and applications | In this paper we establish the best constant of an anisotropic
Gagliardo-Nirenberg-type inequality related to the
Benjamin-Ono-Zakharov-Kuznetsov equation. As an application of our results, we
prove a uniform bound on solutions of such an equation in the energy space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Treatment Effect Quantification for Time-to-event Endpoints -- Estimands, Analysis Strategies, and beyond | A draft addendum to ICH E9 has been released for public consultation in
August 2017. The addendum focuses on two topics particularly relevant for
randomized confirmatory clinical trials: estimands and sensitivity analyses.
The need to amend ICH E9 grew out of the realization of a lack of alignment
between the objectives of a clinical trial stated in the protocol and the
accompanying quantification of the "treatment effect" reported in a regulatory
submission. We embed time-to-event endpoints in the estimand framework, and
discuss how the four estimand attributes described in the addendum apply to
time-to-event endpoints. We point out that if the proportional hazards
assumption is not met, the estimand targeted by the most prevalent methods used
to analyze time-to-event endpoints, logrank test and Cox regression, depends on
the censoring distribution. We discuss for a large randomized clinical trial
how the analyses for the primary and secondary endpoints as well as the
sensitivity analyses actually performed in the trial can be seen in the context
of the addendum. To the best of our knowledge, this is the first attempt to do
so for a trial with a time-to-event endpoint. Questions that remain open with
the addendum for time-to-event endpoints and beyond are formulated, and
recommendations for planning of future trials are given. We hope that this will
provide a contribution to developing a common framework based on the final
version of the addendum that can be applied to design, protocols, statistical
analysis plans, and clinical study reports in the future.
| 0 | 0 | 0 | 1 | 0 | 0 |
A character of Siegel modular group of level 2 from theta constants | Given a characteristic, we define a character of the Siegel modular group of
level 2 and compute its values. By using our theorems, some key theorems of
Igusa [1] can be recovered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Antiferromagnetic Chern insulators in non-centrosymmetric systems | We investigate a new class of topological antiferromagnetic (AF) Chern
insulators driven by electronic interactions in two-dimensional systems without
inversion symmetry. Despite the absence of a net magnetization, AF Chern
insulators (AFCI) possess a nonzero Chern number $C$ and exhibit the quantum
anomalous Hall effect (QAHE). Their existence is guaranteed by the bifurcation
of the boundary line of Weyl points between a quantum spin Hall insulator and a
topologically trivial phase with the emergence of AF long-range order. As a
concrete example, we study the phase structure of the honeycomb lattice
Kane-Mele model as a function of the inversion-breaking ionic potential and the
Hubbard interaction. We find an easy $z$-axis $C=1$ AFCI phase and a spin-flop
transition to a topologically trivial $xy$-plane collinear antiferromagnet. We
propose experimental realizations of the AFCI and QAHE in correlated electron
materials and cold atom systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
The concentration-mass relation of clusters of galaxies from the OmegaWINGS survey | The relation between a cosmological halo concentration and its mass (cMr) is
a powerful tool to constrain cosmological models of halo formation and
evolution. On the scale of galaxy clusters the cMr has so far been determined
mostly with X-ray and gravitational lensing data. The use of independent
techniques is helpful in assessing possible systematics. Here we provide one of
the few determinations of the cMr by the dynamical analysis of the
projected-phase-space distribution of cluster members. Based on the WINGS and
OmegaWINGS data sets, we used the Jeans analysis with the MAMPOSSt technique to
determine masses and concentrations for 49 nearby clusters, each of which has
~60 spectroscopic members or more within the virial region, after removal of
substructures. Our cMr is in statistical agreement with theoretical predictions
based on LambdaCDM cosmological simulations. Our cMr is different from most
previous observational determinations because of its flatter slope and lower
normalization. It is, however, in agreement with two recent cMr determinations obtained using
the lensing technique on the CLASH and LoCuSS cluster data sets. In the future
we will extend our analysis to galaxy systems of lower mass and at higher
redshifts.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effects of the structural distortion on the electronic band structure of {\boldmath $\rm Na Os O_3$} studied within density functional theory and a three-orbital model | Effects of the structural distortion associated with the $\rm OsO_6$
octahedral rotation and tilting on the electronic band structure and magnetic
anisotropy energy for the $5d^3$ compound NaOsO$_3$ are investigated using the
density functional theory (DFT) and within a three-orbital model. Comparison of
the essential features of the DFT band structures with the three-orbital model
for both the undistorted and distorted structures provides insight into the
orbital and directional asymmetry in the electron hopping terms resulting from
the structural distortion. The orbital mixing terms obtained in the transformed
hopping Hamiltonian resulting from the octahedral rotations are shown to
account for the fine features in the DFT band structure. Staggered
magnetization and the magnetic character of states near the Fermi energy
indicate weak coupling behavior.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Joint Distribution Of $\mathrm{Sel}_ϕ(E/\mathbb{Q})$ and $\mathrm{Sel}_{\hatϕ}(E^\prime/\mathbb{Q})$ in Quadratic Twist Families | If $E$ is an elliptic curve with a point of order two, then work of Klagsbrun
and Lemke Oliver shows that the distribution of
$\dim_{\mathbb{F}_2}\mathrm{Sel}_\phi(E^d/\mathbb{Q}) - \dim_{\mathbb{F}_2}
\mathrm{Sel}_{\hat\phi}(E^{\prime d}/\mathbb{Q})$ within the quadratic twist
family tends to the discrete normal distribution $\mathcal{N}(0,\frac{1}{2}
\log \log X)$ as $X \rightarrow \infty$.
We consider the distribution of $\mathrm{dim}_{\mathbb{F}_2}
\mathrm{Sel}_\phi(E^d/\mathbb{Q})$ within such a quadratic twist family when
$\dim_{\mathbb{F}_2} \mathrm{Sel}_\phi(E^d/\mathbb{Q}) - \dim_{\mathbb{F}_2}
\mathrm{Sel}_{\hat\phi}(E^{\prime d}/\mathbb{Q})$ has a fixed value $u$.
Specifically, we show that for every $r$, the limiting probability that
$\dim_{\mathbb{F}_2} \mathrm{Sel}_\phi(E^d/\mathbb{Q}) = r$ is given by an
explicit constant $\alpha_{r,u}$. The constants $\alpha_{r,u}$ are closely
related to the $u$-probabilities introduced in Cohen and Lenstra's work on the
distribution of class groups, and thus provide a connection between the
distribution of Selmer groups of elliptic curves and random abelian groups.
Our analysis of this problem has two steps. The first step uses algebraic and
combinatorial methods to directly relate the ranks of the Selmer groups in
question to the dimensions of the kernels of random $\mathbb{F}_2$-matrices.
This proves that the density of twists with a given $\phi$-Selmer rank $r$ is
given by $\alpha_{r,u}$ for an unusual notion of density. The second step of
the analysis utilizes techniques from analytic number theory to show that this
result implies the correct asymptotics in terms of the natural notion of
density.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning What Data to Learn | Machine learning is essentially the science of playing with data. An
adaptive data selection strategy, which dynamically chooses different data
at various training stages, can produce a more effective model in a more
efficient way. In this paper, we propose a deep reinforcement learning
framework, which we call \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter}
(\textbf{NDF}), to explore automatic and adaptive data selection in the
training process. In particular, NDF takes advantage of a deep neural network
to adaptively select and filter important data instances from a sequential
stream of training data, such that the future accumulative reward (e.g., the
convergence speed) is maximized. In contrast to previous studies of data
selection, which are mainly based on heuristic strategies, NDF is quite generic
and thus widely applicable to many machine learning tasks. Taking neural
network training with stochastic gradient descent (SGD) as an example,
comprehensive experiments with respect to various neural network modeling
(e.g., multi-layer perceptron networks, convolutional neural networks and
recurrent neural networks) and several applications (e.g., image classification
and text understanding) demonstrate that NDF-powered SGD can achieve accuracy
comparable to the standard SGD process while using less data and fewer iterations.
| 1 | 0 | 0 | 1 | 0 | 0 |
Deformation estimation of an elastic object by partial observation using a neural network | Deformation estimation of an elastic object, such as an internal organ, is
important for computer-assisted navigation in surgery. The aim of this study is to
estimate the deformation of an entire three-dimensional elastic object using
displacement information of very few observation points. A learning approach
with a neural network was introduced to estimate the entire deformation of an
object. We applied our method to two elastic objects: a rectangular
parallelepiped model and a human liver model reconstructed from computed
tomography data. The average estimation error for the human liver model was
0.041 mm when the object was deformed by up to 66.4 mm, using observations at
only around 3% of points. These results indicate that the deformation of an entire elastic
object can be estimated with an acceptable level of error from limited
observations by applying a trained neural network to a new deformation.
| 1 | 0 | 0 | 1 | 0 | 0 |
Comparative analysis of two discretizations of Ricci curvature for complex networks | We have performed an empirical comparison of two distinct notions of discrete
Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and
Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci
curvature were developed based on different properties of the classical smooth
notion, and thus, the two notions shed light on different aspects of network
structure and behavior. Nevertheless, our extensive computational analysis in a
wide range of both model and real-world networks shows that the two
discretizations of Ricci curvature are highly correlated in many networks.
Moreover, we show that if one considers the augmented Forman-Ricci curvature
which also accounts for the two-dimensional simplicial complexes arising in
graphs, the observed correlation between the two discretizations is even
higher, especially in real networks. Besides the potential theoretical
implications of these observations, the close relationship between the two
discretizations has practical implications whereby Forman-Ricci curvature can
be employed in place of Ollivier-Ricci curvature for faster computation in
larger real-world networks whenever coarse analysis suffices.
| 1 | 0 | 1 | 0 | 0 | 0 |
Zero-Shot Learning via Class-Conditioned Deep Generative Models | We present a deep generative model for learning to predict classes not seen
at training time. Unlike most existing methods for this problem, which represent
each class as a point (via a semantic embedding), we represent each seen/unseen
class using a class-specific latent-space distribution, conditioned on class
attributes. We use these latent-space distributions as a prior for a supervised
variational autoencoder (VAE), which also facilitates learning highly
discriminative feature representations for the inputs. The entire framework is
learned end-to-end using only the seen-class training data. The model infers
corresponding attributes of a test image by maximizing the VAE lower bound; the
inferred attributes may be linked to labels not seen when training. We further
extend our model to a (1) semi-supervised/transductive setting by leveraging
unlabeled unseen-class data via an unsupervised learning module, and (2)
few-shot learning where we also have a small number of labeled inputs from the
unseen classes. We compare our model with several state-of-the-art methods
through a comprehensive set of experiments on a variety of benchmark data sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the vanishing viscosity approximation of a nonlinear model for tumor growth | We investigate the dynamics of a nonlinear system modeling tumor growth with
drug application. The tumor is viewed as a mixture consisting of proliferating,
quiescent and dead cells as well as a nutrient in the presence of a drug. The
system is given by a multi-phase flow model: the densities of the different
cells are governed by a set of transport equations, the density of the nutrient
and the density of the drug are governed by rather general diffusion equations,
while the velocity of the tumor is given by Darcy's equation. The domain
occupied by the tumor in this setting is a growing continuum $\Omega$ with
boundary $\partial \Omega$ both of which evolve in time. Global-in-time weak
solutions are obtained using an approach based on the vanishing viscosity of
the Brinkman regularization. Both the solutions and the domain are rather
general, no symmetry assumption is required and the result holds for large
initial data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Noise-gating to clean astrophysical image data | I present a family of algorithms to reduce noise in astrophysical images
and image sequences, preserving more information from the original data than is
retained by conventional techniques. The family uses locally adaptive filters
("noise gates") in the Fourier domain, to separate coherent image structure
from background noise based on the statistics of local neighborhoods in the
image. Processing of solar data limited by simple shot noise or by additive
noise reveals image structure not easily visible in the originals, preserves
photometry of observable features, and reduces shot noise by a factor of 10 or
more with little to no apparent loss of resolution, revealing faint features
that were either not directly discernible or not sufficiently strongly detected
for quantitative analysis. The method works best on image sequences containing
related subjects, for example movies of solar evolution, but is also applicable
to single images provided that there are enough pixels. The adaptive filter
uses the statistical properties of noise and of local neighborhoods in the
data, to discriminate between coherent features and incoherent noise without
reference to the specific shape or evolution of those features. The
technique can potentially be modified in a straightforward way to exploit
additional a priori knowledge about the functional form of the noise.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variational Bayesian dropout: pitfalls and fixes | Dropout, a stochastic regularisation technique for training of neural
networks, has recently been reinterpreted as a specific type of approximate
inference algorithm for Bayesian neural networks. The main contribution of the
reinterpretation is in providing a theoretical framework useful for analysing
and extending the algorithm. We show that the proposed framework suffers from
several issues; from undefined or pathological behaviour of the true posterior
related to use of improper priors, to an ill-defined variational objective due
to singularity of the approximating distribution relative to the true
posterior. Our analysis of the improper log uniform prior used in variational
Gaussian dropout suggests the pathologies are generally irredeemable, and that
the algorithm still works only because the variational formulation annuls some
of the pathologies. To address the singularity issue, we proffer Quasi-KL (QKL)
divergence, a new approximate inference objective for approximation of
high-dimensional distributions. We show that motivations for variational
Bernoulli dropout based on discretisation and noise have QKL as a limit.
Properties of QKL are studied both theoretically and on a simple practical
example which shows that the QKL-optimal approximation of a full rank Gaussian
with a degenerate one naturally leads to the Principal Component Analysis
solution.
| 0 | 0 | 0 | 1 | 0 | 0 |
Deep Belief Networks Based Feature Generation and Regression for Predicting Wind Power | Wind energy forecasting helps to manage power production and hence reduces
energy cost. Deep Neural Networks (DNNs) mimic hierarchical learning in the
human brain and thus possess hierarchical, distributed, and multi-task
learning capabilities. Based on these characteristics, we report a Deep
Belief Network (DBN) based forecast engine for wind power prediction because of
its good generalization and unsupervised pre-training attributes. The proposed
DBN-WP forecast engine, which exhibits stochastic feature generation
capabilities and is composed of multiple Restricted Boltzmann Machines,
generates suitable features for wind power prediction using atmospheric
properties as input. DBN-WP, due to its unsupervised pre-training of RBM layers
and generalization capabilities, is able to learn the fluctuations in the
meteorological properties and thus is able to perform effective mapping of the
wind power. In the deep network, a regression layer is appended at the end to
predict short-term wind power. It is experimentally shown that the deep learning
and unsupervised pre-training capabilities of the DBN-based model yield comparable,
and in some cases better, results than hybrid and complex learning techniques
proposed for wind power prediction. The proposed prediction system based on
DBN, achieves mean values of RMSE, MAE and SDE as 0.124, 0.083 and 0.122,
respectively. Statistical analysis of several independent executions of the
proposed DBN-WP wind power prediction system demonstrates the stability of the
system. The proposed DBN-WP architecture is easy to implement and generalizes
well with respect to changes in the location of the wind farm.
| 0 | 0 | 0 | 1 | 0 | 0 |
Control Interpretations for First-Order Optimization Methods | First-order iterative optimization methods play a fundamental role in large
scale optimization and machine learning. This paper presents control
interpretations for such optimization methods. First, we give loop-shaping
interpretations for several existing optimization methods and show that they
are composed of basic control elements such as PID and lag compensators. Next,
we apply the small gain theorem to draw a connection between the convergence
rate analysis of optimization methods and the input-output gain computations of
certain complementary sensitivity functions. These connections suggest that
standard classical control synthesis tools may be brought to bear on the design
of optimization algorithms.
| 1 | 0 | 1 | 0 | 0 | 0 |
Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction | Machine understanding of complex images is a key goal of artificial
intelligence. One challenge underlying this task is that visual scenes contain
multiple inter-related objects, and that global context plays an important role
in interpreting the scene. A natural modeling framework for capturing such
effects is structured prediction, which optimizes over complex labels, while
modeling within-label interactions. However, it is unclear what principles
should guide the design of a structured prediction model that utilizes the
power of deep learning components. Here we propose a design principle for such
architectures that follows from a natural requirement of permutation
invariance. We prove a necessary and sufficient characterization for
architectures that follow this invariance, and discuss its implication on model
design. Finally, we show that the resulting model achieves new state of the art
results on the Visual Genome scene graph labeling benchmark, outperforming all
recent approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
Identifying Similarities in Epileptic Patients for Drug Resistance Prediction | Currently, approximately 30% of epileptic patients treated with antiepileptic
drugs (AEDs) remain resistant to treatment (known as refractory patients). This
project seeks to understand the underlying similarities in refractory patients
vs. other epileptic patients, identify features contributing to drug resistance
across underlying phenotypes for refractory patients, and develop predictive
models for drug resistance in epileptic patients. In this study, epileptic
patient data were examined to identify discernible similarities or
differences between refractory patients (cases) and non-refractory patients
(controls) in order to map underlying causal mechanisms. For the first part of the
study, unsupervised algorithms such as K-means, Spectral Clustering, and
Gaussian Mixture Models were used to examine patient features projected into a
lower dimensional space. Results from this study showed a high degree of
non-linearity in the underlying feature space. For the second part of this
study, classification algorithms such as Logistic Regression, Gradient Boosted
Decision Trees, and SVMs, were tested on the reduced-dimensionality features,
with an accuracy of 0.83 (+/- 0.3) when tested using 7-fold cross-validation.
Test results indicate that using radial basis function kernel PCA to reduce
the features ingested by a Gradient Boosted Decision Tree ensemble leads to
improved accuracy in mapping the highly non-linear features collected from
epileptic patients to a binary decision.
| 1 | 0 | 0 | 1 | 0 | 0 |
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations | Disentangled representations, where the higher level data generative factors
are reflected in disjoint latent dimensions, offer several benefits such as
ease of deriving invariant representations, transferability to other tasks,
interpretability, etc. We consider the problem of unsupervised learning of
disentangled representations from a large pool of unlabeled observations, and
propose a variational inference based approach to infer disentangled latent
factors. We introduce a regularizer on the expectation of the approximate
posterior over observed data that encourages the disentanglement. We also
propose a new disentanglement metric which is better aligned with the
qualitative disentanglement observed in the decoder's output. We empirically
observe significant improvement over existing methods in terms of both
disentanglement and data likelihood (reconstruction quality).
| 1 | 0 | 0 | 1 | 0 | 0 |
Designing the color of hot-dip galvanized steel sheet through destructive light interference using a Zn-Ti liquid metallic bath | The color of hot-dip galvanized steel sheet was adjusted in a reproducible
way using a liquid Zn-Ti metallic bath, air atmosphere, and controlling the
bath temperature as the only experimental parameter. Coloring was found only
for samples cooled in air and dipped into Ti-containing liquid Zn. For samples
dipped into a 0.15 wt pct Ti-containing Zn bath, the color remained metallic
(gray) below a 792 K (519 C) bath temperature; it was yellow at 814 K, violet
at 847 K, and blue at 873 K. With the increasing bath temperature, the
thickness of the adhered Zn-Ti layer gradually decreased from 52 to 32
micrometers, while the thickness of the outer TiO2 layer gradually increased
from 24 to 69 nm. Due to small Al contamination of the Zn bath, a thin (around
2 nm) alumina-rich layer is found between the outer TiO2 layer and the inner
macroscopic Zn layer. It is proven that the color change was governed by the
formation of thin outer TiO2 layer; different colors appear depending on the
thickness of this layer, mostly due to the destructive interference of visible
light on this transparent nano-layer. A complex model was built to explain the
results using known relationships of chemical thermodynamics, adhesion, heat
flow, kinetics of chemical reactions, diffusion, and optics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Counting Multiplicities in a Hypersurface over a Number Field | We fix a counting function of multiplicities of algebraic points in a
projective hypersurface over a number field, and take the sum over all
algebraic points of bounded height and fixed degree. An upper bound for the sum
with respect to this counting function will be given in terms of the degree of
the hypersurface, the dimension of the singular locus, the upper bounds of
height, and the degree of the field of definition.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis | We consider the task of fine-grained sentiment analysis from the perspective
of multiple instance learning (MIL). Our neural model is trained on document
sentiment labels, and learns to predict the sentiment of text segments, i.e.
sentences or elementary discourse units (EDUs), without segment-level
supervision. We introduce an attention-based polarity scoring method for
identifying positive and negative text snippets and a new dataset which we call
SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating
MIL-style sentiment models like ours. Experimental results demonstrate superior
performance against multiple baselines, whereas a judgement elicitation study
shows that EDU-level opinion extraction produces more informative summaries
than sentence-based alternatives.
| 1 | 0 | 0 | 0 | 0 | 0 |
Eva-CiM: A System-Level Energy Evaluation Framework for Computing-in-Memory Architectures | Computing-in-Memory (CiM) architectures aim to reduce costly data transfers
by performing arithmetic and logic operations in memory and hence relieve the
pressure due to the memory wall. However, determining whether a given workload
can really benefit from CiM, and which memory hierarchy and device technology
a CiM architecture should adopt, requires an in-depth study that is not
only time-consuming but also demands significant expertise in architectures and
compilers. This paper presents an energy evaluation framework, Eva-CiM, for
systems based on CiM architectures. Eva-CiM encompasses a multi-level (from
device to architecture) comprehensive tool chain by leveraging existing
modeling and simulation tools such as GEM5, McPAT [2] and DESTINY [3]. To
support high-confidence prediction, rapid design space exploration and ease of
use, Eva-CiM introduces several novel modeling/analysis approaches including
models for capturing memory access and dependency-aware ISA traces, and for
quantifying interactions between the host CPU and CiM modules. Eva-CiM can
readily produce energy estimates of the entire system for a given program, a
processor architecture, and the CiM array and technology specifications.
Eva-CiM is validated by comparison with DESTINY [3] and [4], and enables
findings including practical contributions from CiM-supported accesses,
CiM-sensitive benchmarking as well as the pros and cons of increased memory
size for CiM. Eva-CiM also enables exploration over different configurations
and device technologies, showing 1.3-6.0X energy improvement for SRAM and
2.0-7.9X for FeFET-RAM, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Converging Shock Flows for a Mie-Grüneisen Equation of State | Previous work has shown that the one-dimensional (1D) inviscid compressible
flow (Euler) equations admit a wide variety of scale-invariant solutions
(including the famous Noh, Sedov, and Guderley shock solutions) when the
included equation of state (EOS) closure model assumes a certain
scale-invariant form. However, this scale-invariant EOS class does not include
even simple models used for shock compression of crystalline solids, including
many broadly applicable representations of Mie-Grüneisen EOS. Intuitively,
this incompatibility naturally arises from the presence of multiple dimensional
scales in the Mie-Grüneisen EOS, which are otherwise absent from
scale-invariant models that feature only dimensionless parameters (such as the
adiabatic index in the ideal gas EOS). The current work extends previous
efforts intended to rectify this inconsistency, by using a scale-invariant EOS
model to approximate a Mie-Grüneisen EOS form. To this end, the adiabatic
bulk modulus for the Mie-Grüneisen EOS is constructed, and its key features
are used to motivate the selection of a scale-invariant approximation form. The
remaining surrogate model parameters are selected through enforcement of the
Rankine-Hugoniot jump conditions for an infinitely strong shock in a
Mie-Grüneisen material. Finally, the approximate EOS is used in conjunction
with the 1D inviscid Euler equations to calculate a semi-analytical,
Guderley-like imploding shock solution in a metal sphere, and to determine if
and when the solution may be valid for the underlying Mie-Grüneisen EOS.
| 0 | 1 | 0 | 0 | 0 | 0 |
Robust estimation of tree structured Gaussian Graphical Model | Consider jointly Gaussian random variables whose conditional independence
structure is specified by a graphical model. If we observe realizations of the
variables, we can compute the covariance matrix, and it is well known that the
support of the inverse covariance matrix corresponds to the edges of the
graphical model. Instead, suppose we only have noisy observations. If the noise
at each node is independent, we can compute the sum of the covariance matrix
and an unknown diagonal. The inverse of this sum is (in general) dense. We ask:
can the original independence structure be recovered? We address this question
for tree structured graphical models. We prove that this problem is
unidentifiable, but show that this unidentifiability is limited to a small
class of candidate trees. We further present additional constraints under which
the problem is identifiable. Finally, we provide an $O(n^3)$ algorithm to find
this equivalence class of trees.
| 1 | 0 | 0 | 1 | 0 | 0 |
What Propels Celebrity Follower Counts? Language Use or Social Connectivity | Follower count is a factor that quantifies the popularity of celebrities. It
is a reflection of their power, prestige and overall social reach. In this
paper we investigate whether the social connectivity or the language choice is
more correlated to the future follower count of a celebrity. We collect data
about tweets, retweets and mentions of 471 Indian celebrities with verified
Twitter accounts. We build two novel networks to approximate social
connectivity of the celebrities. We study various structural properties of
these two networks and observe their correlations with future follower counts.
In parallel, we analyze the linguistic structure of the tweets (LIWC features,
syntax and sentiment features and style and readability features) and observe
the correlations of each of these with the future follower count of a
celebrity. As a final step, we use these features to classify a celebrity into a
specific bucket of future follower count (HIGH, MID or LOW). We observe that
the network features alone achieve an accuracy of 0.52 while the linguistic
features alone achieve an accuracy of 0.69, grossly outperforming the network
features. The network and linguistic features in conjunction produce an
accuracy of 0.76. We also discuss some final insights that we obtain from
further data analysis: celebrities with larger follower counts post tweets that
(i) contain more words from the friend and family LIWC categories, (ii) contain
more positive sentiment-laden words, (iii) have better language constructs, and
(iv) are more readable.
| 1 | 0 | 0 | 0 | 0 | 0 |
Monochromatic metrics are generalized Berwald | We show that monochromatic Finsler metrics, i.e., Finsler metrics such that
any two tangent spaces are isomorphic as normed spaces, are generalized
Berwald metrics, i.e., there exists an affine connection, possibly with
torsion, that preserves the Finsler function.
| 0 | 0 | 1 | 0 | 0 | 0 |
CTCF Degradation Causes Increased Usage of Upstream Exons in Mouse Embryonic Stem Cells | Transcriptional repressor CTCF is an important regulator of chromatin 3D
structure, facilitating the formation of topologically associating domains
(TADs). However, its direct effects on gene regulation are less well understood.
Here, we utilize previously published ChIP-seq and RNA-seq data to investigate
the effects of CTCF on alternative splicing of genes with CTCF sites. We
compared the amount of RNA-seq signals in exons upstream and downstream of
binding sites following auxin-induced degradation of CTCF in mouse embryonic
stem cells. We found that changes in gene expression following CTCF depletion
were significant, with a general increase in the presence of upstream exons. We
infer that a possible mechanism by which CTCF binding contributes to
alternative splicing is by causing pauses in the transcription mechanism during
which splicing elements are able to concurrently act on upstream exons already
transcribed into RNA.
| 0 | 0 | 0 | 0 | 1 | 0 |
The stability of tightly-packed, evenly-spaced systems of Earth-mass planets orbiting a Sun-like star | Many of the multi-planet systems discovered to date have been notable for
their compactness, with neighbouring planets closer together than any in the
Solar System. Interestingly, planet-hosting stars have a wide range of ages,
suggesting that such compact systems can survive for extended periods of time.
We have used numerical simulations to investigate how quickly systems become
unstable in relation to the spacing between planets, focusing on hypothetical
systems of Earth-mass planets on evenly-spaced orbits (in mutual Hill radii).
In general, the further apart the planets are initially, the longer it takes
for a pair of planets to undergo a close encounter. We recover the results of
previous studies, showing a linear relation between the initial planet spacing,
from 3 to 8 mutual Hill radii, and the logarithm of the stability time.
Investigating thousands of simulations with spacings up to 13 mutual Hill radii
reveals distinct modulations superimposed on this relationship in the vicinity
of first and second-order mean motion resonances of adjacent and next-adjacent
planets. We discuss the impact of this structure and its implications for the
stability of compact multi-planet systems. Applying the outcomes of our
simulations, we show that isolated systems of up to five Earth-mass planets can
fit in the habitable zone of a Sun-like star without close encounters for at
least $10^9$ orbits.
| 0 | 1 | 0 | 0 | 0 | 0 |
GPUQT: An efficient linear-scaling quantum transport code fully implemented on graphics processing units | We present GPUQT, a quantum transport code fully implemented on graphics
processing units. Using this code, one can obtain intrinsic electronic
transport properties of large systems described by a real-space tight-binding
Hamiltonian together with one or more types of disorder. The DC Kubo
conductivity is represented as a time integral of the velocity auto-correlation
function or as a time derivative of the mean square displacement. Linear scaling (with
respect to the total number of orbitals in the system) computation time and
memory usage are achieved by using various numerical techniques, including
sparse matrix-vector multiplication, a random-phase approximation of the trace,
a Chebyshev expansion of the quantum evolution operator, and the kernel
polynomial method for the quantum resolution operator. We describe the inputs and outputs of GPUQT
and give two examples to demonstrate its usage, paying attention to the
interpretations of the results.
| 0 | 1 | 0 | 0 | 0 | 0 |
Full-Duplex Cooperative Cognitive Radio Networks with Wireless Energy Harvesting | This paper proposes and analyzes a new full-duplex (FD) cooperative cognitive
radio network with wireless energy harvesting (EH). We consider that the
secondary receiver is equipped with a FD radio and acts as a FD hybrid access
point (HAP), which aims to collect information from its associated EH secondary
transmitter (ST) and relay the signals. The ST is assumed to be equipped with
an EH unit and a rechargeable battery such that it can harvest and accumulate
energy from radio frequency (RF) signals transmitted by the primary transmitter
(PT) and the HAP. We develop a novel cooperative spectrum sharing (CSS)
protocol for the considered system. In the proposed protocol, thanks to its FD
capability, the HAP can receive the PT's signals and transmit energy-bearing
signals to charge the ST simultaneously, or forward the PT's signals and
receive the ST's signals at the same time. We derive analytical expressions for
the achievable throughput of both primary and secondary links by characterizing
the dynamic charging/discharging behaviors of the ST battery as a finite-state
Markov chain. We present numerical results to validate our theoretical analysis
and demonstrate the merits of the proposed protocol over its non-cooperative
counterpart.
| 1 | 0 | 0 | 0 | 0 | 0 |
Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns | In this study, a method to construct a full-colour volumetric display is
presented using a commercially available inkjet printer. Photoreactive
luminescence materials are minutely and automatically printed as the volume
elements, and volumetric displays are constructed with high resolution using
easy-to-fabricate means that exploit inkjet printing technologies. The results
experimentally demonstrate the first prototype of an inkjet printing-based
volumetric display composed of multiple layers of transparent films that yield
a full-colour three-dimensional (3D) image. Moreover, we propose a design
algorithm for 3D structures that provide multiple different 2D full-colour
patterns when viewed from different directions, and we experimentally demonstrate
prototypes. It is considered that these types of 3D volumetric structures and
their fabrication methods based on widely deployed existing printing
technologies can be utilised as novel information display devices and systems,
including digital signage, media art, entertainment and security.
| 1 | 0 | 0 | 0 | 0 | 0 |
Analysis of luminosity distributions of strong lensing galaxies: subtraction of diffuse lensed signal | Strong gravitational lensing gives access to the total mass distribution of
galaxies. It can unveil a great deal of information about the lenses' dark
matter content when combined with the study of the lenses' light profiles.
However, gravitational lensing galaxies, by definition, appear surrounded by
point-like and diffuse lensed signal that is irrelevant to the lens flux.
Therefore, the observer is most often restricted to studying the innermost
portions of the galaxy, where classical fitting methods show some
instabilities. We aim at subtracting that lensed signal and at characterising
some lenses' light profiles by computing their shape parameters. Our objective is
to evaluate the total integrated flux in an aperture the size of the Einstein
ring in order to obtain a robust estimate of the quantity of ordinary matter in
each system. We are expanding the work we started in a previous paper that
consisted in subtracting point-like lensed images and in independently
measuring each shape parameter. We improve it by designing a subtraction of the
diffuse lensed signal, based only on one simple hypothesis of symmetry. This
extra step improves our study of the shape parameters and we refine it even
more by upgrading our half-light radius measurement. We also calculate the
impact of our specific image processing on the error bars. The diffuse lensed
signal subtraction makes it possible to study a larger portion of relevant
galactic flux, as the radius of the fitting region increases by on average
17\%. We retrieve new half-light radii values that are on average 11\% smaller
than in our previous work, although the uncertainties overlap in most cases.
This shows that not taking the diffuse lensed signal into account may lead to a
significant overestimate of the half-light radius. We are also able to measure
the flux within the Einstein radius and to compute secure error bars to all of
our results.
| 0 | 1 | 0 | 0 | 0 | 0 |
Support Estimation via Regularized and Weighted Chebyshev Approximations | We introduce a new framework for estimating the support size of an unknown
distribution which improves upon known approximation-based techniques. Our main
contributions include describing a rigorous new weighted Chebyshev polynomial
approximation method and introducing regularization terms into the problem
formulation that provably improve the performance of state-of-the-art
approximation-based approaches. In particular, we present both theoretical and
computer simulation results that illustrate the utility and performance
improvements of our method. The theoretical analysis relies on jointly
optimizing the bias and variance components of the risk, and combining new
weighted minmax polynomial approximation techniques with discretized
semi-infinite programming solvers. Such a setting allows for casting the
estimation problem as a linear program (LP) with a small number of variables
and constraints that may be solved as efficiently as the original Chebyshev
approximation-based problem. The described approach also applies to the support
coverage and entropy estimation problems. Our newly developed technique is
tested on synthetic data and used to estimate the number of bacterial species
in the human gut. On synthetic datasets, we observed up to five-fold
improvements in the value of the worst-case risk. For the bioinformatics
application, metagenomic data from the NIH Human Gut and the American Gut
Microbiome was combined and processed to obtain lists of bacterial taxonomies.
These were subsequently used to compute the bacterial species histograms and
estimate the number of bacterial species in the human gut to roughly 2350, with
the species being represented by trillions of cells.
| 1 | 0 | 0 | 1 | 0 | 0 |
Two-photon superbunching of pseudothermal light in a Hanbury Brown-Twiss interferometer | Two-photon superbunching of pseudothermal light is observed with single-mode
continuous-wave laser light in a linear optical system. By adding more
two-photon paths via three rotating ground glasses, g(2)(0) = 7.10 is
experimentally observed. The second-order temporal coherence function of
superbunching pseudothermal light is theoretically and experimentally studied
in detail. It is predicted that the degree of coherence of light can be
increased dramatically by adding more multi-photon paths. For instance, the
degree of the second- and third-order coherence of the superbunching
pseudothermal light with five rotating ground glasses can reach 32 and 7776,
respectively. The results are helpful to understand the physics of
superbunching and to improve the visibility of thermal light ghost imaging.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non Relativistic Limit of Integrable QFT with fermionic excitations | The aim of this paper is to investigate the non-relativistic limit of
integrable quantum field theories with fermionic fields, such as the O(N)
Gross-Neveu model, the supersymmetric Sinh-Gordon and non-linear sigma models.
The non-relativistic limit of these theories is implemented by a double scaling
limit which consists of sending the speed of light c to infinity and rescaling
at the same time the relevant coupling constant of the model in such a way to
have finite energy excitations. For the general purpose of mapping the space of
continuous non-relativistic integrable models, this paper completes and
integrates the analysis done in Ref. [1] on the non-relativistic limit of purely
bosonic theories.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extensions of the Benson-Solomon fusion systems | The Benson-Solomon systems comprise the only known family of simple saturated
fusion systems at the prime two that do not arise as the fusion system of any
finite group. We determine the automorphism groups and the possible almost
simple extensions of these systems and of their centric linking systems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Run-Wise Simulations for Imaging Atmospheric Cherenkov Telescope Arrays | We present a new paradigm for the simulation of arrays of Imaging Atmospheric
Cherenkov Telescopes (IACTs) which overcomes limitations of current approaches.
Up to now, all major IACT experiments rely on the same Monte-Carlo simulation
strategy, using predefined observation and instrument settings. Simulations
with varying parameters are generated to provide better estimates of the
Instrument Response Functions (IRFs) of different observations. However, a
large fraction of the simulation configuration remains fixed, leading to the
complete neglect of all related influences. Additionally, the simulation
scheme relies on interpolations between different array configurations, which
never fully reproduce the actual configuration for a given observation.
Interpolations are usually performed on zenith angles, off-axis angles, array
multiplicity, and the optical response of the instrument. With the advent of
hybrid systems consisting of a large number of IACTs with different sizes,
types, and camera configurations, the complexity of the interpolation and the
size of the phase space becomes increasingly prohibitive. Going beyond the
existing approaches, we introduce a new simulation and analysis concept which
takes into account the actual observation conditions as well as individual
telescope configurations of each observation run of a given data set. These
run-wise simulations (RWS) thus exhibit considerably reduced systematic
uncertainties compared to the existing approach, and are also simpler and more
computationally efficient. The RWS framework has been implemented in
the H.E.S.S. software and tested, and is already being exploited in science
analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-party Poisoning through Generalized $p$-Tampering | In a poisoning attack against a learning algorithm, an adversary tampers with
a fraction of the training data $T$ with the goal of increasing the
classification error of the constructed hypothesis/model over the final test
distribution. In the distributed setting, $T$ might be gathered gradually from
$m$ data providers $P_1,\dots,P_m$ who generate and submit their shares of $T$
in an online way.
In this work, we initiate a formal study of $(k,p)$-poisoning attacks in
which an adversary controls $k\in[m]$ of the parties, and even for each
corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on
behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g.,
$1-p$ fraction of $T'_i$ is still honestly generated). For $k=m$, this model
becomes the traditional notion of poisoning, and for $p=1$ it coincides with
the standard notion of corruption in multi-party computation.
We prove that if there is an initial constant error for the generated
hypothesis $h$, there is always a $(k,p)$-poisoning attacker who can decrease
the confidence of $h$ (to have a small error), or alternatively increase the
error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in
polynomial time given samples from the correct data, and they use no wrong
labels if the original distributions are not noisy.
At a technical level, we prove a general lemma about biasing bounded
functions $f(x_1,\dots,x_n)\in[0,1]$ through an attack model in which each
block $x_i$ might be controlled by an adversary with marginal probability $p$
in an online way. When the probabilities are independent, this coincides with
the model of $p$-tampering attacks, thus we call our model generalized
$p$-tampering. We prove the power of such attacks by incorporating ideas from
the context of coin-flipping attacks into the $p$-tampering model and
generalize the results in both of these areas.
| 0 | 0 | 0 | 1 | 0 | 0 |
Spectral edge behavior for eventually monotone Jacobi and Verblunsky coefficients | We consider Jacobi matrices with eventually increasing sequences of diagonal
and off-diagonal Jacobi parameters. We describe the asymptotic behavior of the
subordinate solution at the top of the essential spectrum, and the asymptotic
behavior of the spectral density at the top of the essential spectrum.
In particular, allowing perturbations of the free case, on both the diagonal and
off-diagonal Jacobi parameters, of the form $- \sum_{j=1}^J c_j n^{-\tau_j} +
o(n^{-\tau_1-1})$ with $0 < \tau_1 < \tau_2 < \dots < \tau_J$ and $c_1>0$, we
find the asymptotic behavior of the $\log$ of the spectral density to order
$O(\log(2-x))$ as $x$ approaches $2$.
Apart from its intrinsic interest, the above results also allow us to
describe the asymptotics of the spectral density for orthogonal polynomials on
the unit circle with real-valued Verblunsky coefficients of the same form.
| 0 | 0 | 1 | 0 | 0 | 0 |
Web Video in Numbers - An Analysis of Web-Video Metadata | Web video is often used as a source of data in various fields of study. While
specialized subsets of web video, mainly earmarked for dedicated purposes, are
often analyzed in detail, there is little information available about the
properties of web video as a whole. In this paper we present insights gained
from the analysis of the metadata associated with more than 120 million videos
harvested from two popular web video platforms, vimeo and YouTube, in 2016 and
compare their properties with the ones found in commonly used video
collections. This comparison has revealed that existing collections do not (or
no longer) properly reflect the properties of web video "in the wild".
| 1 | 0 | 0 | 0 | 0 | 0 |
Bernoulli Correlations and Cut Polytopes | Given $n$ symmetric Bernoulli variables, what can be said about their
correlation matrix viewed as a vector? We show that the set of those vectors
$R(\mathcal{B}_n)$ is a polytope and identify its vertices. Those extreme
points correspond to correlation vectors associated to the discrete uniform
distributions on diagonals of the cube $[0,1]^n$. We also show that the
polytope is affinely isomorphic to a well-known cut polytope ${\rm CUT}(n)$
which is defined as a convex hull of the cut vectors in a complete graph with
vertex set $\{1,\ldots,n\}$. The isomorphism is obtained explicitly as
$R(\mathcal{B}_n)= {\mathbf{1}}-2~{\rm CUT}(n)$. As a corollary of this work,
it is straightforward, using linear programming, to determine whether a
particular correlation matrix is realizable. Furthermore, a sampling method for
multivariate symmetric Bernoullis with given correlation is obtained. In some
cases the method can also be used for general, not exclusively Bernoulli,
marginals.
| 0 | 0 | 1 | 1 | 0 | 0 |
Tensor products of NCDL-C*-algebras and the C*-algebra of the Heisenberg motion groups | We show that the tensor product $A\otimes B$ over $\mathbb{C}$ of two
$C^*$-algebras satisfying the \textit{NCDL} conditions has again the same property.
We use this result to describe the $C^*$-algebra of the Heisenberg motion
groups $G_n = \mathbb{T}^n \ltimes \mathbb{H}_n$ as an algebra of operator fields
defined over the spectrum of $G_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
ViP-CNN: Visual Phrase Guided Convolutional Neural Network | As the intermediate level task connecting image captioning and object
detection, visual relationship detection started to catch researchers'
attention because of its descriptive power and clear structure. It detects the
objects and captures their pair-wise interactions with a
subject-predicate-object triplet, e.g. person-ride-horse. In this paper, each
visual relationship is considered as a phrase with three components. We
formulate the visual relationship detection as three inter-connected
recognition problems and propose a Visual Phrase guided Convolutional Neural
Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a
Phrase-guided Message Passing Structure (PMPS) to establish the connection
among relationship components and help the model consider the three problems
jointly. A corresponding non-maximum suppression method and model training
strategy are also proposed. Experimental results show that our ViP-CNN
outperforms the state-of-the-art method in both speed and accuracy. We further
pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is
found to perform better than the pretraining on the ImageNet for this task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Haantjes Algebras and Diagonalization | We propose the notion of Haantjes algebra, which consists of an assignment of
a family of fields of operators over a differentiable manifold, with vanishing
Haantjes torsion and satisfying suitable compatibility conditions among each
others. Haantjes algebras naturally generalize several known interesting
geometric structures, arising in Riemannian geometry and in the theory of
integrable systems. At the same time, they play a crucial role in the theory of
diagonalization of operators on differentiable manifolds.
Whenever the elements of a Haantjes algebra are semisimple and commute, we
shall prove that there exists a set of local coordinates where all operators
can be diagonalized simultaneously. Moreover, in the non-semisimple case, they
acquire simultaneously a block-diagonal form.
| 0 | 1 | 1 | 0 | 0 | 0 |
Multipair Massive MIMO Relaying Systems with One-Bit ADCs and DACs | This paper considers a multipair amplify-and-forward massive MIMO relaying
system with one-bit ADCs and one-bit DACs at the relay. The channel state
information is estimated via pilot training, and then utilized by the relay to
perform simple maximum-ratio combining/maximum-ratio transmission processing.
Leveraging on the Bussgang decomposition, an exact achievable rate is derived
for the system with correlated quantization noise. Based on this, a closed-form
asymptotic approximation for the achievable rate is presented, thereby enabling
efficient evaluation of the impact of key parameters on the system performance.
Furthermore, power scaling laws are characterized to study the potential energy
efficiency associated with deploying massive one-bit antenna arrays at the
relay. In addition, a power allocation strategy is designed to compensate for
the rate degradation caused by the coarse quantization. Our results suggest
that the quality of the channel estimates depends on the specific orthogonal
pilot sequences that are used, contrary to unquantized systems where any set of
orthogonal pilot sequences gives the same result. Moreover, the sum rate gap
between the double-quantized relay system and an ideal non-quantized system is
a moderate factor of $4/\pi^2$ in the low power regime.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Methods for Exoplanet Science | Exoplanet research is carried out at the limits of the capabilities of
current telescopes and instruments. The studied signals are weak, and often
embedded in complex systematics from instrumental, telluric, and astrophysical
sources. Combining repeated observations of periodic events, simultaneous
observations with multiple telescopes, different observation techniques, and
existing information from theory and prior research can help to disentangle the
systematics from the planetary signals, and offers synergistic advantages over
analysing observations separately. Bayesian inference provides a
self-consistent statistical framework that addresses both the necessity for
complex systematics models, and the need to combine prior information and
heterogeneous observations. This chapter offers a brief introduction to
Bayesian inference in the context of exoplanet research, with a focus on time
series analysis, and finishes with an overview of a set of freely available
programming libraries.
| 0 | 1 | 0 | 0 | 0 | 0 |
Feature importance scores and lossless feature pruning using Banzhaf power indices | Understanding the influence of features in machine learning is crucial to
interpreting models and selecting the best features for classification. In this
work we propose the use of principles from coalitional game theory to reason
about importance of features. In particular, we propose the use of the Banzhaf
power index as a measure of influence of features on the outcome of a
classifier. We show that features having Banzhaf power index of zero can be
losslessly pruned without damage to classifier accuracy. Computing the power
indices does not require having access to data samples. However, if samples are
available, the indices can be empirically estimated. We compute Banzhaf power
indices for a neural network classifier on real-life data, and compare the
results with gradient-based feature saliency, and coefficients of a logistic
regression model with $L_1$ regularization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Robust clustering of languages across Wikipedia growth | Wikipedia is the largest existing knowledge repository, and it grows through
genuine crowdsourcing. While the English Wikipedia is the most
extensive and the most researched one with over five million articles,
comparatively little is known about the behavior and growth of the remaining
283 smaller Wikipedias, the smallest of which, Afar, has only one article. Here
we use a subset of this data, consisting of 14962 different articles, each of
which exists in 26 different languages, from Arabic to Ukrainian. We study the
growth of Wikipedias in these languages over a time span of 15 years. We show
that, while an average article follows a random path from one language to
another, there exist six well-defined clusters of Wikipedias that share common
growth patterns. The make-up of these clusters is remarkably robust against the
method used for their determination, as we verify via four different clustering
methods. Interestingly, the identified Wikipedia clusters have little
correlation with language families and groups. Rather, the growth of Wikipedia
across different languages is governed by different factors, ranging from
similarities in culture to information literacy.
| 1 | 0 | 0 | 1 | 0 | 0 |
Attractive Heaviside-Maxwellian (Vector) Gravity from Special Relativity and Quantum Field Theory | Adopting two independent approaches (a) Lorentz-invariance of physical laws
and (b) local phase invariance of quantum field theory applied to the Dirac
Lagrangian for massive electrically neutral Dirac particles, we rediscovered
the fundamental field equations of Heaviside Gravity (HG) of 1893 and
Maxwellian Gravity (MG), which look different from each other due to a sign
difference in some terms of their respective field equations. However, they are
shown to be two mathematical representations of a single physical theory
of vector gravity, which we here name Heaviside-Maxwellian Gravity (HMG), in
which the speed of gravitational waves in vacuum is uniquely found to be equal
to the speed of light in vacuum. We also corrected a sign error in Heaviside's
speculative gravitational analogue of the Lorentz force law. This spin-1 HMG is
shown to produce an attractive force between like masses under static conditions,
contrary to the prevalent view of field theorists. Galileo's law of
universality of free fall is a consequence of HMG, without any initial
assumption of the equality of gravitational mass with velocity-dependent mass.
We also note a new set of Lorentz-Maxwell's equations having the same physical
effects as the standard set - a byproduct of our present study.
| 0 | 1 | 0 | 0 | 0 | 0 |
Critical pairing fluctuations in the normal state of a superconductor: pseudogap and quasi-particle damping | We study the effect of critical pairing fluctuations on the electronic
properties in the normal state of a clean superconductor in three dimensions.
Using a functional renormalization group approach to take the non-Gaussian
nature of critical fluctuations into account, we show microscopically that in
the BCS regime, where the inverse coherence length is much smaller than the
Fermi wavevector, critical pairing fluctuations give rise to a non-analytic
contribution to the quasi-particle damping of order $ T_c \sqrt{Gi} \ln ( 80 /
Gi )$, where the Ginzburg-Levanyuk number $Gi$ is a dimensionless measure for
the width of the critical region. As a consequence, there is a temperature
window above $T_c$ where the quasiparticle damping due to critical pairing
fluctuations can be larger than the usual $T^2$-Fermi liquid damping due to
non-critical scattering processes. On the other hand, in the strong coupling
regime where $Gi$ is of order unity, we find that the quasiparticle damping due
to critical pairing fluctuations is proportional to the temperature. Moreover,
we show that in the vicinity of the critical temperature $T_c$ the electronic
density of states exhibits a fluctuation-induced pseudogap. We also use
functional renormalization group methods to derive and classify various types
of processes induced by the pairing interaction in Fermi systems close to the
superconducting instability.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Mixing method: low-rank coordinate descent for semidefinite programming with diagonal constraints | In this paper, we propose a low-rank coordinate descent approach to
structured semidefinite programming with diagonal constraints. The approach,
which we call the Mixing method, is extremely simple to implement, has no free
parameters, and typically attains an order of magnitude or better improvement
in optimization performance over the current state of the art. We show that the
method is strictly decreasing, converges to a critical point, and further that
for sufficient rank all non-optimal critical points are unstable. Moreover, we
prove that, with a suitable step size, the Mixing method converges to the global optimum
of the semidefinite program almost surely in a locally linear rate under random
initialization. This is the first low-rank semidefinite programming method that
has been shown to achieve a global optimum on the spherical manifold without
additional assumptions. We apply our algorithm to two related domains: solving the maximum
cut semidefinite relaxation, and solving a maximum satisfiability relaxation
(we also briefly consider additional applications such as learning word
embeddings). In all settings, we demonstrate substantial improvement over the
existing state of the art along various dimensions, and in total, this work
expands the scope and scale of problems that can be solved using semidefinite
programming methods.
| 1 | 0 | 1 | 1 | 0 | 0 |
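The coordinate-descent update behind the Mixing method abstract above is simple enough to sketch. The toy implementation below, with an assumed cost matrix `C`, rank `k`, and iteration budget chosen purely for illustration (not the authors' code), addresses the SDP min <C, V^T V> subject to unit-norm columns:

```python
import math
import random

def mixing_method(C, k=3, iters=200, seed=0):
    """Low-rank coordinate descent for min <C, V^T V> subject to
    every column of V having unit norm (SDP with diagonal constraints).
    Each update sets v_i to the unit vector that minimizes the
    objective with all other columns held fixed."""
    rng = random.Random(seed)
    n = len(C)
    # Random unit-norm initialization.
    V = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(n)]
    for v in V:
        nrm = math.sqrt(sum(x * x for x in v))
        for d in range(k):
            v[d] /= nrm
    for _ in range(iters):
        for i in range(n):
            # g = sum_{j != i} C[i][j] * v_j; the minimizer is -g/||g||.
            g = [sum(C[i][j] * V[j][d] for j in range(n) if j != i)
                 for d in range(k)]
            nrm = math.sqrt(sum(x * x for x in g))
            if nrm > 1e-12:
                V[i] = [-x / nrm for x in g]
    return V

def objective(C, V):
    """Evaluate <C, V^T V> = sum_ij C[i][j] <v_i, v_j>."""
    n = len(C)
    return sum(C[i][j] * sum(a * b for a, b in zip(V[i], V[j]))
               for i in range(n) for j in range(n))
```

For the triangle graph with `C` its adjacency matrix, the minimum is -3, attained by three unit vectors at mutual 120-degree angles; a few hundred sweeps reach it to high accuracy.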
GeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks | The Fisher information metric is an important foundation of information
geometry, wherein it allows us to approximate the local geometry of a
probability distribution. Recurrent neural networks such as the
Sequence-to-Sequence (Seq2Seq) networks that have lately been used to yield
state-of-the-art performance on speech translation or image captioning have so
far ignored the geometry of the latent embedding that they iteratively learn.
We propose the information geometric Seq2Seq (GeoSeq2Seq) network which
bridges the gap between deep recurrent neural networks and information
geometry. Specifically, the latent embedding offered by a recurrent network is
encoded as a Fisher kernel of a parametric Gaussian Mixture Model, a formalism
common in computer vision. We utilise such a network to predict the shortest
routes between two nodes of a graph by learning the adjacency matrix using the
GeoSeq2Seq formalism; our results show that for such a problem the
probabilistic representation of the latent embedding outperforms the
non-probabilistic embedding by 10-15\%.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stochastic graph Voronoi tessellation reveals community structure | Given a network, the statistical ensemble of its graph-Voronoi diagrams with
randomly chosen cell centers exhibits properties convertible into information
on the network's large scale structures. We define a node-pair level measure
called {\it Voronoi cohesion} which describes the probability for sharing the
same Voronoi cell, when randomly choosing $g$ centers in the network. This
measure provides information based on the global context (the network in its
entirety), a type of information that is not carried by other similarity
measures. We explore the mathematical background of this phenomenon and several
of its potential applications. A special focus is laid on the possibilities and
limitations pertaining to the exploitation of the phenomenon for community
detection purposes.
| 1 | 1 | 0 | 0 | 0 | 0 |
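As a sketch of the quantity described above, the pair-level Voronoi cohesion can be estimated by Monte Carlo: repeatedly draw g random centers, build the graph-Voronoi partition by multi-source BFS, and count how often the two nodes land in the same cell. The function names and the tie-breaking rule (BFS visitation order) are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import deque

def voronoi_cells(adj, centers):
    """Graph-Voronoi partition via multi-source BFS: each node is
    assigned to its nearest center (ties broken by visitation order)."""
    owner = {c: c for c in centers}
    q = deque(centers)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in owner:
                owner[v] = owner[u]
                q.append(v)
    return owner

def voronoi_cohesion(adj, a, b, g=2, trials=2000, seed=0):
    """Estimate the probability that nodes a and b share a Voronoi
    cell when g centers are chosen uniformly at random."""
    rng = random.Random(seed)
    nodes = list(adj)
    same = 0
    for _ in range(trials):
        owner = voronoi_cells(adj, rng.sample(nodes, g))
        same += owner[a] == owner[b]
    return same / trials
```

On a graph made of two triangles joined by a single edge, the cohesion of a within-triangle pair clearly exceeds that of a cross-triangle pair, which is exactly the community signal the abstract describes.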
Recursion for the smallest eigenvalue density of $β$-Wishart-Laguerre ensemble | The statistics of the smallest eigenvalue of Wishart-Laguerre ensemble is
important from several perspectives. The smallest eigenvalue density is
typically expressible in terms of determinants or Pfaffians. These results are
of utmost significance in understanding the spectral behavior of
Wishart-Laguerre ensembles and, among other things, unveil the underlying
universality aspects in the asymptotic limits. However, obtaining exact and
explicit expressions by expanding determinants or Pfaffians becomes impractical
if large dimension matrices are involved. For the real matrices ($\beta=1$)
Edelman has provided an efficient recurrence scheme to work out exact and
explicit results for the smallest eigenvalue density which does not involve
determinants or Pfaffians. Very recently, an analogous recurrence scheme has
been obtained for the complex matrices ($\beta=2$). In the present work we
extend this to $\beta$-Wishart-Laguerre ensembles for the case when exponent
$\alpha$ in the associated Laguerre weight function, $\lambda^\alpha
e^{-\beta\lambda/2}$, is a non-negative integer, while $\beta$ is positive
real. This also gives access to the smallest eigenvalue density of fixed trace
$\beta$-Wishart-Laguerre ensemble, as well as moments for both cases. Moreover,
comparison with earlier results for the smallest eigenvalue density in terms of
certain hypergeometric function of matrix argument results in an effective way
of evaluating these explicitly. Exact evaluations for large values of $n$ (the
matrix dimension) and $\alpha$ also enable us to compare with Tracy-Widom
density and large deviation results of Katzav and Castillo. We also use our
result to obtain the density of the largest of the proper delay times which are
eigenvalues of the Wigner-Smith matrix and are relevant to the problem of
quantum chaotic scattering.
| 0 | 0 | 0 | 1 | 0 | 0 |
Comparing Neural and Attractiveness-based Visual Features for Artwork Recommendation | Advances in image processing and computer vision in the latest years have
brought about the use of visual features in artwork recommendation. Recent
works have shown that visual features obtained from pre-trained deep neural
networks (DNNs) perform very well for recommending digital art. Other recent
works have shown that explicit visual features (EVF) based on attractiveness
can perform well in preference prediction tasks, but no previous work has
compared DNN features versus specific attractiveness-based visual features
(e.g. brightness, texture) in terms of recommendation performance. In this
work, we study and compare the performance of DNN and EVF features for the
purpose of physical artwork recommendation using transactional data from
UGallery, an online store of physical paintings. In addition, we perform an
exploratory analysis to understand if DNN embedded features have some relation
with certain EVF. Our results show that DNN features outperform EVF, that
certain EVF features are more suited for physical artwork recommendation and,
finally, we show evidence that certain neurons in the DNN might be partially
encoding visual features such as brightness, providing an opportunity for
explaining recommendations based on visual neural models.
| 1 | 0 | 0 | 0 | 0 | 0 |
DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds | We propose DeepMapping, a novel registration framework using deep neural
networks (DNNs) as auxiliary functions to align multiple point clouds from
scratch to a globally consistent frame. We use DNNs to model the highly
non-convex mapping process that traditionally involves hand-crafted data
association, sensor pose initialization, and global refinement. Our key novelty
is that properly defining unsupervised losses to "train" these DNNs through
back-propagation is equivalent to solving the underlying registration problem,
yet reduces the dependence on good initialization that methods such as ICP require. Our
framework contains two DNNs: a localization network that estimates the poses
for input point clouds, and a map network that models the scene structure by
estimating the occupancy status of global coordinates. This allows us to
convert the registration problem to a binary occupancy classification, which
can be solved efficiently using gradient-based optimization. We further show
that DeepMapping can be readily extended to address the problem of Lidar SLAM
by imposing geometric constraints between consecutive point clouds. Experiments
are conducted on both simulated and real datasets. Qualitative and quantitative
comparisons demonstrate that DeepMapping often enables more robust and accurate
global registration of multiple point clouds than existing techniques. Our code
is available at this http URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
The First Comparison Between Swarm-C Accelerometer-Derived Thermospheric Densities and Physical and Empirical Model Estimates | The first systematic comparison between Swarm-C accelerometer-derived
thermospheric density and both empirical and physics-based model results using
multiple model performance metrics is presented. This comparison is performed
at the satellite's high temporal 10-s resolution, which provides a meaningful
evaluation of the models' fidelity for orbit prediction and other space weather
forecasting applications. The comparison against the physical model is
influenced by the specification of the lower atmospheric forcing, the
high-latitude ionospheric plasma convection, and solar activity. Some insights
into the model response to thermosphere-driving mechanisms are obtained through
a machine learning exercise. The results of this analysis show that the
short-timescale variations observed by Swarm-C during periods of high solar and
geomagnetic activity were better captured by the physics-based model than the
empirical models. It is concluded that Swarm-C data agree well with the
climatologies inherent within the models and are, therefore, a useful data set
for further model validation and scientific research.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gaussian Parsimonious Clustering Models with Covariates | We consider model-based clustering methods for continuous, correlated data
that account for external information available in the presence of mixed-type
fixed covariates by proposing the MoEClust suite of models. These allow
covariates to influence the component weights and/or component densities by
modelling the parameters of the mixture as functions of the covariates. A
familiar range of constrained eigen-decomposition parameterisations of the
component covariance matrices is also accommodated. This paper thus addresses
the equivalent aims of including covariates in Gaussian Parsimonious Clustering
Models and incorporating parsimonious covariance structures into the Gaussian
mixture of experts framework. The MoEClust models demonstrate significant
improvement from both perspectives in applications to univariate and
multivariate data sets.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Rice-Shapiro theorem in Computable Topology | We provide requirements on effectively enumerable topological spaces which
guarantee that the Rice-Shapiro theorem holds for the computable elements of
these spaces. We show that the relaxation of these requirements leads to the
classes of effectively enumerable topological spaces where the Rice-Shapiro
theorem does not hold. We propose two constructions that generate effectively
enumerable topological spaces with particular properties from wn-families and
computable trees without computable infinite paths. Using them we propose
examples that give a flavor of this class.
| 1 | 0 | 1 | 0 | 0 | 0 |
Multivariate Regression with Grossly Corrupted Observations: A Robust Approach and its Applications | This paper studies the problem of multivariate linear regression where a
portion of the observations is grossly corrupted or is missing, and the
magnitudes and locations of such occurrences are unknown a priori. To deal
with this problem, we propose a new approach that explicitly considers the error
source as well as its sparse nature. An interesting property of our
approach lies in its ability of allowing individual regression output elements
or tasks to possess their unique noise levels. Moreover, despite working with a
non-smooth optimization problem, our approach is still guaranteed to converge to
its optimal solution. Experiments on synthetic data demonstrate the
competitiveness of our approach compared with existing multivariate regression
models. In addition, empirically our approach has been validated with very
promising results on two exemplar real-world applications: The first concerns
the prediction of \textit{Big-Five} personality based on user behaviors at
social network sites (SNSs), while the second is 3D human hand pose estimation
from depth images. The implementation of our approach and comparison methods as
well as the involved datasets are made publicly available in support of the
open-source and reproducible research initiatives.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Deep Learning Approach for Population Estimation from Satellite Imagery | Knowing where people live is a fundamental component of many decision making
processes such as urban development, infectious disease containment, evacuation
planning, risk management, conservation planning, and more. While bottom-up,
survey driven censuses can provide a comprehensive view into the population
landscape of a country, they are expensive to realize, are infrequently
performed, and only provide population counts over broad areas. Population
disaggregation techniques and population projection methods individually
address these shortcomings, but also have shortcomings of their own. To jointly
answer the questions of "where do people live" and "how many people live
there," we propose a deep learning model for creating high-resolution
population estimations from satellite imagery. Specifically, we train
convolutional neural networks to predict population in the USA at a
$0.01^{\circ} \times 0.01^{\circ}$ resolution grid from 1-year composite
Landsat imagery. We validate these models in two ways: quantitatively, by
comparing our model's grid cell estimates aggregated at a county-level to
several US Census county-level population projections, and qualitatively, by
directly interpreting the model's predictions in terms of the satellite image
inputs. We find that aggregating our model's estimates gives comparable results
to the Census county-level population projections and that the predictions made
by our model can be directly interpreted, which gives it advantages over
traditional population disaggregation methods. In general, our model is an
example of how machine learning techniques can be an effective tool for
extracting information from inherently unstructured, remotely sensed data to
provide effective solutions to social problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Trends in the Diffusion of Misinformation on Social Media | We measure trends in the diffusion of misinformation on Facebook and Twitter
between January 2015 and July 2018. We focus on stories from 570 sites that
have been identified as producers of false stories. Interactions with these
sites on both Facebook and Twitter rose steadily through the end of 2016.
Interactions then fell sharply on Facebook while they continued to rise on
Twitter, with the ratio of Facebook engagements to Twitter shares falling by
approximately 60 percent. We see no similar pattern for other news, business,
or culture sites, where interactions have been relatively stable over time and
have followed similar trends on the two platforms both before and after the
election.
| 1 | 0 | 0 | 0 | 0 | 1 |
SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications | We describe the SemEval task of extracting keyphrases and relations between
them from scientific documents, which is crucial for understanding which
publications describe which processes, tasks and materials. Although this was a
new task, we had a total of 26 submissions across 3 evaluation scenarios. We
expect the task and the findings reported in this paper to be relevant for
researchers working on understanding scientific content, as well as the broader
knowledge base population and information extraction communities.
| 1 | 0 | 0 | 1 | 0 | 0 |
Well-posedness of the Two-dimensional Nonlinear Schrödinger Equation with Concentrated Nonlinearity | We consider a two-dimensional nonlinear Schrödinger equation with
concentrated nonlinearity. In both the focusing and defocusing case we prove
local well-posedness, i.e., existence and uniqueness of the solution for short
times, as well as energy and mass conservation. In addition, we prove that this
implies global existence in the defocusing case, irrespective of the power of
the nonlinearity, while in the focusing case blowing-up solutions may arise.
| 0 | 0 | 1 | 0 | 0 | 0 |
The nilpotent variety of $W(1;n)_{p}$ is irreducible | In the late 1980s, Premet conjectured that the nilpotent variety of any
finite dimensional restricted Lie algebra over an algebraically closed field of
characteristic $p>0$ is irreducible. This conjecture remains open, but it is
known to hold for a large class of simple restricted Lie algebras, e.g. for Lie
algebras of connected reductive algebraic groups, and for Cartan series $W, S$
and $H$. In this paper, with the assumption that $p>3$, we confirm this
conjecture for the minimal $p$-envelope $W(1;n)_p$ of the Zassenhaus algebra
$W(1;n)$ for all $n\geq 2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Solitons in Bose-Einstein Condensates with Helicoidal Spin-Orbit Coupling | We report on the existence and stability of freely moving solitons in a
spatially inhomogeneous Bose-Einstein condensate with helicoidal spin-orbit
(SO) coupling. In spite of the periodically varying parameters, the system
allows for the existence of stable propagating solitons. Such states are found
in the rotating frame, where the helicoidal SO coupling is reduced to a
homogeneous one. In the absence of the Zeeman splitting, the coupled
Gross-Pitaevskii equations describing localized states feature many properties
of the integrable systems. In particular, four-parametric families of solitons
can be obtained in the exact form. Such solitons interact elastically. Zeeman
splitting still allows for the existence of two families of moving solitons,
but makes collisions of solitons inelastic.
| 0 | 1 | 0 | 0 | 0 | 0 |
Possible heights of graph transformation groups | In the following text we prove that for all finite $p\geq0$ there exists a
topological graph $X$ such that $\{p,p+1,p+2,\ldots\}\cup\{+\infty\}$ is the
collection of all possible heights for transformation groups with phase space
$X$. Moreover for all topological graph $X$ with $p$ as height of
transformation group $(Homeo(X),X)$, $\{p,p+1,p+2,\ldots\}\cup\{+\infty\}$
again is the collection of all possible heights for transformation groups with
phase space $X$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dependencies: Formalising Semantic Catenae for Information Retrieval | Building machines that can understand text like humans is an AI-complete
problem. A great deal of research has already gone into this, with astounding
results, allowing everyday people to converse with their telephones, or have
their reading materials analysed and classified by computers. A prerequisite
for processing text semantics, common to the above examples, is having some
computational representation of text as an abstract object. Operations on this
representation practically correspond to making semantic inferences, and by
extension simulating understanding text. The complexity and granularity of
semantic processing that can be realised is constrained by the mathematical and
computational robustness, expressiveness, and rigour of the tools used.
This dissertation contributes a series of such tools, diverse in their
mathematical formulation, but common in their application to model semantic
inferences when machines process text. These tools are principally expressed in
nine distinct models that capture aspects of semantic dependence in highly
interpretable and non-complex ways. This dissertation further reflects on
present and future problems with the current research paradigm in this area,
and makes recommendations on how to overcome them.
The amalgamation of the body of work presented in this dissertation advances
the complexity and granularity of semantic inferences that can be made
automatically by machines.
| 1 | 0 | 0 | 0 | 0 | 0 |
A New Pseudo-color Technique Based on Intensity Information Protection for Passive Sensor Imagery | Remote sensing image processing is so important in geo-sciences. Images which
are obtained by different types of sensors might initially be unrecognizable.
To make an acceptable visual perception in the images, some pre-processing
steps (for removing noises and etc) are preformed which they affect the
analysis of images. There are different types of processing according to the
types of remote sensing images. The method that we are going to introduce in
this paper is to use virtual colors to colorize the gray-scale images of
satellite sensors. This approach helps us to have a better analysis on a sample
single-band image which has been taken by Landsat-8 (OLI) sensor (as a
multi-band sensor with natural color bands, its images' natural color can be
compared to synthetic color by our approach). A good feature of this method is
the original image reversibility in order to keep the suitable resolution of
output images.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tunneling anisotropic magnetoresistance driven by magnetic phase transition | The independent control of two magnetic electrodes and spin-coherent
transport in magnetic tunnel junctions are strictly required for tunneling
magnetoresistance, while junctions with only one ferromagnetic electrode
exhibit tunneling anisotropic magnetoresistance dependent on the anisotropic
density of states with no room temperature performance so far. Here we report
an alternative approach to obtaining tunneling anisotropic magnetoresistance in
alpha-FeRh-based junctions driven by the magnetic phase transition of
alpha-FeRh and the resulting large variation of the density of states in the
vicinity of the MgO tunneling barrier, referred to as phase-transition
tunneling anisotropic magnetoresistance. The junctions with only one
alpha-FeRh magnetic electrode show a magnetoresistance ratio of up to 20% at
room temperature. Both the polarity and magnitude of the phase-transition
tunneling anisotropic magnetoresistance can be modulated by interfacial
engineering at the alpha-FeRh/MgO interface.
Besides the fundamental significance, our finding might add a different
dimension to magnetic random access memory and antiferromagnet spintronics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Finding Efficient Swimming Strategies in a Three Dimensional Chaotic Flow by Reinforcement Learning | We apply a reinforcement learning algorithm to show how smart particles can
learn approximately optimal strategies to navigate in complex flows. In this
paper we consider microswimmers in a paradigmatic three-dimensional case given
by a stationary superposition of two Arnold-Beltrami-Childress flows with
chaotic advection along streamlines. In such a flow, we study the evolution of
point-like particles which can decide in which direction to swim, while keeping
the velocity amplitude constant. We show that it is sufficient to endow the
swimmers with a very restricted set of actions (six fixed swimming directions
in our case) to have enough freedom to find efficient strategies to move upward
and escape local fluid traps. The key ingredient is the
learning-from-experience structure of the algorithm, which assigns positive or
negative rewards depending on whether the taken action is, or is not,
profitable for the predetermined goal over the long-term horizon. This is another
example supporting the efficiency of the reinforcement learning approach to
learn how to accomplish difficult tasks in complex fluid environments.
| 1 | 1 | 0 | 0 | 0 | 0 |
Trapped imbalanced fermionic superfluids in one dimension: A variational approach | We propose and analyze a variational wave function for a
population-imbalanced one-dimensional Fermi gas that allows for
Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) type pairing correlations among the two
fermion species, while also accounting for the harmonic confining potential. In
the strongly interacting regime, we find large spatial oscillations of the
order parameter, indicative of an FFLO state. The obtained density profiles
versus imbalance are consistent with recent experimental results as well as
with theoretical calculations based on combining Bethe ansatz with the local
density approximation. Although we find no signature of the FFLO state in the
densities of the two fermion species, we show that the oscillations of the
order parameter appear in density-density correlations, both in-situ and after
free expansion. Furthermore, above a critical polarization, the value of which
depends on the interaction, we find the unpaired Fermi-gas state to be
energetically more favorable.
| 0 | 1 | 0 | 0 | 0 | 0 |
Prospects of dynamical determination of General Relativity parameter beta and solar quadrupole moment J2 with asteroid radar astronomy | We evaluated the prospects of quantifying the parameterized post-Newtonian
parameter beta and solar quadrupole moment J2 with observations of near-Earth
asteroids with large orbital precession rates (9 to 27 arcsec century$^{-1}$).
We considered existing optical and radar astrometry, as well as radar
astrometry that can realistically be obtained with the Arecibo planetary radar
in the next five years. Our sensitivity calculations relied on a traditional
covariance analysis and Monte Carlo simulations. We found that independent
estimates of beta and J2 can be obtained with precisions of $6\times10^{-4}$
and $3\times10^{-8}$, respectively. Because we assumed rather conservative
observational uncertainties, as is the usual practice when reporting radar
astrometry, it is likely that the actual precision will be closer to
$2\times10^{-4}$ and $10^{-8}$, respectively. A purely dynamical determination
of solar oblateness with asteroid radar astronomy may therefore rival the
helioseismology determination.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modelling and Using Response Times in Online Courses | Each time a learner in a self-paced online course is trying to answer an
assessment question, it takes some time to submit the answer, and if multiple
attempts are allowed and the first answer was incorrect, it takes some time to
submit the second attempt, and so on. Here we study the distribution of such
"response times". We find that the log-normal statistical model for such times,
previously suggested in the literature, holds for online courses qualitatively.
Users who, according to this model, tend to take longer on submits are more
likely to complete the course, have a higher level of engagement and achieve a
higher grade. This finding can be the basis for designing interventions in
online courses, such as MOOCs, which would encourage some users to slow down.
| 1 | 0 | 0 | 0 | 0 | 0 |
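The log-normal model for response times mentioned above is easy to fit: the maximum-likelihood estimates of (mu, sigma) are just the mean and standard deviation of the log-times. A minimal sketch with assumed helper names (the data generator is an illustrative stand-in for real course logs):

```python
import math
import random

def simulate_response_times(n, mu, sigma, seed=0):
    """Draw n synthetic submit times from a log-normal model
    (illustrative stand-in for real course data)."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

def fit_lognormal(times):
    """MLE for a log-normal: mu and sigma are the mean and standard
    deviation of log(t)."""
    logs = [math.log(t) for t in times]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return mu, sigma
```

Comparing fitted mu across users would then support the slow-versus-fast comparison the abstract draws; under the model the median response time is exp(mu).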
Univalent Foundations and the UniMath Library | We give a concise presentation of the Univalent Foundations of mathematics
outlining the main ideas, followed by a discussion of the UniMath library of
formalized mathematics implementing the ideas of the Univalent Foundations
(section 1), and the challenges one faces in attempting to design a large-scale
library of formalized mathematics (section 2). This leads us to a general
discussion about the links between architecture and mathematics where a meeting
of minds is revealed between architects and mathematicians (section 3). On the
way our odyssey from the foundations to the "horizon" of mathematics will lead
us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the
architect Christopher Alexander.
| 1 | 0 | 1 | 0 | 0 | 0 |
Distributed methods for synchronization of orthogonal matrices over graphs | This paper addresses the problem of synchronizing orthogonal matrices over
directed graphs. For synchronized transformations (or matrices), composite
transformations over loops equal the identity. We formulate the synchronization
problem as a least-squares optimization problem with nonlinear constraints. The
synchronization problem appears as one of the key components in applications
ranging from 3D-localization to image registration. The main contributions of
this work can be summarized as the introduction of two novel algorithms; one
for symmetric graphs and one for graphs that are possibly asymmetric. Under
general conditions, the former has guaranteed convergence to the solution of a
spectral relaxation to the synchronization problem. The latter is stable for
small step sizes when the graph is quasi-strongly connected. The proposed
methods are verified in numerical simulations.
| 1 | 0 | 1 | 0 | 0 | 0 |
Stochastic Model of SIR Epidemic Modelling | Threshold theorem is probably the most important development of mathematical
epidemic modelling. Unfortunately, some models may not behave according to the
threshold. In this paper, we will focus on the final outcome of SIR model with
demography. The behaviour of the model approached by deteministic and
stochastic models will be introduced, mainly using simulations. Furthermore, we
will also investigate the dynamic of susceptibles in population in absence of
infective. We have successfully showed that both deterministic and stochastic
models performed similar results when $R_0 \leq 1$. That is, the disease-free
stage in the epidemic. But when $R_0 > 1$, the deterministic and stochastic
approaches had different interpretations.
| 0 | 0 | 0 | 1 | 1 | 0 |
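A minimal sketch of the deterministic-versus-stochastic comparison described above, using a plain SIR model (omitting the demographic turnover for brevity) with illustrative parameter values: Euler integration for the deterministic system and a binomial chain for the stochastic one.

```python
import math
import random

def sir_deterministic(beta, gamma, s0, i0, n, steps=2000, dt=0.05):
    """Euler integration of dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I."""
    s, i = float(s0), float(i0)
    for _ in range(steps):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
    return s, i

def sir_stochastic(beta, gamma, s0, i0, n, steps=2000, dt=0.05, seed=0):
    """Binomial-chain SIR: each susceptible is infected with probability
    1 - exp(-beta*I/N*dt); each infective recovers with probability
    1 - exp(-gamma*dt)."""
    rng = random.Random(seed)
    s, i = s0, i0
    for _ in range(steps):
        p_inf = 1.0 - math.exp(-beta * i / n * dt)
        p_rec = 1.0 - math.exp(-gamma * dt)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < p_rec for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
    return s, i
```

With $R_0 = \beta/\gamma \leq 1$ both versions leave the susceptible pool nearly intact, while with $R_0 > 1$ the deterministic model predicts one fixed final size and the stochastic model produces a distribution around it, which is where the two interpretations diverge.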
Parent Oriented Teacher Selection Causes Language Diversity | An evolutionary model for emergence of diversity in language is developed. We
investigated the effects of two real life observations, namely, people prefer
people that they communicate with well, and people interact with people that
are physically close to each other. Clearly these groups are relatively small
compared to the entire population. We restrict selection of the teachers from
such small groups, called imitation sets, around parents. Then the child learns
language from a teacher selected within the imitation set of her parent. As a
result, subcommunities develop with their own languages. Within a
subcommunity, comprehension is found to be high. The number of languages is
related to the relative size of imitation set by a power law.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Role-based Graph Embeddings | Random walks are at the heart of many existing network embedding methods.
However, such algorithms have many limitations that arise from the use of
random walks, e.g., the features resulting from these methods are unable to
transfer to new nodes and graphs as they are tied to vertex identity. In this
work, we introduce the Role2Vec framework which uses the flexible notion of
attributed random walks, and serves as a basis for generalizing existing
methods such as DeepWalk, node2vec, and many others that leverage random walks.
Our proposed framework enables these methods to be more widely applicable for
both transductive and inductive learning as well as for use on graphs with
attributes (if available). This is achieved by learning functions that
generalize to new nodes and graphs. We show that our proposed framework is
effective with an average AUC improvement of 16.55% while requiring on average
853x less space than existing methods on a variety of graphs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quantum Emulation of Extreme Non-equilibrium Phenomena with Trapped Atoms | Ultracold atomic physics experiments offer a nearly ideal context for the
investigation of quantum systems far from equilibrium. We describe three
related emerging directions of research into extreme non-equilibrium phenomena
in atom traps: quantum emulation of ultrafast atom-light interactions, coherent
phasonic spectroscopy in tunable quasicrystals, and realization of Floquet
matter in strongly-driven lattice systems. We show that all three should enable
quantum emulation in parameter regimes inaccessible in solid-state experiments,
facilitating a complementary approach to open problems in non-equilibrium
condensed matter.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach | We consider solving convex-concave saddle point problems. We focus on two
variants of gradient descent-ascent algorithms, Extra-gradient (EG) and
Optimistic Gradient (OGDA) methods, and show that they admit a unified analysis
as approximations of the classical proximal point method for solving
saddle-point problems. This viewpoint enables us to generalize EG (in terms of
extrapolation steps) and OGDA (in terms of parameters) and obtain new
convergence rate results for these algorithms for the bilinear case as well as
the strongly convex-concave case.
| 1 | 0 | 0 | 1 | 0 | 0 |
Adaptive Multi-Step Prediction based EKF to Power System Dynamic State Estimation | Power system dynamic state estimation is essential to monitoring and
controlling power system stability. Kalman filtering approaches are predominant
in estimation of synchronous machine dynamic states (i.e. rotor angle and rotor
speed). This paper proposes an adaptive multi-step prediction (AMSP) approach
to improve the extended Kalman filter's (EKF) performance in estimating the
dynamic states of a synchronous machine. The proposed approach consists of
three major steps. First, two indexes are defined to quantify the non-linearity
levels of the state transition function and measurement function, respectively.
Second, based on the non-linearity indexes, a multi-prediction factor (Mp) is
defined to determine the number of prediction steps. Finally, to mitigate
the impact of non-linearity on dynamic state estimation (DSE) accuracy, the
prediction step is repeated a number of times, based on Mp, before performing the correction
step. The two-area four-machine system is used to evaluate the effectiveness of
the proposed AMSP approach. It is shown through the Monte-Carlo method that a
good trade-off between estimation accuracy and computational time can be
achieved effectively through the proposed AMSP approach.
| 1 | 1 | 0 | 0 | 0 | 0 |
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality | A number of statistical estimation problems can be addressed by semidefinite
programs (SDPs). While SDPs are solvable in polynomial time using interior point
methods, in practice generic SDP solvers do not scale well to high-dimensional
problems. In order to cope with this problem, Burer and Monteiro proposed a
non-convex rank-constrained formulation, which has good performance in practice
but is still poorly understood theoretically.
In this paper we study the rank-constrained version of SDPs arising in MaxCut
and in synchronization problems. We establish a Grothendieck-type inequality
that proves that all the local maxima and dangerous saddle points are within a
small multiplicative gap from the global maximum. We use this structural
information to prove that SDPs can be solved within a known accuracy, by
applying the Riemannian trust-region method to this non-convex problem, while
constraining the rank to be of order one. For the MaxCut problem, our
inequality implies that any local maximizer of the rank-constrained SDP
provides a $ (1 - 1/(k-1)) \times 0.878$ approximation of the MaxCut, when the
rank is fixed to $k$.
We then apply our results to data matrices generated according to the
Gaussian ${\mathbb Z}_2$ synchronization problem, and the two-groups stochastic
block model with large bounded degree. We prove that the error achieved by
local maximizers undergoes a phase transition at the same threshold as for
information-theoretically optimal methods.
| 0 | 0 | 1 | 1 | 0 | 0 |
Poisson-Nernst-Planck equations with steric effects - non-convexity and multiple stationary solutions | We study the existence and stability of stationary solutions of
Poisson-Nernst-Planck equations with steric effects (PNP-steric equations)
with two counter-charged species. These equations describe steady current
through open ionic channels quite well. The current levels in open ionic
channels are known to switch between `open' and `closed' states in a spontaneous
stochastic process called gating, suggesting that their governing equations
should give rise to multiple stationary solutions that enable such multi-stable
behavior. We show that within a range of parameters, steric effects give rise
to multiple stationary solutions that are smooth. These solutions, however, are
all unstable under PNP-steric dynamics. Following these findings, we introduce
a novel PNP-Cahn-Hilliard model, and show that it admits multiple stationary
solutions that are smooth and stable. The various branches of stationary
solutions and their stability are mapped utilizing bifurcation analysis and
numerical continuation methods.
| 0 | 1 | 1 | 0 | 0 | 0 |
A Fast Noniterative Algorithm for Compressive Sensing Using Binary Measurement Matrices | In this paper we present a new algorithm for compressive sensing that makes
use of binary measurement matrices and achieves exact recovery of ultra sparse
vectors, in a single pass and without any iterations. Due to its noniterative
nature, our algorithm is hundreds of times faster than $\ell_1$-norm
minimization, and methods based on expander graphs, both of which require
multiple iterations. Our algorithm can accommodate nearly sparse vectors, in
which case it recovers the index set of the largest components, and can also
accommodate burst noise measurements. Compared to compressive sensing methods
that are guaranteed to achieve exact recovery of all sparse vectors, our method
requires fewer measurements. However, methods that achieve statistical recovery,
that is, recovery of almost all but not all sparse vectors, can require fewer
measurements than our method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions | An important goal common to domain adaptation and causal inference is to make
accurate predictions when the distributions for the source (or training)
domain(s) and target (or test) domain(s) differ. In many cases, these different
distributions can be modeled as different contexts of a single underlying
system, in which each distribution corresponds to a different perturbation of
the system, or in causal terms, an intervention. We focus on a class of such
causal domain adaptation problems, where data for one or more source domains
are given, and the task is to predict the distribution of a certain target
variable from measurements of other variables in one or more target domains. We
propose an approach for solving these problems that exploits causal inference
and does not rely on prior knowledge of the causal graph, the type of
interventions or the intervention targets. We demonstrate our approach by
evaluating a possible implementation on simulated and real world data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fitting ReLUs via SGD and Quantized SGD | In this paper we focus on the problem of finding the optimal weights of the
shallowest of neural networks consisting of a single Rectified Linear Unit
(ReLU). These functions are of the form $\mathbf{x}\rightarrow
\max(0,\langle\mathbf{w},\mathbf{x}\rangle)$ with $\mathbf{w}\in\mathbb{R}^d$
denoting the weight vector. We focus on a planted model where the inputs are
chosen i.i.d. from a Gaussian distribution and the labels are generated
according to a planted weight vector. We first show that mini-batch stochastic
gradient descent when suitably initialized, converges at a geometric rate to
the planted model with a number of samples that is optimal up to numerical
constants. Next we focus on a parallel implementation where in each iteration
the mini-batch gradient is calculated in a distributed manner across multiple
processors and then broadcast to a master or all other processors. To reduce
the communication cost in this setting, we utilize a Quantized Stochastic
Gradient Scheme (QSGD) where the partial gradients are quantized. Perhaps
unexpectedly, we show that QSGD maintains the fast convergence of SGD to a
globally optimal model while significantly reducing the communication cost. We
further corroborate our numerical findings via various experiments including
distributed implementations over Amazon EC2.
| 1 | 0 | 0 | 1 | 0 | 0 |
The meet operation in the imbalance lattice of maximal instantaneous codes: alternative proof of existence | An alternative proof is given of the existence of greatest lower bounds in
the imbalance order of binary maximal instantaneous codes of a given size.
These codes are viewed as maximal antichains of a given size in the infinite
binary tree of 0-1 words. The proof proposed makes use of a single balancing
operation instead of expansion and contraction as in the original proof of the
existence of glb.
| 0 | 0 | 1 | 0 | 0 | 0 |
Approximating Partition Functions in Constant Time | We study approximations of the partition function of dense graphical models.
Partition functions of graphical models play a fundamental role in statistical
physics, in statistics and in machine learning. Two of the main methods for
approximating the partition function are Markov Chain Monte Carlo and
Variational Methods. An impressive body of work in mathematics, physics and
theoretical computer science provides conditions under which Markov Chain Monte
Carlo methods converge in polynomial time. These methods often lead to
polynomial time approximation algorithms for the partition function in cases
where the underlying model exhibits correlation decay. There are very few
theoretical guarantees for the performance of variational methods. One
exception is recent results by Risteski (2016) who considered dense graphical
models and showed that using variational methods, it is possible to find an
$O(\epsilon n)$ additive approximation to the log partition function in time
$n^{O(1/\epsilon^2)}$ even in a regime where correlation decay does not hold.
We show that under essentially the same conditions, an $O(\epsilon n)$
additive approximation of the log partition function can be found in constant
time, independent of $n$. In particular, our results cover dense Ising and
Potts models as well as dense graphical models with $k$-wise interaction. They
also apply for low threshold rank models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stability of a Volterra Integral Equation on Time Scales | In this paper, we study Hyers-Ulam stability for an integral equation of
Volterra type in the time scale setting. Moreover, we study the stability of the
considered equation in the Hyers-Ulam-Rassias sense. Our technique relies on
the successive approximation method, and we use a time scale variant of the induction
principle to show that equation (1.1) is stable on unbounded domains in
Hyers-Ulam-Rassias sense.
| 0 | 0 | 1 | 0 | 0 | 0 |
Near-IR period-luminosity relations for pulsating stars in $ω$ Centauri (NGC 5139) | $\omega$ Centauri (NGC 5139) hosts hundreds of pulsating variable stars of
different types, thus representing a treasure trove for studies of their
corresponding period-luminosity (PL) relations. Our goal in this study is to
obtain the PL relations for RR Lyrae, and SX Phoenicis stars in the field of
the cluster, based on high-quality, well-sampled light curves in the
near-infrared (IR). $\omega$ Centauri was observed using VIRCAM mounted on
VISTA. A total of 42 epochs in $J$ and 100 epochs in $K_{\rm S}$ were obtained,
spanning 352 days. Point-spread function photometry was performed using DoPhot
and DAOPHOT in the outer and inner regions of the cluster, respectively. Based
on the comprehensive catalogue of near-IR light curves thus secured, PL
relations were obtained for the different types of pulsators in the cluster,
both in the $J$ and $K_{\rm S}$ bands. This includes the first PL relations in
the near-IR for fundamental-mode SX Phoenicis stars. The near-IR magnitudes and
periods of Type II Cepheids and RR Lyrae stars were used to derive an updated
true distance modulus to the cluster, with a resulting value of $(m-M)_0 =
13.708 \pm 0.035 \pm 0.10$ mag, where the error bars correspond to the adopted
statistical and systematic errors, respectively. Adding the errors in
quadrature, this is equivalent to a heliocentric distance of $5.52\pm 0.27$
kpc.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pseudo-deterministic Proofs | We introduce pseudo-deterministic interactive proofs (psdAM): interactive
proof systems for search problems where the verifier is guaranteed with high
probability to produce the same output on different executions. As in the case
with classical interactive proofs, the verifier is a probabilistic polynomial
time algorithm interacting with an untrusted powerful prover.
We view pseudo-deterministic interactive proofs as an extension of the study
of pseudo-deterministic randomized polynomial time algorithms: the goal of the
latter is to find canonical solutions to search problems whereas the goal of
the former is to prove that a solution to a search problem is canonical to a
probabilistic polynomial time verifier. Alternatively, one may think of the
powerful prover as aiding the probabilistic polynomial time verifier to find
canonical solutions to search problems, with high probability over the
randomness of the verifier. The challenge is that pseudo-determinism should
hold not only with respect to the randomness, but also with respect to the
prover: a malicious prover should not be able to cause the verifier to output a
solution other than the unique canonical one.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Mechanism behind Erosive Bursts in Porous Media | Erosion and deposition during flow through porous media can lead to large
erosive bursts that manifest as jumps in permeability and pressure loss. Here
we reveal that the cause of these bursts is the re-opening of clogged pores
when the pressure difference between two opposite sites of the pore surpasses a
certain threshold. We perform numerical simulations of flow through porous
media and compare our predictions to experimental results, recovering with
excellent agreement shape and power-law distribution of pressure loss jumps,
and the behavior of the permeability jumps as a function of particle
concentration. Furthermore, we find that erosive bursts only occur for pressure
gradient thresholds within the range of two critical values, independent of how
the flow is driven. Our findings provide a better understanding of sudden sand
production in oil wells and breakthrough in filtration.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Maximum Likelihood Degree of Toric Varieties | We study the maximum likelihood degree (ML degree) of toric varieties, known
as discrete exponential models in statistics. By introducing scaling
coefficients to the monomial parameterization of the toric variety, one can
change the ML degree. We show that the ML degree is equal to the degree of the
toric variety for generic scalings, while it drops if and only if the scaling
vector is in the locus of the principal $A$-determinant. We also illustrate how
to compute the ML estimate of a toric variety numerically via homotopy
continuation from a scaled toric variety with low ML degree. Throughout, we
include examples motivated by algebraic geometry and statistics. We compute the
ML degree of rational normal scrolls and a large class of Veronese-type
varieties. In addition, we investigate the ML degree of scaled Segre varieties,
hierarchical loglinear models, and graphical models.
| 0 | 0 | 1 | 1 | 0 | 0 |
Low Mach number limit of a pressure correction MAC scheme for compressible barotropic flows | We study the incompressible limit of a pressure correction MAC scheme [3] for
the unsteady compressible barotropic Navier-Stokes equations. Provided the
initial data are well-prepared, the solution of the numerical scheme converges,
as the Mach number tends to zero, towards the solution of the classical
pressure correction inf-sup stable MAC scheme for the incompressible
Navier-Stokes equations.
| 0 | 1 | 1 | 0 | 0 | 0 |