title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Insider-Attacks on Physical-Layer Group Secret-Key Generation in Wireless Networks | Physical-layer group secret-key (GSK) generation is an effective way of
generating secret keys in wireless networks, wherein the nodes exploit inherent
randomness in the wireless channels to generate group keys, which are
subsequently applied to secure messages while broadcasting, relaying, and other
network-level communications. While existing GSK protocols focus on securing
the common source of randomness from external eavesdroppers, they assume that
the legitimate nodes of the group are trusted. In this paper, we address
insider attacks from the legitimate participants of the wireless network during
the key generation process. Instead of addressing conspicuous attacks such as
switching-off communication, injecting noise, or denying consensus on group
keys, we introduce stealth attacks that can go undetected against
state-of-the-art GSK schemes. We propose two forms of attacks, namely: (i)
different-key attacks, wherein an insider attempts to generate different keys
at different nodes, especially across nodes that are out of range so that they
fail to recover group messages despite possessing the group key, and (ii)
low-rate key attacks, wherein an insider alters the common source of randomness
so as to reduce the key-rate. We also discuss various detection techniques,
which are based on detecting anomalies and inconsistencies on the channel
measurements at the legitimate nodes. Through simulations we show that GSK
generation schemes are vulnerable to insider-threats, especially on topologies
that cannot support additional secure links between neighbouring nodes to
verify the attacks.
| 1 | 0 | 1 | 0 | 0 | 0 |
Multi-modal Feedback for Affordance-driven Interactive Reinforcement Learning | Interactive reinforcement learning (IRL) extends traditional reinforcement
learning (RL) by allowing an agent to interact with parent-like trainers during
a task. In this paper, we present an IRL approach using dynamic audio-visual
input in terms of vocal commands and hand gestures as feedback. Our
architecture integrates multi-modal information to provide robust commands from
multiple sensory cues along with a confidence value indicating the
trustworthiness of the feedback. The integration process also considers the
case in which the two modalities convey incongruent information. Additionally,
we modulate the influence of sensory-driven feedback in the IRL task using
goal-oriented knowledge in terms of contextual affordances. We implement a
neural network architecture to predict the effect of performed actions with
different objects to avoid failed-states, i.e., states from which it is not
possible to accomplish the task. In our experimental setup, we explore the
interplay of multimodal feedback and task-specific affordances in a robot
cleaning scenario. We compare the learning performance of the agent under four
different conditions: traditional RL, multi-modal IRL, and each of these two
setups with the use of contextual affordances. Our experiments show that the
best performance is obtained by using audio-visual feedback with
affordance-modulated IRL. The obtained results demonstrate the importance of
multi-modal sensory processing integrated with goal-oriented knowledge in IRL
tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America | As South and Central American countries prepare for increased birth defects
from Zika virus outbreaks and plan for mitigation strategies to minimize
ongoing and future outbreaks, understanding important characteristics of Zika
outbreaks and how they vary across regions is a challenging and important
problem. We developed a mathematical model for the 2015 Zika virus outbreak
dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly
available data provided by the Pan American Health Organization, using
Approximate Bayesian Computation to estimate parameter distributions and
provide uncertainty quantification. An important model input is the at-risk
susceptible population, which can vary with a number of factors including
climate, elevation, population density, and socio-economic status. We informed
this initial condition using the highest historically reported dengue incidence
modified by the probable dengue reporting rates in the chosen countries. The
model indicated that a country-level analysis was not appropriate for Colombia.
We then estimated the basic reproduction number, or the expected number of new
human infections arising from a single infected human, to range between 4 and 6
for El Salvador and Suriname, with medians of 4.3 and 5.3, respectively. We
estimated the reporting rate to be around 16% in El Salvador and 18% in
Suriname with estimated total outbreak sizes of 73,395 and 21,647 people,
respectively. The uncertainty in parameter estimates highlights a need for
research and data collection that will better constrain parameter ranges.
| 0 | 0 | 0 | 1 | 0 | 0 |
A systematic study of the class imbalance problem in convolutional neural networks | In this study, we systematically investigate the impact of class imbalance on
classification performance of convolutional neural networks (CNNs) and compare
frequently used methods to address the issue. Class imbalance is a common
problem that has been comprehensively studied in classical machine learning,
yet very limited systematic research is available in the context of deep
learning. In our study, we use three benchmark datasets of increasing
complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of
imbalance on classification and perform an extensive comparison of several
methods to address the issue: oversampling, undersampling, two-phase training,
and thresholding that compensates for prior class probabilities. Our main
evaluation metric is area under the receiver operating characteristic curve
(ROC AUC), adjusted to multi-class tasks, since the overall accuracy metric is
associated with notable difficulties in the context of imbalanced data. Based
on results from our experiments we conclude that (i) the effect of class
imbalance on classification performance is detrimental; (ii) the method of
addressing class imbalance that emerged as dominant in almost all analyzed
scenarios was oversampling; (iii) oversampling should be applied to the level
that completely eliminates the imbalance, whereas the optimal undersampling
ratio depends on the extent of imbalance; (iv) as opposed to some classical
machine learning models, oversampling does not cause overfitting of CNNs; (v)
thresholding should be applied to compensate for prior class probabilities when
overall number of properly classified cases is of interest.
| 1 | 0 | 0 | 1 | 0 | 0 |
New simple lattices in products of trees and their projections | Let $\Gamma \leq \mathrm{Aut}(T_{d_1}) \times \mathrm{Aut}(T_{d_2})$ be a
group acting freely and transitively on the product of two regular trees of
degree $d_1$ and $d_2$. We develop an algorithm which computes the closure of
the projection of $\Gamma$ on $\mathrm{Aut}(T_{d_t})$ under the hypothesis that
$d_t \geq 6$ is even and that the local action of $\Gamma$ on $T_{d_t}$
contains $\mathrm{Alt}(d_t)$. We show that if $\Gamma$ is torsion-free and $d_1
= d_2 = 6$, exactly seven closed subgroups of $\mathrm{Aut}(T_6)$ arise in this
way. We also construct two new infinite families of virtually simple lattices
in $\mathrm{Aut}(T_{6}) \times \mathrm{Aut}(T_{4n})$ and in
$\mathrm{Aut}(T_{2n}) \times \mathrm{Aut}(T_{2n+1})$ respectively, for all $n
\geq 2$. In particular we provide an explicit presentation of a torsion-free
infinite simple group on $5$ generators and $10$ relations, that splits as an
amalgamated free product of two copies of $F_3$ over $F_{11}$. We include
information arising from computer-assisted exhaustive searches of lattices in
products of trees of small degrees. In an appendix by Pierre-Emmanuel Caprace,
some of our results are used to show that abstract and relative commensurator
groups of free groups are almost simple, providing partial answers to questions
of Lubotzky and Lubotzky-Mozes-Zimmer.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fourth-order time-stepping for stiff PDEs on the sphere | We present in this paper algorithms for solving stiff PDEs on the unit sphere
with spectral accuracy in space and fourth-order accuracy in time. These are
based on a variant of the double Fourier sphere method in coefficient space
with multiplication matrices that differ from the usual ones, and
implicit-explicit time-stepping schemes. Operating in coefficient space with
these new matrices allows one to use a sparse direct solver, avoids the
coordinate singularity and maintains smoothness at the poles, while
implicit-explicit schemes circumvent severe restrictions on the time-steps due
to stiffness. A comparison is made against exponential integrators and it is
found that implicit-explicit schemes perform best. Implementations in MATLAB
and Chebfun make it possible to compute the solution of many PDEs to high
accuracy in a very convenient fashion.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Formalizing Fairness in Prediction with Machine Learning | Machine learning algorithms for prediction are increasingly being used in
critical decisions affecting human lives. Various fairness formalizations, with
no firm consensus yet, are employed to prevent such algorithms from
systematically discriminating against people based on certain attributes
protected by law. The aim of this article is to survey how fairness is
formalized in the machine learning literature for the task of prediction and
present these formalizations with their corresponding notions of distributive
justice from the social sciences literature. We provide theoretical as well as
empirical critiques of these notions from the social sciences literature and
explain how these critiques limit the suitability of the corresponding fairness
formalizations to certain domains. We also suggest two notions of distributive
justice which address some of these critiques and discuss avenues for
prospective fairness formalizations.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Memory Function Formalism: A Review | An introduction to the Zwanzig-Mori-Götze-Wölfle memory function
formalism (or generalized Drude formalism) is presented. This formalism is used
extensively in analyzing the experimentally obtained optical conductivity of
strongly correlated systems such as cuprates and iron-based superconductors.
For a broader perspective, both the generalised Langevin equation approach and
the projection operator approach for the memory function formalism are given.
The Götze-Wölfle perturbative expansion of memory function is presented
and its application to the computation of the dynamical conductivity of metals
is also reviewed. This review of the formalism contains all the mathematical
details for pedagogical purposes.
| 0 | 1 | 0 | 0 | 0 | 0 |
RIPML: A Restricted Isometry Property based Approach to Multilabel Learning | The multilabel learning problem with a large number of labels, features, and
data-points has generated a tremendous interest recently. A recurring theme of
these problems is that only a few labels are active in any given datapoint as
compared to the total number of labels. However, only a few existing works take
direct advantage of this inherent extreme sparsity in the label space. By
virtue of the Restricted Isometry Property (RIP), satisfied by
many random ensembles, we propose a novel procedure for multilabel learning
known as RIPML. During the training phase, in RIPML, labels are projected onto
a random low-dimensional subspace followed by solving a least-square problem in
this subspace. Inference is done by a k-nearest neighbor (kNN) based approach.
We demonstrate the effectiveness of RIPML by conducting extensive simulations
and comparing results with the state-of-the-art linear dimensionality reduction
based approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
Erratum to: Medial axis and singularities | We correct one erroneous statement made in our recent paper "Medial axis and
singularities".
| 0 | 0 | 1 | 0 | 0 | 0 |
AP-initiated Multi-User Transmissions in IEEE 802.11ax WLANs | Next-generation 802.11ax WLANs will make extensive use of multi-user
communications in both downlink (DL) and uplink (UL) directions to achieve high
and efficient spectrum utilization in scenarios with many user stations per
access point. It will become possible with the support of multi-user (MU)
multiple input, multiple output (MIMO) and orthogonal frequency division
multiple access (OFDMA) transmissions. In this paper, we first overview the
novel characteristics introduced by IEEE 802.11ax to implement AP-initiated
OFDMA and MU-MIMO transmissions in both downlink and uplink directions. Namely,
we describe the changes made at the physical layer and at the medium access
control layer to support OFDMA, the use of \emph{trigger frames} to schedule
uplink multi-user transmissions, and the new \emph{multi-user RTS/CTS
mechanism} to protect large multi-user transmissions from collisions. Then, in
order to study the achievable throughput of an 802.11ax network, we use both
mathematical analysis and simulations to numerically quantify the benefits of
MU transmissions and the impact of 802.11ax overheads on the WLAN saturation
throughput. Results show the advantages of MU transmissions in scenarios with
many user stations, also providing some novel insights on the conditions in
which 802.11ax WLANs are able to maximize their performance, such as the
existence of an optimal number of active user stations in terms of throughput,
or the need to provide strict prioritization to AP-initiated MU transmissions
to avoid collisions with user stations.
| 1 | 0 | 0 | 0 | 0 | 0 |
What do we know about the geometry of space? | The belief that three dimensional space is infinite and flat in the absence
of matter is a canon of physics that has been in place since the time of
Newton. The assumption that space is flat at infinity has guided several modern
physical theories. But what do we actually know to support this belief? A
simple argument, called the "Telescope Principle", asserts that all that we can
know about space is bounded by observations. Physical theories are best when
they can be verified by observations, and that should also apply to the
geometry of space. The Telescope Principle is simple to state, but it leads to
very interesting insights into relativity and Yang-Mills theory via projective
equivalences of their respective spaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Evidence of new twinning modes in magnesium questioning the shear paradigm | Twinning is an important deformation mode of hexagonal close-packed metals.
The crystallographic theory is based on the 150-year-old concept of simple
shear. The habit plane of the twin is the shear plane; it is invariant. Here we
present Electron BackScatter Diffraction observations and crystallographic
analysis of a millimeter size twin in a magnesium single crystal whose straight
habit plane, unambiguously determined both in the parent crystal and in its twin,
is not an invariant plane. This experimental evidence demonstrates that
macroscopic deformation twinning can be obtained by a mechanism that is not a
simple shear. Besides, this unconventional twin is often co-formed with a new
conventional twin that exhibits the lowest shear magnitude ever reported in
metals. The existence of unconventional twinning introduces a shift of paradigm
and calls for the development of a new theory for the displacive
transformations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Probing the Interatomic Potential of Solids by Strong-Field Nonlinear Phononics | Femtosecond optical pulses at mid-infrared frequencies have opened up the
nonlinear control of lattice vibrations in solids. So far, all applications
have relied on second order phonon nonlinearities, which are dominant at field
strengths near 1 MV/cm. In this regime, nonlinear phononics can transiently
change the average lattice structure, and with it the functionality of a
material. Here, we achieve an order-of-magnitude increase in field strength,
and explore higher-order lattice nonlinearities. We drive up to five phonon
harmonics of the A1 mode in LiNbO3. Phase-sensitive measurements of atomic
trajectories in this regime are used to experimentally reconstruct the
interatomic potential and to benchmark ab-initio calculations for this
material. Tomography of the free-energy surface by high-order nonlinear
phononics will impact many aspects of materials research, including the study
of classical and quantum phase transitions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Assessment of learning tomography using Mie theory | In optical diffraction tomography, the multiply scattered field is a
nonlinear function of the refractive index of the object. The Rytov method is a
linear approximation of the forward model, and is commonly used to reconstruct
images. Recently, we introduced a reconstruction method based on the Beam
Propagation Method (BPM) that takes the nonlinearity into account. We refer to
this method as Learning Tomography (LT). In this paper, we carry out
simulations in order to assess the performance of LT over the linear iterative
method. Each algorithm has been rigorously assessed for spherical objects, with
synthetic data generated using the Mie theory. By varying the refractive index (RI) contrast and
the size of the objects, we show that the LT reconstruction is more accurate
and robust than the reconstruction based on the linear model. In addition, we
show that LT is able to correct the distortion evident in the Rytov
approximation due to limitations in phase unwrapping. More importantly, the
capacity of LT to handle the multiple scattering problem is demonstrated by
simulations of multiple cylinders using the Mie theory and confirmed by
experimental results of two spheres.
| 0 | 1 | 0 | 0 | 0 | 0 |
Single Molecule Studies Under Constant Force Using Model Based Robust Control Design | Optical tweezers have enabled important insights into intracellular transport
through the investigation of motor proteins, with their ability to manipulate
particles at the microscale, affording femtonewton force resolution. Their use
to realize a constant-force clamp has enabled vital insights into the behavior
of motor proteins under different load conditions. However, the varying nature
of disturbances and the effect of thermal noise pose key challenges to force
regulation. Furthermore, often the main aim of many studies is to determine the
motion of the motor and the statistics related to the motion, which can be at
odds with the force regulation objective. In this article, we propose a mixed
objective H2-Hinfinity optimization framework using a model-based design, that
achieves the dual goals of force regulation and real time motion estimation
with quantifiable guarantees. Here, we minimize the Hinfinity norm for the
force regulation and error in step estimation while maintaining the H2 norm of
the noise on step estimate within user specified bounds. We demonstrate the
efficacy of the framework through extensive simulations and an experimental
implementation using an optical tweezer setup with live samples of the motor
protein kinesin, where regulation of forces below 1 piconewton with errors
below 10 percent is obtained while simultaneously providing real time estimates
of motor motion.
| 0 | 1 | 1 | 0 | 0 | 0 |
Measuring the effects of Loop Quantum Cosmology in the CMB data | In this Essay we investigate the observational signatures of Loop Quantum
Cosmology (LQC) in the CMB data. First, we concentrate on the dynamics of LQC
and we provide the basic cosmological functions. We then obtain the power
spectrum of scalar and tensor perturbations in order to study the performance
of LQC against the latest CMB data. We find that LQC provides a robust
prediction for the main slow-roll parameters, like the scalar spectral index
and the tensor-to-scalar fluctuation ratio, which are in excellent agreement
within $1\sigma$ with the values recently measured by the Planck collaboration.
This result indicates that LQC can be seen as an alternative scenario with
respect to that of standard inflation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization | In many modern machine learning applications, structures of underlying
mathematical models often yield nonconvex optimization problems. Due to the
intractability of nonconvexity, there is a rising need to develop efficient
methods for solving general nonconvex problems with certain performance
guarantee. In this work, we investigate the accelerated proximal gradient
method for nonconvex programming (APGnc). The method compares between a usual
proximal gradient step and a linear extrapolation step, and accepts the one
that has a lower function value to achieve a monotonic decrease. Specifically,
under a general nonsmooth and nonconvex setting, we provide a rigorous argument
to show that the limit points of the sequence generated by APGnc are critical
points of the objective function. Then, by exploiting the
Kurdyka-{\L}ojasiewicz (\KL) property for a broad class of functions, we
establish the linear and sub-linear convergence rates of the function value
sequence generated by APGnc. We further propose a stochastic variance reduced
APGnc (SVRG-APGnc), and establish its linear convergence under a special case
of the \KL property. We also extend the analysis to the inexact version of
these methods and develop an adaptive momentum strategy that improves the
numerical performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exploring the Interconnectedness of Cryptocurrencies using Correlation Networks | Correlation networks were used to detect characteristics which, although
fixed over time, have an important influence on the evolution of prices over
time. Potentially important features were identified using the websites and
whitepapers of cryptocurrencies with the largest userbases. These were assessed
using two datasets to enhance robustness: one with fourteen cryptocurrencies
beginning from 9 November 2017, and a subset with nine cryptocurrencies
starting 9 September 2016, both ending 6 March 2018. Separately analysing the
subset of cryptocurrencies raised the number of data points from 115 to 537,
and improved robustness to changes in relationships over time. Excluding USD
Tether, the results showed a positive association between different
cryptocurrencies that was statistically significant. Robust, strong positive
associations were observed for six cryptocurrencies where one was a fork of the
other; Bitcoin / Bitcoin Cash was an exception. There was evidence for the
existence of a group of cryptocurrencies particularly associated with Cardano,
and a separate group correlated with Ethereum. The data was not consistent with
a token's functionality or creation mechanism being the dominant determinants
of the evolution of prices over time but did suggest that factors other than
speculation contributed to the price.
| 0 | 0 | 0 | 0 | 0 | 1 |
Affective Neural Response Generation | Existing neural conversational models process natural language primarily on a
lexico-syntactic level, thereby ignoring one of the most crucial components of
human-to-human dialogue: its affective content. We take a step in this
direction by proposing three novel ways to incorporate affective/emotional
aspects into long short-term memory (LSTM) encoder-decoder neural conversation
models: (1) affective word embeddings, which are cognitively engineered, (2)
affect-based objective functions that augment the standard cross-entropy loss,
and (3) affectively diverse beam search for decoding. Experiments show that
these techniques improve the open-domain conversational prowess of
encoder-decoder networks by enabling them to produce emotionally rich responses
that are more interesting and natural.
| 1 | 0 | 0 | 0 | 0 | 0 |
Energy Efficient Power Allocation in Massive MIMO Systems based on Standard Interference Function | In this paper, energy efficient power allocation for downlink massive MIMO
systems is investigated. A constrained non-convex optimization problem is
formulated to maximize the energy efficiency (EE), which takes into account the
quality of service (QoS) requirements. By exploiting the properties of
fractional programming and the lower bound of the user data rate, the
non-convex optimization problem is transformed into a convex optimization
problem. The Lagrangian dual function method is utilized to convert the
constrained convex problem into an unconstrained convex one. Due to the
multi-variable coupling problem caused by the intra-user interference, it is
intractable to derive an explicit solution to the above optimization problem.
Exploiting the standard interference function, we propose an implicit iterative
algorithm to solve the unconstrained convex optimization problem and obtain the
optimal power allocation scheme. Simulation results show that the proposed
iterative algorithm converges in just a few iterations, and demonstrate the
impact of the number of users and the number of antennas on the EE.
| 1 | 0 | 0 | 0 | 0 | 0 |
Composite fermion basis for M-component Bose gases | The composite fermion (CF) formalism produces wave functions that are not
always linearly independent. This is especially so in the low angular momentum
regime in the lowest Landau level, where a subclass of CF states, known as
simple states, gives a good description of the low energy spectrum. For the
two-component Bose gas, explicit bases avoiding the large number of redundant
states have been found. We generalize one of these bases to the $M$-component
Bose gas and prove its validity. We also show that the numbers of linearly
independent simple states for different values of angular momentum are given by
coefficients of $q$-multinomials.
| 0 | 1 | 0 | 0 | 0 | 0 |
An FPT Algorithm Beating 2-Approximation for $k$-Cut | In the $k$-Cut problem, we are given an edge-weighted graph $G$ and an
integer $k$, and have to remove a set of edges with minimum total weight so
that $G$ has at least $k$ connected components. Prior work on this problem
gives, for all $h \in [2,k]$, a $(2-h/k)$-approximation algorithm for $k$-cut
that runs in time $n^{O(h)}$. Hence to get a $(2 - \varepsilon)$-approximation
algorithm for some absolute constant $\varepsilon$, the best runtime using
prior techniques is $n^{O(k\varepsilon)}$. Moreover, it was recently shown that
getting a $(2 - \varepsilon)$-approximation for general $k$ is NP-hard,
assuming the Small Set Expansion Hypothesis.
If we use the size of the cut as the parameter, an FPT algorithm to find the
exact $k$-Cut is known, but solving the $k$-Cut problem exactly is $W[1]$-hard
if we parameterize only by the natural parameter of $k$. An immediate question
is: \emph{can we approximate $k$-Cut better in FPT-time, using $k$ as the
parameter?}
We answer this question positively. We show that for some absolute constant
$\varepsilon > 0$, there exists a $(2 - \varepsilon)$-approximation algorithm
that runs in time $2^{O(k^6)} \cdot \widetilde{O} (n^4)$. This is the first FPT
algorithm that is parameterized only by $k$ and strictly improves the
$2$-approximation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamics beyond dynamic jam; unfolding the Painlevé paradox singularity | This paper analyses in detail the dynamics in a neighbourhood of a
Génot-Brogliato point, colloquially termed the G-spot, which physically
represents so-called dynamic jam in rigid body mechanics with unilateral
contact and Coulomb friction. Such singular points arise in planar rigid body
problems with slipping point contacts at the intersection between the
conditions for onset of lift-off and for the Painlevé paradox. The G-spot can
be approached in finite time by an open set of initial conditions in a general
class of problems. The key question addressed is what happens next. In
principle trajectories could, at least instantaneously, lift off, continue in
slip, or undergo a so-called impact without collision. Such impacts are
non-local in momentum space and depend on properties evaluated away from the
G-spot. The results are illustrated on a particular physical example, namely
a frictional impact oscillator first studied by Leine et al.
The answer is obtained via an analysis that involves a consistent contact
regularisation with a stiffness proportional to $1/\varepsilon^2$. Taking a
singular limit as $\varepsilon \to 0$, one finds an inner and an outer
asymptotic zone in the neighbourhood of the G-spot. Two distinct cases are
found according to whether the contact force becomes infinite or remains finite
as the G-spot is approached. In the former case it is argued that there can be
no canard trajectories, and so an impact without collision must occur. In the latter
case, the canard trajectory acts as a dividing surface between trajectories
that momentarily lift off and those that do not before taking the impact. The
orientation of the initial condition set leading to each eventuality is shown
to change each time a certain positive parameter $\beta$ passes through an
integer.
| 0 | 1 | 0 | 0 | 0 | 0 |
Existence and regularity of positive solutions of quasilinear elliptic problems with singular semilinear term | This paper deals with existence and regularity of positive solutions of
singular elliptic problems on a smooth bounded domain with Dirichlet boundary
conditions involving the $\Phi$-Laplacian operator. The proof of existence is
based on a variant of the generalized Galerkin method that we developed,
inspired by ideas of Browder, together with a comparison principle. By using a kind of
Moser iteration scheme we show $L^{\infty}(\Omega)$-regularity for positive
solutions.
| 0 | 0 | 1 | 0 | 0 | 0 |
World Literature According to Wikipedia: Introduction to a DBpedia-Based Framework | Among the manifold takes on world literature, it is our goal to contribute to
the discussion from a digital point of view by analyzing the representation of
world literature in Wikipedia with its millions of articles in hundreds of
languages. As a preliminary, we introduce and compare three different
approaches to identify writers on Wikipedia using data from DBpedia, a
community project with the goal of extracting and providing structured
information from Wikipedia. Equipped with our basic set of writers, we analyze
how they are represented throughout the 15 biggest Wikipedia language versions.
We combine intrinsic measures (mostly examining the connectedness of articles)
with extrinsic ones (analyzing how often articles are frequented by readers)
and develop methods to evaluate our results. The better part of our findings
seems to convey a rather conservative, old-fashioned version of world
literature, but a version derived from reproducible facts revealing an implicit
literary canon based on the editing and reading behavior of millions of people.
While still having to solve some known issues, the introduced methods will help
us build an observatory of world literature to further investigate its
representativeness and biases.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Task Clustering for Deep Many-Task Learning | We investigate task clustering for deep-learning based multi-task and
few-shot learning in a many-task setting. We propose a new method to measure
task similarities with a cross-task transfer performance matrix for the deep
learning scenario. Although this matrix provides us critical information
regarding similarity between tasks, its asymmetric property and unreliable
performance scores can affect conventional clustering methods adversely.
Additionally, the uncertain task-pairs, i.e., the ones with extremely
asymmetric transfer scores, may collectively mislead clustering algorithms to
output an inaccurate task-partition. To overcome these limitations, we propose
a novel task-clustering algorithm by using the matrix completion technique. The
proposed algorithm constructs a partially-observed similarity matrix based on
the certainty of cluster membership of the task-pairs. We then use a matrix
completion algorithm to complete the similarity matrix. Our theoretical
analysis shows that under mild constraints, the proposed algorithm will
perfectly recover the underlying "true" similarity matrix with a high
probability. Our results show that the new task clustering method can discover
task clusters for training flexible and superior neural network models in a
multi-task learning setup for sentiment classification and dialog intent
classification tasks. Our task clustering approach also extends metric-based
few-shot learning methods to adapt multiple metrics, which demonstrates
empirical advantages when the tasks are diverse.
| 1 | 0 | 0 | 1 | 0 | 0 |
Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets | The roles played by learning and memorization represent an important topic in
deep learning research. Recent work on this subject has shown that the
optimization behavior of DNNs trained on shuffled labels is qualitatively
different from DNNs trained with real labels. Here, we propose a novel
permutation approach that can differentiate memorization from learning in deep
neural networks (DNNs) trained as usual (i.e., using the real labels to guide
the learning, rather than shuffled labels). The evaluation of whether the DNN
has learned and/or memorized happens in a separate step where we compare the
predictive performance of a shallow classifier trained with the features
learned by the DNN, against multiple instances of the same classifier, trained
on the same input, but using shuffled labels as outputs. By evaluating these
shallow classifiers in validation sets that share structure with the training
set, we are able to tell apart learning from memorization. Application of our
permutation approach to multi-layer perceptrons and convolutional neural
networks trained on image data corroborated many findings from other groups.
Most importantly, our illustrations also uncovered interesting dynamic patterns
about how DNNs memorize over increasing numbers of training epochs, and support
the surprising result that DNNs are still able to learn, rather than only
memorize, when trained with pure Gaussian noise as input.
| 0 | 0 | 0 | 1 | 0 | 0 |
Bias Correction For Paid Search In Media Mix Modeling | Evaluating the return on ad spend (ROAS), the causal effect of advertising on
sales, is critical to advertisers for understanding the performance of their
existing marketing strategy as well as how to improve and optimize it. Media
Mix Modeling (MMM) has been used as a convenient analytical tool to address the
problem using observational data. However, it is well recognized that MMM
suffers from various fundamental challenges: data collection, model
specification and selection bias due to ad targeting, among others
\citep{chan2017,wolfe2016}.
In this paper, we study the challenge associated with measuring the impact of
search ads in MMM, namely the selection bias due to ad targeting. Using causal
diagrams of the search ad environment, we derive a statistically principled
method for bias correction based on the \textit{back-door} criterion
\citep{pearl2013causality}. We use case studies to show that the method
provides promising results by comparison with results from randomized
experiments. We also report a more complex case study where the advertiser had
spent on more than a dozen media channels but results from a randomized
experiment are not available. Both our theory and empirical studies suggest
that in some common, practical scenarios, one may be able to obtain an
approximately unbiased estimate of search ad ROAS.
| 0 | 0 | 0 | 1 | 0 | 0 |
Neville's algorithm revisited | Neville's algorithm is known to provide an efficient and numerically stable
solution for polynomial interpolations. In this paper, an extension of this
algorithm is presented which includes the derivatives of the interpolating
polynomial.
| 1 | 0 | 0 | 0 | 0 | 0 |
Forecasting and Granger Modelling with Non-linear Dynamical Dependencies | Traditional linear methods for forecasting multivariate time series are not
able to satisfactorily model the non-linear dependencies that may exist in
non-Gaussian series. We build on the theory of learning vector-valued functions
in the reproducing kernel Hilbert space and develop a method for learning
prediction functions that accommodate such non-linearities. The method not only
learns the predictive function but also the matrix-valued kernel underlying the
function search space directly from the data. Our approach is based on learning
multiple matrix-valued kernels, each of those composed of a set of input
kernels and a set of output kernels learned in the cone of positive
semi-definite matrices. In addition to superior predictive performance in the
presence of strong non-linearities, our method also recovers the hidden dynamic
relationships between the series and thus is a new alternative to existing
graphical Granger techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
HDLTex: Hierarchical Deep Learning for Text Classification | The continually increasing number of documents produced each year
necessitates ever improving information processing methods for searching,
retrieving, and organizing text. Central to these information processing
methods is document classification, which has become an important application
for supervised learning. Recently the performance of these traditional
classifiers has degraded as the number of documents has increased. This is
because along with this growth in the number of documents has come an increase
in the number of categories. This paper approaches this problem differently
from current document classification methods that view the problem as
multi-class classification. Instead we perform hierarchical classification
using an approach we call Hierarchical Deep Learning for Text classification
(HDLTex). HDLTex employs stacks of deep learning architectures to provide
specialized understanding at each level of the document hierarchy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-task Learning with Gradient Guided Policy Specialization | We present a method for efficient learning of control policies for multiple
related robotic motor skills. Our approach consists of two stages, joint
training and specialization training. During the joint training stage, a neural
network policy is trained with minimal information to disambiguate the motor
skills. This forces the policy to learn a common representation of the
different tasks. Then, during the specialization training stage we selectively
split the weights of the policy based on a per-weight metric that measures the
disagreement among the multiple tasks. By splitting part of the control policy,
it can be further trained to specialize to each task. To update the control
policy during learning, we use Trust Region Policy Optimization with
Generalized Advantage Function (TRPOGAE). We propose a modification to the
gradient update stage of TRPO to better accommodate multi-task learning
scenarios. We evaluate our approach on three continuous motor skill learning
problems in simulation: 1) a locomotion task where three single legged robots
with considerable difference in shape and size are trained to hop forward, 2) a
manipulation task where three robot manipulators with different sizes and joint
types are trained to reach different locations in 3D space, and 3) locomotion
of a two-legged robot, whose range of motion of one leg is constrained in
different ways. We compare our training method to three baselines. The first
baseline uses only joint training for the policy, the second trains independent
policies for each task, and the last randomly selects weights to split. We show
that our approach learns more efficiently than each of the baseline methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mean square in the prime geodesic theorem | We prove upper bounds for the mean square of the remainder in the prime
geodesic theorem, for every cofinite Fuchsian group, which improve on average
on the best known pointwise bounds. The proof relies on the Selberg trace
formula. For the modular group we prove a refined upper bound by using the
Kuznetsov trace formula.
| 0 | 0 | 1 | 0 | 0 | 0 |
Approximation Fixpoint Theory and the Well-Founded Semantics of Higher-Order Logic Programs | We define a novel, extensional, three-valued semantics for higher-order logic
programs with negation. The new semantics is based on interpreting the types of
the source language as three-valued Fitting-monotonic functions at all levels
of the type hierarchy. We prove that there exists a bijection between such
Fitting-monotonic functions and pairs of two-valued-result functions where the
first member of the pair is monotone-antimonotone and the second member is
antimonotone-monotone. By deriving an extension of consistent approximation
fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we
define an iterative procedure that produces for any given higher-order logic
program a distinguished extensional model. We demonstrate that this model is
actually a minimal one. Moreover, we prove that our construction generalizes
the familiar well-founded semantics for classical logic programs, making in
this way our proposal an appealing formulation for capturing the well-founded
semantics for higher-order logic programs. This paper is under consideration
for acceptance in TPLP.
| 1 | 0 | 0 | 0 | 0 | 0 |
An Application of Deep Neural Networks in the Analysis of Stellar Spectra | Spectroscopic surveys require fast and efficient analysis methods to maximize
their scientific impact. Here we apply a deep neural network architecture to
analyze both SDSS-III APOGEE DR13 and synthetic stellar spectra. When our
convolutional neural network model (StarNet) is trained on APOGEE spectra, we
show that the stellar parameters (temperature, gravity, and metallicity) are
determined with similar precision and accuracy as the APOGEE pipeline. StarNet
can also predict stellar parameters when trained on synthetic data, with
excellent precision and accuracy for both APOGEE data and synthetic data, over
a wide range of signal-to-noise ratios. In addition, the statistical
uncertainties in the stellar parameter determinations are comparable to the
differences between the APOGEE pipeline results and those determined
independently from optical spectra. We compare StarNet to other data-driven
methods; for example, StarNet and the Cannon 2 show similar behaviour when
trained with the same datasets; however, StarNet performs poorly on small
training sets like those used by the original Cannon. The influence of the
spectral features on the stellar parameters is examined via partial derivatives
of the StarNet model results with respect to the input spectra. While StarNet
was developed using the APOGEE observed spectra and corresponding ASSET
synthetic data, we suggest that this technique is applicable to other
wavelength ranges and other spectral surveys.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analysis of Service-oriented Modeling Approaches for Viewpoint-specific Model-driven Development of Microservice Architecture | Microservice Architecture (MSA) is a novel service-based architectural style
for distributed software systems. Compared to Service-oriented Architecture
(SOA), MSA puts a stronger focus on self-containment of services. Each
microservice is responsible for realizing exactly one business or technological
capability that is distinct from other services' capabilities. Additionally, on
the implementation and operation level, microservices are self-contained in
that they are developed, tested, deployed and operated independently from each
other. Next to these characteristics that distinguish MSA from SOA, both
architectural styles rely on services as building blocks of distributed
software architecture and hence face similar challenges regarding, e.g.,
service identification, composition and provisioning. However, in contrast to
MSA, SOA may rely on an extensive body of knowledge to tackle these challenges.
Thus, due to both architectural styles being service-based, the question arises
to what degree MSA might draw on existing findings of SOA research and
practice. In this paper we address this question in the field of Model-driven
Development (MDD) for design and operation of service-based architectures.
Therefore, we present an analysis of existing MDD approaches to SOA, which
comprises the identification and semantic clustering of modeling concepts for
SOA design and operation. For each concept cluster, the analysis assesses its
applicability to MDD of MSA (MSA-MDD) and assigns it to a specific modeling
viewpoint. The goal of the presented analysis is to provide a conceptual
foundation for an MSA-MDD metamodel.
| 1 | 0 | 0 | 0 | 0 | 0 |
RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction | RoboJam is a machine-learning system for generating music that assists users
of a touchscreen music app by performing responses to their short
improvisations. This system uses a recurrent artificial neural network to
generate sequences of touchscreen interactions and absolute timings, rather
than high-level musical notes. To accomplish this, RoboJam's network uses a
mixture density layer to predict appropriate touch interaction locations in
space and time. In this paper, we describe the design and implementation of
RoboJam's network and how it has been integrated into a touchscreen music app.
A preliminary evaluation analyses the system in terms of training, musical
generation and user interaction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasi-two-dimensional Fermi surfaces with localized $f$ electrons in the layered heavy-fermion compound CePt$_2$In$_7$ | We report measurements of the de Haas-van Alphen effect in the layered
heavy-fermion compound CePt$_2$In$_7$ in high magnetic fields up to 35 T. Above
an angle-dependent threshold field, we observed several de Haas-van Alphen
frequencies originating from almost ideally two-dimensional Fermi surfaces. The
frequencies are similar to those previously observed to develop only above a
much higher field of 45 T, where a clear anomaly was detected and proposed to
originate from a change in the electronic structure [M. M. Altarawneh et al.,
Phys. Rev. B 83, 081103 (2011)]. Our experimental results are compared with
band structure calculations performed for both CePt$_2$In$_7$ and
LaPt$_2$In$_7$, and the comparison suggests localized $f$ electrons in
CePt$_2$In$_7$. This conclusion is further supported by comparing
experimentally observed Fermi surfaces in CePt$_2$In$_7$ and PrPt$_2$In$_7$,
which are found to be almost identical. The measured effective masses in
CePt$_2$In$_7$ are only moderately enhanced above the bare electron mass $m_0$,
from 2$m_0$ to 6$m_0$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Differential Forms, Linked Fields and the $u$-Invariant | We associate an Albert form to any pair of cyclic algebras of prime degree
$p$ over a field $F$ with $\operatorname{char}(F)=p$ which coincides with the
classical Albert form when $p=2$. We prove that if every Albert form is
isotropic then $H^4(F)=0$. As a result, we obtain that if $F$ is a linked field
with $\operatorname{char}(F)=2$ then its $u$-invariant is either $0,2,4$ or
$8$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A cyclic system with delay and its characteristic equation | A nonlinear cyclic system with delay and the overall negative feedback is
considered. The characteristic equation of the linearized system is studied in
detail. Sufficient conditions for the oscillation of all solutions and for the
existence of monotone solutions are derived in terms of roots of the
characteristic equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Object Detection and Motion Planning for Automated Welding of Tubular Joints | Automatic welding of tubular TKY joints is an important and challenging task
for the marine and offshore industry. In this paper, a framework for tubular
joint detection and motion planning is proposed. The pose of the real tubular
joint is detected using RGB-D sensors, which is used to obtain a
real-to-virtual mapping for positioning the workpiece in a virtual environment.
For motion planning, a Bi-directional Transition based Rapidly exploring Random
Tree (BiTRRT) algorithm is used to generate trajectories for reaching the
desired goals. The complete framework is verified with experiments, and the
results show that the robot welding torch is able to transit without collision
to desired goals which are close to the tubular joint.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nilpotence order growth of recursion operators in characteristic p | We prove that the killing rate of certain degree-lowering "recursion
operators" on a polynomial algebra over a finite field grows slower than
linearly in the degree of the polynomial attacked. We also explain the
motivating application: obtaining a lower bound for the Krull dimension of a
local component of a big mod-p Hecke algebra in the genus-zero case. We sketch
the application for p=2 and p=3 in level one. The case p=2 was first
established by Nicolas and Serre in 2012 using different methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems | Neural models have become ubiquitous in automatic speech recognition systems.
While neural networks are typically used as acoustic models in more complex
systems, recent studies have explored end-to-end speech recognition systems
based on neural networks, which can be trained to directly predict text from
input acoustic features. Although such systems are conceptually elegant and
simpler than traditional systems, it is less obvious how to interpret the
trained models. In this work, we analyze the speech representations learned by
a deep end-to-end model that is based on convolutional and recurrent layers,
and trained with a connectionist temporal classification (CTC) loss. We use a
pre-trained model to generate frame-level features which are given to a
classifier that is trained on frame classification into phones. We evaluate
representations from different layers of the deep model and compare their
quality for predicting phone labels. Our experiments shed light on important
aspects of the end-to-end model such as layer depth, model complexity, and
other design choices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian uncertainty quantification in linear models for diffusion MRI | Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue
microstructure. By fitting a model to the dMRI signal it is possible to derive
various quantitative features. Several of the most popular dMRI signal models
are expansions in an appropriately chosen basis, where the coefficients are
determined using some variation of least-squares. However, such approaches lack
any notion of uncertainty, which could be valuable in e.g. group analyses. In
this work, we use a probabilistic interpretation of linear least-squares
methods to recast popular dMRI models as Bayesian ones. This makes it possible
to quantify the uncertainty of any derived quantity. In particular, for
quantities that are affine functions of the coefficients, the posterior
distribution can be expressed in closed-form. We simulated measurements from
single- and double-tensor models where the correct values of several quantities
are known, to validate that the theoretically derived quantiles agree with
those observed empirically. We included results from residual bootstrap for
comparison and found good agreement. The validation employed several different
models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI)
and Constrained Spherical Deconvolution (CSD). We also used in vivo data to
visualize maps of quantitative features and corresponding uncertainties, and to
show how our approach can be used in a group analysis to downweight subjects
with high uncertainty. In summary, we convert successful linear models for dMRI
signal estimation to probabilistic models, capable of accurate uncertainty
quantification.
| 0 | 1 | 0 | 1 | 0 | 0 |
On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks | Empirical risk minimization (ERM) is ubiquitous in machine learning and
underlies most supervised learning methods. While there has been a large body
of work on algorithms for various ERM problems, the exact computational
complexity of ERM is still not understood. We address this issue for multiple
popular ERM problems including kernel SVMs, kernel ridge regression, and
training the final layer of a neural network. In particular, we give
conditional hardness results for these problems based on complexity-theoretic
assumptions such as the Strong Exponential Time Hypothesis. Under these
assumptions, we show that there are no algorithms that solve the aforementioned
ERM problems to high accuracy in sub-quadratic time. We also give similar
hardness results for computing the gradient of the empirical loss, which is the
main computational burden in many non-convex learning tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Deep Learning for Predicting Asset Returns | Deep learning searches for nonlinear factors for predicting asset returns.
Predictability is achieved via multiple layers of composite factors as opposed
to additive ones. Viewed in this way, asset pricing studies can be revisited
using multi-layer deep learners, such as rectified linear units (ReLU) or
long short-term memory (LSTM) for time-series effects. State-of-the-art
algorithms including stochastic gradient descent (SGD), TensorFlow and dropout
design provide implementation and efficient factor exploration. To illustrate
our methodology, we revisit the equity market risk premium dataset of Welch and
Goyal (2008). We find the existence of nonlinear factors which explain
predictability of returns, in particular at the extremes of the characteristic
space. Finally, we conclude with directions for future research.
| 0 | 0 | 0 | 1 | 0 | 0 |
Analysis of Distributed ADMM Algorithm for Consensus Optimization in Presence of Error | ADMM is a popular algorithm for solving convex optimization problems.
Applying this algorithm to distributed consensus optimization problem results
in a fully distributed iterative solution which relies on processing at the
nodes and communication between neighbors. Local computations usually suffer
from different types of errors, due to e.g., observation or quantization noise,
which can degrade the performance of the algorithm. In this work, we focus on
analyzing the convergence behavior of distributed ADMM for consensus
optimization in the presence of additive node error. We specifically show that
noisy ADMM converges linearly under certain conditions and also examine the
associated convergence point. Numerical results are provided which demonstrate
the effectiveness of the presented analysis.
| 1 | 0 | 1 | 0 | 0 | 0 |
Jet determination of smooth CR automorphisms and generalized stationary discs | We prove finite jet determination for (finitely) smooth CR diffeomorphisms of
(finitely) smooth Levi degenerate hypersurfaces in $\mathbb{C}^{n+1}$ by
constructing generalized stationary discs glued to such hypersurfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Well-Tempered Landscape for Non-convex Robust Subspace Recovery | We present a mathematical analysis of a non-convex energy landscape for
robust subspace recovery. We prove that an underlying subspace is the only
stationary point and local minimizer in a specified neighborhood under
deterministic conditions on a dataset. If the deterministic condition is
satisfied, we further show that a geodesic gradient descent method over the
Grassmannian manifold can exactly recover the underlying subspace when the
method is properly initialized. Proper initialization by principal component
analysis is guaranteed with a similar deterministic condition. Under slightly
stronger assumptions, the gradient descent method with a special shrinking step
size scheme achieves linear convergence. The practicality of the deterministic
condition is demonstrated on some statistical models of data, and the method
achieves almost state-of-the-art recovery guarantees on the Haystack Model for
different regimes of sample size and ambient dimension. In particular, when the
ambient dimension is fixed and the sample size is large enough, we show that
our gradient method can exactly recover the underlying subspace for any fixed
fraction of outliers (less than 1).
| 1 | 0 | 1 | 1 | 0 | 0 |
Biologically inspired protection of deep networks from adversarial attacks | Inspired by biophysical principles underlying nonlinear dendritic computation
in neural circuits, we develop a scheme to train deep neural networks to make
them robust to adversarial attacks. Our scheme generates highly nonlinear,
saturated neural networks that achieve state of the art performance on gradient
based adversarial examples on MNIST, despite never being exposed to
adversarially chosen examples during training. Moreover, these networks exhibit
unprecedented robustness to targeted, iterative schemes for generating
adversarial examples, including second-order methods. We further identify
principles governing how these networks achieve their robustness, drawing on
methods from information geometry. We find these networks progressively create
highly flat and compressed internal representations that are sensitive to very
few input dimensions, while still solving the task. Moreover, they employ
highly kurtotic weight distributions, also found in the brain, and we
demonstrate how such kurtosis can protect even linear classifiers from
adversarial attack.
| 1 | 0 | 0 | 1 | 0 | 0 |
Intrinsic entropies of log-concave distributions | The entropy of a random variable is well-known to equal the exponential
growth rate of the volumes of its typical sets. In this paper, we show that for
any log-concave random variable $X$, the sequence of the $\lfloor n\theta
\rfloor^{\text{th}}$ intrinsic volumes of the typical sets of $X$ in dimensions
$n \geq 1$ grows exponentially with a well-defined rate. We denote this rate by
$h_X(\theta)$, and call it the $\theta^{\text{th}}$ intrinsic entropy of $X$.
We show that $h_X(\theta)$ is a continuous function of $\theta$ over the range
$[0,1]$, thereby providing a smooth interpolation between the values 0 and
$h(X)$ at the endpoints 0 and 1, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Iteratively Linearized Reweighted Alternating Direction Method of Multipliers for a Class of Nonconvex Problems | In this paper, we consider solving a class of nonconvex and nonsmooth
problems frequently appearing in signal processing and machine learning
research. The traditional alternating direction method of multipliers
encounters troubles in both mathematics and computations in solving the
nonconvex and nonsmooth subproblem. In view of this, we propose a reweighted
alternating direction method of multipliers. In this algorithm, all subproblems
are convex and easy to solve. We also provide several guarantees for the
convergence and prove that the algorithm globally converges to a critical point
of an auxiliary function with the help of the Kurdyka-{\L}ojasiewicz property.
Several numerical results are presented to demonstrate the efficiency of the
proposed algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
Acoustic Features Fusion using Attentive Multi-channel Deep Architecture | In this paper, we present a novel deep fusion architecture for audio
classification tasks. The multi-channel model presented is formed using deep
convolution layers where different acoustic features are passed through each
channel. To enable dissemination of information across the channels, we
introduce attention feature maps that aid in the alignment of frames. The
output of each channel is merged using interaction parameters that non-linearly
aggregate the representative features. Finally, we evaluate the performance of
the proposed architecture on three benchmark datasets: DCASE-2016 and LITIS
Rouen (acoustic scene recognition), and CHiME-Home (tagging). Our experimental
results suggest that the architecture presented outperforms the standard
baselines and achieves outstanding performance on the task of acoustic scene
recognition and audio tagging.
| 1 | 0 | 0 | 0 | 0 | 0 |
An EM Based Probabilistic Two-Dimensional CCA with Application to Face Recognition | Recently, two-dimensional canonical correlation analysis (2DCCA) has been
successfully applied for image feature extraction. The method instead of
concatenating the columns of the images to the one-dimensional vectors,
directly works with two-dimensional image matrices. Although 2DCCA works well
in different recognition tasks, it lacks a probabilistic interpretation. In
this paper, we present a probabilistic framework for 2DCCA called probabilistic
2DCCA (P2DCCA) and an iterative EM based algorithm for optimizing the
parameters. Experimental results on synthetic and real data demonstrate
superior performance in loading factor estimation for P2DCCA compared to 2DCCA.
For real data, three subsets of AR face database and also the UMIST face
database confirm the robustness of the proposed algorithm in face recognition
tasks with different illumination conditions, facial expressions, poses and
occlusions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Understanding Group Event Scheduling via the OutWithFriendz Mobile Application | The wide adoption of smartphones and mobile applications has brought
significant changes to not only how individuals behave in the real world, but
also how groups of users interact with each other when organizing group events.
Understanding how users make event decisions as a group and identifying the
contributing factors can offer important insights for social group studies and
more effective system and application design for group event scheduling.
In this work, we have designed a new mobile application called
OutWithFriendz, which enables users to organize group events,
invite friends, suggest and vote on event time and venue. We have deployed
OutWithFriendz on both the Apple App Store and Google Play, and conducted a
large-scale user study spanning over 500 users and 300 group events. Our
analysis has revealed several important observations regarding the group event
planning process, including the importance of user mobility, individual
preferences, host preferences, and group voting process.
| 1 | 0 | 0 | 0 | 0 | 0 |
Energy Level Alignment at Hybridized Organic-metal Interfaces: the Role of Many-electron Effects | Hybridized molecule/metal interfaces are ubiquitous in molecular and organic
devices. The energy level alignment (ELA) of frontier molecular levels relative
to the metal Fermi level (EF) is critical to the conductance and functionality
of these devices. However, a clear understanding of the ELA that includes
many-electron self-energy effects is lacking. Here, we investigate the
many-electron effects on the ELA using state-of-the-art, benchmark GW
calculations on prototypical chemisorbed molecules on Au(111), in eleven
different geometries. The GW ELA is in good agreement with photoemission for
monolayers of benzene-diamine on Au(111). We find that in addition to static
image charge screening, the frontier levels in most of these geometries are
renormalized by additional screening from substrate-mediated intermolecular
Coulomb interactions. For weakly chemisorbed systems, such as amines and
pyridines on Au, this additional level renormalization (~1.5 eV) comes solely
from static screened exchange energy, allowing us to suggest computationally
more tractable schemes to predict the ELA at such interfaces. However, for more
strongly chemisorbed thiolate layers, dynamical effects are present. Our ab
initio results constitute an important step towards the understanding and
manipulation of functional molecular/organic systems for both fundamental
studies and applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Radio detection of Extensive Air Showers (ECRS 2016) | Detection of the mostly geomagnetically generated radio emission of
cosmic-ray air showers provides an alternative to air-Cherenkov and
air-fluorescence detection, since it is not limited to clear nights. Like these
established methods, the radio signal is sensitive to the calorimetric energy
and the position of the maximum of the electromagnetic shower component. This
makes antenna arrays an ideal extension for particle-detector arrays above a
threshold energy of about 100 PeV of the primary cosmic-ray particles. In the
last few years, the digital radio technique for cosmic-ray air showers has again
made significant progress, and there is now a consistent picture of the
emission mechanisms confirmed by several measurements. Recent results by the
antenna arrays AERA and Tunka-Rex confirm that the absolute accuracy for the
shower energy is as good as the other detection techniques. Moreover, the
sensitivity to the shower maximum of the radio signal has been confirmed in
direct comparison to air-Cherenkov measurements by Tunka-Rex. The dense antenna
array LOFAR can already compete with the established techniques in accuracy for
cosmic-ray mass-composition measurements. In the future, a new generation of radio
experiments might drive the field: either by providing extremely large exposure
for inclined cosmic-ray or neutrino showers or, like the SKA core in Australia
with its several 10,000 antennas, by providing extremely detailed measurements.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bandit Regret Scaling with the Effective Loss Range | We study how the regret guarantees of nonstochastic multi-armed bandits can
be improved if the effective range of the losses in each round is small (i.e.,
the maximal difference between two losses in a given round). Despite a recent
impossibility result, we show how this can be made possible under certain mild
additional assumptions, such as availability of rough estimates of the losses,
or advance knowledge of the loss of a single, possibly unspecified arm. Along
the way, we develop a novel technique which might be of independent interest,
to convert any multi-armed bandit algorithm with regret depending on the loss
range, to an algorithm with regret depending only on the effective range, while
avoiding predictably bad arms altogether.
| 1 | 0 | 0 | 1 | 0 | 0 |
Changing Fashion Cultures | The paper presents a novel concept that analyzes and visualizes worldwide
fashion trends. Our goal is to reveal cutting-edge fashion trends without
displaying an ordinary fashion style. To achieve the fashion-based analysis, we
created a new fashion culture database (FCDB), which consists of 76 million
geo-tagged images in 16 cosmopolitan cities. By grasping a fashion trend of
mixed fashion styles, the paper also proposes an unsupervised fashion trend
descriptor (FTD) using a fashion descriptor, a codeword vector, and temporal
analysis. To unveil fashion trends in the FCDB, the temporal analysis in FTD
effectively emphasizes consecutive features between two different times. In
experiments, we clearly show the analysis of fashion trends and fashion-based
city similarity. As a result of large-scale data collection and an
unsupervised analyzer, the proposed approach achieves world-level fashion
visualization in a time series. The code, model, and FCDB will be publicly
available after the construction of the project page.
| 1 | 0 | 0 | 0 | 0 | 0 |
A strong failure of aleph_0-stability for atomic classes | We study classes of atomic models At_T of a countable, complete first-order
theory T. We prove that if At_T is not pcl-small, i.e., there is an atomic
model N that realizes uncountably many types over pcl(a) for some finite tuple
a from N, then there are 2^aleph1 non-isomorphic atomic models of T, each of
size aleph1.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sub-Gaussian estimators of the mean of a random vector | We study the problem of estimating the mean of a random vector $X$ given a
sample of $N$ independent, identically distributed points. We introduce a new
estimator that achieves a purely sub-Gaussian performance under the only
condition that the second moment of $X$ exists. The estimator is based on a
novel concept of a multivariate median.
| 0 | 0 | 1 | 1 | 0 | 0 |
Resource Allocation for Containing Epidemics from Temporal Network Data | We study the problem of containing epidemic spreading processes in temporal
networks. We specifically focus on the problem of finding a resource allocation
to suppress epidemic infection, provided that empirical time-series data of
connectivities between nodes are available. Although this problem is of
practical relevance, it has not been clear how such empirical time-series data
can inform our resource allocation strategy, due to the computational
complexity of the problem. In this direction, we present a computationally
efficient framework for finding a resource allocation that satisfies a given
budget constraint and achieves a given control performance. The framework is
based on convex programming and, moreover, allows the performance measure to be
described by a wide class of functionals called posynomials with nonnegative
exponents. We illustrate our theoretical results using data on temporal
interaction networks within a primary school.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards Plan Transformations for Real-World Pick and Place Tasks | In this paper, we investigate the possibility of applying plan
transformations to general manipulation plans in order to specialize them to
the specific situation at hand. We present a framework for optimizing execution
and achieving higher performance by autonomously transforming the robot's behavior
at runtime. We show that plans employed by robotic agents in real-world
environments can be transformed, despite their control structures being very
complex due to the specifics of acting in the real world. The evaluation is
carried out on a plan of a PR2 robot performing pick and place tasks, to which
we apply three example transformations, as well as on a large number of
experiments in a fast plan projection environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning for New Visual Environments with Limited Labels | In computer vision applications, such as domain adaptation (DA), few-shot
learning (FSL) and zero-shot learning (ZSL), we encounter new objects and
environments, for which insufficient examples exist to allow for training
"models from scratch," and methods that adapt existing models, trained on the
presented training environment, to the new scenario are required. We propose a
novel visual attribute encoding method that encodes each image as a
low-dimensional probability vector composed of prototypical part-type
probabilities. The prototypes are learnt to be representative of all training
data. At test time, we utilize this encoding as input to a classifier: we
freeze the encoder and only learn/adapt the classifier component, using limited
annotated labels in FSL and new semantic attributes in ZSL. We conduct
extensive experiments on benchmark datasets. Our method outperforms
state-of-the-art methods trained for the specific contexts (ZSL, FSL, DA).
| 1 | 0 | 0 | 0 | 0 | 0 |
A Survey of Bandwidth and Latency Enhancement Approaches for Mobile Cloud Game Multicasting | Among mobile cloud applications, mobile cloud gaming has gained significant
popularity in recent years. In mobile cloud games, textures, game objects,
and game events are typically streamed from a server to the mobile client.
One of the challenges in cloud mobile gaming is how to efficiently multicast
gaming contents and updates in Massively Multi-player Online Games (MMOGs).
This report surveys the state-of-the-art techniques introduced for game
synchronization and multicasting mechanisms to decrease latency and bandwidth
consumption, and discusses several schemes proposed in this area
that can be applied to any networked gaming context. From our point of view,
gaming applications demand high interactivity. Therefore, concentrating on
gaming applications will eventually cover a wide range of applications without
violating the limited scope of this survey.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modulation of High-Energy Particles and the Heliospheric Current Sheet Tilts throughout 1976-2014 | Cosmic ray intensities (CRIs) recorded by sixteen neutron monitors have been
used to study their dependence on the tilt angles (TA) of the heliospheric
current sheet (HCS) during the period 1976-2014, which covers three solar activity
cycles 21, 22 and 23. The median primary rigidity covers the range 16-33 GV.
Our results have indicated that the CRIs are directly sensitive to, and
organized by, the interplanetary magnetic field (IMF) and its neutral sheet
inclinations. The observed differences in the sensitivity of cosmic ray
intensity to changes in the neutral sheet tilt angles before and after the
reversal of interplanetary magnetic field polarity have been studied. Much
stronger intensity-tilt angle correlation was found when the solar magnetic
field in the North Polar Region was directed inward than it was outward. The
rigidity dependence of sensitivities of cosmic rays differs according to the
IMF polarity: for the periods 1981-1988 and 2001-2008 (qA < 0) it was R^-1.00
and R^-1.48, respectively, while for the 1991-1998 epoch (qA > 0) it was R^-1.35.
Hysteresis loops between TA and CRIs have been examined during three solar
activity cycles 21, 22 and 23. Considerable differences in time lags during qA >
0 and qA < 0 polarity states of the heliosphere have been observed. We also
found that the cosmic ray intensity decreases at much faster rate with increase
of tilt angle during qA < 0 than qA > 0, indicating stronger response to the
tilt angle changes during qA < 0. Our results are discussed in the light of 3D
modulation models including the gradient, curvature drifts and the tilt of the
heliospheric current sheet.
| 0 | 1 | 0 | 0 | 0 | 0 |
Detecting the impact of public transit on the transmission of epidemics | In many developing countries, public transit plays an important role in daily
life. However, few existing methods have considered the influence of public
transit in their models. In this work, we present a dual-perspective view of
the epidemic spreading process of the individual that involves both
contamination in places (such as work places and homes) and public transit
(such as buses and trains). In more detail, we consider a group of individuals
who travel to some places using public transit, and introduce public transit
into the epidemic spreading process. A novel modeling framework is proposed
considering place-based infections and the public-transit-based infections. In
the urban scenario, we investigate the public transit trip contribution rate
(PTTCR) in the epidemic spreading process of the individual, and assess the
impact of the PTTCR by evaluating the volume of
infectious people. Scenarios for strategies such as public transit and school
closure were tested and analyzed. Our simulation results suggest that
individuals with a high PTTCR will increase the
volume of infectious people when an infectious disease outbreak occurs by
affecting the social network through the PTTCR.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Hamiltonian Dynamics of Magnetic Confinement in Toroidal Domains | We consider a class of magnetic fields defined over the interior of a
manifold $M$ which go to infinity at its boundary and whose direction near the
boundary of $M$ is controlled by a closed 1-form $\sigma_\infty \in
\Gamma(T^*\partial M)$. We are able to show that charged particles in the
interior of $M$ under the influence of such fields can only escape the manifold
through the zero locus of $\sigma_\infty$. In particular in the case where the
1-form is nowhere vanishing we conclude that the particles become confined to
its interior for all time.
| 0 | 0 | 1 | 0 | 0 | 0 |
Airway segmentation from 3D chest CT volumes based on volume of interest using gradient vector flow | Some lung diseases are related to bronchial airway structures and morphology.
Although airway segmentation from chest CT volumes is an important task in the
computer-aided diagnosis and surgery assistance systems for the chest, complete
3-D airway structure segmentation is a quite challenging task due to its
complex tree-like structure. In this paper, we propose a new airway
segmentation method for 3D chest CT volumes based on volume of interest (VOI)
using gradient vector flow (GVF). This method segments the bronchial regions by
applying the cavity enhancement filter (CEF) to trace the bronchial tree
structure from the trachea. It uses the CEF in the VOI to segment each branch.
A tube-likeness function based on GVF and the GVF magnitude map in each VOI
is then utilized to assist in predicting the positions and directions of child
branches. By calculating the tube-likeness function based on GVF and the GVF
magnitude map, the airway-like candidate structures are identified and their
centrelines are extracted. Based on the extracted centrelines, we can detect
the branch points of the bifurcations and directions of the airway branches in
the next level. At the same time, a leakage detection is performed to avoid the
leakage by analysing the pixel information and the shape information of airway
candidate regions extracted in the VOI. Finally, we unify all of the extracted
bronchial regions to form an integrated airway tree. Preliminary experiments
using four cases of chest CT volumes demonstrated that the proposed method can
extract more bronchial branches in comparison with other methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-robot motion-formation distributed control with sensor self-calibration: experimental validation | In this paper, we present the design and implementation of a robust motion
formation distributed control algorithm for a team of mobile robots. The
primary task for the team is to form a geometric shape, which can be freely
translated and rotated at the same time. This approach makes the robots
behave as a cohesive whole, which can be useful in tasks such as collaborative
transportation. The robustness of the algorithm relies on the fact that each
robot employs only local measurements from a laser sensor which does not need
to be off-line calibrated. Furthermore, robots do not need to exchange any
information with each other. Being free of sensor calibration and not requiring
a communication channel helps the scaling of the overall system to a large
number of robots. In addition, since the robots do not need any off-board
localization system, but require only relative positions with respect to their
neighbors, one can aim for a fully autonomous team that operates in
environments where such localization systems are not available. The
computational cost of the algorithm is inexpensive and the resources from a
standard microcontroller will suffice. This fact makes the usage of our
approach appealing as a support for other more demanding algorithms, e.g.,
processing images from onboard cameras. We validate the performance of the
algorithm with a team of four mobile robots equipped with low-cost commercially
available laser scanners.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Impossibility of Supersized Machines | In recent years, a number of prominent computer scientists, along with
academics in fields such as philosophy and physics, have lent credence to the
notion that machines may one day become as large as humans. Many have further
argued that machines could even come to exceed human size by a significant
margin. However, there are at least seven distinct arguments that preclude this
outcome. We show that it is not only implausible that machines will ever exceed
human size, but in fact impossible.
| 1 | 1 | 0 | 0 | 0 | 0 |
Non-geodesic variations of Hodge structure of maximum dimension | There are a number of examples of variations of Hodge structure of maximum
dimension. However, to our knowledge, those that are global on the level of the
period domain are totally geodesic subspaces that arise from an orbit of a
subgroup of the group of the period domain. That is, they are defined by Lie
theory rather than by algebraic geometry. In this note, we give an example of a
variation of maximum dimension which is nowhere tangent to a geodesic
variation. The period domain in question, which classifies weight two Hodge
structures with $h^{2,0} = 2$ and $h^{1,1} = 28$, is of dimension $57$. The
horizontal tangent bundle has codimension one, thus it is an example of a
holomorphic contact structure, with local integral manifolds of dimension 28.
The group of the period domain is $SO(4,28)$, and one can produce global
integral manifolds as orbits of the action of subgroups isomorphic to
$SU(2,14)$. Our example is given by the variation of Hodge structure on the
second cohomology of weighted projective hypersurfaces of degree $10$ in a
weighted projective three-space with weights $1, 1, 2, 5$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Finding Differentially Covarying Needles in a Temporally Evolving Haystack: A Scan Statistics Perspective | Recent results in coupled or temporal graphical models offer schemes for
estimating the relationship structure between features when the data come from
related (but distinct) longitudinal sources. A novel application of these ideas
is for analyzing group-level differences, i.e., in identifying if trends of
estimated objects (e.g., covariance or precision matrices) are different across
disparate conditions (e.g., gender or disease). Often, poor effect sizes make
detecting the differential signal over the full set of features difficult: for
example, dependencies between only a subset of features may manifest
differently across groups. In this work, we first give a parametric model for
estimating trends in the space of SPD matrices as a function of one or more
covariates. We then generalize scan statistics to graph structures, to search
over distinct subsets of features (graph partitions) whose temporal dependency
structure may show statistically significant group-wise differences. We
theoretically analyze the Family Wise Error Rate (FWER) and bounds on Type 1
and Type 2 error. On a cohort of individuals with risk factors for Alzheimer's
disease (but otherwise cognitively healthy), we find scientifically interesting
group differences where the default analysis, i.e., models estimated on the
full graph, does not survive reasonable significance thresholds.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Distributed Algorithm for Computing a Common Fixed Point of a Finite Family of Paracontractions | A distributed algorithm is described for finding a common fixed point of a
family of m>1 nonlinear maps M_i : R^n -> R^n assuming that each map is a
paracontraction and that at least one such common fixed point exists. The
common fixed point is simultaneously computed by m agents assuming each agent i
knows only M_i, the current estimates of the fixed point generated by its
neighbors, and nothing more. Each agent recursively updates its estimate of a
fixed point by utilizing the current estimates generated by each of its
neighbors. Neighbor relations are characterized by a time-varying directed
graph N(t). It is shown, under suitably general conditions on N(t), that the
algorithm causes all agents' estimates to converge to the same common fixed
point of the m nonlinear maps.
| 1 | 0 | 1 | 0 | 0 | 0 |
The maximum of the 1-measurement of a metric measure space | For a metric measure space, we treat the set of distributions of 1-Lipschitz
functions, which is called the 1-measurement. On the 1-measurement, we have a
partial order relation given by the Lipschitz order introduced by Gromov. The aim of
this paper is to study the maximum and maximal elements of the 1-measurement
with respect to the Lipschitz order. We present a necessary condition of a
metric measure space for the existence of the maximum of the 1-measurement. We
also consider a metric measure space that has the maximum of its 1-measurement.
| 0 | 0 | 1 | 0 | 0 | 0 |
Limits to Arbitrage in Markets with Stochastic Settlement Latency | Distributed ledger technologies rely on consensus protocols confronting
traders with random waiting times until the transfer of ownership is
accomplished. This time-consuming settlement process exposes arbitrageurs to
price risk and imposes limits to arbitrage. We derive theoretical arbitrage
boundaries under general assumptions and show that they increase with expected
latency, latency uncertainty, spot volatility, and risk aversion. Using
high-frequency data from the Bitcoin network, we estimate arbitrage boundaries
due to settlement latency of on average 124 basis points, covering 88 percent
of the observed cross-exchange price differences. Settlement through
decentralized systems thus induces non-trivial frictions affecting market
efficiency and price formation.
| 0 | 0 | 0 | 0 | 0 | 1 |
Normalized Information Distance and the Oscillation Hierarchy | We study the complexity of approximations to the normalized information
distance. We introduce a hierarchy of computable approximations by considering
the number of oscillations. This is a function version of the difference
hierarchy for sets. We show that the normalized information distance is not in
any level of this hierarchy, strengthening previous nonapproximability results.
As an ingredient to the proof, we also prove a conditional undecidability
result about independence.
| 1 | 0 | 1 | 0 | 0 | 0 |
Exponential Moving Average Model in Parallel Speech Recognition Training | As training data rapid growth, large-scale parallel training with multi-GPUs
cluster is widely applied in the neural network model learning currently.We
present a new approach that applies the exponential moving average method in
large-scale parallel training of neural network models. It is a non-interference
strategy: the exponential moving average model is not broadcast to the
distributed workers to update their local models after model synchronization
during training; instead, it is taken as the final model of the training
system. Fully-connected feed-forward neural networks (DNNs) and deep
unidirectional Long short-term memory (LSTM) recurrent neural networks (RNNs)
are successfully trained with proposed method for large vocabulary continuous
speech recognition on Shenma voice search data in Mandarin. The character error
rate (CER) of Mandarin speech recognition is further reduced compared with
state-of-the-art parallel training approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Is One Hyperparameter Optimizer Enough? | Hyperparameter tuning is the black art of automatically finding a good
combination of control parameters for a data miner. While such tuning is widely
applied in empirical Software Engineering, there has not been much discussion on which
hyperparameter tuner is best for software analytics. To address this gap in the
literature, this paper applied a range of hyperparameter optimizers (grid
search, random search, differential evolution, and Bayesian optimization) to
the defect prediction problem. Surprisingly, no hyperparameter optimizer was
observed to be `best' and, for one of the two evaluation measures studied here
(F-measure), hyperparameter optimization, in 50% of cases, was no better than
using default configurations.
We conclude that hyperparameter optimization is more nuanced than previously
believed. While such optimization can certainly lead to large improvements in
the performance of classifiers used in software analytics, it remains to be
seen which specific optimizers should be applied to a new dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Generalized Canonical Correlation Analysis | We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a
method for learning nonlinear transformations of arbitrarily many views of
data, such that the resulting transformations are maximally informative of each
other. While methods for nonlinear two-view representation learning (Deep CCA,
(Andrew et al., 2013)) and linear many-view representation learning
(Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview
representation learning technique that combines the flexibility of nonlinear
(deep) representation learning with the statistical power of incorporating
information from many independent sources, or views. We present the DGCCA
formulation as well as an efficient stochastic optimization algorithm for
solving it. We learn DGCCA representations on two distinct datasets for three
downstream tasks: phonetic transcription from acoustic and articulatory
measurements, and recommending hashtags and friends on a dataset of Twitter
users. We find that DGCCA representations soundly beat existing methods at
phonetic transcription and hashtag recommendation, and in general perform no
worse than standard linear many-view techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
Faithfulness of Probability Distributions and Graphs | A main question in graphical models and causal inference is whether, given a
probability distribution $P$ (which is usually an underlying distribution of
data), there is a graph (or graphs) to which $P$ is faithful. The main goal of
this paper is to provide a theoretical answer to this problem. We work with
general independence models, which contain probabilistic independence models as
a special case. We exploit a generalization of ordering, called preordering, of
the nodes of (mixed) graphs. This allows us to provide sufficient conditions
for a given independence model to be Markov to a graph with the minimum
possible number of edges, and more importantly, necessary and sufficient
conditions for a given probability distribution to be faithful to a graph. We
present our results for the general case of mixed graphs, but specialize the
definitions and results to the better-known subclasses of undirected
(concentration) and bidirected (covariance) graphs as well as directed acyclic
graphs.
| 0 | 0 | 1 | 1 | 0 | 0 |
On the multipliers of repelling periodic points of entire functions | We give a lower bound for the multipliers of repelling periodic points of
entire functions. The bound is deduced from a bound for the multipliers of
fixed points of composite entire functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
The BCS critical temperature in a weak homogeneous magnetic field | We show that, within a linear approximation of BCS theory, a weak homogeneous
magnetic field lowers the critical temperature by an explicit constant times
the field strength, up to higher order terms. This provides a rigorous
derivation and generalization of results obtained in the physics literature
from WHH theory of the upper critical magnetic field. A new ingredient in our
proof is a rigorous phase approximation to control the effects of the magnetic
field.
| 0 | 1 | 1 | 0 | 0 | 0 |
25 Tweets to Know You: A New Model to Predict Personality with Social Media | Predicting personality is essential for social applications supporting
human-centered activities, yet prior modeling methods based on users' written text
require too much input data to be realistically used in the context of social
media. In this work, we aim to drastically reduce the data requirement for
personality modeling and develop a model that is applicable to most users on
Twitter. Our model integrates Word Embedding features with Gaussian Processes
regression. Based on the evaluation of over 1.3K users on Twitter, we find that
our model achieves comparable or better accuracy than state-of-the-art
techniques with eight times less data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture | We propose a novel coupled-mappings method for low resolution face recognition
using deep convolutional neural networks (DCNNs). The proposed architecture
consists of two branches of DCNNs to map the high and low resolution face
images into a common space with nonlinear transformations. The branch
corresponding to transformation of high resolution images consists of 14 layers
and the other branch which maps the low resolution face images to the common
space includes a 5-layer super-resolution network connected to a 14-layer
network. The distance between the features of corresponding high and low
resolution images is backpropagated to train the networks. Our proposed method
is evaluated on the FERET data set and compared with state-of-the-art competing
methods. Our extensive experimental results show that the proposed method
significantly improves the recognition performance especially for very low
resolution probe face images (11.4% improvement in recognition accuracy).
Furthermore, it can reconstruct a high resolution image from its corresponding
low resolution probe image which is comparable with state-of-the-art
super-resolution methods in terms of visual quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Comparative Study of Full-Duplex Relaying Schemes for Low Latency Applications | Various sectors are likely to carry a set of emerging applications while
targeting a reliable communication with low latency transmission. To address
this issue, upon a spectrally-efficient transmission, this paper investigates
the performance of a one full-dulpex (FD) relay system, and considers for that
purpose, two basic relaying schemes, namely the symbol-by-symbol transmission,
i.e., amplify-and-forward (AF) and the block-by-block transmission, i.e.,
selective decode-and-forward (SDF). The conducted analysis presents an
exhaustive comparison, covering both schemes, over two different transmission
modes: the non-combining mode, where the best link (direct or relay) is
decoded, and the signal-combining mode, where the direct and relay links are
combined at the receiver side. Targeting low latency as a necessity,
simulations refine the performed comparisons and reveal that the AF
relaying scheme is better adapted to the combining mode, whereas the SDF relaying
scheme is more suitable for the non-combining mode.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some algebraic invariants of edge ideal of circulant graphs | Let $G$ be the circulant graph $C_n(S)$ with $S\subseteq\{ 1,\ldots,\left
\lfloor\frac{n}{2}\right \rfloor\}$ and let $I(G)$ be its edge ideal in the
ring $K[x_0,\ldots,x_{n-1}]$. Under the hypothesis that $n$ is prime, we: 1)
compute the regularity index of $R/I(G)$; 2) compute the Castelnuovo-Mumford
regularity when $R/I(G)$ is Cohen-Macaulay; 3) prove that the circulant graphs
with $S=\{1,\ldots,s\}$ are sequentially $S_2$. We end by characterizing the
Cohen-Macaulay circulant graphs of Krull dimension $2$ and computing their
Cohen-Macaulay type and Castelnuovo-Mumford regularity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Pricing of Barrier Options on High Volatility Assets using Subset Simulation | Barrier options are one of the most widely traded exotic options on stock
exchanges. In this paper, we develop a new stochastic simulation method for
pricing barrier options and estimating the corresponding execution
probabilities. We show that the proposed method always outperforms the standard
Monte Carlo approach and becomes substantially more efficient when the
underlying asset has high volatility, while it performs better than multilevel
Monte Carlo for special cases of barrier options and underlying assets. These
theoretical findings are confirmed by numerous simulation results.
| 0 | 0 | 0 | 1 | 0 | 1 |
Massively parallel multicanonical simulations | Generalized-ensemble Monte Carlo simulations such as the multicanonical
method and similar techniques are among the most efficient approaches for
simulations of systems undergoing discontinuous phase transitions or with
rugged free-energy landscapes. As Markov chain methods, they are inherently
serial computations. It was demonstrated recently, however, that a
combination of independent simulations that communicate weight updates at
variable intervals allows for the efficient utilization of parallel
computational resources for multicanonical simulations. Implementing this
approach for the many-thread architecture provided by current generations of
graphics processing units (GPUs), we show how it can be efficiently employed
with of the order of $10^4$ parallel walkers and beyond, thus constituting a
versatile tool for Monte Carlo simulations in the era of massively parallel
computing. We provide the fully documented source code for the approach applied
to the paradigmatic example of the two-dimensional Ising model as a starting
point and reference for practitioners in the field.
| 0 | 1 | 0 | 0 | 0 | 0 |
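A minimal serial sketch of the multicanonical idea (a toy version of the weight iteration, not the parallel GPU code provided with the paper): sample with energy-dependent weights $W(E)=e^{-w(E)}$, then after each iteration increase $w(E)$ by the logarithm of the measured histogram so that the energy histogram flattens.

```python
import math
import random

def ising_energy(spins, L):
    """Energy of an L x L Ising configuration with periodic boundaries."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[i][(j + 1) % L] + spins[(i + 1) % L][j])
    return E

def multicanonical_ising(L=4, n_iters=5, sweeps_per_iter=200, seed=1):
    """Toy multicanonical run: Metropolis sampling in the weighted
    ensemble, with weight updates w[E] += log H(E) between iterations."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    w = {}  # log-weight penalty per energy level (W(E) = exp(-w[E]))
    E = ising_energy(spins, L)
    hist = {}
    for _ in range(n_iters):
        hist = {}
        for _ in range(sweeps_per_iter * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            dE = 2 * spins[i][j] * (spins[i][(j + 1) % L] + spins[i][(j - 1) % L]
                                    + spins[(i + 1) % L][j] + spins[(i - 1) % L][j])
            # Accept with probability W(E + dE) / W(E).
            if math.log(rng.random() + 1e-300) < w.get(E, 0.0) - w.get(E + dE, 0.0):
                spins[i][j] *= -1
                E += dE
            hist[E] = hist.get(E, 0) + 1
        for e, h in hist.items():  # flatten: penalize over-visited energies
            w[e] = w.get(e, 0.0) + math.log(h)
    return hist, w

hist, w = multicanonical_ising()
```

The parallel scheme of the paper runs many such walkers independently and merges their histograms at the weight-update step, which is what maps so well onto GPU threads.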
Gaia and VLT astrometry of faint stars: Precision of Gaia DR1 positions and updated VLT parallaxes of ultracool dwarfs | We compared positions of the Gaia first data release (DR1) secondary data set
at its faint limit with CCD positions of stars in 20 fields observed with the
VLT/FORS2 camera. The FORS2 position uncertainties are smaller than one
milli-arcsecond (mas) and allowed us to perform an independent verification of
the DR1 astrometric precision. In the fields that we observed with FORS2, we
projected the Gaia DR1 positions into the CCD plane, performed a polynomial fit
between the two sets of matching stars, and carried out statistical analyses of
the residuals in positions. The residual RMS roughly matches the expectations
given by the Gaia DR1 uncertainties, where we identified three regimes in terms
of Gaia DR1 precision: for G = 17-20 stars we found that the formal DR1
position uncertainties of stars with DR1 precisions in the range of 0.5-5 mas
are underestimated by 63 +/- 5%, whereas the DR1 uncertainties of stars in the
range 7-10 mas are overestimated by a factor of two. For the best-measured and
generally brighter G = 16-18 stars with DR1 positional uncertainties of <0.5
mas, we detected 0.44 +/- 0.13 mas excess noise in the residual RMS, whose
origin can be in both FORS2 and Gaia DR1. By adopting Gaia DR1 as the absolute
reference frame we refined the pixel scale determination of FORS2, leading to
minor updates to the parallaxes of 20 ultracool dwarfs that we published
previously. We also updated the FORS2 absolute parallax of the Luhman 16 binary
brown dwarf system to 501.42 +/- 0.11 mas.
| 0 | 1 | 0 | 0 | 0 | 0 |
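The projection-and-fit step can be illustrated with a simple least-squares plate solution between matched star lists. This is a generic 6-constant linear model (function names are ours); the actual reduction uses higher-order polynomial terms and careful outlier rejection.

```python
import numpy as np

def fit_plate_model(ref_xy, ccd_xy):
    """Least-squares linear (6-constant) plate solution mapping reference
    positions onto CCD positions: x' = a + b*x + c*y, y' = d + e*x + f*y.
    Returns the two coefficient triples and the per-star residual RMS."""
    n = len(ref_xy)
    A = np.column_stack([np.ones(n), ref_xy[:, 0], ref_xy[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, ccd_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, ccd_xy[:, 1], rcond=None)
    resid = A @ np.column_stack([coef_x, coef_y]) - ccd_xy
    rms = np.sqrt((resid ** 2).sum(axis=1).mean())
    return coef_x, coef_y, rms

# Synthetic check: recover a known affine transformation exactly.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1000.0, size=(50, 2))
ccd = ref @ np.array([[1.001, 0.002], [-0.003, 0.999]]).T + np.array([5.0, -3.0])
coef_x, coef_y, rms = fit_plate_model(ref, ccd)
```

The statistics of the residual RMS over many such fits is what allows the formal Gaia DR1 position uncertainties to be tested against an independent astrometric reference.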
Parallel transport in shape analysis: a scalable numerical scheme | The analysis of manifold-valued data requires efficient tools from Riemannian
geometry to cope with the computational complexity at stake. This complexity
arises from the ever-increasing dimension of the data and the absence of
closed-form expressions for basic operations such as the Riemannian logarithm.
In this paper, we adapt a generic numerical scheme recently introduced for
computing parallel transport along geodesics in a Riemannian manifold to
finite-dimensional manifolds of diffeomorphisms. We provide a qualitative and
quantitative analysis of its behavior on high-dimensional manifolds, and
investigate an application with the prediction of brain structures progression.
| 0 | 0 | 1 | 1 | 0 | 0 |
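As a concrete instance of such a numerical scheme, here is a small Schild's-ladder transport on the unit sphere, where the exponential and logarithm maps are known in closed form. The paper adapts a different, Jacobi-field-based scheme to manifolds of diffeomorphisms; this toy version only illustrates discrete parallel transport along a geodesic.

```python
import math

def exp_map(p, v):
    """Riemannian exponential on the unit sphere S^2."""
    nv = math.sqrt(sum(c * c for c in v))
    if nv < 1e-12:
        return list(p)
    return [math.cos(nv) * pc + math.sin(nv) * vc / nv for pc, vc in zip(p, v)]

def log_map(p, q):
    """Riemannian logarithm on S^2: tangent vector at p pointing to q."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    u = [qc - dot * pc for pc, qc in zip(p, q)]
    nu = math.sqrt(sum(c * c for c in u))
    return [theta * c / nu for c in u]

def midpoint(p, q):
    return exp_map(p, [0.5 * c for c in log_map(p, q)])

def schild_ladder(p0, p1, v, n_rungs=20):
    """Transport tangent vector v from p0 to p1 along their geodesic by
    repeated geodesic-parallelogram (Schild's ladder) constructions."""
    xs = [exp_map(p0, [t / n_rungs * c for c in log_map(p0, p1)])
          for t in range(n_rungs + 1)]
    a = exp_map(p0, v)
    for k in range(n_rungs):
        m = midpoint(a, xs[k + 1])
        # Extend the geodesic from x_k through m by an equal length.
        a = exp_map(xs[k], [2.0 * c for c in log_map(xs[k], m)])
    return log_map(p1, a)

# Transport a north-pointing vector a quarter turn along the equator;
# exact parallel transport leaves it pointing north with the same norm.
v1 = schild_ladder([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.1])
```

The scheme is first-order accurate per rung, so the transported vector approaches the exact result as the number of rungs grows; the paper's contribution is making this kind of construction scale to high-dimensional shape spaces.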
Spectral Projector-Based Graph Fourier Transforms | The paper presents the graph Fourier transform (GFT) of a signal in terms of
its spectral decomposition over the Jordan subspaces of the graph adjacency
matrix $A$. This representation is unique and coordinate free, and it leads to
an unambiguous definition of the spectral components ("harmonics") of a graph
signal. This is particularly meaningful when $A$ has repeated eigenvalues, and
it is very useful when $A$ is defective or not diagonalizable (as may be the
case with directed graphs). Many real world large sparse graphs have defective
adjacency matrices. We present properties of the GFT and show it to satisfy a
generalized Parseval inequality and to admit a total variation ordering of the
spectral components. We express the GFT in terms of spectral projectors and
present an illustrative example for a real world large urban traffic dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
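In the diagonalizable case the construction reduces to ordinary spectral projectors, one per distinct eigenvalue, and the signal decomposes as the sum of its projections. The sketch below computes this with NumPy (function names are ours); the Jordan-subspace machinery needed for defective $A$ is not reproduced.

```python
import numpy as np

def gft_spectral_projectors(A, tol=1e-9):
    """Spectral projectors of a diagonalizable adjacency matrix A:
    for each distinct eigenvalue, the projector onto its eigenspace."""
    eigvals, V = np.linalg.eig(A)
    Vinv = np.linalg.inv(V)
    projectors = {}
    for lam in eigvals:
        if all(abs(k - lam) >= tol for k in projectors):
            mask = np.abs(eigvals - lam) < tol
            projectors[complex(lam)] = V[:, mask] @ Vinv[mask, :]
    return projectors

def gft_components(A, x):
    """Spectral components of signal x, so that x = sum_k P_k @ x."""
    return {lam: P @ x for lam, P in gft_spectral_projectors(A).items()}

# Directed 4-cycle: its eigenvalues are the four 4th roots of unity.
A = np.roll(np.eye(4), 1, axis=1)
x = np.array([1.0, 2.0, 3.0, 4.0])
comps = gft_components(A, x)
```

Because the projectors sum to the identity, the components always reassemble the original signal, which is the coordinate-free decomposition property the paper builds on.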
Quantum Mechanical Approach to Modelling Reliability of Sensor Reports | Dempster-Shafer evidence theory is wildly applied in multi-sensor data
fusion. However, lots of uncertainty and interference exist in practical
situation, especially in the battle field. It is still an open issue to model
the reliability of sensor reports. Many methods are proposed based on the
relationship among collected data. In this letter, we proposed a quantum
mechanical approach to evaluate the reliability of sensor reports, which is
based on the properties of a sensor itself. The proposed method is used to
modify the combining of evidences.
| 1 | 0 | 0 | 0 | 0 | 0 |
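The modification step acts on Dempster's rule of combination. A minimal sketch follows, using classical Shafer discounting with a reliability factor `alpha` as a stand-in for the quantum-mechanically derived reliability, which the abstract does not specify.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments given as {frozenset focal element: mass}."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

def discount(m, alpha, frame):
    """Shafer discounting: keep a fraction alpha of each mass and move
    the remainder to the whole frame (total ignorance)."""
    out = {A: alpha * v for A, v in m.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

frame = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.8, frame: 0.2}
m2 = {frozenset({"b"}): 0.6, frame: 0.4}
fused = dempster_combine(discount(m1, 0.9, frame), m2)
```

Discounting an unreliable report before combination softens the conflict between contradictory sensors, which is the role the letter's reliability measure plays.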
Midgar: Detection of people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and Smart Homes | Could we use Computer Vision in the Internet of Things for using pictures as
sensors? This is the principal hypothesis that we want to resolve. Currently,
in order to make areas, cities, or homes safe, people use IP cameras.
Nevertheless, this system needs people to watch the camera images, to watch
the recordings after something has occurred, or to watch when the camera
notifies them of any movement. These are its disadvantages. Furthermore,
there are many Smart
Cities and Smart Homes around the world. This is why we thought of using the
idea of the Internet of Things to add a way of automating the use of IP
cameras. In our case, we propose the analysis of pictures through Computer
Vision to detect people in the analysed pictures. With this analysis, we are
able to determine whether these pictures contain people and to handle the
pictures as if they were sensors with two possible states. Notwithstanding,
Computer Vision is
a very complicated field. This is why we needed a second hypothesis: Could we
work with Computer Vision in the Internet of Things with a good accuracy to
automate or semi-automate this kind of event? Demonstrating these hypotheses
required testing our Computer Vision module to check the possibility of using
it in a real environment with good accuracy. Our proposal, as a possible
solution, is the analysis of entire sequences instead of isolated pictures for
using pictures as sensors in the Internet of Things.
| 1 | 0 | 0 | 0 | 0 | 0 |
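The "pictures as binary sensors over entire sequences" idea can be sketched independently of any particular detector: given per-frame detector outputs (1 = person detected, 0 = not), a sliding majority vote yields the two-state sensor reading. Window and threshold values here are illustrative.

```python
def sequence_sensor(frame_detections, window=5, threshold=0.6):
    """Treat a camera as a binary presence sensor: instead of trusting a
    single frame, slide a window over per-frame detector outputs and
    report 'people present' only when enough frames in the window agree."""
    states = []
    for i in range(len(frame_detections)):
        window_slice = frame_detections[max(0, i - window + 1): i + 1]
        states.append(sum(window_slice) / len(window_slice) >= threshold)
    return states

# A single spurious detection (or a single missed one) does not flip
# the sensor state, unlike per-picture classification.
states = sequence_sensor([0, 1, 1, 1, 1, 0, 0, 0, 0, 0])
```

This is precisely why sequence-level analysis can reach usable accuracy even when the underlying per-picture Computer Vision module is imperfect.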
Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals | Interpretability of deep neural networks is a recently emerging area of
machine learning research targeting a better understanding of how models
perform feature selection and derive their classification decisions. In this
paper, two neural network architectures are trained on spectrogram and raw
waveform data for audio classification tasks on a newly created audio dataset
and layer-wise relevance propagation (LRP), a previously proposed
interpretability method, is applied to investigate the models' feature
selection and decision making. Through systematic manipulation of the input
data, it is demonstrated that the networks rely heavily on features marked as
relevant by LRP. Our results show that by making deep audio classifiers
interpretable, one can analyze and compare the properties and strategies of
different models beyond classification accuracy, which potentially opens up new
ways for model improvements.
| 1 | 0 | 0 | 0 | 0 | 0 |
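For reference, the epsilon rule of LRP for a single dense layer can be written in a few lines. This is the generic textbook form of the rule, not the authors' implementation: output relevance is redistributed onto inputs in proportion to each input's contribution to the pre-activation, and (with zero bias) the total relevance is conserved.

```python
def lrp_dense(x, W, b, R_out, eps=1e-9):
    """Epsilon-rule LRP through one dense layer z = W x + b:
    R_in[i] = sum_j x[i] * W[j][i] / (z[j] + eps*sign(z[j])) * R_out[j]."""
    z = [sum(W[j][i] * x[i] for i in range(len(x))) + b[j]
         for j in range(len(b))]
    R_in = [0.0] * len(x)
    for j, Rj in enumerate(R_out):
        denom = z[j] + eps * (1.0 if z[j] >= 0 else -1.0)
        for i in range(len(x)):
            R_in[i] += x[i] * W[j][i] / denom * Rj
    return R_in

x = [1.0, 2.0]
W = [[0.5, 0.25], [1.0, 0.75]]
b = [0.0, 0.0]
R_in = lrp_dense(x, W, b, R_out=[0.3, 0.7])
```

Applying this rule layer by layer from the output back to the spectrogram or waveform input yields the relevance maps whose marked features the paper then perturbs.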
On the Performance of a Canonical Labeling for Matching Correlated Erdős-Rényi Graphs | Graph matching in two correlated random graphs refers to the task of
identifying the correspondence between vertex sets of the graphs. Recent
results have characterized the exact information-theoretic threshold for graph
matching in correlated Erdős-Rényi graphs. However, very little is known
about the existence of efficient algorithms to achieve graph matching without
seeds. In this work we identify a region in which a straightforward $O(n^2\log
n)$-time canonical labeling algorithm, initially introduced in the context of
graph isomorphism, succeeds in matching correlated Erdős-Rényi graphs.
The algorithm has two steps. In the first step, all vertices are labeled by
their degrees and a trivial minimum distance matching (i.e., simply sorting
vertices according to their degrees) matches a fixed number of highest degree
vertices in the two graphs. Having identified this subset of vertices, the
remaining vertices are matched using a matching algorithm for bipartite graphs.
| 0 | 0 | 0 | 1 | 0 | 0 |
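Step one of the algorithm is easy to sketch: sample a correlated pair by subsampling a parent graph, then match vertices by sorted degree order. The bipartite-matching second step and the parameter-regime analysis are omitted, and all names are ours.

```python
import random

def correlated_er_pair(n, p, s, seed=0):
    """Two correlated Erdos-Renyi graphs: each edge of a parent G(n, p)
    graph is kept independently with probability s in each copy."""
    rng = random.Random(seed)
    g1 = [set() for _ in range(n)]
    g2 = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                if rng.random() < s:
                    g1[i].add(j); g1[j].add(i)
                if rng.random() < s:
                    g2[i].add(j); g2[j].add(i)
    return g1, g2

def degree_sort_matching(g1, g2):
    """Step 1 of the canonical labeling: label vertices by degree and
    match them greedily by sorted degree order."""
    order1 = sorted(range(len(g1)), key=lambda v: len(g1[v]), reverse=True)
    order2 = sorted(range(len(g2)), key=lambda v: len(g2[v]), reverse=True)
    return dict(zip(order1, order2))

g1, g2 = correlated_er_pair(n=200, p=0.5, s=0.98)
match = degree_sort_matching(g1, g2)
accuracy = sum(1 for u, v in match.items() if u == v) / len(match)
```

In the full algorithm only a fixed number of highest-degree vertices from this sort are trusted as seeds; the remaining vertices are then matched via a bipartite matching against those seeds.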
On semi-supervised learning | Semi-supervised learning deals with the problem of how, if possible, to take
advantage of a huge amount of unclassified data, to perform a classification in
situations when, typically, there is little labeled data. Even though this is
not always possible (it depends on how useful, for inferring the labels, it
would be to know the distribution of the unlabeled data), several algorithms
have been proposed recently. A new algorithm is proposed, and it is proved
that, under almost necessary conditions, it asymptotically attains the
performance of the best theoretical rule as the amount of unlabeled data tends
to infinity. The set of necessary assumptions, although reasonable, shows that
semi-supervised classification only works for very well-conditioned problems.
The focus is on understanding when
and why semi-supervised learning works when the size of the initial training
sample remains fixed and the asymptotic is on the size of the unlabeled data.
The performance of the algorithm is assessed on the well-known "Isolet" real
data set of phonemes, where a strong dependence on the choice of the initial
training sample is shown.
| 0 | 0 | 0 | 1 | 0 | 0 |
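The general mechanism by which unlabeled data can help is conveniently illustrated by a toy one-dimensional self-training rule that greedily labels the unlabeled point nearest to the current labeled set. This illustrates the semi-supervised setting only; it is not the algorithm proposed in the paper.

```python
def self_training_1nn(labeled, unlabeled):
    """Toy semi-supervised self-training in 1-D: repeatedly assign to
    the unlabeled point closest to any labeled point the label of that
    point, and add it to the labeled set until the pool is exhausted."""
    labeled = list(labeled)  # list of (position, label) pairs
    pool = list(unlabeled)
    while pool:
        lx, ly, u = min(
            ((lx, ly, u) for (lx, ly) in labeled for u in pool),
            key=lambda t: (t[0] - t[2]) ** 2,
        )
        pool.remove(u)
        labeled.append((u, ly))
    return labeled

# Two well-separated clusters, one labeled seed each: labels propagate
# outward through the unlabeled points.
result = self_training_1nn([(0.0, "A"), (10.0, "B")], [0.5, 1.0, 9.5, 9.0])
```

When the clusters overlap (the ill-conditioned case), the same greedy propagation can lock in early mistakes, which mirrors the paper's observation that the choice of the initial training sample matters strongly.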
Fourier dimension and spectral gaps for hyperbolic surfaces | We obtain an essential spectral gap for a convex co-compact hyperbolic
surface $M=\Gamma\backslash\mathbb H^2$ which depends only on the dimension
$\delta$ of the limit set. More precisely, we show that when $\delta>0$ there
exists $\varepsilon_0=\varepsilon_0(\delta)>0$ such that the Selberg zeta
function has only finitely many zeroes $s$ with $\Re s>\delta-\varepsilon_0$.
The proof uses the fractal uncertainty principle approach developed by
Dyatlov-Zahl [arXiv:1504.06589]. The key new component is a Fourier decay bound
for the Patterson-Sullivan measure, which may be of independent interest. This
bound uses the fact that transformations in the group $\Gamma$ are nonlinear,
together with estimates on exponential sums due to Bourgain which follow from
the discretized sum-product theorem in $\mathbb R$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Semantic Evolutionary Concept Distances for Effective Information Retrieval in Query Expansion | In this work several semantic approaches to concept-based query expansion and
reranking schemes are studied and compared with different ontology-based
expansion methods in web document search and retrieval. In particular, we focus
on concept-based query expansion schemes, where, in order to effectively
increase the precision of web document retrieval and to decrease the users'
browsing time, the main goal is to quickly provide users with the most suitable
query expansion. Two key tasks for query expansion in web document retrieval
are to find the expansion candidates, as the closest concepts in the web
document domain, and to rank the expanded queries properly. The approach we
propose aims
at improving the expansion phase for better web document retrieval and
precision. The basic idea is to measure the distance between candidate concepts
using the PMING distance, a collaborative semantic proximity measure, i.e. a
measure which can be computed using statistical results from web search
engines. Experiments show that the proposed technique can provide users with
more satisfying expansion results and improve the quality of web document
retrieval.
| 1 | 0 | 1 | 0 | 0 | 0 |
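The ranking step can be sketched with web-count-based proximity measures. The snippet below uses pointwise mutual information and the normalized Google distance as stand-ins, computed from toy hit counts that stand in for search-engine result counts; the exact PMING formula is not reproduced.

```python
import math

def pmi(term_a, term_b, hits, total):
    """Pointwise mutual information from (web) occurrence counts."""
    pa = hits[term_a] / total
    pb = hits[term_b] / total
    pab = hits[(term_a, term_b)] / total
    return math.log(pab / (pa * pb))

def ngd(term_a, term_b, hits, total):
    """Normalized Google Distance from the same counts (smaller = closer)."""
    la, lb = math.log(hits[term_a]), math.log(hits[term_b])
    lab = math.log(hits[(term_a, term_b)])
    return (max(la, lb) - lab) / (math.log(total) - min(la, lb))

def rank_expansions(query, candidates, hits, total):
    """Rank candidate expansion concepts by web-based proximity to the query."""
    return sorted(candidates, key=lambda c: ngd(query, c, hits, total))

# Toy hit counts standing in for search-engine result counts.
hits = {"jaguar": 1000, "car": 5000, "cat": 3000,
        ("jaguar", "car"): 400, ("jaguar", "cat"): 60}
ranked = rank_expansions("jaguar", ["car", "cat"], hits, total=10 ** 6)
```

Measures of this family require no corpus beyond hit counts, which is what makes the ranking of expanded queries fast enough for interactive query expansion.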