title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---
A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models | For high-dimensional sparse linear models, how to construct confidence
intervals for coefficients remains a difficult question. The main reason is the
complicated limiting distributions of common estimators such as the Lasso.
Several confidence interval construction methods have been developed, and
Bootstrap Lasso+OLS is notable for its technical simplicity, good
interpretability, and performance comparable to that of more complicated
methods. However, Bootstrap Lasso+OLS relies on the beta-min assumption, a
theoretical condition that is often violated in practice. In this paper, we
introduce a new method called Bootstrap Lasso+Partial Ridge (LPR) to relax this
assumption. LPR is a two-stage estimator: first using Lasso to select features
and subsequently using Partial Ridge to refit the coefficients. Simulation
results show that Bootstrap LPR outperforms Bootstrap Lasso+OLS when there
exist small but non-zero coefficients, a common situation violating the
beta-min assumption. For such coefficients, compared to Bootstrap Lasso+OLS,
confidence intervals constructed by Bootstrap LPR have on average 50% larger
coverage probabilities. Bootstrap LPR also has on average 35% shorter
confidence interval lengths than the de-sparsified Lasso methods, regardless of
whether linear models are misspecified. Additionally, we provide theoretical
guarantees of Bootstrap LPR under appropriate conditions and implement it in
the R package "HDCI."
| 0 | 0 | 0 | 1 | 0 | 0 |
The Music Streaming Sessions Dataset | At the core of many important machine learning problems faced by online
streaming services is a need to model how users interact with the content.
These problems can often be reduced to a combination of 1) sequentially
recommending items to the user, and 2) exploiting the user's interactions with
the items as feedback for the machine learning model. Unfortunately, there are
no public datasets currently available that enable researchers to explore this
topic. In order to spur that research, we release the Music Streaming Sessions
Dataset (MSSD), which consists of approximately 150 million listening sessions
and associated user actions. Furthermore, we provide audio features and
metadata for the approximately 3.7 million unique tracks referred to in the
logs. This is the largest collection of such track metadata currently available
to the public. This dataset enables research on important problems including
how to model user listening and interaction behaviour in streaming, as well as
Music Information Retrieval (MIR), and session-based sequential
recommendations.
| 1 | 0 | 0 | 0 | 0 | 0 |
CUSBoost: Cluster-based Under-sampling with Boosting for Imbalanced Classification | Class imbalance classification is a challenging research problem in data
mining and machine learning, as most of the real-life datasets are often
imbalanced in nature. Existing learning algorithms maximise classification
accuracy by correctly classifying the majority class but misclassify the
minority class. In real-life applications, however, the minority class
instances often represent the concept of greater interest. Recently, several
techniques based on sampling methods (under-sampling of the majority class
and over-sampling of the minority class),
cost-sensitive learning methods, and ensemble learning have been used in the
literature for classifying imbalanced datasets. In this paper, we introduce a
new clustering-based under-sampling approach with boosting (AdaBoost)
algorithm, called CUSBoost, for effective imbalanced classification. The
proposed algorithm provides an alternative to RUSBoost (random under-sampling
with AdaBoost) and SMOTEBoost (synthetic minority over-sampling with AdaBoost)
algorithms. We evaluated the performance of the CUSBoost algorithm against
state-of-the-art ensemble learning methods such as AdaBoost, RUSBoost, and
SMOTEBoost on 13 imbalanced binary and multi-class datasets with various
imbalance ratios. The experimental results show that CUSBoost is a
promising and effective approach for dealing with highly imbalanced datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Finding events in temporal networks: Segmentation meets densest-subgraph discovery | In this paper we study the problem of discovering a timeline of events in a
temporal network. We model events as dense subgraphs that occur within
intervals of network activity. We formulate the event-discovery task as an
optimization problem, where we search for a partition of the network timeline
into k non-overlapping intervals, such that the intervals span subgraphs with
maximum total density. The output is a sequence of dense subgraphs along with
corresponding time intervals, capturing the most interesting events during the
network lifetime.
A naive solution to our optimization problem has polynomial but prohibitively
high running-time complexity. We adapt recent work on dynamic
densest-subgraph discovery and approximate dynamic programming to design a fast
approximation algorithm. Next, to ensure richer structure, we adjust the
problem formulation to encourage coverage of a larger set of nodes. This
problem is NP-hard even for static graphs. However, on static graphs a simple
greedy algorithm yields an approximate solution due to submodularity. We
extend this greedy approach to the case of temporal networks, although the
approximation guarantee no longer holds. Nevertheless, our experiments show
that the algorithm finds solutions of good quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the area of constrained polygonal linkages | We study configuration spaces of linkages whose underlying graph are polygons
with diagonal constrains, or more general, partial two-trees. We show that
(with an appropriate definition) the oriented area is a Bott-Morse function on
the configuration space. Its critical points are described and Bott-Morse
indices are computed. This paper is a generalization of analogous results for
polygonal linkages (obtained earlier by G. Khimshiashvili, G. Panina, and A.
Zhukova).
| 0 | 0 | 1 | 0 | 0 | 0 |
Effects of geometrical frustration on ferromagnetism in the Hubbard model on the Shastry-Sutherland lattice | The small-cluster exact-diagonalization calculations and the projector
quantum Monte Carlo method are used to examine the competing effects of
geometrical frustration and interaction on ferromagnetism in the Hubbard model
on the Shastry-Sutherland lattice. It is shown that the geometrical frustration
stabilizes the ferromagnetic state at high electron concentrations ($n \gtrsim
7/4$), where strong correlations between ferromagnetism and the shape of the
noninteracting density of states are observed. In particular, it is found that
ferromagnetism is stabilized only for those values of the frustration parameters
that lead to a single-peaked noninteracting density of states at the band
edge. Once two or more peaks appear in the noninteracting density of states at
the band edge, the ferromagnetic state is suppressed. This opens a new route
towards the understanding of ferromagnetism in strongly correlated systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Hybrid Process Models From Events: Process Discovery Without Faking Confidence | Process discovery techniques return process models that are either formal
(precisely describing the possible behaviors) or informal (merely a "picture"
not allowing for any form of formal reasoning). Formal models are able to
classify traces (i.e., sequences of events) as fitting or non-fitting. Most
process mining approaches described in the literature produce such models. This
is in stark contrast with the over 25 available commercial process mining tools
that only discover informal process models that remain deliberately vague on
the precise set of possible traces. There are two main reasons why vendors
resort to such models: scalability and simplicity. In this paper, we propose to
combine the best of both worlds: discovering hybrid process models that have
formal and informal elements. As a proof of concept we present a discovery
technique based on hybrid Petri nets. These models allow for formal reasoning,
but also reveal information that cannot be captured in mainstream formal
models. A novel discovery algorithm returning hybrid Petri nets has been
implemented in ProM and has been applied to several real-life event logs. The
results clearly demonstrate the advantages of remaining "vague" when there is
not enough "evidence" in the data or standard modeling constructs do not "fit".
Moreover, the approach is scalable enough to be incorporated in
industrial-strength process mining tools.
| 1 | 0 | 0 | 0 | 0 | 0 |
El Lenguaje Natural como Lenguaje Formal | Formal languages theory is useful for the study of natural language. In
particular, it is of interest to study the adequacy of the grammatical
formalisms to express syntactic phenomena present in natural language. First,
it helps to draw hypothesis about the nature and complexity of the
speaker-hearer linguistic competence, a fundamental question in linguistics and
other cognitive sciences. Moreover, from an engineering point of view, it
reveals the practical limitations of applications based on those formalisms.
In this article I introduce the problem of the adequacy of grammatical
formalisms for natural language, along with some formal language theory
concepts required for this discussion. Then, I review the formalisms that have
been proposed throughout history, and the arguments that have been given to
support or reject their adequacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bounds for the completely positive rank of a symmetric matrix over a tropical semiring | In this paper, we find an upper bound for the CP-rank of a matrix over a
tropical semiring, according to the vertex clique cover of the graph prescribed
by the pattern of the matrix. We study the graphs that beget the patterns of
matrices with the lowest possible CP-ranks and prove that any such graph must
have its diameter equal to 2.
| 0 | 0 | 1 | 0 | 0 | 0 |
Gaussian process classification using posterior linearisation | This paper proposes a new algorithm for Gaussian process classification based
on posterior linearisation (PL). In PL, a Gaussian approximation to the
posterior density is obtained iteratively using the best possible linearisation
of the conditional mean of the labels and accounting for the linearisation
error. For three widely used likelihood functions, PL generally provides
lower classification errors on real data sets than the expectation
propagation and Laplace algorithms.
| 0 | 0 | 0 | 1 | 0 | 0 |
On Inconsistency Indices and Inconsistency Axioms in Pairwise Comparisons | Pairwise comparisons are an important tool of modern (multiple criteria)
decision making. Since human judgments are often inconsistent, many studies
have focused on how to express and measure this inconsistency, and several
inconsistency indices have been proposed as alternatives to Saaty's
inconsistency index and inconsistency ratio for reciprocal pairwise comparison
matrices. This paper aims, firstly, to introduce a new measure of inconsistency
of pairwise comparisons and prove its basic properties; secondly, to postulate
an additional axiom, an upper boundary axiom, on top of an existing set of
axioms; and finally, to prove that selected inconsistency indices satisfy this
additional axiom and to compare them numerically.
| 1 | 0 | 0 | 0 | 0 | 0 |
Heteroclinic traveling fronts for a generalized Fisher-Burgers equation with saturating diffusion | We study the existence of monotone heteroclinic traveling waves for a general
Fisher-Burgers equation with nonlinear and possibly density-dependent
diffusion. Such a model arises, for instance, in physical phenomena where a
saturation effect appears for large values of the gradient. We give an estimate
for the critical speed (namely, the first speed for which a monotone
heteroclinic traveling wave exists) for some different shapes of the reaction
term, and we analyze its dependence on a small real parameter when this brakes
the diffusion, complementing our study with some numerical simulations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computer Algebra for Microhydrodynamics | I describe a method for computer algebra that helps with laborious
calculations typically encountered in theoretical microhydrodynamics. The
program mimics how humans calculate by matching patterns and making
replacements according to the rules of algebra and calculus. This note gives an
overview and walks through an example, while the accompanying code repository
contains the implementation details, a tutorial, and more examples. The code
repository is attached as supplementary material to this note, and maintained
at this https URL
| 1 | 1 | 0 | 0 | 0 | 0 |
Rapid micro fluorescence in situ hybridization in tissue sections | This paper describes a micro fluorescence in situ hybridization
({\mu}FISH)-based rapid detection of cytogenetic biomarkers on formalin-fixed
paraffin embedded (FFPE) tissue sections. We demonstrated this method in the
context of detecting human epidermal growth factor receptor 2 (HER2) in breast tissue
sections. This method uses a non-contact microfluidic scanning probe (MFP),
which localizes FISH probes at the micrometer length-scale to selected cells of
the tissue section. The scanning ability of the MFP allows for a versatile
implementation of FISH on tissue sections. We demonstrated the use of
oligonucleotide FISH probes in ethylene carbonate-based buffer enabling rapid
hybridization within < 1 min for chromosome enumeration and 10-15 min for
assessment of the HER2 status in FFPE sections. We further demonstrated
recycling of FISH probes for multiple sequential tests using a defined volume
of probes by forming hierarchical hydrodynamic flow confinements. This
microscale method is compatible with the standard FISH protocols and with the
Instant Quality (IQ) FISH assay, reduces the FISH probe consumption ~100-fold
and the hybridization time 4-fold, resulting in an assay turnaround time of < 3
h. We believe rapid {\mu}FISH has the potential of being used in pathology
workflows as a standalone method or in combination with other molecular methods
for diagnostic and prognostic analysis of FFPE sections.
| 0 | 0 | 0 | 0 | 1 | 0 |
Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding | Neural time-series data contain a wide variety of prototypical signal
waveforms (atoms) that are of significant importance in clinical and cognitive
research. One of the goals for analyzing such data is hence to extract such
'shift-invariant' atoms. Even though some success has been reported with
existing algorithms, they are limited in applicability due to their heuristic
nature. Moreover, they are often vulnerable to artifacts and impulsive noise,
which are typically present in raw neural recordings. In this study, we address
these issues and propose a novel probabilistic convolutional sparse coding
(CSC) model for learning shift-invariant atoms from raw neural signals
containing potentially severe artifacts. At the core of our model, which we
call $\alpha$CSC, lies a family of heavy-tailed distributions called
$\alpha$-stable distributions. We develop a novel, computationally efficient
Monte Carlo expectation-maximization algorithm for inference. The maximization
step boils down to a weighted CSC problem, for which we develop a
computationally efficient optimization algorithm. Our results show that the
proposed algorithm achieves state-of-the-art convergence speeds. Besides,
$\alpha$CSC is significantly more robust to artifacts when compared to three
competing algorithms: it can extract spike bursts, oscillations, and even
reveal more subtle phenomena such as cross-frequency coupling when applied to
noisy neural time series.
| 0 | 0 | 0 | 1 | 0 | 0 |
Machine Learning for Drug Overdose Surveillance | We describe two recently proposed machine learning approaches for discovering
emerging trends in fatal accidental drug overdoses. The Gaussian Process Subset
Scan enables early detection of emerging patterns in spatio-temporal data,
accounting for both the non-iid nature of the data and the fact that detecting
subtle patterns requires integration of information across multiple spatial
areas and multiple time steps. We apply this approach to 17 years of
county-aggregated data for monthly opioid overdose deaths in the New York City
metropolitan area, showing clear advantages in the utility of discovered
patterns as compared to typical anomaly detection approaches.
To detect and characterize emerging overdose patterns that differentially
affect a subpopulation of the data, including geographic, demographic, and
behavioral patterns (e.g., which combinations of drugs are involved), we apply
the Multidimensional Tensor Scan to 8 years of case-level overdose data from
Allegheny County, PA. We discover previously unidentified overdose patterns
which reveal unusual demographic clusters, show impacts of drug legislation,
and demonstrate potential for early detection and targeted intervention. These
approaches to early detection of overdose patterns can inform prevention and
response efforts, as well as understanding the effects of policy changes.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Capsule based Approach for Polyphonic Sound Event Detection | Polyphonic sound event detection (polyphonic SED) is an interesting but
challenging task due to the concurrence of multiple sound events. Recently, SED
methods based on convolutional neural networks (CNN) and recurrent neural
networks (RNN) have shown promising performance. Generally, CNNs are designed
for local feature extraction while RNNs are used to model the temporal
dependency among these local features. Despite their success, existing deep
learning techniques are still unable to separate individual sound events from
their mixture, largely due to the overlapping characteristics of the
features. Motivated by the success of Capsule Networks (CapsNet), we propose a
more suitable capsule based approach for polyphonic SED. Specifically, several
capsule layers are designed to effectively select representative frequency
bands for each individual sound event. The temporal dependency of the capsules'
outputs is then modeled by an RNN, and a dynamic threshold method is proposed
for making the final decision based on the RNN outputs. Experiments on the TUT-SED
Synthetic 2016 dataset show that the proposed approach obtains an F1-score of
68.8% and an error rate of 0.45, outperforming the previous state-of-the-art
method, which achieved 66.4% and 0.48, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Linear simulation of ion temperature gradient driven instabilities in W7-X and LHD stellarators using GTC | The global gyrokinetic toroidal code (GTC) has been recently upgraded to do
simulations in non-axisymmetric equilibrium configuration, such as
stellarators. Linear simulation of ion temperature gradient (ITG) driven
instabilities has been done in Wendelstein7-X (W7-X) and Large Helical Device
(LHD) stellarators using GTC. Several results are discussed to study
characteristics of ITG in stellarators, including toroidal grids convergence,
nmodes number convergence, poloidal and parallel spectrums, and electrostatic
potential mode structure on flux surface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Advanced engineering of single-crystal gold nanoantennas | A nanofabrication process for realizing optical nanoantennas carved from a
single-crystal gold plate is presented in this communication. The method relies
on synthesizing two-dimensional micron-size gold crystals followed by the dry
etching of a desired antenna layout. The fabrication of single-crystal optical
nanoantennas with a standard electron-beam lithography tool and a dry etching
reactor represents an alternative technological solution to focused ion beam
milling. The process is exemplified by engineering nanorod
antennas. Dark-field spectroscopy indicates that optical antennas produced from
single crystal flakes have reduced localized surface plasmon resonance losses
compared to amorphous designs of similar shape. The present process is easily
applicable to other metals such as silver or copper and offers a design
flexibility not found in crystalline particles synthesized by colloidal
chemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stick-breaking processes, clumping, and Markov chain occupation laws | We consider the connections among `clumped' residual allocation models
(RAMs), a general class of stick-breaking processes including Dirichlet
processes, and the occupation laws of certain discrete space time-inhomogeneous
Markov chains related to simulated annealing and other applications. An
intermediate structure is introduced in a given RAM, where proportions between
successive indices in a list are added or clumped together to form another RAM.
In particular, when the initial RAM is a Griffiths-Engen-McCloskey (GEM)
sequence and the indices are given by the random times that an auxiliary Markov
chain jumps away from its current state, the joint law of the intermediate RAM
and the locations visited in the sojourns is given in terms of a `disordered'
GEM sequence, and an induced Markov chain. Through this joint law, we identify
a large class of `stick breaking' processes as the limits of empirical
occupation measures for associated time-inhomogeneous Markov chains.
| 0 | 0 | 1 | 1 | 0 | 0 |
Scaling laws of Rydberg excitons | Rydberg atoms have attracted considerable interest due to their huge
interaction among each other and with external fields. They demonstrate
characteristic scaling laws as a function of the principal quantum number $n$
for features such as the magnetic field for level crossing. While bearing
striking similarities to Rydberg atoms, fundamentally new insights may be
obtained for Rydberg excitons, as the crystal environment gives easy optical
access to many states within an exciton multiplet. Here we study experimentally
and theoretically the scaling of several characteristic parameters of Rydberg
excitons with $n$. From absorption spectra in magnetic field we find for the
first crossing of levels with adjacent principal quantum numbers a $B_r \propto
n^{-4}$ dependence of the resonance field strength, $B_r$, due to the dominant
paramagnetic term, unlike in the atomic case where the diamagnetic contribution
is decisive. By contrast, in electric field we find scaling laws just like for
Rydberg atoms. The resonance electric field strength scales as $E_r \propto
n^{-5}$. We observe anticrossings of the states belonging to multiplets with
different principal quantum numbers. The energy splittings at the avoided
crossings scale as $n^{-4}$ which we relate to the crystal specific deviation
of the exciton Hamiltonian from the hydrogen model. We observe the exciton
polarizability in the electric field to scale as $n^7$. In magnetic field the
crossover field strength from a hydrogen-like exciton to a magnetoexciton
dominated by electron and hole Landau level quantization scales as $n^{-3}$.
The ionization voltages demonstrate a $n^{-4}$ scaling as for atoms. The width
of the absorption lines remains constant before dissociation for high enough
$n$, while for small $n \lesssim 12$ an exponential increase with the field is
found. These results are in excellent agreement with theoretical calculations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Variable selection in multivariate linear models with high-dimensional covariance matrix estimation | In this paper, we propose a novel variable selection approach in the
framework of multivariate linear models taking into account the dependence that
may exist between the responses. It consists in first estimating the
covariance matrix of the responses and then plugging this estimator into a Lasso
criterion, in order to obtain a sparse estimator of the coefficient matrix. The
properties of our approach are investigated both from a theoretical and a
numerical point of view. More precisely, we give general conditions that the
estimators of the covariance matrix and its inverse have to satisfy in order to
recover the positions of the null and non-null entries of the coefficient
matrix when the size of the covariance matrix is not fixed and can tend to
infinity. We prove that these conditions are satisfied in the particular case
of some Toeplitz matrices. Our approach is implemented in the R package
MultiVarSel available from the Comprehensive R Archive Network (CRAN) and is
very attractive since it benefits from a low computational load. We also assess
the performance of our methodology using synthetic data and compare it with
alternative approaches. Our numerical experiments show that including the
estimation of the covariance matrix in the Lasso criterion dramatically
improves the variable selection performance in many cases.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Novel Model of Cancer-Induced Peripheral Neuropathy and the Role of TRPA1 in Pain Transduction | Background. Models of cancer-induced neuropathy are designed by injecting
cancer cells near the peripheral nerves. The interference of tissue-resident
immune cells prevents direct contact with the nerve fibres, which affects
the tumor microenvironment and the invasion process. Methods. Anaplastic
tumor-1 (AT-1) cells were inoculated within the sciatic nerves (SNs) of male
Copenhagen rats. Lumbar dorsal root ganglia (DRGs) and the SNs were collected
on days 3, 7, 14, and 21. SN tissues were examined for morphological changes
and DRG tissues for immunofluorescence, electrophoretic tendency, and mRNA
quantification. Hypersensitivities to cold, mechanical, and thermal stimuli
were determined. HC-030031, a selective TRPA1 antagonist, was used to treat
cold allodynia. Results. Nociception thresholds were identified on day 6.
Immunofluorescent micrographs showed overexpression of TRPA1 on days 7 and 14
and of CGRP on day 14 until day 21. Both TRPA1 and CGRP were coexpressed on the
same cells. Immunoblots exhibited an increase in TRPA1 expression on day 14.
TRPA1 mRNA underwent an increase on day 7 (normalized to 18S). Injection of
HC-030031 transiently reversed the cold allodynia. Conclusion. A novel and
promising model of cancer-induced neuropathy was established, and the role of
TRPA1 and CGRP in pain transduction was examined.
| 0 | 0 | 0 | 0 | 1 | 0 |
Statistical inference using SGD | We present a novel method for frequentist statistical inference in
$M$-estimation problems, based on stochastic gradient descent (SGD) with a
fixed step size: we demonstrate that the average of such SGD sequences can be
used for statistical inference, after proper scaling. An intuitive analysis
using the Ornstein-Uhlenbeck process suggests that such averages are
asymptotically normal. From a practical perspective, our SGD-based inference
procedure is a first order method, and is well-suited for large scale problems.
To show its merits, we apply it to both synthetic and real datasets, and
demonstrate that its accuracy is comparable to classical statistical methods,
while requiring potentially far less computation.
| 1 | 0 | 1 | 1 | 0 | 0 |
Core Discovery in Hidden Graphs | Massive network exploration is an important research direction with many
applications. In such a setting, the network is usually modeled as a graph
$G$, whereas any structural information of interest is extracted by inspecting
the way nodes are connected together. In the case where the adjacency matrix or
the adjacency list of $G$ is available, one can directly apply graph mining
algorithms to extract useful knowledge. However, there are cases where this is
not possible because the graph is \textit{hidden} or \textit{implicit}, meaning
that the edges are not recorded explicitly in the form of an adjacency
representation. In such a case, the only alternative is to pose a sequence of
\textit{edge probing queries} asking for the existence or not of a particular
graph edge. However, checking all possible node pairs is costly (quadratic on
the number of nodes). Thus, our objective is to pose as few edge probing
queries as possible, since each such query is expected to be costly. In this
work, we center our focus on the \textit{core decomposition} of a hidden graph.
In particular, we provide an efficient algorithm to detect the maximal subgraph
$S_k$ of $G$ in which the induced degree of every node $u \in S_k$ is at least
$k$. Performance evaluation results demonstrate that significant performance
improvements are achieved in comparison to baseline approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some exact Bradlow vortex solutions | We consider the Bradlow equation for vortices which was recently found by
Manton and find a two-parameter class of analytic solutions in closed form on
nontrivial geometries with non-constant curvature. The general solution to our
class of metrics is given by a hypergeometric function and the area of the
vortex domain by the Gaussian hypergeometric function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Saturating sets in projective planes and hypergraph covers | Let $\Pi_q$ be an arbitrary finite projective plane of order $q$. A subset
$S$ of its points is called saturating if any point outside $S$ is collinear
with a pair of points from $S$. Applying probabilistic tools we improve the
upper bound on the smallest possible size of the saturating set to
$\lceil\sqrt{3q\ln{q}}\rceil+ \lceil(\sqrt{q}+1)/2\rceil$. The same result is
presented using an algorithmic approach as well, which points out the
connection with the transversal number of uniform multiple intersecting
hypergraphs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Accelerated Optimization in the PDE Framework: Formulations for the Active Contour Case | Following the seminal work of Nesterov, accelerated optimization methods have
been used to powerfully boost the performance of first-order, gradient-based
parameter estimation in scenarios where second-order optimization strategies
are either inapplicable or impractical. Not only does accelerated gradient
descent converge considerably faster than traditional gradient descent, but it
also performs a more robust local search of the parameter space by initially
overshooting and then oscillating back as it settles into a final
configuration, thereby selecting only local minimizers with a basin of
attraction large enough to contain the initial overshoot. This behavior has
made accelerated and stochastic gradient search methods particularly popular
within the machine learning community. In their recent PNAS 2016 paper,
Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated
schemes can be cast in a variational framework formulated around the Bregman
divergence, leading to continuum-limit ODEs. We show how their formulation may
be further extended to infinite-dimensional manifolds (starting here with the
geometric space of curves and surfaces) by substituting the Bregman divergence
with inner products on the tangent space and explicitly introducing a
distributed mass model which evolves in conjunction with the object of interest
during the optimization process. The co-evolving mass model, which is
introduced purely for the sake of endowing the optimization with helpful
dynamics, also links the resulting class of accelerated PDE based optimization
schemes to fluid dynamical formulations of optimal mass transport.
| 1 | 0 | 0 | 0 | 0 | 0 |
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization | We study the conditions under which one is able to efficiently apply
variance-reduction and acceleration schemes on finite sum optimization
problems. First, we show that, perhaps surprisingly, the finite sum structure
by itself is not sufficient for obtaining a complexity bound of
$\tilde{\mathcal{O}}((n+L/\mu)\ln(1/\epsilon))$ for $L$-smooth and $\mu$-strongly
convex individual functions - one must also know which individual function is
being referred to by the oracle at each iteration. Next, we show that for a
broad class of first-order and coordinate-descent finite sum algorithms
(including, e.g., SDCA, SVRG, SAG), it is not possible to get an `accelerated'
complexity bound of $\tilde{\mathcal{O}}((n+\sqrt{n L/\mu})\ln(1/\epsilon))$, unless
the strong convexity parameter is given explicitly. Lastly, we show that when
this class of algorithms is used for minimizing $L$-smooth and convex finite
sums, the optimal complexity bound is $\tilde{\mathcal{O}}(n+L/\epsilon)$, assuming
that (on average) the same update rule is used in every iteration, and
$\tilde{\mathcal{O}}(n+\sqrt{nL/\epsilon})$, otherwise.
| 1 | 0 | 1 | 1 | 0 | 0 |
Stability Enhanced Large-Margin Classifier Selection | Stability is an important aspect of a classification procedure because
unstable predictions can potentially reduce users' trust in a classification
system and also harm the reproducibility of scientific conclusions. The major
goal of our work is to introduce a novel concept of classification instability,
i.e., decision boundary instability (DBI), and incorporate it with the
generalization error (GE) as a standard for selecting the most accurate and
stable classifier. Specifically, we implement a two-stage algorithm: (i)
initially select a subset of classifiers whose estimated GEs are not
significantly different from the minimal estimated GE among all the candidate
classifiers; (ii) the optimal classifier is chosen as the one achieving the
minimal DBI among the subset selected in stage (i). This general selection
principle applies to both linear and nonlinear classifiers. Large-margin
classifiers are used as a prototypical example to illustrate the above idea.
Our selection method is shown to be consistent in the sense that the optimal
classifier simultaneously achieves the minimal GE and the minimal DBI. Various
simulations and real examples further demonstrate the advantage of our method
over several alternative approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
Some Sphere Theorems in Linear Potential Theory | In this paper we analyze the capacitary potential due to a charged body in
order to deduce sharp analytic and geometric inequalities, whose equality cases
are saturated by domains with spherical symmetry. In particular, for a regular
bounded domain $\Omega \subset \mathbb{R}^n$, $n\geq 3$, we prove that if the
mean curvature $H$ of the boundary obeys the condition $$ - \bigg[
\frac{1}{\text{Cap}(\Omega)} \bigg]^{\frac{1}{n-2}} \leq \frac{H}{n-1} \leq
\bigg[ \frac{1}{\text{Cap}(\Omega)} \bigg]^{\frac{1}{n-2}} , $$ then $\Omega$
is a round ball.
| 0 | 0 | 1 | 0 | 0 | 0 |
Non-Gaussian Component Analysis using Entropy Methods | Non-Gaussian component analysis (NGCA) is a problem in multidimensional data
analysis which, since its formulation in 2006, has attracted considerable
attention in statistics and machine learning. In this problem, we have a random
variable $X$ in $n$-dimensional Euclidean space. There is an unknown subspace
$\Gamma$ of the $n$-dimensional Euclidean space such that the orthogonal
projection of $X$ onto $\Gamma$ is standard multidimensional Gaussian and the
orthogonal projection of $X$ onto $\Gamma^{\perp}$, the orthogonal complement
of $\Gamma$, is non-Gaussian, in the sense that all its one-dimensional
marginals are different from the Gaussian in a certain metric defined in terms
of moments. The NGCA problem is to approximate the non-Gaussian subspace
$\Gamma^{\perp}$ given samples of $X$.
Vectors in $\Gamma^{\perp}$ correspond to `interesting' directions, whereas
vectors in $\Gamma$ correspond to the directions where data is very noisy. The
most interesting applications of the NGCA model are for the case when the
magnitude of the noise is comparable to that of the true signal, a setting in
which traditional noise reduction techniques such as PCA do not apply directly.
NGCA is also related to dimension reduction and to other data analysis problems
such as ICA. NGCA-like problems have been studied in statistics for a long time
using techniques such as projection pursuit.
We give an algorithm that takes polynomial time in the dimension $n$ and has
an inverse polynomial dependence on the error parameter measuring the angle
distance between the non-Gaussian subspace and the subspace output by the
algorithm. Our algorithm is based on relative entropy as the contrast function
and fits under the projection pursuit framework. The techniques we develop for
analyzing our algorithm may be of use for other related problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
Positive Geometries and Canonical Forms | Recent years have seen a surprising connection between the physics of
scattering amplitudes and a class of mathematical objects--the positive
Grassmannian, positive loop Grassmannians, tree and loop Amplituhedra--which
have been loosely referred to as "positive geometries". The connection between
the geometry and physics is provided by a unique differential form canonically
determined by the property of having logarithmic singularities (only) on all
the boundaries of the space, with residues on each boundary given by the
canonical form on that boundary. In this paper we initiate an exploration of
"positive geometries" and "canonical forms" as objects of study in their own
right in a more general mathematical setting. We give a precise definition of
positive geometries and canonical forms, introduce general methods for finding
forms for more complicated positive geometries from simpler ones, and present
numerous examples of positive geometries in projective spaces, Grassmannians,
and toric, cluster and flag varieties. We also illustrate a number of
strategies for computing canonical forms which yield interesting
representations for the forms associated with wide classes of positive
geometries, ranging from the simplest Amplituhedra to new expressions for the
volume of arbitrary convex polytopes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computational Thinking in Education: Where does it Fit? A systematic literary review | Computational Thinking (CT) has been described as an essential skill which
everyone should learn and can therefore include in their skill set. Seymour
Papert is credited with concretising Computational Thinking in 1980, but since
Wing popularised the term in 2006 and brought it to the international
community's attention, more and more research has been conducted on CT in
education. The aim of this systematic literary review is to give educators and
education researchers an overview of what work has been carried out in the
domain, as well as potential gaps and opportunities that still exist.
Overall it was found in this review that, although there is a lot of work
currently being done around the world in many different educational contexts,
the work relating to CT is still in its infancy. Along with the need to create
an agreed-upon definition of CT, many countries are still in the process of,
or have not yet started, introducing CT into curricula at all levels of
education. It was also found that Computer Science/Computing, which could be
the most obvious place to teach CT, has yet to become a mainstream subject in
some countries, although this is improving. Of encouragement to educators is
the wealth of tools and resources being developed to help teach CT as well as
more and more work relating to curriculum development. For those teachers
looking to incorporate CT into their schools or classes, there are plentiful
options, including programming, hands-on exercises and more. More detailed
lesson plans and curriculum structure, however, would be of further benefit to
teachers.
| 1 | 1 | 0 | 0 | 0 | 0 |
Spincaloritronic signal generation in non-degenerate Si | Spincaloritronic signal generation due to thermal spin injection and spin
transport is demonstrated in a non-degenerate Si spin valve. The spin-dependent
Seebeck effect is used for the spincaloritronic signal generation, and a
thermal gradient of about 200 mK at the interface of Fe and Si enables
the generation of a spin voltage of 8 {\mu}V at room temperature. A simple
extension of a conventional spin drift-diffusion model that takes into account
the spin-dependent Seebeck effect shows that semiconductor materials hold
greater promise for spincaloritronic signal generation than metallic materials,
allowing efficient heat recycling in semiconductor spin devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal fidelity multi-level Monte Carlo for quantification of uncertainty in simulations of cloud cavitation collapse | We quantify uncertainties in the location and magnitude of extreme pressure
spots revealed from large scale multi-phase flow simulations of cloud
cavitation collapse. We examine clouds containing 500 cavities and quantify
uncertainties related to their initial spatial arrangement. The resulting
2000-dimensional space is sampled using a non-intrusive and computationally
efficient Multi-Level Monte Carlo (MLMC) methodology. We introduce novel
optimal control variate coefficients to enhance the variance reduction in MLMC.
The proposed optimal fidelity MLMC leads to more than two orders of magnitude
speedup when compared to standard Monte Carlo methods. We identify large
uncertainties in the location and magnitude of the peak pressure pulse and
present its statistical correlations and joint probability density functions
with the geometrical characteristics of the cloud. Characteristic properties of
spatial cloud structure are identified as potential causes of significant
uncertainties in exerted collapse pressures.
| 1 | 0 | 0 | 1 | 0 | 0 |
A New Take on Protecting Cyclists in Smart Cities | Pollution in urban centres is becoming a major societal problem. While
pollution is a concern for all urban dwellers, cyclists are one of the most
exposed groups due to their proximity to vehicle tailpipes. Consequently, new
solutions are required to help protect citizens, especially cyclists, from the
harmful effects of exhaust-gas emissions. In this context, hybrid vehicles
(HVs) offer new actuation possibilities that can be exploited in this
direction. More specifically, such vehicles when working together as a group,
have the ability to dynamically lower the emissions in a given area, thus
benefiting citizens, whilst still giving the vehicle owner the flexibility of
using an Internal Combustion Engine (ICE). This paper aims to develop an
algorithm that can be deployed in such vehicles, whereby geofences (virtual
geographic boundaries) are used to specify areas of low pollution around
cyclists. The emissions level inside the geofence is controlled via a coin
tossing algorithm to switch the HV motor into, and out of, electric mode, in a
manner that is in some sense optimal. The optimality criterion is based on how
polluting vehicles inside the geofence are, and the expected density of
cyclists near each vehicle. The algorithm is triggered once a vehicle detects a
cyclist. Implementations are presented, both in simulation, and in a real
vehicle, and the system is tested using a Hardware-In-the-Loop (HIL) platform
(video provided).
| 0 | 0 | 1 | 0 | 0 | 0 |
The effect upon neutrinos of core-collapse supernova accretion phase turbulence | During the accretion phase of a core-collapse supernovae, large amplitude
turbulence is generated by the combination of the standing accretion shock
instability and convection driven by neutrino heating. The turbulence directly
affects the dynamics of the explosion, but there is also the possibility of an
additional, indirect, feedback mechanism due to the effect turbulence can have
upon neutrino flavor evolution and thus the neutrino heating. In this paper we
consider the effect of turbulence during the accretion phase upon neutrino
evolution, both numerically and analytically. Adopting representative supernova
profiles taken from the accretion phase of a supernova simulation, we find the
numerical calculations exhibit no effect from turbulence. We explain this
absence using two analytic descriptions: the Stimulated Transition model and
the Distorted Phase Effect model. In the Stimulated Transition model turbulence
effects depend upon six different lengthscales, and three criteria must be
satisfied between them if one is to observe a change in the flavor evolution
due to Stimulated Transition. We further demonstrate that the Distorted Phase
Effect depends upon the presence of multiple semi-adiabatic MSW resonances or
discontinuities that also can be expressed as a relationship between three of
the same lengthscales. When we examine the supernova profiles used in the
numerical calculations we find the three Stimulated Transition criteria cannot
be satisfied, independent of the form of the turbulence power spectrum, and
that the same supernova profiles lack the multiple semi-adiabatic MSW
resonances or discontinuities necessary to produce a Distorted Phase Effect.
Thus we conclude that even though large amplitude turbulence is present in
supernovae during the accretion phase, it has no effect upon neutrino flavor
evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Positive-Unlabeled Learning with Non-Negative Risk Estimator | From only positive (P) and unlabeled (U) data, a binary classifier could be
trained with PU learning, in which the state of the art is unbiased PU
learning. However, if its model is very flexible, empirical risks on training
data will go negative, and we will suffer from serious overfitting. In this
paper, we propose a non-negative risk estimator for PU learning: when getting
minimized, it is more robust against overfitting, and thus we are able to use
very flexible models (such as deep neural networks) given limited P data.
Moreover, we analyze the bias, consistency, and mean-squared-error reduction of
the proposed risk estimator, and bound the estimation error of the resulting
empirical risk minimizer. Experiments demonstrate that our risk estimator fixes
the overfitting problem of its unbiased counterparts.
| 1 | 0 | 0 | 1 | 0 | 0 |
PSZ2LenS. Weak lensing analysis of the Planck clusters in the CFHTLenS and in the RCSLenS | The possibly unbiased selection process in surveys of the Sunyaev Zel'dovich
effect can unveil new populations of galaxy clusters. We performed a weak
lensing analysis of the PSZ2LenS sample, i.e. the PSZ2 galaxy clusters detected
by the Planck mission in the sky portion covered by the lensing surveys
CFHTLenS and RCSLenS. PSZ2LenS consists of 35 clusters and it is a
statistically complete and homogeneous subsample of the PSZ2 catalogue. The
Planck selected clusters appear to be unbiased tracers of the massive end of
the cosmological haloes. The mass concentration relation of the sample is in
excellent agreement with predictions from the Lambda cold dark matter model.
The stacked lensing signal is detected at 14 sigma significance over the radial
range 0.1<R<3.2 Mpc/h, and is well described by the cuspy dark halo models
predicted by numerical simulations. We confirmed that Planck estimated masses
are biased low by b_SZ= 27+-11(stat)+-8(sys) per cent with respect to weak
lensing masses. The bias is higher for the cosmological subsample, b_SZ=
40+-14(stat)+-8(sys) per cent.
| 0 | 1 | 0 | 0 | 0 | 0 |
GP CaKe: Effective brain connectivity with causal kernels | A fundamental goal in network neuroscience is to understand how activity in
one region drives activity elsewhere, a process referred to as effective
connectivity. Here we propose to model this causal interaction using
integro-differential equations and causal kernels that allow for a rich
analysis of effective connectivity. The approach combines the tractability and
flexibility of autoregressive modeling with the biophysical interpretability of
dynamic causal modeling. The causal kernels are learned nonparametrically using
Gaussian process regression, yielding an efficient framework for causal
inference. We construct a novel class of causal covariance functions that
enforce the desired properties of the causal kernels, an approach which we call
GP CaKe. By construction, the model and its hyperparameters have biophysical
meaning and are therefore easily interpretable. We demonstrate the efficacy of
GP CaKe on a number of simulations and give an example of a realistic
application on magnetoencephalography (MEG) data.
| 0 | 0 | 0 | 1 | 0 | 0 |
On boundary behavior of mappings on Riemannian manifolds in terms of prime ends | A boundary behavior of ring mappings on Riemannian manifolds, which are
generalization of quasiconformal mappings by Gehring, is investigated. In terms
of prime ends, there are obtained theorems about continuous extension to a
boundary of classes mentioned above. In the terms mentioned above, there are
obtained results about equicontinuity of these classes in the closure of the
domain.
| 0 | 0 | 1 | 0 | 0 | 0 |
Privacy Preserving Face Retrieval in the Cloud for Mobile Users | Recently, cloud storage and processing have been widely adopted. Mobile users
in one family or one team may automatically backup their photos to the same
shared cloud storage space. The powerful face detector trained and provided by
a 3rd party may be used to retrieve the photo collection which contains a
specific group of persons from the cloud storage server. However, the privacy
of the mobile users may be leaked to the cloud server providers. Meanwhile,
the copyright of the face detector should be protected. Thus, in
this paper, we propose a protocol for privacy-preserving face retrieval in the
cloud for mobile users, which protects the user photos and the face detector
simultaneously. The cloud server only provides the resources of storage and
computing and cannot learn anything about the user photos or the face detector.
We test our protocol inside several families and classes. The experimental
results reveal that our protocol can successfully retrieve the proper photos
from the cloud server and protect the user photos and the face detector.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares) | This work provides a simplified proof of the statistical minimax optimality
of (iterate averaged) stochastic gradient descent (SGD), for the special case
of least squares. This result is obtained by analyzing SGD as a stochastic
process and by sharply characterizing the stationary covariance matrix of this
process. The finite rate optimality characterization captures the constant
factors and addresses model mis-specification.
| 1 | 0 | 0 | 1 | 0 | 0 |
Averages of Unlabeled Networks: Geometric Characterization and Asymptotic Behavior | It is becoming increasingly common to see large collections of network data
objects -- that is, data sets in which a network is viewed as a fundamental
unit of observation. As a result, there is a pressing need to develop
network-based analogues of even many of the most basic tools already standard
for scalar and vector data. In this paper, our focus is on averages of
unlabeled, undirected networks with edge weights. Specifically, we (i)
characterize a certain notion of the space of all such networks, (ii) describe
key topological and geometric properties of this space relevant to doing
probability and statistics thereupon, and (iii) use these properties to
establish the asymptotic behavior of a generalized notion of an empirical mean
under sampling from a distribution supported on this space. Our results rely on
a combination of tools from geometry, probability theory, and statistical shape
analysis. In particular, the lack of vertex labeling necessitates working with
a quotient space modding out permutations of labels. This results in a
nontrivial geometry for the space of unlabeled networks, which in turn is found
to have important implications on the types of probabilistic and statistical
results that may be obtained and the techniques needed to obtain them.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Abelian distribution | We define the Abelian distribution and study its basic properties. Abelian
distributions arise in the context of neural modeling and describe the size of
neural avalanches in fully-connected integrate-and-fire models of
self-organized criticality in neural systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
PULSEDYN - A dynamical simulation tool for studying strongly nonlinear chains | We introduce PULSEDYN, a particle dynamics program in $C++$, to solve
many-body nonlinear systems in one dimension. PULSEDYN is designed to make
computing accessible to non-specialists in the field of nonlinear dynamics of
many-body systems and to ensure transparency and easy benchmarking of numerical
results for an integrable model (Toda chain) and three non-integrable models
(Fermi-Pasta-Ulam-Tsingou, Morse and Lennard-Jones). To achieve the latter, we
have made our code open source and free to distribute. We examine (i) soliton
propagation and two-soliton collision in the Toda system, (ii) the recurrence
phenomenon in the Fermi-Pasta-Ulam-Tsingou system and the decay of a single
localized nonlinear excitation in the same system through quasi-equilibrium to
an equipartitioned state, and solitary wave (SW) propagation in chains with (iii) Morse and
(iv) Lennard-Jones potentials. We recover well known results from theory and
other numerical results in the literature. We have obtained these results by
setting up a parameter file interface which allows the code to be used as a
black box. Therefore, we anticipate that the code would prove useful to
students and non-specialists. At the same time, PULSEDYN provides
scientifically accurate simulations thus making the study of rich dynamical
processes broadly accessible.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stabilization Control of the Differential Mobile Robot Using Lyapunov Function and Extended Kalman Filter | This paper presents the design of a control model to navigate the
differential mobile robot to reach the desired destination from an arbitrary
initial pose. The designed model is divided into two stages: the state
estimation and the stabilization control. In the state estimation, an extended
Kalman filter is employed to optimally combine the information from the system
dynamics and measurements. Two Lyapunov functions are constructed that allow a
hybrid feedback control law to execute the robot movements. The asymptotic
stability and robustness of the closed-loop system are assured. Simulations and
experiments are carried out to validate the effectiveness and applicability of
the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Random Sample Partition Data Model for Big Data Analysis | Big data sets must be carefully partitioned into statistically similar data
subsets that can be used as representative samples for big data analysis tasks.
In this paper, we propose the random sample partition (RSP) data model to
represent a big data set as a set of non-overlapping data subsets, called RSP
data blocks, where each RSP data block has a probability distribution similar
to the whole big data set. Under this data model, efficient block level
sampling is used to randomly select RSP data blocks, replacing expensive record
level sampling to select sample data from a big distributed data set on a
computing cluster. We show how RSP data blocks can be employed to estimate
statistics of a big data set and build models which are equivalent to those
built from the whole big data set. In this approach, analysis of a big data set
becomes analysis of a few RSP data blocks which have been generated in advance on
the computing cluster. Therefore, the new method for data analysis based on RSP
data blocks is scalable to big data.
| 1 | 0 | 0 | 1 | 0 | 0 |
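The block-level sampling idea described in the RSP abstract above can be illustrated with a small sketch. This is a minimal illustration, not the authors' implementation; the partitioning scheme and the mean estimator shown here are assumptions chosen for brevity.

```python
import numpy as np

def random_sample_partition(data, n_blocks, seed=0):
    """Shuffle records and split them into non-overlapping blocks,
    so each block is (approximately) a random sample of the data."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(data)
    return np.array_split(shuffled, n_blocks)

# Toy data set: 1 million records of a single numeric feature.
data = np.random.default_rng(1).exponential(scale=2.0, size=1_000_000)
blocks = random_sample_partition(data, n_blocks=100)

# Block-level sampling: estimate the mean from a few randomly chosen blocks
# instead of record-level sampling over the whole data set.
chosen = np.random.default_rng(2).choice(len(blocks), size=5, replace=False)
estimate = np.mean([blocks[i].mean() for i in chosen])
print(f"estimate from 5 blocks: {estimate:.4f}, full-data mean: {data.mean():.4f}")
```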
A family of transformed copulas with singular component | In this paper, we present a family of bivariate copulas by transforming a
given copula function with two increasing functions, and we call the result a transformed copula. One distinctive characteristic of the transformed copula is its
singular component along the main diagonal. Conditions guaranteeing the
transformed function to be a copula function are provided, and several classes
of the transformed copulas are given. The singular component along the main
diagonal of the transformed copula is verified, and the tail dependence
coefficients of the transformed copulas are obtained. Finally, some properties
of the transformed copula are discussed, such as total positivity of order 2 and the concordance order.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Learning for micro-Electrocorticographic (μECoG) Data | Machine learning can extract information from neural recordings, e.g.,
surface EEG, ECoG and {\mu}ECoG, and therefore plays an important role in many
research and clinical applications. Deep learning with artificial neural
networks has recently seen increasing attention as a new approach in brain
signal decoding. Here, we apply a deep learning approach using convolutional
neural networks to {\mu}ECoG data obtained with a wireless, chronically
implanted system in an ovine animal model. Regularized linear discriminant
analysis (rLDA), a filter bank component spatial pattern (FBCSP) algorithm and
convolutional neural networks (ConvNets) were applied to auditory evoked
responses captured by {\mu}ECoG. We show that compared with rLDA and FBCSP,
significantly higher decoding accuracy can be obtained by ConvNets trained in
an end-to-end manner, i.e., without any predefined signal features. Deep
learning thus proves to be a promising technique for {\mu}ECoG-based brain-machine
interfacing applications.
| 0 | 0 | 0 | 0 | 1 | 0 |
Clipped Matrix Completion: A Remedy for Ceiling Effects | We consider the problem of recovering a low-rank matrix from its clipped
observations. Clipping arises in many scientific areas and obstructs statistical analyses. On the other hand, matrix completion (MC) methods can
recover a low-rank matrix from various information deficits by using the
principle of low-rank completion. However, the current theoretical guarantees
for low-rank MC do not apply to clipped matrices, as the deficit depends on the
underlying values. Therefore, the feasibility of clipped matrix completion
(CMC) is not trivial. In this paper, we first provide a theoretical guarantee
for the exact recovery of CMC by using a trace-norm minimization algorithm.
Furthermore, we propose practical CMC algorithms by extending ordinary MC
methods. Our extension is to use the squared hinge loss in place of the squared
loss for reducing the penalty of over-estimation on clipped entries. We also
propose a novel regularization term tailored for CMC. It is a combination of
two trace-norm terms, and we theoretically bound the recovery error under the
regularization. We demonstrate the effectiveness of the proposed methods
through experiments using both synthetic and benchmark data for recommendation
systems.
| 0 | 0 | 0 | 1 | 0 | 0 |
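The abstract above describes replacing the squared loss with a squared hinge loss on clipped entries, so that over-estimation beyond the clipping threshold is not penalized. The following sketch only illustrates that loss under assumed notation (a known clipping ceiling C and a low-rank factorization U V^T); it is not the authors' algorithm.

```python
import numpy as np

def cmc_loss(U, V, M_obs, clipped_mask, observed_mask, C):
    """Loss for clipped matrix completion with factorization X = U @ V.T.
    - Non-clipped observed entries: ordinary squared loss.
    - Clipped entries (observed at ceiling C): squared hinge loss
      max(0, C - x)^2, so predicting above C is not penalized."""
    X = U @ V.T
    resid = (X - M_obs) * observed_mask * (1 - clipped_mask)
    hinge = np.maximum(0.0, C - X) * clipped_mask
    return np.sum(resid ** 2) + np.sum(hinge ** 2)

# Tiny example: a rank-1 matrix observed with a ceiling C = 1.0.
rng = np.random.default_rng(0)
M_true = np.outer(rng.normal(size=6), rng.normal(size=5))
C = 1.0
M_obs = np.minimum(M_true, C)                   # clipped observations
clipped_mask = (M_true > C).astype(float)       # entries that hit the ceiling
observed_mask = np.ones_like(M_true)            # assume fully observed here
U, V = rng.normal(size=(6, 1)), rng.normal(size=(5, 1))
print("initial loss:", cmc_loss(U, V, M_obs, clipped_mask, observed_mask, C))
```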
End-to-End Multi-Task Denoising for joint SDR and PESQ Optimization | Supervised learning based on a deep neural network recently has achieved
substantial improvement on speech enhancement. Denoising networks learn a mapping from noisy speech either directly to clean speech or to a spectral mask, i.e., the ratio between the clean and noisy spectra. In either case, the network is
optimized by minimizing mean square error (MSE) between predefined labels and
network output of spectra or time-domain signal. However, existing schemes have
either of two critical issues: spectra and metric mismatches. The spectra
mismatch is a well known issue that any spectra modification after short-time
Fourier transform (STFT), in general, cannot be fully recovered after inverse
STFT. The metric mismatch is that a conventional MSE metric is sub-optimal to
maximize our target metrics, signal-to-distortion ratio (SDR) and perceptual
evaluation of speech quality (PESQ). This paper presents a new end-to-end
denoising framework with the goal of joint SDR and PESQ optimization. First,
the network optimization is performed on the time-domain signals after ISTFT to
avoid spectra mismatch. Second, two loss functions which have improved
correlations with SDR and PESQ metrics are proposed to minimize metric
mismatch. Experimental results showed that the proposed denoising scheme
significantly improved both SDR and PESQ performance over the existing methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
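As a small illustration of a loss that tracks the SDR metric mentioned above, the sketch below computes a negative-SDR objective directly on time-domain signals. The paper's exact loss functions are not reproduced here; this is just the textbook SDR definition written as an objective in NumPy.

```python
import numpy as np

def negative_sdr(clean, estimate, eps=1e-8):
    """Negative signal-to-distortion ratio (in dB) between a clean
    time-domain signal and its estimate; lower is better as a loss."""
    noise = clean - estimate
    sdr = 10.0 * np.log10((np.sum(clean ** 2) + eps) / (np.sum(noise ** 2) + eps))
    return -sdr

t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
print("loss for noisy input as estimate:", negative_sdr(clean, noisy))
print("loss for perfect estimate:       ", negative_sdr(clean, clean))
```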
Scalable Importance Tempering and Bayesian Variable Selection | We propose a Monte Carlo algorithm to sample from high-dimensional
probability distributions that combines Markov chain Monte Carlo (MCMC) and
importance sampling. We provide a careful theoretical analysis, including
guarantees on robustness to high-dimensionality, explicit comparison with
standard MCMC and illustrations of the potential improvements in efficiency.
Simple and concrete intuition is provided for when the novel scheme is expected
to outperform standard schemes. When applied to Bayesian Variable Selection
problems, the novel algorithm is orders of magnitude more efficient than
available alternative sampling schemes and allows fast and reliable fully Bayesian inference with tens of thousands of regressors.
| 0 | 0 | 0 | 1 | 0 | 0 |
Distributed Convolutional Dictionary Learning (DiCoDiLe): Pattern Discovery in Large Images and Signals | Convolutional dictionary learning (CDL) estimates a shift-invariant basis
adapted to multidimensional data. CDL has proven useful for image denoising or
inpainting, as well as for pattern discovery on multivariate signals. As
estimated patterns can be positioned anywhere in signals or images,
optimization techniques face the difficulty of working in extremely high
dimensions with millions of pixels or time samples, in contrast to standard patch-based dictionary learning. To address this optimization problem, this
work proposes a distributed and asynchronous algorithm, employing locally
greedy coordinate descent and an asynchronous locking mechanism that does not
require a central server. This algorithm can be used to distribute the
computation on a number of workers which scales linearly with the encoded
signal's size. Experiments confirm the scaling properties, which allow us to learn patterns on large-scale images from the Hubble Space Telescope.
| 1 | 0 | 0 | 1 | 0 | 0 |
Performance Analysis of Robust Stable PID Controllers Using Dominant Pole Placement for SOPTD Process Models | This paper derives new formulations for designing dominant pole placement
based proportional-integral-derivative (PID) controllers to handle second order
processes with time delays (SOPTD). Previously, similar attempts have been made
for pole placement in delay-free systems. The presence of the time delay term
manifests itself as a higher order system with variable number of interlaced
poles and zeros upon Pade approximation, which makes it difficult to achieve
precise pole placement control. We here report the analytical expressions to
constrain the closed loop dominant and non-dominant poles at the desired
locations in the complex s-plane, using a third order Pade approximation for
the delay term. However, the invariance of the closed-loop performance with different time delay approximations has also been verified using increasing orders of Pade approximation, representing a closer-to-reality higher order delay dynamics. The choice of the nature of the non-dominant poles, e.g. all being complex, real or a combination of them, modifies the characteristic equation and influences the
achievable stability regions. The effect of different types of non-dominant
poles and the corresponding stability regions are obtained for nine test-bench
processes indicating different levels of open-loop damping and lag to delay
ratio. Next, we investigate which expression yields a wider stability region in
the design parameter space by using Monte Carlo simulations while uniformly
sampling a chosen design parameter space. Various time and frequency domain
control performance parameters are investigated next, as well as their
deviations with uncertain process parameters, using thousands of Monte Carlo
simulations, around the robust stable solution for each of the nine test-bench
processes.
| 0 | 0 | 0 | 1 | 0 | 0 |
On Conjugates and Adjoint Descent | In this note we present an $\infty$-categorical framework for descent along
adjunctions and a general formula for counting conjugates up to equivalence
which unifies several known formulae from different fields.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning to Parse and Translate Improves Neural Machine Translation | There has been relatively little attention to incorporating linguistic priors into neural machine translation. Much of the previous work was further constrained to considering linguistic priors on the source side. In this paper,
we propose a hybrid model, called NMT+RNNG, that learns to parse and translate
by combining the recurrent neural network grammar into the attention-based
neural machine translation. Our approach encourages the neural machine
translation model to incorporate a linguistic prior during training, and lets it
translate on its own afterward. Extensive experiments with four language pairs
show the effectiveness of the proposed NMT+RNNG.
| 1 | 0 | 0 | 0 | 0 | 0 |
Boundary feedback stabilization of a flexible wing model under unsteady aerodynamic loads | This paper addresses the boundary stabilization of a flexible wing model,
both in bending and twisting displacements, under unsteady aerodynamic loads,
and in the presence of a store. The wing dynamics is captured by a distributed
parameter system as a coupled Euler-Bernoulli and Timoshenko beam model. The
problem is tackled in the framework of semigroup theory, and a Lyapunov-based
stability analysis is carried out to assess that the system energy, as well as
the bending and twisting displacements, decay exponentially to zero. The
effectiveness of the proposed boundary control scheme is evaluated based on
simulations.
| 1 | 0 | 1 | 0 | 0 | 0 |
Out-colourings of Digraphs | We study vertex colourings of digraphs so that no out-neighbourhood is
monochromatic and call such a colouring an {\bf out-colouring}. The problem of
deciding whether a given digraph has an out-colouring with only two colours
(called a 2-out-colouring) is ${\cal
NP}$-complete. We show that for every choice of positive integers $r,k$ there
exists a $k$-strong bipartite tournament which needs at least $r$ colours in
every out-colouring. Our main results are on tournaments and semicomplete
digraphs. We prove that, except for the Paley tournament $P_7$, every strong
semicomplete digraph of minimum out-degree at least 3 has a 2-out-colouring.
Furthermore, we show that every semicomplete digraph on at least 7 vertices has
a 2-out-colouring if and only if it has a {\bf balanced} such colouring, that
is, the difference between the number of vertices that receive colour 1 and
colour 2 is at most one. In the second half of the paper we consider the
generalization of 2-out-colourings to vertex partitions $(V_1,V_2)$ of a
digraph $D$ so that each of the three digraphs induced by, respectively, the
vertices of $V_1$, the vertices of $V_2$ and all arcs between $V_1$ and $V_2$
have minimum out-degree $k$ for a prescribed integer $k\geq 1$. Using
probabilistic arguments we prove that there exists an absolute positive
constant $c$ so that every semicomplete digraph of minimum out-degree at least
$2k+c\sqrt{k}$ has such a partition. This is tight up to the value of $c$.
| 1 | 0 | 0 | 0 | 0 | 0 |
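For concreteness, the following small helper checks whether a given 2-colouring of a digraph is an out-colouring in the sense defined above, i.e., no non-empty out-neighbourhood is monochromatic. It is only a verification utility under an assumed adjacency-list representation, not part of the paper.

```python
def is_out_colouring(out_neighbours, colour):
    """out_neighbours: dict mapping each vertex to its list of out-neighbours.
    colour: dict mapping each vertex to a colour (e.g. 1 or 2).
    Returns True if no non-empty out-neighbourhood is monochromatic."""
    for v, nbrs in out_neighbours.items():
        if nbrs and len({colour[u] for u in nbrs}) == 1:
            return False
    return True

# Directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0: every out-neighbourhood has a single
# vertex, so it is always monochromatic and no 2-out-colouring exists.
cycle = {0: [1], 1: [2], 2: [3], 3: [0]}
print(is_out_colouring(cycle, {0: 1, 1: 2, 2: 1, 3: 2}))  # False
```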
An Improved Video Analysis using Context based Extension of LSH | Locality Sensitive Hashing (LSH) based algorithms have already shown their
promise in finding approximate nearest neighbors in high dimensional data
space. However, there are certain scenarios, as in sequential data, where the
proximity of a pair of points cannot be captured without considering their
surroundings or context. In videos, as for example, a particular frame is
meaningful only when it is seen in the context of its preceding and following
frames. LSH has no mechanism to handle the contexts of the data points. In
this article, a novel scheme of Context based Locality Sensitive Hashing
(conLSH) has been introduced, in which points are hashed together not only
based on their closeness, but also because of similar context. The contribution made in this article is threefold. First, conLSH is integrated with a recently
proposed fast optimal sequence alignment algorithm (FOGSAA) using a layered
approach. The resultant method is applied to video retrieval for extracting similar sequences. The proposed algorithm yields more than 80% accuracy on average across different datasets. It has been found to save 36.3% of the total time consumed by the exhaustive search. conLSH reduces the search space to
approximately 42% of the entire dataset, when compared with an exhaustive
search by the aforementioned FOGSAA, Bag of Words method and the standard LSH
implementations. Secondly, the effectiveness of conLSH is demonstrated in
action recognition of the video clips, which yields an average gain of 12.83%
in terms of classification accuracy over state-of-the-art methods using STIP descriptors. Last, and of great significance, this article provides a way of automatically annotating long and composite real-life videos.
The source code of conLSH is made available at
this http URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Commuting graphs on Coxeter groups, Dynkin diagrams and finite subgroups of $SL(2,\mathbb{C})$ | For a group $H$ and a non-empty subset $\Gamma\subseteq H$, the commuting
graph $G=\mathcal{C}(H,\Gamma)$ is the graph with $\Gamma$ as the node set and
where any $x,y \in \Gamma$ are joined by an edge if $x$ and $y$ commute in $H$.
We prove that any simple graph can be obtained as a commuting graph of a
Coxeter group, solving the realizability problem in this setup. In particular
we can recover every Dynkin diagram of ADE type as a commuting graph. Thanks to
the relation between the ADE classification and finite subgroups of
$\SL(2,\C)$, we are able to rephrase results from the {\em McKay
correspondence} in terms of generators of the corresponding Coxeter groups. We
finish the paper studying commuting graphs $\mathcal{C}(H,\Gamma)$ for every
finite subgroup $H\subset SL(2,\mathbb{C})$ for different subsets $\Gamma\subseteq H$, and investigating their metric properties when $\Gamma=H$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exact energy stability of Bénard-Marangoni convection at infinite Prandtl number | Using the energy method we investigate the stability of pure conduction in
Pearson's model for Bénard-Marangoni convection in a layer of fluid at
infinite Prandtl number. Upon extending the space of admissible perturbations
to the conductive state, we find an exact solution to the energy stability
variational problem for a range of thermal boundary conditions describing
perfectly conducting, imperfectly conducting, and insulating boundaries. Our
analysis extends and improves previous results, and shows that with the energy
method global stability can be proven up to the linear instability threshold
only when the top and bottom boundaries of the fluid layer are insulating.
Contrary to the well-known Rayleigh-Bénard convection setup, therefore,
energy stability theory does not exclude the possibility of subcritical
instabilities against finite-amplitude perturbations.
| 0 | 1 | 0 | 0 | 0 | 0 |
A sharpening of a problem on Bernstein polynomials and convex functions | We present an elementary proof of a conjecture proposed by I. Rasa in 2017
which is an inequality involving Bernstein basis polynomials and convex
functions. It was recently affirmed in the positive by A. Komisarski and T. Rajba using stochastic convex orderings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Local Estimate on Convexity Radius and decay of injectivity radius in a Riemannian manifold | In this paper we prove the following pointwise and curvature-free estimates
on convexity radius, injectivity radius and local behavior of geodesics in a
complete Riemannian manifold $M$: 1) the convexity radius of $p$,
$\operatorname{conv}(p)\ge
\min\{\frac{1}{2}\operatorname{inj}(p),\operatorname{foc}(B_{\operatorname{inj}(p)}(p))\}$,
where $\operatorname{inj}(p)$ is the injectivity radius of $p$ and
$\operatorname{foc}(B_r(p))$ is the focal radius of the open ball centered at $p$
with radius $r$; 2) for any two points $p,q$ in $M$, $\operatorname{inj}(q)\ge
\min\{\operatorname{inj}(p), \operatorname{conj}(q)\}-d(p,q),$ where
$\operatorname{conj}(q)$ is the conjugate radius of $q$; 3) for any
$0<r<\min\{\operatorname{inj}(p),\frac{1}{2}\operatorname{conj}(B_{\operatorname{inj}(p)}(p))\}$,
any (not necessarily minimizing) geodesic in $B_r(p)$ has length $\le 2r$. We
also clarify two different concepts on convexity radius and give examples to
illustrate that the one more frequently used in literature is not continuous.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magnetic properties of the spin-1 chain compound NiCl$_3$C$_6$H$_5$CH$_2$CH$_2$NH$_3$ | We report experimental results of the static magnetization, ESR and NMR
spectroscopic measurements of the Ni-hybrid compound
NiCl$_3$C$_6$H$_5$CH$_2$CH$_2$NH$_3$. In this material NiCl$_3$ octahedra are
structurally arranged in chains along the crystallographic $a$-axis. According
to the static susceptibility and ESR data Ni$^{2+}$ spins $S = 1$ are isotropic
and are coupled antiferromagnetically (AFM) along the chain with the exchange
constant $J = 25.5$ K. These are important prerequisites for the realization of
the so-called Haldane spin-1 chain with the spin-singlet ground state and a
quantum spin gap. However, experimental results evidence AFM order at $T_{\rm
N} \approx 10$ K presumably due to small interchain couplings. Interestingly,
frequency-, magnetic field-, and temperature-dependent ESR measurements, as
well as the NMR data, reveal signatures which could indicate an inhomogeneous ground state of co-existing, mesoscopically spatially separated AFM-ordered and spin-singlet regions, similar to the situation observed before in some spin-diluted Haldane magnets.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multiple core hole formation by free-electron laser radiation in molecular nitrogen | We investigate the formation of multiple-core-hole states of molecular
nitrogen interacting with a free-electron laser pulse. We obtain bound and
continuum molecular orbitals in the single-center expansion scheme and use
these orbitals to calculate photo-ionization and Auger decay rates. Using these
rates, we compute the atomic ion yields generated in this interaction. We track
the population of all states throughout this interaction and compute the
proportion of the population which accesses different core-hole states. We also
investigate the pulse parameters that favor the formation of these core-hole
states for 525 eV and 1100 eV photons.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution | Few ideas have enjoyed as large an impact on deep learning as convolution.
For any problem involving pixels or spatial representations, common intuition
holds that convolutional neural networks may be appropriate. In this paper we
show a striking counterexample to this intuition via the seemingly trivial
coordinate transform problem, which simply requires learning a mapping between
coordinates in (x,y) Cartesian space and one-hot pixel space. Although
convolutional networks would seem appropriate for this task, we show that they
fail spectacularly. We demonstrate and carefully analyze the failure first on a
toy problem, at which point a simple fix becomes obvious. We call this solution
CoordConv, which works by giving convolution access to its own input
coordinates through the use of extra coordinate channels. Without sacrificing
the computational and parametric efficiency of ordinary convolution, CoordConv
allows networks to learn either complete translation invariance or varying
degrees of translation dependence, as required by the end task. CoordConv
solves the coordinate transform problem with perfect generalization, 150 times faster and with 10--100 times fewer parameters than convolution. This stark
contrast raises the question: to what extent has this inability of convolution
persisted insidiously inside other tasks, subtly hampering performance from
within? A complete answer to this question will require further investigation,
but we show preliminary evidence that swapping convolution for CoordConv can
improve models on a diverse set of tasks. Using CoordConv in a GAN produced
less mode collapse as the transform between high-level spatial latents and
pixels becomes easier to learn. A Faster R-CNN detection model trained on MNIST
showed 24% better IOU when using CoordConv, and in the RL domain agents playing
Atari games benefit significantly from the use of CoordConv layers.
| 0 | 0 | 0 | 1 | 0 | 0 |
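The mechanism described in the CoordConv abstract, concatenating normalized coordinate channels to the input before an ordinary convolution, can be sketched in a few lines. This is a hedged re-implementation of the idea as described, using assumed PyTorch conventions, not the authors' released code.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Ordinary Conv2d applied after appending two channels that hold the
    (row, column) coordinates of each pixel, normalized to [-1, 1]."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([yy, xx]).expand(b, -1, -1, -1)
        return self.conv(torch.cat([x, coords], dim=1))

layer = CoordConv2d(3, 8, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 8, 32, 32])
```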
Uniformly recurrent subgroups and the ideal structure of reduced crossed products | We study the ideal structure of reduced crossed product of topological
dynamical systems of a countable discrete group. More concretely, for a compact
Hausdorff space $X$ with an action of a countable discrete group $\Gamma$, we
consider the absence of non-zero ideals in the reduced crossed product $C(X) \rtimes_r \Gamma$ that have zero intersection with $C(X)$. We characterize
this condition by a property for amenable subgroups of the stabilizer subgroups
of $X$ in terms of the Chabauty space of $\Gamma$. This generalizes Kennedy's
algebraic characterization of the simplicity for a reduced group
$\mathrm{C}^{*}$-algebra of a countable discrete group.
| 0 | 0 | 1 | 0 | 0 | 0 |
TrajectoryNet: An Embedded GPS Trajectory Representation for Point-based Classification Using Recurrent Neural Networks | Understanding and discovering knowledge from GPS (Global Positioning System)
traces of human activities is an essential topic in mobility-based urban
computing. We propose TrajectoryNet, a neural network architecture for
point-based trajectory classification to infer real world human transportation
modes from GPS traces. To overcome the challenge of capturing the underlying
latent factors in the low-dimensional and heterogeneous feature space imposed
by GPS data, we develop a novel representation that embeds the original feature
space into another space that can be understood as a form of basis expansion.
We also enrich the feature space via segment-based information and use Maxout
activations to improve the predictive power of Recurrent Neural Networks
(RNNs). We achieve over 98% classification accuracy when detecting four types
of transportation modes, outperforming existing models without additional
sensory data or location-based prior knowledge.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modularity Matters: Learning Invariant Relational Reasoning Tasks | We focus on two supervised visual reasoning tasks whose labels encode a
semantic relational rule between two or more objects in an image: the MNIST
Parity task and the colorized Pentomino task. The objects in the images undergo
random translation, scaling, rotation and coloring transformations. Thus these
tasks involve invariant relational reasoning. We report uneven performance of
various deep CNN models on these two tasks. For the MNIST Parity task, we
report that the VGG19 model soundly outperforms a family of ResNet models.
Moreover, the family of ResNet models exhibits a general sensitivity to random
initialization for the MNIST Parity task. For the colorized Pentomino task, now
both the VGG19 and ResNet models exhibit sluggish optimization and very poor
test generalization, hovering around 30% test error. The CNNs we tested all
learn hierarchies of fully distributed features and thus encode the distributed
representation prior. We are motivated by a hypothesis from cognitive
neuroscience which posits that the human visual cortex is modularized, and this
allows the visual cortex to learn higher order invariances. To this end, we
consider a modularized variant of the ResNet model, referred to as a Residual
Mixture Network (ResMixNet) which employs a mixture-of-experts architecture to
interleave distributed representations with more specialized, modular
representations. We show that very shallow ResMixNets are capable of learning
each of the two tasks well, attaining less than 2% and 1% test error on the
MNIST Parity and the colorized Pentomino tasks respectively. Most importantly,
the ResMixNet models are extremely parameter efficient: generalizing better
than various non-modular CNNs that have over 10x the number of parameters.
These experimental results support the hypothesis that modularity is a robust
prior for learning invariant relational reasoning.
| 0 | 0 | 0 | 1 | 1 | 0 |
Proofs of some Propositions of the semi-Intuitionistic Logic with Strong Negation | We offer the proofs that complete our article introducing the propositional
calculus called semi-intuitionistic logic with strong negation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Entanglement in topological systems | These lecture notes on entanglement in topological systems are part of the
48th IFF Spring School 2017 on Topological Matter: Topological Insulators,
Skyrmions and Majoranas at the Forschungszentrum Juelich, Germany. They cover a
short discussion on topologically ordered phases and review the two main tools
available for detecting topological order - the entanglement entropy and the
entanglement spectrum.
| 0 | 1 | 0 | 0 | 0 | 0 |
Equivalence between Differential Inclusions Involving Prox-regular sets and maximal monotone operators | In this paper, we study the existence and the stability in the sense of
Lyapunov of solutions for differential inclusions governed by the normal cone to a prox-regular set and subject to a Lipschitzian perturbation. We prove that such apparently more general nonsmooth dynamics can indeed be remodelled into the classical theory of differential inclusions involving maximal monotone
operators. This result is new in the literature and permits us to make use of
the rich and abundant achievements in this class of monotone operators to
derive the desired existence result and stability analysis, as well as the
continuity and differentiability properties of the solutions. This going back
and forth between these two models of differential inclusions is made possible
thanks to a viability result for maximal monotone operators. As an application,
we study a Luenberger-like observer, which is shown to converge exponentially
to the actual state when the initial value of the state's estimation remains in
a neighborhood of the initial value of the original system.
| 0 | 0 | 1 | 0 | 0 | 0 |
Effect of iron oxide loading on magnetoferritin structure in solution as revealed by SAXS and SANS | The synthetic biological macromolecule magnetoferritin, containing an iron oxide core inside a protein shell (apoferritin), is prepared with different iron contents. Its structure in aqueous solution is analyzed by small-angle
synchrotron X-ray (SAXS) and neutron (SANS) scattering. The loading factor (LF)
defined as the average number of iron atoms per protein is varied up to LF=800.
With an increase of the LF, the scattering curves exhibit a relative increase
in the total scattered intensity, a partial smearing and a shift of the match
point in the SANS contrast variation data. The analysis shows an increase in
the polydispersity of the proteins and a corresponding effective increase in
the relative content of magnetic material against the protein moiety of the
shell with the LF growth. At LFs above ~150, the apoferritin shell undergoes
structural changes, strongly indicating that the shell stability is affected by the presence of iron oxide.
| 0 | 1 | 0 | 0 | 0 | 0 |
Volumetric Super-Resolution of Multispectral Data | Most multispectral remote sensors (e.g. QuickBird, IKONOS, and Landsat 7
ETM+) provide low-spatial high-spectral resolution multispectral (MS) or
high-spatial low-spectral resolution panchromatic (PAN) images, separately. In
order to reconstruct a high-spatial/high-spectral resolution multispectral
image volume, either the information in MS and PAN images is fused (i.e.
pansharpening) or super-resolution reconstruction (SRR) is used with only MS
images captured on different dates. Existing methods do not utilize temporal
information of MS and high spatial resolution of PAN images together to improve
the resolution. In this paper, we propose a multiframe SRR algorithm using
pansharpened MS images, taking advantage of both temporal and spatial
information available in multispectral imagery, in order to exceed spatial
resolution of given PAN images. We first apply pansharpening to a set of
multispectral images and their corresponding PAN images captured on different
dates. Then, we use the pansharpened multispectral images as input to the
proposed wavelet-based multiframe SRR method to yield full volumetric SRR. The
proposed SRR method is obtained by deriving the subband relations between
multitemporal MS volumes. We demonstrate the results on Landsat 7 ETM+ images
comparing our method to conventional techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
Atomic Convolutional Networks for Predicting Protein-Ligand Binding Affinity | Empirical scoring functions based on either molecular force fields or
cheminformatics descriptors are widely used, in conjunction with molecular
docking, during the early stages of drug discovery to predict potency and
binding affinity of a drug-like molecule to a given target. These models
require expert-level knowledge of physical chemistry and biology to be encoded
as hand-tuned parameters or features rather than allowing the underlying model
to select features in a data-driven procedure. Here, we develop a general
3-dimensional spatial convolution operation for learning atomic-level chemical
interactions directly from atomic coordinates and demonstrate its application
to structure-based bioactivity prediction. The atomic convolutional neural
network is trained to predict the experimentally determined binding affinity of
a protein-ligand complex by direct calculation of the energy associated with
the complex, protein, and ligand given the crystal structure of the binding
pose. Non-covalent interactions present in the complex that are absent in the
protein-ligand sub-structures are identified and the model learns the
interaction strength associated with these features. We test our model by
predicting the binding free energy of a subset of protein-ligand complexes
found in the PDBBind dataset and compare with state-of-the-art cheminformatics
and machine learning-based approaches. We find that all methods achieve
experimental accuracy and that atomic convolutional networks either outperform
or perform competitively with the cheminformatics based methods. Unlike all
previous protein-ligand prediction systems, atomic convolutional networks are
end-to-end and fully-differentiable. They represent a new data-driven,
physics-based deep learning model paradigm that offers a strong foundation for
future improvements in structure-based bioactivity prediction.
| 1 | 1 | 0 | 1 | 0 | 0 |
Random characters under the $L$-measure, I : Dirichlet characters | We define the $L$-measure on the set of Dirichlet characters as an analogue
of the Plancherel measure, once considered as a measure on the irreducible
characters of the symmetric group.
We compare the two measures and study the limit in distribution of character
evaluations when the size of the underlying group grows. These evaluations are
proven to converge in law to imaginary exponentials of a Cauchy distribution in
the same way as the rescaled windings of the complex Brownian motion. This
contrasts with the case of the symmetric group where the renormalised
characters converge in law to Gaussians after rescaling (Kerov Central Limit
Theorem).
| 0 | 0 | 1 | 0 | 0 | 0 |
The Effects of Superheating Treatment on Distribution of Eutectic Silicon Particles in A357-Continuous Stainless Steel Composite | In the present study, superheating treatment has been applied on A357
reinforced with 0.5 wt. % (Composite 1) and 1.0 wt.% (Composite 2) continuous
stainless steel composite. In Composite 1, the microstructure displayed poor bonding at the matrix-reinforcement interface. Poor bonding associated with large voids can also be seen in Composite 1. The results also showed that
coarser eutectic silicon (Si) particles were less intensified around the
matrix-reinforcement interface. From energy dispersive spectrometry (EDS)
elemental mapping, it was clearly shown that the distribution of eutectic Si
particles were less concentrated at poor bonding regions associated with large
voids. Meanwhile in Composite 2, the microstructure displayed good bonding
combined with more concentrated finer eutectic Si particles around the
matrix-reinforcement interface. From EDS elemental mapping, it was clearly shown that eutectic Si particles were more concentrated in the good bonding areas. Superheating prior to casting influenced the microstructure and tends to produce finer, rounded and preferentially oriented {\alpha}-Al dendritic structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cycle-of-Learning for Autonomous Systems from Human Interaction | We discuss different types of human-robot interaction paradigms in the
context of training end-to-end reinforcement learning algorithms. We provide a
taxonomy to categorize the types of human interaction and present our
Cycle-of-Learning framework for autonomous systems that combines different
human-interaction modalities with reinforcement learning. Two key concepts
provided by our Cycle-of-Learning framework are how it handles the integration
of the different human-interaction modalities (demonstration, intervention, and
evaluation) and how to define the switching criteria between them.
| 1 | 0 | 0 | 0 | 0 | 0 |
Comparison of methods for early-readmission prediction in a high-dimensional heterogeneous covariates and time-to-event outcome framework | Background: Choosing the most performing method in terms of outcome
prediction or variables selection is a recurring problem in prognosis studies,
leading to many publications on methods comparison. But some aspects have
received little attention. First, most comparison studies treat prediction
performance and variable selection aspects separately. Second, methods are
either compared within a binary outcome setting (based on an arbitrarily chosen
delay) or within a survival setting, but not both. In this paper, we propose a
comparison methodology to weigh up those different settings both in terms of
prediction and variables selection, while incorporating advanced machine
learning strategies. Methods: Using a high-dimensional case study on a
sickle-cell disease (SCD) cohort, we compare 8 statistical methods. In the
binary outcome setting, we consider logistic regression (LR), support vector
machine (SVM), random forest (RF), gradient boosting (GB) and neural network
(NN); while on the survival analysis setting, we consider the Cox Proportional
Hazards (PH), the CURE and the C-mix models. We then compare performances of
all methods both in terms of risk prediction and variable selection, with a
focus on the use of the Elastic-Net regularization technique. Results: Among all assessed statistical methods, the C-mix model yields the best performance in both considered settings, as well as interesting
interpretation aspects. There is some consistency in selected covariates across
methods within a setting, but not much across the two settings. Conclusions: It appears that learning within the survival setting first, and then going back to a binary prediction using the survival estimates, significantly enhances binary predictions.
| 0 | 0 | 0 | 1 | 0 | 0 |
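As a minimal illustration of the Elastic-Net regularization mentioned in the binary-outcome arm of this comparison, the sketch below fits an Elastic-Net penalized logistic regression with scikit-learn on synthetic data. The data, penalty strength and mixing parameter are placeholders; this is not the study's pipeline nor its SCD cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional binary-outcome data (placeholder for the cohort).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 200))
w = np.zeros(200)
w[:5] = 2.0                                         # only 5 informative covariates
y = (X @ w + rng.normal(size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
print("non-zero coefficients:", np.sum(model.coef_ != 0))
```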
How the notion of ACCESS guides the organization of a European research infrastructure: the example of DARIAH | This contribution will show how Access plays a strong role in the creation and structuring of DARIAH, a European Digital Research Infrastructure in Arts and Humanities. To achieve this goal, this contribution will develop the concept of Access through five examples: the interdisciplinarity point of view, managing the contradiction between national and international perspectives, involving different communities (not only researcher stakeholders), managing tools and services, and developing and using new collaboration tools. We would like to demonstrate
that speaking about Access always implies a selection, a choice, even in the
perspective of "Open Access".
| 1 | 0 | 0 | 0 | 0 | 0 |
Soliton-potential interactions for nonlinear Schrödinger equation in $\mathbb{R}^3$ | In this work we mainly consider the dynamics and scattering of a narrow
soliton of NLS equation with a potential in $\mathbb{R}^3$, where the
asymptotic state of the system can be far from the initial state in parameter
space. Specifically, if we let a narrow soliton state with initial velocity
$\upsilon_{0}$ interact with an extra potential $V(x)$, then the velocity
$\upsilon_{+}$ of outgoing solitary wave in infinite time will in general be
very different from $\upsilon_{0}$. In contrast to our present work, previous
works proved that the soliton is asymptotically stable under the assumption
that $\upsilon_{+}$ stays close to $\upsilon_{0}$ in a certain manner.
| 0 | 0 | 1 | 0 | 0 | 0 |
Incommensurately modulated twin structure of nyerereite Na1.64K0.36Ca(CO3)2 | Incommensurately modulated twin structure of nyerereite Na1.64K0.36Ca(CO3)2
has been determined for the first time in the (3+1)D symmetry group Cmcm({\alpha}00)00s with
modulation vector q = 0.383a*. Unit-cell values are a = 5.062(1), b = 8.790(1),
c = 12.744(1) {\AA}. Three orthorhombic components are related by threefold
rotation about [001]. Discontinuous crenel functions are used to describe
occupation modulation of Ca and some CO3 groups. Strong displacive modulation
of the oxygen atoms in vertexes of such CO3 groups is described using
x-harmonics in crenel intervals. The Na, K atoms occupy mixed sites whose
occupation modulation is described in two ways, using either complementary
harmonic functions or crenels. The nyerereite structure has been compared both
with commensurately modulated structure of K-free Na2Ca(CO3)2 and with widely
known incommensurately modulated structure of {\gamma}-Na2CO3.
| 0 | 1 | 0 | 0 | 0 | 0 |
Jensen's force and the statistical mechanics of cortical asynchronous states | The cortex exhibits self-sustained highly-irregular activity even under
resting conditions, whose origin and function need to be fully understood. It
is believed that this can be described as an "asynchronous state" stemming from
the balance between excitation and inhibition, with important consequences for
information-processing, though a competing hypothesis claims it stems from
critical dynamics. By analyzing a parsimonious neural-network model with
excitatory and inhibitory interactions, we elucidate a noise-induced mechanism
called "Jensen's force" responsible for the emergence of a novel phase of
arbitrarily-low but self-sustained activity, which reproduces all the
experimental features of asynchronous states. The simplicity of our framework
allows for a deep understanding of asynchronous states from a broad
statistical-mechanics perspective and of the phase transitions to other
standard phases it exhibits, opening the door to reconciling the asynchronous-state and critical-state hypotheses. We argue that Jensen's forces are measurable
experimentally and might be relevant in contexts beyond neuroscience.
| 0 | 0 | 0 | 0 | 1 | 0 |
Notes on relative normalizations of ruled surfaces in the three-dimensional Euclidean space | This paper deals with relative normalizations of skew ruled surfaces in the
Euclidean space $\mathbb{E}^{3}$. In section 2 we investigate some new formulae
concerning the Pick invariant, the relative curvature, the relative mean
curvature and the curvature of the relative metric of a relatively normalized
ruled surface $\varPhi$ and in section 3 we introduce some special
normalizations of it. All ruled surfaces and their corresponding normalizations
that make $\varPhi$ an improper or a proper relative sphere are determined in
section 4. In the last section we study ruled surfaces, which are
\emph{centrally} normalized, i.e., their relative normals at each point lie on
the corresponding central plane. Especially we study various properties of the
Tchebychev vector field. We conclude the paper by the study of the central
image of $\varPhi$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ferroelectric control of the giant Rashba spin orbit coupling in GeTe(111)/InP(111) superlattice | GeTe has won renewed research interest due to its giant bulk Rashba spin-orbit coupling (SOC), and has become the parent of a new class of multifunctional materials, i.e., ferroelectric Rashba semiconductors. In the present work, we investigate Rashba SOC at the interface of the ferroelectric semiconductor superlattice GeTe(111)/InP(111) using first-principles calculations. The contributions of the interface electric field and the ferroelectric field to Rashba SOC are revealed. A large modulation of the Rashba SOC and a reversal of the spin polarization are obtained by switching the ferroelectric polarization. Our
investigation of the GeTe(111)/InP(111) superlattice is of great importance for the application of ferroelectric Rashba semiconductors in spin field-effect transistors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological phase of the interlayer exchange coupling with application to magnetic switching | We show, theoretically, that the phase of the interlayer exchange coupling
(IEC) undergoes a topological change of approximately $2\pi$ as the chemical
potential of the ferromagnetic (FM) lead moves across a hybridization gap (HG).
The effect is largely independent of the detailed parameters of the system, in
particular the width of the gap. The implication is that for a narrow gap, a
small perturbation in the chemical potential of the lead can give a sign
reversal of the exchange coupling. This offers the possibility of controlling
magnetization switching in spintronic devices such as MRAM, with little power
consumption. Furthermore, we believe that this effect has already been
indirectly observed, in existing measurements of the IEC as a function of
temperature and of doping of the leads.
| 0 | 1 | 0 | 0 | 0 | 0 |
New constraints on time-dependent variations of fundamental constants using Planck data | Observations of the CMB today allow us to answer detailed questions about the
properties of our Universe, targeting both standard and non-standard physics.
In this paper, we study the effects of varying fundamental constants (i.e., the
fine-structure constant, $\alpha_{\rm EM}$, and electron rest mass, $m_{\rm
e}$) around last scattering using the recombination codes CosmoRec and
Recfast++. We approach the problem in a pedagogical manner, illustrating the
importance of various effects on the free electron fraction, Thomson visibility
function and CMB power spectra, highlighting various degeneracies. We
demonstrate that the simpler Recfast++ treatment (based on a three-level atom
approach) can be used to accurately represent the full computation of CosmoRec.
We also include explicit time-dependent variations using a phenomenological
power-law description. We reproduce previous Planck 2013 results in our
analysis. Assuming constant variations relative to the standard values, we find
the improved constraints $\alpha_{\rm EM}/\alpha_{\rm EM,0}=0.9993\pm 0.0025$
(CMB only) and $m_{\rm e}/m_{\rm e,0}= 1.0039 \pm 0.0074$ (including BAO) using
Planck 2015 data. For a redshift-dependent variation, $\alpha_{\rm
EM}(z)=\alpha_{\rm EM}(z_0)\,[(1+z)/1100]^p$ with $\alpha_{\rm
EM}(z_0)\equiv\alpha_{\rm EM,0}$ at $z_0=1100$, we obtain $p=0.0008\pm 0.0025$.
Allowing simultaneous variations of $\alpha_{\rm EM}(z_0)$ and $p$ yields
$\alpha_{\rm EM}(z_0)/\alpha_{\rm EM,0} = 0.9998\pm 0.0036$ and $p = 0.0006\pm
0.0036$. We also discuss combined limits on $\alpha_{\rm EM}$ and $m_{\rm e}$.
Our analysis shows that existing data is sensitive not only to the values of the fundamental constants around recombination but also to their first time derivatives.
This suggests that a wider class of varying fundamental constant models can be
probed using the CMB.
| 0 | 1 | 0 | 0 | 0 | 0 |
ForestClaw: A parallel algorithm for patch-based adaptive mesh refinement on a forest of quadtrees | We describe a parallel, adaptive, multi-block algorithm for explicit
integration of time dependent partial differential equations on two-dimensional
Cartesian grids. The grid layout we consider consists of a nested hierarchy of
fixed size, non-overlapping, logically Cartesian grids stored as leaves in a
quadtree. Dynamic grid refinement and parallel partitioning of the grids is
done through the use of the highly scalable quadtree/octree library p4est.
Because our concept is multi-block, we are able to easily solve on a variety of
geometries including the cubed sphere. In this paper, we pay special attention
to providing details of the parallel ghost-filling algorithm needed to ensure
that both corner and edge ghost regions around each grid hold valid values.
We have implemented this algorithm in the ForestClaw code using single-grid
solvers from ClawPack, a software package for solving hyperbolic PDEs using
finite volume methods. We show weak and strong scalability results for scalar
advection problems on two-dimensional manifold domains on 1 to 64Ki MPI
processes, demonstrating negligible regridding overhead.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distributed Triangle Counting in the Graphulo Matrix Math Library | Triangle counting is a key algorithm for large graph analysis. The Graphulo
library provides a framework for implementing graph algorithms on the Apache
Accumulo distributed database. In this work we adapt two algorithms for
counting triangles, one that uses the adjacency matrix and another that also
uses the incidence matrix, to the Graphulo library for server-side processing
inside Accumulo. Cloud-based experiments show a similar performance profile for
these different approaches on the family of power-law Graph500 graphs, for which data skew increasingly becomes a bottleneck. These results motivate the design of
skew-aware hybrid algorithms that we propose for future work.
| 1 | 0 | 0 | 0 | 0 | 0 |
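The adjacency-matrix approach to triangle counting mentioned above rests on the identity that the number of triangles equals trace(A^3)/6 for an undirected simple graph with adjacency matrix A. The sketch below is a single-machine NumPy illustration of that identity, not the distributed Graphulo/Accumulo implementation.

```python
import numpy as np

def count_triangles(A):
    """Number of triangles in an undirected simple graph with 0/1 adjacency
    matrix A: each triangle is counted 6 times in trace(A^3)."""
    return int(np.trace(A @ A @ A)) // 6

# Complete graph K4 has C(4,3) = 4 triangles.
A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
print(count_triangles(A))  # 4
```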
Ensemble Classifier for Eye State Classification using EEG Signals | The growing importance and utilization of measuring brain waves (e.g. EEG
signals of eye state) in brain-computer interface (BCI) applications
highlighted the need for suitable classification methods. In this paper, a comparison between three well-known classification methods, i.e. support vector machine (SVM), hidden Markov model (HMM), and radial basis function (RBF), for EEG-based eye state classification was carried out. Furthermore, a suggested method based on an ensemble model was tested. The suggested ensemble method is based on a voting algorithm with two base classifiers: the random forest (RF) and KStar classification methods. The performance was tested using three
measurement parameters: accuracy, mean absolute error (MAE), and confusion
matrix. Results showed that the proposed method outperforms the other tested
methods. For instance, the suggested method's performance was 97.27% accuracy
and 0.13 MAE.
| 1 | 0 | 0 | 0 | 0 | 0 |
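The voting-ensemble idea described above can be sketched with scikit-learn's VotingClassifier. KStar (a Weka instance-based learner) has no direct scikit-learn counterpart, so a k-nearest-neighbours classifier stands in for it here, and the data is synthetic; this is an illustrative stand-in, not the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for EEG eye-state features (14 channels, binary label).
X, y = make_classification(n_samples=1000, n_features=14, n_informative=8,
                           random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],  # stand-in for KStar
    voting="soft")

scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```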
Nutritionally recommended food for semi- to strict vegetarian diets based on large-scale nutrient composition data | Diet design for vegetarian health is challenging due to the limited food
repertoire of vegetarians. This challenge can be partially overcome by
quantitative, data-driven approaches that utilise massive nutritional
information collected for many different foods. Based on large-scale data of
foods' nutrient compositions, the recent concept of nutritional fitness helps
quantify a nutrient balance within each food with regard to satisfying daily
nutritional requirements. Nutritional fitness offers prioritisation of
recommended foods using the foods' occurrence in nutritionally adequate food
combinations. Here, we systematically identify nutritionally recommendable
foods for semi- to strict vegetarian diets through the computation of
nutritional fitness. Along with commonly recommendable foods across different
diets, our analysis reveals favourable foods specific to each diet, such as
immature lima beans for a vegan diet as an amino acid and choline source, and
mushrooms for ovo-lacto vegetarian and vegan diets as a vitamin D source.
Furthermore, we find that selenium and other essential micronutrients can be
subject to deficiency in plant-based diets, and suggest nutritionally-desirable
dietary patterns. We extend our analysis to two hypothetical scenarios of
highly personalised, plant-based methionine-restricted diets. Our
nutrient-profiling approach may provide a useful guide for designing different
types of personalised vegetarian diets.
| 1 | 0 | 0 | 0 | 1 | 0 |
Does data interpolation contradict statistical optimality? | We show that learning methods interpolating the training data can achieve
optimal rates for the problems of nonparametric regression and prediction with
square loss.
| 0 | 0 | 0 | 1 | 0 | 0 |
Spin-Frustrated Pyrochlore Chains in the Volcanic Mineral Kamchatkite (KCu3OCl(SO4)2) | The search for new frustrated magnetic systems is of significant importance for condensed matter physics. A platform for geometric frustration of magnetic systems can be provided by copper oxocentred tetrahedra (OCu4), which form the basis of the crystal structures of copper minerals from the Tolbachik volcanoes in Kamchatka. The present work is devoted to a new frustrated antiferromagnet, kamchatkite (KCu3OCl(SO4)2). The calculation of the sign
and strength of magnetic couplings in KCu3OCl(SO4)2 has been performed on the
basis of structural data by the phenomenological crystal chemistry method, taking into account corrections for the Jahn-Teller orbital degeneracy of Cu2.
It has been established that kamchatkite (KCu3OCl(SO4)2) contains AFM
spin-frustrated chains of the pyrochlore type composed of corner-sharing Cu4 tetrahedra. Strong AFM intrachain and interchain couplings compete with each
other. Frustration of magnetic couplings in tetrahedral chains is combined with
the presence of electric polarization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Detail-revealing Deep Video Super-resolution | Previous CNN-based video super-resolution approaches need to align multiple
frames to the reference. In this paper, we show that proper frame alignment and
motion compensation is crucial for achieving high quality results. We
accordingly propose a `sub-pixel motion compensation' (SPMC) layer in a CNN
framework. Analysis and experiments show the suitability of this layer in video
SR. The final end-to-end, scalable CNN framework effectively incorporates the
SPMC layer and fuses multiple frames to reveal image details. Our
implementation can generate visually and quantitatively high-quality results,
superior to the current state of the art, without the need for parameter tuning.
| 1 | 0 | 0 | 0 | 0 | 0 |
MON: Mission-optimized Overlay Networks | Large organizations often have users in multiple sites which are connected
over the Internet. Since resources are limited, communication between these
sites needs to be carefully orchestrated for the most benefit to the
organization. We present a Mission-optimized Overlay Network (MON), a hybrid
overlay network architecture for maximizing utility to the organization. We
combine an offline and an online system to solve non-concave utility
maximization problems. The offline tier, the Predictive Flow Optimizer (PFO),
creates plans for routing traffic using a model of network conditions. The
online tier, MONtra, is aware of the precise local network conditions and is
able to react quickly to problems within the network. Either tier alone is
insufficient. The PFO may take too long to react to network changes. MONtra
only has local information and cannot optimize non-concave mission utilities.
However, by combining the two systems, MON is robust and achieves near-optimal
utility under a wide range of network conditions. While best-effort overlay
networks are well studied, our work is the first to design overlays that are
optimized for mission utility.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generalized Results on Monoids as Memory | We show that some results from the theory of group automata and monoid
automata still hold for more general classes of monoids and models. Extending
previous work for finite automata over commutative groups, we demonstrate a
context-free language that cannot be recognized by any rational monoid
automaton over a finitely generated permutable monoid. We show that the class
of languages recognized by rational monoid automata over finitely generated
completely simple or completely 0-simple permutable monoids is a semi-linear
full trio. Furthermore, we investigate valence pushdown automata, and prove
that they are only as powerful as (finite) valence automata. We observe that
certain results proven for monoid automata can be easily lifted to the case of
context-free valence grammars.
| 1 | 0 | 0 | 0 | 0 | 0 |
Radio Resource Allocation for Multicarrier-Low Density Spreading Multiple Access | Multicarrier-low density spreading multiple access (MC-LDSMA) is a promising
multiple access technique that enables near optimum multiuser detection. In
MC-LDSMA, each user's symbol is spread over a small set of subcarriers, and each
subcarrier is shared by multiple users. The unique structure of MC-LDSMA makes
the radio resource allocation more challenging compared to some well-known
multiple access techniques. In this paper, we study the radio resource
allocation for single-cell MC-LDSMA system. Firstly, we consider the
single-user case, and derive the optimal power allocation and subcarriers
partitioning schemes. Then, by capitalizing on the optimal power allocation of
the Gaussian multiple access channel, we provide an optimal solution for
MC-LDSMA that maximizes the users' weighted sum-rate under relaxed constraints.
Due to the prohibitive complexity of the optimal solution, suboptimal
algorithms are proposed based on the guidelines inferred by the optimal
solution. The performance of the proposed algorithms and the effect of
subcarrier loading and spreading are evaluated through Monte Carlo simulations.
Numerical results show that the proposed algorithms significantly outperform
conventional static resource allocation, and MC-LDSMA can improve the system
performance in terms of spectral efficiency and fairness in comparison with
OFDMA.
| 1 | 0 | 0 | 0 | 0 | 0 |
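For the single-user power allocation step mentioned above, classical water-filling over the user's subcarriers is the textbook answer; whether the paper's derivation coincides with it is not claimed here. The bisection-based sketch below allocates a power budget across subcarrier gains g_k so that p_k = max(0, mu - 1/g_k).

```python
import numpy as np

def water_filling(gains, total_power, iters=100):
    """Allocate total_power across channels with gains g_k to maximize
    sum log2(1 + p_k * g_k): p_k = max(0, mu - 1/g_k), mu found by bisection."""
    lo, hi = 0.0, total_power + 1.0 / np.min(gains)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

gains = np.array([2.0, 1.0, 0.25, 0.1])       # example subcarrier gains
p = water_filling(gains, total_power=4.0)
print("power allocation:", np.round(p, 3), "sum:", round(p.sum(), 3))
```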
Estimating the Spectral Density of Large Implicit Matrices | Many important problems are characterized by the eigenvalues of a large
matrix. For example, the difficulty of many optimization problems, such as
those arising from the fitting of large models in statistics and machine
learning, can be investigated via the spectrum of the Hessian of the empirical
loss function. Network data can be understood via the eigenstructure of a graph
Laplacian matrix using spectral graph theory. Quantum simulations and other
many-body problems are often characterized via the eigenvalues of the solution
space, as are various dynamic systems. However, naive eigenvalue estimation is
computationally expensive even when the matrix can be represented; in many of
these situations the matrix is so large as to only be available implicitly via
products with vectors. Even worse, one may only have noisy estimates of such
matrix vector products. In this work, we combine several different techniques
for randomized estimation and show that it is possible to construct unbiased
estimators to answer a broad class of questions about the spectra of such
implicit matrices, even in the presence of noise. We validate these methods on
large-scale problems in which graph theory and random matrix theory provide
ground truth.
| 0 | 0 | 0 | 1 | 0 | 0 |
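A simple instance of the randomized, matrix-free estimation described above is Hutchinson's estimator, which estimates spectral moments tr(A^k)/n using only matrix-vector products with random probe vectors. The sketch below is a generic illustration of that idea, not the authors' estimator, and uses an explicit matrix only to define the implicit matvec.

```python
import numpy as np

def spectral_moments(matvec, n, order=4, n_probes=50, seed=0):
    """Estimate m_k = tr(A^k)/n for k = 1..order using only products A @ v
    (Hutchinson's estimator with Rademacher probe vectors)."""
    rng = np.random.default_rng(seed)
    moments = np.zeros(order)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        w = v.copy()
        for k in range(order):
            w = matvec(w)            # w = A^(k+1) v
            moments[k] += v @ w
    return moments / (n_probes * n)

# "Implicit" symmetric matrix available only through its matvec.
rng = np.random.default_rng(1)
B = rng.normal(size=(200, 200))
A = (B + B.T) / 2
est = spectral_moments(lambda x: A @ x, n=200)
exact = [np.trace(np.linalg.matrix_power(A, k)) / 200 for k in range(1, 5)]
print("estimated:", np.round(est, 2))
print("exact:    ", np.round(exact, 2))
```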