title (string, 7–239 chars) | abstract (string, 7–2.76k chars) | cs (int64: 0/1) | phy (int64: 0/1) | math (int64: 0/1) | stat (int64: 0/1) | quantitative biology (int64: 0/1) | quantitative finance (int64: 0/1) |
---|---|---|---|---|---|---|---|
Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials | Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from
the parietal and occipital regions of the brain that are evoked from flickering
visual stimuli. SSVEPs are robust signals measurable in the
electroencephalogram (EEG) and are commonly used in brain-computer interfaces
(BCIs). However, methods for high-accuracy decoding of SSVEPs usually require
hand-crafted approaches that leverage domain-specific knowledge of the stimulus
signals, such as specific temporal frequencies in the visual stimuli and their
relative spatial arrangement. When this knowledge is unavailable, such as when
SSVEP signals are acquired asynchronously, such approaches tend to fail. In
this paper, we show how a compact convolutional neural network (Compact-CNN),
which only requires raw EEG signals for automatic feature extraction, can be
used to decode signals from a 12-class SSVEP dataset without the need for any
domain-specific knowledge or calibration data. We report an across-subject mean
accuracy of approximately 80% (chance being 8.3%) and show that this is
substantially better than current state-of-the-art hand-crafted approaches
using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, we
analyze our Compact-CNN to examine the underlying feature representation,
discovering that the deep learner extracts additional phase and amplitude
related features associated with the structure of the dataset. We discuss how
our Compact-CNN shows promise for BCI applications that allow users to freely
gaze/attend to any stimulus at any time (e.g., asynchronous BCI) as well as
provides a method for analyzing SSVEP signals in a way that might augment our
understanding about the basic processing in the visual cortex.
| 0 | 0 | 0 | 1 | 1 | 0 |
Long-range proximity effect in Nb-based heterostructures induced by a magnetically inhomogeneous permalloy layer | Odd-frequency triplet Cooper pairs are believed to be the carriers of
long-range superconducting correlations in ferromagnets. Such triplet pairs are
generated by inhomogeneous magnetism at the interface between a superconductor
(S) and a ferromagnet (F). So far, reproducible long-range effects were
reported only in complex layered structures designed to provide the magnetic
inhomogeneity. Here we show that spin triplet pair formation can be found in
simple unstructured Nb/Permalloy (Py = Ni_0.8Fe_0.2)/Nb trilayers and Nb/Py
bilayers, but only when the thickness of the ferromagnetic layer ranges between
140 and 250 nm. The effect is related to the emergence of an intrinsically
inhomogeneous magnetic state, which is a precursor of the well-known stripe
regime in Py that in our samples sets in at thicknesses larger than 300 nm.
| 0 | 1 | 0 | 0 | 0 | 0 |
ASK/PSK-correspondence and the r-map | We formulate a correspondence between affine and projective special Kähler
manifolds of the same dimension. As an application, we show that, under this
correspondence, the affine special Kähler manifolds in the image of the rigid
r-map are mapped to one-parameter deformations of projective special Kähler
manifolds in the image of the supergravity r-map. The above one-parameter
deformations are interpreted as perturbative $\alpha'$-corrections in heterotic
and type-II string compactifications with $N=2$ supersymmetry. Also affine
special Kähler manifolds with quadratic prepotential are mapped to
one-parameter families of projective special Kähler manifolds with quadratic
prepotential. We show that the completeness of the deformed supergravity r-map
metric depends solely on the (well-understood) completeness of the undeformed
metric and the sign of the deformation parameter.
| 0 | 0 | 1 | 0 | 0 | 0 |
Trust Region Value Optimization using Kalman Filtering | Policy evaluation is a key process in reinforcement learning. It assesses a
given policy using estimation of the corresponding value function. When using a
parameterized function to approximate the value, it is common to optimize the
set of parameters by minimizing the sum of squared Bellman Temporal Difference
errors. However, this approach ignores certain distributional properties of
both the errors and value parameters. Taking these distributions into account
in the optimization process can provide useful information on the amount of
confidence in value estimation. In this work we propose to optimize the value
by minimizing a regularized objective function which forms a trust region over
its parameters. We present a novel optimization method, the Kalman Optimization
for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA
minimizes the regularized objective function by adopting a Bayesian perspective
over both the value parameters and noisy observed returns. This distributional
property provides information on parameter uncertainty in addition to value
estimates. We provide theoretical results of our approach and analyze the
performance of our proposed optimizer on domains with large state and action
spaces.
| 1 | 0 | 0 | 1 | 0 | 0 |
Simplified Long Short-term Memory Recurrent Neural Networks: part I | We present five variants of the standard Long Short-term Memory (LSTM)
recurrent neural networks by uniformly reducing blocks of adaptive parameters
in the gating mechanisms. For simplicity, we refer to these models as LSTM1,
LSTM2, LSTM3, LSTM4, and LSTM5, respectively. Such parameter-reduced variants
enable speeding up data training computations and would be more suitable for
implementations onto constrained embedded platforms. We comparatively evaluate
and verify our five variant models on the classical MNIST dataset and
demonstrate that these variant models are comparable to a standard
implementation of the LSTM model while using fewer parameters.
Moreover, we observe that in some cases the standard LSTM's accuracy
performance will drop after a number of epochs when using the ReLU
nonlinearity, whereas LSTM3, LSTM4 and LSTM5 retain their
performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Contracts as specifications for dynamical systems in driving variable form | This paper introduces assume/guarantee contracts on continuous-time control
systems, hereby extending contract theories for discrete systems to certain new
model classes and specifications. Contracts are regarded as formal
characterizations of control specifications, providing an alternative to
specifications in terms of dissipativity properties or set-invariance. The
framework has the potential to capture a richer class of specifications more
suitable for complex engineering systems. The proposed contracts are supported
by results that enable the verification of contract implementation and the
comparison of contracts. These results are illustrated by an example of a
vehicle following system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Visualizing the Phase-Space Dynamics of an External Cavity Semiconductor Laser | We map the phase-space trajectories of an external-cavity semiconductor laser
using phase portraits. This is both a visualization tool and a
thoroughly quantitative approach enabling unprecedented insight into the
dynamical regimes, from continuous-wave through coherence collapse as feedback
is increased. Namely, the phase portraits in the intensity versus laser-diode
terminal-voltage (serving as a surrogate for inversion) plane are mapped out.
We observe a route to chaos interrupted by two types of limit cycles, a
subharmonic regime and period-doubled dynamics at the edge of chaos. The
transitions of the dynamics are analyzed utilizing bifurcation diagrams for both
the optical intensity and the laser-diode terminal voltage. These observations
provide visual insight into the dynamics in these systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
On decision regions of narrow deep neural networks | We show that for neural network functions that have width less or equal to
the input dimension all connected components of decision regions are unbounded.
The result holds for continuous and strictly monotonic activation functions as
well as for ReLU activation. This complements recent results on approximation
capabilities of [Hanin 2017 Approximating] and connectivity of decision regions
of [Nguyen 2018 Neural] for such narrow neural networks. Further, we give an
example that negatively answers the question posed in [Nguyen 2018 Neural]
whether one of their main results still holds for ReLU activation. Our results
are illustrated by means of numerical experiments.
| 0 | 0 | 0 | 1 | 0 | 0 |
An adelic arithmeticity theorem for lattices in products | We prove that, under mild assumptions, a lattice in a product of semi-simple
Lie group and a totally disconnected locally compact group is, in a certain
sense, arithmetic. We do not assume the lattice to be finitely generated or the
ambient group to be compactly generated.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quadratic automaton algebras and intermediate growth | We present an example of a quadratic algebra given by three generators and
three relations, which is automaton (the set of normal words forms a regular
language) and such that its ideal of relations does not possess a finite
Gröbner basis with respect to any choice of generators and any choice of a
well-ordering of monomials compatible with multiplication. This answers a
question of Ufnarovski.
Another result is a simple example (4 generators and 7 relations) of a
quadratic algebra of intermediate growth.
| 0 | 0 | 1 | 0 | 0 | 0 |
Homotopy types of gauge groups related to $S^3$-bundles over $S^4$ | Let $M_{l,m}$ be the total space of the $S^3$-bundle over $S^4$ classified by
the element $l\sigma+m\rho\in{\pi_4(SO(4))}$, $l,m\in\mathbb Z$. In this paper
we study the homotopy theory of gauge groups of principal $G$-bundles over
manifolds $M_{l,m}$ when $G$ is a simply connected simple compact Lie group
such that $\pi_6(G)=0$. That is, $G$ is one of the following groups: $SU(n)$
$(n\geq4)$, $Sp(n)$ $(n\geq2)$, $Spin(n)$ $(n\geq5)$, $F_4$, $E_6$, $E_7$,
$E_8$. If the integral homology of $M_{l,m}$ is torsion-free, we describe the
homotopy type of the gauge groups over $M_{l,m}$ as products of recognisable
spaces. For any manifold $M_{l,m}$ with non-torsion-free homology, we give a
$p$-local homotopy decomposition, for a prime $p\geq 5$, of the loop space of
the gauge groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optical quality assurance of GEM foils | An analysis software was developed for the high aspect ratio optical scanning
system in the Detec- tor Laboratory of the University of Helsinki and the
Helsinki Institute of Physics. The system is used e.g. in the quality assurance
of the GEM-TPC detectors being developed for the beam diagnostics system of the
SuperFRS at the future FAIR facility. The software was tested by analyzing five
CERN standard GEM foils scanned with the optical scanning system. The
measurement uncertainty of the diameter of the GEM holes and the pitch of the
hole pattern was found to be 0.5 {\mu}m and 0.3 {\mu}m, respectively. The
software design and the performance are discussed. The correlation between the
GEM hole size distribution and the corresponding gain variation was studied by
comparing them against a detailed gain mapping of a foil and a set of six lower
precision control measurements. It can be seen that a qualitative estimation of
the behavior of the local variation in gain across the GEM foil can be made
based on the measured sizes of the outer and inner holes.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Global Continuity of the Roots of Families of Monic Polynomials (in Russian) | We raise a question on the existence of continuous roots of families of monic
polynomials (by the root of a family of polynomials we mean a function of the
coefficients of polynomials of a given family that maps each tuple of
coefficients to a root of the polynomial with these coefficients). We prove
that the family of monic second-degree polynomials with complex coefficients
and the families of monic fourth-degree and fifth-degree polynomials with real
coefficients have no continuous root. We also prove that the family of monic
second-degree polynomials with real coefficients has continuous roots and we
describe the set of all such roots.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Number of Rich Words | Any finite word $w$ of length $n$ contains at most $n+1$ distinct palindromic
factors. If the bound $n+1$ is reached, the word $w$ is called rich. The number
of rich words of length $n$ over an alphabet of cardinality $q$ is denoted
$R_n(q)$. For a binary alphabet, Rubinchik and Shur deduced that ${R_n(2)}\leq c
1.605^n $ for some constant $c$. We prove that $\lim\limits_{n\rightarrow
\infty }\sqrt[n]{R_n(q)}=1$ for any $q$, i.e. $R_n(q)$ has a subexponential
growth on any alphabet.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Geometric Analysis of Power System Loadability Regions | Understanding the feasible power flow region is of central importance to
power system analysis. In this paper, we propose a geometric view of the power
system loadability problem. By using rectangular coordinates for complex
voltages, we provide an integrated geometric understanding of active and
reactive power flow equations on loadability boundaries. Based on such an
understanding, we develop a linear programming framework to 1) verify if an
operating point is on the loadability boundary, 2) compute the margin of an
operating point to the loadability boundary, and 3) calculate a loadability
boundary point of any direction. The proposed method is computationally more
efficient than existing methods since it does not require solving nonlinear
optimization problems or calculating the eigenvalues of the power flow
Jacobian. Standard IEEE test cases demonstrate the capability of the new method
compared to the current state-of-the-art methods.
| 1 | 0 | 1 | 0 | 0 | 0 |
Model Selection Confidence Sets by Likelihood Ratio Testing | The traditional activity of model selection aims at discovering a single
model superior to other candidate models. In the presence of pronounced noise,
however, multiple models are often found to explain the same data equally well.
To resolve this model selection ambiguity, we introduce the general approach of
model selection confidence sets (MSCSs) based on likelihood ratio testing. An
MSCS is defined as a list of models statistically indistinguishable from the
true model at a user-specified level of confidence, which extends the familiar
notion of confidence intervals to the model-selection framework. Our approach
guarantees asymptotically correct coverage probability of the true model when
both sample size and model dimension increase. We derive conditions under which
the MSCS contains all the relevant information about the true model structure.
In addition, we propose natural statistics based on the MSCS to measure
importance of variables in a principled way that accounts for the overall model
uncertainty. When the space of feasible models is large, MSCS is implemented by
an adaptive stochastic search algorithm which samples MSCS models with high
probability. The MSCS methodology is illustrated through numerical experiments
on synthetic data and real data examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
Analysis of Dirichlet forms on graphs | In this thesis, we study connections between metric and combinatorial graphs
from a Dirichlet space point of view.
| 0 | 0 | 1 | 0 | 0 | 0 |
Designing and building the mlpack open-source machine learning library | mlpack is an open-source C++ machine learning library with an emphasis on
speed and flexibility. Since its original inception in 2007, it has grown to be
a large project implementing a wide variety of machine learning algorithms,
from standard techniques such as decision trees and logistic regression to
modern techniques such as deep neural networks as well as other
recently-published cutting-edge techniques not found in any other library.
mlpack is quite fast, with benchmarks showing mlpack outperforming other
libraries' implementations of the same methods. mlpack has an active community,
with contributors from around the world---including some from PUST. This short
paper describes the goals and design of mlpack, discusses how the open-source
community functions, and shows an example usage of mlpack for a simple data
science problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Vietoris-Smale mapping theorem for the homotopy of hyperdefinable sets | Results of Smale (1957) and Dugundji (1969) allow to compare the homotopy
groups of two topological spaces $X$ and $Y$ whenever a map $f:X\to Y$ with
strong connectivity conditions on the fibers is given. We apply similar
techniques in o-minimal expansions of fields to compare the o-minimal homotopy
of a definable set $X$ with the homotopy of some of its bounded hyperdefinable
quotients $X/E$. Under suitable assumptions, we show that $\pi_{n}(X)^{\rm
def}\cong\pi_{n}(X/E)$ and $\dim(X)=\dim_{\mathbb R}(X/E)$. As a special case,
given a definably compact group, we obtain a new proof of Pillay's group
conjecture "$\dim(G)=\dim_{\mathbb R}(G/G^{00}$)" largely independent of the
group structure of $G$. We also obtain different proofs of various comparison
results between classical and o-minimal homotopy.
| 0 | 0 | 1 | 0 | 0 | 0 |
Characterization of multivariate Bernoulli distributions with given margins | We express each Fréchet class of multivariate Bernoulli distributions with
given margins as the convex hull of a set of densities, which belong to the
same Fréchet class. This characterisation allows us to establish whether a
given correlation matrix is compatible with the assigned margins and, if it is,
to easily construct one of the corresponding joint densities. We reduce the
problem of finding a density belonging to a
Fréchet class and with given correlation matrix to the solution of a linear
system of equations. Our methodology also provides the bounds that each
correlation must satisfy to be compatible with the assigned margins. An
algorithm and its use in some examples is shown.
| 0 | 0 | 1 | 1 | 0 | 0 |
Note on character varieties and cluster algebras | We use Bonahon-Wong's trace map to study character varieties of the
once-punctured torus and of the 4-punctured sphere. We clarify a relationship
with cluster algebra associated with ideal triangulations of surfaces, and we
show that the Goldman Poisson algebra of loops on surfaces is recovered from
the Poisson structure of cluster algebra. It is also shown that cluster
mutations give the automorphism of the character varieties. Motivated by a work
of Chekhov-Mazzocco-Rubtsov, we revisit confluences of punctures on the sphere from
cluster algebraic viewpoint, and we obtain associated affine cubic surfaces
constructed by van der Put-Saito based on the Riemann-Hilbert correspondence.
Further studied are quantizations of character varieties by use of quantum
cluster algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
Privacy-Preserving Multi-Period Demand Response: A Game Theoretic Approach | We study a multi-period demand response problem in the smart grid with
multiple companies and their consumers. We model the interactions by a
Stackelberg game, where companies are the leaders and consumers are the
followers. It is shown that this game has a unique equilibrium at which the
companies set prices to maximize their revenues while the consumers respond
accordingly to maximize their utilities subject to their local constraints.
Billing minimization is achieved as an outcome of our method. Closed-form
expressions are provided for the strategies of all players. Based on these
solutions, a power allocation game has been formulated, which is shown to
admit a unique pure-strategy Nash equilibrium, for which closed-form
expressions are provided. For privacy, we provide a distributed algorithm for
the computation of all strategies. We study the asymptotic behavior of
equilibrium strategies when the numbers of periods and consumers grow. We find
an appropriate company-to-user ratio for the large population regime.
Furthermore, it is shown, both analytically and numerically, that the
multi-period scheme, compared with the single-period one, provides more
incentives for energy consumers to participate in demand response. We have also
carried out case studies on real life data to demonstrate the benefits of our
approach, including billing savings of up to 30\%.
| 1 | 0 | 0 | 0 | 0 | 0 |
An ALMA survey of submillimetre galaxies in the COSMOS field: The extent of the radio-emitting region revealed by 3 GHz imaging with the Very Large Array | We determine the radio size distribution of a large sample of 152 SMGs in
COSMOS that were detected with ALMA at 1.3 mm. For this purpose, we used the
observations taken by the VLA-COSMOS 3 GHz Large Project. One hundred and
fifteen of the 152 target SMGs were found to have a 3 GHz counterpart. The
median value of the major axis FWHM at 3 GHz is derived to be $4.6\pm0.4$ kpc.
The radio sizes show no evolutionary trend with redshift, or difference between
different galaxy morphologies. We also derived the spectral indices between 1.4
and 3 GHz, and 3 GHz brightness temperatures for the sources, and the median
values were found to be $\alpha=-0.67$ and $T_{\rm B}=12.6\pm2$ K. Three of the
target SMGs, which are also detected with the VLBA, show clearly higher
brightness temperatures than the typical values. Although the observed radio
emission appears to be predominantly powered by star formation and supernova
activity, our results provide a strong indication of the presence of an AGN in
the VLBA and X-ray-detected SMG AzTEC/C61. The median radio-emitting size we
have derived is 1.5-3 times larger than the typical FIR dust-emitting sizes of
SMGs, but similar to that of the SMGs' molecular gas component traced through
mid-$J$ line emission of CO. The physical conditions of SMGs probably render
the diffusion of cosmic-ray electrons inefficient, and hence an unlikely
process to lead to the observed extended radio sizes. Instead, our results
point towards a scenario where SMGs are driven by galaxy interactions and
mergers. Besides triggering vigorous starbursts, galaxy collisions can also
pull out the magnetised fluids from the interacting disks, and give rise to a
taffy-like synchrotron-emitting bridge. This provides an explanation for the
spatially extended radio emission of SMGs, and can also cause a deviation from
the well-known IR-radio correlation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Negative differential resistance and magnetoresistance in zigzag borophene nanoribbons | We investigate the transport properties of pristine zigzag-edged borophene
nanoribbons (ZBNRs) of different widths, using first-principles
calculations. We choose ZBNRs of widths 5 and 6 as representative odd and even widths.
Differences in the quantum transport properties are found: even-N
ZBNRs and odd-N ZBNRs have different current-voltage relationships. Moreover, the
negative differential resistance (NDR) can be observed within certain bias
range in 5-ZBNR, while 6-ZBNR behaves as metal whose current rises with the
increase of the voltage. A spin filter effect of 36% is revealed when the
two electrodes have opposite magnetization directions. Furthermore, the
magnetoresistance effect appears in even-N ZBNRs, and the maximum value
can reach 70%.
| 0 | 1 | 0 | 0 | 0 | 0 |
Incremental Eigenpair Computation for Graph Laplacian Matrices: Theory and Applications | The smallest eigenvalues and the associated eigenvectors (i.e., eigenpairs)
of a graph Laplacian matrix have been widely used in spectral clustering and
community detection. However, in real-life applications the number of clusters
or communities (say, $K$) is generally unknown a-priori. Consequently, the
majority of the existing methods either choose $K$ heuristically or they repeat
the clustering method with different choices of $K$ and accept the best
clustering result. The first option, more often than not, yields a suboptimal result,
while the second option is computationally expensive. In this work, we propose
an incremental method for constructing the eigenspectrum of the graph Laplacian
matrix. This method leverages the eigenstructure of graph Laplacian matrix to
obtain the $K$-th smallest eigenpair of the Laplacian matrix given a collection
of all previously computed $K-1$ smallest eigenpairs. Our proposed method
adapts the Laplacian matrix such that the batch eigenvalue decomposition
problem transforms into an efficient sequential leading eigenpair computation
problem. As a practical application, we consider user-guided spectral
clustering. Specifically, we demonstrate that users can utilize the proposed
incremental method for effective eigenpair computation and for determining the
desired number of clusters based on multiple clustering metrics.
| 1 | 0 | 0 | 1 | 0 | 0 |
Audio-replay attack detection countermeasures | This paper presents the Speech Technology Center (STC) replay attack
detection systems proposed for Automatic Speaker Verification Spoofing and
Countermeasures Challenge 2017. In this study we focused on comparison of
different spoofing detection approaches. These were GMM based methods, high
level features extraction with simple classifier and deep learning frameworks.
Experiments performed on the development and evaluation parts of the challenge
dataset demonstrated stable efficiency of deep learning approaches in case of
changing acoustic conditions. At the same time, an SVM classifier with high-level
features contributed substantially to the efficiency of the resulting STC
systems, according to the fusion systems' results.
| 1 | 0 | 0 | 1 | 0 | 0 |
Joint Scheduling and Transmission Power Control in Wireless Ad Hoc Networks | In this paper, we study how to determine concurrent transmissions and the
transmission power level of each link to maximize spectrum efficiency and
minimize energy consumption in a wireless ad hoc network. The optimal joint
transmission packet scheduling and power control strategy is determined when
the node density goes to infinity and the network area is unbounded. Based on
the asymptotic analysis, we determine the fundamental capacity limits of a
wireless network, subject to an energy consumption constraint. We propose a
scheduling and transmission power control mechanism to approach the optimal
solution to maximize spectrum and energy efficiencies in a practical network.
The distributed implementation of the proposed scheduling and transmission
power control scheme is presented based on our MAC framework proposed in [1].
Simulation results demonstrate that the proposed scheme achieves 40% higher
throughput than existing schemes. Also, the energy consumption using the
proposed scheme is about 20% of the energy consumed using existing power saving
MAC protocols.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proceedings 15th International Conference on Automata and Formal Languages | The 15th International Conference on Automata and Formal Languages (AFL 2017)
was held in Debrecen, Hungary, from September 4 to 6, 2017. The conference was
organized by the Faculty of Informatics of the University of Debrecen and the
Faculty of Informatics of the Eötvös Loránd University of Budapest.
Topics of interest covered all aspects of automata and formal languages,
including theory and applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Correlative cellular ptychography with functionalized nanoparticles at the Fe L-edge | Precise localization of nanoparticles within a cell is crucial to the
understanding of cell-particle interactions and has broad applications in
nanomedicine. Here, we report a proof-of-principle experiment for imaging
individual functionalized nanoparticles within a mammalian cell by correlative
microscopy. Using a chemically-fixed, HeLa cell labeled with fluorescent
core-shell nanoparticles as a model system, we implemented a graphene-oxide
layer as a substrate to significantly reduce background scattering. We
identified cellular features of interest by fluorescence microscopy, followed
by scanning transmission X-ray tomography to localize the particles in 3D, and
ptychographic coherent diffractive imaging of the fine features in the region
at high resolution. By tuning the X-ray energy to the Fe L-edge, we
demonstrated sensitive detection of nanoparticles composed of a 22 nm magnetic
Fe3O4 core encased by a 25-nm-thick fluorescent silica (SiO2) shell. These
fluorescent core-shell nanoparticles act as landmarks and offer clarity in a
cellular context. Our correlative microscopy results confirmed a subset of
particles to be fully internalized, and high-contrast ptychographic images
showed two oxidation states of individual nanoparticles with a resolution of
~16.5 nm. The ability to precisely localize individual fluorescent
nanoparticles within mammalian cells will expand our understanding of the
structure/function relationships for functionalized nanoparticles.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning | Operating in a dynamic real world environment requires a forward thinking and
adversarial aware design for classifiers, beyond fitting the model to the
training data. In such scenarios, it is necessary to make classifiers - a)
harder to evade, b) easier to detect changes in the data distribution over
time, and c) be able to retrain and recover from model degradation. While most
works in the security of machine learning have concentrated on the evasion
resistance (a) problem, there is little work in the areas of reacting to
attacks (b and c). Additionally, while streaming data research concentrates on
the ability to react to changes in the data distribution, it often takes an
adversarial-agnostic view of the security problem. This leaves such systems vulnerable
to adversarial activity, which is aimed towards evading the concept drift
detection mechanism itself. In this paper, we analyze the security of machine
learning, from a dynamic and adversarial aware perspective. The existing
techniques of Restrictive one class classifier models, Complex learning models
and Randomization based ensembles, are shown to be myopic as they approach
security as a static task. These methodologies are ill suited for a dynamic
environment, as they leak excessive information to an adversary, who can
subsequently launch attacks which are indistinguishable from the benign data.
Based on empirical vulnerability analysis against a sophisticated adversary, a
novel feature importance hiding approach for classifier design, is proposed.
The proposed design ensures that future attacks on classifiers can be detected
and recovered from. The proposed work presents motivation, by serving as a
blueprint, for future work in the area of Dynamic-Adversarial mining, which
combines lessons learned from Streaming data mining, Adversarial learning and
Cybersecurity.
| 0 | 0 | 0 | 1 | 0 | 0 |
Software stage-effort estimation based on association rule mining and fuzzy set theory | Relaying on early effort estimation to predict the required number of
resources is not often sufficient, and could lead to under or over estimation.
It is widely acknowledge that that software development process should be
refined regularly and that software prediction made at early stage of software
development is yet kind of guesses. Even good predictions are not sufficient
with inherent uncertainty and risks. The stage-effort estimation allows project
manager to re-allocate correct number of resources, re-schedule project and
control project progress to finish on time and within budget. In this paper we
propose an approach to utilize prior effort records to predict stage effort.
The proposed model combines concepts of Fuzzy set theory and association rule
mining. The results were good in terms of prediction accuracy and have
potential to deliver good stage-effort estimation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Phase Transitions in Approximate Ranking | We study the problem of approximate ranking from observations of pairwise
interactions. The goal is to estimate the underlying ranks of $n$ objects from
data through interactions of comparison or collaboration. Under a general
framework of approximate ranking models, we characterize the exact optimal
statistical error rates of estimating the underlying ranks. We discover
important phase transition boundaries of the optimal error rates. Depending on
the value of the signal-to-noise ratio (SNR) parameter, the optimal rate, as a
function of SNR, is either trivial, polynomial, exponential or zero. The four
corresponding regimes thus have completely different error behaviors. To the
best of our knowledge, this phenomenon, especially the phase transition between
the polynomial and the exponential rates, has not been discovered before.
| 0 | 0 | 1 | 1 | 0 | 0 |
Finding Influential Training Samples for Gradient Boosted Decision Trees | We address the problem of finding influential training samples for a
particular case of tree ensemble-based models, e.g., Random Forest (RF) or
Gradient Boosted Decision Trees (GBDT). A natural way of formalizing this
problem is studying how the model's predictions change upon leave-one-out
retraining, leaving out each individual training sample. Recent work has shown
that, for parametric models, this analysis can be conducted in a
computationally efficient way. We propose several ways of extending this
framework to non-parametric GBDT ensembles under the assumption that tree
structures remain fixed. Furthermore, we introduce a general scheme of
obtaining further approximations to our method that balance the trade-off
between performance and computational complexity. We evaluate our approaches on
various experimental setups and use-case scenarios and demonstrate both the
quality of our approach to finding influential training samples in comparison
to the baselines and its computational efficiency.
| 0 | 0 | 0 | 1 | 0 | 0 |
Vulnerability and co-susceptibility determine the size of network cascades | In a network, a local disturbance can propagate and eventually cause a
substantial part of the system to fail, in cascade events that are easy to
conceptualize but extraordinarily difficult to predict. Here, we develop a
statistical framework that can predict cascade size distributions by
incorporating two ingredients only: the vulnerability of individual components
and the co-susceptibility of groups of components (i.e., their tendency to fail
together). Using cascades in power grids as a representative example, we show
that correlations between component failures define structured and often
surprisingly large groups of co-susceptible components. Aside from their
implications for blackout studies, these results provide insights and a new
modeling framework for understanding cascades in financial systems, food webs,
and complex networks in general.
| 1 | 1 | 0 | 0 | 0 | 0 |
Transfer learning for music classification and regression tasks | In this paper, we present a transfer learning approach for music
classification and regression tasks. We propose to use a pre-trained convnet
feature, a concatenated feature vector using the activations of feature maps of
multiple layers in a trained convolutional network. We show how this convnet
feature can serve as a general-purpose music representation. In the experiments,
a convnet is trained for music tagging and then transferred to other
music-related classification and regression tasks. The convnet feature
outperforms the baseline MFCC feature in all the considered tasks and several
previous approaches that are aggregating MFCCs as well as low- and high-level
music features.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamic Transition in Symbiotic Evolution Induced by Growth Rate Variation | In a standard bifurcation of a dynamical system, the stationary points (or
more generally attractors) change qualitatively when varying a control
parameter. Here we describe a novel unusual effect, when the change of a
parameter, e.g. a growth rate, does not influence the stationary states, but
nevertheless leads to a qualitative change of dynamics. For instance, such a
dynamic transition can be between the convergence to a stationary state and a
strong increase without stationary states, or between the convergence to one
stationary state and that to a different state. This effect is illustrated for
a dynamical system describing two symbiotic populations, one of which exhibits
a growth rate larger than the other one. We show that, although the stationary
states of the dynamical system do not depend on the growth rates, the latter
influence the boundary of the basins of attraction. This change of the basins
of attraction explains this unusual effect of the quantitative change of
dynamics by growth rate variation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scale-invariant magnetoresistance in a cuprate superconductor | The anomalous metallic state in high-temperature superconducting cuprates is
masked by the onset of superconductivity near a quantum critical point. Use of
high magnetic fields to suppress superconductivity has enabled a detailed study
of the ground state in these systems. Yet, the direct effect of strong magnetic
fields on the metallic behavior at low temperatures is poorly understood,
especially near critical doping, $x=0.19$. Here we report a high-field
magnetoresistance study of thin films of LSCO cuprates in close vicinity to
critical doping, $0.161\leq x\leq0.190$. We find that the metallic state
exposed by suppressing superconductivity is characterized by a
magnetoresistance that is linear in magnetic field up to the highest measured
fields of $80$T. The slope of the linear-in-field resistivity is
temperature-independent at very high fields. It mirrors the magnitude and
doping evolution of the linear-in-temperature resistivity that has been
ascribed to Planckian dissipation near a quantum critical point. This
establishes true scale-invariant conductivity as the signature of the strange
metal state in the high-temperature superconducting cuprates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thermoelectric Devices: Principles and Future Trends | The principles of the thermoelectric phenomenon, including Seebeck effect,
Peltier effect, and Thomson effect are discussed. The dependence of the
thermoelectric devices on the figure of merit, Seebeck coefficient, electrical
conductivity, and thermal conductivity is explained in detail. The paper
provides an overview of the different types of thermoelectric materials,
explains the techniques used to grow thin films for these materials, and
discusses future research and development trends for this technology.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exploring one particle orbitals in large Many-Body Localized systems | Strong disorder in interacting quantum systems can give rise to the
phenomenon of Many-Body Localization (MBL), which defies thermalization due to
the formation of an extensive number of quasi local integrals of motion. The
one particle operator content of these integrals of motion is related to the
one particle orbitals of the one particle density matrix and shows a strong
signature across the MBL transition as recently pointed out by Bera et al.
[Phys. Rev. Lett. 115, 046603 (2015); Ann. Phys. 529, 1600356 (2017)]. We study
the properties of the one particle orbitals of many-body eigenstates of an MBL
system in one dimension. Using shift-and-invert MPS (SIMPS), a matrix product
state method to target highly excited many-body eigenstates introduced in
[Phys. Rev. Lett. 118, 017201 (2017)], we are able to obtain accurate results
for large systems of sizes up to L = 64. We find that the one particle orbitals
drawn from eigenstates at different energy densities have high overlap and
their occupations are correlated with the energy of the eigenstates. Moreover,
the standard deviation of the inverse participation ratio of these orbitals is
maximal at the nose of the mobility edge. Also, the one particle orbitals decay
exponentially in real space, with a correlation length that increases at low
disorder. In addition, we find a 1/f distribution of the coupling constants of
a certain range of the number operators of the OPOs, which is related to their
exponential decay.
| 0 | 1 | 0 | 0 | 0 | 0 |
On transient waves in linear viscoelasticity | The aim of this paper is to present a comprehensive review of method of the
wave-front expansion, also known in the literature as the Buchen-Mainardi
algorithm. In particular, many applications of this technique to the
fundamental models of both ordinary and fractional linear viscoelasticity are
thoroughly presented and discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Detection of an Optical Counterpart to the ALFALFA Ultra-compact High Velocity Cloud AGC 249525 | We report on the detection at $>$98% confidence of an optical counterpart to
AGC 249525, an Ultra-Compact High Velocity Cloud (UCHVC) discovered by the
ALFALFA blind neutral hydrogen survey. UCHVCs are compact, isolated HI clouds
with properties consistent with their being nearby low-mass galaxies, but
without identified counterparts in extant optical surveys. Analysis of the
resolved stellar sources in deep $g$- and $i$-band imaging from the WIYN pODI
camera reveals a clustering of possible Red Giant Branch stars associated with
AGC 249525 at a distance of 1.64$\pm$0.45 Mpc. Matching our optical detection
with the HI synthesis map of AGC 249525 from Adams et al. (2016) shows that the
stellar overdensity is exactly coincident with the highest-density HI contour
from that study. Combining our optical photometry and the HI properties of this
object yields an absolute magnitude of $-7.1 \leq M_V \leq -4.5$, a stellar
mass between $2.2\pm0.6\times10^4 M_{\odot}$ and $3.6\pm1.0\times10^5
M_{\odot}$, and an HI to stellar mass ratio between 9 and 144. This object has
stellar properties within the observed range of gas-poor Ultra-Faint Dwarfs in
the Local Group, but is gas-dominated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Boosting the power factor with resonant states: a model study | A particularly promising pathway to enhance the efficiency of thermoelectric
materials lies in the use of resonant states, as suggested by experimentalists
and theorists alike. In this paper, we go over the mechanisms used in the
literature to explain how resonant levels affect the thermoelectric properties,
and we suggest that the effects of hybridization are crucial yet
ill-understood. In order to get a good grasp of the physical picture and to
draw guidelines for thermoelectric enhancement, we use a tight-binding model
containing a conduction band hybridized with a flat band. We find that the
conductivity is suppressed in a wide energy range near the resonance, but that
the Seebeck coefficient can be boosted for strong enough hybridization, thus
allowing for a significant increase of the power factor. The Seebeck
coefficient can also display a sign change as the Fermi level crosses the
resonance. Our results suggest that in order to boost the power factor, the
hybridization strength must not be too low, the resonant level must not be too
close to the conduction (or valence) band edge, and the Fermi level must be
located around, but not inside, the resonant peak.
| 0 | 1 | 0 | 0 | 0 | 0 |
Activation cross-section data for alpha-particle induced nuclear reactions on natural ytterbium for some longer lived radioisotopes | Additional experimental cross sections were deduced for the long half-life
activation products (172Hf and 173Lu) from the alpha particle induced reactions
on ytterbium up to 38 MeV from late, long measurements and for 175Yb, 167Tm
from a re-evaluation of earlier measured spectra. The cross-sections are
compared with the earlier experimental datasets and with the data based on the
TALYS theoretical nuclear reaction model (available in the TENDL-2014 and 2015
libraries) and the ALICE-IPPE code.
| 0 | 0 | 0 | 1 | 0 | 0 |
Congestion-Aware Distributed Network Selection for Integrated Cellular and Wi-Fi Networks | Intelligent network selection plays an important role in achieving an
effective data offloading in the integrated cellular and Wi-Fi networks.
However, previously proposed network selection schemes mainly focused on
offloading as much data traffic to Wi-Fi as possible, without systematically
considering the Wi-Fi network congestion and the ping-pong effect, both of
which may lead to a poor overall user quality of experience. Thus, in this
paper, we study a more practical network selection problem by considering both
the impacts of the network congestion and switching penalties. More
specifically, we formulate the users' interactions as a Bayesian network
selection game (NSG) under the incomplete information of the users' mobilities.
We prove that it is a Bayesian potential game and show the existence of a pure
Bayesian Nash equilibrium that can be easily reached. We then propose a
distributed network selection (DNS) algorithm based on the network congestion
statistics obtained from the operator. Furthermore, we show that computing the
optimal centralized network allocation is an NP-hard problem, which further
justifies our distributed approach. Simulation results show that the DNS
algorithm achieves the highest user utility and a good fairness among users, as
compared with the on-the-spot offloading and cellular-only benchmark schemes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Learning Based Large-Scale Automatic Satellite Crosswalk Classification | High-resolution satellite imagery have been increasingly used on remote
sensing classification problems. One of the main factors is the availability of
this kind of data. Even though, very little effort has been placed on the zebra
crossing classification problem. In this letter, crowdsourcing systems are
exploited in order to enable the automatic acquisition and annotation of a
large-scale satellite imagery database for crosswalks related tasks. Then, this
dataset is used to train deep-learning-based models in order to accurately
classify satellite images that do or do not contain zebra crossings. A novel dataset
with more than 240,000 images from 3 continents, 9 countries and more than 20
cities was used in the experiments. Experimental results showed that freely
available crowdsourcing data can be used to accurately (97.11%) train robust
models to perform crosswalk classification on a global scale.
| 1 | 0 | 0 | 1 | 0 | 0 |
Complexity of the Regularized Newton Method | Newton's method for finding an unconstrained minimizer for strictly convex
functions, generally speaking, does not converge from any starting point.
We introduce and study the damped regularized Newton's method (DRNM). It
converges globally for any strictly convex function, which has a minimizer in
$R^n$.
Locally DRNM converges with a quadratic rate. We characterize the
neighborhood of the minimizer, where the quadratic rate occurs. Based on it we
estimate the number of DRNM's steps required for finding an $\varepsilon$-
approximation for the minimizer.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantifying Program Bias | With the range and sensitivity of algorithmic decisions expanding at a
break-neck speed, it is imperative that we aggressively investigate whether
programs are biased. We propose a novel probabilistic program analysis
technique and apply it to quantifying bias in decision-making programs.
Specifically, we (i) present a sound and complete automated verification
technique for proving quantitative properties of probabilistic programs; (ii)
show that certain notions of bias, recently proposed in the fairness
literature, can be phrased as quantitative correctness properties; and (iii)
present FairSquare, the first verification tool for quantifying program bias,
and evaluate it on a range of decision-making programs.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Theory of Exoplanet Transits with Light Scattering | Exoplanet transit spectroscopy enables the characterization of distant
worlds, and will yield key results for NASA's James Webb Space Telescope.
However, transit spectra models are often simplified, omitting potentially
important processes like refraction and multiple scattering. While the former
process has seen recent development, the effects of light multiple scattering
on exoplanet transit spectra has received little attention. Here, we develop a
detailed theory of exoplanet transit spectroscopy that extends to the full
refracting and multiple scattering case. We explore the importance of
scattering for planet-wide cloud layers, where the relevant parameters are the
slant scattering optical depth, the scattering asymmetry parameter, and the
angular size of the host star. The latter determines the size of the "target"
for a photon that is back-mapped from an observer. We provide results that
straightforwardly indicate the potential importance of multiple scattering for
transit spectra. When the orbital distance is smaller than 10-20 times the
stellar radius, multiple scattering effects for aerosols with asymmetry
parameters larger than 0.8-0.9 can become significant. We provide examples of
the impacts of cloud/haze multiple scattering on transit spectra of a hot
Jupiter-like exoplanet. For cases with a forward and conservatively scattering
cloud/haze, differences due to multiple scattering effects can exceed 200 ppm,
but shrink to zero at wavelength ranges corresponding to strong gas absorption
or when the slant optical depth of the cloud exceeds several tens. We conclude
with a discussion of types of aerosols for which multiple scattering in transit
spectra may be important.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tuning Majorana zero modes with temperature in $π$-phase Josephson junctions | We study a superconductor-normal state-superconductor (SNS) Josephson
junction along the edge of a quantum spin Hall insulator (QSHI) with a
superconducting $\pi$-phase across the junction. We solve self-consistently for
the superconducting order parameter and find both real junctions, where the
order parameter is fully real throughout the system, and junctions where the
order parameter has a complex phase winding. Real junctions host two Majorana
zero modes (MZMs), while phase-winding junctions have no subgap states close to
zero energy. At zero temperature we find that the phase-winding solution always
has the lowest free energy, which we establish being due to a strong
proximity-effect into the N region. With increasing temperature this
proximity-effect is dramatically decreased and we find a phase transition into
a real junction with two MZMs.
| 0 | 1 | 0 | 0 | 0 | 0 |
A repulsive skyrmion chain as guiding track for a race track memory | A skyrmion racetrack design is proposed that allows for thermally stable
skyrmions to code information and dynamical pinning sites that move with the
applied current. This concept solves the problem of intrinsic distributions of
pinning times and pinning currents of skyrmions at static geometrical or
magnetic pinning sites. The dynamical pinning sites are realized by a skyrmion
carrying wire, where the skyrmion repulsion is used in order to keep the
skyrmions at equal distances. The information is coded by an additional layer
where the presence and absence of a skyrmion is used to code the information.
The lowest energy barrier for a data loss is calculated to be $\Delta E = 55\,k_\mathrm{B}T_{300}$,
which is sufficient for long-time thermal stability.
| 0 | 1 | 0 | 0 | 0 | 0 |
Compressive Sensing-Based Detection with Multimodal Dependent Data | Detection with high dimensional multimodal data is a challenging problem when
there are complex inter- and intra- modal dependencies. While several
approaches have been proposed for dependent data fusion (e.g., based on copula
theory), their advantages come at a high price in terms of computational
complexity. In this paper, we treat the detection problem with compressive
sensing (CS) where compression at each sensor is achieved via low dimensional
random projections. CS has recently been exploited to solve detection problems
under various assumptions on the signals of interest, however, its potential
for dependent data fusion has not been explored adequately. We exploit the
capability of CS to capture statistical properties of uncompressed data in
order to compute decision statistics for detection in the compressed domain.
First, a Gaussian approximation is employed to perform likelihood ratio (LR)
based detection with compressed data. In this approach, inter-modal dependence
is captured via a compressed version of the covariance matrix of the
concatenated (temporally and spatially) uncompressed data vector. We show that,
under certain conditions, this approach with a small number of compressed
measurements per node leads to enhanced performance compared to detection with
uncompressed data using widely considered suboptimal approaches. Second, we
develop a nonparametric approach where a decision statistic based on the second
order statistics of uncompressed data is computed in the compressed domain. The
second approach is promising over other related nonparametric approaches and
the first approach when multimodal data is highly correlated at the expense of
slightly increased computational complexity.
| 1 | 0 | 0 | 1 | 0 | 0 |
Factors in Recommending Contrarian Content on Social Media | Polarization is a troubling phenomenon that can lead to societal divisions
and hurt the democratic process. It is therefore important to develop methods
to reduce it.
We propose an algorithmic solution to the problem of reducing polarization.
The core idea is to expose users to content that challenges their point of
view, with the hope of broadening their perspective and thus reducing their
polarity. Our method takes into account several aspects of the problem, such as
the estimated polarity of the user, the probability of accepting the
recommendation, the polarity of the content, and popularity of the content
being recommended.
We evaluate our recommendations via a large-scale user study on Twitter users
that were actively involved in the discussion of the US elections results.
Results show that, in most cases, the factors taken into account in the
recommendation affect the users as expected, and thus capture the essential
features of the problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generating the Log Law of the Wall with Superposition of Standing Waves | Turbulence remains an unsolved multidisciplinary science problem. As one of
the most well-known examples in turbulent flows, knowledge of the logarithmic
mean velocity profile (MVP), so called the log law of the wall, plays an
important role everywhere turbulent flow meets the solid wall, such as fluids
in any kind of channels, skin friction of all types of transportations, the
atmospheric wind on a planetary ground, and the oceanic current on the seabed.
However, the mechanism of how this log-law MVP is formed under the multiscale
nature of turbulent shears remains one of the great puzzles of turbulence
research. To untangle the multiscale coupling of turbulent shear stresses, we
look to a well-known fundamental tool in physics. Here we present how to
reproduce the log-law MVP with the even harmonic modes of fixed-end standing
waves. We find that when these harmonic waves of same magnitude are considered
as the multiscale turbulent shear stresses, the wave envelope of their
superposition simulates the mean shear stress profile of the wall-bounded flow.
It implies that, unexpectedly, the log-law MVP is not related to the turbulent
scales in the inertial subrange associated with the Kolmogorov energy cascade,
revealing the dissipative nature of all scales involved. The MVP with reduced
harmonic modes also shows promising connection to the understanding of flow
transition to turbulence. The finding here suggests the simple harmonic waves
as good agents to help unravel the complex turbulent dynamics in wall-bounded
flow.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive p-value weighting with power optimality | Weighting the p-values is a well-established strategy that improves the power
of multiple testing procedures while dealing with heterogeneous data. However,
how to achieve this task in an optimal way is rarely considered in the
literature. This paper contributes to fill the gap in the case of
group-structured null hypotheses, by introducing a new class of procedures
named ADDOW (for Adaptive Data Driven Optimal Weighting) that adapts both to
the alternative distribution and to the proportion of true null hypotheses. We
prove the asymptotic FDR control and power optimality among all weighted
procedures of ADDOW, which shows that it dominates all existing procedures in
that framework. Some numerical experiments show that the proposed method
preserves its optimal properties in the finite sample setting when the number
of tests is moderately large.
| 0 | 0 | 1 | 1 | 0 | 0 |
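The abstract above concerns choosing the weights themselves; as context, here is a minimal Python sketch of the weighted Benjamini-Hochberg step that such data-driven weights would plug into. The toy p-values and weights are illustrative, and ADDOW's weight-estimation step is not shown.

    import numpy as np

    def weighted_bh(pvals, weights, alpha=0.05):
        """Step-up rule on weight-adjusted p-values q_i = p_i / w_i."""
        p = np.asarray(pvals, dtype=float)
        w = np.asarray(weights, dtype=float)
        w = w * len(w) / w.sum()                 # normalize weights to average 1
        m = len(p)
        q = p / w                                # weight-adjusted p-values
        order = np.argsort(q)
        thresh = alpha * np.arange(1, m + 1) / m
        below = q[order] <= thresh
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True
        return rejected

    pvals = [0.001, 0.02, 0.2, 0.8, 0.03]
    weights = [2.0, 2.0, 0.5, 0.5, 1.0]          # e.g., group-informed weights
    print(weighted_bh(pvals, weights))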
Topical homophily in online social systems | Understanding the dynamics of social interactions is crucial to comprehend
human behavior. The emergence of online social media has enabled access to data
on people's relationships at a large scale. Twitter, specifically, is an
information oriented network, with users sharing and consuming information. In
this work, we study whether users tend to be in contact with people interested
in similar topics, i.e., topical homophily. To do so, we propose an approach
based on the use of hashtags to extract information topics from Twitter
messages and model users' interests. Our results show that, on average, users
are connected with other users similar to them, and that stronger relationships
correspond to a higher topical similarity. Furthermore, we show that topical homophily
provides interesting information that can eventually allow inferring users'
connectivity. Our work, besides providing a way to assess the topical
similarity of users, quantifies topical homophily among individuals,
contributing to a better understanding of how complex social systems are
structured.
| 1 | 0 | 0 | 0 | 0 | 0 |
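A minimal Python sketch of the hashtag-based comparison suggested by the abstract above: represent each user by hashtag counts and score connected pairs with cosine similarity. The users, hashtags, and edges below are made up.

    from collections import Counter
    import math

    def cosine(u, v):
        # Cosine similarity between two sparse hashtag-count vectors.
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        norm = math.sqrt(sum(x * x for x in u.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    user_hashtags = {
        "alice": Counter({"#climate": 5, "#energy": 3}),
        "bob":   Counter({"#climate": 2, "#policy": 4}),
        "carol": Counter({"#football": 7}),
    }
    edges = [("alice", "bob"), ("alice", "carol")]
    for a, b in edges:
        print(a, b, round(cosine(user_hashtags[a], user_hashtags[b]), 3))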
Perceptual Context in Cognitive Hierarchies | Cognition does not only depend on bottom-up sensor feature abstraction, but
also relies on contextual information being passed top-down. Context is higher
level information that helps to predict belief states at lower levels. The main
contribution of this paper is to provide a formalisation of perceptual context
and its integration into a new process model for cognitive hierarchies. Several
simple instantiations of a cognitive hierarchy are used to illustrate the role
of context. Notably, we demonstrate the use of context in a novel approach to
visually track the pose of rigid objects with just a 2D camera.
| 1 | 0 | 0 | 0 | 0 | 0 |
Coherence for braided and symmetric pseudomonoids | Presentations for unbraided, braided and symmetric pseudomonoids are defined.
Biequivalences characterising the semistrict bicategories generated by these
presentations are proven. It is shown that these biequivalences categorify
results in the theory of monoids and commutative monoids, and generalise
standard coherence theorems for braided and symmetric monoidal categories.
| 1 | 0 | 1 | 0 | 0 | 0 |
Linear Optimal Power Flow Using Cycle Flows | Linear optimal power flow (LOPF) algorithms use a linearization of the
alternating current (AC) load flow equations to optimize generator dispatch in
a network subject to the loading constraints of the network branches. Common
algorithms use the voltage angles at the buses as optimization variables, but
alternatives can be computationally advantageous. In this article we provide a
review of existing methods and describe a new formulation that expresses the
loading constraints directly in terms of the flows themselves, using a
decomposition of the network graph into a spanning tree and closed cycles. We
provide a comprehensive study of the computational performance of the various
formulations, in settings that include computationally challenging applications
such as multi-period LOPF with storage dispatch and generation capacity
expansion. We show that the new formulation of the LOPF solves up to 7 times
faster than the angle formulation using a commercial linear programming solver,
while another existing cycle-based formulation solves up to 20 times faster,
with an average speed-up of factor 3 for the standard networks considered here.
If generation capacities are also optimized, the average speed-up rises to a
factor of 12, reaching up to a factor of 213 in a particular instance. The speed-up
is largest for networks with many buses and decentralized generators throughout the
network, which is highly relevant given the rise of distributed renewable
generation and the computational challenge of operation and planning in such
networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
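A minimal Python sketch (using networkx) of the graph decomposition underlying the cycle-flow idea described above: split the network into a spanning tree plus independent cycles, so that, given nodal injections, tree flows follow from Kirchhoff's current law and the remaining optimization variables are the cycle flows. The toy network is illustrative, and the full LOPF (costs, loading constraints, storage, capacity expansion) is not shown.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1)])   # toy network

    tree = nx.minimum_spanning_tree(G)                            # spanning tree
    chords = [e for e in G.edges() if not tree.has_edge(*e)]      # non-tree branches
    cycles = nx.cycle_basis(G)                                    # independent cycles

    print("tree edges:", list(tree.edges()))
    print("chords (one per independent cycle):", chords)
    print("cycle basis:", cycles)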
Toward construction of a consistent field theory with Poincare covariance in terms of step-function-type basis functions showing confinement/deconfinement, mass-gap and Regge trajectory for non-pure/pure non-Abelian gauge fields | This article is a review by the authors concerning the construction of a
Poincaré covariant (owing to spacetime continuum)
field-theoretic formalism in terms of step-function-type basis functions
without ultraviolet divergences. This formalism analytically derives
confinement/deconfinement, mass-gap and Regge trajectory for non-Abelian gauge
fields, and gives solutions for self-interacting scalar fields. Fields
propagate in spacetime continuum and fields with finite degrees of freedom
toward continuum limit have no ultraviolet divergence. Basis functions defined
in a parameter spacetime are mapped to real spacetime. The authors derive a new
solution comprised of classical fields as a vacuum and quantum fluctuations,
leading to the linear potential between the particle and antiparticle from the
Wilson loop. The Polyakov line gives finite binding energies and reveals the
deconfining property at high temperatures. The quantum action yields positive
mass from the classical fields, and quantum fluctuations produce the Coulomb
potential. Pure Yang-Mills fields show the same mass-gap owing to the
particle-antiparticle pair creation. The Dirac equation under linear potential
is analytically solved in this formalism, reproducing the principal properties
of Regge trajectories at a quantum level. Further outlook mentions a
possibility of the difference between conventional continuum and present wave
functions responsible for the cosmological constant.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Exact Solution to Rank-1 L1-norm TUCKER2 Decomposition | We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way
tensors, treated as a collection of $N$ $D \times M$ matrices that are to be
jointly decomposed. Our contributions are as follows. i) We prove that the
problem is equivalent to combinatorial optimization over $N$ antipodal-binary
variables. ii) We derive the first two algorithms in the literature for its
exact solution. The first algorithm has cost exponential in $N$; the second one
has cost polynomial in $N$ (under a mild assumption). Our algorithms are
accompanied by formal complexity analysis. iii) We conduct numerical studies to
compare the performance of exact L1-TUCKER2 (proposed) with standard HOSVD,
HOOI, GLRAM, PCA, L1-PCA, and TPCA-L1. Our studies show that L1-TUCKER2
outperforms (in tensor approximation) all the above counterparts when the
processed data are outlier corrupted.
| 1 | 0 | 0 | 1 | 0 | 0 |
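One way to read the stated equivalence is sketched below in Python: for every antipodal sign pattern over the N matrices, combine them and take the largest singular value, keeping the best pattern. This exhaustive search mirrors the exponential-cost algorithm; this reading of the equivalence and the toy data are assumptions, and the polynomial-cost variant is not shown.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    X = [rng.standard_normal((4, 3)) for _ in range(5)]    # N = 5 matrices, D x M

    best_val, best_b = -np.inf, None
    for b in itertools.product([-1.0, 1.0], repeat=len(X)):
        combined = sum(bn * Xn for bn, Xn in zip(b, X))    # sign-weighted sum
        val = np.linalg.svd(combined, compute_uv=False)[0]  # largest singular value
        if val > best_val:
            best_val, best_b = val, b

    print(best_b, round(best_val, 4))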
A Statistical Comparative Planetology Approach to the Hunt for Habitable Exoplanets and Life Beyond the Solar System | The search for habitable exoplanets and life beyond the Solar System is one
of the most compelling scientific opportunities of our time. Nevertheless, the
high cost of building facilities that can address this topic and the keen
public interest in the results of such research requires the rigorous
development of experiments that can deliver a definitive advance in our
understanding. Most work to date in this area has focused on a "systems
science" approach of obtaining and interpreting comprehensive data for
individual planets to make statements about their habitability and the
possibility that they harbor life. This strategy is challenging because of the
diversity of exoplanets, both observed and expected, and the limited
information that can be obtained with astronomical instruments. Here we propose
a complementary approach that is based on performing surveys of key planetary
characteristics and using statistical marginalization to answer broader
questions than can be addressed with a small sample of objects. The fundamental
principle of this comparative planetology approach is maximizing what can be
learned from each type of measurement by applying it widely rather than
requiring that multiple kinds of observations be brought to bear on a single
object. As a proof of concept, we outline a survey of terrestrial exoplanet
atmospheric water and carbon dioxide abundances that would test the habitable
zone hypothesis and lead to a deeper understanding of the frequency of
habitable planets. We also discuss ideas for additional surveys that could be
developed to test other foundational hypotheses in this area.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast Generation for Convolutional Autoregressive Models | Convolutional autoregressive models have recently demonstrated
state-of-the-art performance on a number of generation tasks. While fast,
parallel training methods have been crucial for their success, generation is
typically implemented in a naïve fashion where redundant computations are
unnecessarily repeated. This results in slow generation, making such models
infeasible for production environments. In this work, we describe a method to
speed up generation in convolutional autoregressive models. The key idea is to
cache hidden states to avoid redundant computation. We apply our fast
generation method to the Wavenet and PixelCNN++ models and achieve up to
$21\times$ and $183\times$ speedups respectively.
| 1 | 0 | 0 | 1 | 0 | 0 |
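A minimal Python sketch of the caching idea described above: during sampling, keep a rolling buffer of recent values per causal-convolution layer so each new output is one small dot product over the cache rather than a recomputation over the whole generated history. The single-layer toy "network" and its weights are illustrative.

    from collections import deque
    import numpy as np

    kernel = np.array([0.2, 0.3, 0.5])          # weights of a causal conv of width 3
    buffer = deque([0.0, 0.0, 0.0], maxlen=3)   # cached recent inputs (oldest first)

    samples = []
    x = 0.1                                     # seed value
    for _ in range(5):
        buffer.append(x)
        # Incremental step: one dot product over the cache instead of re-running
        # the convolution over the whole generated history.
        x = np.tanh(np.dot(kernel, np.array(buffer)))
        samples.append(float(x))

    print(samples)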
Variable domain N-linked glycosylation and negative surface charge are key features of monoclonal ACPA: implications for B-cell selection | Autoreactive B cells have a central role in the pathogenesis of rheumatoid
arthritis (RA), and recent findings have proposed that anti-citrullinated
protein autoantibodies (ACPA) may be directly pathogenic. Herein, we
demonstrate the frequency of variable-region glycosylation in single-cell
cloned mAbs. A total of 14 ACPA mAbs were evaluated for predicted N-linked
glycosylation motifs in silico and compared to 452 highly-mutated mAbs from RA
patients and controls. Variable region N-linked motifs (N-X-S/T) were
strikingly prevalent within ACPA (100%) compared to somatically hypermutated
(SHM) RA bone marrow plasma cells (21%), and synovial plasma cells from
seropositive (39%) and seronegative RA (7%). When normalized for SHM, ACPA
still had significantly higher frequency of N-linked motifs compared to all
studied mAbs including highly-mutated HIV broadly-neutralizing and
malaria-associated mAbs. The Fab glycans of ACPA-mAbs were highly sialylated,
contributed to altered charge, but did not influence antigen binding. The
analysis revealed evidence of unusual B-cell selection pressure and
an SHM-mediated decrease in surface charge and isoelectric point in ACPA. It is
still unknown how these distinct features of anti-citrulline immunity may have
an impact on pathogenesis. However, it is evident that they offer selective
advantages for ACPA+ B cells, possibly also through non-antigen driven
mechanisms.
| 0 | 0 | 0 | 0 | 1 | 0 |
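A small Python sketch of the in-silico motif scan mentioned above: search a variable-region amino-acid sequence for predicted N-linked glycosylation sequons using the common N-X-S/T pattern with X not proline. The example sequence is hypothetical.

    import re

    SEQON = re.compile(r"N[^P][ST]")

    def find_sequons(seq):
        """Return (1-based position, motif) pairs for N-X-S/T motifs with X != P."""
        return [(m.start() + 1, m.group()) for m in SEQON.finditer(seq)]

    seq = "QVQLVESGGGLVQPGGSLRLSCAASNGTFSSYAMS"   # hypothetical VH fragment
    print(find_sequons(seq))                      # e.g. [(26, 'NGT')]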
Periodic Airy process and equilibrium dynamics of edge fermions in a trap | We establish an exact mapping between (i) the equilibrium (imaginary time)
dynamics of non-interacting fermions trapped in a harmonic potential at
temperature $T=1/\beta$ and (ii) non-intersecting Ornstein-Uhlenbeck (OU)
particles constrained to return to their initial positions after time $\beta$.
Exploiting the determinantal structure of the process we compute the universal
correlation functions both in the bulk and at the edge of the trapped Fermi
gas. The latter corresponds to the top path of the non-intersecting OU
particles, and leads us to introduce and study the time-periodic Airy$_2$
process, ${\cal A}^b_2(u)$, depending on a single parameter, the period $b$.
The standard Airy$_2$ process is recovered for $b=+\infty$. We discuss
applications of our results to the real time quantum dynamics of trapped
fermions.
| 0 | 1 | 1 | 0 | 0 | 0 |
Quantum machine learning: a classical perspective | Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
| 1 | 0 | 0 | 1 | 0 | 0 |
Robot Assisted Tower Construction - A Resource Distribution Task to Study Human-Robot Collaboration and Interaction with Groups of People | Research on human-robot collaboration, or human-robot teaming, has focused
predominantly on understanding and enabling collaboration between a single
robot and a single human. Extending human-robot collaboration research beyond
the dyad raises novel questions about how a robot should distribute resources
among group members and about what the social and task related consequences of
the distribution are. Methodological advances are needed to allow researchers
to collect data about human-robot collaboration that involves multiple people.
This paper presents Tower Construction, a novel resource distribution task that
allows researchers to examine collaboration between a robot and groups of
people. By focusing on the question of whether and how a robot's distribution
of resources (wooden blocks required for a building task) affects collaboration
dynamics and outcomes, we provide a case of how this task can be applied in a
laboratory study with 124 participants to collect data about human-robot
collaboration that involves multiple humans. We highlight the kinds of insights
the task can yield. In particular, we find that the distribution of resources
affects perceptions of performance, and interpersonal dynamics between human
team-members.
| 1 | 0 | 0 | 0 | 0 | 0 |
Counterexample Guided Inductive Optimization | This paper describes three variants of a counterexample guided inductive
optimization (CEGIO) approach based on Satisfiability Modulo Theories (SMT)
solvers. In particular, CEGIO relies on iterative executions to constrain a
verification procedure, in order to perform inductive generalization, based on
counterexamples extracted from SMT solvers. CEGIO is able to successfully
optimize a wide range of functions, including non-linear and non-convex
optimization problems based on SMT solvers, in which data provided by
counterexamples are employed to guide the verification engine, thus reducing
the optimization domain. The present algorithms are evaluated using a large set
of benchmarks typically employed for evaluating optimization techniques.
Experimental results show the efficiency and effectiveness of the proposed
algorithms, which find the optimal solution in all evaluated benchmarks, while
traditional techniques are usually trapped by local minima.
| 1 | 0 | 0 | 0 | 0 | 0 |
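A minimal Python sketch, using the Z3 SMT solver, of a counterexample-guided optimization loop in the spirit described above: repeatedly ask the solver for a point whose objective value beats the incumbent and stop when none exists. The toy integer objective, the bounded domain, and the encoding are illustrative assumptions, not the paper's verifier-based setup.

    from z3 import Int, Solver, And, sat

    x = Int("x")
    def f(v):
        return (v - 3) * (v - 3) + 2            # toy objective; minimum is f(3) = 2

    best_x = -10
    best_val = (best_x - 3) ** 2 + 2            # evaluate the incumbent in plain Python

    while True:
        s = Solver()
        s.add(And(x >= -10, x <= 10))           # bounded integer search domain
        s.add(f(x) < best_val)                  # ask the solver for a strictly better point
        if s.check() != sat:
            break                               # no counterexample left: incumbent is optimal
        best_x = s.model()[x].as_long()
        best_val = (best_x - 3) ** 2 + 2

    print(best_x, best_val)                     # expected: 3 2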
Bounds for the difference between two Čebyšev functionals | In this work, a generalization of pre-Grüss inequality is established.
Several bounds for the difference between two Čebyšev functionals are proved.
| 0 | 0 | 1 | 0 | 0 | 0 |
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals | An electroencephalography (EEG) based Brain Computer Interface (BCI) enables
people to communicate with the outside world by interpreting the EEG signals of
their brains to interact with devices such as wheelchairs and intelligent
robots. More specifically, motor imagery EEG (MI-EEG), which reflects a
subjects active intent, is attracting increasing attention for a variety of BCI
applications. Accurate classification of MI-EEG signals while essential for
effective operation of BCI systems, is challenging due to the significant noise
inherent in the signals and the lack of informative correlation between the
signals and brain activities. In this paper, we propose a novel deep neural
network based learning framework that affords perceptive insights into the
relationship between the MI-EEG data and brain activities. We design a joint
convolutional recurrent neural network that simultaneously learns robust
high-level feature representations through low-dimensional dense embeddings from
raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various
artifacts such as background activities. The proposed approach has been
evaluated extensively on a large-scale public MI-EEG dataset and a limited but
easy-to-deploy dataset collected in our lab. The results show that our approach
outperforms a series of baselines and the competitive state-of-the-art
methods, yielding a classification accuracy of 95.53%. The applicability of our
proposed approach is further demonstrated with a practical BCI system for
typing.
| 1 | 0 | 0 | 0 | 0 | 0 |
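A minimal PyTorch sketch of a joint convolutional-recurrent classifier of the general kind described above: a 1D convolution over raw multi-channel EEG, an LSTM over time, and a linear classifier. Layer sizes, channel/class counts, and the omission of the autoencoder denoising stage are simplifying assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ConvRecurrentEEG(nn.Module):
        def __init__(self, n_channels=64, n_classes=5, hidden=32):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            self.rnn = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                     # x: (batch, channels, time)
            h = self.conv(x)                      # (batch, 16, time)
            h = h.transpose(1, 2)                 # (batch, time, 16) for the LSTM
            out, _ = self.rnn(h)
            return self.head(out[:, -1, :])       # classify from the last time step

    model = ConvRecurrentEEG()
    dummy = torch.randn(8, 64, 128)               # batch of 8 raw EEG segments
    print(model(dummy).shape)                     # torch.Size([8, 5])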
Model Checking of Cache for WCET Analysis Refinement | On real-time systems running under timing constraints, scheduling can be
performed when one is aware of the worst case execution time (WCET) of tasks.
Usually, the WCET of a task is unknown and schedulers make use of safe
over-approximations given by static WCET analysis. To reduce the
over-approximation, WCET analysis has to gain information about the underlying
hardware behavior, such as pipelines and caches. In this paper, we focus on the
cache analysis, which classifies memory accesses as hits/misses according to
the set of possible cache states. We propose to refine the results of classical
cache analysis using a model checker, introducing a new cache model for the
least recently used (LRU) policy.
| 1 | 0 | 0 | 0 | 0 | 0 |
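A small Python sketch of the concrete LRU behaviour such a cache model has to capture: simulate one fully associative LRU set and classify each access as a hit or a miss. Real WCET analyses work on abstract cache states and per-set associativity; the access trace below is made up.

    from collections import OrderedDict

    def simulate_lru(accesses, ways=4):
        cache = OrderedDict()                  # most recently used entry kept last
        outcome = []
        for block in accesses:
            if block in cache:
                cache.move_to_end(block)       # refresh recency on a hit
                outcome.append((block, "hit"))
            else:
                if len(cache) == ways:
                    cache.popitem(last=False)  # evict the least recently used block
                cache[block] = True
                outcome.append((block, "miss"))
        return outcome

    print(simulate_lru(["a", "b", "c", "a", "d", "e", "b"]))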
Rational Solutions of the Painlevé-II Equation Revisited | The rational solutions of the Painlevé-II equation appear in several
applications and are known to have many remarkable algebraic and analytic
properties. They also have several different representations, useful in
different ways for establishing these properties. In particular,
Riemann-Hilbert representations have proven to be useful for extracting the
asymptotic behavior of the rational solutions in the limit of large degree
(equivalently the large-parameter limit). We review the elementary properties
of the rational Painlevé-II functions, and then we describe three different
Riemann-Hilbert representations of them that have appeared in the literature: a
representation by means of the isomonodromy theory of the Flaschka-Newell Lax
pair, a second representation by means of the isomonodromy theory of the
Jimbo-Miwa Lax pair, and a third representation found by Bertola and Bothner
related to pseudo-orthogonal polynomials. We prove that the Flaschka-Newell and
Bertola-Bothner Riemann-Hilbert representations of the rational Painlevé-II
functions are explicitly connected to each other. Finally, we review recent
results describing the asymptotic behavior of the rational Painlevé-II
functions obtained from these Riemann-Hilbert representations by means of the
steepest descent method.
| 0 | 1 | 1 | 0 | 0 | 0 |
PHAST: Protein-like heteropolymer analysis by statistical thermodynamics | PHAST is a software package written in standard Fortran, with MPI and CUDA
extensions, able to efficiently perform parallel multicanonical Monte Carlo
simulations of single or multiple heteropolymeric chains, as coarse-grained
models for proteins. The outcome data can be straightforwardly analyzed within
its microcanonical Statistical Thermodynamics module, which allows for
computing the entropy, caloric curve, specific heat and free energies. As a
case study, we investigate the aggregation of heteropolymers bioinspired on
$A\beta_{25-33}$ fragments and their cross-seeding with $IAPP_{20-29}$
isoforms. Excellent parallel scaling is observed, even under numerically
difficult first-order like phase transitions, which are properly described by
the built-in fully reconfigurable force fields. Still, the package is free and
open source, this shall motivate users to readily adapt it to specific
purposes.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Information-Theoretic Analysis for Thompson Sampling with Many Actions | Information-theoretic Bayesian regret bounds of Russo and Van Roy capture the
dependence of regret on prior uncertainty. However, this dependence is through
entropy, which can become arbitrarily large as the number of actions increases.
We establish new bounds that depend instead on a notion of rate-distortion.
Among other things, this allows us to recover through information-theoretic
arguments a near-optimal bound for the linear bandit. We also offer a bound for
the logistic bandit that dramatically improves on the best previously
available, though this bound depends on an information-theoretic statistic that
we have only been able to quantify via computation.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal portfolio selection in an Itô-Markov additive market | We study a portfolio selection problem in a continuous-time Itô-Markov
additive market with prices of financial assets described by Markov additive
processes which combine Lévy processes and regime switching models. Thus the
model takes into account two sources of risk: the jump diffusion risk and the
regime switching risk. For this reason the market is incomplete. We complete
the market by enlarging it with the use of a set of Markovian jump securities,
Markovian power-jump securities and impulse regime switching securities.
Moreover, we give conditions under which the market is
asymptotic-arbitrage-free. We solve the portfolio selection problem in the
Itô-Markov additive market for the power utility and the logarithmic utility.
| 0 | 0 | 0 | 0 | 0 | 1 |
SPIRITS: Uncovering Unusual Infrared Transients With Spitzer | We present an ongoing, systematic search for extragalactic infrared
transients, dubbed SPIRITS --- SPitzer InfraRed Intensive Transients Survey. In
the first year, using Spitzer/IRAC, we searched 190 nearby galaxies with
cadence baselines of one month and six months. We discovered over 1958
variables and 43 transients. Here, we describe the survey design and highlight
14 unusual infrared transients with no optical counterparts to deep limits,
which we refer to as SPRITEs (eSPecially Red Intermediate Luminosity Transient
Events). SPRITEs are in the infrared luminosity gap between novae and
supernovae, with [4.5] absolute magnitudes between -11 and -14 (Vega-mag) and
[3.6]-[4.5] colors between 0.3 mag and 1.6 mag. The photometric evolution of
SPRITEs is diverse, ranging from < 0.1 mag/yr to > 7 mag/yr. SPRITEs occur in
star-forming galaxies. We present an in-depth study of one of them, SPIRITS
14ajc in Messier 83, which shows shock-excited molecular hydrogen emission.
This shock may have been triggered by the dynamic decay of a non-hierarchical
system of massive stars that led to either the formation of a binary or a
proto-stellar merger.
| 0 | 1 | 0 | 0 | 0 | 0 |
Is the annual growth rate in balance of trade time series for Ireland nonlinear | We describe the Time Series Multivariate Adaptive Regression Splines
(TSMARS) method. This method is useful for identifying nonlinear structure in a
time series. We use TSMARS to model the annual change in the balance of trade
for Ireland from 1970 to 2007. We compare the TSMARS estimate with long memory
ARFIMA estimates and long-term parsimonious linear models. We show that the
change in the balance of trade is nonlinear and possesses weak long-range
effects. Moreover, we compare the period prior to the introduction of the
Intrastat system in 1993 with the period from 1993 onward. Here we show that in
the earlier period the series had a substantial linear signal embedded in it
suggesting that estimation efforts in the earlier period may have resulted in
an over-smoothed series.
| 0 | 0 | 0 | 1 | 0 | 0 |
Sparse-View X-Ray CT Reconstruction Using $\ell_1$ Prior with Learned Transform | A major challenge in X-ray computed tomography (CT) is reducing radiation
dose while maintaining high quality of reconstructed images. To reduce the
radiation dose, one can reduce the number of projection views (sparse-view CT);
however, it becomes difficult to achieve high quality image reconstruction as
the number of projection views decreases. Researchers have applied the concept
of learning sparse representations from (high-quality) CT image dataset to the
sparse-view CT reconstruction. We propose a new statistical CT reconstruction
model that combines penalized weighted-least squares (PWLS) and $\ell_1$
regularization with learned sparsifying transform (PWLS-ST-$\ell_1$), and an
algorithm for PWLS-ST-$\ell_1$. Numerical experiments for sparse-view 2D
fan-beam CT and 3D axial cone-beam CT show that the $\ell_1$ regularizer
significantly improves the sharpness of edges of reconstructed images compared
to the CT reconstruction methods using edge-preserving regularizer and $\ell_2$
regularization with learned ST.
| 1 | 1 | 0 | 1 | 0 | 0 |
Adversarial classification: An adversarial risk analysis approach | Classification problems in security settings are usually contemplated as
confrontations in which one or more adversaries try to fool a classifier to
obtain a benefit. Most approaches to such adversarial classification problems
have focused on game theoretical ideas with strong underlying common knowledge
assumptions, which are actually not realistic in security domains. We provide
an alternative framework for this problem based on adversarial risk analysis,
which we illustrate with several examples. Computational and implementation
issues are discussed.
| 0 | 0 | 0 | 1 | 0 | 0 |
Volume growth in the component of fibered twists | For a Liouville domain $W$ whose boundary admits a periodic Reeb flow, we can
consider the connected component $[\tau] \in \pi_0(\text{Symp}^c(\widehat W))$
of fibered twists. In this paper, we investigate an entropy-type invariant,
called the slow volume growth, of the component $[\tau]$ and give a uniform
lower bound of the growth using wrapped Floer homology. We also show that
$[\tau]$ has infinite order in $\pi_0(\text{Symp}^c(\widehat W))$ if there is
an admissible Lagrangian $L$ in $W$ whose wrapped Floer homology is infinite
dimensional. We apply our results to fibered twists coming from the Milnor
fibers of $A_k$-type singularities and complements of a symplectic hypersurface
in a real symplectic manifold. They admit so-called real Lagrangians, and we
can explicitly compute wrapped Floer homology groups using a version of
Morse-Bott spectral sequences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Scalar Reduction of a Neural Field Model with Spike Frequency Adaptation | We study a deterministic version of a one- and two-dimensional attractor
neural network model of hippocampal activity first studied by Itskov et al.
(2011). We analyze the dynamics of the system on the ring and torus domain with
an even periodized weight matrix, assuming weak and slow spike frequency
adaptation and a weak stationary input current. On these domains, we find
transitions from spatially localized stationary solutions ("bumps") to
(periodically modulated) solutions ("sloshers"), as well as constant and
non-constant velocity traveling bumps depending on the relative strength of
external input current and adaptation. The weak and slow adaptation allows for
a reduction of the system from a distributed partial integro-differential
equation to a system of scalar Volterra integro-differential equations
describing the movement of the centroid of the bump solution. Using this
reduction, we show that on both domains, sloshing solutions arise through an
Andronov-Hopf bifurcation and derive a normal form for the Hopf bifurcation on
the ring. We also show existence and stability of constant velocity solutions
on both domains using Evans functions. In contrast to existing studies, we
assume a general weight matrix of Mexican-hat type in addition to a smooth
firing rate function.
| 0 | 0 | 0 | 0 | 1 | 0 |
Revisiting Elementary Denotational Semantics | Operational semantics have been enormously successful, in large part due to
their flexibility and simplicity, but they are not compositional. Denotational
semantics, on the other hand, are compositional but the lattice-theoretic
models are complex and difficult to scale to large languages. However, there
are elementary models of the $\lambda$-calculus that are much less complex: by
Coppo, Dezani-Ciancaglini, and Salle (1979), Engeler (1981), and Plotkin
(1993).
This paper takes first steps toward answering the question: can elementary
models be good for the day-to-day work of language specification,
mechanization, and compiler correctness? The elementary models in the
literature are simple, but they are not as intuitive as they could be. To
remedy this, we create a new model that represents functions literally as
finite graphs. Regarding mechanization, we give the first machine-checked proof
of soundness and completeness of an elementary model with respect to an
operational semantics. Regarding compiler correctness, we define a polyvariant
inliner for the call-by-value $\lambda$-calculus and prove that its output is
contextually equivalent to its input. Toward scaling elementary models to
larger languages, we formulate our semantics in a monadic style, give a
semantics for System F with general recursion, and mechanize the proof of type
soundness.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dirac Composite Fermion - A Particle-Hole Spinor | The particle-hole (PH) symmetry at half-filled Landau level requires the
relationship between the flux number N_phi and the particle number N on a
sphere to be exactly N_phi - 2(N-1) = 1. The wave functions of composite
fermions with 1/2 "orbital spin", which contributes to the shift "1" in the
N_phi and N relationship, are proposed, shown to be PH symmetric, and validated
with exact finite system results. It is shown that the many-body composite electron
and composite hole wave functions at half-filling can be formed from the two
components of the same spinor wave function of a massless Dirac fermion at
zero-magnetic field. It is further shown that away from half-filling, the
many-body composite electron wave function at filling factor nu and its PH
conjugated composite hole wave function at 1-nu can be formed from the two
components of the very same spinor wave functions of a massless Dirac fermion
at non-zero magnetic field. This relationship leads to the proposal of a very
simple Dirac composite fermion effective field theory, where the two-component
Dirac fermion field is a particle-hole spinor field coupled to the same
emergent gauge field, with one field component describing the composite
electrons and the other describing the PH conjugated composite holes. As such,
the density of the Dirac spinor field is the density sum of the composite
electron and hole field components, and therefore is equal to the degeneracy of
the Lowest Landau level. On the other hand, the charge density coupled to the
external magnetic field is the density difference between the composite
electron and hole field components, and is therefore neutral at exactly
half-filling. It is shown that the proposed particle-hole spinor effective
field theory gives essentially the same electromagnetic responses as Son's
Dirac composite fermion theory does.
| 0 | 1 | 0 | 0 | 0 | 0 |
Safe Adaptive Importance Sampling | Importance sampling has become an indispensable strategy to speed up
optimization algorithms for large-scale applications. Improved adaptive
variants - using importance values defined by the complete gradient information
which changes during optimization - enjoy favorable theoretical properties, but
are typically computationally infeasible. In this paper we propose an efficient
approximation of gradient-based sampling, which is based on safe bounds on the
gradient. The proposed sampling distribution is (i) provably the best sampling
with respect to the given bounds, (ii) always better than uniform sampling and
fixed importance sampling and (iii) can efficiently be computed - in many
applications at negligible extra cost. The proposed sampling scheme is generic
and can easily be integrated into existing algorithms. In particular, we show
that coordinate-descent (CD) and stochastic gradient descent (SGD) can enjoy
a significant speed-up under the novel scheme. The proven efficiency of the
proposed sampling is verified by extensive numerical testing.
| 1 | 0 | 0 | 0 | 0 | 0 |
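A minimal Python sketch of bound-driven importance sampling in the spirit of the scheme above: sample indices with probability proportional to a safe per-coordinate gradient bound and reweight the sampled contributions to keep the update unbiased. The bounds and the simple proportional rule are illustrative; the paper's provably optimal distribution under the given bounds is not shown.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    grad_bounds = np.array([5.0, 1.0, 0.5, 0.5, 2.0, 1.0])   # safe upper bounds

    p = grad_bounds / grad_bounds.sum()        # sampling distribution from bounds
    idx = rng.choice(n, size=4, p=p)           # pick coordinates to update

    # Unbiased reweighting: scale each sampled contribution by 1 / (n * p_i),
    # so the expectation matches the uniform full update.
    weights = 1.0 / (n * p[idx])
    print(idx, np.round(weights, 3))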
Secure Grouping Protocol Using a Deck of Cards | We consider a problem, which we call secure grouping, of dividing a number of
parties into some subsets (groups) in the following manner: Each party has to
know the other members of his/her group, while he/she may not know anything
about how the remaining parties are divided (except for certain public
predetermined constraints, such as the number of parties in each group). In
this paper, we construct an information-theoretically secure protocol using a
deck of physical cards to solve the problem, which is jointly executable by the
parties themselves without a trusted third party. Despite the non-triviality
and the potential usefulness of the secure grouping, our proposed protocol is
fairly simple to describe and execute. Our protocol is based on algebraic
properties of conjugate permutations. A key ingredient of our protocol is our
new techniques to apply multiplication and inverse operations to hidden
permutations (i.e., those encoded by using face-down cards), which would be of
independent interest and would have various potential applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Categorically closed topological groups | Let $\mathcal C$ be a subcategory of the category of topologized semigroups
and their partial continuous homomorphisms. An object $X$ of the category
${\mathcal C}$ is called ${\mathcal C}$-closed if for each morphism $f:X\to Y$
of the category ${\mathcal C}$ the image $f(X)$ is closed in $Y$. In the paper
we detect topological groups which are $\mathcal C$-closed for the categories
$\mathcal C$ whose objects are Hausdorff topological (semi)groups and whose
morphisms are isomorphic topological embeddings, injective continuous
homomorphisms, continuous homomorphisms, or partial continuous homomorphisms
with closed domain.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extracting significant signal of news consumption from social networks: the case of Twitter in Italian political elections | According to the Eurobarometer report about EU media use of May 2018, the
number of European citizens who consult on-line social networks for accessing
information is considerably increasing. In this work we analyze approximately
$10^6$ tweets exchanged during the last Italian elections. By using an
entropy-based null model discounting the activity of the users, we first
identify potential political alliances within the group of verified accounts:
if two verified users are retweeted more than expected by the non-verified
ones, they are likely to be related. Then, we derive the users' affiliation to
a coalition measuring the polarization of unverified accounts. Finally, we
study the bipartite directed representation of the tweets and retweets network,
in which tweets and users are collected on the two layers. Users with the
highest out-degree correspond to the most popular ones, whereas highest out-degree
posts are the most "viral". We identify significant content spreaders by
statistically validating the connections that cannot be explained by users'
tweeting activity and posts' virality by using an entropy-based null model as
a benchmark. The analysis of the directed network of validated retweets reveals
signals of the alliances formed after the elections, highlighting commonalities
of interests before the event of the national elections.
| 1 | 0 | 0 | 0 | 0 | 0 |
Annealed Generative Adversarial Networks | We introduce a novel framework for adversarial training where the target
distribution is annealed between the uniform distribution and the data
distribution. We posited a conjecture that learning under continuous annealing
in the nonparametric regime is stable irrespective of the divergence measures
in the objective function and, as a corollary, proposed an algorithm, dubbed
β-GAN. In this framework, the fact that the initial support of the
generative network is the whole ambient space combined with annealing are key
to balancing the minimax game. In our experiments on synthetic data, MNIST, and
CelebA, β-GAN with a fixed annealing schedule was stable and did not suffer
from mode collapse.
| 1 | 0 | 0 | 1 | 0 | 0 |
Attitude Control of Spacecraft Formations Subject To Distributed Communication Delays | This paper considers the problem of achieving attitude consensus in
spacecraft formations with bounded, time-varying communication delays between
spacecraft connected as specified by a strongly connected topology. A state
feedback controller is proposed and investigated using a time domain approach
(via LMIs) and a frequency domain approach (via the small-gain theorem) to
obtain delay-dependent stability criteria to achieve the desired consensus.
Simulations are presented to demonstrate the application of the strategy in a
specific scenario.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cloud Radiative Effect Study Using Sky Camera | The analysis of clouds in the earth's atmosphere is important for a variety
of applications, viz. weather reporting, climate forecasting, and solar energy
generation. In this paper, we focus our attention on the impact of cloud on the
total solar irradiance reaching the earth's surface. We use a weather station to
record the total solar irradiance. Moreover, we employ a collocated ground-based
sky camera to automatically compute the instantaneous cloud coverage. We
analyze the relationship between measured solar irradiance and computed cloud
coverage value, and conclude that higher cloud coverage greatly impacts the
total solar irradiance. Such studies will immensely help in solar energy
generation and forecasting.
| 1 | 1 | 0 | 0 | 0 | 0 |
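A small Python sketch of one common way to estimate instantaneous cloud coverage from a ground-based sky image, loosely matching the pipeline above: threshold the per-pixel red/blue ratio (clouds appear whiter, clear sky bluer) and report the cloudy fraction. The threshold and the synthetic image are assumptions; the paper's actual segmentation may differ.

    import numpy as np

    # Synthetic RGB sky image: mostly blue sky with a bright, whitish patch.
    img = np.zeros((100, 100, 3), dtype=float)
    img[..., 2] = 0.8                          # blue channel (clear sky)
    img[..., 0] = 0.3                          # red channel (clear sky)
    img[20:50, 30:70, :] = 0.9                 # a "cloud": high in all channels

    ratio = img[..., 0] / (img[..., 2] + 1e-6) # red / blue per pixel
    cloud_mask = ratio > 0.7                   # illustrative threshold
    coverage = cloud_mask.mean()
    print(f"cloud coverage: {coverage:.1%}")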
On Testing Machine Learning Programs | Nowadays, we are witnessing a wide adoption of Machine learning (ML) models
in many safety-critical systems, thanks to recent breakthroughs in deep
learning and reinforcement learning. Many people are now interacting with
systems based on ML every day, e.g., voice recognition systems used by virtual
personal assistants like Amazon Alexa or Google Home. As the field of ML
continues to grow, we are likely to witness transformative advances in a wide
range of areas, from finance, energy, to health and transportation. Given this
growing importance of ML-based systems in our daily life, it is becoming
utterly important to ensure their reliability. Recently, software researchers
have started adapting concepts from the software testing domain (e.g., code
coverage, mutation testing, or property-based testing) to help ML engineers
detect and correct faults in ML programs. This paper reviews current existing
testing practices for ML programs. First, we identify and explain challenges
that should be addressed when testing ML programs. Next, we report existing
solutions found in the literature for testing ML programs. Finally, we identify
gaps in the literature related to the testing of ML programs and make
recommendations of future research directions for the scientific community. We
hope that this comprehensive review of software testing practices will help ML
engineers identify the right approach to improve the reliability of their
ML-based systems. We also hope that the research community will act on our
proposed research directions to advance the state of the art of testing for ML
programs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Scalable and Efficient Statistical Inference with Estimating Functions in the MapReduce Paradigm for Big Data | The theory of statistical inference along with the strategy of
divide-and-conquer for large-scale data analysis has recently attracted
considerable interest due to the great popularity of the MapReduce programming
paradigm in the Apache Hadoop software framework. The central analytic task in
the development of statistical inference in the MapReduce paradigm pertains to
the method of combining results yielded from separately mapped data batches.
One seminal solution based on the confidence distribution has recently been
established in the setting of maximum likelihood estimation in the literature.
This paper concerns a more general inferential methodology based on estimating
functions, termed as the Rao-type confidence distribution, of which the maximum
likelihood is a special case. This generalization provides a unified framework
of statistical inference that allows regression analyses of massive data sets
of important types in a parallel and scalable fashion via a distributed file
system, including longitudinal data analysis, survival data analysis, and
quantile regression, which cannot be handled using the maximum likelihood
method. This paper investigates four important properties of the proposed
method: computational scalability, statistical optimality, methodological
generality, and operational robustness. In particular, the proposed method is
shown to be closely connected to Hansen's generalized method of moments (GMM)
and Crowder's optimality. An interesting theoretical finding is that the
asymptotic efficiency of the proposed Rao-type confidence distribution
estimator is always greater than or equal to that of the estimator obtained by processing
the full data once. All these properties of the proposed method are illustrated
via numerical examples in both simulation studies and real-world data analyses.
| 0 | 0 | 0 | 1 | 0 | 0 |
Instability of pulses in gradient reaction-diffusion systems: A symplectic approach | In a scalar reaction-diffusion equation, it is known that the stability of a
steady state can be determined from the Maslov index, a topological invariant
that counts the state's critical points. In particular, this implies that pulse
solutions are unstable. We extend this picture to pulses in reaction-diffusion
systems with gradient nonlinearity. In particular, we associate a Maslov index
to any asymptotically constant state, generalizing existing definitions of the
Maslov index for homoclinic orbits. It is shown that this index equals the
number of unstable eigenvalues for the linearized evolution equation. Finally,
we use a symmetry argument to show that any pulse solution must have nonzero
Maslov index, and hence be unstable.
| 0 | 0 | 1 | 0 | 0 | 0 |
Consistency of the plug-in functional predictor of the Ornstein-Uhlenbeck process in Hilbert and Banach spaces | New results on functional prediction of the Ornstein-Uhlenbeck process in an
autoregressive Hilbert-valued and Banach-valued framework are derived.
Specifically, consistency of the maximum likelihood estimator of the
autocorrelation operator, and of the associated plug-in predictor is obtained
in both frameworks.
| 0 | 0 | 1 | 1 | 0 | 0 |
Analysis of Sequence Polymorphism of LINEs and SINEs in Entamoeba histolytica | The goal of this dissertation is to study the sequence polymorphism in
retrotransposable elements of Entamoeba histolytica. Quasispecies theory, an
equilibrium (stationary) concept, has been used to understand the behaviour
of these elements. Two datasets of retrotransposons of Entamoeba histolytica
have been used. We present results from both datasets of retrotransposons
(SINE1s) of E. histolytica. We have calculated the mutation rate of EhSINE1s
for both datasets and drawn a phylogenetic tree for newly determined EhSINE1s
(dataset II). We have also discussed the variation in lengths of EhSINE1s for
both datasets. Using the quasispecies model, we have shown how sequences of
SINE1s vary within the population. The outputs of the quasispecies model are
discussed in the presence and the absence of back mutation by taking different
values of fitness. From our study of Non-long terminal repeat retrotransposons
(LINEs and their non-autonomous partner's SINEs) of Entamoeba histolytica, we
can conclude that an active EhSINE can generate very similar copies of itself
by retrotransposition. For this reason, mutations accumulate, resulting in
sequence polymorphism. We have concluded that the mutation rate
of SINE is very high. This high mutation rate provides an idea for the
existence of SINEs, which may affect the genetic analysis of EhSINE1
ancestries, and calculation of phylogenetic distances.
| 0 | 0 | 0 | 0 | 1 | 0 |
Classification of digital affine noncommutative geometries | It is known that connected translation invariant $n$-dimensional
noncommutative differentials $d x^i$ on the algebra $k[x^1,\cdots,x^n]$ of
polynomials in $n$-variables over a field $k$ are classified by commutative
algebras $V$ on the vector space spanned by the coordinates. This data also
applies to construct differentials on the Heisenberg algebra `spacetime' with
relations $[x^\mu,x^\nu]=\lambda\Theta^{\mu\nu}$ where $ \Theta$ is an
antisymmetric matrix as well as to Lie algebras with pre-Lie algebra
structures. We specialise the general theory to the field $k={\mathbb{F}}_2$
of two elements, in which case translation invariant metrics (i.e. with
constant coefficients) are equivalent to making $V$ a Frobenius algebra. We
classify all of these and their quantum Levi-Civita bimodule connections for
$n=2,3$, with partial results for $n=4$. For $n=2$ we find 3 inequivalent
differential structures admitting 1,2 and 3 invariant metrics respectively. For
$n=3$ we find 6 differential structures admitting $0,1,2,3,4,7$ invariant
metrics respectively. We give some examples for $n=4$ and general $n$.
Surprisingly, not all our geometries for $n\ge 2$ have zero quantum Riemann
curvature. Quantum gravity is normally seen as a weighted `sum' over all
possible metrics but our results are a step towards a deeper approach in which
we must also `sum' over differential structures. Over ${\mathbb{F}}_2$ we
construct some of our algebras and associated structures by digital gates,
opening up the possibility of `digital geometry'.
| 0 | 0 | 1 | 0 | 0 | 0 |
The vectorial Ribaucour transformation for submanifolds of constant sectional curvature | We obtain a reduction of the vectorial Ribaucour transformation that
preserves the class of submanifolds of constant sectional curvature of space
forms, which we call the $L$-transformation. It allows one to construct a family of
such submanifolds starting with a given one and a vector-valued solution of a
system of linear partial differential equations. We prove a decomposition
theorem for the $L$-transformation, which is a far-reaching generalization of
the classical permutability formula for the Ribaucour transformation of
surfaces of constant curvature in Euclidean three space. As a consequence, we
derive a Bianchi-cube theorem, which allows one to produce, from $k$ initial scalar
$L$-transforms of a given submanifold of constant curvature, a whole
$k$-dimensional cube all of whose remaining $2^k-(k+1)$ vertices are
submanifolds with the same constant sectional curvature given by explicit
algebraic formulae. We also obtain further reductions, as well as corresponding
decomposition and Bianchi-cube theorems, for the classes of $n$-dimensional
flat Lagrangian submanifolds of $\mathbb{C}^n$ and $n$-dimensional Lagrangian
submanifolds with constant curvature $c$ of the complex projective space
$\mathbb C\mathbb P^n(4c)$ or the complex hyperbolic space $\mathbb C\mathbb
H^n(4c)$ of complex dimension $n$ and constant holomorphic curvature $4c$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Social Networks through the Prism of Cognition | Human relations are driven by social events - people interact, exchange
information, share knowledge and emotions, or gather news from mass media.
These events leave traces in human memory. The initial strength of a trace
depends on cognitive factors such as emotions or attention span. Each trace
continuously weakens over time unless another related event activity
strengthens it. Here, we introduce a novel Cognition-driven Social Network
(CogSNet) model that accounts for cognitive aspects of social perception and
explicitly represents human memory dynamics. For validation, we apply our model
to NetSense data on social interactions among university students. The results
show that CogSNet significantly improves the quality of modeling human
interactions in social networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Metriplectic Integrators for the Landau Collision Operator | We present a novel framework for addressing the nonlinear Landau collision
integral in terms of finite element and other subspace projection methods. We
employ the underlying metriplectic structure of the Landau collision integral
and, using a Galerkin discretization for the velocity space, we transform the
infinite-dimensional system into a finite-dimensional, time-continuous
metriplectic system. Temporal discretization is accomplished using the concept
of discrete gradients. The conservation of energy, momentum, and particle
densities, as well as the production of entropy is demonstrated algebraically
for the fully discrete system. Due to the generality of our approach, the
conservation properties and the monotonic behavior of entropy are guaranteed
for finite element discretizations in general, independently of the mesh
configuration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Eliminating higher-multiplicity intersections in the metastable dimension range | The $r$-fold analogues of the Whitney trick have been `in the air' since the 1960s.
However, only in this century were they stated, proved, and applied to obtain
interesting results, most notably by Mabillard and Wagner. Here we prove and
apply a version of the $r$-fold Whitney trick when general position $r$-tuple
intersections have positive dimension.
Theorem. Assume that $D=D_1\sqcup\ldots\sqcup D_r$ is disjoint union of
$k$-dimensional disks, $rd\ge (r+1)k+3$, and $f:D\to B^d$ a proper PL (smooth)
map such that $f\partial D_1\cap\ldots\cap f\partial D_r=\emptyset$. If the map
$$f^r:\partial(D_1\times\ldots\times D_r)\to
(B^d)^r-\{(x,x,\ldots,x)\in(B^d)^r\ |\ x\in B^d\}$$ extends to
$D_1\times\ldots\times D_r$, then there is a proper PL (smooth) map $\overline
f:D\to B^d$ such that $\overline f=f$ on $\partial D$ and $\overline
fD_1\cap\ldots\cap \overline fD_r=\emptyset$.
| 1 | 0 | 1 | 0 | 0 | 0 |
Lagrangian Transport Through Surfaces in Compressible Flows | A material-based, i.e., Lagrangian, methodology for exact integration of flux
by volume-preserving flows through a surface has been developed recently in
[Karrasch, SIAM J. Appl. Math., 76 (2016), pp. 1178-1190]. In the present
paper, we first generalize this framework to general compressible flows,
thereby solving the donating region problem in full generality. Second, we
demonstrate the efficacy of this approach on a slightly idealized version of a
classic two-dimensional mixing problem: transport in a cross-channel
micromixer, as considered recently in [Balasuriya, SIAM J. Appl. Dyn. Syst., 16
(2017), pp. 1015-1044].
| 1 | 1 | 1 | 0 | 0 | 0 |