title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Binary Image Selection (BISON): Interpretable Evaluation of Visual Grounding | Providing systems with the ability to relate linguistic and visual content is one
of the hallmarks of computer vision. Tasks such as image captioning and
retrieval were designed to test this ability, but come with complex evaluation
measures that gauge various other abilities and biases simultaneously. This
paper presents an alternative evaluation task for visual-grounding systems:
given a caption, the system is asked to select the image that best matches the
caption from a pair of semantically similar images. The system's accuracy on
this Binary Image SelectiON (BISON) task is not only interpretable, but also
measures the ability to relate fine-grained text content in the caption to
visual content in the images. We gathered a BISON dataset that complements the
COCO Captions dataset and used this dataset in auxiliary evaluations of
captioning and caption-based retrieval systems. While captioning measures
suggest visual grounding systems outperform humans, BISON shows that these
systems are still far away from human performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Parallelization does not Accelerate Convex Optimization: Adaptivity Lower Bounds for Non-smooth Convex Minimization | In this paper we study the limitations of parallelization in convex
optimization. A convenient approach to study parallelization is through the
prism of \emph{adaptivity} which is an information theoretic measure of the
parallel runtime of an algorithm. Informally, adaptivity is the number of
sequential rounds an algorithm needs to make when it can execute
polynomially-many queries in parallel at every round. For combinatorial
optimization with black-box oracle access, the study of adaptivity has recently
led to exponential accelerations in parallel runtime and the natural question
is whether dramatic accelerations are achievable for convex optimization.
Our main result is a spoiler. We show that, in general, parallelization does
not accelerate convex optimization. In particular, for the problem of
minimizing a non-smooth Lipschitz and strongly convex function with black-box
oracle access we give information theoretic lower bounds that indicate that the
number of adaptive rounds of any randomized algorithm exactly matches the upper
bounds of single-query-per-round (i.e., non-parallel) algorithms.
| 0 | 0 | 0 | 1 | 0 | 0 |
Velocity dependence of point masses, moving on timelike geodesics, in weak gravitational fields | Applying the principle of equivalence, analogous to Einstein's original 1907
approach demonstrating the bending of light in a gravitational field, we deduce
that radial geodesics of point masses are velocity dependent. Then, using the
Schwarzschild solution for observers at spatial infinity, we analyze the
similar case of masses moving on timelike geodesics, rederiving a previous
result by Hilbert from 1917. We find that the Schwarzschild solution gives more
than twice the rate of falling than that found from the simpler acceleration
arguments in flat space. We note that Einstein also found a similar difference
for the bending of light between these two approaches, and in that case the
increased deflection of light was due to space curvature. Similarly, we find
that in our case, the discrepancy between the two approaches can be attributed
to space curvature. Although we have calculated the effect locally for
observers under a Schwarzschild coordinate system in a weak field, further work
needs to be carried out to explore the stronger field case.
| 0 | 1 | 0 | 0 | 0 | 0 |
Demand Response in the Smart Grid: the Impact of Consumers Temporal Preferences | In Demand Response programs, price incentives alone might not be sufficient to
modify residential consumers' load profiles. Here, we consider that each
consumer has a preferred profile and incurs a discomfort cost when deviating
from it. Consumers
can value this discomfort at a varying level that we take as a parameter. This
work analyses Demand Response as a game theoretic environment. We study the
equilibria of the game between consumers with preferences within two different
dynamic pricing mechanisms, respectively the daily proportional mechanism
introduced by Mohsenian-Rad et al., and an hourly proportional mechanism. We
give new results about equilibria as functions of the preference level in the
case of quadratic system costs and prove that, whatever the preference level,
system costs are smaller with the hourly mechanism. We simulate the Demand
Response environment using real consumption data from PecanStreet database.
While the Price of Anarchy always remains within 0.1% of one with the hourly
mechanism, it can be more than 10% larger with the daily mechanism.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distributed Nesterov gradient methods over arbitrary graphs | In this letter, we introduce a distributed Nesterov method, termed
$\mathcal{ABN}$, that does not require doubly-stochastic weight matrices.
Instead, the implementation is based on a simultaneous application of both row-
and column-stochastic weights that makes this method applicable to arbitrary
(strongly-connected) graphs. Since constructing column-stochastic weights needs
additional information (the number of outgoing neighbors at each agent), not
available in certain communication protocols, we derive a variation, termed
FROZEN, that only requires row-stochastic weights but at the expense of
additional iterations for eigenvector learning. We numerically study these
algorithms for various objective functions and network parameters and show that
the proposed distributed Nesterov methods achieve acceleration compared to the
current state-of-the-art methods for distributed optimization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Casualty Detection from 3D Point Cloud Data for Autonomous Ground Mobile Rescue Robots | One of the most important features of mobile rescue robots is the ability to
autonomously detect casualties, i.e. human bodies, which are usually lying on
the ground. This paper proposes a novel method for autonomously detecting
casualties lying on the ground using 3D point-cloud data obtained from an
on-board sensor, such as an RGB-D camera or a 3D LIDAR, on a mobile rescue
robot. In this method, the obtained 3D point-cloud data is projected onto the
detected ground plane, i.e. floor, within the point cloud. Then, this projected
point cloud is converted into a grid-map that is used afterwards as an input
for the algorithm to detect human body shapes. The proposed method is evaluated
by performing detection of a human dummy, placed in different random positions
and orientations, using an on-board RGB-D camera on a mobile rescue robot
called ResQbot. To evaluate the robustness of the casualty detection method,
the on-board camera is set to several different viewing angles. The
experimental results show that, using the point-cloud data from the
on-board RGB-D camera, the proposed method successfully detects the casualty in
all tested body positions and orientations relative to the on-board camera, as
well as in all tested camera angles.
| 1 | 0 | 0 | 0 | 0 | 0 |
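The projection-to-grid-map step this abstract describes can be sketched with numpy. This is an illustrative reimplementation, not the authors' code: the cell size, map extent, and the assumption that the detected ground plane is z = 0 in a floor-aligned frame are all hypothetical choices.

```python
import numpy as np

def pointcloud_to_gridmap(points, cell=0.05, extent=5.0):
    """Project 3D points onto the ground plane (assumed to be z = 0 in a
    frame aligned with the detected floor) and bin x, y into a 2D grid-map."""
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)
    idx = np.floor((points[:, :2] + extent) / cell).astype(int)  # drop z, quantize x, y
    keep = np.all((idx >= 0) & (idx < n), axis=1)                # discard points outside the map
    grid[idx[keep, 0], idx[keep, 1]] = 1
    return grid

# Two points above the floor plus one outside the mapped area
points = np.array([[0.0, 0.0, 1.2], [1.0, -1.0, 0.4], [10.0, 0.0, 0.5]])
grid = pointcloud_to_gridmap(points)
```

The resulting binary grid is what a downstream shape detector would consume.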
Representations of superconformal algebras and mock theta functions | It is well known that the normalized characters of integrable highest weight
modules of given level over an affine Lie algebra $\hat{\frak{g}}$ span an
$SL_2(\mathbf{Z})$-invariant space. This result extends to admissible
$\hat{\frak{g}}$-modules, where $\frak{g}$ is a simple Lie algebra or
$osp_{1|n}$. Applying the quantum Hamiltonian reduction (QHR) to admissible
$\hat{\frak{g}}$-modules when $\frak{g} =sl_2$ (resp. $=osp_{1|2}$) one obtains
minimal series modules over the Virasoro (resp. $N=1$ superconformal algebras),
which form modular invariant families.
Another instance of modular invariance occurs for boundary level admissible
modules, including when $\frak{g}$ is a basic Lie superalgebra. For example, if
$\frak{g}=sl_{2|1}$ (resp. $=osp_{3|2}$), we thus obtain modular invariant
families of $\hat{\frak{g}}$-modules, whose QHR produces the minimal series
modules for the $N=2$ superconformal algebras (resp. a modular invariant family
of $N=3$ superconformal algebra modules).
However, in the case when $\frak{g}$ is a basic Lie superalgebra different
from a simple Lie algebra or $osp_{1|n}$, modular invariance of normalized
supercharacters of admissible $\hat{\frak{g}}$-modules holds outside of
boundary levels only after their modification in the spirit of Zwegers'
modification of mock theta functions. Applying the QHR, we obtain families of
representations of $N=2,3,4$ and big $N=4$ superconformal algebras, whose
modified (super)characters span an $SL_2(\mathbf{Z})$-invariant space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Anchored Network Users: Stochastic Evolutionary Dynamics of Cognitive Radio Network Selection | To solve the spectrum scarcity problem, the cognitive radio technology
involves licensed users and unlicensed users. A fundamental issue for the
network users is whether it is better to act as a licensed user by using a
primary network or an unlicensed user by using a secondary network. To model
the network selection process by the users, the deterministic replicator
dynamics is often used, but in a less practical way that it requires each user
to know global information on the network state for reaching a Nash
equilibrium. This paper addresses the network selection process in a more
practical way such that only noise-prone estimation of local information is
required and, yet, it obtains an efficient system performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the rates of convergence of Parallelized Averaged Stochastic Gradient Algorithms | The growing interest in high-dimensional and functional data analysis has led,
over the last decade, to the development of a substantial body of techniques.
Parallelized algorithms, which distribute the data across different machines,
are, for example, a good way to deal with large samples taking values in
high-dimensional spaces. We introduce here a parallelized averaged stochastic
gradient algorithm, which processes the data efficiently and recursively,
without requiring the distribution of the data across the machines to be
uniform. The rate of convergence
in quadratic mean as well as the asymptotic normality of the parallelized
estimates are given, for strongly and locally strongly convex objectives.
| 0 | 0 | 1 | 1 | 0 | 0 |
On Convergence of Extended Dynamic Mode Decomposition to the Koopman Operator | Extended Dynamic Mode Decomposition (EDMD) is an algorithm that approximates
the action of the Koopman operator on an $N$-dimensional subspace of the space
of observables by sampling at $M$ points in the state space. Assuming that the
samples are drawn either independently or ergodically from some measure $\mu$,
it was shown that, in the limit as $M\rightarrow\infty$, the EDMD operator
$\mathcal{K}_{N,M}$ converges to $\mathcal{K}_N$, where $\mathcal{K}_N$ is the
$L_2(\mu)$-orthogonal projection of the action of the Koopman operator on the
finite-dimensional subspace of observables. In this work, we show that, as $N
\rightarrow \infty$, the operator $\mathcal{K}_N$ converges in the strong
operator topology to the Koopman operator. This in particular implies
convergence of the predictions of future values of a given observable over any
finite time horizon, a fact important for practical applications such as
forecasting, estimation and control. In addition, we show that accumulation
points of the spectra of $\mathcal{K}_N$ correspond to the eigenvalues of the
Koopman operator with the associated eigenfunctions converging weakly to an
eigenfunction of the Koopman operator, provided that the weak limit of
eigenfunctions is nonzero. As a by-product, we propose an analytic version of
the EDMD algorithm which, under some assumptions, allows one to construct
$\mathcal{K}_N$ directly, without the use of sampling. Finally, under
additional assumptions, we analyze convergence of $\mathcal{K}_{N,N}$ (i.e.,
$M=N$), proving convergence, along a subsequence, to weak eigenfunctions (or
eigendistributions) related to the eigenmeasures of the Perron-Frobenius
operator. No assumptions on the observables belonging to a finite-dimensional
invariant subspace of the Koopman operator are required throughout.
| 0 | 0 | 1 | 0 | 0 | 0 |
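The finite-data EDMD operator this abstract starts from can be sketched numerically as a least-squares problem. This is a minimal illustration, not the paper's analytic version: the monomial dictionary and the 1-D linear test map are assumptions chosen so the exact answer is known.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Compute the EDMD matrix K_{N,M}: the least-squares solution of
    Psi(X) K = Psi(Y), where rows of X, Y are paired samples y_i = T(x_i)."""
    PX, PY = dictionary(X), dictionary(Y)
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)  # lstsq handles rank deficiency
    return K

# Illustration: 1-D linear map T(x) = 0.5 x with monomial dictionary {1, x, x^2}.
# The exact Koopman action on this invariant subspace gives K = diag(1, 0.5, 0.25).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
Y = 0.5 * X
monomials = lambda Z: np.hstack([np.ones_like(Z), Z, Z**2])
K = edmd(X, Y, monomials)
```

Column j of `K` holds the dictionary coordinates of the j-th observable composed with the dynamics.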
Random problems with R | R (Version 3.5.1 patched) has an issue with its random sampling
functionality. R generates random integers between $1$ and $m$ by multiplying
random floats by $m$, taking the floor, and adding $1$ to the result.
Well-known quantization effects in this approach result in a non-uniform
distribution on $\{ 1, \ldots, m\}$. The difference, which depends on $m$, can
be substantial. Because the sample function in R relies on generating random
integers, random sampling in R is biased. There is an easy fix: construct
random integers directly from random bits, rather than multiplying a random
float by $m$. That is the strategy taken in Python's numpy.random.randint()
function, among others. Example source code in Python is available at
this https URL
(see functions getrandbits() and randbelow_from_randbits()).
| 0 | 0 | 0 | 1 | 0 | 0 |
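The fix this abstract recommends, building random integers directly from random bits with rejection rather than scaling a float by $m$, can be sketched in Python. The function names echo those mentioned in the abstract, but this is an illustrative reimplementation, not the cited source code.

```python
import random

def randbelow_from_randbits(m, rng=random):
    """Uniform integer on {0, ..., m-1}: draw k = m.bit_length() random bits
    and reject draws >= m, avoiding float-quantization bias."""
    if m <= 0:
        raise ValueError("m must be positive")
    k = m.bit_length()
    r = rng.getrandbits(k)      # uniform on {0, ..., 2**k - 1}
    while r >= m:               # rejection keeps the result exactly uniform
        r = rng.getrandbits(k)
    return r

def rand_int_1_to_m(m, rng=random):
    """Uniform integer on {1, ..., m}, the range discussed in the abstract."""
    return randbelow_from_randbits(m, rng) + 1
```

Rejection discards at most half the draws on average (since 2**k < 2m), so the expected number of bit draws per integer stays below two.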
Quasinormal modes as a distinguisher between general relativity and f(R) gravity | Quasi-Normal Modes (QNM) or ringdown phase of gravitational waves provide
critical information about the structure of compact objects like Black Holes.
Thus, QNMs can be a tool to test General Relativity (GR) and possible
deviations from it. In the case of GR, it has long been known that a
relation between the two types of black hole perturbations, scalar (Zerilli)
and vector (Regge-Wheeler), leads to an equal share of emitted gravitational
energy. With the direct detection of gravitational waves, it is now natural to
ask whether the same relation (between scalar and vector perturbations) holds
for modified gravity theories and, if not, whether one can use this as a way to
probe deviations from General Relativity. As a first step, we show explicitly
that the above relation between the Regge-Wheeler and Zerilli perturbations
breaks down for a general f(R) model, and hence the two perturbations do not
share equal amounts of emitted gravitational energy. We discuss the
implication of this imbalance
on observations and the no-hair conjecture.
| 0 | 1 | 0 | 0 | 0 | 0 |
Polymorphism and the obstinate circularity of second order logic: a victims' tale | The investigations on higher-order type theories and on the related notion of
parametric polymorphism constitute the technical counterpart of the old
foundational problem of the circularity (or impredicativity) of second and
higher order logic. However, the epistemological significance of such
investigations, and of their often non-trivial results, has not received much
attention in the contemporary foundational debate. The results recalled in this
paper suggest that the question of the circularity of second order logic cannot
be reduced to the simple assessment of a vicious circle. Through a comparison
between the faulty consistency arguments given by Frege and Martin-Löf,
respectively for the logical system of the Grundgesetze (shown inconsistent by
Russell's paradox) and for the intuitionistic type theory with a type of all
types (shown inconsistent by Girard's paradox), and the normalization argument
for second order type theory (or System F), we indicate a number of subtle
mathematical problems and logical concepts hidden behind the hazardous idea of
impredicative quantification, constituting a vast (and largely unexplored)
domain for foundational research.
| 1 | 0 | 1 | 0 | 0 | 0 |
Bayesian significance test for discriminating between survival distributions | An evaluation of FBST, Fully Bayesian Significance Test, restricted to
survival models is the main objective of the present paper. A survival
distribution is to be chosen among three celebrated ones: lognormal, gamma,
and Weibull. For this discrimination, a linear mixture of the three
distributions, for which the mixture weights are defined by a Dirichlet
distribution of order three, is an important tool: the FBST is used to test the
hypotheses defined on the mixture weights space. Another feature of the paper
is that all three distributions are reparametrized so that all six
parameters - two for each distribution - are written as functions of the mean
and the variance of the population being studied. Note that the three
distributions share the same two parameters in the mixture model. The mixture
density has then four parameters, the same two for the three discriminating
densities and two for the mixture weights. Some numerical results from
simulations with some right-censored data are considered. The
lognormal-gamma-Weibull model is also applied to a real study with a dataset
composed of survival times of patients in the end stage of chronic kidney
failure subjected to hemodialysis procedures; the data come from Rio de
Janeiro hospitals. The posterior density of the weights indicates an ordering of
the mixture weights and the FBST is used for discriminating between the three
survival distributions.
Keywords: Model choice; Separate Models; Survival distributions; Mixture
model; Significance test; FBST
| 0 | 0 | 0 | 1 | 0 | 0 |
A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation using Points and Lines | We propose a minimal solution for pose estimation using both points and lines
for a multi-perspective camera. In this paper, we treat the multi-perspective
camera as a collection of rigidly attached perspective cameras. These types of
imaging devices are useful for several computer vision applications that
require a large coverage such as surveillance, self-driving cars, and
motion-capture studios. While prior methods have considered the cases using
solely points or lines, the hybrid case involving both points and lines has not
been solved for multi-perspective cameras. We present the solutions for two
cases. In the first case, we are given 2D to 3D correspondences for two points
and one line. In the latter case, we are given 2D to 3D correspondences for one
point and two lines. We show that the solution for the case of two points and
one line can be formulated as a fourth degree equation. This is interesting
because we can get a closed-form solution and thereby achieve high
computational efficiency. The latter case, involving two lines and one point, can
be mapped to an eighth degree equation. We show simulations and real
experiments to demonstrate the advantages and benefits over existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning K-way D-dimensional Discrete Code For Compact Embedding Representations | Embedding methods such as word embedding have become pillars for many
applications containing discrete structures. Conventional embedding methods
directly associate each symbol with a continuous embedding vector, which is
equivalent to applying linear transformation based on "one-hot" encoding of the
discrete symbols. Despite its simplicity, such an approach yields a number of
parameters that grows linearly with the vocabulary size and can lead to
overfitting. In this work we propose a much more compact K-way D-dimensional
discrete encoding scheme to replace the "one-hot" encoding. In "KD encoding",
each symbol is represented by a $D$-dimensional code, each dimension of which
has cardinality $K$. The final symbol embedding vector can be generated by
composing the code embedding vectors. To learn the semantically meaningful
code, we derive a relaxed discrete optimization technique based on stochastic
gradient descent. By adopting the new coding system, the efficiency of
parameterization can be significantly improved (from linear to logarithmic),
and this can also mitigate the over-fitting problem. In our experiments with
language modeling, the number of embedding parameters can be reduced by 97\%
while achieving similar or better performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
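The linear-to-logarithmic parameter-count argument in this abstract can be illustrated with a small back-of-envelope calculation. The vocabulary size, $K$, and embedding width below are assumed values for illustration, not the paper's experimental settings, and the simple `K * D * embed_dim` count assumes one embedding table per code position.

```python
import math

def kd_encoding_params(vocab_size, K, embed_dim):
    """Compare parameter counts: a conventional embedding table vs. a K-way
    D-dimensional code, with D the smallest code length s.t. K**D >= vocab_size."""
    D = math.ceil(math.log(vocab_size, K))
    dense_params = vocab_size * embed_dim   # grows linearly in vocabulary size
    kd_params = K * D * embed_dim           # grows logarithmically in vocabulary size
    return D, dense_params, kd_params

D, dense, kd = kd_encoding_params(vocab_size=100_000, K=16, embed_dim=300)
```

With these assumed sizes the code-based table is orders of magnitude smaller, consistent in spirit with the large reductions the abstract reports.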
Attention based convolutional neural network for predicting RNA-protein binding sites | RNA-binding proteins (RBPs) play crucial roles in many biological processes,
e.g. gene regulation. Computational identification of RBP binding sites on RNAs
is urgently needed. In particular, RBPs bind to RNAs by recognizing sequence
motifs. Thus, quickly locating those motifs on RNA sequences is crucial for
determining whether the RNAs interact with the RBPs or not.
In this study, we present an attention based convolutional neural network,
iDeepA, to predict RNA-protein binding sites from raw RNA sequences. We first
encode RNA sequences using one-hot encoding. Next, we design a deep learning
model with a convolutional neural network (CNN) and an attention mechanism,
which automatically searches for important positions, e.g. binding motifs, to
learn discriminant high-level features for predicting RBP binding sites. We
evaluate iDeepA on publicly available gold-standard RBP binding sites derived
from CLIP-seq data. The results demonstrate that iDeepA achieves comparable
performance
with other state-of-the-art methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Ramp Reversal Memory and Phase-Boundary Scarring in Transition Metal Oxides | Transition metal oxides (TMOs) are complex electronic systems which exhibit a
multitude of collective phenomena. Two archetypal examples are VO2 and NdNiO3,
which undergo a metal-insulator phase-transition (MIT), the origin of which is
still under debate. Here we report the discovery of a memory effect in both
systems, manifest through an increase of resistance at a specific temperature,
which is set by reversing the temperature-ramp from heating to cooling during
the MIT. The characteristics of this ramp-reversal memory effect do not
coincide with any previously reported history or memory effects in manganites,
electron-glass or magnetic systems. From a broad range of experimental
features, supported by theoretical modelling, we find that the main ingredients
for the effect to arise are the spatial phase-separation of metallic and
insulating regions during the MIT and the coupling of lattice strain to the
local critical temperature of the phase transition. We conclude that the
emergent memory effect originates from phase boundaries at the
reversal-temperature leaving 'scars' in the underlying lattice structure,
giving rise to a local increase in the transition temperature. The universality
and robustness of the effect shed new light on the MIT in complex oxides.
| 0 | 1 | 0 | 0 | 0 | 0 |
A multi-channel approach for automatic microseismic event localization using RANSAC-based arrival time event clustering(RATEC) | In the presence of background noise and interference, arrival times picked
from a surface microseismic data set usually include a number of false picks
which lead to uncertainty in location estimation. To eliminate false picks and
improve the accuracy of location estimates, we develop a classification
algorithm (RATEC) that clusters picked arrival times into event groups based on
random sampling and fitting moveout curves that approximate hyperbolas. Arrival
times far from the fitted hyperbolas are classified as false picks and removed
from the data set prior to location estimation. Simulations of synthetic data
for a 1-D linear array show that RATEC is robust under different noise
conditions and generally applicable to various types of media. By generalizing
the underlying moveout model, RATEC is extended to the case of a 2-D surface
monitoring array. The effectiveness of event location for the 2-D case is
demonstrated using a data set collected by a 5200-element dense 2-D array
deployed for microearthquake monitoring.
| 1 | 1 | 0 | 0 | 0 | 0 |
Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms | Currently, diagnosis of skin diseases is based primarily on visual pattern
recognition skills and expertise of the physician observing the lesion. Even
though dermatologists are trained to recognize patterns of morphology, it is
still a subjective visual assessment. Tools for automated pattern recognition
can provide objective information to support clinical decision-making.
Noninvasive skin imaging techniques provide complementary information to the
clinician. In recent years, optical coherence tomography has become a powerful
skin imaging technique. According to specific functional needs, skin
architecture varies across different parts of the body, as do the textural
characteristics in OCT images. There is, therefore, a critical need to
systematically analyze OCT images from different body sites, to identify their
significant qualitative and quantitative differences. Sixty-three optical and
textural features extracted from OCT images of healthy and diseased skin are
analyzed and, in conjunction with decision-theoretic approaches, used to create
computational models of the diseases. We demonstrate that these models provide
objective information to the clinician to assist in the diagnosis of
abnormalities of cutaneous microstructure, and hence, aid in the determination
of treatment. Specifically, we demonstrate the performance of this methodology
on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC)
from healthy tissue.
| 0 | 1 | 0 | 0 | 0 | 0 |
Batched Large-scale Bayesian Optimization in High-dimensional Spaces | Bayesian optimization (BO) has become an effective approach for black-box
function optimization problems when function evaluations are expensive and the
optimum can be achieved within a relatively small number of queries. However,
many cases, such as the ones with high-dimensional inputs, may require a much
larger number of observations for optimization. Despite an abundance of
observations thanks to parallel experiments, current BO techniques have been
limited to merely a few thousand observations. In this paper, we propose
ensemble Bayesian optimization (EBO) to address three current challenges in BO
simultaneously: (1) large-scale observations; (2) high dimensional input
spaces; and (3) selections of batch queries that balance quality and diversity.
The key idea of EBO is to operate on an ensemble of additive Gaussian process
models, each of which possesses a randomized strategy to divide and conquer. We
show unprecedented, previously impossible results of scaling up BO to tens of
thousands of observations within minutes of computation.
| 1 | 0 | 1 | 1 | 0 | 0 |
Will a Large Economy Be Stable? | We study networks of firms with Leontief production functions. Relying on
results from Random Matrix Theory, we argue that such networks generically
become unstable when their size increases, or when the heterogeneity in
productivities/connectivities becomes too strong. At marginal stability and for
large heterogeneities, we find that the distribution of firm sizes develops a
power-law tail, as observed empirically. Crises can be triggered by small
idiosyncratic shocks, which lead to "avalanches" of defaults characterized by a
power-law distribution of total output losses. We conjecture that evolutionary
and behavioural forces conspire to keep the economy close to marginal
stability. This scenario would naturally explain the well-known "small shocks,
large business cycles" puzzle, as anticipated long ago by Bak, Chen, Scheinkman
and Woodford.
| 0 | 0 | 0 | 0 | 0 | 1 |
Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization | In this paper, we study the stochastic gradient descent (SGD) method for the
nonconvex nonsmooth optimization, and propose an accelerated SGD method by
combining the variance reduction technique with Nesterov's extrapolation
technique. Moreover, based on the local error bound condition, we establish the
linear convergence of our method to obtain a stationary point of the nonconvex
optimization. In particular, we prove that not only does the generated
sequence converge linearly to a stationary point of the problem, but the
corresponding sequence of objective values also converges linearly. Finally,
some numerical experiments demonstrate the effectiveness of our method. To the
best of our knowledge, this is the first proof that an accelerated SGD method
converges linearly to a local minimum of a nonconvex optimization problem.
| 0 | 0 | 1 | 1 | 0 | 0 |
TRINITY: Coordinated Performance, Energy and Temperature Management in 3D Processor-Memory Stacks | The consistent demand for better performance has led to innovations at the
hardware and microarchitectural levels. 3D stacking of memory and logic dies
delivers an order-of-magnitude improvement in available memory bandwidth. The
price paid, however, is tight thermal constraints.
In this paper, we study the complex multiphysics interactions between
performance, energy and temperature. Using a cache coherent multicore processor
cycle level simulator coupled with power and thermal estimation tools, we
investigate the interactions between (a) thermal behaviors (b) compute and
memory microarchitecture and (c) application workloads. The key insights from
this exploration reveal the need to manage performance, energy and temperature
in a coordinated fashion. Furthermore, we identify the concept of "effective
heat capacity", i.e. the heat generated beyond which no further gains in
performance are observed with increases in the voltage-frequency of the compute
logic. Subsequently, a real-time, numerical-optimization-based,
application-agnostic controller (TRINITY) is developed which intelligently
manages the
three parameters of interest. We observe up to $30\%$ improvement in Energy
Delay$^2$ Product and up to $8$ Kelvin lower core temperatures as compared to
fixed frequencies. Compared to the \texttt{ondemand} Linux CPU DVFS governor,
for similar energy efficiency, TRINITY keeps the cores cooler by $6$ Kelvin
which increases the lifetime reliability by up to 59\%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Covariate Shift Prediction with General Losses and Feature Views | Covariate shift relaxes the widely-employed independent and identically
distributed (IID) assumption by allowing different training and testing input
distributions. Unfortunately, common methods for addressing covariate shift by
trying to remove the bias between training and testing distributions using
importance weighting often provide poor performance guarantees in theory and
unreliable predictions with high variance in practice. Recently developed
methods that construct a predictor that is inherently robust to the
difficulties of learning under covariate shift are restricted to minimizing
logloss and can be too conservative when faced with high-dimensional learning
tasks. We address these limitations in two ways: by robustly minimizing various
loss functions, including non-convex ones, under the testing distribution; and
by separately shaping the influence of covariate shift according to different
feature-based views of the relationship between input variables and example
labels. These generalizations make robust covariate shift prediction applicable
to more task scenarios. We demonstrate the benefits on classification under
covariate shift tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
On the Ergodic Control of Ensembles | Across smart-grid and smart-city applications, there are problems where an
ensemble of agents is to be controlled such that both the aggregate behaviour
and individual-level perception of the system's performance are acceptable. In
many applications, traditional PI control is used to regulate aggregate
ensemble performance. Our principal contribution in this note is to demonstrate
that PI control may not be always suitable for this purpose, and in some
situations may lead to a loss of ergodicity for closed-loop systems. Building
on this observation, a theoretical framework is proposed to both analyse and
design control systems for the regulation of large scale ensembles of agents
with a probabilistic intent. Examples are given to illustrate our results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Relative merits of Phononics vs. Plasmonics: the energy balance approach | The common feature of various plasmonic schemes is their ability to confine
optical fields of surface plasmon polaritons (SPPs) into sub-wavelength volumes
and thus achieve a large enhancement of linear and nonlinear optical
properties. This ability, however, is severely limited by the large ohmic loss
inherent to even the best of metals. However, in the mid- and far-infrared
ranges of the spectrum there exists a viable alternative to metals: polar
dielectrics and semiconductors, in which the real part of the dielectric
permittivity turns negative in the Reststrahlen region. This feature engenders the so-called
surface phonon polaritons (SPhPs) capable of confining the field in a way akin
to their plasmonic analogues, the SPPs. Since the damping rate of polar phonons
is substantially less than that of free electrons, it is not unreasonable to
expect that phononic devices may outperform their plasmonic counterparts. Yet a
more rigorous analysis of the comparative merits of phononics and plasmonics
reveals a more nuanced answer, namely that while phononic schemes do exhibit
narrower resonances and can achieve a very high degree of energy concentration,
most of the energy is contained in the form of lattice vibrations so that
enhancement of the electric field, and hence the Purcell factor, is rather
small compared to what can be achieved with metal nanoantennas. Still, the
sheer narrowness of phononic resonances is expected to make phononics viable in
applications where frequency selectivity is important.
| 0 | 1 | 0 | 0 | 0 | 0 |
Memory footprint reduction for the FFT-based volume integral equation method via tensor decompositions | We present a method of memory footprint reduction for FFT-based
electromagnetic (EM) volume integral equation (VIE) formulations. The arising
Green's function tensors have low multilinear rank, which allows Tucker
decomposition to be employed for their compression, thereby greatly reducing
the required memory storage for numerical simulations. Consequently, the
compressed components are able to fit inside a graphical processing unit (GPU)
on which highly parallelized computations can vastly accelerate the iterative
solution of the arising linear system. In addition, the element-wise products
throughout the iterative solver's process require additional flops; thus, we
provide a variety of novel and efficient methods that maintain the linear
complexity of the classic element-wise product with only an additional small
multiplicative constant. We demonstrate the utility of our approach via
its application to VIE simulations for the Magnetic Resonance Imaging (MRI) of
a human head. For these simulations we report an order of magnitude
acceleration over standard techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
On a property of the nodal set of least energy sign-changing solutions for quasilinear elliptic equations | In this note we prove the Payne-type conjecture about the behaviour of the
nodal set of least energy sign-changing solutions for the equation $-\Delta_p u
= f(u)$ in bounded Steiner symmetric domains $\Omega \subset \mathbb{R}^N$
under the zero Dirichlet boundary conditions. The nonlinearity $f$ is assumed
to be either superlinear or resonant. In the latter case, least energy
sign-changing solutions are second eigenfunctions of the zero Dirichlet
$p$-Laplacian in $\Omega$. We show that the nodal set of any least energy
sign-changing solution intersects the boundary of $\Omega$. The proof is based
on a moving polarization argument.
| 0 | 0 | 1 | 0 | 0 | 0 |
Smooth contractible threefolds with hyperbolic $\mathbb{G}_{m}$-actions via ps-divisors | The aim of this note is to give an alternative proof of a theorem of Koras
and Russell, that is, a characterization of smooth contractible affine
varieties endowed with a hyperbolic action of the group
$\mathbb{G}_{m}\simeq\mathbb{C}^{\text{*}}$, using the language of polyhedral
divisors developed by Altmann and Hausen as a generalization of
$\mathbb{Q}$-divisors.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ergodic Theorems for Nonconventional Arrays and an Extension of the Szemeredi Theorem | The paper is primarily concerned with the asymptotic behavior as $N\to\infty$
of averages of nonconventional arrays having the form
$N^{-1}\sum_{n=1}^N\prod_{j=1}^\ell T^{P_j(n,N)}f_j$ where $f_j$'s are bounded
measurable functions, $T$ is an invertible measure preserving transformation
and $P_j$'s are polynomials of $n$ and $N$ taking on integer values on
integers. It turns out that when $T$ is weakly mixing and $P_j(n,N)=p_jn+q_jN$
are linear or, more generally, have the form $P_j(n,N)=P_j(n)+Q_j(N)$ for some
integer valued polynomials $P_j$ and $Q_j$ then the above averages converge in
$L^2$ but for general polynomials $P_j$ the $L^2$ convergence can be ensured
even in the case $\ell=1$ only when $T$ is strongly mixing. Studying also
weakly mixing and compact extensions and relying on Furstenberg's structure
theorem we derive an extension of Szemer\' edi's theorem saying that for any
subset of integers $\Lambda$ with positive upper density there exists a subset
$\mathcal N_\Lambda$ of positive integers having uniformly bounded gaps such
that for $N\in\mathcal N_\Lambda$ and at least $\varepsilon N,\,\varepsilon>0$
of $n$'s all numbers $p_jn+q_jN,\, j=1,...,\ell$ belong to $\Lambda$. We obtain
also a version of these results for several commuting transformations which
yields a corresponding extension of the multidimensional Szemer\' edi theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bellman Gradient Iteration for Inverse Reinforcement Learning | This paper develops an inverse reinforcement learning algorithm aimed at
recovering a reward function from the observed actions of an agent. We
introduce a strategy to flexibly handle different types of actions with two
approximations of the Bellman Optimality Equation, and a Bellman Gradient
Iteration method to compute the gradient of the Q-value with respect to the
reward function. These methods allow us to build a differentiable relation
between the Q-value and the reward function and learn an approximately optimal
reward function with gradient methods. We test the proposed method in two
simulated environments by evaluating the accuracy of different approximations
and comparing the proposed method with existing solutions. The results show
that even with a linear reward function, the proposed method has a comparable
accuracy with the state-of-the-art method adopting a non-linear reward
function, and the proposed method is more flexible because it is defined on
observed actions instead of trajectories.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data | We consider the question of accurately and efficiently computing low-rank
matrix or tensor factorizations given data compressed via random projections.
This problem arises naturally in the many settings in which data is acquired
via compressive sensing. We examine the approach of first performing
factorization in the compressed domain, and then reconstructing the original
high-dimensional factors from the recovered (compressed) factors. In both the
tensor and matrix settings, we establish conditions under which this natural
approach will provably recover the original factors. We support these
theoretical results with experiments on synthetic data and demonstrate the
practical applicability of our methods on real-world gene expression and EEG
time series data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quantum gravity corrections to the thermodynamics and phase transition of Schwarzschild black hole | In this work, we derive a new kind of rainbow functions, which contains a
generalized uncertainty principle parameter. Then, we investigate the modified
thermodynamic quantities and phase transition of rainbow Schwarzschild black
hole by employing this new kind of rainbow functions. Our results demonstrate
that rainbow gravity and the generalized uncertainty principle have a great
effect on the picture of Hawking radiation: they prevent black holes from
total evaporation and leave a remnant. In addition, after analyzing the
modified local thermodynamic quantities, we find that the effects of rainbow
gravity and the generalized uncertainty principle lead to one first-order phase transition,
two second-order phase transitions, and two Hawking-Page-type phase transitions
in the thermodynamic system of rainbow Schwarzschild black hole.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-abelian reciprocity laws and higher Brauer-Manin obstructions | We reinterpret Kim's non-abelian reciprocity maps for algebraic varieties as
obstruction towers of mapping spaces of etale homotopy types, removing
technical hypotheses such as global basepoints and cohomological constraints.
We then extend the theory by considering alternative natural series of
extensions, one of which gives an obstruction tower whose first stage is the
Brauer--Manin obstruction, allowing us to determine when Kim's maps recover the
Brauer-Manin locus. A tower based on relative completions yields non-trivial
reciprocity maps even for Shimura varieties; for the stacky modular curve,
these take values in Galois cohomology of modular forms, and give obstructions
to an adelic elliptic curve with global Tate module underlying a global
elliptic curve.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning with Bounded Instance- and Label-dependent Label Noise | Instance- and label-dependent label noise (ILN) widely exists in
real-world datasets but has rarely been studied. In this paper, we focus on a
particular case of ILN where the label noise rates, representing the
probabilities that the true labels of examples flip into the corrupted labels,
have upper bounds. We propose to handle this bounded instance- and
label-dependent label noise under two different conditions. First,
theoretically, we prove that when the class-conditional distributions $P(X|Y=+1)$ and
$P(X|Y=-1)$ have non-overlapping supports, we can recover every noisy example's
true label and perform supervised learning directly on the cleansed examples.
Second, for the overlapping situation, we propose a novel approach to learn a
well-performing classifier which needs only a few noisy examples to be labeled
manually. Experimental results demonstrate that our method works well on both
synthetic and real-world datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
Community Detection in Hypergraphs, Spiked Tensor Models, and Sum-of-Squares | We study the problem of community detection in hypergraphs under a stochastic
block model. Similarly to how the stochastic block model in graphs suggests
studying spiked random matrices, our model motivates investigating statistical
and computational limits of exact recovery in a certain spiked tensor model. In
contrast with the matrix case, the spiked model naturally arising from
community detection in hypergraphs is different from the one arising in the
so-called tensor Principal Component Analysis model. We investigate the
effectiveness of algorithms in the Sum-of-Squares hierarchy on these models.
Interestingly, our results suggest that these two apparently similar models
exhibit significantly different computational-to-statistical gaps.
| 1 | 0 | 1 | 1 | 0 | 0 |
Presentations of the saturated cluster modular groups of finite mutation type $X_6$ and $X_7$ | We give finite presentations of the saturated cluster modular groups of type
$X_6$ and $X_7$. We compute the first homology of these groups and conclude
that they are different from Artin-Tits braid groups and mapping class groups
of surfaces. We verify that the cluster modular group of type $X_7$ is
generated by cluster Dehn twists. Further we discuss several relations between
these cluster modular groups and the mapping class group of an annulus.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantum Query Algorithms are Completely Bounded Forms | We prove a characterization of $t$-query quantum algorithms in terms of the
unit ball of a space of degree-$2t$ polynomials. Based on this, we obtain a
refined notion of approximate polynomial degree that equals the quantum query
complexity, answering a question of Aaronson et al. (CCC'16). Our proof is
based on a fundamental result of Christensen and Sinclair (J. Funct. Anal.,
1987) that generalizes the well-known Stinespring representation for quantum
channels to multilinear forms. Using our characterization, we show that many
polynomials of degree four are far from those coming from two-query quantum
algorithms. We also give a simple and short proof of one of the results of
Aaronson et al. showing an equivalence between one-query quantum algorithms and
bounded quadratic polynomials.
| 1 | 0 | 1 | 0 | 0 | 0 |
Massively-Parallel Feature Selection for Big Data | We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for
feature selection (FS) in Big Data settings (high dimensionality and/or sample
size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix
both in terms of rows (samples, training examples) as well as columns
(features). By employing the concepts of $p$-values of conditional independence
tests and meta-analysis techniques, PFBP manages to rely only on computations
local to a partition while minimizing communication costs. Then, it employs
powerful and safe (asymptotically sound) heuristics to make early, approximate
decisions, such as Early Dropping of features from consideration in subsequent
iterations, Early Stopping of consideration of features within the same
iteration, or Early Return of the winner in each iteration. PFBP provides
asymptotic guarantees of optimality for data distributions faithfully
representable by a causal network (Bayesian network or maximal ancestral
graph). Our empirical analysis confirms a super-linear speedup of the algorithm
with increasing sample size, linear scalability with respect to the number of
features and processing cores, while dominating other competitive algorithms in
its class.
| 1 | 0 | 0 | 1 | 0 | 0 |
Gravitational Wave Sources from Pop III Stars are Preferentially Located within the Cores of their Host Galaxies | The detection of gravitational waves (GWs) generated by merging black holes
has recently opened up a new observational window into the Universe. The mass
of the black holes in the first and third LIGO detections, ($36-29 \,
\mathrm{M_{\odot}}$ and $32-19 \, \mathrm{M_{\odot}}$), suggests
low-metallicity stars as their most likely progenitors. Based on
high-resolution N-body simulations, coupled with state-of-the-art metal
enrichment models, we find that the remnants of Pop III stars are
preferentially located within the cores of galaxies. The probability of a GW
signal to be generated by Pop III stars reaches $\sim 90\%$ at $\sim 0.5 \,
\mathrm{kpc}$ from the galaxy center, compared to a benchmark value of $\sim
5\%$ outside the core. The predicted merger rate inside bulges is $\sim 60
\times \beta_{III} \, \mathrm{Gpc^{-3} \, yr^{-1}}$ ($\beta_{III}$ is the Pop
III binarity fraction). To match the $90\%$ credible range of LIGO merger
rates, we obtain: $0.03 < \beta_{III} < 0.88$. Future advances in GW
observatories and the discovery of possible electromagnetic counterparts could
allow the localization of such sources within their host galaxies. The
preferential concentration of GW events within the bulge of galaxies would then
provide an indirect proof for the existence of Pop III stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Markov Chain Model for the Cure Rate of Non-Performing Loans | A Markov-chain model is developed for estimating the cure rate
of non-performing loans. The technique is applied collectively to portfolios,
and it can be used in the calculation of credit impairment. It
is efficient in terms of data manipulation costs which makes it accessible even
to smaller financial institutions. In addition, several other applications to
portfolio optimization are suggested.
| 0 | 0 | 0 | 1 | 0 | 1 |
Query-Efficient Black-box Adversarial Examples (superceded) | Note that this paper is superseded by "Black-Box Adversarial Attacks with
Limited Queries and Information."
Current neural network-based image classifiers are susceptible to adversarial
examples, even in the black-box setting, where the attacker is limited to query
access without access to gradients. Previous methods --- substitute networks
and coordinate-based finite-difference methods --- are either unreliable or
query-inefficient, making these methods impractical for certain problems.
We introduce a new method for reliably generating adversarial examples under
more restricted, practical black-box threat models. First, we apply natural
evolution strategies to perform black-box attacks using two to three orders of
magnitude fewer queries than previous methods. Second, we introduce a new
algorithm to perform targeted adversarial attacks in the partial-information
setting, where the attacker only has access to a limited number of target
classes. Using these techniques, we successfully perform the first targeted
adversarial attack against a commercially deployed machine learning system, the
Google Cloud Vision API, in the partial information setting.
| 1 | 0 | 0 | 1 | 0 | 0 |
Beyond the Erdős Matching Conjecture | A family $\mathcal F\subset {[n]\choose k}$ is $U(s,q)$ if for any
$F_1,\ldots, F_s\in \mathcal F$ we have $|F_1\cup\ldots\cup F_s|\le q$. This
notion generalizes the property of a family to be $t$-intersecting and to have
matching number smaller than $s$.
In this paper, we find the maximum $|\mathcal F|$ for $\mathcal F$ that are
$U(s,q)$, provided $n>C(s,q)k$ with moderate $C(s,q)$. In particular, we
generalize the result of the first author on the Erdős Matching Conjecture
and prove a generalization of the Erdős-Ko-Rado theorem, which states that
for $n> s^2k$ the largest family $\mathcal F\subset {[n]\choose k}$ with
property $U(s,s(k-1)+1)$ is the star and is in particular intersecting.
(Conversely, it is easy to see that any intersecting family in ${[n]\choose k}$
is $U(s,s(k-1)+1)$.)
We investigate the case $k=3$ more thoroughly, showing that, unlike in the
case of the Erdős Matching Conjecture, in general there may be $3$ extremal
families.
| 1 | 0 | 0 | 0 | 0 | 0 |
pH dependence of charge multipole moments in proteins | Electrostatic interactions play a fundamental role in the structure and
function of proteins. Due to ionizable amino acid residues present on the
solvent-exposed surfaces of proteins, the protein charge is not constant but
varies with the changes in the environment -- most notably, the pH of the
surrounding solution. We study the effects of pH on the charge of four globular
proteins by expanding their surface charge distributions in terms of
multipoles. The detailed representation of the charges on the proteins is in
this way replaced by the magnitudes and orientations of the multipole moments
of varying order. Focusing on the three lowest-order multipoles -- the total
charge, dipole, and quadrupole moment -- we show that the value of pH
influences not only their magnitudes, but more notably and importantly also the
spatial orientation of their principal axes. Our findings imply important
consequences for the study of protein-protein interactions and the assembly of
both proteinaceous shells and patchy colloids with dissociable charge groups.
| 0 | 1 | 0 | 0 | 0 | 0 |
A streamlined, general approach for computing ligand binding free energies and its application to GPCR-bound cholesterol | The theory of receptor-ligand binding equilibria has long been
well-established in biochemistry, and was primarily constructed to describe
dilute aqueous solutions. Accordingly, few computational approaches have been
developed for making quantitative predictions of binding probabilities in
environments other than dilute isotropic solution. Existing techniques, ranging
from simple automated docking procedures to sophisticated thermodynamics-based
methods, have been developed with soluble proteins in mind. Biologically and
pharmacologically relevant protein-ligand interactions often occur in complex
environments, including lamellar phases like membranes and crowded, non-dilute
solutions. Here we revisit the theoretical bases of ligand binding equilibria,
avoiding overly specific assumptions that are nearly always made when
describing receptor-ligand binding. Building on this formalism, we extend the
asymptotically exact Alchemical Free Energy Perturbation technique to
quantifying occupancies of sites on proteins in a complex bulk, including
phase-separated, anisotropic, or non-dilute solutions, using a
thermodynamically consistent and easily generalized approach that resolves
several ambiguities of current frameworks. To incorporate the complex bulk
without overcomplicating the overall thermodynamic cycle, we simplify the
common approach for ligand restraints by using a single
distance-from-bound-configuration (DBC) ligand restraint during AFEP decoupling
from protein. DBC restraints should be generalizable to binding modes of most
small molecules, even those with strong orientational dependence. We apply this
approach to compute the likelihood that membrane cholesterol binds to known
crystallographic sites on 3 GPCRs at a range of concentrations. Non-ideality of
cholesterol in a binary cholesterol:POPC bilayer is characterized and
consistently incorporated into the interpretation.
| 0 | 0 | 0 | 0 | 1 | 0 |
Dynamics of one-dimensional electrons with broken spin-charge separation | Spin-charge separation is known to be broken in many physically interesting
one-dimensional (1D) and quasi-1D systems with spin-orbit interaction because
of which spin and charge degrees of freedom are mixed in collective
excitations. Mixed spin-charge modes carry an electric charge and therefore can
be investigated by electrical means. We explore this possibility by studying
the dynamic conductance of a 1D electron system with image-potential-induced
spin-orbit interaction. The real part of the admittance reveals an oscillatory
behavior versus frequency that reflects the collective excitation resonances
for both modes at their respective transit frequencies. By analyzing the
frequency dependence of the conductance the mode velocities can be found and
their spin-charge structure can be determined quantitatively.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Rational Distributed Process-level Account of Independence Judgment | It is inconceivable how chaotic the world would look to humans, faced with
innumerable decisions a day to be made under uncertainty, had they been lacking
the capacity to distinguish the relevant from the irrelevant---a capacity which
computationally amounts to handling probabilistic independence relations. The
highly parallel and distributed computational machinery of the brain suggests
that a satisfying process-level account of human independence judgment should
also mimic these features. In this work, we present the first rational,
distributed, message-passing, process-level account of independence judgment,
called $\mathcal{D}^\ast$. Interestingly, $\mathcal{D}^\ast$ shows a curious,
but normatively-justified tendency for quick detection of dependencies,
whenever they hold. Furthermore, $\mathcal{D}^\ast$ outperforms all the
previously proposed algorithms in the AI literature in terms of worst-case
running time, and a salient aspect of it is supported by recent work in
neuroscience investigating possible implementations of Bayes nets at the neural
level. $\mathcal{D}^\ast$ nicely exemplifies how the pursuit of cognitive
plausibility can lead to the discovery of state-of-the-art algorithms with
appealing properties, and its simplicity makes $\mathcal{D}^\ast$ potentially a
good candidate for pedagogical purposes.
| 0 | 0 | 0 | 1 | 1 | 0 |
Automata in the Category of Glued Vector Spaces | In this paper we adopt a category-theoretic approach to the conception of
automata classes enjoying minimization by design. The main instantiation of our
construction is a new class of automata that are hybrid between deterministic
automata and automata weighted over a field.
| 1 | 0 | 0 | 0 | 0 | 0 |
Second order structural phase transitions, free energy curvature, and temperature-dependent anharmonic phonons in the self-consistent harmonic approximation: theory and stochastic implementation | The self-consistent harmonic approximation is an effective harmonic theory to
calculate the free energy of systems with strongly anharmonic atomic
vibrations, and its stochastic implementation has proved to be an efficient
method to study, from first-principles, the anharmonic properties of solids.
The free energy as a function of average atomic positions (centroids) can be
used to study quantum or thermal lattice instability. In particular, the
centroids are order parameters in second-order structural phase transitions
such as charge-density-wave or ferroelectric instabilities. According
to Landau's theory, the knowledge of the second derivative of the free energy
(i.e. the curvature) with respect to the centroids in a high-symmetry
configuration allows the identification of the phase transition and of the
instability modes. In this work we derive the exact analytic formula for the
second derivative of the free energy in the self-consistent harmonic
approximation for a generic atomic configuration. The analytic derivative is
expressed in terms of the atomic displacements and forces in a form that can be
evaluated by a stochastic technique using importance sampling. Our approach is
particularly suitable for applications based on first-principles
density-functional-theory calculations, where the forces on atoms can be
obtained with a negligible computational effort compared to total energy
determination. Finally we propose a dynamical extension of the theory to
calculate spectral properties of strongly anharmonic phonons, as probed by
inelastic scattering processes. We illustrate our method with a numerical
application on a toy model that mimics the ferroelectric transition in
rock-salt crystals such as SnTe or GeTe.
| 0 | 1 | 0 | 0 | 0 | 0 |
Prioritized Norms in Formal Argumentation | To resolve conflicts among norms, various nonmonotonic formalisms can be used
to perform prioritized normative reasoning. Meanwhile, formal argumentation
provides a way to represent nonmonotonic logics. In this paper, we propose a
representation of prioritized normative reasoning by argumentation. Using
hierarchical abstract normative systems, we define three kinds of prioritized
normative reasoning approaches, called Greedy, Reduction, and Optimization.
Then, after formulating an argumentation theory for a hierarchical abstract
normative system, we show that for a totally ordered hierarchical abstract
normative system, Greedy and Reduction can be represented in argumentation by
applying the weakest link and the last link principles respectively, and
Optimization can be represented by introducing additional defeats capturing the
idea that any argument containing a norm not belonging to the maximal
obeyable set should be rejected.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast and robust tensor decomposition with applications to dictionary learning | We develop fast spectral algorithms for tensor decomposition that match the
robustness guarantees of the best known polynomial-time algorithms for this
problem based on the sum-of-squares (SOS) semidefinite programming hierarchy.
Our algorithms can decompose a 4-tensor with $n$-dimensional orthonormal
components in the presence of error with constant spectral norm (when viewed as
an $n^2$-by-$n^2$ matrix). The running time is $n^5$ which is close to linear
in the input size $n^4$.
We also obtain algorithms with similar running time to learn sparsely-used
orthogonal dictionaries even when feature representations have constant
relative sparsity and non-independent coordinates.
The only previous polynomial-time algorithms to solve these problems are based
on solving large semidefinite programs. In contrast, our algorithms are easy to
implement directly and are based on spectral projections and tensor-mode
rearrangements.
Our work is inspired by recent work of Hopkins, Schramm, Shi, and Steurer (STOC'16)
that shows how fast spectral algorithms can achieve the guarantees of SOS for
average-case problems. In this work, we introduce general techniques to capture
the guarantees of SOS for worst-case problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Distributed Testing of Conductance | We study the problem of testing conductance in the setting of distributed
computing and give a two-sided tester that takes $\mathcal{O}(\log(n) /
(\epsilon \Phi^2))$ rounds to decide if a graph has conductance at least $\Phi$
or is $\epsilon$-far from having conductance at least $\Phi^2 / 1000$ in the
distributed CONGEST model. We also show that $\Omega(\log n)$ rounds are
necessary for testing conductance even in the LOCAL model. In the case of a
connected graph, we show that we can perform the test even when the number of
vertices in the graph is not known a priori. To our knowledge, this is the
first two-sided tester in the distributed model. A key observation is that one can
perform a polynomial number of random walks from a small set of vertices if it
is sufficient to track only some small statistics of the walks. This greatly
reduces the congestion on the edges compared to tracking each walk
individually.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ultra high stiffness and thermal conductivity of graphene like C3N | Recently, a single-crystalline 2D carbon nitride material with a C3N
stoichiometry has been synthesized. In this investigation, we explored the
mechanical response and thermal transport along pristine, free-standing and
single-layer C3N. To this aim, we conducted extensive first-principles density
functional theory (DFT) calculations as well as molecular dynamics (MD)
simulations. DFT results reveal that C3N nanofilms can yield remarkably high
elastic modulus of 341 GPa.nm and tensile strength of 35 GPa.nm, very close to
those of defect-free graphene. Classical MD simulations performed at a low
temperature, accurately predict the elastic modulus of 2D C3N with less than 3%
difference from the first-principles estimation. The deformation process of C3N
nanosheets was studied both by the DFT and MD simulations. Ab initio molecular
dynamics simulations show that single-layer C3N can withstand temperatures as
high as 4000 K. Notably, the phononic thermal conductivity of free-standing C3N
was predicted to be as high as 815 W/mK. Our atomistic modelling results reveal
ultra high stiffness and thermal conductivity of C3N nanomembranes and
therefore propose them as promising candidates for new applications such as
thermal management in nanoelectronics or simultaneously reinforcing the thermal
and mechanical properties of polymeric materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automatic Liver Lesion Detection using Cascaded Deep Residual Networks | Automatic segmentation of liver lesions is a fundamental requirement towards
the creation of computer aided diagnosis (CAD) and decision support systems
(CDS). Traditional segmentation approaches depend heavily upon hand-crafted
features and a priori knowledge of the user. As such, these methods are
difficult to adopt within a clinical environment. Recently, deep learning
methods based on fully convolutional networks (FCNs) have been successful in
many segmentation problems primarily because they leverage a large labelled
dataset to hierarchically learn the features that best correspond to the
shallow visual appearance as well as the deep semantics of the areas to be
segmented. However, FCNs based on a 16 layer VGGNet architecture have limited
capacity to add additional layers. Therefore, it is challenging to learn more
discriminative features among different classes for FCNs. In this study, we
overcome these limitations using deep residual networks (ResNet) to segment
liver lesions. ResNets contain skip connections between convolutional layers,
which solve the problem of the degradation of training accuracy in
very deep networks and thereby enable the use of additional layers for
learning more discriminative features. In addition, we achieve more precise
boundary definitions through a novel cascaded ResNet architecture with
multi-scale fusion to gradually learn and infer the boundaries of both the
liver and the liver lesions. Our proposed method achieved 4th place in the ISBI
2017 Liver Tumor Segmentation Challenge by the submission deadline.
| 1 | 0 | 0 | 0 | 0 | 0 |
Combinatorial metrics: MacWilliams-type identities, isometries and extension property | In this work we characterize the combinatorial metrics admitting a
MacWilliams-type identity and describe the group of linear isometries of such
metrics. Considering coverings that are not connected, we classify the metrics
satisfying the MacWilliams extension property.
| 1 | 0 | 0 | 0 | 0 | 0 |
The curtain remains open: NGC 2617 continues in a high state | Optical and near-infrared photometry, optical spectroscopy, and soft X-ray
and UV monitoring of the changing look active galactic nucleus NGC 2617 show
that it continues to have the appearance of a type-1 Seyfert galaxy. An optical
light curve for 2010-2016 indicates that the change of type probably occurred
between 2010 October and 2012 February and was not related to the brightening
in 2013. In 2016 NGC 2617 brightened again to a level of activity close to that
in 2013 April. We find variations in all passbands and in both the intensities
and profiles of the broad Balmer lines. A new displaced emission peak has
appeared in H$\beta$. X-ray variations are well correlated with UV-optical
variability and possibly lead it by $\sim$ 2-3 d. The $K$ band lags the $J$ band
by about 21.5 $\pm$ 2.5 d and lags the combined $B+J$ filters by $\sim$ 25 d.
$J$ lags $B$ by about 3 d. This could be because $J$-band variability arises
from the outer part of the accretion disc, while $K$-band variability comes
from thermal re-emission by dust. We propose that spectral-type changes are a
result of increasing central luminosity causing sublimation of the innermost
dust in the hollow biconical outflow. We briefly discuss various other possible
reasons that might explain the dramatic changes in NGC 2617.
| 0 | 1 | 0 | 0 | 0 | 0 |
The topology on Berkovich affine lines over complete valuation rings | In this article, we give a full description of the topology of the one
dimensional affine analytic space $\mathbb{A}_R^1$ over a complete valuation
ring $R$ (i.e. a valuation ring with "real valued valuation" which is complete
under the induced metric), when its field of fractions $K$ is algebraically
closed. In particular, we show that $\mathbb{A}_R^1$ is both connected and
locally path connected. Furthermore, $\mathbb{A}_R^1$ is the completion of
$K\times (1,\infty)$ under a canonical uniform structure. As an application, we
describe the Berkovich spectrum $\mathfrak{M}(\mathbb{Z}_p[G])$ of the Banach
group ring $\mathbb{Z}_p[G]$ of a cyclic $p$-group $G$ over the ring
$\mathbb{Z}_p$ of $p$-adic integers.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Original Mechanism for the Acceleration of Ultra-High-Energy Cosmic Rays | We suggest that ultra-high-energy (UHE) cosmic rays (CRs) may be accelerated
in ultra-relativistic flows via a one-shot mechanism, the "espresso"
acceleration, in which already-energetic particles are generally boosted by a
factor of $\sim\Gamma^2$ in energy, where $\Gamma$ is the flow Lorentz factor.
More precisely, we consider blazar-like jets with $\Gamma\gtrsim 30$
propagating into a halo of "seed" CRs produced in supernova remnants, which can
accelerate UHECRs up to $10^{20}$\,eV. Such a re-acceleration process naturally
accounts for the chemical composition measured by the Pierre Auger
Collaboration, which resembles the one around and above the knee in the CR
spectrum, and is consistent with the distribution of potential sources in the
local universe. Particularly intriguing is the coincidence of the powerful
blazar Mrk 421 with the hotspot reported by the Telescope Array Collaboration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Expected Time to Extinction of the SIS Epidemic Model Using the Quasi-Stationary Distribution | We study how the outbreak of an epidemic depends on certain parameters,
expressed through the epidemic reproduction ratio $R_0$. It is noted that when $R_0$
exceeds 1, the stochastic model has two different outcomes. But, eventually,
extinction will be reached even if a major epidemic occurs. The question
is how long this process takes to reach extinction. In this paper, we focus on
the Markovian process of the SIS model when a major epidemic occurs. Under the
approximation of the quasi-stationary distribution, extinction
only occurs when the process is one step away from being extinct.
Combining this with a theorem of Ethier and Kurtz, we use the CLT to find the
approximation of this quasi-stationary distribution and successfully determine the
asymptotic mean time to extinction of the SIS model without demography.
| 0 | 0 | 0 | 0 | 1 | 0 |
The gyrokinetic limit for the Vlasov-Poisson system with a point charge | We consider the asymptotics of large external magnetic field for a 2D
Vlasov-Poisson system governing the evolution of a bounded density interacting
with a point charge. In a suitable asymptotic regime, we show that the
solution converges to a measure-valued solution of the Euler equation with a
defect measure.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deriving Verb Predicates By Clustering Verbs with Arguments | Hand-built verb clusters such as the widely used Levin classes (Levin, 1993)
have proved useful, but have limited coverage. Verb classes automatically
induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other
hand, can give clusters with much larger coverage, and can be adapted to
specific corpora such as Twitter. We present a method for clustering the
outputs of VerbKB: verbs with their multiple argument types, e.g.
"marry(person, person)", "feel(person, emotion)." We make use of a novel
low-dimensional embedding of verbs and their arguments to produce high quality
clusters in which the same verb can be in different clusters depending on its
argument type. The resulting verb clusters do a better job than hand-built
clusters of predicting sarcasm, sentiment, and locus of control in tweets.
| 1 | 0 | 0 | 0 | 0 | 0 |
A propagation tool to connect remote-sensing observations with in-situ measurements of heliospheric structures | The remoteness of the Sun and the harsh conditions prevailing in the solar
corona have so far limited the observational data used in the study of solar
physics to remote-sensing observations taken either from the ground or from
space. In contrast, the `solar wind laboratory' is directly measured in situ by
a fleet of spacecraft measuring the properties of the plasma and magnetic
fields at specific points in space. Since 2007, the solar-terrestrial relations
observatory (STEREO) has been providing images of the solar wind that flows
between the solar corona and spacecraft making in-situ measurements. This has
allowed scientists to directly connect processes imaged near the Sun with the
subsequent effects measured in the solar wind. This new capability prompted the
development of a series of tools and techniques to track heliospheric
structures through space. This article presents one of these tools, a web-based
interface called the 'Propagation Tool' that offers an integrated research
environment to study the evolution of coronal and solar wind structures, such
as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and
Solar Energetic Particles (SEPs). These structures can be propagated outwards
from the Sun to planets and spacecraft situated in the inner and outer
heliosphere, or alternatively inwards from them. In this paper, we present the global
architecture of the tool, discuss some of the assumptions made to simulate the
evolution of the structures and show how the tool connects to different
databases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chemical-disorder-caused Medium Range Order in Covalent Glass | How atoms in covalent solids rearrange over a medium-range length-scale
during amorphization is a long pursued question whose answer could profoundly
shape our understanding of amorphous (a-) networks. Based on ab initio
calculations and reverse Monte Carlo simulations of experiments, we
surprisingly find that even though the severe chemical disorder in a-GeTe
undermines the prevailing medium-range order (MRO) picture, it is responsible
for the experimentally observed MRO. This behaviour rests on a
novel atomic packing scheme, which results in homopolar-bond
chain-like polyhedral clusters. Within this scheme, the formation of
homopolar bonds can be well explained by an electron-counting model and is further
validated by quantitative bond-energy analysis.
the underlying physics for chemical disorder in a-GeTe is intrinsic and
universal to all severely chemically disordered covalent glasses.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal compromise between incompatible conditional probability distributions, with application to Objective Bayesian Kriging | Models are often defined through conditional rather than joint distributions,
but it can be difficult to check whether the conditional distributions are
compatible, i.e. whether there exists a joint probability distribution which
generates them. When they are compatible, a Gibbs sampler can be used to sample
from this joint distribution. When they are not, the Gibbs sampling algorithm
may still be applied, resulting in a "pseudo-Gibbs sampler". We show its
stationary probability distribution to be the optimal compromise between the
conditional distributions, in the sense that it minimizes a mean squared misfit
between them and its own conditional distributions. This allows us to perform
Objective Bayesian analysis of correlation parameters in Kriging models by
using univariate conditional Jeffreys-rule posterior distributions instead of
the widely used multivariate Jeffreys-rule posterior. This strategy makes the
full-Bayesian procedure tractable. Numerical examples show it has near-optimal
frequentist performance in terms of prediction interval coverage.
| 0 | 0 | 1 | 1 | 0 | 0 |
Estimation for the Prediction of Point Processes with Many Covariates | Estimation of the intensity of a point process is considered within a
nonparametric framework. The intensity measure is unknown and depends on
covariates, possibly many more than the observed number of jumps. Only a single
trajectory of the counting process is observed. Interest lies in estimating the
intensity conditional on the covariates. The impact of the covariates is
modelled by an additive model where each component can be written as a linear
combination of possibly unknown functions. The focus is on prediction as
opposed to variable screening. Conditions are imposed on the coefficients of
this linear combination in order to control the estimation error. The rates of
convergence are optimal when the number of active covariates is large. As an
application, the intensity of the buy and sell trades of the New Zealand dollar
futures is estimated and a test for forecast evaluation is presented. A
simulation is included to provide some finite sample intuition on the model and
asymptotic properties.
| 0 | 0 | 1 | 1 | 0 | 0 |
Chordal SLE$_6$ explorations of a quantum disk | We consider a particular type of $\sqrt{8/3}$-Liouville quantum gravity
surface called a doubly marked quantum disk (equivalently, a Brownian disk)
decorated by an independent chordal SLE$_6$ curve $\eta$ between its marked
boundary points. We obtain descriptions of the law of the quantum surfaces
parameterized by the complementary connected components of $\eta([0,t])$ for
each time $t \geq 0$ as well as the law of the left/right $\sqrt{8/3}$-quantum
boundary length process for $\eta$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal rate list decoding over bounded alphabets using algebraic-geometric codes | We give new constructions of two classes of algebraic code families which are
efficiently list decodable with small output list size from a fraction
$1-R-\epsilon$ of adversarial errors where $R$ is the rate of the code, for any
desired positive constant $\epsilon$. The alphabet size depends only on $\epsilon$
and is nearly-optimal.
The first class of codes are obtained by folding algebraic-geometric codes
using automorphisms of the underlying function field. The list decoding
algorithm is based on a linear-algebraic approach, which pins down the
candidate messages to a subspace with a nice "periodic" structure. The list is
pruned by precoding into a special form of "subspace-evasive" sets, which are
constructed pseudorandomly. Instantiating this construction with the
Garcia-Stichtenoth function field tower yields codes list-decodable up to a
$1-R-\epsilon$ error fraction with list size bounded by $O(1/\epsilon)$,
matching the existential bound up to constant factors. The parameters we
achieve are thus quite close to the existential bounds in all three aspects:
error-correction radius, alphabet size, and list-size.
The second class of codes are obtained by restricting evaluation points of an
algebraic-geometric code to rational points from a subfield. Once again, the
linear-algebraic approach to list decoding pins down candidate messages to a
periodic subspace. We develop an alternate approach based on "subspace designs"
to precode messages. Together with the subsequent explicit constructions of
subspace designs, this yields a deterministic construction of an algebraic code
family of rate $R$ with efficient list decoding from $1-R-\epsilon$ fraction of
errors over a constant-sized alphabet. The list size is bounded by a very
slowly growing function of the block length $N$; in particular, it is at most
$O(\log^{(r)} N)$ (the $r$'th iterated logarithm) for any fixed integer $r$.
| 1 | 0 | 1 | 0 | 0 | 0 |
Quadratic Programming Approach to Fit Protein Complexes into Electron Density Maps | The paper investigates the problem of fitting protein complexes into electron
density maps. They are represented by high-resolution cryoEM density maps
converted into overlapping matrices and partly show the structure of a complex.
The goal is to determine the positions of all proteins inside it. This
problem is known to be NP-hard, since it lies in the field of combinatorial
optimization over a set of discrete states of the complex. We introduce
quadratic programming approaches to the problem. To find an approximate
solution, we convert a density map into an overlapping matrix, which is
generally indefinite. Since the matrix is indefinite, the optimization problem
for the corresponding quadratic form is non-convex. To treat non-convexity of
the optimization problem, we use different convex relaxations to find which set
of proteins minimizes the quadratic form best.
| 0 | 0 | 1 | 0 | 0 | 0 |
Conditions for the equivalence between IQC and graph separation stability results | This paper provides a link between time-domain and frequency-domain stability
results in the literature. Specifically, we focus on the comparison between
stability results for a feedback interconnection of two nonlinear systems
stated in terms of frequency-domain conditions. While the Integral Quadratic
Constraint (IQC) theorem can cope with them via a homotopy argument for the
Lurye problem, graph separation results require the transformation of the
frequency-domain conditions into truncated time-domain conditions. To date,
much of the literature focuses on "hard" factorizations of the multiplier,
considering only one of the two frequency-domain conditions. Here it is shown
that a symmetric, "doubly-hard" factorization is required to convert both
frequency-domain conditions into truncated time-domain conditions. By using the
appropriate factorization, a novel comparison between the results obtained by
IQC and separation theories is then provided. As a result, we identify under
what conditions the IQC theorem may provide some advantage.
| 1 | 0 | 0 | 0 | 0 | 0 |
Energy-Efficient Wireless Content Delivery with Proactive Caching | We propose an intelligent proactive content caching scheme to reduce the
energy consumption in wireless downlink. We consider an online social network
(OSN) setting where new contents are generated over time, and remain
\textit{relevant} to the user for a random lifetime. Contents are downloaded to
the user equipment (UE) through a time-varying wireless channel at an energy
cost that depends on the channel state and the number of contents downloaded.
The user accesses the OSN at random time instants, and consumes all the
relevant contents. To reduce the energy consumption, we propose
\textit{proactive caching} of contents under favorable channel conditions to a
finite capacity cache memory. Assuming that the channel quality (or
equivalently, the cost of downloading data) is memoryless over time slots, we
show that the optimal caching policy, which may replace contents in the cache
with shorter remaining lifetime with contents at the server that remain
relevant longer, has certain threshold structure with respect to the channel
quality. Since the optimal policy is computationally demanding in practice, we
introduce a simplified caching scheme and optimize its parameters using policy
search. We also present two lower bounds on the energy consumption. We
demonstrate through numerical simulations that the proposed caching scheme
significantly reduces the energy consumption compared to traditional reactive
caching tools, and achieves close-to-optimal performance for a wide variety of
system parameters.
| 1 | 0 | 1 | 0 | 0 | 0 |
Low- and high-order gravitational harmonics of rigidly rotating Jupiter | The Juno Orbiter has provided improved estimates of the even gravitational
harmonics J2 to J8 of Jupiter. To compute higher-order moments, new methods
such as the Concentric Maclaurin Spheroids (CMS) method have been developed
which surpass the so far commonly used Theory of Figures (ToF) method in
accuracy. This progress raises the question of whether ToF can still provide a
useful service for deriving the internal structure of giant planets in the
Solar system. In this paper, I apply both the ToF and the CMS method to compare
results for polytropic Jupiter and for the physical equation of state
H/He-REOS.3 based models. An accuracy in the computed values of J2 and J4 of
0.1% is found to be sufficient in order to obtain the core mass safely within
0.5 Mearth numerical accuracy and the atmospheric metallicity within about
0.0004. ToF to 4th order provides that accuracy, while ToF to 3rd order does
not for J4. Furthermore, I find that the assumption of rigid rotation yields J6
and J8 values in agreement with the current Juno estimates, and that higher
order terms (J10 to J18) deviate by about 10% from predictions by polytropic
models. This work suggests that ToF4 can still be applied to infer the deep
internal structure, and that the zonal winds on Jupiter reach less deep than
0.9 RJup.
| 0 | 1 | 0 | 0 | 0 | 0 |
Language Modeling with Generative Adversarial Networks | Generative Adversarial Networks (GANs) have been promising in the field of
image generation; however, they have been hard to train for language
generation. GANs were originally designed to output differentiable values, so
discrete language generation is challenging for them, which causes high levels
of instability in training GANs. Consequently, past work has resorted to
pre-training with maximum-likelihood or training GANs without pre-training with
a WGAN objective with a gradient penalty. In this study, we present a
comparison of those approaches. Furthermore, we present the results of some
experiments that indicate better training and convergence of Wasserstein GANs
(WGANs) when a weaker regularization term is enforcing the Lipschitz
constraint.
| 0 | 0 | 0 | 1 | 0 | 0 |
How AD Can Help Solve Differential-Algebraic Equations | A characteristic feature of differential-algebraic equations is that one
needs to find derivatives of some of their equations with respect to time, as
part of so called index reduction or regularisation, to prepare them for
numerical solution. This is often done with the help of a computer algebra
system. We show in two significant cases that it can be done efficiently by
pure algorithmic differentiation. The first is the Dummy Derivatives method,
here we give a mainly theoretical description, with tutorial examples. The
second is the solution of a mechanical system directly from its Lagrangian
formulation. Here we outline the theory and show several non-trivial examples
of using the "Lagrangian facility" of the Nedialkov-Pryce initial-value solver
DAETS, namely: a spring-mass-multipendulum system, a prescribed-trajectory
control problem, and long-time integration of a model of the outer planets of
the solar system, taken from the DETEST testing package for ODE solvers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Secure Coding Practices in Java: Challenges and Vulnerabilities | Java platform and third-party libraries provide various security features to
facilitate secure coding. However, misusing these features can cost tremendous
time and effort of developers or cause security vulnerabilities in software.
Prior research was focused on the misuse of cryptography and SSL APIs, but did
not explore the key fundamental research question: what are the biggest
challenges and vulnerabilities in secure coding practices? In this paper, we
conducted a comprehensive empirical study on StackOverflow posts to understand
developers' concerns on Java secure coding, their programming obstacles, and
potential vulnerabilities in their code. We observed that developers have
shifted their effort to the usage of authentication and authorization features
provided by Spring security--a third-party framework designed to secure
enterprise applications. Multiple programming challenges are related to APIs or
libraries, including the complicated cross-language data handling of
cryptography APIs, and the complex Java-based or XML-based approaches to
configure Spring security. More interestingly, we identified security
vulnerabilities in the suggested code of accepted answers. The vulnerabilities
included using insecure hash functions such as MD5, breaking SSL/TLS security
through bypassing certificate validation, and insecurely disabling the default
protection against Cross Site Request Forgery (CSRF) attacks. Our findings
reveal the insufficiency of secure coding assistance and education, and the gap
between security theory and coding practices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Complex Urban LiDAR Data Set | This paper presents a Light Detection and Ranging (LiDAR) data set that
targets complex urban environments. Urban environments with high-rise buildings
and congested traffic pose a significant challenge for many robotics
applications. The presented data set is unique in the sense it is able to
capture the genuine features of an urban environment (e.g. metropolitan areas,
large building complexes and underground parking lots). Data of two-dimensional
(2D) and three-dimensional (3D) LiDAR, which are typical types of LiDAR sensors,
are provided in the data set. The two 16-ray 3D LiDARs are tilted on both sides
for maximal coverage. One 2D LiDAR faces backward while the other faces
forwards to collect data of roads and buildings, respectively. Raw sensor data
from Fiber Optic Gyro (FOG), Inertial Measurement Unit (IMU), and the Global
Positioning System (GPS) are presented in a file format for vehicle pose
estimation. The pose information of the vehicle estimated at 100 Hz is also
presented after applying the graph simultaneous localization and mapping (SLAM)
algorithm. For the convenience of development, the file player and data viewer
in Robot Operating System (ROS) environment were also released via the web
page. The full data sets are available at: this http URL. In
this website, 3D preview of each data set is provided using WebGL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Blind Source Separation Using Mixtures of Alpha-Stable Distributions | We propose a new blind source separation algorithm based on mixtures of
alpha-stable distributions. Complex symmetric alpha-stable distributions have
recently been shown to better model audio signals in the time-frequency domain
than classical Gaussian distributions thanks to their larger dynamic range.
However, inference of these models is notoriously hard to perform because their
probability density functions do not have a closed-form expression in general.
Here, we introduce a novel method for estimating mixture of alpha-stable
distributions based on characteristic function matching. We apply this to the
blind estimation of binary masks in individual frequency bands from
multichannel convolutive audio mixes. We show that the proposed method yields
better separation performance than Gaussian-based binary-masking methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Student and instructor framing in upper-division physics | Upper-division physics students spend much of their time solving problems. In
addition to their basic skills and background, their epistemic framing can form
an important part of their ability to learn physics from these problems.
Encouraging students to move toward productive framing may help them solve
problems. Thus, an instructor should understand the specifics of how students
have framed a problem and understand how her interaction with the students will
impact that framing. In this study we investigate epistemic framing of students
in problem solving situations where math is applied to physics. To analyze the
frames and changes in frames, we develop and use a two-axis framework involving
conceptual and algorithmic physics and math. We examine student and instructor
framing and the interactions of these frames over a range of problems in an
upper-division electromagnetic fields course. Within interactions, students and
instructors generally follow each others' leads in framing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Generative Model using Unregularized Score for Anomaly Detection with Heterogeneous Complexity | Accurate and automated detection of anomalous samples in a natural image
dataset can be accomplished with a probabilistic model for end-to-end modeling
of images. Such images have heterogeneous complexity, however, and a
probabilistic model overlooks simply shaped objects with small anomalies. This
is because the probabilistic model assigns undesirably lower likelihoods to
complexly shaped objects that are nevertheless consistent with set standards.
To overcome this difficulty, we propose an unregularized score for deep
generative models (DGMs), which are generative models leveraging deep neural
networks. We found that the regularization terms of the DGMs considerably
influence the anomaly score depending on the complexity of the samples. By
removing these terms, we obtain an unregularized score, which we evaluated on a
toy dataset and real-world manufacturing datasets. Empirical results
demonstrate that the unregularized score is robust to the inherent complexity
of samples and can be used to better detect anomalies.
| 0 | 0 | 0 | 1 | 0 | 0 |
Non-collinear magnetic structure and multipolar order in Eu$_2$Ir$_2$O$_7$ | The magnetic properties of the pyrochlore iridate material Eu$_2$Ir$_2$O$_7$
(5$d^5$) have been studied based on the first principle calculations, where the
crystal field splitting $\Delta$, spin-orbit coupling (SOC) $\lambda$ and
Coulomb interaction $U$ within Ir 5$d$ orbitals are all playing significant
roles. The ground state phase diagram has been obtained with respect to the
strength of SOC and Coulomb interaction $U$, where a stable anti-ferromagnetic
ground state with an all-in/all-out (AIAO) spin structure has been found. Besides,
another anti-ferromagnetic state with energy close to AIAO has also been
found to be stable. The calculated nonlinear magnetizations of the two stable
states both have a d-wave pattern but with a $\pi/4$ phase difference, which
can perfectly explain the experimentally observed nonlinear magnetization
pattern. Compared with the results of the non-distorted structure, it turns out
that the trigonal lattice distortion is crucial for stabilizing the AIAO state
in Eu$_2$Ir$_2$O$_7$. Furthermore, besides large dipolar moments, we also find
considerable octupolar moments in the magnetic states.
| 0 | 1 | 0 | 0 | 0 | 0 |
OGLE-2013-BLG-1761Lb: A Massive Planet Around an M/K Dwarf | We report the discovery and the analysis of the planetary microlensing event,
OGLE-2013-BLG-1761. There are some degenerate solutions in this event because
the planetary anomaly is only sparsely sampled. But the detailed light curve
analysis ruled out all stellar binary models and shows the lens to be a
planetary system. There is the so-called close/wide degeneracy in the solutions
with the planet/host mass ratio of $q \sim (7.5 \pm 1.5) \times 10^{-3}$ and $q
\sim (9.3 \pm 2.9) \times 10^{-3}$ with the projected separation in Einstein
radius units of $s = 0.95$ (close) and $s = 1.19$ (wide), respectively. The
microlens parallax effect is not detected but the finite source effect is
detected. Our Bayesian analysis indicates that the lens system is located at
$D_{\rm L}=6.9_{-1.2}^{+1.0} \ {\rm kpc}$ away from us and the host star is an
M/K-dwarf with the mass of $M_{\rm L}=0.33_{-0.18}^{+0.32} \ M_{\odot}$ orbited
by a super-Jupiter mass planet with the mass of $m_{\rm P}=2.8_{-1.5}^{+2.5} \
M_{\rm Jup}$ at the projected separation of $a_{\perp}=1.8_{-0.5}^{+0.5} \ {\rm
AU}$. The preference of the large lens distance in the Bayesian analysis is due
to the relatively large observed source star radius. The distance and other
physical parameters can be constrained by the future high resolution imaging by
ground large telescopes or HST. If the estimated lens distance is correct, this
planet provides another sample for testing the claimed deficit of planets in
the Galactic bulge.
| 0 | 1 | 0 | 0 | 0 | 0 |
Phase-Aware Single-Channel Speech Enhancement with Modulation-Domain Kalman Filtering | We present a single-channel phase-sensitive speech enhancement algorithm that
is based on modulation-domain Kalman filtering and on tracking the speech phase
using circular statistics. With Kalman filtering, using that speech and noise
are additive in the complex STFT domain, the algorithm tracks the speech
log-spectrum, the noise log-spectrum and the speech phase. Joint amplitude and
phase estimation of speech is performed. Given the noisy speech signal,
conventional algorithms use the noisy phase for signal reconstruction
approximating the speech phase with the noisy phase. In the proposed Kalman
filtering algorithm, the speech phase posterior is used to create an enhanced
speech phase spectrum for signal reconstruction. The Kalman filter prediction
models the temporal/inter-frame correlation of the speech and noise log-spectra
and of the speech phase, while the Kalman filter update models their nonlinear
relations. With the proposed algorithm, speech is tracked and estimated both in
the log-spectral and spectral phase domains. The algorithm is evaluated in
terms of speech quality and different algorithm configurations, dependent on
the signal model, are compared in different noise types. Experimental results
show that the proposed algorithm outperforms traditional enhancement algorithms
over a range of SNRs for various noise types.
| 1 | 0 | 0 | 0 | 0 | 0 |
Reduction and specialization of hyperelliptic continued fractions | For a monic polynomial $D(X)$ of even degree, express $\sqrt D$ as a Laurent
series in $X^{-1}$; this yields a continued fraction expansion (similar to
continued fractions of real numbers): \[\sqrt
D=a_0+\dfrac{1}{a_1+\dfrac{1}{a_2+\dfrac{1}{\ddots}}},\quad a_i\text{
polynomials in }X.\] Such continued fractions were first considered by Abel in
1826, and later by Chebyshev. It turns out they are rarely periodic unless $D$
is defined over a finite field.
Around 2001 van der Poorten studied non-periodic continued fractions of
$\sqrt D$, with $D$ defined over the rationals, and simultaneously the
continued fraction of $\sqrt D$ modulo a suitable prime $p$; the latter
continued fraction is automatically periodic. He found that one recovers all
the convergents (rational function approximations to $\sqrt D$ obtained by
cutting off the continued fraction) of $\sqrt D \mod{p}$ by appropriately
normalising and then reducing the convergents of $\sqrt D$.
By developing a general specialization theory for continued fractions of
Laurent series, I produce a rigorous proof of this result stated by van der
Poorten, and further show the following:
If $D$ is defined over the rationals and the continued fraction of $\sqrt D$
is non-periodic, then for all but finitely many primes $p \in \mathbb Z$, this
prime $p$ occurs in the denominator of the leading coefficient of infinitely
many $a_i$.
For $\mathrm{deg}\,D = 4$, I can even give a description of the orders in
which the prime appears, and the $p$-adic Gauss norms of the $a_i$ and the
convergents. These results also generalise to number fields.
Moreover, I derive optimised formulae for computing quadratic continued
fractions, along with several example expansions. I discuss a few known results
on the heights of the convergents, and explain some relations with the
reduction of hyperelliptic curves and Jacobians.
| 0 | 0 | 1 | 0 | 0 | 0 |
Three years of SPHERE: the latest view of the morphology and evolution of protoplanetary discs | Spatially resolving the immediate surroundings of young stars is a key
challenge for the planet formation community. SPHERE on the VLT represents an
important step forward by increasing the opportunities offered by optical or
near-infrared imaging instruments to image protoplanetary discs. The Guaranteed
Time Observation Disc team has concentrated much of its efforts on polarimetric
differential imaging, a technique that enables the efficient removal of stellar
light and thus facilitates the detection of light scattered by the disc within
a few au from the central star. These images reveal intriguing complex disc
structures and diverse morphological features that are possibly caused by
ongoing planet formation in the disc. An overview of the recent advances
enabled by SPHERE is presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tensor Methods for Nonlinear Matrix Completion | In the low rank matrix completion (LRMC) problem, the low rank assumption
means that the columns (or rows) of the matrix to be completed are points on a
low-dimensional linear algebraic variety. This paper extends this thinking to
cases where the columns are points on a low-dimensional nonlinear algebraic
variety, a problem we call Low Algebraic Dimension Matrix Completion (LADMC).
Matrices whose columns belong to a union of subspaces (UoS) are an important
special case. We propose a LADMC algorithm that leverages existing LRMC methods
on a tensorized representation of the data. For example, a second-order
tensorized representation is formed by taking the outer product of each
column with itself, and we consider higher-order tensorizations as well. This
approach will succeed in many cases where traditional LRMC is guaranteed to
fail because the data are low-rank in the tensorized representation but not in
the original representation. We also provide a formal mathematical
justification for the success of our method. In particular, we show bounds on
the rank of these data in the tensorized representation, and we prove sampling
requirements to guarantee uniqueness of the solution. Interestingly, the
sampling requirements of our LADMC algorithm nearly match the information
theoretic lower bounds for matrix completion under a UoS model. We also provide
experimental results showing that the new approach significantly outperforms
existing state-of-the-art methods for matrix completion in many situations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Visual Interaction Networks | From just a glance, humans can make rich predictions about the future state
of a wide range of physical systems. On the other hand, modern approaches from
engineering, robotics, and graphics are often restricted to narrow domains and
require direct measurements of the underlying states. We introduce the Visual
Interaction Network, a general-purpose model for learning the dynamics of a
physical system from raw visual observations. Our model consists of a
perceptual front-end based on convolutional neural networks and a dynamics
predictor based on interaction networks. Through joint training, the perceptual
front-end learns to parse a dynamic visual scene into a set of factored latent
object representations. The dynamics predictor learns to roll these states
forward in time by computing their interactions and dynamics, producing a
predicted physical trajectory of arbitrary length. We found that from just six
input video frames the Visual Interaction Network can generate accurate future
trajectories of hundreds of time steps on a wide range of physical systems. Our
model can also be applied to scenes with invisible objects, inferring their
future states from their effects on the visible objects, and can implicitly
infer the unknown mass of objects. Our results demonstrate that the perceptual
module and the object-based dynamics predictor module can induce factored
latent representations that support accurate dynamical predictions. This work
opens new opportunities for model-based decision-making and planning from raw
sensory observations in complex physical environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Using polarimetry to retrieve the cloud coverage of Earth-like exoplanets | Context. Clouds have already been detected in exoplanetary atmospheres. They
play crucial roles in a planet's atmosphere and climate and can also create
ambiguities in the determination of atmospheric parameters such as trace gas
mixing ratios. Knowledge of cloud properties is required when assessing the
habitability of a planet. Aims. We aim to show that various types of cloud
cover such as polar cusps, subsolar clouds, and patchy clouds on Earth-like
exoplanets can be distinguished from each other using the polarization and flux
of light that is reflected by the planet. Methods. We have computed the flux
and polarization of reflected starlight for different types of (liquid water)
cloud covers on Earth-like model planets using the adding-doubling method, that
fully includes multiple scattering and polarization. Variations in cloud-top
altitudes and planet-wide cloud cover percentages were taken into account.
Results. We find that the different types of cloud cover (polar cusps, subsolar
clouds, and patchy clouds) can be distinguished from each other and that the
percentage of cloud cover can be estimated within 10%. Conclusions. Using our
proposed observational strategy, one should be able to determine basic orbital
parameters of a planet such as orbital inclination and estimate cloud coverage
with reduced ambiguities from the planet's polarization signals along its
orbit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dirac nodal lines and induced spin Hall effect in metallic rutile oxides | We have found Dirac nodal lines (DNLs) in the band structures of metallic
rutile oxides IrO$_2$, OsO$_2$, and RuO$_2$ and revealed a large spin Hall
conductivity contributed by these nodal lines, which explains a strong spin
Hall effect (SHE) of IrO$_2$ discovered recently. Two types of DNLs exist. The
first type forms DNL networks that extend in the whole Brillouin zone and
appears only in the absence of spin-orbit coupling (SOC), which induces surface
states on the boundary. Because of SOC-induced band anti-crossing, a large
intrinsic SHE can be realized in these compounds. The second type appears at
the Brillouin zone edges and is stable against SOC because of the protection of
nonsymmorphic symmetry. Besides reporting new DNL materials, our work reveals
the general relationship between DNLs and the SHE, indicating a way to apply
Dirac nodal materials for spintronics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chaotic Dynamics Enhance the Sensitivity of Inner Ear Hair Cells | Hair cells of the auditory and vestibular systems are capable of detecting
sounds that induce sub-nanometer vibrations of the hair bundle, below the
stochastic noise levels of the surrounding fluid. Hair cells of certain species
are also known to oscillate without external stimulation, indicating the
presence of an underlying active mechanism. We previously demonstrated that
these spontaneous oscillations exhibit chaotic dynamics. By varying the calcium
concentration and the viscosity of the endolymph solution, we are able to
modulate the degree of chaos in the hair cell dynamics. We find that the hair
cell is most sensitive to a stimulus of small amplitude when it is poised in
the weakly chaotic regime. Further, we show that the response time to a force
step decreases with increasing levels of chaos. These results agree well with
our numerical simulations of a chaotic Hopf oscillator and suggest that chaos
may be responsible for the extreme sensitivity and temporal resolution of hair
cells.
| 0 | 0 | 0 | 0 | 1 | 0 |
A Robot Localization Framework Using CNNs for Object Detection and Pose Estimation | External localization is an essential part of the indoor operation of small
or cost-efficient robots, as they are used, for example, in swarm robotics. We
introduce a two-stage localization and instance identification framework for
arbitrary robots based on convolutional neural networks. Object detection is
performed on an external camera image of the operation zone providing robot
bounding boxes for an identification and orientation estimation convolutional
neural network. Additionally, we propose a process to generate the necessary
training data. The framework was evaluated with 3 different robot types and
various identification patterns. We have analyzed the main framework
hyperparameters providing recommendations for the framework operation settings.
We achieved up to 98% [email protected] and only 1.6° orientation error, running
with a frame rate of 50 Hz on a GPU.
| 1 | 0 | 0 | 0 | 0 | 0 |
Executable Trigger-Action Comments | Natural language elements, e.g., todo comments, are frequently used to
communicate among the developers and to describe tasks that need to be
performed (actions) when specific conditions hold in the code repository
(triggers). As projects evolve, development processes change, and development
teams reorganize, these comments, because of their informal nature, frequently
become irrelevant or forgotten.
We present the first technique, dubbed TrigIt, to specify trigger-action todo
comments as executable statements. Thus, actions are executed automatically
when triggers evaluate to true. TrigIt specifications are written in the host
language (e.g., Java) and are evaluated as part of the build process. The
triggers are specified as query statements over abstract syntax trees and
abstract representation of build configuration scripts, and the actions are
specified as code transformation steps. We implemented TrigIt for the Java
programming language and migrated 20 existing trigger-action comments from 8
popular open-source projects. We evaluate the cost of using TrigIt in terms of
the number of tokens in the executable comments and the time overhead
introduced in the build process.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cardinal Virtues: Extracting Relation Cardinalities from Text | Information extraction (IE) from text has largely focused on relations
between individual entities, such as who has won which award. However, some
facts are never fully mentioned, and no IE method has perfect recall. Thus, it
is beneficial to also tap contents about the cardinalities of these relations,
for example, how many awards someone has won. We introduce this novel problem
of extracting cardinalities and discusses the specific challenges that set it
apart from standard IE. We present a distant supervision method using
conditional random fields. A preliminary evaluation results in precision
between 3% and 55%, depending on the difficulty of relations.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Decision Tree Approach to Predicting Recidivism in Domestic Violence | Domestic violence (DV) is a global social and public health issue that is
highly gendered. Being able to accurately predict DV recidivism, i.e.,
re-offending of a previously convicted offender, can speed up and improve risk
assessment procedures for police and front-line agencies, better protect
victims of DV, and potentially prevent future re-occurrences of DV. Previous
work in DV recidivism has employed different classification techniques,
including decision tree (DT) induction and logistic regression, where the main
focus was on achieving high prediction accuracy. As a result, even the diagrams
of trained DTs were often too difficult to interpret due to their size and
complexity, making decision-making challenging. Given there is often a
trade-off between model accuracy and interpretability, in this work our aim is
to employ DT induction to obtain both interpretable trees as well as high
prediction accuracy. Specifically, we implement and evaluate different
approaches to deal with class imbalance as well as feature selection. Compared
to previous work in DV recidivism prediction that employed logistic regression,
our approach can achieve comparable area under the ROC curve results by using
only 3 of 11 available features and generating understandable decision trees
that contain only 4 leaf nodes.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the Necessity of Superparametric Geometry Representation for Discontinuous Galerkin Methods on Domains with Curved Boundaries | We provide numerical evidence demonstrating the necessity of employing a
superparametric geometry representation in order to obtain optimal convergence
orders on two-dimensional domains with curved boundaries when solving the Euler
equations using Discontinuous Galerkin methods. However, concerning the
attainment of optimal convergence orders for the Navier-Stokes equations, we
demonstrate numerically that the use of isoparametric geometry representation
is sufficient for the case considered here.
| 1 | 0 | 0 | 0 | 0 | 0 |
Low-dose cryo electron ptychography via non-convex Bayesian optimization | Electron ptychography has seen a recent surge of interest for phase sensitive
imaging at atomic or near-atomic resolution. However, applications are so far
mainly limited to radiation-hard samples because the required doses are too
high for imaging biological samples at high resolution. We propose the use of
non-convex, Bayesian optimization to overcome this problem and reduce the dose
required for successful reconstruction by two orders of magnitude compared to
previous experiments. We suggest using this method for imaging single
biological macromolecules at cryogenic temperatures and demonstrate 2D
single-particle reconstructions from simulated data with a resolution of 7.9
\AA$\,$ at a dose of 20 $e^- / \AA^2$. When averaging over only 15 low-dose
datasets, a resolution of 4 \AA$\,$ is possible for large macromolecular
complexes. With its independence from microscope transfer function, direct
recovery of phase contrast and better scaling of signal-to-noise ratio,
cryo-electron ptychography may become a promising alternative to Zernike
phase-contrast microscopy.
| 0 | 1 | 1 | 1 | 0 | 0 |
Efficient Charge Collection in Coplanar Grid Radiation Detectors | We have modeled laser-induced transient current waveforms in coplanar grid
radiation detectors. Poisson's equation has been solved by the finite element
method, and currents induced by photo-generated charge were obtained using the
Shockley-Ramo theorem. The spectral response to a radiation flux has been
modeled by Monte-Carlo simulations. We show a 10$\times$ improvement in the spectral
resolution of a coplanar grid detector using differential signal sensing. We
model the current waveform dependence on doping, depletion width, diffusion and
detector shielding and their mutual dependence is discussed in terms of
detector optimization. The numerical simulations are successfully compared to
experimental data and further model simplifications are proposed. The space
charge below electrodes and a non-homogeneous electric field on a coplanar grid
anode are found to be the dominant contributions to laser-induced transient
current waveforms.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thermalization near integrability in a dipolar quantum Newton's cradle | Isolated quantum many-body systems with integrable dynamics generically do
not thermalize when taken far from equilibrium. As one perturbs such systems
away from the integrable point, thermalization sets in, but the nature of the
crossover from integrable to thermalizing behavior is an unresolved and
actively discussed question. We explore this question by studying the dynamics
of the momentum distribution function in a dipolar quantum Newton's cradle
consisting of highly magnetic dysprosium atoms. This is accomplished by
creating the first one-dimensional Bose gas with strong magnetic dipole-dipole
interactions. These interactions provide tunability of both the strength of the
integrability-breaking perturbation and the nature of the near-integrable
dynamics. We provide the first experimental evidence that thermalization close
to a strongly interacting integrable point occurs in two steps:
prethermalization followed by near-exponential thermalization. Exact numerical
calculations on a two-rung lattice model yield a similar two-timescale process,
suggesting that this is generic in strongly interacting near-integrable models.
Moreover, the measured thermalization rate is consistent with a parameter-free
theoretical estimate, based on identifying the types of collisions that
dominate thermalization. By providing tunability between regimes of integrable
and nonintegrable dynamics, our work sheds light both on the mechanisms by
which isolated quantum many-body systems thermalize, and on the temporal
structure of the onset of thermalization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hausdorff dimension of the boundary of bubbles of additive Brownian motion and of the Brownian sheet | We first consider the additive Brownian motion process $(X(s_1,s_2),\
(s_1,s_2) \in \mathbb{R}^2)$ defined by $X(s_1,s_2) = Z_1(s_1) - Z_2 (s_2)$,
where $Z_1$ and $Z_2 $ are two independent (two-sided) Brownian motions. We
show that with probability one, the Hausdorff dimension of the boundary of any
connected component of the random set $\{(s_1,s_2)\in \mathbb{R}^2: X(s_1,s_2)
>0\}$ is equal to $$
\frac{1}{4}\left(1 + \sqrt{13 + 4 \sqrt{5}}\right) \simeq 1.421\, . $$ Then
the same result is shown to hold when $X$ is replaced by a standard Brownian
sheet indexed by the nonnegative quadrant.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inverse mean curvature flow in quaternionic hyperbolic space | In this paper we complete the study started in [Pi2] of evolution by inverse
mean curvature flow of star-shaped hypersurfaces in non-compact rank one
symmetric spaces. We consider the evolution by inverse mean curvature flow of a
closed, mean convex and star-shaped hypersurface in the quaternionic hyperbolic
space. We prove that the flow is defined for any positive time, the evolving
hypersurface stays star-shaped and mean convex. Moreover the induced metric
converges, after rescaling, to a conformal multiple of the standard
sub-Riemannian metric on the sphere defined on a codimension 3 distribution.
Finally we show that there exists a family of examples such that the qc-scalar
curvature of this sub-Riemannian limit is not constant.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mosquito detection with low-cost smartphones: data acquisition for malaria research | Mosquitoes are a major vector for malaria, causing hundreds of thousands of
deaths in the developing world each year. Not only is the prevention of
mosquito bites of paramount importance to the reduction of malaria transmission
cases, but understanding in more forensic detail the interplay between malaria,
mosquito vectors, vegetation, standing water and human populations is crucial
to the deployment of more effective interventions. Typically the presence and
detection of malaria-vectoring mosquitoes is only quantified by hand-operated
insect traps or signified by the diagnosis of malaria. If we are to gather
timely, large-scale data to improve this situation, we need to automate the
process of mosquito detection and classification as much as possible. In this
paper, we present a candidate mobile sensing system that acts as both a
portable early warning device and an automatic acoustic data acquisition
pipeline to help fuel scientific inquiry and policy. The machine learning
algorithm that powers the mobile system achieves excellent off-line
multi-species detection performance while remaining computationally efficient.
Further, we have conducted preliminary live mosquito detection tests using
low-cost mobile phones and achieved promising results. The deployment of this
system for field usage in Southeast Asia and Africa is planned in the near
future. In order to accelerate processing of field recordings and labelling of
collected data, we employ a citizen science platform in conjunction with
automated methods, the former implemented using the Zooniverse platform,
allowing crowdsourcing on a grand scale.
| 1 | 0 | 0 | 1 | 0 | 0 |