title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Large-scale Nonlinear Variable Selection via Kernel Random Features | We propose a new method for input variable selection in nonlinear regression.
The method is embedded into a kernel regression machine that can model general
nonlinear functions, not being a priori limited to additive models. This is the
first kernel-based variable selection method applicable to large datasets. It
sidesteps the typical poor scaling properties of kernel methods by mapping the
inputs into a relatively low-dimensional space of random features. The
algorithm discovers the variables relevant for the regression task together
with learning the prediction model through learning the appropriate nonlinear
random feature maps. We demonstrate the outstanding performance of our method
on a set of large-scale synthetic and real datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
Couple microscale periodic patches to simulate macroscale emergent dynamics | This article proposes a new way to construct computationally efficient
`wrappers' around fine scale, microscopic, detailed descriptions of dynamical
systems, such as molecular dynamics, to make predictions at the macroscale
`continuum' level. It is often significantly easier to code a microscale
simulator with periodicity: so the challenge addressed here is to develop a
scheme that uses only a given periodic microscale simulator; specifically, one
for atomistic dynamics. Numerical simulations show that applying a suitable
proportional controller within `action regions' of a patch of atomistic
simulation effectively predicts the macroscale transport of heat. Theoretical
analysis establishes that such an approach will generally be effective and
efficient, and also determines good values for the strength of the proportional
controller. This work has the potential to empower systematic analysis and
understanding at a macroscopic system level when only a given microscale
simulator is available.
| 0 | 0 | 1 | 0 | 0 | 0 |
Determinantal Point Processes for Mini-Batch Diversification | We study a mini-batch diversification scheme for stochastic gradient descent
(SGD). While classical SGD relies on uniformly sampling data points to form a
mini-batch, we propose a non-uniform sampling scheme based on the Determinantal
Point Process (DPP). The DPP relies on a similarity measure between data points
and gives low probabilities to mini-batches which contain redundant data, and
higher probabilities to mini-batches with more diverse data. This
simultaneously balances the data and leads to stochastic gradients with lower
variance. We term this approach Diversified Mini-Batch SGD (DM-SGD). We show
that regular SGD and a biased version of stratified sampling emerge as special
cases. Furthermore, DM-SGD generalizes stratified sampling to cases where no
discrete features exist to bin the data into groups. We show experimentally
that our method yields more interpretable and diverse features in unsupervised
setups, and better classification accuracies in supervised setups.
| 1 | 0 | 0 | 1 | 0 | 0 |
Proton-induced halo formation in charged meteors | Despite a very long history of meteor science, our understanding of meteor
ablation and its shocked plasma physics is still far from satisfactory as we
are still missing the microphysics of meteor shock formation and its plasma
dynamics. Here we argue that electrons and ions in the meteor plasma above
$\sim$100 km altitude undergo spatial separation due to electrons being trapped
by gyration in the Earth's magnetic field, while the ions are carried by the
meteor as their dynamics is dictated by collisions. This separation process
charges the meteor and creates a strong local electric field. We show how
acceleration of protons in this field leads to the collisional excitation of
ionospheric N$_2$ on the scale of many 100 m. This mechanism explains the
puzzling large halo detected around Leonid meteors, while it also fits into the
theoretical expectations of several other unexplained meteor related phenomena.
We expect our work to lead to more advanced models of meteor-ionosphere
interaction, combined with the electrodynamics of meteor trail evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generalised Seiberg-Witten equations and almost-Hermitian geometry | In this article, we study a generalisation of the Seiberg-Witten equations,
replacing the spinor representation with a hyperKahler manifold equipped with
certain symmetries. Central to this is the construction of a (non-linear) Dirac
operator acting on the sections of the non-linear fibre-bundle. For hyperKahler
manifolds admitting a hyperKahler potential, we derive a transformation formula
for the Dirac operator under the conformal change of metric on the base
manifold.
As an application, we show that when the hyperKahler manifold is of dimension
four, then away from a singular set, the equations can be expressed as a second
order PDE in terms of almost-complex structure on the base manifold and a
conformal factor. This extends a result of Donaldson to generalised
Seiberg-Witten equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Enhanced Photon Traps for Hyper-Kamiokande | Hyper-Kamiokande, the next generation large water Cherenkov detector in
Japan, is planning to use approximately 80,000 20-inch photomultiplier tubes
(PMTs). They are one of the major cost factors of the experiment. We propose a
novel enhanced photon trap design based on a smaller and more economical PMT in
combination with wavelength shifters, dichroic mirrors, and broadband mirrors.
GEANT4 is utilized to obtain photon collection efficiencies and timing
resolution of the photon traps. We compare the performance of different trap
configurations and sizes. Our simulations indicate that an enhanced photon trap
with a 12-inch PMT can match the collection efficiency of a 20-inch PMT, albeit
at the cost of reduced timing resolution. The photon trap might be suitable as a
detection module for the outer detector, which has a large photo-coverage area.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Deep Reinforcement Learning Chatbot (Short Version) | We present MILABOT: a deep reinforcement learning chatbot developed by the
Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize
competition. MILABOT is capable of conversing with humans on popular small talk
topics through both speech and text. The system consists of an ensemble of
natural language generation and retrieval models, including neural network and
template-based models. By applying reinforcement learning to crowdsourced data
and real-world user interactions, the system has been trained to select an
appropriate response from the models in its ensemble. The system has been
evaluated through A/B testing with real-world users, where it performed
significantly better than other systems. The results highlight the potential of
coupling ensemble systems with deep reinforcement learning as a fruitful path
for developing real-world, open-domain conversational agents.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Recurrent Neural Network for Sentiment Quantification | Quantification is a supervised learning task that consists in predicting,
given a set of classes C and a set D of unlabelled items, the prevalence (or
relative frequency) p(c|D) of each class c in C. Quantification can in
principle be solved by classifying all the unlabelled items and counting how
many of them have been attributed to each class. However, this "classify and
count" approach has been shown to yield suboptimal quantification accuracy;
this has established quantification as a task of its own, and given rise to a
number of methods specifically devised for it. We propose a recurrent neural
network architecture for quantification (that we call QuaNet) that observes the
classification predictions to learn higher-order "quantification embeddings",
which are then refined by incorporating quantification predictions of simple
classify-and-count-like methods. We test QuaNet on sentiment quantification on
text, showing that it substantially outperforms several state-of-the-art
baselines.
| 0 | 0 | 0 | 1 | 0 | 0 |
Data Noising as Smoothing in Neural Network Language Models | Data noising is an effective technique for regularizing neural network
models. While noising is widely adopted in application domains such as vision
and speech, commonly used noising primitives have not been developed for
discrete sequence-level settings such as language modeling. In this paper, we
derive a connection between input noising in neural network language models and
smoothing in $n$-gram models. Using this connection, we draw upon ideas from
smoothing to develop effective noising schemes. We demonstrate performance
gains when applying the proposed schemes to language modeling and machine
translation. Finally, we provide empirical analysis validating the relationship
between noising and smoothing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optical Flow-based 3D Human Motion Estimation from Monocular Video | We present a generative method to estimate 3D human motion and body shape
from monocular video. Under the assumption that, starting from an initial pose,
optical flow constrains subsequent human motion, we exploit flow to find
temporally coherent human poses of a motion sequence. We estimate human motion
by minimizing the difference between computed flow fields and the output of an
artificial flow renderer. A single initialization step is required to estimate
motion over multiple frames. Several regularization functions enhance
robustness over time. Our test scenarios demonstrate that optical flow
effectively regularizes the under-constrained problem of human shape and motion
estimation from monocular video.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasi-ordered Rings | A quasi-order is a binary, reflexive and transitive relation. In the Journal
of Pure and Applied Algebra 45 (1987), S.M. Fakhruddin introduced the notion of
(totally) quasi-ordered fields and showed that each such field is either an
ordered field or else a valued field. Hence, quasi-ordered fields are very well
suited to treat ordered and valued fields simultaneously.
In this note, we will prove that the same dichotomy holds for commutative
rings with 1 as well. For that purpose we first develop an appropriate notion
of (totally) quasi-ordered rings. Our proof of the dichotomy then exploits
Fakhruddin's result that was mentioned above.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mean-Field Controllability and Decentralized Stabilization of Markov Chains, Part I: Global Controllability and Rational Feedbacks | In this paper, we study the controllability and stabilizability properties of
the Kolmogorov forward equation of a continuous time Markov chain (CTMC)
evolving on a finite state space, using the transition rates as the control
parameters. Firstly, we prove small-time local and global controllability from
and to strictly positive equilibrium configurations when the underlying graph
is strongly connected. Secondly, we show that there always exists a locally
exponentially stabilizing decentralized linear (density-)feedback law that
takes zero value at equilibrium and respects the graph structure, provided that
the transition rates are allowed to be negative and the desired target density
lies in the interior of the set of probability densities. For bidirected
graphs, that is, graphs where a directed edge in one direction implies an edge
in the opposite direction, we show that this linear control law can be realized
using a decentralized rational feedback law of the form k(x) = a(x) +
b(x)f(x)/g(x) that also respects the graph structure and control constraints
(positivity and zero at equilibrium). This enables the possibility of using
Linear Matrix Inequality (LMI) based tools to algorithmically construct
decentralized density feedback controllers for stabilization of a robotic swarm
to a target task distribution with no task-switching at equilibrium, as we
demonstrate with several numerical examples.
| 1 | 0 | 1 | 0 | 0 | 0 |
Sharp total variation results for maximal functions | In this article, we prove some total variation inequalities for maximal
functions. Our results deal with two possible generalizations of the results
contained in Aldaz and Pérez Lázaro's work, one of which considers a
variable truncation of the maximal function, and the other interpolates the
centered and the uncentered maximal functions. In both contexts, we find sharp
constants for the desired inequalities, which can be viewed as progress towards
the conjecture that the best constant for the variation inequality in the
centered context is one. We also provide counterexamples showing that our
methods do not apply outside the stated parameter ranges.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal Bipartite Network Clustering | We study bipartite community detection in networks, or more generally the
network biclustering problem. We present a fast two-stage procedure based on
spectral initialization followed by the application of a pseudo-likelihood
classifier twice. Under mild regularity conditions, we establish the weak
consistency of the procedure (i.e., the convergence of the misclassification
rate to zero) under a general bipartite stochastic block model. We show that
the procedure is optimal in the sense that it achieves the optimal convergence
rate that is achievable by a biclustering oracle, adaptively over the whole
class, up to constants. This is further formalized by deriving a minimax lower
bound over a class of biclustering problems. The optimal rate we obtain
sharpens some of the existing results and generalizes others to a wide regime
of average degree growth, from sparse networks with average degrees growing
arbitrarily slowly to fairly dense networks with average degrees of order
$\sqrt{n}$. As a special case, we recover the known exact recovery threshold in
the $\log n$ regime of sparsity. To obtain the consistency result, as part of
the provable version of the algorithm, we introduce a sub-block partitioning
scheme that is also computationally attractive, allowing for distributed
implementation of the algorithm without sacrificing optimality. The provable
algorithm is derived from a general class of pseudo-likelihood biclustering
algorithms that employ simple EM type updates. We show the effectiveness of
this general class by numerical simulations.
| 1 | 0 | 0 | 1 | 0 | 0 |
End-to-end Recurrent Neural Network Models for Vietnamese Named Entity Recognition: Word-level vs. Character-level | This paper demonstrates end-to-end neural network architectures for
Vietnamese named entity recognition. Our best model is a combination of
bidirectional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Network
(CNN), and Conditional Random Field (CRF), using pre-trained word embeddings as
input, which achieves an F1 score of 88.59% on a standard test set. Our system
achieves performance comparable to that of the first-rank system of the
VLSP campaign without using any syntactic or hand-crafted features. We also
give an extensive empirical study on using common deep learning models for
Vietnamese NER, at both word and character level.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Reduced PC-Algorithm: Improved Causal Structure Learning in Large Random Networks | We consider the task of estimating a high-dimensional directed acyclic graph,
given observations from a linear structural equation model with arbitrary noise
distribution. By exploiting properties of common random graphs, we develop a
new algorithm that requires conditioning only on small sets of variables. The
proposed algorithm, which is essentially a modified version of the
PC-Algorithm, offers significant gains in both computational complexity and
estimation accuracy. In particular, it results in more efficient and accurate
estimation in large networks containing hub nodes, which are common in
biological systems. We prove the consistency of the proposed algorithm, and
show that it also requires a less stringent faithfulness assumption than the
PC-Algorithm. Simulations in low and high-dimensional settings are used to
illustrate these findings. An application to gene expression data suggests that
the proposed algorithm can identify a greater number of clinically relevant
genes than current methods.
| 0 | 0 | 0 | 1 | 1 | 0 |
Theoretical analysis of the electron bridge process in $^{229}$Th$^{3+}$ | We investigate the deexcitation of the $^{229}$Th nucleus via the excitation
of an electron. Detailed calculations are performed for the enhancement of the
nuclear decay width due to this so called electron bridge (EB) compared to the
direct photoemission from the nucleus. The results are obtained for triply
ionized thorium by using a B-spline pseudo basis approach to solve the Dirac
equation for a local $x_\alpha$ potential. This approach allows for an
approximation of the full electron propagator including the positive and
negative continuum. We show that the contribution of continua slightly
increases the enhancement compared to a propagator calculated by a direct
summation over bound states. Moreover, we put special emphasis on the
interference between the direct and exchange Feynman diagrams that can have a
strong influence on the enhancement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Memory in de Sitter space and BMS-like supertranslations | It is well known that the memory effect in flat spacetime is parametrized by
the BMS supertranslation. We investigate the relation between the memory effect
and diffeomorphism in de Sitter spacetime. We find that gravitational memory is
parametrized by a BMS-like supertranslation in the static patch of de Sitter
spacetime. We also show a diffeomorphism that corresponds to gravitational
memory in the Poincare/cosmological patch. Our method does not need to assume
the separation between the source and the detector to be small compared with
the Hubble radius, and can potentially be applicable to other FLRW universes,
as well as "ordinary memory" mediated by massive messenger particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Data-Driven Estimation Of Mutual Information Between Dependent Data | We consider the problem of estimating mutual information between dependent
data, an important problem in many science and engineering applications. We
propose a data-driven, non-parametric estimator of mutual information in this
paper. The main novelty of our solution lies in transforming the data to
frequency domain to make the problem tractable. We define a novel
metric--mutual information in frequency--to detect and quantify the dependence
between two random processes across frequency using Cramér's spectral
representation. Our solution calculates mutual information as a function of
frequency to estimate the mutual information between the dependent data over
time. We validate its performance on linear and nonlinear models. In addition,
mutual information in frequency estimated as a part of our solution can also be
used to infer cross-frequency coupling in the data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Wigner functions for gauge equivalence classes of unitary irreducible representations of noncommutative quantum mechanics | While Wigner functions forming phase space representation of quantum states
is a well-known fact, their construction for noncommutative quantum mechanics
(NCQM) remains relatively lesser known, in particular with respect to gauge
dependencies. This paper deals with the construction of Wigner functions of
NCQM for a system of 2-degrees of freedom using 2-parameter families of gauge
equivalence classes of unitary irreducible representations (UIRs) of the Lie
group $\g$ which has been identified as the kinematical symmetry group of NCQM
in an earlier paper. This general construction of Wigner functions for NCQM, in
turn, yields the special cases of Landau and symmetric gauges of NCQM.
| 0 | 0 | 1 | 0 | 0 | 0 |
Segmentation of Intracranial Arterial Calcification with Deeply Supervised Residual Dropout Networks | Intracranial carotid artery calcification (ICAC) is a major risk factor for
stroke, and might contribute to dementia and cognitive decline. Reliance on
time-consuming manual annotation of ICAC hampers much-needed further research
into the relationship between ICAC and neurological diseases. Automation of
ICAC segmentation is therefore highly desirable, but difficult due to the
proximity of the lesions to bony structures with a similar attenuation
coefficient. In this paper, we propose a method for automatic segmentation of
ICAC; the first to our knowledge. Our method is based on a 3D fully
convolutional neural network that we extend with two regularization techniques.
Firstly, we use deep supervision (hidden layers supervision) to encourage
discriminative features in the hidden layers. Secondly, we augment the network
with skip connections, as in the recently developed ResNet, and dropout layers,
inserted in a way that skip connections circumvent them. We investigate the
effect of skip connections and dropout. In addition, we propose a simple
problem-specific modification of the network objective function that restricts
the focus to the most important image regions and simplifies the optimization.
We train and validate our model using 882 CT scans and test on 1,000. Our
regularization techniques and objective improve the average Dice score by 7.1%,
yielding an average Dice of 76.2% and 97.7% correlation between predicted ICAC
volumes and manual annotations.
| 1 | 0 | 0 | 0 | 0 | 0 |
HTMoL: full-stack solution for remote access, visualization, and analysis of Molecular Dynamics trajectory data | The field of structural bioinformatics has seen significant advances with the
use of Molecular Dynamics (MD) simulations of biological systems. The MD
methodology has made it possible to explain and discover molecular mechanisms in a wide
range of natural processes. There is an impending need to readily share the
ever-increasing amount of MD data, which has been hindered by the lack of
specialized tools in the past. To solve this problem, we present HTMoL, a
state-of-the-art plug-in-free hardware-accelerated web application specially
designed to efficiently transfer and visualize raw MD trajectory files on a web
browser. Now, individual research labs can publish MD data on the Internet, or
use HTMoL to profoundly improve scientific reports by including supplemental MD
data in a journal publication. HTMoL can also be used as a visualization
interface to access MD trajectories generated on a high-performance computer
center directly.
Availability: HTMoL is available free of charge for academic use. All major
browsers are supported. A complete online documentation including instructions
for download, installation, configuration, and examples is available at the
HTMoL website this http URL. Supplementary data are available
online. Corresponding author: [email protected]
| 1 | 0 | 0 | 0 | 0 | 0 |
General auction method for real-valued optimal transport | The auction method developed by Bertsekas in the late 1970s is a relaxation
technique for solving integer-valued assignment problems. It resembles a
competitive bidding process, where unsatisfied persons (bidders) attempt to
claim the objects (lots) offering the best value. By transforming
integer-valued transport problems into assignment problems, the auction method
can be extended to compute optimal transport solutions. We propose a more
general auction method that can be applied directly to real-valued transport
problems. We prove termination and provide a priori error bounds for the
general auction method. Our numerical results indicate that the complexity of
the general auction is roughly comparable to that of the original auction
method, when the latter is applicable.
| 1 | 0 | 1 | 0 | 0 | 0 |
Force sensing with an optically levitated charged nanoparticle | Levitated optomechanics is showing potential for precise force measurements.
Here, we report a case study demonstrating experimentally the capability of such
a force sensor. We use an electric field to detect a Coulomb force applied to a
levitated nanosphere. We experimentally observe the spatial
displacement of up to 6.6 nm of the levitated nanosphere by imposing a DC
field. We further apply an AC field and demonstrate resonant enhancement of
force sensing when a driving frequency, $\omega_{AC}$, and the frequency of the
levitated mechanical oscillator, $\omega_0$, converge. We directly measure a
force of $(3.0 \pm 1.5) \times 10^{-20}$ N with a 10-second integration time, at a
centre of mass temperature of 3 K and at a pressure of $1.6 \times 10^{-5}$
mbar.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unveiling Eilenberg-type Correspondences: Birkhoff's Theorem for (finite) Algebras + Duality | The purpose of the present paper is to show that: Eilenberg-type
correspondences = Birkhoff's theorem for (finite) algebras + duality. We
consider algebras for a monad T on a category D and we study (pseudo)varieties
of T-algebras. Pseudovarieties of algebras are also known in the literature as
varieties of finite algebras. Two well-known theorems that characterize
varieties and pseudovarieties of algebras play an important role here:
Birkhoff's theorem and Birkhoff's theorem for finite algebras, the latter also
known as Reiterman's theorem. We prove, under mild assumptions, a categorical
version of Birkhoff's theorem for (finite) algebras to establish a one-to-one
correspondence between (pseudo)varieties of T-algebras and (pseudo)equational
T-theories. Now, if C is a category that is dual to D and B is the comonad on C
that is the dual of T, we get a one-to-one correspondence between
(pseudo)equational T-theories and their dual, (pseudo)coequational B-theories.
Particular instances of (pseudo)coequational B-theories have been already
studied in language theory under the name of "varieties of languages" to
establish Eilenberg-type correspondences. All in all, we get a one-to-one
correspondence between (pseudo)varieties of T-algebras and (pseudo)coequational
B-theories, which will be shown to be exactly the nature of Eilenberg-type
correspondences.
| 1 | 0 | 1 | 0 | 0 | 0 |
Detecting transit signatures of exoplanetary rings using SOAP3.0 | CONTEXT. It is theoretically possible for rings to have formed around
extrasolar planets in a similar way to that in which they formed around the
giant planets in our solar system. However, no such rings have been detected to
date.
AIMS: We aim to test the possibility of detecting rings around exoplanets by
investigating the photometric and spectroscopic ring signatures in
high-precision transit signals.
METHODS: The photometric and spectroscopic transit signals of a ringed planet
are expected to show deviations from those of a spherical planet. We used these
deviations to quantify the detectability of rings. We present SOAP3.0 which is
a numerical tool to simulate ringed planet transits and measure ring
detectability based on amplitudes of the residuals between the ringed planet
signal and best fit ringless model.
RESULTS: We find that it is possible to detect the photometric and
spectroscopic signature of near edge-on rings especially around planets with
high impact parameter. Time resolution $\leq$ 7 mins is required for the
photometric detection, while 15 mins is sufficient for the spectroscopic
detection. We also show that future instruments like CHEOPS and ESPRESSO, with
precisions that allow ring signatures to be well above their noise-level,
present good prospects for detecting rings.
| 0 | 1 | 0 | 0 | 0 | 0 |
MatlabCompat.jl: helping Julia understand Your Matlab/Octave Code | Scientific legacy code in MATLAB/Octave not compatible with modernization of
research workflows is vastly abundant throughout academic community.
Performance of non-vectorized code written in MATLAB/Octave represents a major
burden. A new programming language for technical computing Julia, promises to
address these issues. Although Julia syntax is similar to MATLAB/Octave,
porting code to Julia may be cumbersome for researchers. Here we present
MatlabCompat.jl - a library aimed at simplifying the conversion of your
MATLAB/Octave code to Julia. We show, using a simplistic image analysis use case,
that MATLAB/Octave code can be easily ported to high-performance Julia using
MatlabCompat.jl.
| 1 | 0 | 0 | 0 | 0 | 0 |
Extending the topological analysis and seeking the real-space subsystems in non-Coulombic systems with homogeneous potential energy functions | It is customary to conceive the interactions of all the constituents of a
molecular system, i.e. electrons and nuclei, as Coulombic. However, in a more
detailed analysis one may always find small but non-negligible non-Coulombic
interactions in molecular systems originating from the finite size of nuclei,
magnetic interactions, etc. While such small modifications of the Coulombic
interactions do not seem to alter the nature of a molecular system in real
world seriously, they are a serious obstacle for quantum chemical theories and
methodologies whose formalism is strictly confined to Coulombic
interactions. Although the quantum theory of atoms in molecules (QTAIM) has
been formulated originally for the Coulombic systems, some recent studies have
demonstrated that apart from basin energy of an atom in a molecule, its
theoretical ingredients are not sensitive to the explicit form of the potential
energy operator. In this study, it is demonstrated that the basin energy may be
defined not only for Coulombic systems but also for all real-space subsystems of
those systems that are described by any member of the set of the homogeneous
potential energy functions. On the other hand, this extension opens the door
for seeking novel real-space subsystems, apart from atoms in molecules, in
non-Coulombic systems. These novel real-space subsystems call for an extended
formalism that goes beyond the orthodox QTAIM, which is not confined to the
Coulombic systems nor to the atoms in molecules as the sole real-space
subsystems. It is termed the quantum theory of real-space open subsystems
(QTROS) and its potential applications are detailed. The harmonic trap model,
containing non-interacting fermions or bosons, is considered as an example for
the QTROS analysis. The QTROS analysis of bosonic systems, in particular, is
unprecedented, not having been attempted before.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Uncertainty Surrounding Coastal Flood Risk Projections: A Case Study for New Orleans | Future sea-level rise drives severe risks for many coastal communities.
Strategies to manage these risks hinge on a sound characterization of the
uncertainties. For example, recent studies suggest that large fractions of the
Antarctic ice sheet (AIS) may rapidly disintegrate in response to rising global
temperatures, leading to potentially several meters of sea-level rise during
the next few centuries. It is deeply uncertain, for example, whether such an
AIS disintegration will be triggered, how much this would increase sea-level
rise, whether extreme storm surges intensify in a warming climate, or which
emissions pathway future societies will choose. Here, we assess the impacts of
these deep uncertainties on projected flooding probabilities for a levee ring
in New Orleans, Louisiana. We use 18 scenarios, presenting probabilistic
projections within each one, to sample key deeply uncertain future projections
of sea-level rise, radiative forcing pathways, storm surge characterization,
and contributions from rapid AIS mass loss. The implications of these deep
uncertainties for projected flood risk are thus characterized by a set of 18
probability distribution functions. We use a global sensitivity analysis to
assess which mechanisms contribute to uncertainty in projected flood risk over
the course of a 50-year design life. In line with previous work, we find that
the uncertain storm surge drives the most substantial risk, followed by general
AIS dynamics, in our simple model for future flood risk for New Orleans.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bounds on the Approximation Power of Feedforward Neural Networks | The approximation power of general feedforward neural networks with piecewise
linear activation functions is investigated. First, lower bounds on the size of
a network are established in terms of the approximation error and network depth
and width. These bounds improve upon state-of-the-art bounds for certain
classes of functions, such as strongly convex functions. Second, an upper bound
is established on the difference of two neural networks with identical weights
but different activation functions.
| 0 | 0 | 0 | 1 | 0 | 0 |
Non-classification of free Araki-Woods factors and $τ$-invariants | We define the standard Borel space of free Araki-Woods factors and prove that
their isomorphism relation is not classifiable by countable structures. We also
prove that equality of $\tau$-topologies, arising as invariants of type III
factors, as well as cocycle and outer conjugacy of actions of abelian groups on
free product factors are not classifiable by countable structures.
| 0 | 0 | 1 | 0 | 0 | 0 |
Data-Driven Learning and Planning for Environmental Sampling | Robots such as autonomous underwater vehicles (AUVs) and autonomous surface
vehicles (ASVs) have been used for sensing and monitoring aquatic environments
such as oceans and lakes. Environmental sampling is a challenging task because
the environmental attributes to be observed can vary both spatially and
temporally, and the target environment is usually a large and continuous domain
whereas the sampling data is typically sparse and limited. The challenges
require that the sampling method must be informative and efficient enough to
catch up with the environmental dynamics. In this paper we present a planning
and learning method that enables a sampling robot to perform persistent
monitoring tasks by learning and refining a dynamic "data map" that models a
spatiotemporal environment attribute such as ocean salinity content. Our
environmental sampling framework consists of two components: to maximize the
information collected, we propose an informative planning component that
efficiently generates sampling waypoints that contain the maximal information;
to alleviate the computational bottleneck caused by the accumulation of
large-scale data, we develop a component based on a sparse Gaussian Process whose
hyperparameters are learned online by taking advantage of only a subset of data
that provides the greatest contribution. We validate our method with both
simulations running on real ocean data and field trials with an ASV in a lake
environment. Our experiments show that the proposed framework is both accurate
in learning the environmental data map and efficient in catching up with the
dynamic environmental changes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Global spectral graph wavelet signature for surface analysis of carpal bones | In this paper, we present a spectral graph wavelet approach for shape
analysis of carpal bones of human wrist. We apply a metric called global
spectral graph wavelet signature for representation of cortical surface of the
carpal bone based on eigensystem of Laplace-Beltrami operator. Furthermore, we
propose a heuristic and efficient way of aggregating local descriptors of a
carpal bone surface to global descriptor. The resultant global descriptor is
not only isometric invariant, but also much more efficient and requires less
memory storage. We perform experiments on shape of the carpal bones of ten
women and ten men from a publicly-available database. Experimental results show
the superiority of the proposed GSGW over the recently proposed GPS embedding
approach for comparing shapes of the carpal bones across populations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Rates of Regression with q-norm Loss and Threshold | This paper studies some robust regression problems associated with the
$q$-norm loss ($q\ge1$) and the $\epsilon$-insensitive $q$-norm loss in the
reproducing kernel Hilbert space. We establish a variance-expectation bound
under a priori noise condition on the conditional distribution, which is the
key technique to measure the error bound. Explicit learning rates will be given
under the approximation ability assumptions on the reproducing kernel Hilbert
space.
| 0 | 0 | 1 | 1 | 0 | 0 |
Instability, rupture and fluctuations in thin liquid films: Theory and computations | Thin liquid films are ubiquitous in natural phenomena and technological
applications. They have been extensively studied via deterministic hydrodynamic
equations, but thermal fluctuations often play a crucial role that needs to be
understood. An example of this is dewetting, which involves the rupture of a
thin liquid film and the formation of droplets. Such a process is thermally
activated and requires fluctuations to be taken into account self-consistently.
In this work we present an analytical and numerical study of a stochastic
thin-film equation derived from first principles. Following a brief review of
the derivation, we scrutinise the behaviour of the equation in the limit of
perfectly correlated noise along the wall-normal direction. The stochastic
thin-film equation is also simulated by adopting a numerical scheme based on a
spectral collocation method. The scheme allows us to explore the fluctuating
dynamics of the thin film and the behaviour of its free energy in the vicinity
of rupture. Finally, we also study the effect of the noise intensity on the
rupture time, which is in agreement with previous works.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fractional Patlak-Keller-Segel equations for chemotactic superdiffusion | The long range movement of certain organisms in the presence of a
chemoattractant can be governed by long distance runs, according to an
approximate Lévy distribution. This article clarifies the form of biologically
relevant model equations: We derive Patlak-Keller-Segel-like equations
involving nonlocal, fractional Laplacians from a microscopic model for cell
movement. Starting from a power-law distribution of run times, we derive a
kinetic equation in which the collision term takes into account the long range
behaviour of the individuals. A fractional chemotactic equation is obtained in
a biologically relevant regime. Apart from chemotaxis, our work has
implications for biological diffusion in numerous processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Risk Bounds in Univariate Total Variation Denoising and Trend Filtering | We study trend filtering, a relatively recent method for univariate
nonparametric regression. For a given positive integer $r$, the $r$-th order
trend filtering estimator is defined as the minimizer of the sum of squared
errors when we constrain (or penalize) the sum of the absolute $r$-th order
discrete derivatives of the fitted function at the design points. For $r=1$,
the estimator reduces to total variation regularization which has received much
attention in the statistics and image processing literature. In this paper, we
study the performance of the trend filtering estimator for every positive
integer $r$, both in the constrained and penalized forms. Our main results show
that in the strong sparsity setting when the underlying function is a
(discrete) spline with few "knots", the risk (under the global squared error
loss) of the trend filtering estimator (with an appropriate choice of the
tuning parameter) achieves the parametric $n^{-1}$ rate, up to a logarithmic
(multiplicative) factor. Our results therefore provide support for the use of
trend filtering, for every $r$, in the strong sparsity setting.
| 0 | 0 | 1 | 1 | 0 | 0 |
Assessment of First-Principles and Semiempirical Methodologies for Absorption and Emission Energies of Ce$^{3+}$-Doped Luminescent Materials | In search of a reliable methodology for the prediction of light absorption
and emission of Ce$^{3+}$-doped luminescent materials, 13 representative
materials are studied with first-principles and semiempirical approaches. In
the first-principles approach, that combines constrained density-functional
theory and $\Delta$SCF, the atomic positions are obtained for both ground and
excited states of the Ce$^{3+}$ ion. The structural information is fed into
Dorenbos' semiempirical model. Absorption and emission energies are calculated
with both methods and compared with experiment. The first-principles approach
matches experiment within 0.3 eV, with two exceptions at 0.5 eV. In contrast,
the semiempirical approach does not perform as well (usually more than 0.5 eV
error). The general applicability of the present first-principles scheme, with
an encouraging predictive power, opens a novel avenue for crystal site
engineering and high-throughput search for new phosphors and scintillators.
| 0 | 1 | 0 | 0 | 0 | 0 |
Regularizing deep networks using efficient layerwise adversarial training | Adversarial training has been shown to regularize deep neural networks in
addition to increasing their robustness to adversarial examples. However, its
impact on very deep state of the art networks has not been fully investigated.
In this paper, we present an efficient approach to perform adversarial training
by perturbing intermediate layer activations and study the use of such
perturbations as a regularizer during training. We use these perturbations to
train very deep models such as ResNets and show improvement in performance both
on adversarial and original test data. Our experiments highlight the benefits
of perturbing intermediate layer activations compared to perturbing only the
inputs. The results on CIFAR-10 and CIFAR-100 datasets show the merits of the
proposed adversarial training approach. Additional results on WideResNets show
that our approach provides significant improvement in classification accuracy
for a given base model, outperforming dropout and other base models of larger
size.
| 1 | 0 | 0 | 1 | 0 | 0 |
Improving Dynamic Analysis of Android Apps Using Hybrid Test Input Generation | The Android OS has become the most popular mobile operating system leading to
a significant increase in the spread of Android malware. Consequently, several
static and dynamic analysis systems have been developed to detect Android
malware. With dynamic analysis, efficient test input generation is needed in
order to trigger the potential run-time malicious behaviours. Most existing
dynamic analysis systems employ random-based input generation methods usually
built using the Android Monkey tool. Random-based input generation has several
shortcomings including limited code coverage, which motivates us to explore
combining it with a state-based method in order to improve efficiency. Hence,
in this paper, we present a novel hybrid test input generation approach
designed to improve dynamic analysis on real devices. We implemented the hybrid
system by integrating a random based tool (Monkey) with a state based tool
(DroidBot) in order to improve code coverage and potentially uncover more
malicious behaviours. The system is evaluated using 2,444 Android apps
containing 1,222 benign and 1,222 malware samples from the Android malware genome
project. Three scenarios, random only, state-based only, and our proposed
hybrid approach were investigated to comparatively evaluate their performances.
Our study shows that the hybrid approach significantly improved the amount of
dynamic features extracted from both benign and malware samples over the
state-based and commonly used random test input generation method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Process Monitoring Using Maximum Sequence Divergence | Process Monitoring involves tracking a system's behaviors, evaluating the
current state of the system, and discovering interesting events that require
immediate actions. In this paper, we consider monitoring temporal system state
sequences to help detect the changes of dynamic systems, check the divergence
of the system development, and evaluate the significance of the deviation. We
begin with discussions of data reduction, symbolic data representation, and
anomaly detection in temporal discrete sequences. Time-series representation
methods are also discussed and used in this paper to discretize raw data into
sequences of system states. Markov Chains and stationary state distributions
are continuously generated from temporal sequences to represent snapshots of
the system dynamics in different time frames. We use generalized Jensen-Shannon
Divergence as the measure to monitor changes of the stationary symbol
probability distributions and evaluate the significance of system deviations.
We prove that the proposed approach is able to detect deviations of the systems
we monitor and to assess the deviation significance in a probabilistic manner.
| 0 | 0 | 0 | 1 | 0 | 0 |
Some generalizations of Kannan's theorems via $σ_c$-function | In this article we discuss various proper extensions of
Kannan's two different fixed point theorems, introducing the new concept of a
$\sigma_c$-function, which is independent of the three notions of simulation
functions, manageable functions and R-functions. These results are analogous to
some well-known theorems and extend several known results in the literature.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Exact Sequences of the Rigid Fibrations | In 2002, Biss investigated a kind of fibration called a rigid
covering fibration (which we rename a rigid fibration), with properties similar
to covering spaces. In this paper, we obtain a relation between arbitrary
topological spaces and their rigid fibrations. Using this relation we obtain a
commutative diagram of homotopy groups and quasitopological homotopy groups and
deduce some results in this field.
| 0 | 0 | 1 | 0 | 0 | 0 |
Analytic Gradients for Complete Active Space Pair-Density Functional Theory | Analytic gradient routines are a desirable feature for quantum mechanical
methods, allowing for efficient determination of equilibrium and transition
state structures and several other molecular properties. In this work, we
present analytical gradients for multiconfiguration pair-density functional
theory (MC-PDFT) when used with a state-specific complete active space
self-consistent field reference wave function. Our approach constructs a
Lagrangian that is variational in all wave function parameters. We find that
MC-PDFT locates equilibrium geometries for several small- to medium-sized
organic molecules that are similar to those located by complete active space
second-order perturbation theory but that are obtained with decreased
computational cost.
| 0 | 1 | 0 | 0 | 0 | 0 |
Northern sky Galactic Cosmic Ray anisotropy between 10-1000 TeV with the Tibet Air Shower Array | We report the analysis of the $10-1000$ TeV large-scale sidereal anisotropy
of Galactic cosmic rays (GCRs) with the data collected by the Tibet Air Shower
Array from October, 1995 to February, 2010. In this analysis, we improve the
energy estimate and extend the declination range down to $-30^{\circ}$. We find
that the anisotropy maps above 100 TeV are distinct from that at multi-TeV
band. The so-called "tail-in" and "loss-cone" features identified at low
energies get less significant and a new component appears at $\sim100$ TeV. The
spatial distribution of the GCR intensity with an excess (7.2$\sigma$
pre-trial, 5.2$\sigma$ post-trial) and a deficit ($-5.8\sigma$ pre-trial) is
observed in the 300 TeV anisotropy map, in good agreement with IceCube's
results at 400 TeV. Combining the Tibet results in the northern sky with
IceCube's results in the southern sky, we establish a full-sky picture of the
anisotropy in hundreds of TeV band. We further find that the amplitude of the
first order anisotropy increases sharply above $\sim100$ TeV, indicating a new
component of the anisotropy. All these results may shed new light on
understanding the origin and propagation of GCRs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Count-ception: Counting by Fully Convolutional Redundant Counting | Counting objects in digital images is a process that should be replaced by
machines. This tedious task is time consuming and prone to errors due to
fatigue of human annotators. The goal is to have a system that takes as input
an image and returns a count of the objects inside and justification for the
prediction in the form of object localization. We repose a problem, originally
posed by Lempitsky and Zisserman, to instead predict a count map which contains
redundant counts based on the receptive field of a smaller regression network.
The regression network predicts a count of the objects that exist inside this
frame. By processing the image in a fully convolutional way, each pixel is
accounted for a number of times equal to the number of windows that include it,
which is the size of each window (i.e., 32x32 = 1024). To recover the true
count we take the average over the redundant predictions. Our contribution is
redundant counting instead of predicting a density map in order to average over
errors. We also propose a novel deep neural network architecture adapted from
the Inception family of networks called the Count-ception network. Together our
approach results in a 20% relative improvement (2.9 to 2.3 MAE) over the state
of the art method by Xie, Noble, and Zisserman in 2016.
| 1 | 0 | 0 | 1 | 0 | 0 |
A robust inverse scattering transform for the focusing nonlinear Schrödinger equation | We propose a modification of the standard inverse scattering transform for
the focusing nonlinear Schrödinger equation (also other equations by natural
generalization) formulated with nonzero boundary conditions at infinity. The
purpose is to deal with arbitrary-order poles and potentially severe spectral
singularities in a simple and unified way. As an application, we use the
modified transform to place the Peregrine solution and related higher-order
"rogue wave" solutions in an inverse-scattering context for the first time.
This allows one to directly study properties of these solutions such as their
dynamical or structural stability, or their asymptotic behavior in the limit of
high order. The modified transform method also allows rogue waves to be
generated on top of other structures by elementary Darboux transformations,
rather than the generalized Darboux transformations in the literature or other
related limit processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Admissible Bayes equivariant estimation of location vectors for spherically symmetric distributions with unknown scale | This paper investigates estimation of the mean vector under invariant
quadratic loss for a spherically symmetric location family with a residual
vector with density of the form $
f(x,u)=\eta^{(p+n)/2}f(\eta\{\|x-\theta\|^2+\|u\|^2\}) $, where $\eta$ is
unknown. We show that the natural estimator $x$ is admissible for $p=1,2$.
Also, for $p\geq 3$, we find classes of generalized Bayes estimators that are
admissible within the class of equivariant estimators of the form
$\{1-\xi(x/\|u\|)\}x$. In the Gaussian case, a variant of the James--Stein
estimator, $[1-\{(p-2)/(n+2)\}/\{\|x\|^2/\|u\|^2+(p-2)/(n+2)+1\}]x$, which
dominates the natural estimator $x$, is also admissible within this class. We
also study the related regression model.
| 0 | 0 | 1 | 1 | 0 | 0 |
A proof theoretic study of abstract termination principles | We define a variety of abstract termination principles which form
generalisations of simplification orders, and investigate their computational
content. Simplification orders, which include the well-known multiset and
lexicographic path orderings, are important techniques for proving that
computer programs terminate. Moreover, an analysis of the proofs that these
orders are wellfounded can yield additional quantitative information: namely an
upper bound on the complexity of programs reducing under these orders. In this
paper we focus on extracting computational content from the typically
non-constructive wellfoundedness proofs of termination orders, with an eye
towards the establishment of general metatheorems which characterise bounds on
the derivational complexity induced by these orders. However, ultimately we
have a much broader goal, which is to explore a number of deep mathematical
concepts which underlie termination orders, including minimal-bad-sequence
constructions, modified realizability and bar recursion. We aim to describe how
these concepts all come together to form a particularly elegant illustration of
the bridge between proofs and programs.
| 1 | 0 | 1 | 0 | 0 | 0 |
Why exomoons must be rare? | The search for satellites of exoplanets (exomoons) has been
discussed recently. There are very many satellites in our Solar System. But in
contrast to our Solar System, exoplanets often have significant orbital
eccentricities. In the process of planetary migration, exoplanets can cross
resonances, with subsequent growth of their orbital eccentricity. The stability
of exomoons then decreases, and many satellites are lost. Here we give a simple
example of satellite loss when the eccentricity increases. Finally, we conclude
that exomoons must be rare due to the observed large eccentricities of exoplanets.
| 0 | 1 | 0 | 0 | 0 | 0 |
Propagation of regularity in $L^p$-spaces for Kolmogorov type hypoelliptic operators | Consider the following Kolmogorov type hypoelliptic operator $$ \mathscr
L_t:=\mbox{$\sum_{j=2}^n$}x_j\cdot\nabla_{x_{j-1}}+{\rm Tr} (a_t
\cdot\nabla^2_{x_n}), $$ where $n\geq 2$, $x=(x_1,\cdots,x_n)\in(\mathbb R^d)^n
=\mathbb R^{nd}$ and $a_t$ is a time-dependent constant symmetric $d\times
d$-matrix that is uniformly elliptic and bounded. Let $\{\mathcal T_{s,t};
t\geq s\}$ be the time-dependent semigroup associated with $\mathscr L_t$; that
is, $\partial_s {\mathcal T}_{s, t} f = - {\mathscr L}_s {\mathcal T}_{s, t}f$.
For any $p\in(1,\infty)$, we show that there is a constant $C=C(p,n,d)>0$ such
that for any $f(t, x)\in L^p(\mathbb R \times \mathbb R^{nd})=L^p(\mathbb
R^{1+nd})$ and every $\lambda \geq 0$, $$
\left\|\Delta_{x_j}^{1/(1+2(n-j))}\int^{\infty}_0 e^{-\lambda t} {\mathcal
T}_{s, s+t }f(t+s, x)dt\right\|_p\leq C\|f\|_p,\quad j=1,\cdots, n, $$ where
$\|\cdot\|_p$ is the usual $L^p$-norm in $L^p(\mathbb R^{1+nd}; d s\times d
x)$. To show this type of estimates, we first study the propagation of
regularity in $L^2$-space from variable $x_n$ to $x_1$ for the solution of the
transport equation $\partial_t u+\sum_{j=2}^nx_j\cdot\nabla_{x_{j-1}} u=f$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Relation between Constraint Answer Set Programming and Satisfiability Modulo Theories | Constraint answer set programming is a promising research direction that
integrates answer set programming with constraint processing. It is often
informally related to the field of satisfiability modulo theories. Yet, the
exact formal link is obscured as the terminology and concepts used in these two
research areas differ. In this paper, we connect these two research areas by
uncovering the precise formal relation between them. We believe that this work
will boost the cross-fertilization of the theoretical foundations and the
existing solving methods in both areas. As a step in this direction we provide
a translation from constraint answer set programs with integer linear
constraints to satisfiability modulo linear integer arithmetic that paves the
way to utilizing modern satisfiability modulo theories solvers for computing
answer sets of constraint answer set programs.
| 1 | 0 | 0 | 0 | 0 | 0 |
The w-effect in interferometric imaging: from a fast sparse measurement operator to super-resolution | Modern radio telescopes, such as the Square Kilometre Array (SKA), will probe
the radio sky over large fields-of-view, which results in large w-modulations
of the sky image. This effect complicates the relationship between the measured
visibilities and the image under scrutiny. In algorithmic terms, it gives rise
to massive memory and computational time requirements. Yet, it can be a
blessing in terms of reconstruction quality of the sky image. In recent years,
several works have shown that large w-modulations promote the spread spectrum
effect. Within the compressive sensing framework, this effect increases the
incoherence between the sensing basis and the sparsity basis of the signal to
be recovered, leading to better estimation of the sky image. In this article,
we revisit the w-projection approach using convex optimisation in realistic
settings, where the measurement operator couples the w-terms in Fourier and the
de-gridding kernels. We provide sparse, thus fast, models of the Fourier part
of the measurement operator through adaptive sparsification procedures.
Consequently, memory requirements and computational cost are significantly
alleviated, at the expense of introducing errors on the radio-interferometric
data model. We present a first investigation of the impact of the sparse
variants of the measurement operator on the image reconstruction quality. We
finally analyse the interesting super-resolution potential associated with the
spread spectrum effect of the w-modulation, and showcase it through
simulations. Our C++ code is available online on GitHub.
| 0 | 1 | 0 | 0 | 0 | 0 |
Large Scale Graph Learning from Smooth Signals | Graphs are a prevalent tool in data science, as they model the inherent
structure of the data. They have been used successfully in unsupervised and
semi-supervised learning. Typically they are constructed either by connecting
nearest samples, or by learning them from data, solving an optimization
problem. While graph learning does achieve a better quality, it also comes with
a higher computational cost. In particular, the current state-of-the-art model
cost is $\mathcal{O}(n^2)$ for $n$ samples. In this paper, we show how to scale
it, obtaining an approximation with leading cost of $\mathcal{O}(n\log(n))$,
with quality that approaches the exact graph learning model. Our algorithm uses
known approximate nearest neighbor techniques to reduce the number of
variables, and automatically selects the correct parameters of the model,
requiring a single intuitive input: the desired edge density.
| 1 | 0 | 0 | 1 | 0 | 0 |
Power Plant Performance Modeling with Concept Drift | Power plant is a complex and nonstationary system for which the traditional
machine learning modeling approaches fall short of expectations. The
ensemble-based online learning methods provide an effective way to continuously
learn from the dynamic environment and autonomously update models to respond to
environmental changes. This paper proposes such an online ensemble regression
approach to model power plant performance, which is critically important for
operation optimization. The experimental results on both simulated and real
data show that the proposed method can achieve performance with less than 1%
mean absolute percentage error, which meets the general expectations in field
operations.
| 1 | 0 | 0 | 1 | 0 | 0 |
Does putting your emotions into words make you feel better? Measuring the minute-scale dynamics of emotions from online data | Studies of affect labeling, i.e. putting your feelings into words, indicate
that it can attenuate positive and negative emotions. Here we track the
evolution of individual emotions for tens of thousands of Twitter users by
analyzing the emotional content of their tweets before and after they
explicitly report having a strong emotion. Our results reveal how emotions and
their expression evolve at the temporal resolution of one minute. While the
expression of positive emotions is preceded by a short but steep increase in
positive valence and followed by short decay to normal levels, negative
emotions build up more slowly, followed by a sharp reversal to previous levels,
matching earlier findings of the attenuating effects of affect labeling. We
estimate that positive and negative emotions last approximately 1.25 and 1.5
hours from onset to evanescence. A separate analysis for male and female
subjects is suggestive of possible gender-specific differences in emotional
dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Axiomatizing Epistemic Logic of Friendship via Tree Sequent Calculus | This paper positively solves the open problem of whether it is possible to
provide a Hilbert system for the Epistemic Logic of Friendship (EFL) of Seligman, Girard and
Liu. To find a Hilbert system, we first introduce a sound, complete and
cut-free tree (or nested) sequent calculus for EFL, which is an integrated
combination of Seligman's sequent calculus for basic hybrid logic and a tree
sequent calculus for modal logic. Then we translate a tree sequent into an
ordinary formula to specify a Hilbert system of EFL and finally show that our
Hilbert system is sound and complete for the intended two-dimensional
semantics.
| 1 | 0 | 1 | 0 | 0 | 0 |
A novel approach to the Lindelöf hypothesis | Lindelöf's hypothesis, one of the most important open problems in the
history of mathematics, states that for large $t$, Riemann's zeta function
$\zeta(\frac{1}{2}+it)$ is of order $O(t^{\varepsilon})$ for any
$\varepsilon>0$. It is well known that for large $t$, the leading order
asymptotics of the Riemann zeta function can be expressed in terms of a
transcendental exponential sum. The usual approach to the Lindelöf hypothesis
involves the use of ingenious techniques for the estimation of this sum.
However, since such estimates cannot yield an asymptotic formula for the above
sum, it appears that this approach cannot lead to the proof of the Lindelöf
hypothesis. Here, a completely different approach is introduced: the Riemann
zeta function is embedded in a classical problem in the theory of complex
analysis known as a Riemann-Hilbert problem, and then, the large
$t$-asymptotics of the associated integral equation is formally computed. This
yields two different results. First, the formal proof that a certain Riemann
zeta-type double exponential sum satisfies the asymptotic estimate of the
Lindelöf hypothesis. Second, it is formally shown that the sum of
$|\zeta(1/2+it)|^2$ and of a certain sum which depends on $\epsilon$, satisfies
for large $t$ the estimate of the Lindelöf hypothesis. Hence, since the above
identity is valid for all $\epsilon$, this asymptotic identity suggests the
validity of Lindelöf's hypothesis. The completion of the rigorous derivation
of the above results will be presented in a companion paper.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Overview of Recent Solutions to and Lower Bounds for the Firing Synchronization Problem | Complex systems in a wide variety of areas such as biological modeling, image
processing, and language recognition can be modeled using networks of very
simple machines called finite automata. Connecting subsystems modeled using
finite automata into a network allows for more computational power. One such
network, called a cellular automaton, consists of an n-dimensional array for n
> 1 with a single finite automaton located at each point of the array. One of
the oldest problems associated with cellular automata is the firing
synchronization problem, originally proposed by John Myhill in 1957. As with
any long-standing problem, there are a large number of solutions to the firing
synchronization problem. Our goal, and the contribution of this work, is to
summarize recent solutions to the problem. We focus primarily on solutions to
the original problem, that is, the problem where the network is a
one-dimensional array and there is a single initiator located at one of the
ends. We summarize both minimal-time and non-minimal-time solutions, with an
emphasis on solutions that were published after 1998. We also focus on
solutions that minimize the number of states required by the finite automata.
In the process we also identify open problems that remain in terms of finding
minimal-state solutions to the firing synchronization problem.
| 1 | 1 | 0 | 0 | 0 | 0 |
On Tackling the Limits of Resolution in SAT Solving | The practical success of Boolean Satisfiability (SAT) solvers stems from the
CDCL (Conflict-Driven Clause Learning) approach to SAT solving. However, from a
propositional proof complexity perspective, CDCL is no more powerful than the
resolution proof system, for which many hard examples exist. This paper
proposes a new problem transformation, which enables reducing the decision
problem for formulas in conjunctive normal form (CNF) to the problem of solving
maximum satisfiability over Horn formulas. Given the new transformation, the
paper proves a polynomial bound on the number of MaxSAT resolution steps for
pigeonhole formulas. This result is in clear contrast with earlier results on
the length of proofs of MaxSAT resolution for pigeonhole formulas. The paper
also establishes the same polynomial bound in the case of modern core-guided
MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard
for CDCL SAT solvers, show that these can be efficiently solved with modern
MaxSAT solvers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Penalized Estimation in Additive Regression with High-Dimensional Data | Additive regression provides an extension of linear regression by modeling
the signal of a response as a sum of functions of covariates of relatively low
complexity. We study penalized estimation in high-dimensional nonparametric
additive regression where functional semi-norms are used to induce smoothness
of component functions and the empirical $L_2$ norm is used to induce sparsity.
The functional semi-norms can be of Sobolev or bounded variation types and are
allowed to be different amongst individual component functions. We establish
new oracle inequalities for the predictive performance of such methods under
three simple technical conditions: a sub-gaussian condition on the noise, a
compatibility condition on the design and the functional classes under
consideration, and an entropy condition on the functional classes. For random
designs, the sample compatibility condition can be replaced by its population
version under an additional condition to ensure suitable convergence of
empirical norms. In homogeneous settings where the complexities of the
component functions are of the same order, our results provide a spectrum of
explicit convergence rates, from the so-called slow rate without requiring the
compatibility condition to the fast rate under the hard sparsity or certain
$L_q$ sparsity to allow many small components in the true regression function.
These results significantly broaden and sharpen existing ones in the
literature.
| 0 | 0 | 1 | 1 | 0 | 0 |
Towards Quality Advancement of Underwater Machine Vision with Generative Adversarial Networks | Underwater machine vision has attracted significant attention, but its low
quality has prevented it from a wide range of applications. Although many
different algorithms have been developed to solve this problem, real-time
adaptive methods are frequently deficient. In this paper, based on filtering
and the use of generative adversarial networks (GANs), two approaches are
proposed for the aforementioned issue, i.e., a filtering-based restoration
scheme (FRS) and a GAN-based restoration scheme (GAN-RS). Distinct from
previous methods, FRS restores underwater images in the Fourier domain, which
is composed of a parameter search, filtering, and enhancement. Aiming to
further improve the image quality, GAN-RS can adaptively restore underwater
machine vision in real time without the need for pretreatment. In particular,
information in the Lab color space and the dark channel is developed as loss
functions, namely, underwater index loss and dark channel prior loss,
respectively. More specifically, learning from the underwater index, the
discriminator is equipped with a carefully crafted underwater branch to predict
the underwater probability of an image. A multi-stage loss strategy is then
developed to guarantee the effective training of GANs. Through extensive
comparisons on the image quality and applications, the superiority of the
proposed approaches is confirmed. Consequently, the GAN-RS is considerably
faster and achieves a state-of-the-art performance in terms of the color
correction, contrast stretch, dehazing, and feature restoration of various
underwater scenes. The source code will be made available.
| 1 | 0 | 0 | 0 | 0 | 0 |
Linking Sketches and Diagrams to Source Code Artifacts | Recent studies have shown that sketches and diagrams play an important role
in the daily work of software developers. If these visual artifacts are
archived, they are often detached from the source code they document, because
there is no adequate tool support to assist developers in capturing, archiving,
and retrieving sketches related to certain source code artifacts. This paper
presents SketchLink, a tool that aims at increasing the value of sketches and
diagrams created during software development by supporting developers in these
tasks. Our prototype implementation provides a web application that employs the
camera of smartphones and tablets to capture analog sketches, but can also be
used on desktop computers to upload, for instance, computer-generated diagrams.
We also implemented a plugin for a Java IDE that embeds the links in Javadoc
comments and visualizes them in situ in the source code editor as graphical
icons.
| 1 | 0 | 0 | 0 | 0 | 0 |
Parsimonious Data: How a single Facebook like predicts voting behaviour in multiparty systems | Recently, two influential PNAS papers have shown how our preferences for
'Hello Kitty' and 'Harley Davidson', obtained through Facebook likes, can
accurately predict details about our personality, religiosity, political
attitude and sexual orientation (Kosinski et al. 2013; Youyou et al. 2015). In
this paper, we make the claim that though the wide variety of Facebook likes
might predict such personal traits, even more accurate and generalizable
results can be reached by applying a context-specific, parsimonious data
strategy. We built this claim by predicting present day voter intention based
solely on likes directed toward posts from political actors. Combining the
online and offline, we join a subsample of surveyed respondents to their public
Facebook activity and apply machine learning classifiers to explore the link
between their political liking behaviour and actual voting intention. Through
this work, we show how even a single well-chosen Facebook like can reveal as
much about our political voting intention as hundreds of random likes. Further,
by including the entire political like history of the respondents, our model
reaches prediction accuracies above previous multiparty studies (60-70%). We
conclude the paper by discussing how a parsimonious data strategy, applied with
some limitations, allows us to generalize our findings to the 1.4 million Danes
with at least one political like and even to other political multiparty
systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Hidden Vulnerability of Distributed Learning in Byzantium | While machine learning is going through an era of celebrated success,
concerns have been raised about the vulnerability of its backbone: stochastic
gradient descent (SGD). Recent approaches have been proposed to ensure the
robustness of distributed SGD against adversarial (Byzantine) workers sending
poisoned gradients during the training phase. Some of these approaches have
been proven Byzantine-resilient: they ensure the convergence of SGD despite the
presence of a minority of adversarial workers.
We show in this paper that convergence is not enough. In high dimension $d
\gg 1$, an adversary can build on the loss function's non-convexity to make
SGD converge to ineffective models. More precisely, we bring to light that
existing Byzantine-resilient schemes leave a margin of poisoning of
$\Omega\left(f(d)\right)$, where $f(d)$ increases at least like $\sqrt{d~}$.
Based on this leeway, we build a simple attack, and experimentally show its
strong to utmost effectiveness on CIFAR-10 and MNIST.
We introduce Bulyan, and prove it significantly reduces the attacker's leeway
to a narrow $O( \frac{1}{\sqrt{d~}})$ bound. We empirically show that Bulyan
does not suffer the fragility of existing aggregation rules and, at a
reasonable cost in terms of required batch size, achieves convergence as if
only non-Byzantine gradients had been used to update the model.
| 0 | 0 | 0 | 1 | 0 | 0 |
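As a hedged illustration of the robust-aggregation idea discussed above (not the Bulyan rule itself, whose construction is given in the paper), a coordinate-wise median can replace plain gradient averaging, so that a minority of poisoned gradients cannot drag any coordinate arbitrarily far:

```python
from statistics import median

def coordinatewise_median(gradients):
    """Aggregate worker gradients by taking the median in each coordinate.

    A simple Byzantine-robust alternative to plain averaging: as long as
    honest workers form a majority, each aggregated coordinate stays within
    the range of honest reports.
    """
    dim = len(gradients[0])
    return [median(g[i] for g in gradients) for i in range(dim)]

# Three honest workers report similar gradients; one Byzantine worker
# reports a poisoned gradient of huge magnitude. The median ignores it.
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
byzantine = [[1e6, -1e6]]
agg = coordinatewise_median(honest + byzantine)
```

The aggregated gradient stays near the honest cluster despite the poisoned report.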
walk2friends: Inferring Social Links from Mobility Profiles | The development of positioning technologies has resulted in an increasing
amount of mobility data being available. While bringing a lot of convenience to
people's life, such availability also raises serious concerns about privacy. In
this paper, we concentrate on one of the most sensitive information that can be
inferred from mobility data, namely social relationships. We propose a novel
social relation inference attack that relies on an advanced feature learning
technique to automatically summarize users' mobility features. Compared to
existing approaches, our attack is able to predict any two individuals' social
relation, and it does not require the adversary to have any prior knowledge on
existing social relations. These advantages significantly increase the
applicability of our attack and the scope of the privacy assessment. Extensive
experiments conducted on a large dataset demonstrate that our inference attack
is effective, and achieves a 13% to 20% improvement over the best
state-of-the-art scheme. We propose three defense mechanisms -- hiding,
replacement and generalization -- and evaluate their effectiveness for
mitigating the social link privacy risks stemming from mobility data sharing.
Our experimental results show that both hiding and replacement mechanisms
outperform generalization. Moreover, hiding and replacement achieve a
comparable trade-off between utility and privacy, the former preserving better
utility and the latter providing better privacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
An Optimal Algorithm for Online Unconstrained Submodular Maximization | We consider a basic problem at the interface of two fundamental fields:
submodular optimization and online learning. In the online unconstrained
submodular maximization (online USM) problem, there is a universe
$[n]=\{1,2,...,n\}$ and a sequence of $T$ nonnegative (not necessarily
monotone) submodular functions arrive over time. The goal is to design a
computationally efficient online algorithm, which chooses a subset of $[n]$ at
each time step as a function only of the past, such that the accumulated value
of the chosen subsets is as close as possible to the maximum total value of a
fixed subset in hindsight. Our main result is a polynomial-time no-$1/2$-regret
algorithm for this problem, meaning that for every sequence of nonnegative
submodular functions, the algorithm's expected total value is at least $1/2$
times that of the best subset in hindsight, up to an error term sublinear in
$T$. The factor of $1/2$ cannot be improved upon by any polynomial-time online
algorithm when the submodular functions are presented as value oracles.
Previous work on the offline problem implies that picking a subset uniformly at
random in each time step achieves zero $1/4$-regret.
A byproduct of our techniques is an explicit subroutine for the two-experts
problem that has an unusually strong regret guarantee: the total value of its
choices is comparable to twice the total value of either expert on rounds it
did not pick that expert. This subroutine may be of independent interest.
| 0 | 0 | 0 | 1 | 0 | 0 |
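The no-$1/2$-regret guarantee stated above can be written explicitly, with $S_t$ the algorithm's chosen subset at step $t$ (a standard formalization, not quoted from the paper):

```latex
\mathbb{E}\left[ \sum_{t=1}^{T} f_t(S_t) \right]
  \;\ge\; \frac{1}{2} \max_{S \subseteq [n]} \sum_{t=1}^{T} f_t(S) \;-\; o(T).
```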
Modeling Grasp Motor Imagery through Deep Conditional Generative Models | Grasping is a complex process involving knowledge of the object, the
surroundings, and of oneself. While humans are able to integrate and process
all of the sensory information required for performing this task, equipping
machines with this capability is an extremely challenging endeavor. In this
paper, we investigate how deep learning techniques can allow us to translate
high-level concepts such as motor imagery to the problem of robotic grasp
synthesis. We explore a paradigm based on generative models for learning
integrated object-action representations, and demonstrate its capacity for
capturing and generating multimodal, multi-finger grasp configurations on a
simulated grasping dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Designing an Effective Metric Learning Pipeline for Speaker Diarization | State-of-the-art speaker diarization systems utilize knowledge from external
data, in the form of a pre-trained distance metric, to effectively determine
relative speaker identities in unseen data. However, much of the recent focus has
been on choosing the appropriate feature extractor, ranging from pre-trained
$i-$vectors to representations learned via different sequence modeling
architectures (e.g. 1D-CNNs, LSTMs, attention models), while adopting
off-the-shelf metric learning solutions. In this paper, we argue that,
regardless of the feature extractor, it is crucial to carefully design a metric
learning pipeline, namely the loss function, the sampling strategy and the
discriminative margin parameter, for building robust diarization systems.
Furthermore, we propose to adopt a fine-grained validation process to obtain a
comprehensive evaluation of the generalization power of metric learning
pipelines. To this end, we measure diarization performance across different
language speakers, and variations in the number of speakers in a recording.
Using empirical studies, we provide interesting insights into the effectiveness
of different design choices and make recommendations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mining Application-aware Community Organization with Expanded Feature Subspaces from Concerned Attributes in Social Networks | Social networks are typical attributed networks with node attributes.
Different from traditional attribute community detection problem aiming at
obtaining the whole set of communities in the network, we study an
application-oriented problem of mining an application-aware community
organization with respect to specific concerned attributes. The concerned
attributes are designated based on the requirements of any application by a
user in advance. The application-aware community organization w.r.t. concerned
attributes consists of the communities with feature subspaces containing these
concerned attributes. Besides concerned attributes, feature subspace of each
required community may contain some other relevant attributes. All relevant
attributes of a feature subspace jointly describe and determine the community
embedded in such subspace. Thus the problem includes two subproblems, i.e., how
to expand the set of concerned attributes to complete feature subspaces and how
to mine the communities embedded in the expanded subspaces. Two subproblems are
jointly solved by optimizing a quality function called subspace fitness. An
algorithm called ACM is proposed. In order to locate the communities
potentially belonging to the application-aware community organization, cohesive
parts of a network backbone composed of nodes with similar concerned attributes
are detected and set as the community seeds. The set of concerned attributes is
set as the initial subspace for all community seeds. Then each community seed
and its attribute subspace are adjusted iteratively to optimize the subspace
fitness. Extensive experiments on synthetic datasets demonstrate the
effectiveness and efficiency of our method and applications on real-world
networks show its application values.
| 1 | 1 | 0 | 0 | 0 | 0 |
A spin-gapped Mott insulator with the dimeric arrangement of twisted molecules Zn(tmdt)$_{2}$ | $^{13}$C nuclear magnetic resonance measurements were performed for a
single-component molecular material Zn(tmdt)$_{2}$, in which tmdt's form an
arrangement similar to the so-called ${\kappa}$-type molecular packing in
quasi-two-dimensional Mott insulators and superconductors. Detailed analysis of
the powder spectra uncovered local spin susceptibility in the tmdt ${\pi}$
orbitals. The obtained shift and relaxation rate revealed the singlet-triplet
excitations of the ${\pi}$ spins, indicating that Zn(tmdt)$_{2}$ is a
spin-gapped Mott insulator with exceptionally large electron correlations
compared to conventional molecular Mott systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Li doping kagome spin liquid compounds | Herbertsmithite and Zn-doped barlowite are two compounds for experimental
realization of a two-dimensional gapped kagome spin liquid. Theoretically, it has
been proposed that charge doping a quantum spin liquid gives rise to exotic
metallic states, such as high-temperature superconductivity. However, one
recent experiment on herbertsmithite with successful Li doping surprisingly
shows an insulating state even in the heavily doped scenario, which can hardly
be explained by many-body physics. Using first-principles
calculation, we performed a comprehensive study about the Li intercalated
doping effect of these two compounds. For the Li-doped herbertsmithite, we
identified the optimized Li position at the Cl-(OH)$_3$-Cl pentahedron site
instead of previously speculated Cl-(OH)$_3$ tetrahedral site. With the
increase of Li doping concentration, the saturation magnetization decreases
linearly due to the charge transfer from Li to Cu ions. Moreover, we found that
Li forms chemical bonds with the nearby (OH)$^-$ and Cl$^-$ ions, which lowers
the surrounding chemical potential and traps the electron, as evidenced by the
localized charge distribution, explaining the insulating behavior measured
experimentally. Though its structure differs from that of herbertsmithite,
Zn-doped barlowite shows the same features upon Li doping. We conclude that Li
doping this family of kagome spin liquids cannot realize exotic metallic
states; other methods, such as element substitution with different valence
electrons, should be further explored.
| 0 | 1 | 0 | 0 | 0 | 0 |
DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion | Non-recurring traffic congestion is caused by temporary disruptions, such as
accidents, sports games, adverse weather, etc. We use data related to real-time
traffic speed, jam factors (a traffic congestion indicator), and events
collected over a year from Nashville, TN to train a multi-layered deep neural
network. The traffic dataset contains over 900 million data records. The
network is thereafter used to classify the real-time data and identify
anomalous operations. Compared with traditional approaches of using statistical
or machine learning techniques, our model reaches an accuracy of 98.73 percent
when identifying traffic congestion caused by football games. Our approach
first encodes the traffic across a region as a scaled image. After that the
image data from different timestamps is fused with event- and time-related
data. Then a crossover operator is used as a data augmentation method to
generate training datasets with more balanced classes. Finally, we use the
receiver operating characteristic (ROC) analysis to tune the sensitivity of the
classifier. We present the analysis of the training time and the inference time
separately.
| 0 | 0 | 0 | 1 | 0 | 0 |
Facets of a mixed-integer bilinear covering set with bounds on variables | We derive a closed form description of the convex hull of mixed-integer
bilinear covering set with bounds on the integer variables. This convex hull
description is completely determined by considering some orthogonal disjunctive
sets defined in a certain way. Our description does not introduce any new
variables. We also derive a linear time separation algorithm for finding the
facet defining inequalities of this convex hull. We show the effectiveness of
the new inequalities using some examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
Universal partial sums of Taylor series as functions of the centre of expansion | V. Nestoridis conjectured that if $\Omega$ is a simply connected subset of
$\mathbb{C}$ that does not contain $0$ and $S(\Omega)$ is the set of all
functions $f\in \mathcal{H}(\Omega)$ with the property that the set
$\left\{T_N(f)(z)\coloneqq\sum_{n=0}^N\dfrac{f^{(n)}(z)}{n!} (-z)^n : N =
0,1,2,\dots \right\}$ is dense in $\mathcal{H}(\Omega)$, then $S(\Omega)$ is a
dense $G_\delta$ set in $\mathcal{H}(\Omega)$. We answer the conjecture in the
affirmative in the special case where $\Omega$ is an open disc $D(z_0,r)$ that
does not contain $0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Free Cooling of a Granular Gas in Three Dimensions | Granular gases as dilute ensembles of particles in random motion are not only
at the basis of elementary structure-forming processes in the universe and
involved in many industrial and natural phenomena, but also excellent models to
study fundamental statistical dynamics. A vast number of theoretical and
numerical investigations have dealt with this apparently simple non-equilibrium
system. The essential difference to molecular gases is the energy dissipation
in particle collisions, a subtle distinction with immense impact on their
global dynamics. Its most striking manifestation is the so-called granular
cooling, the gradual loss of mechanical energy in absence of external
excitation.
We report an experimental study of homogeneous cooling of three-dimensional
(3D) granular gases in microgravity. Surprisingly, the asymptotic scaling
$E(t)\propto t^{-2}$ obtained by Haff's minimal model [J. Fluid Mech. 134, 401
(1983)] proves to be robust, despite the violation of several of its central
assumptions. The shape anisotropy of the grains influences the characteristic
time of energy loss quantitatively, but not qualitatively. We compare kinetic
energies in the individual degrees of freedom, and find a slight predominance
of the translational motions. In addition, we detect a certain preference of
the grains to align with their long axis in flight direction, a feature known
from active matter or animal flocks, and the onset of clustering.
| 0 | 1 | 0 | 0 | 0 | 0 |
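Haff's minimal model cited above has the standard closed form $E(t) = E_0/(1 + t/\tau)^2$, with $\tau$ a characteristic cooling time; a minimal sketch checking the asymptotic $t^{-2}$ scaling reported in the abstract:

```python
def haff_energy(t, E0=1.0, tau=1.0):
    """Kinetic energy of a homogeneously cooling granular gas in Haff's
    minimal model: E(t) = E0 / (1 + t/tau)^2, so E(t) ~ t^(-2) for t >> tau."""
    return E0 / (1.0 + t / tau) ** 2

# In the asymptotic regime t >> tau, doubling t reduces the energy by a
# factor of about 4 -- the signature of the t^(-2) law.
ratio = haff_energy(2000.0) / haff_energy(1000.0)
```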
The SCUBA-2 Ambitious Sky Survey: a catalogue of beam-sized sources in the Galactic longitude range 120 to 140 | The SCUBA-2 Ambitious Sky Survey (SASSy) is composed of shallow 850-$\mu$m
imaging using the Sub-millimetre Common-User Bolometer Array 2 (SCUBA-2) on the
James Clerk Maxwell Telescope. Here we describe the extraction of a catalogue
of beam-sized sources from a roughly $120\,{\rm deg}^2$ region of the Galactic
plane mapped uniformly (to an rms level of about 40 mJy), covering longitude
120° < l < 140° and latitude |b| < 2.9°. We used a matched-filtering approach
to increase the signal-to-noise (S/N) ratio in these noisy maps and tested the
efficiency of our extraction procedure through estimates of the false discovery
rate, as well as by adding artificial sources to the real images. The primary
catalogue contains a total of 189 sources at 850 $\mu$m, down to a S/N
threshold of approximately 4.6. Additionally, we list 136 sources detected down
to ${\rm S/N}=4.3$, but recognise that as we go lower in S/N, the reliability
of the catalogue rapidly diminishes. We perform follow-up observations of some
of our lower significance sources through small targeted SCUBA-2 images, and
list 265 sources detected in these maps down to ${\rm S/N}=5$. This illustrates
the real power of SASSy: inspecting the shallow maps for regions of 850-$\mu$m
emission and then using deeper targeted images to efficiently find fainter
sources. We also perform a comparison of the SASSy sources with the Planck
Catalogue of Compact Sources and the IRAS Point Source Catalogue, to
determine which sources discovered in this field might be new, and hence
potentially cold regions at an early stage of star formation.
| 0 | 1 | 0 | 0 | 0 | 0 |
A stable and optimally convergent LaTIn-Cut Finite Element Method for multiple unilateral contact problems | In this paper, we propose a novel unfitted finite element method for the
simulation of multiple body contact. The computational mesh is generated
independently of the geometry of the interacting solids, which can be
arbitrarily complex. The key novelty of the approach is the combination of
elements of the CutFEM technology, namely the enrichment of the solution field
via the definition of overlapping fictitious domains with a dedicated
penalty-type regularisation of discrete operators, and the LaTIn hybrid-mixed
formulation of complex interface conditions. Furthermore, the novel P1-P1
discretisation scheme that we propose for the unfitted LaTIn solver is shown to
be stable, robust and optimally convergent with mesh refinement. Finally, the
paper introduces a high-performance 3D level-set/CutFEM framework for the
versatile and robust solution of contact problems involving multiple bodies of
complex geometries, with more than two bodies interacting at a single point.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non Fermi liquid behavior and continuously tunable resistivity exponents in the Anderson-Hubbard model at finite temperature | We employ a recently developed computational many-body technique to study for
the first time the half-filled Anderson-Hubbard model at finite temperature and
arbitrary correlation ($U$) and disorder ($V$) strengths. Interestingly, the
narrow zero temperature metallic range induced by disorder from the Mott
insulator expands with increasing temperature in a manner resembling a quantum
critical point. Our study of the resistivity temperature scaling $T^{\alpha}$
for this metal reveals non-Fermi-liquid characteristics. Moreover, a continuous
dependence of $\alpha$ on $U$ and $V$ from linear to nearly quadratic was
observed. We argue that these exotic results arise from a systematic change
with $U$ and $V$ of the "effective" disorder, a combination of quenched
disorder and intrinsic localized spins.
| 0 | 1 | 0 | 0 | 0 | 0 |
Beyond Parity: Fairness Objectives for Collaborative Filtering | We study fairness in collaborative-filtering recommender systems, which are
sensitive to discrimination that exists in historical data. Biased data can
lead collaborative-filtering methods to make unfair predictions for users from
minority groups. We identify the insufficiency of existing fairness metrics and
propose four new metrics that address different forms of unfairness. These
fairness metrics can be optimized by adding fairness terms to the learning
objective. Experiments on synthetic and real data show that our new metrics can
better measure fairness than the baseline, and that the fairness objectives
effectively help reduce unfairness.
| 1 | 0 | 0 | 1 | 0 | 0 |
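One hedged sketch of the "fairness terms added to the learning objective" idea above (an illustrative group-error-gap penalty, not one of the paper's four metrics; the weight `lam` is a hypothetical hyperparameter):

```python
def mse(preds, truths):
    """Mean squared error over paired predictions and ground truths."""
    return sum((p - t) ** 2 for p, t in zip(preds, truths)) / len(preds)

def penalized_loss(preds, truths, groups, lam=1.0):
    """Squared-error loss plus a fairness penalty on the gap between the
    per-group errors of two user groups (labelled 0 and 1). Illustrative
    only; `lam` trades accuracy against fairness."""
    g0 = [(p, t) for p, t, g in zip(preds, truths, groups) if g == 0]
    g1 = [(p, t) for p, t, g in zip(preds, truths, groups) if g == 1]
    err0 = mse(*zip(*g0))
    err1 = mse(*zip(*g1))
    return mse(preds, truths) + lam * abs(err0 - err1)

# Both groups are mispredicted equally often, so the fairness penalty is zero
# and the loss reduces to the plain mean squared error.
loss = penalized_loss(preds=[1.0, 0.0, 1.0, 0.0],
                      truths=[1.0, 1.0, 0.0, 0.0],
                      groups=[0, 0, 1, 1])
```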
Blind Regression via Nearest Neighbors under Latent Variable Models | We consider the setup of nonparametric 'blind regression' for estimating the
entries of a large $m \times n$ matrix, when provided with a small, random
fraction of noisy measurements. We assume that all rows $u \in [m]$ and columns
$i \in [n]$ of the matrix are associated to latent features $x_1(u)$ and
$x_2(i)$ respectively, and the $(u,i)$-th entry of the matrix, $A(u, i)$ is
equal to $f(x_1(u), x_2(i))$ for a latent function $f$. Given noisy
observations of a small, random subset of the matrix entries, our goal is to
estimate the unobserved entries of the matrix as well as to "de-noise" the
observed entries.
As the main result of this work, we introduce a neighbor-based estimation
algorithm inspired by the classical Taylor's series expansion. We establish its
consistency when the underlying latent function $f$ is Lipschitz, the latent
features belong to a compact domain, and the fraction of observed entries in
the matrix is at least $\max \left(m^{-1 + \delta}, n^{-1/2 + \delta} \right)$,
for any $\delta > 0$. As an important byproduct, our analysis sheds light into
the performance of the classical collaborative filtering (CF) algorithm for
matrix completion, which has been widely utilized in practice. Experiments with
the MovieLens and Netflix datasets suggest that our algorithm provides a
principled improvement over basic CF and is competitive with matrix
factorization methods.
Our algorithm has a natural extension to tensor completion. For a $t$-order
balanced tensor with total of $N$ entries, we prove that our approach provides
a consistent estimator when at least $N^{-\frac{\lfloor 2t/3 \rfloor}{2t}+
\delta}$ fraction of entries are observed, for any $\delta > 0$. When applied
to the setting of image in-painting (a tensor of order 3), we find that our
approach is competitive with state-of-the-art tensor completion
algorithms across benchmark images.
| 0 | 0 | 1 | 1 | 0 | 0 |
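A crude sketch of the neighbor-based idea above, under the latent model $A(u,i) = f(x_1(u), x_2(i))$: estimate a missing entry by averaging the observed values in rows that agree with the target row on their commonly observed columns. This is a hypothetical user-user variant for illustration; the `radius` tolerance is an invented knob, not the paper's estimator.

```python
def estimate_entry(A, u, i, radius=1e-9):
    """Estimate the unobserved entry A[u][i] (None = unobserved) by averaging
    A[v][i] over rows v that agree with row u, within `radius`, on every
    column both rows observe."""
    votes = []
    for v in range(len(A)):
        if v == u or A[v][i] is None:
            continue
        common = [j for j in range(len(A[0]))
                  if j != i and A[u][j] is not None and A[v][j] is not None]
        if common and all(abs(A[u][j] - A[v][j]) <= radius for j in common):
            votes.append(A[v][i])
    return sum(votes) / len(votes) if votes else None

# Rank-1 matrix with one hidden entry: row 0 and row 2 agree on all their
# common observed columns, so row 2 supplies the estimate.
A = [[1.0, 2.0, None],
     [2.0, 4.0, 6.0],
     [1.0, 2.0, 3.0]]
est = estimate_entry(A, 0, 2)
```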
Metalearning for Feature Selection | A general formulation of optimization problems in which various candidate
solutions may use different feature-sets is presented, encompassing supervised
classification, automated program learning and other cases. A novel
characterization of the concept of a "good quality feature" for such an
optimization problem is provided; and a proposal regarding the integration of
quality based feature selection into metalearning is suggested, wherein the
quality of a feature for a problem is estimated using knowledge about related
features in the context of related problems. Results are presented regarding
extensive testing of this "feature metalearning" approach on supervised text
classification problems; it is demonstrated that, in this context, feature
metalearning can provide significant and sometimes dramatic speedup over
standard feature selection heuristics.
| 1 | 0 | 0 | 1 | 0 | 0 |
Variability-Aware Design for Energy Efficient Computational Artificial Intelligence Platform | Portable computing devices, which include tablets, smart phones and various
types of wearable sensors, experienced a rapid development in recent years. One
of the most critical limitations for these devices is the power consumption as
they use batteries as the power supply. However, the bottleneck of the power
saving schemes in both hardware design and software algorithm is the huge
variability in power consumption. The variability is caused by a myriad of
factors, including the manufacturing process, the ambient environment
(temperature, humidity), aging effects, etc. As the technology node
scales down to 28 nm and below, the variability becomes more severe. As a
result, a platform for variability characterization is highly necessary and
helpful.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantifying Performance of Bipedal Standing with Multi-channel EMG | Spinal cord stimulation has enabled humans with motor complete spinal cord
injury (SCI) to independently stand and recover some lost autonomic function.
Quantifying the quality of bipedal standing under spinal stimulation is
important for spinal rehabilitation therapies and for new strategies that seek
to combine spinal stimulation and rehabilitative robots (such as exoskeletons)
in real time feedback. To study the potential for automated electromyography
(EMG) analysis in SCI, we evaluated the standing quality of paralyzed patients
undergoing electrical spinal cord stimulation using both video and
multi-channel surface EMG recordings during spinal stimulation therapy
sessions. The quality of standing under different stimulation settings was
quantified manually by experienced clinicians. By correlating features of the
recorded EMG activity with the expert evaluations, we show that multi-channel
EMG recording can provide accurate, fast, and robust estimation for the quality
of bipedal standing in spinally stimulated SCI patients. Moreover, our analysis
shows that the total number of EMG channels needed to effectively predict
standing quality can be reduced while maintaining high estimation accuracy,
which provides more flexibility for rehabilitation robotic systems to
incorporate EMG recordings.
| 1 | 0 | 0 | 1 | 0 | 0 |
A framework for on-line calibration of LINAC devices | A general description of an on-line calibration procedure for IGRT (Image
Guided Radiotherapy) is given. The algorithm improves the targeting of the
cancer by estimating its position in space and suggests an appropriate
correction of the patient's position. The description is given in the Geometric Algebra
language which significantly simplifies calculations and clarifies
presentation.
| 0 | 1 | 0 | 0 | 0 | 0 |
People on Media: Jointly Identifying Credible News and Trustworthy Citizen Journalists in Online Communities | Media seems to have become more partisan, often providing a biased coverage
of news catering to the interest of specific groups. It is therefore essential
to identify credible information content that provides an objective narrative
of an event. News communities such as digg, reddit, or newstrust offer
recommendations, reviews, quality ratings, and further insights on journalistic
works. However, there is a complex interaction between different factors in
such online communities: fairness and style of reporting, language clarity and
objectivity, topical perspectives (like political viewpoint), expertise and
bias of community members, and more. This paper presents a model to
systematically analyze the different interactions in a news community between
users, news, and sources. We develop a probabilistic graphical model that
leverages this joint interaction to identify 1) highly credible news articles,
2) trustworthy news sources, and 3) expert users who perform the role of
"citizen journalists" in the community. Our method extends CRF models to
incorporate real-valued ratings, as some communities have very fine-grained
scales that cannot be easily discretized without losing information. To the
best of our knowledge, this paper is the first full-fledged analysis of
credibility, trust, and expertise in news communities.
| 1 | 0 | 0 | 1 | 0 | 0 |
Towards Provably Safe Mixed Transportation Systems with Human-driven and Automated Vehicles | Currently, we are in an environment where the fraction of automated vehicles
is negligibly small. We anticipate that this fraction will increase in the
coming decades before, if ever, we have a fully automated transportation
system. Motivated by this, we address the problem of provable safety of mixed traffic
consisting of both intelligent vehicles (IVs) as well as human-driven vehicles
(HVs). An important issue that arises is that such mixed systems may well have
lower throughput than all-human traffic systems if the automated vehicles are
expected to remain provably safe with respect to human traffic. This
necessitates the consideration of strategies such as platooning of automated
vehicles in order to increase the throughput. In this paper, we address the
design of provably safe systems consisting of a mix of automated and
human-driven vehicles including the use of platooning by automated vehicles.
We design motion planning policies and coordination rules for participants in
this novel mixed system. HVs are considered as nearsighted and modeled with
relatively loose constraints, while IVs are considered as capable of following
much tighter constraints. HVs are expected to follow reasonable and simple
rules. IVs are designed to move under model predictive control (MPC) based
motion plans and coordination protocols. The contribution of this paper is to
show how to integrate these two types of models safely into a mixed system.
System safety is proved in single lane scenarios, as well as in multi-lane
situations allowing lane changes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Streaming kernel regression with provably adaptive mean, variance, and regularization | We consider the problem of streaming kernel regression, when the observations
arrive sequentially and the goal is to recover the underlying mean function,
assumed to belong to an RKHS. The variance of the noise is not assumed to be
known. In this context, we tackle the problem of tuning the regularization
parameter adaptively at each time step, while maintaining tight confidence
bound estimates on the value of the mean function at each point. To this end,
we first generalize existing results for finite-dimensional linear regression
with fixed regularization and known variance to the kernel setup with a
regularization parameter allowed to be a measurable function of past
observations. Then, using appropriate self-normalized inequalities we build
upper and lower bound estimates for the variance, leading to Bernstein-like
concentration bounds. The latter is used to define the adaptive
regularization. The bounds resulting from our technique are valid uniformly
over all observation points and all time steps, and are compared against the
literature with numerical experiments. Finally, the potential of these tools is
illustrated by an application to kernelized bandits, where we revisit the
Kernel UCB and Kernel Thompson Sampling procedures, and show the benefits of
the novel adaptive kernel tuning strategy.
| 1 | 0 | 0 | 1 | 0 | 0 |
Estimating reducible stochastic differential equations by conversion to a least-squares problem | Stochastic differential equations (SDEs) are increasingly used in
longitudinal data analysis, compartmental models, growth modelling, and other
applications in a number of disciplines. Parameter estimation, however,
currently requires specialized software packages that can be difficult to use
and understand. This work develops and demonstrates an approach for estimating
reducible SDEs using standard nonlinear least squares or mixed-effects
software. Reducible SDEs are obtained through a change of variables in linear
SDEs, and are sufficiently flexible for modelling many situations. The approach
is based on extending a known technique that converts maximum likelihood
estimation for a Gaussian model with a nonlinear transformation of the
dependent variable into an equivalent least-squares problem. A similar idea can
be used for Bayesian maximum a posteriori estimation. It is shown how to obtain
parameter estimates for reducible SDEs containing both process and observation
noise, including hierarchical models with either fixed or random group
parameters. Code and examples in R are given. Univariate SDEs are discussed in
detail, with extensions to the multivariate case outlined more briefly. The use
of well tested and familiar standard software should make SDE modelling more
transparent and accessible. Keywords: stochastic processes; longitudinal data;
growth curves; compartmental models; mixed-effects; R
| 0 | 0 | 0 | 1 | 0 | 0 |
Deep Tensor Encoding | Learning an encoding of feature vectors in terms of an over-complete
dictionary or an information-geometric construct (Fisher vectors) is widespread
in statistical signal processing and computer vision. In content-based
information retrieval using deep-learning classifiers, such encodings are
learnt on the flattened last layer, without adherence to the multi-linear
structure of the underlying feature tensor. We illustrate a variety of feature
encodings, including sparse dictionary coding and Fisher vectors, and propose
a structured tensor factorization scheme that enables retrieval on par, in
terms of average precision, with Fisher-vector-encoded image signatures. In
short, we illustrate how structural constraints increase retrieval fidelity.
| 1 | 0 | 0 | 1 | 0 | 0 |
hMDAP: A Hybrid Framework for Multi-paradigm Data Analytical Processing on Spark | We propose hMDAP, a hybrid framework for large-scale data analytical
processing on Spark, to support multi-paradigm processing (including OLAP,
machine learning, and graph analysis) in distributed environments. The
framework features a three-layer data process module and a business process
module which controls the former. We will demonstrate the strength of hMDAP
using real-world traffic scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Unsupervised Domain Adaptation for Neural Networks via Moment Alignment | A novel approach for unsupervised domain adaptation for neural networks is
proposed. It relies on metric-based regularization of the learning process. The
metric-based regularization aims at domain-invariant latent feature
representations by means of maximizing the similarity between domain-specific
activation distributions. The proposed metric results from modifying an
integral probability metric such that it becomes less translation-sensitive on
a polynomial function space. The metric has an intuitive interpretation in the
dual space as the sum of differences of higher order central moments of the
corresponding activation distributions. Under appropriate assumptions on the
input distributions, error minimization is proven for the continuous case. As
demonstrated by an analysis of standard benchmark experiments for sentiment
analysis, object recognition and digit recognition, the outlined approach is
robust regarding parameter changes and achieves higher classification
accuracies than comparable approaches. The source code is available at
this https URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Convolutional Neural Network For Cosmic String Detection in CMB Temperature Maps | We present in detail the convolutional neural network used in our previous
work to detect cosmic strings in cosmic microwave background (CMB) temperature
anisotropy maps. By training this neural network on numerically generated CMB
temperature maps, with and without cosmic strings, the network can produce
prediction maps that locate the position of the cosmic strings and provide a
probabilistic estimate of the value of the string tension $G\mu$. Supplying
noiseless simulations of CMB maps with arcmin resolution to the network
resulted in the accurate determination both of string locations and string
tension for sky maps having strings with string tension as low as
$G\mu=5\times10^{-9}$. The code is publicly available online. Though we trained
the network with a long straight string toy model, we show the network performs
well with realistic Nambu-Goto simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improving Palliative Care with Deep Learning | Improving the quality of end-of-life care for hospitalized patients is a
priority for healthcare organizations. Studies have shown that physicians tend
to overestimate prognoses, which in combination with treatment inertia results
in a mismatch between patients' wishes and actual care at the end of life. We
describe a method to address this problem using Deep Learning and Electronic
Health Record (EHR) data, which is currently being piloted, with Institutional
Review Board approval, at an academic medical center. The EHR data of admitted
patients are automatically evaluated by an algorithm, which brings patients who
are likely to benefit from palliative care services to the attention of the
Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR
data from previous years, to predict all-cause 3-12 month mortality of patients
as a proxy for patients that could benefit from palliative care. Our
predictions enable the Palliative Care team to take a proactive approach in
reaching out to such patients, rather than relying on referrals from treating
physicians or conducting time-consuming chart reviews of all patients. We also
present a novel interpretation technique which we use to provide explanations
of the model's predictions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Scalable Online Convolutional Sparse Coding | Convolutional sparse coding (CSC) improves sparse coding by learning a
shift-invariant dictionary from the data. However, existing CSC algorithms
operate in the batch mode and are expensive, in terms of both space and time,
on large datasets. In this paper, we alleviate these problems by using online
learning. The key is a reformulation of the CSC objective so that convolution
can be handled easily in the frequency domain and much smaller history matrices
are needed. We use the alternating direction method of multipliers (ADMM) to
solve the resulting optimization problem and the ADMM subproblems have
efficient closed-form solutions. Theoretical analysis shows that the learned
dictionary converges to a stationary point of the optimization problem.
Extensive experiments show that convergence of the proposed method is much
faster and its reconstruction performance is also better. Moreover, while
existing CSC algorithms can only run on a small number of images, the proposed
method can handle at least ten times more images.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion | It is challenging to recognize facial action units (AUs) from spontaneous
facial displays, especially when they are accompanied by speech. The major
reason is that the information is extracted from a single source, i.e., the
visual channel, in the current practice. However, facial activity is highly
correlated with voice in natural human communications.
Instead of solely improving visual observations, this paper presents a novel
audiovisual fusion framework, which makes the best use of visual and acoustic
cues in recognizing speech-related facial AUs. In particular, a dynamic
Bayesian network (DBN) is employed to explicitly model the semantic and dynamic
physiological relationships between AUs and phonemes as well as measurement
uncertainty. A pilot audiovisual AU-coded database has been collected to
evaluate the proposed framework, which consists of a "clean" subset containing
frontal faces under well controlled circumstances and a challenging subset with
large head movements and occlusions. Experiments on this database have
demonstrated that the proposed framework yields significant improvement in
recognizing speech-related AUs compared to the state-of-the-art visual-based
methods especially for those AUs whose visual observations are impaired during
speech, and more importantly also outperforms feature-level fusion methods by
explicitly modeling and exploiting physiological relationships between AUs and
phonemes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Re-purposing Compact Neuronal Circuit Policies to Govern Reinforcement Learning Tasks | We propose an effective method for creating interpretable control agents, by
\textit{re-purposing} the function of a biological neural circuit model, to
govern simulated and real world reinforcement learning (RL) test-beds. Inspired
by the structure of the nervous system of the soil-worm, \emph{C. elegans}, we
introduce \emph{Neuronal Circuit Policies} (NCPs) as a novel recurrent neural
network instance with liquid time-constants, universal approximation
capabilities and interpretable dynamics. We theoretically show that they can
approximate any finite simulation time of a given continuous n-dimensional
dynamical system, with $n$ output units and some hidden units. We model
instances of the policies and learn their synaptic and neuronal parameters to
control standard RL tasks, and demonstrate their application to the autonomous
parking of a real rover robot on a pre-defined trajectory. For reconfiguration
of the \emph{purpose} of the neural circuit, we adopt a search-based RL
algorithm. We show that our neuronal circuit policies perform as well as deep
neural network policies, with the advantage of realizing interpretable dynamics
at the cell-level. We theoretically find bounds for the time-varying dynamics
of the circuits, and introduce a novel way to reason about networks' dynamics.
| 1 | 0 | 0 | 1 | 0 | 0 |
On the Expected Value of the Determinant of Random Sum of Rank-One Matrices | We present a simple, yet useful result about the expected value of the
determinant of a random sum of rank-one matrices. Computing such expectations in
general may involve a sum over exponentially many terms. Nevertheless, we show
that an interesting and useful class of such expectations that arise in, e.g.,
D-optimal estimation and random graphs can be computed efficiently via
computing a single determinant.
| 1 | 0 | 1 | 0 | 0 | 0 |
A hyperbolic-equation system approach for magnetized electron fluids in quasi-neutral plasmas | A new approach using a hyperbolic-equation system (HES) is proposed to solve
for the electron fluids in quasi-neutral plasmas. The HES approach avoids
treatments of cross-diffusion terms which cause numerical instabilities in
conventional approaches using an elliptic equation (EE). A test calculation
reveals that the HES approach can robustly solve problems of strong magnetic
confinement by using an upwind method. The computation time of the HES approach
is compared with that of the EE approach in terms of the size of the problem
and the strength of magnetic confinement. The results indicate that the HES
approach can be used to solve problems in a simple structured mesh without
increasing computational time compared to the EE approach and that it features
fast convergence in conditions of strong magnetic confinement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solving delay differential equations through RBF collocation | A general and easy-to-code numerical method based on radial basis functions
(RBFs) collocation is proposed for the solution of delay differential equations
(DDEs). It relies on the interpolation properties of infinitely smooth RBFs,
which allow for high accuracy over a scattered and relatively small
discretization support. Hardy's multiquadric is chosen as RBF and combined with
the Residual Subsampling Algorithm of Driscoll and Heryudono for support
adaptivity. The performance of the method is very satisfactory, as demonstrated
over a cross-section of benchmark DDEs, and by comparison with existing
general-purpose and specialized numerical schemes for DDEs.
| 0 | 0 | 1 | 0 | 0 | 0 |