title
stringlengths 7
239
| abstract
stringlengths 7
2.76k
| cs
int64 0
1
| phy
int64 0
1
| math
int64 0
1
| stat
int64 0
1
| quantitative biology
int64 0
1
| quantitative finance
int64 0
1
|
---|---|---|---|---|---|---|---|
Trigonometric integrators for quasilinear wave equations | Trigonometric time integrators are introduced as a class of explicit
numerical methods for quasilinear wave equations. Second-order convergence for
the semi-discretization in time with these integrators is shown for a
sufficiently regular exact solution. The time integrators are also combined
with a Fourier spectral method into a fully discrete scheme, for which error
bounds are provided without requiring any CFL-type coupling of the
discretization parameters. The proofs of the error bounds are based on energy
techniques and on the semiclassical G\aa rding inequality.
| 0 | 0 | 1 | 0 | 0 | 0 |
La leggenda del quanto centenario | Around year 2000 the centenary of Planck's thermal radiation formula awakened
interest in the origins of quantum theory, traditionally traced back to the
Planck's conference on 14 December 1900 at the Berlin Academy of Sciences. A
lot of more accurate historical reconstructions, conducted under the stimulus
of that recurrence, placed the birth date of quantum theory in March 1905 when
Einstein advanced his light quantum hypothesis. Both interpretations are yet
controversial, but science historians agree on one point: the emergence of
quantum theory from a presumed "crisis" of classical physics is a myth with
scarce adherence to the historical truth. This article, written in Italian
language, was originally presented in connection with the celebration of the
World Year of Phyics 2005 with the aim of bringing these scholarly theses to a
wider audience.
---
Tradizionalmente la nascita della teoria quantistica viene fatta risalire al
14 dicembre 1900, quando Planck presentò all'Accademia delle Scienze di
Berlino la dimostrazione della formula della radiazione termica. Numerose
ricostruzioni storiche più accurate, effettuate nel periodo intorno al 2000
sotto lo stimolo dell'interesse per il centenario di quell'avvenimento,
collocano invece la nascita della teoria quantistica nel marzo del 1905, quando
Einstein avanzò l'ipotesi dei quanti di luce. Entrambe le interpretazioni
sono tuttora controverse, ma gli storici della scienza concordano su un punto:
l'emergere della teoria quantistica da una presunta "crisi" della fisica
classica è un mito con scarsa aderenza alla verità storica. Con questo
articolo in italiano, presentato originariamente in occasione delle
celebrazioni per il World Year of Phyics 2005, si è inteso portare a un più
largo pubblico queste tesi già ben note agli specialisti.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Highly Efficient Polarization-Independent Metamaterial-Based RF Energy-Harvesting Rectenna for Low-Power Applications | A highly-efficient multi-resonant RF energy-harvesting rectenna based on a
metamaterial perfect absorber featuring closely-spaced polarization-independent
absorption modes is presented. Its effective area is larger than its physical
area, and so efficiencies of 230% and 130% are measured at power densities of
10 uW/cm2 and 1 uW/cm2 respectively, for a linear absorption mode at 0.75 GHz.
The rectenna exhibits a broad polarization-independent region between 1.4 GHz
and 1.7 GHz with maximum efficiencies of 167% and 36% for those same power
densities. Additionally, by adjustment of the distance between the rectenna and
a reflecting ground plane, the absorption frequency can be adjusted to a
limited extent within the polarization-independent region. Lastly, the rectenna
should be capable of delivering 100 uW of power to a device located within 50 m
of a cell-phone tower under ideal conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mixed Graphical Models for Causal Analysis of Multi-modal Variables | Graphical causal models are an important tool for knowledge discovery because
they can represent both the causal relations between variables and the
multivariate probability distributions over the data. Once learned, causal
graphs can be used for classification, feature selection and hypothesis
generation, while revealing the underlying causal network structure and thus
allowing for arbitrary likelihood queries over the data. However, current
algorithms for learning sparse directed graphs are generally designed to handle
only one type of data (continuous-only or discrete-only), which limits their
applicability to a large class of multi-modal biological datasets that include
mixed type variables. To address this issue, we developed new methods that
modify and combine existing methods for finding undirected graphs with methods
for finding directed graphs. These hybrid methods are not only faster, but also
perform better than the directed graph estimation methods alone for a variety
of parameter settings and data set sizes. Here, we describe a new conditional
independence test for learning directed graphs over mixed data types and we
compare performances of different graph learning strategies on synthetic data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Estimation of quantile oriented sensitivity indices | The paper concerns quantile oriented sensitivity analysis. We rewrite the
corresponding indices using the Conditional Tail Expectation risk measure.
Then, we use this new expression to built estimators.
| 0 | 0 | 1 | 1 | 0 | 0 |
Electromagnetically Induced Transparency (EIT) Amplitude Noise Spectroscopy | Intensity noise cross-correlation of the polarization eigenstates of light
emerging from an atomic vapor cell in the Hanle configuration allows one to
perform high resolution spectroscopy with free- running semiconductor lasers.
Such an approach has shown promise as an inexpensive, simpler approach to
magnetometry and timekeeping, and as a probe of dynamics of atomic coherence in
warm vapor cells. We report that varying the post-cell polarization state basis
yields intensity noise spectra which more completely probe the prepared atomic
state. We advance and test the hypothesis that the observed intensity noise can
be explained in terms of an underlying stochastic process in lightfield
amplitudes themselves. Understanding this stochastic process in the light field
amplitudes themselves provides a new test of the simple atomic quantum optics
model of EIT noise.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Robust Framework for Protein Function Prediction using Variable-Length Protein Sequences | Amino acid sequence portrays most intrinsic form of a protein and expresses
primary structure of protein. The order of amino acids in a sequence enables a
protein to acquire a particular stable conformation that is responsible for the
functions of the protein. This relationship between a sequence and its function
motivates the need to analyse the sequences for predicting protein functions.
Early generation computational methods using BLAST, FASTA, etc. perform
function transfer based on sequence similarity with existing databases and are
computationally slow. Although machine learning based approaches are fast, they
fail to perform well for long protein sequences (i.e., protein sequences with
more than 300 amino acid residues). In this paper, we introduce a novel method
for construction of two separate feature sets for protein sequences based on
analysis of 1) single fixed-sized segments and 2) multi-sized segments, using
bi-directional long short-term memory network. Further, model based on proposed
feature set is combined with the state of the art Multi-lable Linear
Discriminant Analysis (MLDA) features based model to improve the accuracy.
Extensive evaluations using separate datasets for biological processes and
molecular functions demonstrate promising results for both single-sized and
multi-sized segments based feature sets. While former showed an improvement of
+3.37% and +5.48%, the latter produces an improvement of +5.38% and +8.00%
respectively for two datasets over the state of the art MLDA based classifier.
After combining two models, there is a significant improvement of +7.41% and
+9.21% respectively for two datasets compared to MLDA based classifier.
Specifically, the proposed approach performed well for the long protein
sequences and superior overall performance.
| 0 | 0 | 0 | 0 | 1 | 0 |
Helicity of convective flows from localized heat source in a rotating layer | Experimental and numerical study of the steady-state cyclonic vortex from
isolated heat source in a rotating fluid layer is described. The structure of
laboratory cyclonic vortex is similar to the typical structure of tropical
cyclones from observational data and numerical modelling including secondary
flows in the boundary layer. Differential characteristics of the flow were
studied by numerical simulation using CFD software FlowVision. Helicity
distribution in rotating fluid layer with localized heat source was analysed.
Two mechanisms which play role in helicity generation are found. The first one
is the strong correlation of cyclonic vortex and intensive upward motion in the
central part of the vessel. The second one is due to large gradients of
velocity on the periphery. The integral helicity in the considered case is
substantial and its relative level is high.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tunable $φ$-Josephson junction with a quantum anomalous Hall insulator | We theoretically study the Josephson current in a superconductor/quantum
anomalous Hall insulator/superconductor junction by using the lattice Green
function technique. When an in-plane external Zeeman field is applied to the
quantum anomalous Hall insulator, the Josephson current $J$ flows without a
phase difference across the junction $\theta$. The phase shift $\varphi$
appealing in the current-phase relationship $J\propto \sin(\theta-\varphi$) is
proportional to the amplitude of Zeeman fields and depends on the direction of
Zeeman fields. A phenomenological analysis of the Andreev reflection processes
explains the physical origin of $\varphi$. A quantum anomalous Hall insulator
breaks time-reversal symmetry and mirror reflection symmetry simultaneously.
However it preserves magnetic mirror reflection symmetry. Such characteristic
symmetry property enable us to have a tunable $\varphi$-junction with a quantum
Hall insulator.
| 0 | 1 | 0 | 0 | 0 | 0 |
What kind of content are you prone to tweet? Multi-topic Preference Model for Tweeters | According to tastes, a person could show preference for a given category of
content to a greater or lesser extent. However, quantifying people's amount of
interest in a certain topic is a challenging task, especially considering the
massive digital information they are exposed to. For example, in the context of
Twitter, aligned with his/her preferences a user may tweet and retweet more
about technology than sports and do not share any music-related content. The
problem we address in this paper is the identification of users' implicit topic
preferences by analyzing the content categories they tend to post on Twitter.
Our proposal is significant given that modeling their multi-topic profile may
be useful to find patterns or association between preferences for categories,
discover trending topics and cluster similar users to generate better group
recommendations of content. In the present work, we propose a method based on
the Mixed Gaussian Model to extract the multidimensional preference
representation for 399 Ecuadorian tweeters concerning twenty-two different
topics (or dimensions) which became known by manually categorizing 68.186
tweets. Our experiment findings indicate that the proposed approach is
effective at detecting the topic interests of users.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fine-Gray competing risks model with high-dimensional covariates: estimation and Inference | The purpose of this paper is to construct confidence intervals for the
regression coefficients in the Fine-Gray model for competing risks data with
random censoring, where the number of covariates can be larger than the sample
size. Despite strong motivation from biostatistics applications,
high-dimensional Fine-Gray model has attracted relatively little attention
among the methodological or theoretical literatures. We fill in this blank by
proposing first a consistent regularized estimator and then the confidence
intervals based on the one-step bias-correcting estimator. We are able to
generalize the partial likelihood approach for the Fine-Gray model under random
censoring despite many technical difficulties. We lay down a methodological and
theoretical framework for the one-step bias-correcting estimator with the
partial likelihood, which does not have independent and identically distributed
entries. We also handle for our theory the approximation error from the inverse
probability weighting (IPW), proposing novel concentration results for time
dependent processes. In addition to the theoretical results and algorithms, we
present extensive numerical experiments and an application to a study of
non-cancer mortality among prostate cancer patients using the linked
Medicare-SEER data.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Godunov Method for a 2-Phase Model | We consider the Godunov numerical method to the phase-transition traffic
model, proposed in [6], by Colombo, Marcellini, and Rascle. Numerical tests are
shown to prove the validity of the method. Moreover we highlight the
differences between such model and the one proposed in [1], by Blandin, Work,
Goatin, Piccoli, and Bayen.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cartan's Conjecture for Moving Hypersurfaces | Let $f$ be a holomorphic curve in $\mathbb{P}^n({\mathbb{C}})$ and let
$\mathcal{D}=\{D_1,\ldots,D_q\}$ be a family of moving hypersurfaces defined by
a set of homogeneous polynomials $\mathcal{Q}=\{Q_1,\ldots,Q_q\}$. For
$j=1,\ldots,q$, denote by
$Q_j=\sum\limits_{i_0+\cdots+i_n=d_j}a_{j,I}(z)x_0^{i_0}\cdots x_n^{i_n}$,
where $I=(i_0,\ldots,i_n)\in\mathbb{Z}_{\ge 0}^{n+1}$ and $a_{j,I}(z)$ are
entire functions on ${\mathbb{C}}$ without common zeros. Let
$\mathcal{K}_{\mathcal{Q}}$ be the smallest subfield of meromorphic function
field $\mathcal{M}$ which contains ${\mathbb{C}}$ and all
$\frac{a_{j,I'}(z)}{a_{j,I''}(z)}$ with $a_{j,I''}(z)\not\equiv 0$, $1\le j\le
q$. In previous known second main theorems for $f$ and $\mathcal{D}$, $f$ is
usually assumed to be algebraically nondegenerate over
$\mathcal{K}_{\mathcal{Q}}$. In this paper, we prove a second main theorem in
which $f$ is only assumed to be nonconstant. This result can be regarded as a
generalization of Cartan's conjecture for moving hypersurfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Safe Active Feature Selection for Sparse Learning | We present safe active incremental feature selection~(SAIF) to scale up the
computation of LASSO solutions. SAIF does not require a solution from a heavier
penalty parameter as in sequential screening or updating the full model for
each iteration as in dynamic screening. Different from these existing screening
methods, SAIF starts from a small number of features and incrementally recruits
active features and updates the significantly reduced model. Hence, it is much
more computationally efficient and scalable with the number of features. More
critically, SAIF has the safe guarantee as it has the convergence guarantee to
the optimal solution to the original full LASSO problem. Such an incremental
procedure and theoretical convergence guarantee can be extended to fused LASSO
problems. Compared with state-of-the-art screening methods as well as working
set and homotopy methods, which may not always guarantee the optimal solution,
SAIF can achieve superior or comparable efficiency and high scalability with
the safe guarantee when facing extremely high dimensional data sets.
Experiments with both synthetic and real-world data sets show that SAIF can be
up to 50 times faster than dynamic screening, and hundreds of times faster than
computing LASSO or fused LASSO solutions without screening.
| 0 | 0 | 0 | 1 | 0 | 0 |
Cobwebs from the Past and Present: Extracting Large Social Networks using Internet Archive Data | Social graph construction from various sources has been of interest to
researchers due to its application potential and the broad range of technical
challenges involved. The World Wide Web provides a huge amount of continuously
updated data and information on a wide range of topics created by a variety of
content providers, and makes the study of extracted people networks and their
temporal evolution valuable for social as well as computer scientists. In this
paper we present SocGraph - an extraction and exploration system for social
relations from the content of around 2 billion web pages collected by the
Internet Archive over the 17 years time period between 1996 and 2013. We
describe methods for constructing large social graphs from extracted relations
and introduce an interface to study their temporal evolution.
| 1 | 1 | 0 | 0 | 0 | 0 |
A Fluid-Flow Interpretation of SCED Scheduling | We show that a fluid-flow interpretation of Service Curve Earliest Deadline
First (SCED) scheduling simplifies deadline derivations for this scheduler. By
exploiting the recently reported isomorphism between min-plus and max-plus
network calculus, and expressing deadlines in a max-plus algebra, deadline
computations no longer require pseudo-inverse computations. SCED deadlines are
provided for general convex or concave piecewise linear service curves.
| 1 | 0 | 0 | 0 | 0 | 0 |
Emergence of Topological Nodal Lines and Type II Weyl Nodes in Strong Spin--Orbit Coupling System InNbX2(X=S,Se) | Using first--principles density functional calculations, we systematically
investigate electronic structures and topological properties of InNbX2 (X=S,
Se). In the absence of spin--orbit coupling (SOC), both compounds show nodal
lines protected by mirror symmetry. Including SOC, the Dirac rings in InNbS2
split into two Weyl rings. This unique property is distinguished from other
dicovered nodal line materials which normally requires the absence of SOC. On
the other hand, SOC breaks the nodal lines in InNbSe2 and the compound becomes
a type II Weyl semimetal with 12 Weyl points in the Brillouin Zone. Using a
supercell slab calculation we study the dispersion of Fermi arcs surface states
in InNbSe2, we also utilize a coherent potential approximation to probe their
tolernace to the surface disorder effects. The quasi two--dimensionality and
the absence of toxic elements makes these two compounds an ideal experimental
platform for investigating novel properties of topological semimetals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Number-conserving interacting fermion models with exact topological superconducting ground states | We present a method to construct number-conserving Hamiltonians whose ground
states exactly reproduce an arbitrarily chosen BCS-type mean-field state. Such
parent Hamiltonians can be constructed not only for the usual $s$-wave BCS
state, but also for more exotic states of this form, including the ground
states of Kitaev wires and 2D topological superconductors. This method leads to
infinite families of locally-interacting fermion models with exact topological
superconducting ground states. After explaining the general technique, we apply
this method to construct two specific classes of models. The first one is a
one-dimensional double wire lattice model with Majorana-like degenerate ground
states. The second one is a two-dimensional $p_x+ip_y$ superconducting model,
where we also obtain analytic expressions for topologically degenerate ground
states in the presence of vortices. Our models may provide a deeper conceptual
understanding of how Majorana zero modes could emerge in condensed matter
systems, as well as inspire novel routes to realize them in experiment.
| 0 | 1 | 0 | 0 | 0 | 0 |
JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction | We present a new parallel corpus, JHU FLuency-Extended GUG corpus (JFLEG) for
developing and evaluating grammatical error correction (GEC). Unlike other
corpora, it represents a broad range of language proficiency levels and uses
holistic fluency edits to not only correct grammatical errors but also make the
original text more native sounding. We describe the types of corrections made
and benchmark four leading GEC systems on this corpus, identifying specific
areas in which they do well and how they can improve. JFLEG fulfills the need
for a new gold standard to properly assess the current state of GEC.
| 1 | 0 | 0 | 0 | 0 | 0 |
Scheduling with regular performance measures and optional job rejection on a single machine | We address single machine problems with optional jobs - rejection, studied
recently in Zhang et al. [21] and Cao et al. [2]. In these papers, the authors
focus on minimizing regular performance measures, i.e., functions that are
non-decreasing in the jobs completion time, subject to the constraint that the
total rejection cost cannot exceed a predefined upper bound. They also prove
that the considered problems are ordinary NP-hard and provide
pseudo-polynomial-time Dynamic Programming (DP) solutions. In this paper, we
focus on three of these problems: makespan with release-dates; total completion
times; and total weighted completion, and present enhanced DP solutions
demonstrating both theoretical and practical improvements. Moreover, we provide
extensive numerical studies verifying their efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
Data-Driven Stochastic Robust Optimization: A General Computational Framework and Algorithm for Optimization under Uncertainty in the Big Data Era | A novel data-driven stochastic robust optimization (DDSRO) framework is
proposed for optimization under uncertainty leveraging labeled multi-class
uncertainty data. Uncertainty data in large datasets are often collected from
various conditions, which are encoded by class labels. Machine learning methods
including Dirichlet process mixture model and maximum likelihood estimation are
employed for uncertainty modeling. A DDSRO framework is further proposed based
on the data-driven uncertainty model through a bi-level optimization structure.
The outer optimization problem follows a two-stage stochastic programming
approach to optimize the expected objective across different data classes;
adaptive robust optimization is nested as the inner problem to ensure the
robustness of the solution while maintaining computational tractability. A
decomposition-based algorithm is further developed to solve the resulting
multi-level optimization problem efficiently. Case studies on process network
design and planning are presented to demonstrate the applicability of the
proposed framework and algorithm.
| 1 | 0 | 1 | 0 | 0 | 0 |
Algebraic multiscale method for flow in heterogeneous porous media with embedded discrete fractures (F-AMS) | This paper introduces an Algebraic MultiScale method for simulation of flow
in heterogeneous porous media with embedded discrete Fractures (F-AMS). First,
multiscale coarse grids are independently constructed for both porous matrix
and fracture networks. Then, a map between coarse- and fine-scale is obtained
by algebraically computing basis functions with local support. In order to
extend the localization assumption to the fractured media, four types of basis
functions are investigated: (1) Decoupled-AMS, in which the two media are
completely decoupled, (2) Frac-AMS and (3) Rock-AMS, which take into account
only one-way transmissibilities, and (4) Coupled-AMS, in which the matrix and
fracture interpolators are fully coupled. In order to ensure scalability, the
F-AMS framework permits full flexibility in terms of the resolution of the
fracture coarse grids. Numerical results are presented for two- and
three-dimensional heterogeneous test cases. During these experiments, the
performance of F-AMS, paired with ILU(0) as second-stage smoother in a
convergent iterative procedure, is studied by monitoring CPU times and
convergence rates. Finally, in order to investigate the scalability of the
method, an extensive benchmark study is conducted, where a commercial algebraic
multigrid solver is used as reference. The results show that, given an
appropriate coarsening strategy, F-AMS is insensitive to severe fracture and
matrix conductivity contrasts, as well as the length of the fracture networks.
Its unique feature is that a fine-scale mass conservative flux field can be
reconstructed after any iteration, providing efficient approximate solutions in
time-dependent simulations.
| 1 | 1 | 0 | 0 | 0 | 0 |
From Pragmatic to Systematic Software Process Improvement: An Evaluated Approach | Software processes improvement (SPI) is a challenging task, as many different
stakeholders, project settings, and contexts and goals need to be considered.
SPI projects are often operated in a complex and volatile environment and,
thus, require a sound management that is resource-intensive requiring many
stakeholders to contribute to the process assessment, analysis, design,
realisation, and deployment. Although there exist many valuable SPI approaches,
none address the needs of both process engineers and project managers. This
article presents an Artefact-based Software Process Improvement & Management
approach (ArSPI) that closes this gap. ArSPI was developed and tested across
several SPI projects in large organisations in Germany and Eastern Europe. The
approach further encompasses a template for initiating, performing, and
managing SPI projects by defining a set of 5 key artefacts and 24 support
artefacts. We present ArSPI and discus results of its validation indicating
ArSPI to be a helpful instrument to set up and steer SPI projects.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Bayesian Mixture Model for Clustering on the Stiefel Manifold | Analysis of a Bayesian mixture model for the Matrix Langevin distribution on
the Stiefel manifold is presented. The model exploits a particular
parametrization of the Matrix Langevin distribution, various aspects of which
are elaborated on. A general, and novel, family of conjugate priors, and an
efficient Markov chain Monte Carlo (MCMC) sampling scheme for the corresponding
posteriors is then developed for the mixture model. Theoretical properties of
the prior and posterior distributions, including posterior consistency, are
explored in detail. Extensive simulation experiments are presented to validate
the efficacy of the framework. Real-world examples, including a large scale
neuroimaging dataset, are analyzed to demonstrate the computational
tractability of the approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Discrete Cycloids from Convex Symmetric Polygons | Cycloids, hipocycloids and epicycloids have an often forgotten common
property: they are homothetic to their evolutes. But what if use convex
symmetric polygons as unit balls, can we define evolutes and cycloids which are
genuinely discrete? Indeed, we can! We define discrete cycloids as eigenvectors
of a discrete double evolute transform which can be seen as a linear operator
on a vector space we call curvature radius space. We are also able to classify
such cycloids according to the eigenvalues of that transform, and show that the
number of cusps of each cycloid is well determined by the ordering of those
eigenvalues. As an elegant application, we easily establish a version of the
four-vertex theorem for closed convex polygons. The whole theory is developed
using only linear algebra, and concrete examples are given.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-color image compression-encryption algorithm based on chaotic system and fuzzy transform | In this paper an algorithm for multi-color image compression-encryption is
introduced. For compression step fuzzy transform based on exponential b-spline
function is used. In encryption step, a novel combination chaotic system based
on Sine and Tent systems is proposed. Also in the encryption algorithm, 3D
shift based on chaotic system is introduced. The simulation results and
security analysis show that the proposed algorithm is secure and efficient.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gaussian Graphical Models: An Algebraic and Geometric Perspective | Gaussian graphical models are used throughout the natural sciences, social
sciences, and economics to model the statistical relationships between
variables of interest in the form of a graph. We here provide a pedagogic
introduction to Gaussian graphical models and review recent results on maximum
likelihood estimation for such models. Throughout, we highlight the rich
algebraic and geometric properties of Gaussian graphical models and explain how
these properties relate to convex optimization and ultimately result in
insights on the existence of the maximum likelihood estimator (MLE) and
algorithms for computing the MLE.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Dynamical History of Chariklo and its Rings | Chariklo is the only small Solar system body confirmed to have rings. Given
the instability of its orbit, the presence of rings is surprising, and their
origin remains poorly understood. In this work, we study the dynamical history
of the Chariklo system by integrating almost 36,000 Chariklo clones backwards
in time for one Gyr under the influence of the Sun and the four giant planets.
By recording all close encounters between the clones and planets, we
investigate the likelihood that Chariklo's rings could have survived since its
capture to the Centaur population. Our results reveal that Chariklo's orbit
occupies a region of stable chaos, resulting in its orbit being marginally more
stable than those of the other Centaurs. Despite this, we find that it was most
likely captured to the Centaur population within the last 20 Myr, and that its
orbital evolution has been continually punctuated by regular close encounters
with the giant planets. The great majority (> 99%) of those encounters within
one Hill radius of the planet have only a small effect on the rings. We
conclude that close encounters with giant planets have not had a significant
effect on the ring structure. Encounters within the Roche limit of the giant
planets are rare, making ring creation through tidal disruption unlikely.
| 0 | 1 | 0 | 0 | 0 | 0 |
THAP: A Matlab Toolkit for Learning with Hawkes Processes | As a powerful tool of asynchronous event sequence analysis, point processes
have been studied for a long time and achieved numerous successes in different
fields. Among various point process models, Hawkes process and its variants
attract many researchers in statistics and computer science these years because
they capture the self- and mutually-triggering patterns between different
events in complicated sequences explicitly and quantitatively and are broadly
applicable to many practical problems. In this paper, we describe an
open-source toolkit implementing many learning algorithms and analysis tools
for Hawkes process model and its variants. Our toolkit systematically
summarizes recent state-of-the-art algorithms as well as most classic
algorithms of Hawkes processes, which is beneficial for both academical
education and research. Source code can be downloaded from
this https URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
Studies of the Response of the SiD Silicon-Tungsten ECal | Studies of the response of the SiD silicon-tungsten electromagnetic
calorimeter (ECal) are presented. Layers of highly granular (13 mm^2 pixels)
silicon detectors embedded in thin gaps (~ 1 mm) between tungsten alloy plates
give the SiD ECal the ability to separate electromagnetic showers in a crowded
environment. A nine-layer prototype has been built and tested in a 12.1 GeV
electron beam at the SLAC National Accelerator Laboratory. This data was
simulated with a Geant4 model. Particular attention was given to the separation
of nearby incident electrons, which demonstrated a high (98.5%) separation
efficiency for two electrons at least 1 cm from each other. The beam test study
will be compared to a full SiD detector simulation with a realistic geometry,
where the ECal calibration constants must first be established. This work is
continuing, as the geometry requires that the calibration constants depend upon
energy, angle, and absorber depth. The derivation of these constants is being
developed from first principles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multiple Hypothesis Tracking Algorithm for Multi-Target Multi-Camera Tracking with Disjoint Views | In this study, a multiple hypothesis tracking (MHT) algorithm for
multi-target multi-camera tracking (MCT) with disjoint views is proposed. Our
method forms track-hypothesis trees, and each branch of them represents a
multi-camera track of a target that may move within a camera as well as move
across cameras. Furthermore, multi-target tracking within a camera is performed
simultaneously with the tree formation by manipulating a status of each track
hypothesis. Each status represents three different stages of a multi-camera
track: tracking, searching, and end-of-track. The tracking status means targets
are tracked by a single camera tracker. In the searching status, the
disappeared targets are examined if they reappear in other cameras. The
end-of-track status does the target exited the camera network due to its
lengthy invisibility. These three status assists MHT to form the
track-hypothesis trees for multi-camera tracking. Furthermore, they present a
gating technique for eliminating of unlikely observation-to-track association.
In the experiments, they evaluate the proposed method using two datasets,
DukeMTMC and NLPR-MCT, which demonstrates that the proposed method outperforms
the state-of-the-art method in terms of improvement of the accuracy. In
addition, they show that the proposed method can operate in real-time and
online.
| 1 | 0 | 0 | 0 | 0 | 0 |
Smooth positon solutions of the focusing modified Korteweg-de Vries equation | The $n$-fold Darboux transformation $T_{n}$ of the focusing real mo\-di\-fied
Kor\-te\-weg-de Vries (mKdV) equation is expressed in terms of the determinant
representation. Using this representation, the $n$-soliton solutions of the
mKdV equation are also expressed by determinants whose elements consist of the
eigenvalues $\lambda_{j}$ and the corresponding eigenfunctions of the
associated Lax equation. The nonsingular $n$-positon solutions of the focusing
mKdV equation are obtained in the special limit
$\lambda_{j}\rightarrow\lambda_{1}$, from the corresponding $n$-soliton
solutions and by using the associated higher-order Taylor expansion.
Furthermore, the decomposition method of the $n$-positon solution into $n$
single-soliton solutions, the trajectories, and the corresponding "phase
shifts" of the multi-positons are also investigated.
| 0 | 1 | 0 | 0 | 0 | 0 |
On discrimination between two close distribution tails | The goodness-of-fit test for discrimination of two tail distribution using
higher order statistics is proposed. The consistency of proposed test is proved
for two different alternatives. We do not assume belonging the corresponding
distribution function to a maximum domain of attraction.
| 0 | 0 | 1 | 1 | 0 | 0 |
Belief Propagation Min-Sum Algorithm for Generalized Min-Cost Network Flow | Belief Propagation algorithms are instruments used broadly to solve graphical
model optimization and statistical inference problems. In the general case of a
loopy Graphical Model, Belief Propagation is a heuristic which is quite
successful in practice, even though its empirical success, typically, lacks
theoretical guarantees. This paper extends the short list of special cases
where correctness and/or convergence of a Belief Propagation algorithm is
proven. We generalize formulation of Min-Sum Network Flow problem by relaxing
the flow conservation (balance) constraints and then proving that the Belief
Propagation algorithm converges to the exact result.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sharper and Simpler Nonlinear Interpolants for Program Verification | Interpolation of jointly infeasible predicates plays important roles in
various program verification techniques such as invariant synthesis and CEGAR.
Intrigued by the recent result by Dai et al.\ that combines real algebraic
geometry and SDP optimization in synthesis of polynomial interpolants, the
current paper contributes its enhancement that yields sharper and simpler
interpolants. The enhancement is made possible by: theoretical observations in
real algebraic geometry; and our continued fraction-based algorithm that rounds
off (potentially erroneous) numerical solutions of SDP solvers. Experiment
results support our tool's effectiveness; we also demonstrate the benefit of
sharp and simple interpolants in program verification examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantized Laplacian growth, III: On conformal field theories of Laplacian growth | A one-parametric stochastic dynamics of the interface in the quantized
Laplacian growth with zero surface tension is introduced. The quantization
procedure regularizes the growth by preventing the formation of cusps at the
interface, and makes the interface dynamics chaotic. In a long time asymptotic,
by coupling a conformal field theory to the stochastic growth process we
introduce a set of observables (the martingales), whose expectation values are
constant in time. The martingales are connected to degenerate representations
of the Virasoro algebra, and can be written in terms of conformal correlation
functions. A direct link between Laplacian growth and the conformal Liouville
field theory with the central charge $c\geq25$ is proposed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Supervised Saliency Map Driven Segmentation of the Lesions in Dermoscopic Images | Lesion segmentation is the first step in most automatic melanoma recognition
systems. Deficiencies and difficulties in dermoscopic images such as color
inconstancy, hair occlusion, dark corners and color charts make lesion
segmentation an intricate task. In order to detect the lesion in the presence
of these problems, we propose a supervised saliency detection method tailored
for dermoscopic images based on the discriminative regional feature integration
(DRFI). DRFI method incorporates multi-level segmentation, regional contrast,
property, background descriptors, and a random forest regressor to create
saliency scores for each region in the image. In our improved saliency
detection method, mDRFI, we have added some new features to regional property
descriptors. Also, in order to achieve more robust regional background
descriptors, a thresholding algorithm is proposed to obtain a new
pseudo-background region. Findings reveal that mDRFI is superior to DRFI in
detecting the lesion as the salient object in dermoscopic images. The proposed
overall lesion segmentation framework uses detected saliency map to construct
an initial mask of the lesion through thresholding and post-processing
operations. The initial mask is then evolving in a level set framework to fit
better on the lesion's boundaries. The results of evaluation tests on three
public datasets show that our proposed segmentation method outperforms the
other conventional state-of-the-art segmentation algorithms and its performance
is comparable with most recent approaches that are based on deep convolutional
neural networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Mean and Median Criterion for Automatic Kernel Bandwidth Selection for Support Vector Data Description | Support vector data description (SVDD) is a popular technique for detecting
anomalies. The SVDD classifier partitions the whole space into an inlier
region, which consists of the region near the training data, and an outlier
region, which consists of points away from the training data. The computation
of the SVDD classifier requires a kernel function, and the Gaussian kernel is a
common choice for the kernel function. The Gaussian kernel has a bandwidth
parameter, whose value is important for good results. A small bandwidth leads
to overfitting, and the resulting SVDD classifier overestimates the number of
anomalies. A large bandwidth leads to underfitting, and the classifier fails to
detect many anomalies. In this paper we present a new automatic, unsupervised
method for selecting the Gaussian kernel bandwidth. The selected value can be
computed quickly, and it is competitive with existing bandwidth selection
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Understanding the Impact of Label Granularity on CNN-based Image Classification | In recent years, supervised learning using Convolutional Neural Networks
(CNNs) has achieved great success in image classification tasks, and large
scale labeled datasets have contributed significantly to this achievement.
However, the definition of a label is often application dependent. For example,
an image of a cat can be labeled as "cat" or perhaps more specifically "Persian
cat." We refer to this as label granularity. In this paper, we conduct
extensive experiments using various datasets to demonstrate and analyze how and
why training based on fine-grain labeling, such as "Persian cat" can improve
CNN accuracy on classifying coarse-grain classes, in this case "cat." The
experimental results show that training CNNs with fine-grain labels improves
both network's optimization and generalization capabilities, as intuitively it
encourages the network to learn more features, and hence increases
classification accuracy on coarse-grain classes under all datasets considered.
Moreover, fine-grain labels enhance data efficiency in CNN training. For
example, a CNN trained with fine-grain labels and only 40% of the total
training data can achieve higher accuracy than a CNN trained with the full
training dataset and coarse-grain labels. These results point to two possible
applications of this work: (i) with sufficient human resources, one can improve
CNN performance by re-labeling the dataset with fine-grain labels, and (ii)
with limited human resources, to improve CNN performance, rather than
collecting more training data, one may instead use fine-grain labels for the
dataset. We further propose a metric called Average Confusion Ratio to
characterize the effectiveness of fine-grain labeling, and show its use through
extensive experimentation. Code is available at
this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Monte Carlo modified profile likelihood in models for clustered data | The main focus of the analysts who deal with clustered data is usually not on
the clustering variables, and hence the group-specific parameters are treated
as nuisance. If a fixed effects formulation is preferred and the total number
of clusters is large relative to the single-group sizes, classical frequentist
techniques relying on the profile likelihood are often misleading. The use of
alternative tools, such as modifications to the profile likelihood or
integrated likelihoods, for making accurate inference on a parameter of
interest can be complicated by the presence of nonstandard modelling and/or
sampling assumptions. We show here how to employ Monte Carlo simulation in
order to approximate the modified profile likelihood in some of these
unconventional frameworks. The proposed solution is widely applicable and is
shown to retain the usual properties of the modified profile likelihood. The
approach is examined in two instances particularly relevant in applications,
i.e. missing-data models and survival models with unspecified censoring
distribution. The effectiveness of the proposed solution is validated via
simulation studies and two clinical trial applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
Anisotropic spin-density distribution and magnetic anisotropy of strained La$_{1-x}$Sr$_x$MnO$_3$ thin films: Angle-dependent x-ray magnetic circular dichroism | Magnetic anisotropies of ferromagnetic thin films are induced by epitaxial
strain from the substrate via strain-induced anisotropy in the orbital magnetic
moment and that in the spatial distribution of spin-polarized electrons.
However, the preferential orbital occupation in ferromagnetic metallic
La$_{1-x}$Sr$_x$MnO$_3$ (LSMO) thin films studied by x-ray linear dichroism
(XLD) has always been found out-of-plane for both tensile and compressive
epitaxial strain and hence irrespective of the magnetic anisotropy. In order to
resolve this mystery, we directly probed the preferential orbital occupation of
spin-polarized electrons in LSMO thin films under strain by angle-dependent
x-ray magnetic circular dichroism (XMCD). Anisotropy of the spin-density
distribution was found to be in-plane for the tensile strain and out-of-plane
for the compressive strain, consistent with the observed magnetic anisotropy.
The ubiquitous out-of-plane preferential orbital occupation seen by XLD is
attributed to the occupation of both spin-up and spin-down out-of-plane
orbitals in the surface magnetic dead layer.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Annotated Corpus of Relational Strategies in Customer Service | We create and release the first publicly available commercial customer
service corpus with annotated relational segments. Human-computer data from
three live customer service Intelligent Virtual Agents (IVAs) in the domains of
travel and telecommunications were collected, and reviewers marked all text
that was deemed unnecessary to the determination of user intention. After
merging the selections of multiple reviewers to create highlighted texts, a
second round of annotation was done to determine the classes of language
present in the highlighted sections such as the presence of Greetings,
Backstory, Justification, Gratitude, Rants, or Emotions. This resulting corpus
is a valuable resource for improving the quality and relational abilities of
IVAs. As well as discussing the corpus itself, we compare the usage of such
language in human-human interactions on TripAdvisor forums. We show that
removal of this language from task-based inputs has a positive effect on IVA
understanding by both an increase in confidence and improvement in responses,
demonstrating the need for automated methods of its discovery.
| 1 | 0 | 0 | 0 | 0 | 0 |
Putting gravity in control | The aim of the present manuscript is to present a novel proposal in Geometric
Control Theory inspired in the principles of General Relativity and
energy-shaping control.
| 0 | 0 | 1 | 0 | 0 | 0 |
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples | Sometimes it is not enough for a DNN to produce an outcome. For example, in
applications such as healthcare, users need to understand the rationale of the
decisions. Therefore, it is imperative to develop algorithms to learn models
with good interpretability (Doshi-Velez 2017). An important factor that leads
to the lack of interpretability of DNNs is the ambiguity of neurons, where a
neuron may fire for various unrelated concepts. This work aims to increase the
interpretability of DNNs on the whole image space by reducing the ambiguity of
neurons. In this paper, we make the following contributions:
1) We propose a metric to evaluate the consistency level of neurons in a
network quantitatively.
2) We find that the learned features of neurons are ambiguous by leveraging
adversarial examples.
3) We propose to improve the consistency of neurons on adversarial example
subset by an adversarial training algorithm with a consistent loss.
| 1 | 0 | 0 | 1 | 0 | 0 |
Learning a Hierarchical Latent-Variable Model of 3D Shapes | We propose the Variational Shape Learner (VSL), a generative model that
learns the underlying structure of voxelized 3D shapes in an unsupervised
fashion. Through the use of skip-connections, our model can successfully learn
and infer a latent, hierarchical representation of objects. Furthermore,
realistic 3D objects can be easily generated by sampling the VSL's latent
probabilistic manifold. We show that our generative model can be trained
end-to-end from 2D images to perform single image 3D model retrieval.
Experiments show, both quantitatively and qualitatively, the improved
generalization of our proposed model over a range of tasks, performing better
or comparable to various state-of-the-art alternatives.
| 1 | 0 | 0 | 0 | 0 | 0 |
Maximally rotating waves in AdS and on spheres | We study the cubic wave equation in AdS_(d+1) (and a closely related cubic
wave equation on S^3) in a weakly nonlinear regime. Via time-averaging, these
systems are accurately described by simplified infinite-dimensional quartic
Hamiltonian systems, whose structure is mandated by the fully resonant spectrum
of linearized perturbations. The maximally rotating sector, comprising only the
modes of maximal angular momentum at each frequency level, consistently
decouples in the weakly nonlinear regime. The Hamiltonian systems obtained by
this decoupling display remarkable periodic return behaviors closely analogous
to what has been demonstrated in recent literature for a few other related
equations (the cubic Szego equation, the conformal flow, the LLL equation).
This suggests a powerful underlying analytic structure, such as integrability.
We comment on the connection of our considerations to the Gross-Pitaevskii
equation for harmonically trapped Bose-Einstein condensates.
| 0 | 1 | 1 | 0 | 0 | 0 |
Taming Wild High Dimensional Text Data with a Fuzzy Lash | The bag of words (BOW) represents a corpus in a matrix whose elements are the
frequency of words. However, each row in the matrix is a very high-dimensional
sparse vector. Dimension reduction (DR) is a popular method to address sparsity
and high-dimensionality issues. Among different strategies to develop DR
method, Unsupervised Feature Transformation (UFT) is a popular strategy to map
all words on a new basis to represent BOW. The recent increase of text data and
its challenges imply that DR area still needs new perspectives. Although a wide
range of methods based on the UFT strategy has been developed, the fuzzy
approach has not been considered for DR based on this strategy. This research
investigates the application of fuzzy clustering as a DR method based on the
UFT strategy to collapse BOW matrix to provide a lower-dimensional
representation of documents instead of the words in a corpus. The quantitative
evaluation shows that fuzzy clustering produces superior performance and
features to Principal Components Analysis (PCA) and Singular Value
Decomposition (SVD), two popular DR methods based on the UFT strategy.
| 1 | 0 | 0 | 1 | 0 | 0 |
Spatial structure of shock formation | The formation of a singularity in a compressible gas, as described by the
Euler equation, is characterized by the steepening, and eventual overturning of
a wave. Using a self-similar description in two space dimensions, we show that
the spatial structure of this process, which starts at a point, is equivalent
to the formation of a caustic, i.e. to a cusp catastrophe. The lines along
which the profile has infinite slope correspond to the caustic lines, from
which we construct the position of the shock. By solving the similarity
equation, we obtain a complete local description of wave steepening and of the
spreading of the shock from a point.
| 0 | 1 | 1 | 0 | 0 | 0 |
How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks) | This paper investigates how far a very deep neural network is from attaining
close to saturating performance on existing 2D and 3D face alignment datasets.
To this end, we make the following 5 contributions: (a) we construct, for the
first time, a very strong baseline by combining a state-of-the-art architecture
for landmark localization with a state-of-the-art residual block, train it on a
very large yet synthetically expanded 2D facial landmark dataset and finally
evaluate it on all other 2D facial landmark datasets. (b) We create a guided by
2D landmarks network which converts 2D landmark annotations to 3D and unifies
all existing datasets, leading to the creation of LS3D-W, the largest and most
challenging 3D facial landmark dataset to date ~230,000 images. (c) Following
that, we train a neural network for 3D face alignment and evaluate it on the
newly introduced LS3D-W. (d) We further look into the effect of all
"traditional" factors affecting face alignment performance like large pose,
initialization and resolution, and introduce a "new" one, namely the size of
the network. (e) We show that both 2D and 3D face alignment networks achieve
performance of remarkable accuracy which is probably close to saturating the
datasets used. Training and testing code as well as the dataset can be
downloaded from this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Inhabitants of interesting subsets of the Bousfield lattice | The set of Bousfield classes has some important subsets such as the
distributive lattice $\mathbf{DL}$ of all classes $\langle E\rangle$ which are
smash idempotent and the complete Boolean algebra $\mathbf{cBA}$ of closed
classes. We provide examples of spectra that are in $\mathbf{DL}$, but not in
$\mathbf{cBA}$; in particular, for every prime $p$, the Bousfield class of the
Eilenberg-MacLane spectrum $\langle
H\mathbb{F}_p\rangle\in\mathbf{DL}{\setminus}\mathbf{cBA}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Framework for Implementing Machine Learning on Omics Data | The potential benefits of applying machine learning methods to -omics data
are becoming increasingly apparent, especially in clinical settings. However,
the unique characteristics of these data are not always well suited to machine
learning techniques. These data are often generated across different
technologies in different labs, and frequently with high dimensionality. In
this paper we present a framework for combining -omics data sets, and for
handling high dimensional data, making -omics research more accessible to
machine learning applications. We demonstrate the success of this framework
through integration and analysis of multi-analyte data for a set of 3,533
breast cancers. We then use this data-set to predict breast cancer patient
survival for individuals at risk of an impending event, with higher accuracy
and lower variance than methods trained on individual data-sets. We hope that
our pipelines for data-set generation and transformation will open up -omics
data to machine learning researchers. We have made these freely available for
noncommercial use at www.ccg.ai.
| 0 | 0 | 0 | 0 | 1 | 0 |
Actively Calibrated Line Mountable Capacitive Voltage Transducer For Power Systems Applications | A class of Actively Calibrated Line Mounted Capacitive Voltage Transducers
(LMCVT) are introduced as a viable line mountable instrumentation option for
deploying large numbers of voltage transducers onto the medium and high voltage
systems. Active Calibration is shown to reduce the error of line mounted
voltage measurements by an order of magnitude from previously published
techniques. The instrument physics and sensing method is presented and the
performance is evaluated in a laboratory setting. Finally, a roadmap to a fully
deployable prototype is shown.
| 0 | 1 | 0 | 0 | 0 | 0 |
AirCode: Unobtrusive Physical Tags for Digital Fabrication | We present AirCode, a technique that allows the user to tag physically
fabricated objects with given information. An AirCode tag consists of a group
of carefully designed air pockets placed beneath the object surface. These air
pockets are easily produced during the fabrication process of the object,
without any additional material or postprocessing. Meanwhile, the air pockets
affect only the scattering light transport under the surface, and thus are hard
to notice to our naked eyes. But, by using a computational imaging method, the
tags become detectable. We present a tool that automates the design of air
pockets for the user to encode information. AirCode system also allows the user
to retrieve the information from captured images via a robust decoding
algorithm. We demonstrate our tagging technique with applications for metadata
embedding, robotic grasping, as well as conveying object affordances.
| 1 | 0 | 0 | 0 | 0 | 0 |
Identifying Condition-Action Statements in Medical Guidelines Using Domain-Independent Features | This paper advances the state of the art in text understanding of medical
guidelines by releasing two new annotated clinical guidelines datasets, and
establishing baselines for using machine learning to extract condition-action
pairs. In contrast to prior work that relies on manually created rules, we
report experiment with several supervised machine learning techniques to
classify sentences as to whether they express conditions and actions. We show
the limitations and possible extensions of this work on text mining of medical
guidelines.
| 1 | 0 | 0 | 0 | 0 | 0 |
Adversarial Learning for Neural Dialogue Generation | In this paper, drawing intuition from the Turing test, we propose using
adversarial training for open-domain dialogue generation: the system is trained
to produce sequences that are indistinguishable from human-generated dialogue
utterances. We cast the task as a reinforcement learning (RL) problem where we
jointly train two systems, a generative model to produce response sequences,
and a discriminator---analagous to the human evaluator in the Turing test--- to
distinguish between the human-generated dialogues and the machine-generated
ones. The outputs from the discriminator are then used as rewards for the
generative model, pushing the system to generate dialogues that mostly resemble
human dialogues.
In addition to adversarial training we describe a model for adversarial {\em
evaluation} that uses success in fooling an adversary as a dialogue evaluation
metric, while avoiding a number of potential pitfalls. Experimental results on
several metrics, including adversarial evaluation, demonstrate that the
adversarially-trained system generates higher-quality responses than previous
baselines.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the impact origin of Phobos and Deimos III: resulting composition from different impactors | The origin of Phobos and Deimos in a giant impact generated disk is gaining
larger attention. Although this scenario has been the subject of many studies,
an evaluation of the chemical composition of the Mars' moons in this framework
is missing. The chemical composition of Phobos and Deimos is unconstrained. The
large uncertainty about the origin of the mid-infrared features, the lack of
absorption bands in the visible and near-infrared spectra, and the effects of
secondary processes on the moons' surface make the determination of their
composition very difficult from remote sensing data. Simulations suggest a
formation of a disk made of gas and melt with their composition linked to the
nature of the impactor and Mars. Using thermodynamic equilibrium we investigate
the composition of dust (condensates from gas) and solids (from a cooling melt)
that result from different types of Mars impactors (Mars-, CI-, CV-, EH-,
comet-like). Our calculations show a wide range of possible chemical
compositions and noticeable differences between dust and solids depending on
the considered impactors. Assuming Phobos and Deimos as result of the accretion
and mixing of dust and solids, we find that the derived assemblage (dust rich
in metallic-iron, sulphides and/or carbon, and quenched solids rich in
silicates) can be compatible with the observations. The JAXA's MMX (Martian
Moons eXploration) mission will investigate the physical and chemical
properties of the Maroons, especially sampling from Phobos, before returning to
Earth. Our results could be then used to disentangle the origin and chemical
composition of the pristine body that hit Mars and suggest guidelines for
helping in the analysis of the returned samples.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dealing with the Dimensionality Curse in Dynamic Pricing Competition: Using Frequent Repricing to Compensate Imperfect Market Anticipations | Most sales applications are characterized by competition and limited demand
information. For successful pricing strategies, frequent price adjustments as
well as anticipation of market dynamics are crucial. Both effects are
challenging as competitive markets are complex and computations of optimized
pricing adjustments can be time-consuming. We analyze stochastic dynamic
pricing models under oligopoly competition for the sale of perishable goods. To
circumvent the curse of dimensionality, we propose a heuristic approach to
efficiently compute price adjustments. To demonstrate our strategy's
applicability even if the number of competitors is large and their strategies
are unknown, we consider different competitive settings in which competitors
frequently and strategically adjust their prices. For all settings, we verify
that our heuristic strategy yields promising results. We compare the
performance of our heuristic against upper bounds, which are obtained by
optimal strategies that take advantage of perfect price anticipations. We find
that price adjustment frequencies can have a larger impact on expected profits
than price anticipations. Finally, our approach has been applied on Amazon for
the sale of used books. We have used a seller's historical market data to
calibrate our model. Sales results show that our data-driven strategy
outperforms the rule-based strategy of an experienced seller by a profit
increase of more than 20%.
| 0 | 0 | 0 | 0 | 0 | 1 |
Interleaved Group Convolutions for Deep Neural Networks | In this paper, we present a simple and modularized neural network
architecture, named interleaved group convolutional neural networks (IGCNets).
The main point lies in a novel building block, a pair of two successive
interleaved group convolutions: primary group convolution and secondary group
convolution. The two group convolutions are complementary: (i) the convolution
on each partition in primary group convolution is a spatial convolution, while
on each partition in secondary group convolution, the convolution is a
point-wise convolution; (ii) the channels in the same secondary partition come
from different primary partitions. We discuss one representative advantage:
Wider than a regular convolution with the number of parameters and the
computation complexity preserved. We also show that regular convolutions, group
convolution with summation fusion, and the Xception block are special cases of
interleaved group convolutions. Empirical results over standard benchmarks,
CIFAR-$10$, CIFAR-$100$, SVHN and ImageNet demonstrate that our networks are
more efficient in using parameters and computation complexity with similar or
higher accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Lower Bounding Diffusion Constant by the Curvature of Drude Weight | We establish a general connection between ballistic and diffusive transport
in systems where the ballistic contribution in canonical ensemble vanishes. A
lower bound on the Green-Kubo diffusion constant is derived in terms of the
curvature of the ideal transport coefficient, the Drude weight, with respect to
the filling parameter. As an application, we explicitly determine the lower
bound on the high temperature diffusion constant in the anisotropic spin 1/2
Heisenberg chain for anisotropy parameters $\Delta \geq 1$, thus settling the
question whether the transport is sub-diffusive or not. Addi- tionally, the
lower bound is shown to saturate the diffusion constant for a certain classical
integrable model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Face Detection using Deep Learning: An Improved Faster RCNN Approach | In this report, we present a new face detection scheme using deep learning
and achieve the state-of-the-art detection performance on the well-known FDDB
face detetion benchmark evaluation. In particular, we improve the
state-of-the-art faster RCNN framework by combining a number of strategies,
including feature concatenation, hard negative mining, multi-scale training,
model pretraining, and proper calibration of key parameters. As a consequence,
the proposed scheme obtained the state-of-the-art face detection performance,
making it the best model in terms of ROC curves among all the published methods
on the FDDB benchmark.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task | End-to-end control for robot manipulation and grasping is emerging as an
attractive alternative to traditional pipelined approaches. However, end-to-end
methods tend to either be slow to train, exhibit little or no generalisability,
or lack the ability to accomplish long-horizon or multi-stage tasks. In this
paper, we show how two simple techniques can lead to end-to-end (image to
velocity) execution of a multi-stage task, which is analogous to a simple
tidying routine, without having seen a single real image. This involves
locating, reaching for, and grasping a cube, then locating a basket and
dropping the cube inside. To achieve this, robot trajectories are computed in a
simulator, to collect a series of control velocities which accomplish the task.
Then, a CNN is trained to map observed images to velocities, using domain
randomisation to enable generalisation to real world images. Results show that
we are able to successfully accomplish the task in the real world with the
ability to generalise to novel environments, including those with dynamic
lighting conditions, distractor objects, and moving objects, including the
basket itself. We believe our approach to be simple, highly scalable, and
capable of learning long-horizon tasks that have until now not been shown with
the state-of-the-art in end-to-end robot control.
| 1 | 0 | 0 | 0 | 0 | 0 |
The cosmic spiderweb: equivalence of cosmic, architectural, and origami tessellations | For over twenty years, the term 'cosmic web' has guided our understanding of
the large-scale arrangement of matter in the cosmos, accurately evoking the
concept of a network of galaxies linked by filaments. But the physical
correspondence between the cosmic web and structural-engineering or textile
'spiderwebs' is even deeper than previously known, and extends to origami
tessellations as well. Here we explain that in a good structure-formation
approximation known as the adhesion model, threads of the cosmic web form a
spiderweb, i.e. can be strung up to be entirely in tension. The correspondence
is exact if nodes sampling voids are included, and if structure is excluded
within collapsed regions (walls, filaments and haloes), where dark-matter
multistreaming and baryonic physics affect the structure. We also suggest how
concepts arising from this link might be used to test cosmological models: for
example, to test for large-scale anisotropy and rotational flows in the cosmos.
| 0 | 1 | 0 | 0 | 0 | 0 |
Data-driven polynomial chaos expansion for machine learning regression | We present a regression technique for data driven problems based on
polynomial chaos expansion (PCE). PCE is a popular technique in the field of
uncertainty quantification (UQ), where it is typically used to replace a
runnable but expensive computational model subject to random inputs with an
inexpensive-to-evaluate polynomial function. The metamodel obtained enables a
reliable estimation of the statistics of the output, provided that a suitable
probabilistic model of the input is available.
In classical machine learning (ML) regression settings, however, the system
is only known through observations of its inputs and output, and the interest
lies in obtaining accurate pointwise predictions of the latter. Here, we show
that a PCE metamodel purely trained on data can yield pointwise predictions
whose accuracy is comparable to that of other ML regression models, such as
neural networks and support vector machines. The comparisons are performed on
benchmark datasets available from the literature. The methodology also enables
the quantification of the output uncertainties and is robust to noise.
Furthermore, it enjoys additional desirable properties, such as good
performance for small training sets and simplicity of construction, with only
little parameter tuning required. In the presence of statistically dependent
inputs, we investigate two ways to build the PCE, and show through simulations
that one approach is superior to the other in the stated settings.
| 0 | 0 | 0 | 1 | 0 | 0 |
Robust Implicit Backpropagation | Arguably the biggest challenge in applying neural networks is tuning the
hyperparameters, in particular the learning rate. The sensitivity to the
learning rate is due to the reliance on backpropagation to train the network.
In this paper we present the first application of Implicit Stochastic Gradient
Descent (ISGD) to train neural networks, a method known in convex optimization
to be unconditionally stable and robust to the learning rate. Our key
contribution is a novel layer-wise approximation of ISGD which makes its
updates tractable for neural networks. Experiments show that our method is more
robust to high learning rates and generally outperforms standard
backpropagation on a variety of tasks.
| 0 | 0 | 0 | 1 | 0 | 0 |
Experimental and Theoretical Study of Magnetohydrodynamic Ship Models | Magnetohydrodynamic (MHD) ships represent a clear demonstration of the
Lorentz force in fluids, which explains the number of students practicals or
exercises described on the web. However, the related literature is rather
specific and no complete comparison between theory and typical small scale
experiments is currently available. This work provides, in a self-consistent
framework, a detailed presentation of the relevant theoretical equations for
small MHD ships and experimental measurements for future benchmarks.
Theoretical results of the literature are adapted to these simple
battery/magnets powered ships moving on salt water. Comparison between theory
and experiments are performed to validate each theoretical step such as the
Tafel and the Kohlrausch laws, or the predicted ship speed. A successful
agreement is obtained without any adjustable parameter. Finally, based on these
results, an optimal design is then deduced from the theory. Therefore this work
provides a solid theoretical and experimental ground for small scale MHD ships,
by presenting in detail several approximations and how they affect the boat
efficiency. Moreover, the theory is general enough to be adapted to other
contexts, such as large scale ships or industrial flow measurement techniques.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection | With a rapidly increasing number of devices connected to the internet, big
data has been applied to various domains of human life. Nevertheless, it has
also opened new venues for breaching users' privacy. Hence it is highly
required to develop techniques that enable data owners to privatize their data
while keeping it useful for intended applications. Existing methods, however,
do not offer enough flexibility for controlling the utility-privacy trade-off
and may incur unfavorable results when privacy requirements are high. To tackle
these drawbacks, we propose a compressive-privacy based method, namely RUCA
(Ratio Utility and Cost Analysis), which can not only maximize performance for
a privacy-insensitive classification task but also minimize the ability of any
classifier to infer private information from the data. Experimental results on
Census and Human Activity Recognition data sets demonstrate that RUCA
significantly outperforms existing privacy preserving data projection
techniques for a wide range of privacy pricings.
| 0 | 0 | 0 | 1 | 0 | 0 |
Time-optimal control strategies in SIR epidemic models | We investigate the time-optimal control problem in SIR
(Susceptible-Infected-Recovered) epidemic models, focusing on different control
policies: vaccination, isolation, culling, and reduction of transmission.
Applying the Pontryagin's Minimum Principle (PMP) to the unconstrained control
problems (i.e. without costs of control or resource limitations), we prove
that, for all the policies investigated, only bang-bang controls with at most
one switch are admitted. When a switch occurs, the optimal strategy is to delay
the control action some amount of time and then apply the control at the
maximum rate for the remainder of the outbreak. This result is in contrast with
previous findings on the unconstrained problems of minimizing the total
infectious burden over an outbreak, where the optimal strategy is to use the
maximal control for the entire epidemic. Then, the critical consequence of our
results is that, in a wide range of epidemiological circumstances, it may be
impossible to minimize the total infectious burden while minimizing the
epidemic duration, and vice versa. Moreover, numerical simulations highlighted
additional unexpected results, showing that the optimal control can be delayed
also when the control reproduction number is lower than one and that the
switching time from no control to maximum control can even occur after the peak
of infection has been reached. Our results are especially important for
livestock diseases where the minimization of outbreaks duration is a priority
due to sanitary restrictions imposed to farms during ongoing epidemics, such as
animal movements and export bans.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the validity of the formal Edgeworth expansion for posterior densities | We consider a fundamental open problem in parametric Bayesian theory, namely
the validity of the formal Edgeworth expansion of the posterior density. While
the study of valid asymptotic expansions for posterior distributions
constitutes a rich literature, the validity of the formal Edgeworth expansion
has not been rigorously established. Several authors have claimed connections
of various posterior expansions with the classical Edgeworth expansion, or have
simply assumed its validity. Our main result settles this open problem. We also
prove a lemma concerning the order of posterior cumulants which is of
independent interest in Bayesian parametric theory. The most relevant
literature is synthesized and compared to the newly-derived Edgeworth
expansions. Numerical investigations illustrate that our expansion has the
behavior expected of an Edgeworth expansion, and that it has better performance
than the other existing expansion which was previously claimed to be of
Edgeworth-type.
| 0 | 0 | 1 | 1 | 0 | 0 |
DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization | Adaptive gradient-based optimization methods such as ADAGRAD, RMSPROP, and
ADAM are widely used in solving large-scale machine learning problems including
deep learning. A number of schemes have been proposed in the literature aiming
at parallelizing them, based on communications of peripheral nodes with a
central node, but incur high communications cost. To address this issue, we
develop a novel consensus-based distributed adaptive moment estimation method
(DADAM) for online optimization over a decentralized network that enables data
parallelization, as well as decentralized computation. The method is
particularly useful, since it can accommodate settings where access to local
data is allowed. Further, as established theoretically in this work, it can
outperform centralized adaptive algorithms, for certain classes of loss
functions used in applications. We analyze the convergence properties of the
proposed algorithm and provide a dynamic regret bound on the convergence rate
of adaptive moment estimation methods in both stochastic and deterministic
settings. Empirical results demonstrate that DADAM works also well in practice
and compares favorably to competing online optimization methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Paramagnetic Meissner effect in ZrB12 single crystal with non-monotonic vortex-vortex interactions | The magnetic response related to paramagnetic Meissner effect (PME) is
studied in a high quality single crystal ZrB12 with non-monotonic vortex-vortex
interactions. We observe the expulsion and penetration of magnetic flux in the
form of vortex clusters with increasing temperature. A vortex phase diagram is
constructed which shows that the PME can be explained by considering the
interplay among the flux compression, the different temperature dependencies of
the vortex-vortex and the vortex-pin interactions, and thermal fluctuations.
Such a scenario is in good agreement with the results of the magnetic
relaxation measurements.
| 0 | 1 | 0 | 0 | 0 | 0 |
Credal Networks under Epistemic Irrelevance | A credal network under epistemic irrelevance is a generalised type of
Bayesian network that relaxes its two main building blocks. On the one hand,
the local probabilities are allowed to be partially specified. On the other
hand, the assessments of independence do not have to hold exactly.
Conceptually, these two features turn credal networks under epistemic
irrelevance into a powerful alternative to Bayesian networks, offering a more
flexible approach to graph-based multivariate uncertainty modelling. However,
in practice, they have long been perceived as very hard to work with, both
theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer
justified. We provide a general introduction to credal networks under epistemic
irrelevance, give an overview of the state of the art, and present several new
theoretical results. Most importantly, we explain how these results can be
combined to allow for the design of recursive inference methods. We provide
numerous concrete examples of how this can be achieved, and use these to
demonstrate that computing with credal networks under epistemic irrelevance is
most definitely feasible, and in some cases even highly efficient. We also
discuss several philosophical aspects, including the lack of symmetry, how to
deal with probability zero, the interpretation of lower expectations, the
axiomatic status of graphoid properties, and the difference between updating
and conditioning.
| 1 | 0 | 1 | 0 | 0 | 0 |
Resonance fluorescence in the resolvent operator formalism | The Mollow spectrum for the light scattered by a driven two-level atom is
derived in the resolvent operator formalism. The derivation is based on the
construction of a master equation from the resolvent operator of the atom-field
system. We show that the natural linewidth of the excited atomic level remains
essentially unmodified, to a very good level of approximation, even in the
strong-field regime, where Rabi flopping becomes relevant inside the
self-energy loop that yields the linewidth. This ensures that the obtained
master equation and the spectrum derived matches that of Mollow.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the the Berge Conjecture for tunnel number one knots | In this paper we use an approach based on dynamics to prove that if $K\subset
S^3$ is a tunnel number one knot which admits a Dehn filling resulting in a
lens space $L$ then $K$ is either a Berge knot, or $K\subset S^3$ is
$(1,1)$-knot.
| 0 | 0 | 1 | 0 | 0 | 0 |
EMRIs and the relativistic loss-cone: The curious case of the fortunate coincidence | Extreme mass ratio inspiral (EMRI) events are vulnerable to perturbations by
the stellar background, which can abort them prematurely by deflecting EMRI
orbits to plunging ones that fall directly into the massive black hole (MBH),
or to less eccentric ones that no longer interact strongly with the MBH. A
coincidental hierarchy between the collective resonant Newtonian torques due to
the stellar background, and the relative magnitudes of the leading-order
post-Newtonian precessional and radiative terms of the general relativistic
2-body problem, allows EMRIs to decouple from the background and produce
semi-periodic gravitational wave signals. I review the recent theoretical
developments that confirm this conjectured fortunate coincidence, and briefly
discuss the implications for EMRI rates, and show how these dynamical effects
can be probed locally by stars near the Galactic MBH.
| 0 | 1 | 0 | 0 | 0 | 0 |
Investigating the configurations in cross-shareholding: a joint copula-entropy approach | --- the companies populating a Stock market, along with their connections,
can be effectively modeled through a directed network, where the nodes
represent the companies, and the links indicate the ownership. This paper deals
with this theme and discusses the concentration of a market. A
cross-shareholding matrix is considered, along with two key factors: the node
out-degree distribution which represents the diversification of investments in
terms of the number of involved companies, and the node in-degree distribution
which reports the integration of a company due to the sales of its own shares
to other companies. While diversification is widely explored in the literature,
integration is most present in literature on contagions. This paper captures
such quantities of interest in the two frameworks and studies the stochastic
dependence of diversification and integration through a copula approach. We
adopt entropies as measures for assessing the concentration in the market. The
main question is to assess the dependence structure leading to a better
description of the data or to market polarization (minimal entropy) or market
fairness (maximal entropy). In so doing, we derive information on the way in
which the in- and out-degrees should be connected in order to shape the market.
The question is of interest to regulators bodies, as witnessed by specific
alert threshold published on the US mergers guidelines for limiting the
possibility of acquisitions and the prevalence of a single company on the
market. Indeed, all countries and the EU have also rules or guidelines in order
to limit concentrations, in a country or across borders, respectively. The
calibration of copulas and model parameters on the basis of real data serves as
an illustrative application of the theoretical proposal.
| 0 | 0 | 0 | 0 | 0 | 1 |
The Enemy Among Us: Detecting Hate Speech with Threats Based 'Othering' Language Embeddings | Offensive or antagonistic language targeted at individuals and social groups
based on their personal characteristics (also known as cyber hate speech or
cyberhate) has been frequently posted and widely circulated viathe World Wide
Web. This can be considered as a key risk factor for individual and societal
tension linked toregional instability. Automated Web-based cyberhate detection
is important for observing and understandingcommunity and regional societal
tension - especially in online social networks where posts can be rapidlyand
widely viewed and disseminated. While previous work has involved using
lexicons, bags-of-words orprobabilistic language parsing approaches, they often
suffer from a similar issue which is that cyberhate can besubtle and indirect -
thus depending on the occurrence of individual words or phrases can lead to a
significantnumber of false negatives, providing inaccurate representation of
the trends in cyberhate. This problemmotivated us to challenge thinking around
the representation of subtle language use, such as references toperceived
threats from "the other" including immigration or job prosperity in a hateful
context. We propose anovel framework that utilises language use around the
concept of "othering" and intergroup threat theory toidentify these subtleties
and we implement a novel classification method using embedding learning to
computesemantic distances between parts of speech considered to be part of an
"othering" narrative. To validate ourapproach we conduct several experiments on
different types of cyberhate, namely religion, disability, race andsexual
orientation, with F-measure scores for classifying hateful instances obtained
through applying ourmodel of 0.93, 0.86, 0.97 and 0.98 respectively, providing
a significant improvement in classifier accuracy overthe state-of-the-art
| 1 | 0 | 0 | 0 | 0 | 0 |
Linear Programming Formulations of Deterministic Infinite Horizon Optimal Control Problems in Discrete Time | This paper is devoted to a study of infinite horizon optimal control problems
with time discounting and time averaging criteria in discrete time. We
establish that these problems are related to certain infinite-dimensional
linear programming (IDLP) problems. We also establish asymptotic relationships
between the optimal values of problems with time discounting and long-run
average criteria.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exact diagonalization of cubic lattice models in commensurate Abelian magnetic fluxes and translational invariant non-Abelian potentials | We present a general analytical formalism to determine the energy spectrum of
a quantum particle in a cubic lattice subject to translationally invariant
commensurate magnetic fluxes and in the presence of a general space-independent
non-Abelian gauge potential. We first review and analyze the case of purely
Abelian potentials, showing also that the so-called Hasegawa gauge yields a
decomposition of the Hamiltonian into sub-matrices having minimal dimension.
Explicit expressions for such matrices are derived, also for general
anisotropic fluxes. Later on, we show that the introduction of a translational
invariant non-Abelian coupling for multi-component spinors does not affect the
dimension of the minimal Hamiltonian blocks, nor the dimension of the magnetic
Brillouin zone. General formulas are presented for the U(2) case and explicit
examples are investigated involving $\pi$ and $2\pi/3$ magnetic fluxes.
Finally, we numerically study the effect of random flux perturbations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-normality, reactivity, and intrinsic stochasticity in neural dynamics: a non-equilibrium potential approach | Intrinsic stochasticity can induce highly non-trivial effects on dynamical
systems, including stochastic and coherence resonance, noise induced
bistability, noise-induced oscillations, to name but a few. In this paper we
revisit a mechanism first investigated in the context of neuroscience by which
relatively small demographic (intrinsic) fluctuations can lead to the emergence
of avalanching behavior in systems that are deterministically characterized by
a single stable fixed point (up state). The anomalously large response of such
systems to stochasticity stems (or is strongly associated with) the existence
of a "non-normal" stability matrix at the deterministic fixed point, which may
induce the system to be "reactive". Here, we further investigate this mechanism
by exploring the interplay between non-normality and intrinsic (demographic)
stochasticity, by employing a number of analytical and computational
approaches. We establish, in particular, that the resulting dynamics in this
type of systems cannot be simply derived from a scalar potential but,
additionally, one needs to consider a curl flux which describes the essential
non-equilibrium nature of this type of noisy non-normal systems. Moreover, we
shed further light on the origin of the phenomenon, introduce the novel concept
of "non-linear reactivity", and rationalize of the observed the value of the
emerging avalanche exponents.
| 0 | 0 | 0 | 0 | 1 | 0 |
Latent Gaussian Mixture Models for Nationwide Kidney Transplant Center Evaluation | Five year post-transplant survival rate is an important indicator on quality
of care delivered by kidney transplant centers in the United States. To provide
a fair assessment of each transplant center, an effect that represents the
center-specific care quality, along with patient level risk factors, is often
included in the risk adjustment model. In the past, the center effects have
been modeled as either fixed effects or Gaussian random effects, with various
pros and cons. Our numerical analyses reveal that the distributional
assumptions do impact the prediction of center effects especially when the
effect is extreme. To bridge the gap between these two approaches, we propose
to model the transplant center effect as a latent random variable with a finite
Gaussian mixture distribution. Such latent Gaussian mixture models provide a
convenient framework to study the heterogeneity among the transplant centers.
To overcome the weak identifiability issues, we propose to estimate the latent
Gaussian mixture model using a penalized likelihood approach, and develop
sequential locally restricted likelihood ratio tests to determine the number of
components in the Gaussian mixture distribution. The fitted mixture model
provides a convenient means of controlling the false discovery rate when
screening for underperforming or outperforming transplant centers. The
performance of the methods is verified by simulations and by the analysis of
the motivating data example.
| 0 | 0 | 0 | 1 | 0 | 0 |
Small nonlinearities in activation functions create bad local minima in neural networks | We investigate the loss surface of neural networks. We prove that even for
one-hidden-layer networks with "slightest" nonlinearity, the empirical risks
have spurious local minima in most cases. Our results thus indicate that in
general "no spurious local minima" is a property limited to deep linear
networks, and insights obtained from linear networks are not robust.
Specifically, for ReLU(-like) networks we constructively prove that for almost
all (in contrast to previous results) practical datasets there exist infinitely
many local minima. We also present a counterexample for more general
activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad
local minimum. Our results make the least restrictive assumptions relative to
existing results on local optimality in neural networks. We complete our
discussion by presenting a comprehensive characterization of global optimality
for deep linear networks, which unifies other results on this topic.
| 0 | 0 | 0 | 1 | 0 | 0 |
Poisson brackets with prescribed family of functions in involution | It is well known that functions in involution with respect to Poisson
brackets have a privileged role in the theory of completely integrable systems.
Finding functionally independent functions in involution with a given function
$h$ on a Poisson manifold is a fundamental problem of this theory and is very
useful for the explicit integration of the equations of motion defined by $h$.
In this paper, we present our results on the study of the inverse, so to speak,
problem. By developing a technique analogous to that presented in P. Damianou
and F. Petalidou, Poisson brackets with prescribed Casimirs, Canad. J. Math.,
2012, vol. 64, 991-1018, for the establishment of Poisson brackets with
prescribed Casimir invariants, we construct an algorithm which yields Poisson
brackets having a given family of functions in involution. Our approach allows
us to deal with bi-Hamiltonian structures constructively and therefore allows
us to also deal with the completely integrable systems that arise in such a
framework.
| 0 | 0 | 1 | 0 | 0 | 0 |
No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models | For decades, context-dependent phonemes have been the dominant sub-word unit
for conventional acoustic modeling systems. This status quo has begun to be
challenged recently by end-to-end models which seek to combine acoustic,
pronunciation, and language model components into a single neural network. Such
systems, which typically predict graphemes or words, simplify the recognition
process since they remove the need for a separate expert-curated pronunciation
lexicon to map from phoneme-based units to words. However, there has been
little previous work comparing phoneme-based versus grapheme-based sub-word
units in the end-to-end modeling framework, to determine whether the gains from
such approaches are primarily due to the new probabilistic model, or from the
joint learning of the various components with grapheme-based units.
In this work, we conduct detailed experiments which are aimed at quantifying
the value of phoneme-based pronunciation lexica in the context of end-to-end
models. We examine phoneme-based end-to-end models, which are contrasted
against grapheme-based ones on a large vocabulary English Voice-search task,
where we find that graphemes do indeed outperform phonemes. We also compare
grapheme and phoneme-based approaches on a multi-dialect English task, which
once again confirm the superiority of graphemes, greatly simplifying the system
for recognizing multiple dialects.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stabilization Bounds for Linear Finite Dynamical Systems | A common problem to all applications of linear finite dynamical systems is
analyzing the dynamics without enumerating every possible state transition. Of
particular interest is the long term dynamical behaviour. In this paper, we
study the number of iterations needed for a system to settle on a fixed set of
elements. As our main result, we present two upper bounds on iterations needed,
and each one may be readily applied to a fixed point system test. The bounds
are based on submodule properties of iterated images and reduced systems modulo
a prime. We also provide examples where our bounds are optimal.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magneto-thermopower in the Weak Ferromagnetic Oxide CaRu0.8Sc0.2O3: An Experimental Test for the Kelvin Formula in a Magnetic Material | We have measured the resistivity, the thermopower, and the specific heat of
the weak ferromagnetic oxide CaRu0.8Sc0.2O3 in external magnetic fields up to
140 kOe below 80 K. We have observed that the thermopower Q is significantly
suppressed by magnetic fields at around the ferromagnetic transition
temperature of 30 K, and have further found that the magneto-thermopower
{\Delta}Q(H, T) = Q(H, T) - Q(0, T) is roughly proportional to the
magneto-entropy {\Delta}S(H, T) = S(H, T)-S(0, T).We discuss this relationship
between the two quantities in terms of the Kelvin formula, and find that the
observed {\Delta}Q is quantitatively consistent with the values expected from
the Kelvin formula, a possible physical meaning of which is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Experimental Two-dimensional Quantum Walk on a Photonic Chip | Quantum walks, in virtue of the coherent superposition and quantum
interference, possess exponential superiority over its classical counterpart in
applications of quantum searching and quantum simulation. The quantum enhanced
power is highly related to the state space of quantum walks, which can be
expanded by enlarging the photon number and/or the dimensions of the evolution
network, but the former is considerably challenging due to probabilistic
generation of single photons and multiplicative loss. Here we demonstrate a
two-dimensional continuous-time quantum walk by using the external geometry of
photonic waveguide arrays, rather than the inner degree of freedoms of photons.
Using femtosecond laser direct writing, we construct a large-scale
three-dimensional structure which forms a two-dimensional lattice with up to
49X49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum
walks using heralded single photons and single-photon-level imaging. We analyze
the quantum transport properties via observing the ballistic evolution pattern
and the variance profile, which agree well with simulation results. We further
reveal the transient nature that is the unique feature for quantum walks of
beyond one dimension. An architecture that allows a walk to freely evolve in
all directions and a large scale, combining with defect and disorder control,
may bring up powerful and versatile quantum walk machines for classically
intractable problems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dispersive Magnetic and Electronic Excitations in Iridate Perovskites Probed with Oxygen $K$-Edge Resonant Inelastic X-ray Scattering | Resonant inelastic X-ray scattering (RIXS) experiments performed at the
oxygen-$K$ edge on the iridate perovskites {\SIOS} and {\SION} reveal a
sequence of well-defined dispersive modes over the energy range up to $\sim
0.8$ eV. The momentum dependence of these modes and their variation with the
experimental geometry allows us to assign each of them to specific collective
magnetic and/or electronic excitation processes, including single and
bi-magnons, and spin-orbit and electron-hole excitons. We thus demonstrated
that dispersive magnetic and electronic excitations are observable at the O-$K$
edge in the presence of the strong spin-orbit coupling in the $5d$ shell of
iridium and strong hybridization between Ir $5d$ and O $2p$ orbitals, which
confirm and expand theoretical expectations. More generally, our results
establish the utility of O-$K$ edge RIXS for studying the collective
excitations in a range of $5d$ materials that are attracting increasing
attention due to their novel magnetic and electronic properties. Especially,
the strong RIXS response at O-$K$ edge opens up the opportunity for
investigating collective excitations in thin films and heterostructures
fabricated from these materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reaction-Diffusion Models for Glioma Tumor Growth | Mathematical modelling of tumor growth is one of the most useful and
inexpensive approaches to determine and predict the stage, size and progression
of tumors in realistic geometries. Moreover, these models has been used to get
an insight into cancer growth and invasion and in the analysis of tumor size
and geometry for applications in cancer treatment and surgical planning. The
present revision attempts to present a general perspective of the use of models
based on reaction-diffusion equations not only for the description of tumor
growth in gliomas, addressing for processes such as tumor heterogeneity,
hypoxia, dormancy and necrosis, but also its potential use as a tool in
designing optimized and patient specific therapies.
| 0 | 1 | 0 | 0 | 0 | 0 |
On reducing the communication cost of the diffusion LMS algorithm | The rise of digital and mobile communications has recently made the world
more connected and networked, resulting in an unprecedented volume of data
flowing between sources, data centers, or processes. While these data may be
processed in a centralized manner, it is often more suitable to consider
distributed strategies such as diffusion as they are scalable and can handle
large amounts of data by distributing tasks over networked agents. Although it
is relatively simple to implement diffusion strategies over a cluster, it
appears to be challenging to deploy them in an ad-hoc network with limited
energy budget for communication. In this paper, we introduce a diffusion LMS
strategy that significantly reduces communication costs without compromising
the performance. Then, we analyze the proposed algorithm in the mean and
mean-square sense. Next, we conduct numerical experiments to confirm the
theoretical findings. Finally, we perform large scale simulations to test the
algorithm efficiency in a scenario where energy is limited.
| 1 | 0 | 0 | 1 | 0 | 0 |
Teaching computer code at school | In today's education systems, there is a deep concern about the importance of
teaching code and computer programming in schools. Moving digital learning from
a simple use of tools to understanding the processes of the internal
functioning of these tools is an old / new debate originated with the digital
laboratories of the 1960. Today, it is emerging again under impulse of the
large - scale public sphere digitalization and the new constructivist education
theories. Teachers and educators discuss not only the viability of code
teaching in the classroom, but also the intellectual and cognitive advantages
for students. The debate thus takes several orientations and is resourced in
the entanglement of arguments and interpretations of any order, technical,
educational, cultural, cognitive and psychological. However, that phenomenon
which undoubtedly augurs for a profound transformation in the future models of
learning and teaching , is predicting a new and almost congenital digital
humanism
| 1 | 0 | 0 | 0 | 0 | 0 |
Statistical Inference on Panel Data Models: A Kernel Ridge Regression Method | We propose statistical inferential procedures for panel data models with
interactive fixed effects in a kernel ridge regression framework.Compared with
traditional sieve methods, our method is automatic in the sense that it does
not require the choice of basis functions and truncation parameters.Model
complexity is controlled by a continuous regularization parameter which can be
automatically selected by generalized cross validation. Based on empirical
processes theory and functional analysis tools, we derive joint asymptotic
distributions for the estimators in the heterogeneous setting. These joint
asymptotic results are then used to construct confidence intervals for the
regression means and prediction intervals for the future observations, both
being the first provably valid intervals in literature. Marginal asymptotic
normality of the functional estimators in homogeneous setting is also obtained.
Simulation and real data analysis demonstrate the advantages of our method.
| 0 | 0 | 1 | 1 | 0 | 0 |
Nonconvex penalties with analytical solutions for one-bit compressive sensing | One-bit measurements widely exist in the real world, and they can be used to
recover sparse signals. This task is known as the problem of learning
halfspaces in learning theory and one-bit compressive sensing (1bit-CS) in
signal processing. In this paper, we propose novel algorithms based on both
convex and nonconvex sparsity-inducing penalties for robust 1bit-CS. We provide
a sufficient condition to verify whether a solution is globally optimal or not.
Then we show that the globally optimal solution for positive homogeneous
penalties can be obtained in two steps: a proximal operator and a normalization
step. For several nonconvex penalties, including minimax concave penalty (MCP),
$\ell_0$ norm, and sorted $\ell_1$ penalty, we provide fast algorithms for
finding the analytical solutions by solving the dual problem. Specifically, our
algorithm is more than $200$ times faster than the existing algorithm for MCP.
Its efficiency is comparable to the algorithm for the $\ell_1$ penalty in time,
while its performance is much better. Among these penalties, the sorted
$\ell_1$ penalty is most robust to noise in different settings.
| 1 | 0 | 0 | 1 | 0 | 0 |
How to Scale Up Kernel Methods to Be As Good As Deep Neural Nets | The computational complexity of kernel methods has often been a major barrier
for applying them to large-scale learning problems. We argue that this barrier
can be effectively overcome. In particular, we develop methods to scale up
kernel models to successfully tackle large-scale learning problems that are so
far only approachable by deep learning architectures. Based on the seminal work
by Rahimi and Recht on approximating kernel functions with features derived
from random projections, we advance the state-of-the-art by proposing methods
that can efficiently train models with hundreds of millions of parameters, and
learn optimal representations from multiple kernels. We conduct extensive
empirical studies on problems from image recognition and automatic speech
recognition, and show that the performance of our kernel models matches that of
well-engineered deep neural nets (DNNs). To the best of our knowledge, this is
the first time that a direct comparison between these two methods on
large-scale problems is reported. Our kernel methods have several appealing
properties: training with convex optimization, cost for training a single model
comparable to DNNs, and significantly reduced total cost due to fewer
hyperparameters to tune for model selection. Our contrastive study between
these two very different but equally competitive models sheds light on
fundamental questions such as how to learn good representations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Benchmarking Automatic Machine Learning Frameworks | AutoML serves as the bridge between varying levels of expertise when
designing machine learning systems and expedites the data science process. A
wide range of techniques is taken to address this, however there does not exist
an objective comparison of these techniques. We present a benchmark of current
open source AutoML solutions using open source datasets. We test auto-sklearn,
TPOT, auto_ml, and H2O's AutoML solution against a compiled set of regression
and classification datasets sourced from OpenML and find that auto-sklearn
performs the best across classification datasets and TPOT performs the best
across regression datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
Layered Based Augmented Complex Kalman Filter for Fast Forecasting-Aided State Estimation of Distribution Networks | In the presence of renewable resources, distribution networks have become
extremely complex to monitor, operate and control. Furthermore, for the real
time applications, active distribution networks require fast real time
distribution state estimation (DSE). Forecasting aided state estimator (FASE),
deploys measured data in consecutive time samples to refine the state estimate.
Although most of the DSE algorithms deal with real and imaginary parts of
distribution networks states independently, we propose a non iterative complex
DSE algorithm based on augmented complex Kalman filter (ACKF) which considers
the states as complex values. In case of real time DSE and in presence of a
large number of customer loads in the system, employing DSEs in one single
estimation layer is not computationally efficient. Consequently, our proposed
method performs in several estimation layers hierarchically as a Multi layer
DSE using ACKF (DSEMACKF). In the proposed method, a distribution network can
be divided into one main area and several subareas. The aggregated loads in
each subarea act like a big customer load in the main area. Load aggregation
results in a lower variability and higher cross correlation. This increases the
accuracy of the estimated states. Additionally, the proposed method is
formulated to include unbalanced loads in low voltage (LV) distribution
network.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multiscale mixing patterns in networks | Assortative mixing in networks is the tendency for nodes with the same
attributes, or metadata, to link to each other. It is a property often found in
social networks manifesting as a higher tendency of links occurring between
people with the same age, race, or political belief. Quantifying the level of
assortativity or disassortativity (the preference of linking to nodes with
different attributes) can shed light on the factors involved in the formation
of links and contagion processes in complex networks. It is common practice to
measure the level of assortativity according to the assortativity coefficient,
or modularity in the case of discrete-valued metadata. This global value is the
average level of assortativity across the network and may not be a
representative statistic when mixing patterns are heterogeneous. For example, a
social network spanning the globe may exhibit local differences in mixing
patterns as a consequence of differences in cultural norms. Here, we introduce
an approach to localise this global measure so that we can describe the
assortativity, across multiple scales, at the node level. Consequently we are
able to capture and qualitatively evaluate the distribution of mixing patterns
in the network. We find that for many real-world networks the distribution of
assortativity is skewed, overdispersed and multimodal. Our method provides a
clearer lens through which we can more closely examine mixing patterns in
networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single-Queue Decoding for Neural Machine Translation | Neural machine translation models rely on the beam search algorithm for
decoding. In practice, we found that the quality of hypotheses in the search
space is negatively affected owing to the fixed beam size. To mitigate this
problem, we store all hypotheses in a single priority queue and use a universal
score function for hypothesis selection. The proposed algorithm is more
flexible as the discarded hypotheses can be revisited in a later step. We
further design a penalty function to punish the hypotheses that tend to produce
a final translation that is much longer or shorter than expected. Despite its
simplicity, we show that the proposed decoding algorithm is able to select
hypotheses with better qualities and improve the translation performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simulation of Parabolic Flow on an Eye-Shaped Domain with Moving Boundary | During the upstroke of a normal eye blink, the upper lid moves and paints a
thin tear film over the exposed corneal and conjunctival surfaces. This thin
tear film may be modeled by a nonlinear fourth-order PDE derived from
lubrication theory. A challenge in the numerical simulation of this model is to
include both the geometry of the eye and the movement of the eyelid. A pair of
orthogonal and conformal maps transform a square into an approximate
representation of the exposed ocular surface of a human eye. A spectral
collocation method on the square produces relatively efficient solutions on the
eye-shaped domain via these maps. The method is demonstrated on linear and
nonlinear second-order diffusion equations and shown to have excellent accuracy
as measured pointwise or by conservation checks. Future work will use the
method for thin-film equations on the same type of domain.
| 0 | 0 | 1 | 0 | 0 | 0 |
Faster algorithms for 1-mappability of a sequence | In the k-mappability problem, we are given a string x of length n and
integers m and k, and we are asked to count, for each length-m factor y of x,
the number of other factors of length m of x that are at Hamming distance at
most k from y. We focus here on the version of the problem where k = 1. The
fastest known algorithm for k = 1 requires time O(mn log n/ log log n) and
space O(n). We present two algorithms that require worst-case time O(mn) and
O(n log^2 n), respectively, and space O(n), thus greatly improving the state of
the art. Moreover, we present an algorithm that requires average-case time and
space O(n) for integer alphabets if m = {\Omega}(log n/ log {\sigma}), where
{\sigma} is the alphabet size.
| 1 | 0 | 0 | 0 | 0 | 0 |
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network | Music creation is typically composed of two parts: composing the musical
score, and then performing the score with instruments to make sounds. While
recent work has made much progress in automatic music generation in the
symbolic domain, few attempts have been made to build an AI model that can
render realistic music audio from musical scores. Directly synthesizing audio
with sound sample libraries often leads to mechanical and deadpan results,
since musical scores do not contain performance-level information, such as
subtle changes in timing and dynamics. Moreover, while the task may sound like
a text-to-speech synthesis problem, there are fundamental differences since
music audio has rich polyphonic sounds. To build such an AI performer, we
propose in this paper a deep convolutional model that learns in an end-to-end
manner the score-to-audio mapping between a symbolic representation of music
called the piano rolls and an audio representation of music called the
spectrograms. The model consists of two subnets: the ContourNet, which uses a
U-Net structure to learn the correspondence between piano rolls and
spectrograms and to give an initial result; and the TextureNet, which further
uses a multi-band residual network to refine the result by adding the spectral
texture of overtones and timbre. We train the model to generate music clips of
the violin, cello, and flute, with a dataset of moderate size. We also present
the result of a user study that shows our model achieves higher mean opinion
score (MOS) in naturalness and emotional expressivity than a WaveNet-based
model and two commercial sound libraries. We open our source code at
this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.